tower-0.4.13/.cargo_vcs_info.json0000644000000001430000000000100122570ustar { "git": { "sha1": "04527aeb439761875a3e4f96d2090622731bc719" }, "path_in_vcs": "tower" }tower-0.4.13/CHANGELOG.md000064400000000000000000000410070072674642500127140ustar 00000000000000# Changelog All notable changes to this project will be documented in this file. The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). # Unreleased - None. # 0.4.13 (June 17, 2022) ### Added - **load_shed**: Public constructor for `Overloaded` error ([#661]) ### Fixed - **util**: Fix hang with `call_all` when the `Stream` of requests is pending ([#656]) - **ready_cache**: Ensure cancelation is observed by pending services ([#668], fixes [#415]) - **docs**: Fix a missing section header due to a typo ([#646]) - **docs**: Fix broken links to `Service` trait ([#659]) [#661]: https://github.com/tower-rs/tower/pull/661 [#656]: https://github.com/tower-rs/tower/pull/656 [#668]: https://github.com/tower-rs/tower/pull/668 [#415]: https://github.com/tower-rs/tower/pull/415 [#646]: https://github.com/tower-rs/tower/pull/646 [#659]: https://github.com/tower-rs/tower/pull/659 # 0.4.12 (February 16, 2022) ### Fixed - **hedge**, **load**, **retry**: Fix use of `Instant` operations that can panic on platforms where `Instant` is not monotonic ([#633]) - Disable `attributes` feature on `tracing` dependency ([#623]) - Remove unused dependencies and dependency features with some feature combinations ([#603], [#602]) - **docs**: Fix a typo in the RustDoc for `Buffer` ([#622]) ### Changed - Updated minimum supported Rust version (MSRV) to 1.49.0. - **hedge**: Updated `hdrhistogram` dependency to v7.0 ([#602]) - Updated `tokio-util` dependency to v0.7 ([#638]) [#633]: https://github.com/tower-rs/tower/pull/633 [#623]: https://github.com/tower-rs/tower/pull/623 [#603]: https://github.com/tower-rs/tower/pull/603 [#602]: https://github.com/tower-rs/tower/pull/602 [#622]: https://github.com/tower-rs/tower/pull/622 [#638]: https://github.com/tower-rs/tower/pull/638 # 0.4.11 (November 18, 2021) ### Added - **util**: Add `BoxCloneService` which is a `Clone + Send` boxed `Service` ([#615]) - **util**: Add `ServiceExt::boxed` and `ServiceExt::boxed_clone` for applying the `BoxService` and `BoxCloneService` middleware ([#616]) - **builder**: Add `ServiceBuilder::boxed` and `ServiceBuilder::boxed_clone` for applying `BoxService` and `BoxCloneService` layers ([#616]) ### Fixed - **util**: Remove redundant `F: Clone` bound from `ServiceExt::map_request` ([#607]) - **util**: Remove unnecessary `Debug` bounds from `impl Debug for BoxService` ([#617]) - **util**: Remove unnecessary `Debug` bounds from `impl Debug for UnsyncBoxService` ([#617]) - **balance**: Remove redundant `Req: Clone` bound from `Clone` impls for `MakeBalance`, and `MakeBalanceLayer` ([#607]) - **balance**: Remove redundant `Req: Debug` bound from `Debug` impls for `MakeBalance`, `MakeFuture`, `Balance`, and `Pool` ([#607]) - **ready-cache**: Remove redundant `Req: Debug` bound from `Debug` impl for `ReadyCache` ([#607]) - **steer**: Remove redundant `Req: Debug` bound from `Debug` impl for `Steer` ([#607]) - **docs**: Fix `doc(cfg(...))` attributes of `PeakEwmaDiscover`, and `PendingRequestsDiscover` ([#610]) [#607]: https://github.com/tower-rs/tower/pull/607 [#610]: https://github.com/tower-rs/tower/pull/610 [#615]: https://github.com/tower-rs/tower/pull/615 [#616]: https://github.com/tower-rs/tower/pull/616 [#617]: https://github.com/tower-rs/tower/pull/617 # 0.4.10 (October 19, 2021) - Fix accidental breaking change when using the `rustdoc::broken_intra_doc_links` lint ([#605]) - Clarify that tower's minimum supported rust version is 1.46 ([#605]) [#605]: https://github.com/tower-rs/tower/pull/605 # 0.4.9 (October 13, 2021) - Migrate to [pin-project-lite] ([#595]) - **builder**: Implement `Layer` for `ServiceBuilder` ([#600]) - **builder**: Add `ServiceBuilder::and_then` analogous to `ServiceExt::and_then` ([#601]) [#600]: https://github.com/tower-rs/tower/pull/600 [#601]: https://github.com/tower-rs/tower/pull/601 [#595]: https://github.com/tower-rs/tower/pull/595 [pin-project-lite]: https://crates.io/crates/pin-project-lite # 0.4.8 (May 28, 2021) - **builder**: Add `ServiceBuilder::map_result` analogous to `ServiceExt::map_result` ([#583]) - **limit**: Add `GlobalConcurrencyLimitLayer` to allow reusing a concurrency limit across multiple services ([#574]) [#574]: https://github.com/tower-rs/tower/pull/574 [#583]: https://github.com/tower-rs/tower/pull/583 # 0.4.7 (April 27, 2021) ### Added - **builder**: Add `ServiceBuilder::check_service` to check the request, response, and error types of the output service. ([#576]) - **builder**: Add `ServiceBuilder::check_service_clone` to check the output service can be cloned. ([#576]) ### Fixed - **spawn_ready**: Abort spawned background tasks when the `SpawnReady` service is dropped, fixing a potential task/resource leak (#[581]) - Fixed broken documentation links ([#578]) [#576]: https://github.com/tower-rs/tower/pull/576 [#578]: https://github.com/tower-rs/tower/pull/578 [#581]: https://github.com/tower-rs/tower/pull/581 # 0.4.6 (February 26, 2021) ### Deprecated - **util**: Deprecated `ServiceExt::ready_and` (renamed to `ServiceExt::ready`). ([#567]) - **util**: Deprecated `ReadyAnd` future (renamed to `Ready`). ([#567]) ### Added - **builder**: Add `ServiceBuilder::layer_fn` to add a layer built from a function. ([#560]) - **builder**: Add `ServiceBuilder::map_future` for transforming the futures produced by a service. ([#559]) - **builder**: Add `ServiceBuilder::service_fn` for applying `Layer`s to an async function using `util::service_fn`. ([#564]) - **util**: Add example for `service_fn`. ([#563]) - **util**: Add `BoxLayer` for creating boxed `Layer` trait objects. ([#569]) [#567]: https://github.com/tower-rs/tower/pull/567 [#560]: https://github.com/tower-rs/tower/pull/560 [#559]: https://github.com/tower-rs/tower/pull/559 [#564]: https://github.com/tower-rs/tower/pull/564 [#563]: https://github.com/tower-rs/tower/pull/563 [#569]: https://github.com/tower-rs/tower/pull/569 # 0.4.5 (February 10, 2021) ### Added - **util**: Add `ServiceExt::map_future`. ([#542]) - **builder**: Add `ServiceBuilder::option_layer` to optionally add a layer. ([#555]) - **make**: Add `Shared` which lets you implement `MakeService` by cloning a service. ([#533]) ### Fixed - **util**: Make combinators that contain closures implement `Debug`. They previously wouldn't since closures never implement `Debug`. ([#552]) - **steer**: Implement `Clone` for `Steer`. ([#554]) - **spawn-ready**: SpawnReady now propagates the current `tracing` span to spawned tasks ([#557]) - Only pull in `tracing` for the features that need it. ([#551]) [#542]: https://github.com/tower-rs/tower/pull/542 [#555]: https://github.com/tower-rs/tower/pull/555 [#557]: https://github.com/tower-rs/tower/pull/557 [#533]: https://github.com/tower-rs/tower/pull/533 [#551]: https://github.com/tower-rs/tower/pull/551 [#554]: https://github.com/tower-rs/tower/pull/554 [#552]: https://github.com/tower-rs/tower/pull/552 # 0.4.4 (January 20, 2021) ### Added - **util**: Implement `Layer` for `Either`. ([#531]) - **util**: Implement `Clone` for `FilterLayer`. ([#535]) - **timeout**: Implement `Clone` for `TimeoutLayer`. ([#535]) - **limit**: Implement `Clone` for `RateLimitLayer`. ([#535]) ### Fixed - Added "full" feature which turns on all other features. ([#532]) - **spawn-ready**: Avoid oneshot allocations. ([#538]) [#531]: https://github.com/tower-rs/tower/pull/531 [#532]: https://github.com/tower-rs/tower/pull/532 [#535]: https://github.com/tower-rs/tower/pull/535 [#538]: https://github.com/tower-rs/tower/pull/538 # 0.4.3 (January 13, 2021) ### Added - **filter**: `Filter::check` and `AsyncFilter::check` methods which check a request against the filter's `Predicate` ([#521]) - **filter**: Added `get_ref`, `get_mut`, and `into_inner` methods to `Filter` and `AsyncFilter`, allowing access to the wrapped service ([#522]) - **util**: Added `layer` associated function to `AndThen`, `Then`, `MapRequest`, `MapResponse`, and `MapResult` types. These return a `Layer` that produces middleware of that type, as a convenience to avoid having to import the `Layer` type separately. ([#524]) - **util**: Added missing `Clone` impls to `AndThenLayer`, `MapRequestLayer`, and `MapErrLayer`, when the mapped function implements `Clone` ([#525]) - **util**: Added `FutureService::new` constructor, with less restrictive bounds than the `future_service` free function ([#523]) [#521]: https://github.com/tower-rs/tower/pull/521 [#522]: https://github.com/tower-rs/tower/pull/522 [#523]: https://github.com/tower-rs/tower/pull/523 [#524]: https://github.com/tower-rs/tower/pull/524 [#525]: https://github.com/tower-rs/tower/pull/525 # 0.4.2 (January 11, 2021) ### Added - Export `layer_fn` and `LayerFn` from the `tower::layer` module. ([#516]) ### Fixed - Fix missing `Sync` implementation for `Buffer` and `ConcurrencyLimit` ([#518]) [#518]: https://github.com/tower-rs/tower/pull/518 [#516]: https://github.com/tower-rs/tower/pull/516 # 0.4.1 (January 7, 2021) ### Fixed - Updated `tower-layer` to 0.3.1 to fix broken re-exports. # 0.4.0 (January 7, 2021) This is a major breaking release including a large number of changes. In particular, this release updates `tower` to depend on Tokio 1.0, and moves all middleware into the `tower` crate. In addition, Tower 0.4 reworks several middleware APIs, as well as introducing new ones. This release does *not* change the core `Service` or `Layer` traits, so `tower` 0.4 still depends on `tower-service` 0.3 and `tower-layer` 0.3. This means that `tower` 0.4 is still compatible with libraries that depend on those crates. ### Added - **make**: Added `MakeService::into_service` and `MakeService::as_service` for converting `MakeService`s into `Service`s ([#492]) - **steer**: Added `steer` middleware for routing requests to one of a set of services ([#426]) - **util**: Added `MapRequest` middleware and `ServiceExt::map_request`, for applying a function to a request before passing it to the inner service ([#435]) - **util**: Added `MapResponse` middleware and `ServiceExt::map_response`, for applying a function to the `Response` type of an inner service after its future completes ([#435]) - **util**: Added `MapErr` middleware and `ServiceExt::map_err`, for applying a function to the `Error` returned by an inner service if it fails ([#396]) - **util**: Added `MapResult` middleware and `ServiceExt::map_result`, for applying a function to the `Result` returned by an inner service's future regardless of whether it succeeds or fails ([#499]) - **util**: Added `Then` middleware and `ServiceExt::then`, for chaining another future after an inner service's future completes (with a `Response` or an `Error`) ([#500]) - **util**: Added `AndThen` middleware and `ServiceExt::and_then`, for chaining another future after an inner service's future completes successfully ([#485]) - **util**: Added `layer_fn`, for constructing a `Layer` from a function taking a `Service` and returning a different `Service` ([#491]) - **util**: Added `FutureService`, which implements `Service` for a `Future` whose `Output` type is a `Service` ([#496]) - **util**: Added `BoxService::layer` and `UnsyncBoxService::layer`, to make constructing layers more ergonomic ([#503]) - **layer**: Added `Layer` impl for `&Layer` ([#446]) - **retry**: Added `Retry::get_ref`, `Retry::get_mut`, and `Retry::into_inner` to access the inner service ([#463]) - **timeout**: Added `Timeout::get_ref`, `Timeout::get_mut`, and `Timeout::into_inner` to access the inner service ([#463]) - **buffer**: Added `Clone` and `Copy` impls for `BufferLayer` (#[493]) - Several documentation improvements ([#442], [#444], [#445], [#449], [#487], [#490], [#506]]) ### Changed - All middleware `tower-*` crates were merged into `tower` and placed behind feature flags ([#432]) - Updated Tokio dependency to 1.0 ([#489]) - **builder**: Make `ServiceBuilder::service` take `self` by reference rather than by value ([#504]) - **reconnect**: Return errors from `MakeService` in the response future, rather than in `poll_ready`, allowing the reconnect service to be reused when a reconnect fails ([#386], [#437]) - **discover**: Changed `Discover` to be a sealed trait alias for a `TryStream`. `Discover` implementations are now written by implementing `Stream`. ([#443]) - **load**: Renamed the `Instrument` trait to `TrackCompletion` ([#445]) - **load**: Renamed `NoInstrument` to `CompleteOnResponse` ([#445]) - **balance**: Renamed `BalanceLayer` to `MakeBalanceLayer` ([#449]) - **balance**: Renamed `BalanceMake` to `MakeBalance` ([#449]) - **ready-cache**: Changed `ready_cache::error::Failed`'s `fmt::Debug` impl to require the key type to also implement `fmt::Debug` ([#467]) - **filter**: Changed `Filter` and `Predicate` to use a synchronous function as a predicate ([#508]) - **filter**: Renamed the previous `Filter` and `Predicate` (where `Predicate`s returned a `Future`) to `AsyncFilter` and `AsyncPredicate` ([#508]) - **filter**: `Predicate`s now take a `Request` type by value and may return a new request, potentially of a different type ([#508]) - **filter**: `Predicate`s may now return an error of any type ([#508]) ### Fixed - **limit**: Fixed an issue where `RateLimit` services do not reset the remaining count when rate limiting ([#438], [#439]) - **util**: Fixed a bug where `oneshot` futures panic if the service does not immediately become ready ([#447]) - **ready-cache**: Fixed `ready_cache::error::Failed` not returning inner error types via `Error::source` ([#467]) - **hedge**: Fixed an interaction with `buffer` where `buffer` slots were eagerly reserved for hedge requests even if they were not sent ([#472]) - **hedge**: Fixed the use of a fixed 10 second bound on the hedge latency histogram resulting on errors with longer-lived requests. The latency histogram now automatically resizes ([#484]) - **buffer**: Fixed an issue where tasks waiting for buffer capacity were not woken when a buffer is dropped, potentially resulting in a task leak ([#480]) ### Removed - Remove `ServiceExt::ready`. - **discover**: Removed `discover::stream` module, since `Discover` is now an alias for `Stream` ([#443]) - **buffer**: Removed `MakeBalance::from_rng`, which caused all balancers to use the same RNG ([#497]) [#432]: https://github.com/tower-rs/tower/pull/432 [#426]: https://github.com/tower-rs/tower/pull/426 [#435]: https://github.com/tower-rs/tower/pull/435 [#499]: https://github.com/tower-rs/tower/pull/499 [#386]: https://github.com/tower-rs/tower/pull/386 [#437]: https://github.com/tower-rs/tower/pull/487 [#438]: https://github.com/tower-rs/tower/pull/438 [#437]: https://github.com/tower-rs/tower/pull/439 [#443]: https://github.com/tower-rs/tower/pull/443 [#442]: https://github.com/tower-rs/tower/pull/442 [#444]: https://github.com/tower-rs/tower/pull/444 [#445]: https://github.com/tower-rs/tower/pull/445 [#446]: https://github.com/tower-rs/tower/pull/446 [#447]: https://github.com/tower-rs/tower/pull/447 [#449]: https://github.com/tower-rs/tower/pull/449 [#463]: https://github.com/tower-rs/tower/pull/463 [#396]: https://github.com/tower-rs/tower/pull/396 [#467]: https://github.com/tower-rs/tower/pull/467 [#472]: https://github.com/tower-rs/tower/pull/472 [#480]: https://github.com/tower-rs/tower/pull/480 [#484]: https://github.com/tower-rs/tower/pull/484 [#489]: https://github.com/tower-rs/tower/pull/489 [#497]: https://github.com/tower-rs/tower/pull/497 [#487]: https://github.com/tower-rs/tower/pull/487 [#493]: https://github.com/tower-rs/tower/pull/493 [#491]: https://github.com/tower-rs/tower/pull/491 [#495]: https://github.com/tower-rs/tower/pull/495 [#503]: https://github.com/tower-rs/tower/pull/503 [#504]: https://github.com/tower-rs/tower/pull/504 [#492]: https://github.com/tower-rs/tower/pull/492 [#500]: https://github.com/tower-rs/tower/pull/500 [#490]: https://github.com/tower-rs/tower/pull/490 [#506]: https://github.com/tower-rs/tower/pull/506 [#508]: https://github.com/tower-rs/tower/pull/508 [#485]: https://github.com/tower-rs/tower/pull/485 # 0.3.1 (January 17, 2020) - Allow opting out of tracing/log (#410). # 0.3.0 (December 19, 2019) - Update all tower based crates to `0.3`. - Update to `tokio 0.2` - Update to `futures 0.3` # 0.3.0-alpha.2 (September 30, 2019) - Move to `futures-*-preview 0.3.0-alpha.19` - Move to `pin-project 0.4` # 0.3.0-alpha.1a (September 13, 2019) - Update `tower-buffer` to `0.3.0-alpha.1b` # 0.3.0-alpha.1 (September 11, 2019) - Move to `std::future` # 0.1.1 (July 19, 2019) - Add `ServiceBuilder::into_inner` # 0.1.0 (April 26, 2019) - Initial release tower-0.4.13/Cargo.lock0000644000000346200000000000100102410ustar # This file is automatically @generated by Cargo. # It is not intended for manual editing. version = 3 [[package]] name = "ansi_term" version = "0.12.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d52a9bb7ec0cf484c551830a7ce27bd20d67eac647e1befb56b0be4ee39a55d2" dependencies = [ "winapi", ] [[package]] name = "async-stream" version = "0.3.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "dad5c83079eae9969be7fadefe640a1c566901f05ff91ab221de4b6f68d9507e" dependencies = [ "async-stream-impl", "futures-core", ] [[package]] name = "async-stream-impl" version = "0.3.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "10f203db73a71dfa2fb6dd22763990fa26f3d2625a6da2da900d23b87d26be27" dependencies = [ "proc-macro2", "quote", "syn", ] [[package]] name = "autocfg" version = "1.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d468802bab17cbc0cc575e9b053f41e72aa36bfa6b7f55e3529ffa43161b97fa" [[package]] name = "byteorder" version = "1.4.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "14c189c53d098945499cdfa7ecc63567cf3886b3332b312a5b4585d8d3a6a610" [[package]] name = "bytes" version = "1.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c4872d67bab6358e59559027aa3b9157c53d9358c51423c17554809a8858e0f8" [[package]] name = "cfg-if" version = "1.0.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "baf1de4339761588bc0619e3cbc0120ee582ebb74b53b4efbf79117bd2da40fd" [[package]] name = "fnv" version = "1.0.7" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3f9eec918d3f24069decb9af1554cad7c880e2da24a9afd88aca000531ab82c1" [[package]] name = "futures" version = "0.3.21" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f73fe65f54d1e12b726f517d3e2135ca3125a437b6d998caf1962961f7172d9e" dependencies = [ "futures-channel", "futures-core", "futures-executor", "futures-io", "futures-sink", "futures-task", "futures-util", ] [[package]] name = "futures-channel" version = "0.3.21" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c3083ce4b914124575708913bca19bfe887522d6e2e6d0952943f5eac4a74010" dependencies = [ "futures-core", "futures-sink", ] [[package]] name = "futures-core" version = "0.3.21" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0c09fd04b7e4073ac7156a9539b57a484a8ea920f79c7c675d05d289ab6110d3" [[package]] name = "futures-executor" version = "0.3.21" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9420b90cfa29e327d0429f19be13e7ddb68fa1cccb09d65e5706b8c7a749b8a6" dependencies = [ "futures-core", "futures-task", "futures-util", ] [[package]] name = "futures-io" version = "0.3.21" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "fc4045962a5a5e935ee2fdedaa4e08284547402885ab326734432bed5d12966b" [[package]] name = "futures-macro" version = "0.3.21" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "33c1e13800337f4d4d7a316bf45a567dbcb6ffe087f16424852d97e97a91f512" dependencies = [ "proc-macro2", "quote", "syn", ] [[package]] name = "futures-sink" version = "0.3.21" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "21163e139fa306126e6eedaf49ecdb4588f939600f0b1e770f4205ee4b7fa868" [[package]] name = "futures-task" version = "0.3.21" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "57c66a976bf5909d801bbef33416c41372779507e7a6b3a5e25e4749c58f776a" [[package]] name = "futures-util" version = "0.3.21" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d8b7abd5d659d9b90c8cba917f6ec750a74e2dc23902ef9cd4cc8c8b22e6036a" dependencies = [ "futures-channel", "futures-core", "futures-io", "futures-macro", "futures-sink", "futures-task", "memchr", "pin-project-lite", "pin-utils", "slab", ] [[package]] name = "getrandom" version = "0.2.7" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "4eb1a864a501629691edf6c15a593b7a51eebaa1e8468e9ddc623de7c9b58ec6" dependencies = [ "cfg-if", "libc", "wasi", ] [[package]] name = "hashbrown" version = "0.12.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "db0d4cf898abf0081f964436dc980e96670a0f36863e4b83aaacdb65c9d7ccc3" [[package]] name = "hdrhistogram" version = "7.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "31672b7011be2c4f7456c4ddbcb40e7e9a4a9fad8efe49a6ebaf5f307d0109c0" dependencies = [ "byteorder", "num-traits", ] [[package]] name = "hermit-abi" version = "0.1.19" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "62b467343b94ba476dcb2500d242dadbb39557df889310ac77c5d99100aaac33" dependencies = [ "libc", ] [[package]] name = "http" version = "0.2.8" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "75f43d41e26995c17e71ee126451dd3941010b0514a81a9d11f3b341debc2399" dependencies = [ "bytes", "fnv", "itoa", ] [[package]] name = "indexmap" version = "1.9.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6c6392766afd7964e2531940894cffe4bd8d7d17dbc3c1c4857040fd4b33bdb3" dependencies = [ "autocfg", "hashbrown", ] [[package]] name = "itoa" version = "1.0.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "112c678d4050afce233f4f2852bb2eb519230b3cf12f33585275537d7e41578d" [[package]] name = "lazy_static" version = "1.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e2abad23fbc42b3700f2f279844dc832adb2b2eb069b2df918f455c4e18cc646" [[package]] name = "libc" version = "0.2.126" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "349d5a591cd28b49e1d1037471617a32ddcda5731b99419008085f72d5a53836" [[package]] name = "log" version = "0.4.17" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "abb12e687cfb44aa40f41fc3978ef76448f9b6038cad6aef4259d3c095a2382e" dependencies = [ "cfg-if", ] [[package]] name = "memchr" version = "2.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "2dffe52ecf27772e601905b7522cb4ef790d2cc203488bbd0e2fe85fcb74566d" [[package]] name = "num-traits" version = "0.2.15" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "578ede34cf02f8924ab9447f50c28075b4d3e5b269972345e7e0372b38c6cdcd" dependencies = [ "autocfg", ] [[package]] name = "num_cpus" version = "1.13.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "19e64526ebdee182341572e50e9ad03965aa510cd94427a4549448f285e957a1" dependencies = [ "hermit-abi", "libc", ] [[package]] name = "once_cell" version = "1.12.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7709cef83f0c1f58f666e746a08b21e0085f7440fa6a29cc194d68aac97a4225" [[package]] name = "pin-project" version = "1.0.10" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "58ad3879ad3baf4e44784bc6a718a8698867bb991f8ce24d1bcbe2cfb4c3a75e" dependencies = [ "pin-project-internal", ] [[package]] name = "pin-project-internal" version = "1.0.10" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "744b6f092ba29c3650faf274db506afd39944f48420f6c86b17cfe0ee1cb36bb" dependencies = [ "proc-macro2", "quote", "syn", ] [[package]] name = "pin-project-lite" version = "0.2.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e0a7ae3ac2f1173085d398531c705756c94a4c56843785df85a60c1a0afac116" [[package]] name = "pin-utils" version = "0.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8b870d8c151b6f2fb93e84a13146138f05d02ed11c7e7c54f8826aaaf7c9f184" [[package]] name = "ppv-lite86" version = "0.2.16" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "eb9f9e6e233e5c4a35559a617bf40a4ec447db2e84c20b55a6f83167b7e57872" [[package]] name = "proc-macro2" version = "1.0.39" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c54b25569025b7fc9651de43004ae593a75ad88543b17178aa5e1b9c4f15f56f" dependencies = [ "unicode-ident", ] [[package]] name = "quote" version = "1.0.18" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a1feb54ed693b93a84e14094943b84b7c4eae204c512b7ccb95ab0c66d278ad1" dependencies = [ "proc-macro2", ] [[package]] name = "rand" version = "0.8.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "34af8d1a0e25924bc5b7c43c079c942339d8f0a8b57c39049bef581b46327404" dependencies = [ "libc", "rand_chacha", "rand_core", ] [[package]] name = "rand_chacha" version = "0.3.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e6c10a63a0fa32252be49d21e7709d4d4baf8d231c2dbce1eaa8141b9b127d88" dependencies = [ "ppv-lite86", "rand_core", ] [[package]] name = "rand_core" version = "0.6.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d34f1408f55294453790c48b2f1ebbb1c5b4b7563eb1f418bcfcfdbb06ebb4e7" dependencies = [ "getrandom", ] [[package]] name = "sharded-slab" version = "0.1.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "900fba806f70c630b0a382d0d825e17a0f19fcd059a2ade1ff237bcddf446b31" dependencies = [ "lazy_static", ] [[package]] name = "slab" version = "0.4.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "eb703cfe953bccee95685111adeedb76fabe4e97549a58d16f03ea7b9367bb32" [[package]] name = "syn" version = "1.0.96" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0748dd251e24453cb8717f0354206b91557e4ec8703673a4b30208f2abaf1ebf" dependencies = [ "proc-macro2", "quote", "unicode-ident", ] [[package]] name = "thread_local" version = "1.1.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5516c27b78311c50bf42c071425c560ac799b11c30b31f87e3081965fe5e0180" dependencies = [ "once_cell", ] [[package]] name = "tokio" version = "1.19.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c51a52ed6686dd62c320f9b89299e9dfb46f730c7a48e635c19f21d116cb1439" dependencies = [ "num_cpus", "once_cell", "pin-project-lite", "tokio-macros", ] [[package]] name = "tokio-macros" version = "1.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9724f9a975fb987ef7a3cd9be0350edcbe130698af5b8f7a631e23d42d052484" dependencies = [ "proc-macro2", "quote", "syn", ] [[package]] name = "tokio-stream" version = "0.1.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "df54d54117d6fdc4e4fea40fe1e4e566b3505700e148a6827e59b34b0d2600d9" dependencies = [ "futures-core", "pin-project-lite", "tokio", ] [[package]] name = "tokio-test" version = "0.4.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "53474327ae5e166530d17f2d956afcb4f8a004de581b3cae10f12006bc8163e3" dependencies = [ "async-stream", "bytes", "futures-core", "tokio", "tokio-stream", ] [[package]] name = "tokio-util" version = "0.7.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "cc463cd8deddc3770d20f9852143d50bf6094e640b485cb2e189a2099085ff45" dependencies = [ "bytes", "futures-core", "futures-sink", "pin-project-lite", "tokio", ] [[package]] name = "tower" version = "0.4.13" dependencies = [ "futures", "futures-core", "futures-util", "hdrhistogram", "http", "indexmap", "lazy_static", "pin-project", "pin-project-lite", "rand", "slab", "tokio", "tokio-stream", "tokio-test", "tokio-util", "tower-layer", "tower-service", "tower-test", "tracing", "tracing-subscriber", ] [[package]] name = "tower-layer" version = "0.3.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "343bc9466d3fe6b0f960ef45960509f84480bf4fd96f92901afe7ff3df9d3a62" [[package]] name = "tower-service" version = "0.3.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b6bc1c9ce2b5135ac7f93c72918fc37feb872bdc6a5533a8b85eb4b86bfdae52" [[package]] name = "tower-test" version = "0.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a4546773ffeab9e4ea02b8872faa49bb616a80a7da66afc2f32688943f97efa7" dependencies = [ "futures-util", "pin-project", "tokio", "tokio-test", "tower-layer", "tower-service", ] [[package]] name = "tracing" version = "0.1.35" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a400e31aa60b9d44a52a8ee0343b5b18566b03a8321e0d321f695cf56e940160" dependencies = [ "cfg-if", "log", "pin-project-lite", "tracing-core", ] [[package]] name = "tracing-core" version = "0.1.27" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7709595b8878a4965ce5e87ebf880a7d39c9afc6837721b21a5a816a8117d921" dependencies = [ "once_cell", "valuable", ] [[package]] name = "tracing-subscriber" version = "0.3.11" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "4bc28f93baff38037f64e6f43d34cfa1605f27a49c34e8a04c5e78b0babf2596" dependencies = [ "ansi_term", "sharded-slab", "thread_local", "tracing-core", ] [[package]] name = "unicode-ident" version = "1.0.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5bd2fe26506023ed7b5e1e315add59d6f584c621d037f9368fea9cfb988f368c" [[package]] name = "valuable" version = "0.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "830b7e5d4d90034032940e4ace0d9a9a057e7a45cd94e6c007832e39edb82f6d" [[package]] name = "wasi" version = "0.11.0+wasi-snapshot-preview1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9c8d87e72b64a3b4db28d11ce29237c246188f4f51057d65a7eab63b7987e423" [[package]] name = "winapi" version = "0.3.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5c839a674fcd7a98952e593242ea400abe93992746761e38641405d28b00f419" dependencies = [ "winapi-i686-pc-windows-gnu", "winapi-x86_64-pc-windows-gnu", ] [[package]] name = "winapi-i686-pc-windows-gnu" version = "0.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6" [[package]] name = "winapi-x86_64-pc-windows-gnu" version = "0.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f" tower-0.4.13/Cargo.toml0000644000000104760000000000100102670ustar # THIS FILE IS AUTOMATICALLY GENERATED BY CARGO # # When uploading crates to the registry Cargo will automatically # "normalize" Cargo.toml files for maximal compatibility # with all versions of Cargo and also rewrite `path` dependencies # to registry (e.g., crates.io) dependencies. # # If you are reading this file be aware that the original Cargo.toml # will likely look very different (and much more reasonable). # See Cargo.toml.orig for the original contents. [package] edition = "2018" rust-version = "1.49.0" name = "tower" version = "0.4.13" authors = ["Tower Maintainers "] description = """ Tower is a library of modular and reusable components for building robust clients and servers. """ homepage = "https://github.com/tower-rs/tower" documentation = "https://docs.rs/tower/0.4.13" readme = "README.md" keywords = [ "io", "async", "non-blocking", "futures", "service", ] categories = [ "asynchronous", "network-programming", ] license = "MIT" repository = "https://github.com/tower-rs/tower" [package.metadata.docs.rs] all-features = true rustdoc-args = [ "--cfg", "docsrs", ] [package.metadata.playground] features = ["full"] [[example]] name = "tower-balance" path = "examples/tower-balance.rs" required-features = ["full"] [dependencies.futures-core] version = "0.3" optional = true [dependencies.futures-util] version = "0.3" features = ["alloc"] optional = true default-features = false [dependencies.hdrhistogram] version = "7.0" optional = true default-features = false [dependencies.indexmap] version = "1.0.2" optional = true [dependencies.pin-project] version = "1" optional = true [dependencies.pin-project-lite] version = "0.2.7" optional = true [dependencies.rand] version = "0.8" features = ["small_rng"] optional = true [dependencies.slab] version = "0.4" optional = true [dependencies.tokio] version = "1.6" features = ["sync"] optional = true [dependencies.tokio-stream] version = "0.1.0" optional = true [dependencies.tokio-util] version = "0.7.0" optional = true default-features = false [dependencies.tower-layer] version = "0.3.1" [dependencies.tower-service] version = "0.3.1" [dependencies.tracing] version = "0.1.2" features = ["std"] optional = true default-features = false [dev-dependencies.futures] version = "0.3" [dev-dependencies.hdrhistogram] version = "7.0" default-features = false [dev-dependencies.http] version = "0.2" [dev-dependencies.lazy_static] version = "1.4.0" [dev-dependencies.pin-project-lite] version = "0.2.7" [dev-dependencies.tokio] version = "1.6.2" features = [ "macros", "sync", "test-util", "rt-multi-thread", ] [dev-dependencies.tokio-stream] version = "0.1" [dev-dependencies.tokio-test] version = "0.4" [dev-dependencies.tower-test] version = "0.4" [dev-dependencies.tracing-subscriber] version = "0.3" features = [ "fmt", "ansi", ] default-features = false [features] __common = [ "futures-core", "pin-project-lite", ] balance = [ "discover", "load", "ready-cache", "make", "rand", "slab", ] buffer = [ "__common", "tokio/sync", "tokio/rt", "tokio-util", "tracing", ] default = ["log"] discover = ["__common"] filter = [ "__common", "futures-util", ] full = [ "balance", "buffer", "discover", "filter", "hedge", "limit", "load", "load-shed", "make", "ready-cache", "reconnect", "retry", "spawn-ready", "steer", "timeout", "util", ] hedge = [ "util", "filter", "futures-util", "hdrhistogram", "tokio/time", "tracing", ] limit = [ "__common", "tokio/time", "tokio/sync", "tokio-util", "tracing", ] load = [ "__common", "tokio/time", "tracing", ] load-shed = ["__common"] log = ["tracing/log"] make = [ "futures-util", "pin-project-lite", "tokio/io-std", ] ready-cache = [ "futures-core", "futures-util", "indexmap", "tokio/sync", "tracing", "pin-project-lite", ] reconnect = [ "make", "tokio/io-std", "tracing", ] retry = [ "__common", "tokio/time", ] spawn-ready = [ "__common", "futures-util", "tokio/sync", "tokio/rt", "util", "tracing", ] steer = [] timeout = [ "pin-project-lite", "tokio/time", ] util = [ "__common", "futures-util", "pin-project", ] tower-0.4.13/Cargo.toml.orig000064400000000000000000000070160072674642500137740ustar 00000000000000[package] name = "tower" # When releasing to crates.io: # - Update doc url # - Cargo.toml # - README.md # - Update CHANGELOG.md. # - Create "vX.X.X" git tag. version = "0.4.13" authors = ["Tower Maintainers "] license = "MIT" readme = "README.md" repository = "https://github.com/tower-rs/tower" homepage = "https://github.com/tower-rs/tower" documentation = "https://docs.rs/tower/0.4.13" description = """ Tower is a library of modular and reusable components for building robust clients and servers. """ categories = ["asynchronous", "network-programming"] keywords = ["io", "async", "non-blocking", "futures", "service"] edition = "2018" rust-version = "1.49.0" [features] default = ["log"] # Internal __common = ["futures-core", "pin-project-lite"] full = [ "balance", "buffer", "discover", "filter", "hedge", "limit", "load", "load-shed", "make", "ready-cache", "reconnect", "retry", "spawn-ready", "steer", "timeout", "util", ] # FIXME: Use weak dependency once available (https://github.com/rust-lang/cargo/issues/8832) log = ["tracing/log"] balance = ["discover", "load", "ready-cache", "make", "rand", "slab"] buffer = ["__common", "tokio/sync", "tokio/rt", "tokio-util", "tracing"] discover = ["__common"] filter = ["__common", "futures-util"] hedge = ["util", "filter", "futures-util", "hdrhistogram", "tokio/time", "tracing"] limit = ["__common", "tokio/time", "tokio/sync", "tokio-util", "tracing"] load = ["__common", "tokio/time", "tracing"] load-shed = ["__common"] make = ["futures-util", "pin-project-lite", "tokio/io-std"] ready-cache = ["futures-core", "futures-util", "indexmap", "tokio/sync", "tracing", "pin-project-lite"] reconnect = ["make", "tokio/io-std", "tracing"] retry = ["__common", "tokio/time"] spawn-ready = ["__common", "futures-util", "tokio/sync", "tokio/rt", "util", "tracing"] steer = [] timeout = ["pin-project-lite", "tokio/time"] util = ["__common", "futures-util", "pin-project"] [dependencies] tower-layer = { version = "0.3.1", path = "../tower-layer" } tower-service = { version = "0.3.1", path = "../tower-service" } futures-core = { version = "0.3", optional = true } futures-util = { version = "0.3", default-features = false, features = ["alloc"], optional = true } hdrhistogram = { version = "7.0", optional = true, default-features = false } indexmap = { version = "1.0.2", optional = true } rand = { version = "0.8", features = ["small_rng"], optional = true } slab = { version = "0.4", optional = true } tokio = { version = "1.6", optional = true, features = ["sync"] } tokio-stream = { version = "0.1.0", optional = true } tokio-util = { version = "0.7.0", default-features = false, optional = true } tracing = { version = "0.1.2", default-features = false, features = ["std"], optional = true } pin-project = { version = "1", optional = true } pin-project-lite = { version = "0.2.7", optional = true } [dev-dependencies] futures = "0.3" hdrhistogram = { version = "7.0", default-features = false } pin-project-lite = "0.2.7" tokio = { version = "1.6.2", features = ["macros", "sync", "test-util", "rt-multi-thread"] } tokio-stream = "0.1" tokio-test = "0.4" tower-test = { version = "0.4", path = "../tower-test" } tracing-subscriber = { version = "0.3", default-features = false, features = ["fmt", "ansi"] } http = "0.2" lazy_static = "1.4.0" [package.metadata.docs.rs] all-features = true rustdoc-args = ["--cfg", "docsrs"] [package.metadata.playground] features = ["full"] [[example]] name = "tower-balance" path = "examples/tower-balance.rs" required-features = ["full"] tower-0.4.13/LICENSE000064400000000000000000000020460072674642500121100ustar 00000000000000Copyright (c) 2019 Tower Contributors Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. tower-0.4.13/README.md000064400000000000000000000210350072674642500123610ustar 00000000000000# Tower Tower is a library of modular and reusable components for building robust networking clients and servers. [![Crates.io][crates-badge]][crates-url] [![Documentation][docs-badge]][docs-url] [![Documentation (master)][docs-master-badge]][docs-master-url] [![MIT licensed][mit-badge]][mit-url] [![Build Status][actions-badge]][actions-url] [![Discord chat][discord-badge]][discord-url] [crates-badge]: https://img.shields.io/crates/v/tower.svg [crates-url]: https://crates.io/crates/tower [docs-badge]: https://docs.rs/tower/badge.svg [docs-url]: https://docs.rs/tower [docs-master-badge]: https://img.shields.io/badge/docs-master-blue [docs-master-url]: https://tower-rs.github.io/tower/tower [mit-badge]: https://img.shields.io/badge/license-MIT-blue.svg [mit-url]: LICENSE [actions-badge]: https://github.com/tower-rs/tower/workflows/CI/badge.svg [actions-url]:https://github.com/tower-rs/tower/actions?query=workflow%3ACI [discord-badge]: https://img.shields.io/discord/500028886025895936?logo=discord&label=discord&logoColor=white [discord-url]: https://discord.gg/EeF3cQw ## Overview Tower aims to make it as easy as possible to build robust networking clients and servers. It is protocol agnostic, but is designed around a request / response pattern. If your protocol is entirely stream based, Tower may not be a good fit. Tower provides a simple core abstraction, the [`Service`] trait, which represents an asynchronous function taking a request and returning either a response or an error. This abstraction can be used to model both clients and servers. Generic components, like [timeouts], [rate limiting], and [load balancing], can be modeled as [`Service`]s that wrap some inner service and apply additional behavior before or after the inner service is called. This allows implementing these components in a protocol-agnostic, composable way. Typically, such services are referred to as _middleware_. An additional abstraction, the [`Layer`] trait, is used to compose middleware with [`Service`]s. If a [`Service`] can be thought of as an asynchronous function from a request type to a response type, a [`Layer`] is a function taking a [`Service`] of one type and returning a [`Service`] of a different type. The [`ServiceBuilder`] type is used to add middleware to a service by composing it with multiple multiple [`Layer`]s. ### The Tower Ecosystem Tower is made up of the following crates: * [`tower`] (this crate) * [`tower-service`] * [`tower-layer`] * [`tower-test`] Since the [`Service`] and [`Layer`] traits are important integration points for all libraries using Tower, they are kept as stable as possible, and breaking changes are made rarely. Therefore, they are defined in separate crates, [`tower-service`] and [`tower-layer`]. This crate contains re-exports of those core traits, implementations of commonly-used middleware, and [utilities] for working with [`Service`]s and [`Layer`]s. Finally, the [`tower-test`] crate provides tools for testing programs using Tower. ## Usage Tower provides an abstraction layer, and generic implementations of various middleware. This means that the `tower` crate on its own does *not* provide a working implementation of a network client or server. Instead, Tower's [`Service` trait][`Service`] provides an integration point between application code, libraries providing middleware implementations, and libraries that implement servers and/or clients for various network protocols. Depending on your particular use case, you might use Tower in several ways: * **Implementing application logic** for a networked program. You might use the [`Service`] trait to model your application's behavior, and use the middleware [provided by this crate][all_layers] and by other libraries to add functionality to clients and servers provided by one or more protocol implementations. * **Implementing middleware** to add custom behavior to network clients and servers in a reusable manner. This might be general-purpose middleware (and if it is, please consider releasing your middleware as a library for other Tower users!) or application-specific behavior that needs to be shared between multiple clients or servers. * **Implementing a network protocol**. Libraries that implement network protocols (such as HTTP) can depend on `tower-service` to use the [`Service`] trait as an integration point between the protocol and user code. For example, a client for some protocol might implement [`Service`], allowing users to add arbitrary Tower middleware to those clients. Similarly, a server might be created from a user-provided [`Service`]. Additionally, when a network protocol requires functionality already provided by existing Tower middleware, a protocol implementation might use Tower middleware internally, as well as as an integration point. ### Library Support A number of third-party libraries support Tower and the [`Service`] trait. The following is an incomplete list of such libraries: * [`hyper`]: A fast and correct low-level HTTP implementation. * [`tonic`]: A [gRPC-over-HTTP/2][grpc] implementation built on top of [`hyper`]. See [here][tonic-examples] for examples of using [`tonic`] with Tower. * [`warp`]: A lightweight, composable web framework. See [here][warp-service] for details on using [`warp`] with Tower. * [`tower-lsp`] and its fork, [`lspower`]: implementations of the [Language Server Protocol][lsp] based on Tower. * [`kube`]: Kubernetes client and futures controller runtime. [`kube::Client`] makes use of the Tower ecosystem: [`tower`], [`tower-http`], and [`tower-test`]. See [here][kube-example-minimal] and [here][kube-example-trace] for examples of using [`kube`] with Tower. [`hyper`]: https://crates.io/crates/hyper [`tonic`]: https://crates.io/crates/tonic [tonic-examples]: https://github.com/hyperium/tonic/tree/master/examples/src/tower [grpc]: https://grpc.io [`warp`]: https://crates.io/crates/warp [warp-service]: https://docs.rs/warp/0.2.5/warp/fn.service.html [`tower-lsp`]: https://crates.io/crates/tower-lsp [`lspower`]: https://crates.io/crates/lspower [lsp]: https://microsoft.github.io/language-server-protocol/ [`kube`]: https://crates.io/crates/kube [`kube::Client`]: https://docs.rs/kube/latest/kube/struct.Client.html [kube-example-minimal]: https://github.com/clux/kube-rs/blob/master/examples/custom_client.rs [kube-example-trace]: https://github.com/clux/kube-rs/blob/master/examples/custom_client_trace.rs [`tower-http`]: https://crates.io/crates/tower-http If you're the maintainer of a crate that supports Tower, we'd love to add your crate to this list! Please [open a PR] adding a brief description of your library! ### Getting Started The various middleware implementations provided by this crate are feature flagged, so that users can only compile the parts of Tower they need. By default, all the optional middleware are disabled. To get started using all of Tower's optional middleware, add this to your `Cargo.toml`: ```toml tower = { version = "0.4", features = ["full"] } ``` Alternatively, you can only enable some features. For example, to enable only the [`retry`] and [`timeout`][timeouts] middleware, write: ```toml tower = { version = "0.4", features = ["retry", "timeout"] } ``` See [here][all_layers] for a complete list of all middleware provided by Tower. [`Service`]: https://docs.rs/tower/latest/tower/trait.Service.html [`Layer`]: https://docs.rs/tower/latest/tower/trait.Layer.html [all_layers]: https://docs.rs/tower/latest/tower/#modules [timeouts]: https://docs.rs/tower/latest/tower/timeout/ [rate limiting]: https://docs.rs/tower/latest/tower/limit/rate [load balancing]: https://docs.rs/tower/latest/tower/balance/ [`ServiceBuilder`]: https://docs.rs/tower/latest/tower/struct.ServiceBuilder.html [utilities]: https://docs.rs/tower/latest/tower/trait.ServiceExt.html [`tower`]: https://crates.io/crates/tower [`tower-service`]: https://crates.io/crates/tower-service [`tower-layer`]: https://crates.io/crates/tower-layer [`tower-test`]: https://crates.io/crates/tower-test [`retry`]: https://docs.rs/tower/latest/tower/retry [open a PR]: https://github.com/tower-rs/tower/compare ## Supported Rust Versions Tower will keep a rolling MSRV (minimum supported Rust version) policy of **at least** 6 months. When increasing the MSRV, the new Rust version must have been released at least six months ago. The current MSRV is 1.49.0. ## License This project is licensed under the [MIT license](LICENSE). ### Contribution Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in Tower by you, shall be licensed as MIT, without any additional terms or conditions. tower-0.4.13/examples/tower-balance.rs000064400000000000000000000150100072674642500160050ustar 00000000000000//! Exercises load balancers with mocked services. use futures_core::{Stream, TryStream}; use futures_util::{stream, stream::StreamExt, stream::TryStreamExt}; use hdrhistogram::Histogram; use pin_project_lite::pin_project; use rand::{self, Rng}; use std::hash::Hash; use std::time::Duration; use std::{ pin::Pin, task::{Context, Poll}, }; use tokio::time::{self, Instant}; use tower::balance as lb; use tower::discover::{Change, Discover}; use tower::limit::concurrency::ConcurrencyLimit; use tower::load; use tower::util::ServiceExt; use tower_service::Service; const REQUESTS: usize = 100_000; const CONCURRENCY: usize = 500; const DEFAULT_RTT: Duration = Duration::from_millis(30); static ENDPOINT_CAPACITY: usize = CONCURRENCY; static MAX_ENDPOINT_LATENCIES: [Duration; 10] = [ Duration::from_millis(1), Duration::from_millis(5), Duration::from_millis(10), Duration::from_millis(10), Duration::from_millis(10), Duration::from_millis(100), Duration::from_millis(100), Duration::from_millis(100), Duration::from_millis(500), Duration::from_millis(1000), ]; struct Summary { latencies: Histogram, start: Instant, count_by_instance: [usize; 10], } #[tokio::main] async fn main() { tracing::subscriber::set_global_default(tracing_subscriber::FmtSubscriber::default()).unwrap(); println!("REQUESTS={}", REQUESTS); println!("CONCURRENCY={}", CONCURRENCY); println!("ENDPOINT_CAPACITY={}", ENDPOINT_CAPACITY); print!("MAX_ENDPOINT_LATENCIES=["); for max in &MAX_ENDPOINT_LATENCIES { let l = max.as_secs() * 1_000 + u64::from(max.subsec_millis()); print!("{}ms, ", l); } println!("]"); let decay = Duration::from_secs(10); let d = gen_disco(); let pe = lb::p2c::Balance::new(load::PeakEwmaDiscover::new( d, DEFAULT_RTT, decay, load::CompleteOnResponse::default(), )); run("P2C+PeakEWMA...", pe).await; let d = gen_disco(); let ll = lb::p2c::Balance::new(load::PendingRequestsDiscover::new( d, load::CompleteOnResponse::default(), )); run("P2C+LeastLoaded...", ll).await; } type Error = Box; type Key = usize; pin_project! { struct Disco { services: Vec<(Key, S)> } } impl Disco { fn new(services: Vec<(Key, S)>) -> Self { Self { services } } } impl Stream for Disco where S: Service, { type Item = Result, Error>; fn poll_next(self: Pin<&mut Self>, _: &mut Context<'_>) -> Poll> { match self.project().services.pop() { Some((k, service)) => Poll::Ready(Some(Ok(Change::Insert(k, service)))), None => { // there may be more later Poll::Pending } } } } fn gen_disco() -> impl Discover< Key = Key, Error = Error, Service = ConcurrencyLimit< impl Service + Send, >, > + Send { Disco::new( MAX_ENDPOINT_LATENCIES .iter() .enumerate() .map(|(instance, latency)| { let svc = tower::service_fn(move |_| { let start = Instant::now(); let maxms = u64::from(latency.subsec_millis()) .saturating_add(latency.as_secs().saturating_mul(1_000)); let latency = Duration::from_millis(rand::thread_rng().gen_range(0..maxms)); async move { time::sleep_until(start + latency).await; let latency = start.elapsed(); Ok(Rsp { latency, instance }) } }); (instance, ConcurrencyLimit::new(svc, ENDPOINT_CAPACITY)) }) .collect(), ) } async fn run(name: &'static str, lb: lb::p2c::Balance) where D: Discover + Unpin + Send + 'static, D::Error: Into, D::Key: Clone + Send + Hash, D::Service: Service + load::Load + Send, >::Error: Into, >::Future: Send, ::Metric: std::fmt::Debug, { println!("{}", name); let requests = stream::repeat(Req).take(REQUESTS); let service = ConcurrencyLimit::new(lb, CONCURRENCY); let responses = service.call_all(requests).unordered(); compute_histo(responses).await.unwrap().report(); } async fn compute_histo(mut times: S) -> Result where S: TryStream + 'static + Unpin, { let mut summary = Summary::new(); while let Some(rsp) = times.try_next().await? { summary.count(rsp); } Ok(summary) } impl Summary { fn new() -> Self { Self { // The max delay is 2000ms. At 3 significant figures. latencies: Histogram::::new_with_max(3_000, 3).unwrap(), start: Instant::now(), count_by_instance: [0; 10], } } fn count(&mut self, rsp: Rsp) { let ms = rsp.latency.as_secs() * 1_000; let ms = ms + u64::from(rsp.latency.subsec_nanos()) / 1_000 / 1_000; self.latencies += ms; self.count_by_instance[rsp.instance] += 1; } fn report(&self) { let mut total = 0; for c in &self.count_by_instance { total += c; } for (i, c) in self.count_by_instance.iter().enumerate() { let p = *c as f64 / total as f64 * 100.0; println!(" [{:02}] {:>5.01}%", i, p); } println!(" wall {:4}s", self.start.elapsed().as_secs()); if self.latencies.len() < 2 { return; } println!(" p50 {:4}ms", self.latencies.value_at_quantile(0.5)); if self.latencies.len() < 10 { return; } println!(" p90 {:4}ms", self.latencies.value_at_quantile(0.9)); if self.latencies.len() < 50 { return; } println!(" p95 {:4}ms", self.latencies.value_at_quantile(0.95)); if self.latencies.len() < 100 { return; } println!(" p99 {:4}ms", self.latencies.value_at_quantile(0.99)); if self.latencies.len() < 1000 { return; } println!(" p999 {:4}ms", self.latencies.value_at_quantile(0.999)); } } #[derive(Debug, Clone)] struct Req; #[derive(Debug)] struct Rsp { latency: Duration, instance: usize, } tower-0.4.13/src/balance/error.rs000064400000000000000000000010300072674642500147460ustar 00000000000000//! Error types for the [`tower::balance`] middleware. //! //! [`tower::balance`]: crate::balance use std::fmt; /// The balancer's endpoint discovery stream failed. #[derive(Debug)] pub struct Discover(pub(crate) crate::BoxError); impl fmt::Display for Discover { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { write!(f, "load balancer discovery error: {}", self.0) } } impl std::error::Error for Discover { fn source(&self) -> Option<&(dyn std::error::Error + 'static)> { Some(&*self.0) } } tower-0.4.13/src/balance/mod.rs000064400000000000000000000050400072674642500144010ustar 00000000000000//! Middleware that allows balancing load among multiple services. //! //! In larger systems, multiple endpoints are often available for a given service. As load //! increases, you want to ensure that that load is spread evenly across the available services. //! Otherwise, clients could see spikes in latency if their request goes to a particularly loaded //! service, even when spare capacity is available to handle that request elsewhere. //! //! This module provides two pieces of middleware that helps with this type of load balancing: //! //! First, [`p2c`] implements the "[Power of Two Random Choices]" algorithm, a simple but robust //! technique for spreading load across services with only inexact load measurements. Use this if //! the set of available services is not within your control, and you simply want to spread load //! among that set of services. //! //! [Power of Two Random Choices]: http://www.eecs.harvard.edu/~michaelm/postscripts/handbook2001.pdf //! //! Second, [`pool`] implements a dynamically sized pool of services. It estimates the overall //! current load by tracking successful and unsuccessful calls to [`poll_ready`], and uses an //! exponentially weighted moving average to add (using [`MakeService`]) or remove (by dropping) //! services in response to increases or decreases in load. Use this if you are able to //! dynamically add more service endpoints to the system to handle added load. //! //! # Examples //! //! ```rust //! # #[cfg(feature = "util")] //! # #[cfg(feature = "load")] //! # fn warnings_are_errors() { //! use tower::balance::p2c::Balance; //! use tower::load::Load; //! use tower::{Service, ServiceExt}; //! use futures_util::pin_mut; //! # use futures_core::Stream; //! # use futures_util::StreamExt; //! //! async fn spread + Load>(svc1: S, svc2: S, reqs: impl Stream) //! where //! S::Error: Into, //! # // this bound is pretty unfortunate, and the compiler does _not_ help //! S::Metric: std::fmt::Debug, //! { //! // Spread load evenly across the two services //! let p2c = Balance::new(tower::discover::ServiceList::new(vec![svc1, svc2])); //! //! // Issue all the requests that come in. //! // Some will go to svc1, some will go to svc2. //! pin_mut!(reqs); //! let mut responses = p2c.call_all(reqs); //! while let Some(rsp) = responses.next().await { //! // ... //! } //! } //! # } //! ``` //! //! [`MakeService`]: crate::MakeService //! [`poll_ready`]: crate::Service::poll_ready pub mod error; pub mod p2c; pub mod pool; tower-0.4.13/src/balance/p2c/layer.rs000064400000000000000000000035610072674642500154300ustar 00000000000000use super::MakeBalance; use std::{fmt, marker::PhantomData}; use tower_layer::Layer; /// Construct load balancers ([`Balance`]) over dynamic service sets ([`Discover`]) produced by the /// "inner" service in response to requests coming from the "outer" service. /// /// This construction may seem a little odd at first glance. This is not a layer that takes /// requests and produces responses in the traditional sense. Instead, it is more like /// [`MakeService`] in that it takes service _descriptors_ (see `Target` on [`MakeService`]) /// and produces _services_. Since [`Balance`] spreads requests across a _set_ of services, /// the inner service should produce a [`Discover`], not just a single /// [`Service`], given a service descriptor. /// /// See the [module-level documentation](crate::balance) for details on load balancing. /// /// [`Balance`]: crate::balance::p2c::Balance /// [`Discover`]: crate::discover::Discover /// [`MakeService`]: crate::MakeService /// [`Service`]: crate::Service pub struct MakeBalanceLayer { _marker: PhantomData, } impl MakeBalanceLayer { /// Build balancers using operating system entropy. pub fn new() -> Self { Self { _marker: PhantomData, } } } impl Default for MakeBalanceLayer { fn default() -> Self { Self::new() } } impl Clone for MakeBalanceLayer { fn clone(&self) -> Self { Self { _marker: PhantomData, } } } impl Layer for MakeBalanceLayer { type Service = MakeBalance; fn layer(&self, make_discover: S) -> Self::Service { MakeBalance::new(make_discover) } } impl fmt::Debug for MakeBalanceLayer { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { f.debug_struct("MakeBalanceLayer").finish() } } tower-0.4.13/src/balance/p2c/make.rs000064400000000000000000000066170072674642500152360ustar 00000000000000use super::Balance; use crate::discover::Discover; use futures_core::ready; use pin_project_lite::pin_project; use std::hash::Hash; use std::marker::PhantomData; use std::{ fmt, future::Future, pin::Pin, task::{Context, Poll}, }; use tower_service::Service; /// Constructs load balancers over dynamic service sets produced by a wrapped "inner" service. /// /// This is effectively an implementation of [`MakeService`] except that it forwards the service /// descriptors (`Target`) to an inner service (`S`), and expects that service to produce a /// service set in the form of a [`Discover`]. It then wraps the service set in a [`Balance`] /// before returning it as the "made" service. /// /// See the [module-level documentation](crate::balance) for details on load balancing. /// /// [`MakeService`]: crate::MakeService /// [`Discover`]: crate::discover::Discover /// [`Balance`]: crate::balance::p2c::Balance pub struct MakeBalance { inner: S, _marker: PhantomData, } pin_project! { /// A [`Balance`] in the making. /// /// [`Balance`]: crate::balance::p2c::Balance pub struct MakeFuture { #[pin] inner: F, _marker: PhantomData, } } impl MakeBalance { /// Build balancers using operating system entropy. pub fn new(make_discover: S) -> Self { Self { inner: make_discover, _marker: PhantomData, } } } impl Clone for MakeBalance where S: Clone, { fn clone(&self) -> Self { Self { inner: self.inner.clone(), _marker: PhantomData, } } } impl Service for MakeBalance where S: Service, S::Response: Discover, ::Key: Hash, ::Service: Service, <::Service as Service>::Error: Into, { type Response = Balance; type Error = S::Error; type Future = MakeFuture; fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { self.inner.poll_ready(cx) } fn call(&mut self, target: Target) -> Self::Future { MakeFuture { inner: self.inner.call(target), _marker: PhantomData, } } } impl fmt::Debug for MakeBalance where S: fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { let Self { inner, _marker } = self; f.debug_struct("MakeBalance").field("inner", inner).finish() } } impl Future for MakeFuture where F: Future>, T: Discover, ::Key: Hash, ::Service: Service, <::Service as Service>::Error: Into, { type Output = Result, E>; fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { let this = self.project(); let inner = ready!(this.inner.poll(cx))?; let svc = Balance::new(inner); Poll::Ready(Ok(svc)) } } impl fmt::Debug for MakeFuture where F: fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { let Self { inner, _marker } = self; f.debug_struct("MakeFuture").field("inner", inner).finish() } } tower-0.4.13/src/balance/p2c/mod.rs000064400000000000000000000041770072674642500150770ustar 00000000000000//! This module implements the "[Power of Two Random Choices]" load balancing algorithm. //! //! It is a simple but robust technique for spreading load across services with only inexact load //! measurements. As its name implies, whenever a request comes in, it samples two ready services //! at random, and issues the request to whichever service is less loaded. How loaded a service is //! is determined by the return value of [`Load`](crate::load::Load). //! //! As described in the [Finagle Guide][finagle]: //! //! > The algorithm randomly picks two services from the set of ready endpoints and //! > selects the least loaded of the two. By repeatedly using this strategy, we can //! > expect a manageable upper bound on the maximum load of any server. //! > //! > The maximum load variance between any two servers is bound by `ln(ln(n))` where //! > `n` is the number of servers in the cluster. //! //! The balance service and layer implementations rely on _service discovery_ to provide the //! underlying set of services to balance requests across. This happens through the //! [`Discover`](crate::discover::Discover) trait, which is essentially a [`Stream`] that indicates //! when services become available or go away. If you have a fixed set of services, consider using //! [`ServiceList`](crate::discover::ServiceList). //! //! Since the load balancer needs to perform _random_ choices, the constructors in this module //! usually come in two forms: one that uses randomness provided by the operating system, and one //! that lets you specify the random seed to use. Usually the former is what you'll want, though //! the latter may come in handy for reproducability or to reduce reliance on the operating system. //! //! [Power of Two Random Choices]: http://www.eecs.harvard.edu/~michaelm/postscripts/handbook2001.pdf //! [finagle]: https://twitter.github.io/finagle/guide/Clients.html#power-of-two-choices-p2c-least-loaded //! [`Stream`]: https://docs.rs/futures/0.3/futures/stream/trait.Stream.html mod layer; mod make; mod service; #[cfg(test)] mod test; pub use layer::MakeBalanceLayer; pub use make::{MakeBalance, MakeFuture}; pub use service::Balance; tower-0.4.13/src/balance/p2c/service.rs000064400000000000000000000257210072674642500157560ustar 00000000000000use super::super::error; use crate::discover::{Change, Discover}; use crate::load::Load; use crate::ready_cache::{error::Failed, ReadyCache}; use futures_core::ready; use futures_util::future::{self, TryFutureExt}; use pin_project_lite::pin_project; use rand::{rngs::SmallRng, Rng, SeedableRng}; use std::hash::Hash; use std::marker::PhantomData; use std::{ fmt, future::Future, pin::Pin, task::{Context, Poll}, }; use tokio::sync::oneshot; use tower_service::Service; use tracing::{debug, trace}; /// Efficiently distributes requests across an arbitrary number of services. /// /// See the [module-level documentation](..) for details. /// /// Note that [`Balance`] requires that the [`Discover`] you use is [`Unpin`] in order to implement /// [`Service`]. This is because it needs to be accessed from [`Service::poll_ready`], which takes /// `&mut self`. You can achieve this easily by wrapping your [`Discover`] in [`Box::pin`] before you /// construct the [`Balance`] instance. For more details, see [#319]. /// /// [`Box::pin`]: std::boxed::Box::pin() /// [#319]: https://github.com/tower-rs/tower/issues/319 pub struct Balance where D: Discover, D::Key: Hash, { discover: D, services: ReadyCache, ready_index: Option, rng: SmallRng, _req: PhantomData, } impl fmt::Debug for Balance where D: fmt::Debug, D::Key: Hash + fmt::Debug, D::Service: fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { f.debug_struct("Balance") .field("discover", &self.discover) .field("services", &self.services) .finish() } } pin_project! { /// A Future that becomes satisfied when an `S`-typed service is ready. /// /// May fail due to cancelation, i.e., if [`Discover`] removes the service from the service set. struct UnreadyService { key: Option, #[pin] cancel: oneshot::Receiver<()>, service: Option, _req: PhantomData, } } enum Error { Inner(E), Canceled, } impl Balance where D: Discover, D::Key: Hash, D::Service: Service, >::Error: Into, { /// Constructs a load balancer that uses operating system entropy. pub fn new(discover: D) -> Self { Self::from_rng(discover, &mut rand::thread_rng()).expect("ThreadRNG must be valid") } /// Constructs a load balancer seeded with the provided random number generator. pub fn from_rng(discover: D, rng: R) -> Result { let rng = SmallRng::from_rng(rng)?; Ok(Self { rng, discover, services: ReadyCache::default(), ready_index: None, _req: PhantomData, }) } /// Returns the number of endpoints currently tracked by the balancer. pub fn len(&self) -> usize { self.services.len() } /// Returns whether or not the balancer is empty. pub fn is_empty(&self) -> bool { self.services.is_empty() } } impl Balance where D: Discover + Unpin, D::Key: Hash + Clone, D::Error: Into, D::Service: Service + Load, ::Metric: std::fmt::Debug, >::Error: Into, { /// Polls `discover` for updates, adding new items to `not_ready`. /// /// Removals may alter the order of either `ready` or `not_ready`. fn update_pending_from_discover( &mut self, cx: &mut Context<'_>, ) -> Poll>> { debug!("updating from discover"); loop { match ready!(Pin::new(&mut self.discover).poll_discover(cx)) .transpose() .map_err(|e| error::Discover(e.into()))? { None => return Poll::Ready(None), Some(Change::Remove(key)) => { trace!("remove"); self.services.evict(&key); } Some(Change::Insert(key, svc)) => { trace!("insert"); // If this service already existed in the set, it will be // replaced as the new one becomes ready. self.services.push(key, svc); } } } } fn promote_pending_to_ready(&mut self, cx: &mut Context<'_>) { loop { match self.services.poll_pending(cx) { Poll::Ready(Ok(())) => { // There are no remaining pending services. debug_assert_eq!(self.services.pending_len(), 0); break; } Poll::Pending => { // None of the pending services are ready. debug_assert!(self.services.pending_len() > 0); break; } Poll::Ready(Err(error)) => { // An individual service was lost; continue processing // pending services. debug!(%error, "dropping failed endpoint"); } } } trace!( ready = %self.services.ready_len(), pending = %self.services.pending_len(), "poll_unready" ); } /// Performs P2C on inner services to find a suitable endpoint. fn p2c_ready_index(&mut self) -> Option { match self.services.ready_len() { 0 => None, 1 => Some(0), len => { // Get two distinct random indexes (in a random order) and // compare the loads of the service at each index. let idxs = rand::seq::index::sample(&mut self.rng, len, 2); let aidx = idxs.index(0); let bidx = idxs.index(1); debug_assert_ne!(aidx, bidx, "random indices must be distinct"); let aload = self.ready_index_load(aidx); let bload = self.ready_index_load(bidx); let chosen = if aload <= bload { aidx } else { bidx }; trace!( a.index = aidx, a.load = ?aload, b.index = bidx, b.load = ?bload, chosen = if chosen == aidx { "a" } else { "b" }, "p2c", ); Some(chosen) } } } /// Accesses a ready endpoint by index and returns its current load. fn ready_index_load(&self, index: usize) -> ::Metric { let (_, svc) = self.services.get_ready_index(index).expect("invalid index"); svc.load() } pub(crate) fn discover_mut(&mut self) -> &mut D { &mut self.discover } } impl Service for Balance where D: Discover + Unpin, D::Key: Hash + Clone, D::Error: Into, D::Service: Service + Load, ::Metric: std::fmt::Debug, >::Error: Into, { type Response = >::Response; type Error = crate::BoxError; type Future = future::MapErr< >::Future, fn(>::Error) -> crate::BoxError, >; fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { // `ready_index` may have already been set by a prior invocation. These // updates cannot disturb the order of existing ready services. let _ = self.update_pending_from_discover(cx)?; self.promote_pending_to_ready(cx); loop { // If a service has already been selected, ensure that it is ready. // This ensures that the underlying service is ready immediately // before a request is dispatched to it (i.e. in the same task // invocation). If, e.g., a failure detector has changed the state // of the service, it may be evicted from the ready set so that // another service can be selected. if let Some(index) = self.ready_index.take() { match self.services.check_ready_index(cx, index) { Ok(true) => { // The service remains ready. self.ready_index = Some(index); return Poll::Ready(Ok(())); } Ok(false) => { // The service is no longer ready. Try to find a new one. trace!("ready service became unavailable"); } Err(Failed(_, error)) => { // The ready endpoint failed, so log the error and try // to find a new one. debug!(%error, "endpoint failed"); } } } // Select a new service by comparing two at random and using the // lesser-loaded service. self.ready_index = self.p2c_ready_index(); if self.ready_index.is_none() { debug_assert_eq!(self.services.ready_len(), 0); // We have previously registered interest in updates from // discover and pending services. return Poll::Pending; } } } fn call(&mut self, request: Req) -> Self::Future { let index = self.ready_index.take().expect("called before ready"); self.services .call_ready_index(index, request) .map_err(Into::into) } } impl, Req> Future for UnreadyService { type Output = Result<(K, S), (K, Error)>; fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { let this = self.project(); if let Poll::Ready(Ok(())) = this.cancel.poll(cx) { let key = this.key.take().expect("polled after ready"); return Poll::Ready(Err((key, Error::Canceled))); } let res = ready!(this .service .as_mut() .expect("poll after ready") .poll_ready(cx)); let key = this.key.take().expect("polled after ready"); let svc = this.service.take().expect("polled after ready"); match res { Ok(()) => Poll::Ready(Ok((key, svc))), Err(e) => Poll::Ready(Err((key, Error::Inner(e)))), } } } impl fmt::Debug for UnreadyService where K: fmt::Debug, S: fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { let Self { key, cancel, service, _req, } = self; f.debug_struct("UnreadyService") .field("key", key) .field("cancel", cancel) .field("service", service) .finish() } } tower-0.4.13/src/balance/p2c/test.rs000064400000000000000000000065360072674642500153000ustar 00000000000000use crate::discover::ServiceList; use crate::load; use futures_util::pin_mut; use std::task::Poll; use tokio_test::{assert_pending, assert_ready, assert_ready_ok, task}; use tower_test::{assert_request_eq, mock}; use super::*; #[tokio::test] async fn empty() { let empty: Vec, usize>> = vec![]; let disco = ServiceList::new(empty); let mut svc = mock::Spawn::new(Balance::new(disco)); assert_pending!(svc.poll_ready()); } #[tokio::test] async fn single_endpoint() { let (mut svc, mut handle) = mock::spawn_with(|s| { let mock = load::Constant::new(s, 0); let disco = ServiceList::new(vec![mock].into_iter()); Balance::new(disco) }); handle.allow(0); assert_pending!(svc.poll_ready()); assert_eq!( svc.get_ref().len(), 1, "balancer must have discovered endpoint" ); handle.allow(1); assert_ready_ok!(svc.poll_ready()); let mut fut = task::spawn(svc.call(())); assert_request_eq!(handle, ()).send_response(1); assert_eq!(assert_ready_ok!(fut.poll()), 1); handle.allow(1); assert_ready_ok!(svc.poll_ready()); handle.send_error("endpoint lost"); assert_pending!(svc.poll_ready()); assert!( svc.get_ref().is_empty(), "balancer must drop failed endpoints" ); } #[tokio::test] async fn two_endpoints_with_equal_load() { let (mock_a, handle_a) = mock::pair(); let (mock_b, handle_b) = mock::pair(); let mock_a = load::Constant::new(mock_a, 1); let mock_b = load::Constant::new(mock_b, 1); pin_mut!(handle_a); pin_mut!(handle_b); let disco = ServiceList::new(vec![mock_a, mock_b].into_iter()); let mut svc = mock::Spawn::new(Balance::new(disco)); handle_a.allow(0); handle_b.allow(0); assert_pending!(svc.poll_ready()); assert_eq!( svc.get_ref().len(), 2, "balancer must have discovered both endpoints" ); handle_a.allow(1); handle_b.allow(0); assert_ready_ok!( svc.poll_ready(), "must be ready when one of two services is ready" ); { let mut fut = task::spawn(svc.call(())); assert_request_eq!(handle_a, ()).send_response("a"); assert_eq!(assert_ready_ok!(fut.poll()), "a"); } handle_a.allow(0); handle_b.allow(1); assert_ready_ok!( svc.poll_ready(), "must be ready when both endpoints are ready" ); { let mut fut = task::spawn(svc.call(())); assert_request_eq!(handle_b, ()).send_response("b"); assert_eq!(assert_ready_ok!(fut.poll()), "b"); } handle_a.allow(1); handle_b.allow(1); for _ in 0..2 { assert_ready_ok!( svc.poll_ready(), "must be ready when both endpoints are ready" ); let mut fut = task::spawn(svc.call(())); for (ref mut h, c) in &mut [(&mut handle_a, "a"), (&mut handle_b, "b")] { if let Poll::Ready(Some((_, tx))) = h.as_mut().poll_request() { tracing::info!("using {}", c); tx.send_response(c); h.allow(0); } } assert_ready_ok!(fut.poll()); } handle_a.send_error("endpoint lost"); assert_pending!(svc.poll_ready()); assert_eq!( svc.get_ref().len(), 1, "balancer must drop failed endpoints", ); } tower-0.4.13/src/balance/pool/mod.rs000064400000000000000000000375070072674642500153670ustar 00000000000000//! This module defines a load-balanced pool of services that adds new services when load is high. //! //! The pool uses `poll_ready` as a signal indicating whether additional services should be spawned //! to handle the current level of load. Specifically, every time `poll_ready` on the inner service //! returns `Ready`, [`Pool`] consider that a 0, and every time it returns `Pending`, [`Pool`] //! considers it a 1. [`Pool`] then maintains an [exponential moving //! average](https://en.wikipedia.org/wiki/Moving_average#Exponential_moving_average) over those //! samples, which gives an estimate of how often the underlying service has been ready when it was //! needed "recently" (see [`Builder::urgency`]). If the service is loaded (see //! [`Builder::loaded_above`]), a new service is created and added to the underlying [`Balance`]. //! If the service is underutilized (see [`Builder::underutilized_below`]) and there are two or //! more services, then the latest added service is removed. In either case, the load estimate is //! reset to its initial value (see [`Builder::initial`] to prevent services from being rapidly //! added or removed. #![deny(missing_docs)] use super::p2c::Balance; use crate::discover::Change; use crate::load::Load; use crate::make::MakeService; use futures_core::{ready, Stream}; use pin_project_lite::pin_project; use slab::Slab; use std::{ fmt, future::Future, pin::Pin, task::{Context, Poll}, }; use tower_service::Service; #[cfg(test)] mod test; #[derive(Debug, Clone, Copy, Eq, PartialEq)] enum Level { /// Load is low -- remove a service instance. Low, /// Load is normal -- keep the service set as it is. Normal, /// Load is high -- add another service instance. High, } pin_project! { /// A wrapper around `MakeService` that discovers a new service when load is high, and removes a /// service when load is low. See [`Pool`]. pub struct PoolDiscoverer where MS: MakeService, { maker: MS, #[pin] making: Option, target: Target, load: Level, services: Slab<()>, died_tx: tokio::sync::mpsc::UnboundedSender, #[pin] died_rx: tokio::sync::mpsc::UnboundedReceiver, limit: Option, } } impl fmt::Debug for PoolDiscoverer where MS: MakeService + fmt::Debug, Target: fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { f.debug_struct("PoolDiscoverer") .field("maker", &self.maker) .field("making", &self.making.is_some()) .field("target", &self.target) .field("load", &self.load) .field("services", &self.services) .field("limit", &self.limit) .finish() } } impl Stream for PoolDiscoverer where MS: MakeService, MS::MakeError: Into, MS::Error: Into, Target: Clone, { type Item = Result>, MS::MakeError>; fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { let mut this = self.project(); while let Poll::Ready(Some(sid)) = this.died_rx.as_mut().poll_recv(cx) { this.services.remove(sid); tracing::trace!( pool.services = this.services.len(), message = "removing dropped service" ); } if this.services.is_empty() && this.making.is_none() { let _ = ready!(this.maker.poll_ready(cx))?; tracing::trace!("construct initial pool connection"); this.making .set(Some(this.maker.make_service(this.target.clone()))); } if let Level::High = this.load { if this.making.is_none() { if this .limit .map(|limit| this.services.len() >= limit) .unwrap_or(false) { return Poll::Pending; } tracing::trace!( pool.services = this.services.len(), message = "decided to add service to loaded pool" ); ready!(this.maker.poll_ready(cx))?; tracing::trace!("making new service"); // TODO: it'd be great if we could avoid the clone here and use, say, &Target this.making .set(Some(this.maker.make_service(this.target.clone()))); } } if let Some(fut) = this.making.as_mut().as_pin_mut() { let svc = ready!(fut.poll(cx))?; this.making.set(None); let id = this.services.insert(()); let svc = DropNotifyService { svc, id, notify: this.died_tx.clone(), }; tracing::trace!( pool.services = this.services.len(), message = "finished creating new service" ); *this.load = Level::Normal; return Poll::Ready(Some(Ok(Change::Insert(id, svc)))); } match this.load { Level::High => { unreachable!("found high load but no Service being made"); } Level::Normal => Poll::Pending, Level::Low if this.services.len() == 1 => Poll::Pending, Level::Low => { *this.load = Level::Normal; // NOTE: this is a little sad -- we'd prefer to kill short-living services let rm = this.services.iter().next().unwrap().0; // note that we _don't_ remove from self.services here // that'll happen automatically on drop tracing::trace!( pool.services = this.services.len(), message = "removing service for over-provisioned pool" ); Poll::Ready(Some(Ok(Change::Remove(rm)))) } } } } /// A [builder] that lets you configure how a [`Pool`] determines whether the underlying service is /// loaded or not. See the [module-level documentation](self) and the builder's methods for /// details. /// /// [builder]: https://rust-lang-nursery.github.io/api-guidelines/type-safety.html#builders-enable-construction-of-complex-values-c-builder #[derive(Copy, Clone, Debug)] pub struct Builder { low: f64, high: f64, init: f64, alpha: f64, limit: Option, } impl Default for Builder { fn default() -> Self { Builder { init: 0.1, low: 0.00001, high: 0.2, alpha: 0.03, limit: None, } } } impl Builder { /// Create a new builder with default values for all load settings. /// /// If you just want to use the defaults, you can just use [`Pool::new`]. pub fn new() -> Self { Self::default() } /// When the estimated load (see the [module-level docs](self)) drops below this /// threshold, and there are at least two services active, a service is removed. /// /// The default value is 0.01. That is, when one in every 100 `poll_ready` calls return /// `Pending`, then the underlying service is considered underutilized. pub fn underutilized_below(&mut self, low: f64) -> &mut Self { self.low = low; self } /// When the estimated load (see the [module-level docs](self)) exceeds this /// threshold, and no service is currently in the process of being added, a new service is /// scheduled to be added to the underlying [`Balance`]. /// /// The default value is 0.5. That is, when every other call to `poll_ready` returns /// `Pending`, then the underlying service is considered highly loaded. pub fn loaded_above(&mut self, high: f64) -> &mut Self { self.high = high; self } /// The initial estimated load average. /// /// This is also the value that the estimated load will be reset to whenever a service is added /// or removed. /// /// The default value is 0.1. pub fn initial(&mut self, init: f64) -> &mut Self { self.init = init; self } /// How aggressively the estimated load average is updated. /// /// This is the α parameter of the formula for the [exponential moving /// average](https://en.wikipedia.org/wiki/Moving_average#Exponential_moving_average), and /// dictates how quickly new samples of the current load affect the estimated load. If the /// value is closer to 1, newer samples affect the load average a lot (when α is 1, the load /// average is immediately set to the current load). If the value is closer to 0, newer samples /// affect the load average very little at a time. /// /// The given value is clamped to `[0,1]`. /// /// The default value is 0.05, meaning, in very approximate terms, that each new load sample /// affects the estimated load by 5%. pub fn urgency(&mut self, alpha: f64) -> &mut Self { self.alpha = alpha.max(0.0).min(1.0); self } /// The maximum number of backing `Service` instances to maintain. /// /// When the limit is reached, the load estimate is clamped to the high load threshhold, and no /// new service is spawned. /// /// No maximum limit is imposed by default. pub fn max_services(&mut self, limit: Option) -> &mut Self { self.limit = limit; self } /// See [`Pool::new`]. pub fn build( &self, make_service: MS, target: Target, ) -> Pool where MS: MakeService, MS::Service: Load, ::Metric: std::fmt::Debug, MS::MakeError: Into, MS::Error: Into, Target: Clone, { let (died_tx, died_rx) = tokio::sync::mpsc::unbounded_channel(); let d = PoolDiscoverer { maker: make_service, making: None, target, load: Level::Normal, services: Slab::new(), died_tx, died_rx, limit: self.limit, }; Pool { balance: Balance::new(Box::pin(d)), options: *self, ewma: self.init, } } } /// A dynamically sized, load-balanced pool of `Service` instances. pub struct Pool where MS: MakeService, MS::MakeError: Into, MS::Error: Into, Target: Clone, { // the Pin> here is needed since Balance requires the Service to be Unpin balance: Balance>>, Request>, options: Builder, ewma: f64, } impl fmt::Debug for Pool where MS: MakeService + fmt::Debug, MS::MakeError: Into, MS::Error: Into, Target: Clone + fmt::Debug, MS::Service: fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { f.debug_struct("Pool") .field("balance", &self.balance) .field("options", &self.options) .field("ewma", &self.ewma) .finish() } } impl Pool where MS: MakeService, MS::Service: Load, ::Metric: std::fmt::Debug, MS::MakeError: Into, MS::Error: Into, Target: Clone, { /// Construct a new dynamically sized `Pool`. /// /// If many calls to `poll_ready` return `Pending`, `new_service` is used to /// construct another `Service` that is then added to the load-balanced pool. /// If many calls to `poll_ready` succeed, the most recently added `Service` /// is dropped from the pool. pub fn new(make_service: MS, target: Target) -> Self { Builder::new().build(make_service, target) } } type PinBalance = Balance>, Request>; impl Service for Pool where MS: MakeService, MS::Service: Load, ::Metric: std::fmt::Debug, MS::MakeError: Into, MS::Error: Into, Target: Clone, { type Response = , Req> as Service>::Response; type Error = , Req> as Service>::Error; type Future = , Req> as Service>::Future; fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { if let Poll::Ready(()) = self.balance.poll_ready(cx)? { // services was ready -- there are enough services // update ewma with a 0 sample self.ewma *= 1.0 - self.options.alpha; let discover = self.balance.discover_mut().as_mut().project(); if self.ewma < self.options.low { if *discover.load != Level::Low { tracing::trace!({ ewma = %self.ewma }, "pool is over-provisioned"); } *discover.load = Level::Low; if discover.services.len() > 1 { // reset EWMA so we don't immediately try to remove another service self.ewma = self.options.init; } } else { if *discover.load != Level::Normal { tracing::trace!({ ewma = %self.ewma }, "pool is appropriately provisioned"); } *discover.load = Level::Normal; } return Poll::Ready(Ok(())); } let discover = self.balance.discover_mut().as_mut().project(); if discover.making.is_none() { // no services are ready -- we're overloaded // update ewma with a 1 sample self.ewma = self.options.alpha + (1.0 - self.options.alpha) * self.ewma; if self.ewma > self.options.high { if *discover.load != Level::High { tracing::trace!({ ewma = %self.ewma }, "pool is under-provisioned"); } *discover.load = Level::High; // don't reset the EWMA -- in theory, poll_ready should now start returning // `Ready`, so we won't try to launch another service immediately. // we clamp it to high though in case the # of services is limited. self.ewma = self.options.high; // we need to call balance again for PoolDiscover to realize // it can make a new service return self.balance.poll_ready(cx); } else { *discover.load = Level::Normal; } } Poll::Pending } fn call(&mut self, req: Req) -> Self::Future { self.balance.call(req) } } #[doc(hidden)] #[derive(Debug)] pub struct DropNotifyService { svc: Svc, id: usize, notify: tokio::sync::mpsc::UnboundedSender, } impl Drop for DropNotifyService { fn drop(&mut self) { let _ = self.notify.send(self.id).is_ok(); } } impl Load for DropNotifyService { type Metric = Svc::Metric; fn load(&self) -> Self::Metric { self.svc.load() } } impl> Service for DropNotifyService { type Response = Svc::Response; type Future = Svc::Future; type Error = Svc::Error; fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { self.svc.poll_ready(cx) } fn call(&mut self, req: Request) -> Self::Future { self.svc.call(req) } } tower-0.4.13/src/balance/pool/test.rs000064400000000000000000000143330072674642500155570ustar 00000000000000use crate::load; use futures_util::pin_mut; use tokio_test::{assert_pending, assert_ready, assert_ready_ok, task}; use tower_test::{assert_request_eq, mock}; use super::*; #[tokio::test] async fn basic() { // start the pool let (mock, handle) = mock::pair::<(), load::Constant, usize>>(); pin_mut!(handle); let mut pool = mock::Spawn::new(Builder::new().build(mock, ())); assert_pending!(pool.poll_ready()); // give the pool a backing service let (svc1_m, svc1) = mock::pair(); pin_mut!(svc1); assert_request_eq!(handle, ()).send_response(load::Constant::new(svc1_m, 0)); assert_ready_ok!(pool.poll_ready()); // send a request to the one backing service let mut fut = task::spawn(pool.call(())); assert_pending!(fut.poll()); assert_request_eq!(svc1, ()).send_response("foobar"); assert_eq!(assert_ready_ok!(fut.poll()), "foobar"); } #[tokio::test] async fn high_load() { // start the pool let (mock, handle) = mock::pair::<(), load::Constant, usize>>(); pin_mut!(handle); let pool = Builder::new() .urgency(1.0) // so _any_ Pending will add a service .underutilized_below(0.0) // so no Ready will remove a service .max_services(Some(2)) .build(mock, ()); let mut pool = mock::Spawn::new(pool); assert_pending!(pool.poll_ready()); // give the pool a backing service let (svc1_m, svc1) = mock::pair(); pin_mut!(svc1); svc1.allow(1); assert_request_eq!(handle, ()).send_response(load::Constant::new(svc1_m, 0)); assert_ready_ok!(pool.poll_ready()); // make the one backing service not ready let mut fut1 = task::spawn(pool.call(())); // if we poll_ready again, pool should notice that load is increasing // since urgency == 1.0, it should immediately enter high load assert_pending!(pool.poll_ready()); // it should ask the maker for another service, so we give it one let (svc2_m, svc2) = mock::pair(); pin_mut!(svc2); svc2.allow(1); assert_request_eq!(handle, ()).send_response(load::Constant::new(svc2_m, 0)); // the pool should now be ready again for one more request assert_ready_ok!(pool.poll_ready()); let mut fut2 = task::spawn(pool.call(())); assert_pending!(pool.poll_ready()); // the pool should _not_ try to add another service // sicen we have max_services(2) assert_pending!(handle.as_mut().poll_request()); // let see that each service got one request assert_request_eq!(svc1, ()).send_response("foo"); assert_request_eq!(svc2, ()).send_response("bar"); assert_eq!(assert_ready_ok!(fut1.poll()), "foo"); assert_eq!(assert_ready_ok!(fut2.poll()), "bar"); } #[tokio::test] async fn low_load() { // start the pool let (mock, handle) = mock::pair::<(), load::Constant, usize>>(); pin_mut!(handle); let pool = Builder::new() .urgency(1.0) // so any event will change the service count .build(mock, ()); let mut pool = mock::Spawn::new(pool); assert_pending!(pool.poll_ready()); // give the pool a backing service let (svc1_m, svc1) = mock::pair(); pin_mut!(svc1); svc1.allow(1); assert_request_eq!(handle, ()).send_response(load::Constant::new(svc1_m, 0)); assert_ready_ok!(pool.poll_ready()); // cycling a request should now work let mut fut = task::spawn(pool.call(())); assert_request_eq!(svc1, ()).send_response("foo"); assert_eq!(assert_ready_ok!(fut.poll()), "foo"); // and pool should now not be ready (since svc1 isn't ready) // it should immediately try to add another service // which we give it assert_pending!(pool.poll_ready()); let (svc2_m, svc2) = mock::pair(); pin_mut!(svc2); svc2.allow(1); assert_request_eq!(handle, ()).send_response(load::Constant::new(svc2_m, 0)); // pool is now ready // which (because of urgency == 1.0) should immediately cause it to drop a service // it'll drop svc1, so it'll still be ready assert_ready_ok!(pool.poll_ready()); // and even with another ready, it won't drop svc2 since its now the only service assert_ready_ok!(pool.poll_ready()); // cycling a request should now work on svc2 let mut fut = task::spawn(pool.call(())); assert_request_eq!(svc2, ()).send_response("foo"); assert_eq!(assert_ready_ok!(fut.poll()), "foo"); // and again (still svc2) svc2.allow(1); assert_ready_ok!(pool.poll_ready()); let mut fut = task::spawn(pool.call(())); assert_request_eq!(svc2, ()).send_response("foo"); assert_eq!(assert_ready_ok!(fut.poll()), "foo"); } #[tokio::test] async fn failing_service() { // start the pool let (mock, handle) = mock::pair::<(), load::Constant, usize>>(); pin_mut!(handle); let pool = Builder::new() .urgency(1.0) // so _any_ Pending will add a service .underutilized_below(0.0) // so no Ready will remove a service .build(mock, ()); let mut pool = mock::Spawn::new(pool); assert_pending!(pool.poll_ready()); // give the pool a backing service let (svc1_m, svc1) = mock::pair(); pin_mut!(svc1); svc1.allow(1); assert_request_eq!(handle, ()).send_response(load::Constant::new(svc1_m, 0)); assert_ready_ok!(pool.poll_ready()); // one request-response cycle let mut fut = task::spawn(pool.call(())); assert_request_eq!(svc1, ()).send_response("foo"); assert_eq!(assert_ready_ok!(fut.poll()), "foo"); // now make svc1 fail, so it has to be removed svc1.send_error("ouch"); // polling now should recognize the failed service, // try to create a new one, and then realize the maker isn't ready assert_pending!(pool.poll_ready()); // then we release another service let (svc2_m, svc2) = mock::pair(); pin_mut!(svc2); svc2.allow(1); assert_request_eq!(handle, ()).send_response(load::Constant::new(svc2_m, 0)); // the pool should now be ready again assert_ready_ok!(pool.poll_ready()); // and a cycle should work (and go through svc2) let mut fut = task::spawn(pool.call(())); assert_request_eq!(svc2, ()).send_response("bar"); assert_eq!(assert_ready_ok!(fut.poll()), "bar"); } tower-0.4.13/src/buffer/error.rs000064400000000000000000000030400072674642500146350ustar 00000000000000//! Error types for the `Buffer` middleware. use crate::BoxError; use std::{fmt, sync::Arc}; /// An error produced by a [`Service`] wrapped by a [`Buffer`] /// /// [`Service`]: crate::Service /// [`Buffer`]: crate::buffer::Buffer #[derive(Debug)] pub struct ServiceError { inner: Arc, } /// An error produced when the a buffer's worker closes unexpectedly. pub struct Closed { _p: (), } // ===== impl ServiceError ===== impl ServiceError { pub(crate) fn new(inner: BoxError) -> ServiceError { let inner = Arc::new(inner); ServiceError { inner } } // Private to avoid exposing `Clone` trait as part of the public API pub(crate) fn clone(&self) -> ServiceError { ServiceError { inner: self.inner.clone(), } } } impl fmt::Display for ServiceError { fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result { write!(fmt, "buffered service failed: {}", self.inner) } } impl std::error::Error for ServiceError { fn source(&self) -> Option<&(dyn std::error::Error + 'static)> { Some(&**self.inner) } } // ===== impl Closed ===== impl Closed { pub(crate) fn new() -> Self { Closed { _p: () } } } impl fmt::Debug for Closed { fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result { fmt.debug_tuple("Closed").finish() } } impl fmt::Display for Closed { fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result { fmt.write_str("buffer's worker closed unexpectedly") } } impl std::error::Error for Closed {} tower-0.4.13/src/buffer/future.rs000064400000000000000000000040650072674642500150260ustar 00000000000000//! Future types for the [`Buffer`] middleware. //! //! [`Buffer`]: crate::buffer::Buffer use super::{error::Closed, message}; use futures_core::ready; use pin_project_lite::pin_project; use std::{ future::Future, pin::Pin, task::{Context, Poll}, }; pin_project! { /// Future that completes when the buffered service eventually services the submitted request. #[derive(Debug)] pub struct ResponseFuture { #[pin] state: ResponseState, } } pin_project! { #[project = ResponseStateProj] #[derive(Debug)] enum ResponseState { Failed { error: Option, }, Rx { #[pin] rx: message::Rx, }, Poll { #[pin] fut: T, }, } } impl ResponseFuture { pub(crate) fn new(rx: message::Rx) -> Self { ResponseFuture { state: ResponseState::Rx { rx }, } } pub(crate) fn failed(err: crate::BoxError) -> Self { ResponseFuture { state: ResponseState::Failed { error: Some(err) }, } } } impl Future for ResponseFuture where F: Future>, E: Into, { type Output = Result; fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { let mut this = self.project(); loop { match this.state.as_mut().project() { ResponseStateProj::Failed { error } => { return Poll::Ready(Err(error.take().expect("polled after error"))); } ResponseStateProj::Rx { rx } => match ready!(rx.poll(cx)) { Ok(Ok(fut)) => this.state.set(ResponseState::Poll { fut }), Ok(Err(e)) => return Poll::Ready(Err(e.into())), Err(_) => return Poll::Ready(Err(Closed::new().into())), }, ResponseStateProj::Poll { fut } => return fut.poll(cx).map_err(Into::into), } } } } tower-0.4.13/src/buffer/layer.rs000064400000000000000000000045630072674642500146330ustar 00000000000000use super::service::Buffer; use std::{fmt, marker::PhantomData}; use tower_layer::Layer; use tower_service::Service; /// Adds an mpsc buffer in front of an inner service. /// /// The default Tokio executor is used to run the given service, /// which means that this layer can only be used on the Tokio runtime. /// /// See the module documentation for more details. pub struct BufferLayer { bound: usize, _p: PhantomData, } impl BufferLayer { /// Creates a new [`BufferLayer`] with the provided `bound`. /// /// `bound` gives the maximal number of requests that can be queued for the service before /// backpressure is applied to callers. /// /// # A note on choosing a `bound` /// /// When [`Buffer`]'s implementation of [`poll_ready`] returns [`Poll::Ready`], it reserves a /// slot in the channel for the forthcoming [`call`]. However, if this call doesn't arrive, /// this reserved slot may be held up for a long time. As a result, it's advisable to set /// `bound` to be at least the maximum number of concurrent requests the [`Buffer`] will see. /// If you do not, all the slots in the buffer may be held up by futures that have just called /// [`poll_ready`] but will not issue a [`call`], which prevents other senders from issuing new /// requests. /// /// [`Poll::Ready`]: std::task::Poll::Ready /// [`call`]: crate::Service::call /// [`poll_ready`]: crate::Service::poll_ready pub fn new(bound: usize) -> Self { BufferLayer { bound, _p: PhantomData, } } } impl Layer for BufferLayer where S: Service + Send + 'static, S::Future: Send, S::Error: Into + Send + Sync, Request: Send + 'static, { type Service = Buffer; fn layer(&self, service: S) -> Self::Service { Buffer::new(service, self.bound) } } impl fmt::Debug for BufferLayer { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { f.debug_struct("BufferLayer") .field("bound", &self.bound) .finish() } } impl Clone for BufferLayer { fn clone(&self) -> Self { Self { bound: self.bound, _p: PhantomData, } } } impl Copy for BufferLayer {} tower-0.4.13/src/buffer/message.rs000064400000000000000000000007660072674642500151440ustar 00000000000000use super::error::ServiceError; use tokio::sync::{oneshot, OwnedSemaphorePermit}; /// Message sent over buffer #[derive(Debug)] pub(crate) struct Message { pub(crate) request: Request, pub(crate) tx: Tx, pub(crate) span: tracing::Span, pub(super) _permit: OwnedSemaphorePermit, } /// Response sender pub(crate) type Tx = oneshot::Sender>; /// Response receiver pub(crate) type Rx = oneshot::Receiver>; tower-0.4.13/src/buffer/mod.rs000064400000000000000000000030620072674642500142670ustar 00000000000000//! Middleware that provides a buffered mpsc channel to a service. //! //! Sometimes you want to give out multiple handles to a single service, and allow each handle to //! enqueue requests. That is, you want a [`Service`] to be [`Clone`]. This module allows you to do //! that by placing the service behind a multi-producer, single-consumer buffering channel. Clients //! enqueue requests by sending on the channel from any of the handles ([`Buffer`]), and the single //! service running elsewhere (usually spawned) receives and services the requests one by one. Each //! request is enqueued alongside a response channel that allows the service to report the result //! of the request back to the caller. //! //! # Examples //! //! ```rust //! # #[cfg(feature = "util")] //! use tower::buffer::Buffer; //! # #[cfg(feature = "util")] //! use tower::{Service, ServiceExt}; //! # #[cfg(feature = "util")] //! async fn mass_produce>(svc: S) //! where //! S: 'static + Send, //! S::Error: Send + Sync + std::error::Error, //! S::Future: Send //! { //! let svc = Buffer::new(svc, 10 /* buffer length */); //! for _ in 0..10 { //! let mut svc = svc.clone(); //! tokio::spawn(async move { //! for i in 0usize.. { //! svc.ready().await.expect("service crashed").call(i).await; //! } //! }); //! } //! } //! ``` //! //! [`Service`]: crate::Service pub mod error; pub mod future; mod layer; mod message; mod service; mod worker; pub use self::layer::BufferLayer; pub use self::service::Buffer; tower-0.4.13/src/buffer/service.rs000064400000000000000000000150360072674642500151540ustar 00000000000000use super::{ future::ResponseFuture, message::Message, worker::{Handle, Worker}, }; use futures_core::ready; use std::sync::Arc; use std::task::{Context, Poll}; use tokio::sync::{mpsc, oneshot, OwnedSemaphorePermit, Semaphore}; use tokio_util::sync::PollSemaphore; use tower_service::Service; /// Adds an mpsc buffer in front of an inner service. /// /// See the module documentation for more details. #[derive(Debug)] pub struct Buffer where T: Service, { // Note: this actually _is_ bounded, but rather than using Tokio's bounded // channel, we use Tokio's semaphore separately to implement the bound. tx: mpsc::UnboundedSender>, // When the buffer's channel is full, we want to exert backpressure in // `poll_ready`, so that callers such as load balancers could choose to call // another service rather than waiting for buffer capacity. // // Unfortunately, this can't be done easily using Tokio's bounded MPSC // channel, because it doesn't expose a polling-based interface, only an // `async fn ready`, which borrows the sender. Therefore, we implement our // own bounded MPSC on top of the unbounded channel, using a semaphore to // limit how many items are in the channel. semaphore: PollSemaphore, // The current semaphore permit, if one has been acquired. // // This is acquired in `poll_ready` and taken in `call`. permit: Option, handle: Handle, } impl Buffer where T: Service, T::Error: Into, { /// Creates a new [`Buffer`] wrapping `service`. /// /// `bound` gives the maximal number of requests that can be queued for the service before /// backpressure is applied to callers. /// /// The default Tokio executor is used to run the given service, which means that this method /// must be called while on the Tokio runtime. /// /// # A note on choosing a `bound` /// /// When [`Buffer`]'s implementation of [`poll_ready`] returns [`Poll::Ready`], it reserves a /// slot in the channel for the forthcoming [`call`]. However, if this call doesn't arrive, /// this reserved slot may be held up for a long time. As a result, it's advisable to set /// `bound` to be at least the maximum number of concurrent requests the [`Buffer`] will see. /// If you do not, all the slots in the buffer may be held up by futures that have just called /// [`poll_ready`] but will not issue a [`call`], which prevents other senders from issuing new /// requests. /// /// [`Poll::Ready`]: std::task::Poll::Ready /// [`call`]: crate::Service::call /// [`poll_ready`]: crate::Service::poll_ready pub fn new(service: T, bound: usize) -> Self where T: Send + 'static, T::Future: Send, T::Error: Send + Sync, Request: Send + 'static, { let (service, worker) = Self::pair(service, bound); tokio::spawn(worker); service } /// Creates a new [`Buffer`] wrapping `service`, but returns the background worker. /// /// This is useful if you do not want to spawn directly onto the tokio runtime /// but instead want to use your own executor. This will return the [`Buffer`] and /// the background `Worker` that you can then spawn. pub fn pair(service: T, bound: usize) -> (Buffer, Worker) where T: Send + 'static, T::Error: Send + Sync, Request: Send + 'static, { let (tx, rx) = mpsc::unbounded_channel(); let semaphore = Arc::new(Semaphore::new(bound)); let (handle, worker) = Worker::new(service, rx, &semaphore); let buffer = Buffer { tx, handle, semaphore: PollSemaphore::new(semaphore), permit: None, }; (buffer, worker) } fn get_worker_error(&self) -> crate::BoxError { self.handle.get_error_on_closed() } } impl Service for Buffer where T: Service, T::Error: Into, { type Response = T::Response; type Error = crate::BoxError; type Future = ResponseFuture; fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { // First, check if the worker is still alive. if self.tx.is_closed() { // If the inner service has errored, then we error here. return Poll::Ready(Err(self.get_worker_error())); } // Then, check if we've already acquired a permit. if self.permit.is_some() { // We've already reserved capacity to send a request. We're ready! return Poll::Ready(Ok(())); } // Finally, if we haven't already acquired a permit, poll the semaphore // to acquire one. If we acquire a permit, then there's enough buffer // capacity to send a new request. Otherwise, we need to wait for // capacity. let permit = ready!(self.semaphore.poll_acquire(cx)).ok_or_else(|| self.get_worker_error())?; self.permit = Some(permit); Poll::Ready(Ok(())) } fn call(&mut self, request: Request) -> Self::Future { tracing::trace!("sending request to buffer worker"); let _permit = self .permit .take() .expect("buffer full; poll_ready must be called first"); // get the current Span so that we can explicitly propagate it to the worker // if we didn't do this, events on the worker related to this span wouldn't be counted // towards that span since the worker would have no way of entering it. let span = tracing::Span::current(); // If we've made it here, then a semaphore permit has already been // acquired, so we can freely allocate a oneshot. let (tx, rx) = oneshot::channel(); match self.tx.send(Message { request, span, tx, _permit, }) { Err(_) => ResponseFuture::failed(self.get_worker_error()), Ok(_) => ResponseFuture::new(rx), } } } impl Clone for Buffer where T: Service, { fn clone(&self) -> Self { Self { tx: self.tx.clone(), handle: self.handle.clone(), semaphore: self.semaphore.clone(), // The new clone hasn't acquired a permit yet. It will when it's // next polled ready. permit: None, } } } tower-0.4.13/src/buffer/worker.rs000064400000000000000000000221050072674642500150200ustar 00000000000000use super::{ error::{Closed, ServiceError}, message::Message, }; use futures_core::ready; use std::sync::{Arc, Mutex, Weak}; use std::{ future::Future, pin::Pin, task::{Context, Poll}, }; use tokio::sync::{mpsc, Semaphore}; use tower_service::Service; pin_project_lite::pin_project! { /// Task that handles processing the buffer. This type should not be used /// directly, instead `Buffer` requires an `Executor` that can accept this task. /// /// The struct is `pub` in the private module and the type is *not* re-exported /// as part of the public API. This is the "sealed" pattern to include "private" /// types in public traits that are not meant for consumers of the library to /// implement (only call). #[derive(Debug)] pub struct Worker where T: Service, { current_message: Option>, rx: mpsc::UnboundedReceiver>, service: T, finish: bool, failed: Option, handle: Handle, close: Option>, } impl, Request> PinnedDrop for Worker { fn drop(mut this: Pin<&mut Self>) { this.as_mut().close_semaphore(); } } } /// Get the error out #[derive(Debug)] pub(crate) struct Handle { inner: Arc>>, } impl Worker where T: Service, { /// Closes the buffer's semaphore if it is still open, waking any pending /// tasks. fn close_semaphore(&mut self) { if let Some(close) = self.close.take().as_ref().and_then(Weak::upgrade) { tracing::debug!("buffer closing; waking pending tasks"); close.close(); } else { tracing::trace!("buffer already closed"); } } } impl Worker where T: Service, T::Error: Into, { pub(crate) fn new( service: T, rx: mpsc::UnboundedReceiver>, semaphore: &Arc, ) -> (Handle, Worker) { let handle = Handle { inner: Arc::new(Mutex::new(None)), }; let semaphore = Arc::downgrade(semaphore); let worker = Worker { current_message: None, finish: false, failed: None, rx, service, handle: handle.clone(), close: Some(semaphore), }; (handle, worker) } /// Return the next queued Message that hasn't been canceled. /// /// If a `Message` is returned, the `bool` is true if this is the first time we received this /// message, and false otherwise (i.e., we tried to forward it to the backing service before). fn poll_next_msg( &mut self, cx: &mut Context<'_>, ) -> Poll, bool)>> { if self.finish { // We've already received None and are shutting down return Poll::Ready(None); } tracing::trace!("worker polling for next message"); if let Some(msg) = self.current_message.take() { // If the oneshot sender is closed, then the receiver is dropped, // and nobody cares about the response. If this is the case, we // should continue to the next request. if !msg.tx.is_closed() { tracing::trace!("resuming buffered request"); return Poll::Ready(Some((msg, false))); } tracing::trace!("dropping cancelled buffered request"); } // Get the next request while let Some(msg) = ready!(Pin::new(&mut self.rx).poll_recv(cx)) { if !msg.tx.is_closed() { tracing::trace!("processing new request"); return Poll::Ready(Some((msg, true))); } // Otherwise, request is canceled, so pop the next one. tracing::trace!("dropping cancelled request"); } Poll::Ready(None) } fn failed(&mut self, error: crate::BoxError) { // The underlying service failed when we called `poll_ready` on it with the given `error`. We // need to communicate this to all the `Buffer` handles. To do so, we wrap up the error in // an `Arc`, send that `Arc` to all pending requests, and store it so that subsequent // requests will also fail with the same error. // Note that we need to handle the case where some handle is concurrently trying to send us // a request. We need to make sure that *either* the send of the request fails *or* it // receives an error on the `oneshot` it constructed. Specifically, we want to avoid the // case where we send errors to all outstanding requests, and *then* the caller sends its // request. We do this by *first* exposing the error, *then* closing the channel used to // send more requests (so the client will see the error when the send fails), and *then* // sending the error to all outstanding requests. let error = ServiceError::new(error); let mut inner = self.handle.inner.lock().unwrap(); if inner.is_some() { // Future::poll was called after we've already errored out! return; } *inner = Some(error.clone()); drop(inner); self.rx.close(); // By closing the mpsc::Receiver, we know that poll_next_msg will soon return Ready(None), // which will trigger the `self.finish == true` phase. We just need to make sure that any // requests that we receive before we've exhausted the receiver receive the error: self.failed = Some(error); } } impl Future for Worker where T: Service, T::Error: Into, { type Output = (); fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { if self.finish { return Poll::Ready(()); } loop { match ready!(self.poll_next_msg(cx)) { Some((msg, first)) => { let _guard = msg.span.enter(); if let Some(ref failed) = self.failed { tracing::trace!("notifying caller about worker failure"); let _ = msg.tx.send(Err(failed.clone())); continue; } // Wait for the service to be ready tracing::trace!( resumed = !first, message = "worker received request; waiting for service readiness" ); match self.service.poll_ready(cx) { Poll::Ready(Ok(())) => { tracing::debug!(service.ready = true, message = "processing request"); let response = self.service.call(msg.request); // Send the response future back to the sender. // // An error means the request had been canceled in-between // our calls, the response future will just be dropped. tracing::trace!("returning response future"); let _ = msg.tx.send(Ok(response)); } Poll::Pending => { tracing::trace!(service.ready = false, message = "delay"); // Put out current message back in its slot. drop(_guard); self.current_message = Some(msg); return Poll::Pending; } Poll::Ready(Err(e)) => { let error = e.into(); tracing::debug!({ %error }, "service failed"); drop(_guard); self.failed(error); let _ = msg.tx.send(Err(self .failed .as_ref() .expect("Worker::failed did not set self.failed?") .clone())); // Wake any tasks waiting on channel capacity. self.close_semaphore(); } } } None => { // No more more requests _ever_. self.finish = true; return Poll::Ready(()); } } } } } impl Handle { pub(crate) fn get_error_on_closed(&self) -> crate::BoxError { self.inner .lock() .unwrap() .as_ref() .map(|svc_err| svc_err.clone().into()) .unwrap_or_else(|| Closed::new().into()) } } impl Clone for Handle { fn clone(&self) -> Handle { Handle { inner: self.inner.clone(), } } } tower-0.4.13/src/builder/mod.rs000064400000000000000000000662040072674642500144530ustar 00000000000000//! Builder types to compose layers and services use tower_layer::{Identity, Layer, Stack}; use tower_service::Service; use std::fmt; /// Declaratively construct [`Service`] values. /// /// [`ServiceBuilder`] provides a [builder-like interface][builder] for composing /// layers to be applied to a [`Service`]. /// /// # Service /// /// A [`Service`] is a trait representing an asynchronous function of a request /// to a response. It is similar to `async fn(Request) -> Result`. /// /// A [`Service`] is typically bound to a single transport, such as a TCP /// connection. It defines how _all_ inbound or outbound requests are handled /// by that connection. /// /// [builder]: https://doc.rust-lang.org/1.0.0/style/ownership/builders.html /// /// # Order /// /// The order in which layers are added impacts how requests are handled. Layers /// that are added first will be called with the request first. The argument to /// `service` will be last to see the request. /// /// ``` /// # // this (and other) doctest is ignored because we don't have a way /// # // to say that it should only be run with cfg(feature = "...") /// # use tower::Service; /// # use tower::builder::ServiceBuilder; /// # #[cfg(all(feature = "buffer", feature = "limit"))] /// # async fn wrap(svc: S) where S: Service<(), Error = &'static str> + 'static + Send, S::Future: Send { /// ServiceBuilder::new() /// .buffer(100) /// .concurrency_limit(10) /// .service(svc) /// # ; /// # } /// ``` /// /// In the above example, the buffer layer receives the request first followed /// by `concurrency_limit`. `buffer` enables up to 100 request to be in-flight /// **on top of** the requests that have already been forwarded to the next /// layer. Combined with `concurrency_limit`, this allows up to 110 requests to be /// in-flight. /// /// ``` /// # use tower::Service; /// # use tower::builder::ServiceBuilder; /// # #[cfg(all(feature = "buffer", feature = "limit"))] /// # async fn wrap(svc: S) where S: Service<(), Error = &'static str> + 'static + Send, S::Future: Send { /// ServiceBuilder::new() /// .concurrency_limit(10) /// .buffer(100) /// .service(svc) /// # ; /// # } /// ``` /// /// The above example is similar, but the order of layers is reversed. Now, /// `concurrency_limit` applies first and only allows 10 requests to be in-flight /// total. /// /// # Examples /// /// A [`Service`] stack with a single layer: /// /// ``` /// # use tower::Service; /// # use tower::builder::ServiceBuilder; /// # #[cfg(feature = "limit")] /// # use tower::limit::concurrency::ConcurrencyLimitLayer; /// # #[cfg(feature = "limit")] /// # async fn wrap(svc: S) where S: Service<(), Error = &'static str> + 'static + Send, S::Future: Send { /// ServiceBuilder::new() /// .concurrency_limit(5) /// .service(svc); /// # ; /// # } /// ``` /// /// A [`Service`] stack with _multiple_ layers that contain rate limiting, /// in-flight request limits, and a channel-backed, clonable [`Service`]: /// /// ``` /// # use tower::Service; /// # use tower::builder::ServiceBuilder; /// # use std::time::Duration; /// # #[cfg(all(feature = "buffer", feature = "limit"))] /// # async fn wrap(svc: S) where S: Service<(), Error = &'static str> + 'static + Send, S::Future: Send { /// ServiceBuilder::new() /// .buffer(5) /// .concurrency_limit(5) /// .rate_limit(5, Duration::from_secs(1)) /// .service(svc); /// # ; /// # } /// ``` /// /// [`Service`]: crate::Service #[derive(Clone)] pub struct ServiceBuilder { layer: L, } impl Default for ServiceBuilder { fn default() -> Self { Self::new() } } impl ServiceBuilder { /// Create a new [`ServiceBuilder`]. pub fn new() -> Self { ServiceBuilder { layer: Identity::new(), } } } impl ServiceBuilder { /// Add a new layer `T` into the [`ServiceBuilder`]. /// /// This wraps the inner service with the service provided by a user-defined /// [`Layer`]. The provided layer must implement the [`Layer`] trait. /// /// [`Layer`]: crate::Layer pub fn layer(self, layer: T) -> ServiceBuilder> { ServiceBuilder { layer: Stack::new(layer, self.layer), } } /// Optionally add a new layer `T` into the [`ServiceBuilder`]. /// /// ``` /// # use std::time::Duration; /// # use tower::Service; /// # use tower::builder::ServiceBuilder; /// # use tower::timeout::TimeoutLayer; /// # async fn wrap(svc: S) where S: Service<(), Error = &'static str> + 'static + Send, S::Future: Send { /// # let timeout = Some(Duration::new(10, 0)); /// // Apply a timeout if configured /// ServiceBuilder::new() /// .option_layer(timeout.map(TimeoutLayer::new)) /// .service(svc) /// # ; /// # } /// ``` #[cfg(feature = "util")] #[cfg_attr(docsrs, doc(cfg(feature = "util")))] pub fn option_layer( self, layer: Option, ) -> ServiceBuilder, L>> { self.layer(crate::util::option_layer(layer)) } /// Add a [`Layer`] built from a function that accepts a service and returns another service. /// /// See the documentation for [`layer_fn`] for more details. /// /// [`layer_fn`]: crate::layer::layer_fn pub fn layer_fn(self, f: F) -> ServiceBuilder, L>> { self.layer(crate::layer::layer_fn(f)) } /// Buffer requests when the next layer is not ready. /// /// This wraps the inner service with an instance of the [`Buffer`] /// middleware. /// /// [`Buffer`]: crate::buffer #[cfg(feature = "buffer")] #[cfg_attr(docsrs, doc(cfg(feature = "buffer")))] pub fn buffer( self, bound: usize, ) -> ServiceBuilder, L>> { self.layer(crate::buffer::BufferLayer::new(bound)) } /// Limit the max number of in-flight requests. /// /// A request is in-flight from the time the request is received until the /// response future completes. This includes the time spent in the next /// layers. /// /// This wraps the inner service with an instance of the /// [`ConcurrencyLimit`] middleware. /// /// [`ConcurrencyLimit`]: crate::limit::concurrency #[cfg(feature = "limit")] #[cfg_attr(docsrs, doc(cfg(feature = "limit")))] pub fn concurrency_limit( self, max: usize, ) -> ServiceBuilder> { self.layer(crate::limit::ConcurrencyLimitLayer::new(max)) } /// Drop requests when the next layer is unable to respond to requests. /// /// Usually, when a service or middleware does not have capacity to process a /// request (i.e., [`poll_ready`] returns [`Pending`]), the caller waits until /// capacity becomes available. /// /// [`LoadShed`] immediately responds with an error when the next layer is /// out of capacity. /// /// This wraps the inner service with an instance of the [`LoadShed`] /// middleware. /// /// [`LoadShed`]: crate::load_shed /// [`poll_ready`]: crate::Service::poll_ready /// [`Pending`]: std::task::Poll::Pending #[cfg(feature = "load-shed")] #[cfg_attr(docsrs, doc(cfg(feature = "load-shed")))] pub fn load_shed(self) -> ServiceBuilder> { self.layer(crate::load_shed::LoadShedLayer::new()) } /// Limit requests to at most `num` per the given duration. /// /// This wraps the inner service with an instance of the [`RateLimit`] /// middleware. /// /// [`RateLimit`]: crate::limit::rate #[cfg(feature = "limit")] #[cfg_attr(docsrs, doc(cfg(feature = "limit")))] pub fn rate_limit( self, num: u64, per: std::time::Duration, ) -> ServiceBuilder> { self.layer(crate::limit::RateLimitLayer::new(num, per)) } /// Retry failed requests according to the given [retry policy][policy]. /// /// `policy` determines which failed requests will be retried. It must /// implement the [`retry::Policy`][policy] trait. /// /// This wraps the inner service with an instance of the [`Retry`] /// middleware. /// /// [`Retry`]: crate::retry /// [policy]: crate::retry::Policy #[cfg(feature = "retry")] #[cfg_attr(docsrs, doc(cfg(feature = "retry")))] pub fn retry

(self, policy: P) -> ServiceBuilder, L>> { self.layer(crate::retry::RetryLayer::new(policy)) } /// Fail requests that take longer than `timeout`. /// /// If the next layer takes more than `timeout` to respond to a request, /// processing is terminated and an error is returned. /// /// This wraps the inner service with an instance of the [`timeout`] /// middleware. /// /// [`timeout`]: crate::timeout #[cfg(feature = "timeout")] #[cfg_attr(docsrs, doc(cfg(feature = "timeout")))] pub fn timeout( self, timeout: std::time::Duration, ) -> ServiceBuilder> { self.layer(crate::timeout::TimeoutLayer::new(timeout)) } /// Conditionally reject requests based on `predicate`. /// /// `predicate` must implement the [`Predicate`] trait. /// /// This wraps the inner service with an instance of the [`Filter`] /// middleware. /// /// [`Filter`]: crate::filter /// [`Predicate`]: crate::filter::Predicate #[cfg(feature = "filter")] #[cfg_attr(docsrs, doc(cfg(feature = "filter")))] pub fn filter

( self, predicate: P, ) -> ServiceBuilder, L>> { self.layer(crate::filter::FilterLayer::new(predicate)) } /// Conditionally reject requests based on an asynchronous `predicate`. /// /// `predicate` must implement the [`AsyncPredicate`] trait. /// /// This wraps the inner service with an instance of the [`AsyncFilter`] /// middleware. /// /// [`AsyncFilter`]: crate::filter::AsyncFilter /// [`AsyncPredicate`]: crate::filter::AsyncPredicate #[cfg(feature = "filter")] #[cfg_attr(docsrs, doc(cfg(feature = "filter")))] pub fn filter_async

( self, predicate: P, ) -> ServiceBuilder, L>> { self.layer(crate::filter::AsyncFilterLayer::new(predicate)) } /// Map one request type to another. /// /// This wraps the inner service with an instance of the [`MapRequest`] /// middleware. /// /// # Examples /// /// Changing the type of a request: /// /// ```rust /// use tower::ServiceBuilder; /// use tower::ServiceExt; /// /// # #[tokio::main] /// # async fn main() -> Result<(), ()> { /// // Suppose we have some `Service` whose request type is `String`: /// let string_svc = tower::service_fn(|request: String| async move { /// println!("request: {}", request); /// Ok(()) /// }); /// /// // ...but we want to call that service with a `usize`. What do we do? /// /// let usize_svc = ServiceBuilder::new() /// // Add a middlware that converts the request type to a `String`: /// .map_request(|request: usize| format!("{}", request)) /// // ...and wrap the string service with that middleware: /// .service(string_svc); /// /// // Now, we can call that service with a `usize`: /// usize_svc.oneshot(42).await?; /// # Ok(()) /// # } /// ``` /// /// Modifying the request value: /// /// ```rust /// use tower::ServiceBuilder; /// use tower::ServiceExt; /// /// # #[tokio::main] /// # async fn main() -> Result<(), ()> { /// // A service that takes a number and returns it: /// let svc = tower::service_fn(|request: usize| async move { /// Ok(request) /// }); /// /// let svc = ServiceBuilder::new() /// // Add a middleware that adds 1 to each request /// .map_request(|request: usize| request + 1) /// .service(svc); /// /// let response = svc.oneshot(1).await?; /// assert_eq!(response, 2); /// # Ok(()) /// # } /// ``` /// /// [`MapRequest`]: crate::util::MapRequest #[cfg(feature = "util")] #[cfg_attr(docsrs, doc(cfg(feature = "util")))] pub fn map_request( self, f: F, ) -> ServiceBuilder, L>> where F: FnMut(R1) -> R2 + Clone, { self.layer(crate::util::MapRequestLayer::new(f)) } /// Map one response type to another. /// /// This wraps the inner service with an instance of the [`MapResponse`] /// middleware. /// /// See the documentation for the [`map_response` combinator] for details. /// /// [`MapResponse`]: crate::util::MapResponse /// [`map_response` combinator]: crate::util::ServiceExt::map_response #[cfg(feature = "util")] #[cfg_attr(docsrs, doc(cfg(feature = "util")))] pub fn map_response( self, f: F, ) -> ServiceBuilder, L>> { self.layer(crate::util::MapResponseLayer::new(f)) } /// Map one error type to another. /// /// This wraps the inner service with an instance of the [`MapErr`] /// middleware. /// /// See the documentation for the [`map_err` combinator] for details. /// /// [`MapErr`]: crate::util::MapErr /// [`map_err` combinator]: crate::util::ServiceExt::map_err #[cfg(feature = "util")] #[cfg_attr(docsrs, doc(cfg(feature = "util")))] pub fn map_err(self, f: F) -> ServiceBuilder, L>> { self.layer(crate::util::MapErrLayer::new(f)) } /// Composes a function that transforms futures produced by the service. /// /// This wraps the inner service with an instance of the [`MapFutureLayer`] middleware. /// /// See the documentation for the [`map_future`] combinator for details. /// /// [`MapFutureLayer`]: crate::util::MapFutureLayer /// [`map_future`]: crate::util::ServiceExt::map_future #[cfg(feature = "util")] #[cfg_attr(docsrs, doc(cfg(feature = "util")))] pub fn map_future(self, f: F) -> ServiceBuilder, L>> { self.layer(crate::util::MapFutureLayer::new(f)) } /// Apply an asynchronous function after the service, regardless of whether the future /// succeeds or fails. /// /// This wraps the inner service with an instance of the [`Then`] /// middleware. /// /// This is similar to the [`map_response`] and [`map_err`] functions, /// except that the *same* function is invoked when the service's future /// completes, whether it completes successfully or fails. This function /// takes the [`Result`] returned by the service's future, and returns a /// [`Result`]. /// /// See the documentation for the [`then` combinator] for details. /// /// [`Then`]: crate::util::Then /// [`then` combinator]: crate::util::ServiceExt::then /// [`map_response`]: ServiceBuilder::map_response /// [`map_err`]: ServiceBuilder::map_err #[cfg(feature = "util")] #[cfg_attr(docsrs, doc(cfg(feature = "util")))] pub fn then(self, f: F) -> ServiceBuilder, L>> { self.layer(crate::util::ThenLayer::new(f)) } /// Executes a new future after this service's future resolves. This does /// not alter the behaviour of the [`poll_ready`] method. /// /// This method can be used to change the [`Response`] type of the service /// into a different type. You can use this method to chain along a computation once the /// service's response has been resolved. /// /// This wraps the inner service with an instance of the [`AndThen`] /// middleware. /// /// See the documentation for the [`and_then` combinator] for details. /// /// [`Response`]: crate::Service::Response /// [`poll_ready`]: crate::Service::poll_ready /// [`and_then` combinator]: crate::util::ServiceExt::and_then /// [`AndThen`]: crate::util::AndThen #[cfg(feature = "util")] #[cfg_attr(docsrs, doc(cfg(feature = "util")))] pub fn and_then(self, f: F) -> ServiceBuilder, L>> { self.layer(crate::util::AndThenLayer::new(f)) } /// Maps this service's result type (`Result`) /// to a different value, regardless of whether the future succeeds or /// fails. /// /// This wraps the inner service with an instance of the [`MapResult`] /// middleware. /// /// See the documentation for the [`map_result` combinator] for details. /// /// [`map_result` combinator]: crate::util::ServiceExt::map_result /// [`MapResult`]: crate::util::MapResult #[cfg(feature = "util")] #[cfg_attr(docsrs, doc(cfg(feature = "util")))] pub fn map_result(self, f: F) -> ServiceBuilder, L>> { self.layer(crate::util::MapResultLayer::new(f)) } /// Returns the underlying `Layer` implementation. pub fn into_inner(self) -> L { self.layer } /// Wrap the service `S` with the middleware provided by this /// [`ServiceBuilder`]'s [`Layer`]'s, returning a new [`Service`]. /// /// [`Layer`]: crate::Layer /// [`Service`]: crate::Service pub fn service(&self, service: S) -> L::Service where L: Layer, { self.layer.layer(service) } /// Wrap the async function `F` with the middleware provided by this [`ServiceBuilder`]'s /// [`Layer`]s, returning a new [`Service`]. /// /// This is a convenience method which is equivalent to calling /// [`ServiceBuilder::service`] with a [`service_fn`], like this: /// /// ```rust /// # use tower::{ServiceBuilder, service_fn}; /// # async fn handler_fn(_: ()) -> Result<(), ()> { Ok(()) } /// # let _ = { /// ServiceBuilder::new() /// // ... /// .service(service_fn(handler_fn)) /// # }; /// ``` /// /// # Example /// /// ```rust /// use std::time::Duration; /// use tower::{ServiceBuilder, ServiceExt, BoxError, service_fn}; /// /// # #[tokio::main] /// # async fn main() -> Result<(), BoxError> { /// async fn handle(request: &'static str) -> Result<&'static str, BoxError> { /// Ok(request) /// } /// /// let svc = ServiceBuilder::new() /// .buffer(1024) /// .timeout(Duration::from_secs(10)) /// .service_fn(handle); /// /// let response = svc.oneshot("foo").await?; /// /// assert_eq!(response, "foo"); /// # Ok(()) /// # } /// ``` /// /// [`Layer`]: crate::Layer /// [`Service`]: crate::Service /// [`service_fn`]: crate::service_fn #[cfg(feature = "util")] #[cfg_attr(docsrs, doc(cfg(feature = "util")))] pub fn service_fn(self, f: F) -> L::Service where L: Layer>, { self.service(crate::util::service_fn(f)) } /// Check that the builder implements `Clone`. /// /// This can be useful when debugging type errors in `ServiceBuilder`s with lots of layers. /// /// Doesn't actually change the builder but serves as a type check. /// /// # Example /// /// ```rust /// use tower::ServiceBuilder; /// /// let builder = ServiceBuilder::new() /// // Do something before processing the request /// .map_request(|request: String| { /// println!("got request!"); /// request /// }) /// // Ensure our `ServiceBuilder` can be cloned /// .check_clone() /// // Do something after processing the request /// .map_response(|response: String| { /// println!("got response!"); /// response /// }); /// ``` #[inline] pub fn check_clone(self) -> Self where Self: Clone, { self } /// Check that the builder when given a service of type `S` produces a service that implements /// `Clone`. /// /// This can be useful when debugging type errors in `ServiceBuilder`s with lots of layers. /// /// Doesn't actually change the builder but serves as a type check. /// /// # Example /// /// ```rust /// use tower::ServiceBuilder; /// /// # #[derive(Clone)] /// # struct MyService; /// # /// let builder = ServiceBuilder::new() /// // Do something before processing the request /// .map_request(|request: String| { /// println!("got request!"); /// request /// }) /// // Ensure that the service produced when given a `MyService` implements /// .check_service_clone::() /// // Do something after processing the request /// .map_response(|response: String| { /// println!("got response!"); /// response /// }); /// ``` #[inline] pub fn check_service_clone(self) -> Self where L: Layer, L::Service: Clone, { self } /// Check that the builder when given a service of type `S` produces a service with the given /// request, response, and error types. /// /// This can be useful when debugging type errors in `ServiceBuilder`s with lots of layers. /// /// Doesn't actually change the builder but serves as a type check. /// /// # Example /// /// ```rust /// use tower::ServiceBuilder; /// use std::task::{Poll, Context}; /// use tower::{Service, ServiceExt}; /// /// // An example service /// struct MyService; /// /// impl Service for MyService { /// type Response = Response; /// type Error = Error; /// type Future = futures_util::future::Ready>; /// /// fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { /// // ... /// # todo!() /// } /// /// fn call(&mut self, request: Request) -> Self::Future { /// // ... /// # todo!() /// } /// } /// /// struct Request; /// struct Response; /// struct Error; /// /// struct WrappedResponse(Response); /// /// let builder = ServiceBuilder::new() /// // At this point in the builder if given a `MyService` it produces a service that /// // accepts `Request`s, produces `Response`s, and fails with `Error`s /// .check_service::() /// // Wrap responses in `WrappedResponse` /// .map_response(|response: Response| WrappedResponse(response)) /// // Now the response type will be `WrappedResponse` /// .check_service::(); /// ``` #[inline] pub fn check_service(self) -> Self where L: Layer, L::Service: Service, { self } /// This wraps the inner service with the [`Layer`] returned by [`BoxService::layer()`]. /// /// See that method for more details. /// /// # Example /// /// ``` /// use tower::{Service, ServiceBuilder, BoxError, util::BoxService}; /// use std::time::Duration; /// # /// # struct Request; /// # struct Response; /// # impl Response { /// # fn new() -> Self { Self } /// # } /// /// let service: BoxService = ServiceBuilder::new() /// .boxed() /// .load_shed() /// .concurrency_limit(64) /// .timeout(Duration::from_secs(10)) /// .service_fn(|req: Request| async { /// Ok::<_, BoxError>(Response::new()) /// }); /// # let service = assert_service(service); /// # fn assert_service(svc: S) -> S /// # where S: Service { svc } /// ``` /// /// [`BoxService::layer()`]: crate::util::BoxService::layer() #[cfg(feature = "util")] #[cfg_attr(docsrs, doc(cfg(feature = "util")))] pub fn boxed( self, ) -> ServiceBuilder< Stack< tower_layer::LayerFn< fn( L::Service, ) -> crate::util::BoxService< R, >::Response, >::Error, >, >, L, >, > where L: Layer, L::Service: Service + Send + 'static, >::Future: Send + 'static, { self.layer(crate::util::BoxService::layer()) } /// This wraps the inner service with the [`Layer`] returned by [`BoxCloneService::layer()`]. /// /// This is similar to the [`boxed`] method, but it requires that `Self` implement /// [`Clone`], and the returned boxed service implements [`Clone`]. /// /// See [`BoxCloneService`] for more details. /// /// # Example /// /// ``` /// use tower::{Service, ServiceBuilder, BoxError, util::BoxCloneService}; /// use std::time::Duration; /// # /// # struct Request; /// # struct Response; /// # impl Response { /// # fn new() -> Self { Self } /// # } /// /// let service: BoxCloneService = ServiceBuilder::new() /// .boxed_clone() /// .load_shed() /// .concurrency_limit(64) /// .timeout(Duration::from_secs(10)) /// .service_fn(|req: Request| async { /// Ok::<_, BoxError>(Response::new()) /// }); /// # let service = assert_service(service); /// /// // The boxed service can still be cloned. /// service.clone(); /// # fn assert_service(svc: S) -> S /// # where S: Service { svc } /// ``` /// /// [`BoxCloneService::layer()`]: crate::util::BoxCloneService::layer() /// [`BoxCloneService`]: crate::util::BoxCloneService /// [`boxed`]: Self::boxed #[cfg(feature = "util")] #[cfg_attr(docsrs, doc(cfg(feature = "util")))] pub fn boxed_clone( self, ) -> ServiceBuilder< Stack< tower_layer::LayerFn< fn( L::Service, ) -> crate::util::BoxCloneService< R, >::Response, >::Error, >, >, L, >, > where L: Layer, L::Service: Service + Clone + Send + 'static, >::Future: Send + 'static, { self.layer(crate::util::BoxCloneService::layer()) } } impl fmt::Debug for ServiceBuilder { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { f.debug_tuple("ServiceBuilder").field(&self.layer).finish() } } impl Layer for ServiceBuilder where L: Layer, { type Service = L::Service; fn layer(&self, inner: S) -> Self::Service { self.layer.layer(inner) } } tower-0.4.13/src/discover/error.rs000064400000000000000000000003240072674642500152040ustar 00000000000000use std::{error::Error, fmt}; #[derive(Debug)] pub enum Never {} impl fmt::Display for Never { fn fmt(&self, _: &mut fmt::Formatter) -> fmt::Result { match *self {} } } impl Error for Never {} tower-0.4.13/src/discover/list.rs000064400000000000000000000027630072674642500150370ustar 00000000000000use super::{error::Never, Change}; use futures_core::Stream; use pin_project_lite::pin_project; use std::iter::{Enumerate, IntoIterator}; use std::{ pin::Pin, task::{Context, Poll}, }; use tower_service::Service; pin_project! { /// Static service discovery based on a predetermined list of services. /// /// [`ServiceList`] is created with an initial list of services. The discovery /// process will yield this list once and do nothing after. #[derive(Debug)] pub struct ServiceList where T: IntoIterator, { inner: Enumerate, } } impl ServiceList where T: IntoIterator, { #[allow(missing_docs)] pub fn new(services: T) -> ServiceList where U: Service, { ServiceList { inner: services.into_iter().enumerate(), } } } impl Stream for ServiceList where T: IntoIterator, { type Item = Result, Never>; fn poll_next(self: Pin<&mut Self>, _: &mut Context<'_>) -> Poll> { match self.project().inner.next() { Some((i, service)) => Poll::Ready(Some(Ok(Change::Insert(i, service)))), None => Poll::Ready(None), } } } // check that List can be directly over collections #[cfg(test)] #[allow(dead_code)] type ListVecTest = ServiceList>; #[cfg(test)] #[allow(dead_code)] type ListVecIterTest = ServiceList<::std::vec::IntoIter>; tower-0.4.13/src/discover/mod.rs000064400000000000000000000062370072674642500146430ustar 00000000000000//! Service discovery //! //! This module provides the [`Change`] enum, which indicates the arrival or departure of a service //! from a collection of similar services. Most implementations should use the [`Discover`] trait //! in their bounds to indicate that they can handle services coming and going. [`Discover`] itself //! is primarily a convenience wrapper around [`TryStream`][`TryStream`]. //! //! Every discovered service is assigned an identifier that is distinct among the currently active //! services. If that service later goes away, a [`Change::Remove`] is yielded with that service's //! identifier. From that point forward, the identifier may be re-used. //! //! # Examples //! //! ```rust //! use futures_util::{future::poll_fn, pin_mut}; //! use tower::discover::{Change, Discover}; //! async fn services_monitor(services: D) { //! pin_mut!(services); //! while let Some(Ok(change)) = poll_fn(|cx| services.as_mut().poll_discover(cx)).await { //! match change { //! Change::Insert(key, svc) => { //! // a new service with identifier `key` was discovered //! # let _ = (key, svc); //! } //! Change::Remove(key) => { //! // the service with identifier `key` has gone away //! # let _ = (key); //! } //! } //! } //! } //! ``` //! //! [`TryStream`]: https://docs.rs/futures/latest/futures/stream/trait.TryStream.html mod error; mod list; pub use self::list::ServiceList; use crate::sealed::Sealed; use futures_core::TryStream; use std::{ pin::Pin, task::{Context, Poll}, }; /// A dynamically changing set of related services. /// /// As new services arrive and old services are retired, /// [`Change`]s are returned which provide unique identifiers /// for the services. /// /// See the module documentation for more details. pub trait Discover: Sealed> { /// A unique identifier for each active service. /// /// An identifier can be re-used once a [`Change::Remove`] has been yielded for its service. type Key: Eq; /// The type of [`Service`] yielded by this [`Discover`]. /// /// [`Service`]: crate::Service type Service; /// Error produced during discovery type Error; /// Yields the next discovery change set. fn poll_discover( self: Pin<&mut Self>, cx: &mut Context<'_>, ) -> Poll, Self::Error>>>; } impl Sealed> for D where D: TryStream, Error = E>, K: Eq, { } impl Discover for D where D: TryStream, Error = E>, K: Eq, { type Key = K; type Service = S; type Error = E; fn poll_discover( self: Pin<&mut Self>, cx: &mut Context<'_>, ) -> Poll>> { TryStream::try_poll_next(self, cx) } } /// A change in the service set. #[derive(Debug)] pub enum Change { /// A new service identified by key `K` was identified. Insert(K, V), /// The service identified by key `K` disappeared. Remove(K), } tower-0.4.13/src/filter/future.rs000064400000000000000000000046620072674642500150450ustar 00000000000000//! Future types use super::AsyncPredicate; use crate::BoxError; use futures_core::ready; use pin_project_lite::pin_project; use std::{ future::Future, pin::Pin, task::{Context, Poll}, }; use tower_service::Service; pin_project! { /// Filtered response future from [`AsyncFilter`] services. /// /// [`AsyncFilter`]: crate::filter::AsyncFilter #[derive(Debug)] pub struct AsyncResponseFuture where P: AsyncPredicate, S: Service, { #[pin] state: State, // Inner service service: S, } } opaque_future! { /// Filtered response future from [`Filter`] services. /// /// [`Filter`]: crate::filter::Filter pub type ResponseFuture = futures_util::future::Either< futures_util::future::Ready>, futures_util::future::ErrInto >; } pin_project! { #[project = StateProj] #[derive(Debug)] enum State { /// Waiting for the predicate future Check { #[pin] check: F }, /// Waiting for the response future WaitResponse { #[pin] response: G }, } } impl AsyncResponseFuture where P: AsyncPredicate, S: Service, S::Error: Into, { pub(crate) fn new(check: P::Future, service: S) -> Self { Self { state: State::Check { check }, service, } } } impl Future for AsyncResponseFuture where P: AsyncPredicate, S: Service, S::Error: Into, { type Output = Result; fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { let mut this = self.project(); loop { match this.state.as_mut().project() { StateProj::Check { mut check } => { let request = ready!(check.as_mut().poll(cx))?; let response = this.service.call(request); this.state.set(State::WaitResponse { response }); } StateProj::WaitResponse { response } => { return response.poll(cx).map_err(Into::into); } } } } } tower-0.4.13/src/filter/layer.rs000064400000000000000000000036160072674642500146450ustar 00000000000000use super::{AsyncFilter, Filter}; use tower_layer::Layer; /// Conditionally dispatch requests to the inner service based on a synchronous /// [predicate]. /// /// This [`Layer`] produces instances of the [`Filter`] service. /// /// [predicate]: crate::filter::Predicate /// [`Layer`]: crate::Layer /// [`Filter`]: crate::filter::Filter #[derive(Debug, Clone)] pub struct FilterLayer { predicate: U, } /// Conditionally dispatch requests to the inner service based on an asynchronous /// [predicate]. /// /// This [`Layer`] produces instances of the [`AsyncFilter`] service. /// /// [predicate]: crate::filter::AsyncPredicate /// [`Layer`]: crate::Layer /// [`Filter`]: crate::filter::AsyncFilter #[derive(Debug)] pub struct AsyncFilterLayer { predicate: U, } // === impl FilterLayer === impl FilterLayer { /// Returns a new layer that produces [`Filter`] services with the given /// [`Predicate`]. /// /// [`Predicate`]: crate::filter::Predicate /// [`Filter`]: crate::filter::Filter pub fn new(predicate: U) -> Self { Self { predicate } } } impl Layer for FilterLayer { type Service = Filter; fn layer(&self, service: S) -> Self::Service { let predicate = self.predicate.clone(); Filter::new(service, predicate) } } // === impl AsyncFilterLayer === impl AsyncFilterLayer { /// Returns a new layer that produces [`AsyncFilter`] services with the given /// [`AsyncPredicate`]. /// /// [`AsyncPredicate`]: crate::filter::AsyncPredicate /// [`Filter`]: crate::filter::Filter pub fn new(predicate: U) -> Self { Self { predicate } } } impl Layer for AsyncFilterLayer { type Service = AsyncFilter; fn layer(&self, service: S) -> Self::Service { let predicate = self.predicate.clone(); AsyncFilter::new(service, predicate) } } tower-0.4.13/src/filter/mod.rs000064400000000000000000000135000072674642500143010ustar 00000000000000//! Conditionally dispatch requests to the inner service based on the result of //! a predicate. //! //! A predicate takes some request type and returns a `Result`. //! If the predicate returns [`Ok`], the inner service is called with the request //! returned by the predicate — which may be the original request or a //! modified one. If the predicate returns [`Err`], the request is rejected and //! the inner service is not called. //! //! Predicates may either be synchronous (simple functions from a `Request` to //! a [`Result`]) or asynchronous (functions returning [`Future`]s). Separate //! traits, [`Predicate`] and [`AsyncPredicate`], represent these two types of //! predicate. Note that when it is not necessary to await some other //! asynchronous operation in the predicate, the synchronous predicate should be //! preferred, as it introduces less overhead. //! //! The predicate traits are implemented for closures and function pointers. //! However, users may also implement them for other types, such as when the //! predicate requires some state carried between requests. For example, //! [`Predicate`] could be implemented for a type that rejects a fixed set of //! requests by checking if they are contained by a a [`HashSet`] or other //! collection. //! //! [`Future`]: std::future::Future //! [`HashSet`]: std::collections::HashSet pub mod future; mod layer; mod predicate; pub use self::{ layer::{AsyncFilterLayer, FilterLayer}, predicate::{AsyncPredicate, Predicate}, }; use self::future::{AsyncResponseFuture, ResponseFuture}; use crate::BoxError; use futures_util::{future::Either, TryFutureExt}; use std::task::{Context, Poll}; use tower_service::Service; /// Conditionally dispatch requests to the inner service based on a [predicate]. /// /// [predicate]: Predicate #[derive(Clone, Debug)] pub struct Filter { inner: T, predicate: U, } /// Conditionally dispatch requests to the inner service based on an /// [asynchronous predicate]. /// /// [asynchronous predicate]: AsyncPredicate #[derive(Clone, Debug)] pub struct AsyncFilter { inner: T, predicate: U, } // ==== impl Filter ==== impl Filter { /// Returns a new [`Filter`] service wrapping `inner`. pub fn new(inner: T, predicate: U) -> Self { Self { inner, predicate } } /// Returns a new [`Layer`] that wraps services with a [`Filter`] service /// with the given [`Predicate`]. /// /// [`Layer`]: crate::Layer pub fn layer(predicate: U) -> FilterLayer { FilterLayer::new(predicate) } /// Check a `Request` value against this filter's predicate. pub fn check(&mut self, request: R) -> Result where U: Predicate, { self.predicate.check(request) } /// Get a reference to the inner service pub fn get_ref(&self) -> &T { &self.inner } /// Get a mutable reference to the inner service pub fn get_mut(&mut self) -> &mut T { &mut self.inner } /// Consume `self`, returning the inner service pub fn into_inner(self) -> T { self.inner } } impl Service for Filter where U: Predicate, T: Service, T::Error: Into, { type Response = T::Response; type Error = BoxError; type Future = ResponseFuture; fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { self.inner.poll_ready(cx).map_err(Into::into) } fn call(&mut self, request: Request) -> Self::Future { ResponseFuture::new(match self.predicate.check(request) { Ok(request) => Either::Right(self.inner.call(request).err_into()), Err(e) => Either::Left(futures_util::future::ready(Err(e))), }) } } // ==== impl AsyncFilter ==== impl AsyncFilter { /// Returns a new [`AsyncFilter`] service wrapping `inner`. pub fn new(inner: T, predicate: U) -> Self { Self { inner, predicate } } /// Returns a new [`Layer`] that wraps services with an [`AsyncFilter`] /// service with the given [`AsyncPredicate`]. /// /// [`Layer`]: crate::Layer pub fn layer(predicate: U) -> FilterLayer { FilterLayer::new(predicate) } /// Check a `Request` value against this filter's predicate. pub async fn check(&mut self, request: R) -> Result where U: AsyncPredicate, { self.predicate.check(request).await } /// Get a reference to the inner service pub fn get_ref(&self) -> &T { &self.inner } /// Get a mutable reference to the inner service pub fn get_mut(&mut self) -> &mut T { &mut self.inner } /// Consume `self`, returning the inner service pub fn into_inner(self) -> T { self.inner } } impl Service for AsyncFilter where U: AsyncPredicate, T: Service + Clone, T::Error: Into, { type Response = T::Response; type Error = BoxError; type Future = AsyncResponseFuture; fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { self.inner.poll_ready(cx).map_err(Into::into) } fn call(&mut self, request: Request) -> Self::Future { use std::mem; let inner = self.inner.clone(); // In case the inner service has state that's driven to readiness and // not tracked by clones (such as `Buffer`), pass the version we have // already called `poll_ready` on into the future, and leave its clone // behind. let inner = mem::replace(&mut self.inner, inner); // Check the request let check = self.predicate.check(request); AsyncResponseFuture::new(check, inner) } } tower-0.4.13/src/filter/predicate.rs000064400000000000000000000035360072674642500154720ustar 00000000000000use crate::BoxError; use std::future::Future; /// Checks a request asynchronously. pub trait AsyncPredicate { /// The future returned by [`check`]. /// /// [`check`]: crate::filter::AsyncPredicate::check type Future: Future>; /// The type of requests returned by [`check`]. /// /// This request is forwarded to the inner service if the predicate /// succeeds. /// /// [`check`]: crate::filter::AsyncPredicate::check type Request; /// Check whether the given request should be forwarded. /// /// If the future resolves with [`Ok`], the request is forwarded to the inner service. fn check(&mut self, request: Request) -> Self::Future; } /// Checks a request synchronously. pub trait Predicate { /// The type of requests returned by [`check`]. /// /// This request is forwarded to the inner service if the predicate /// succeeds. /// /// [`check`]: crate::filter::Predicate::check type Request; /// Check whether the given request should be forwarded. /// /// If the future resolves with [`Ok`], the request is forwarded to the inner service. fn check(&mut self, request: Request) -> Result; } impl AsyncPredicate for F where F: FnMut(T) -> U, U: Future>, E: Into, { type Future = futures_util::future::ErrInto; type Request = R; fn check(&mut self, request: T) -> Self::Future { use futures_util::TryFutureExt; self(request).err_into() } } impl Predicate for F where F: FnMut(T) -> Result, E: Into, { type Request = R; fn check(&mut self, request: T) -> Result { self(request).map_err(Into::into) } } tower-0.4.13/src/hedge/delay.rs000064400000000000000000000065160072674642500144200ustar 00000000000000use futures_util::ready; use pin_project_lite::pin_project; use std::time::Duration; use std::{ future::Future, pin::Pin, task::{Context, Poll}, }; use tower_service::Service; use crate::util::Oneshot; /// A policy which specifies how long each request should be delayed for. pub trait Policy { fn delay(&self, req: &Request) -> Duration; } /// A middleware which delays sending the request to the underlying service /// for an amount of time specified by the policy. #[derive(Debug)] pub struct Delay { policy: P, service: S, } pin_project! { #[derive(Debug)] pub struct ResponseFuture where S: Service, { service: Option, #[pin] state: State>, } } pin_project! { #[project = StateProj] #[derive(Debug)] enum State { Delaying { #[pin] delay: tokio::time::Sleep, req: Option, }, Called { #[pin] fut: F, }, } } impl State { fn delaying(delay: tokio::time::Sleep, req: Option) -> Self { Self::Delaying { delay, req } } fn called(fut: F) -> Self { Self::Called { fut } } } impl Delay { pub fn new(policy: P, service: S) -> Self where P: Policy, S: Service + Clone, S::Error: Into, { Delay { policy, service } } } impl Service for Delay where P: Policy, S: Service + Clone, S::Error: Into, { type Response = S::Response; type Error = crate::BoxError; type Future = ResponseFuture; fn poll_ready(&mut self, _cx: &mut Context<'_>) -> Poll> { // Calling self.service.poll_ready would reserve a slot for the delayed request, // potentially well in advance of actually making it. Instead, signal readiness here and // treat the service as a Oneshot in the future. Poll::Ready(Ok(())) } fn call(&mut self, request: Request) -> Self::Future { let delay = self.policy.delay(&request); ResponseFuture { service: Some(self.service.clone()), state: State::delaying(tokio::time::sleep(delay), Some(request)), } } } impl Future for ResponseFuture where E: Into, S: Service, { type Output = Result; fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { let mut this = self.project(); loop { match this.state.as_mut().project() { StateProj::Delaying { delay, req } => { ready!(delay.poll(cx)); let req = req.take().expect("Missing request in delay"); let svc = this.service.take().expect("Missing service in delay"); let fut = Oneshot::new(svc, req); this.state.set(State::called(fut)); } StateProj::Called { fut } => { return fut.poll(cx).map_err(Into::into); } }; } } } tower-0.4.13/src/hedge/latency.rs000064400000000000000000000042750072674642500147610ustar 00000000000000use futures_util::ready; use pin_project_lite::pin_project; use std::time::Duration; use std::{ future::Future, pin::Pin, task::{Context, Poll}, }; use tokio::time::Instant; use tower_service::Service; /// Record is the interface for accepting request latency measurements. When /// a request completes, record is called with the elapsed duration between /// when the service was called and when the future completed. pub trait Record { fn record(&mut self, latency: Duration); } /// Latency is a middleware that measures request latency and records it to the /// provided Record instance. #[derive(Clone, Debug)] pub struct Latency { rec: R, service: S, } pin_project! { #[derive(Debug)] pub struct ResponseFuture { start: Instant, rec: R, #[pin] inner: F, } } impl Latency where R: Record + Clone, { pub fn new(rec: R, service: S) -> Self where S: Service, S::Error: Into, { Latency { rec, service } } } impl Service for Latency where S: Service, S::Error: Into, R: Record + Clone, { type Response = S::Response; type Error = crate::BoxError; type Future = ResponseFuture; fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { self.service.poll_ready(cx).map_err(Into::into) } fn call(&mut self, request: Request) -> Self::Future { ResponseFuture { start: Instant::now(), rec: self.rec.clone(), inner: self.service.call(request), } } } impl Future for ResponseFuture where R: Record, F: Future>, E: Into, { type Output = Result; fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { let this = self.project(); let rsp = ready!(this.inner.poll(cx)).map_err(Into::into)?; let duration = Instant::now().saturating_duration_since(*this.start); this.rec.record(duration); Poll::Ready(Ok(rsp)) } } tower-0.4.13/src/hedge/mod.rs000064400000000000000000000177210072674642500141010ustar 00000000000000//! Pre-emptively retry requests which have been outstanding for longer //! than a given latency percentile. #![warn(missing_debug_implementations, missing_docs, unreachable_pub)] use crate::filter::AsyncFilter; use futures_util::future; use pin_project_lite::pin_project; use std::sync::{Arc, Mutex}; use std::time::Duration; use std::{ pin::Pin, task::{Context, Poll}, }; use tracing::error; mod delay; mod latency; mod rotating_histogram; mod select; use delay::Delay; use latency::Latency; use rotating_histogram::RotatingHistogram; use select::Select; type Histo = Arc>; type Service = select::Select< SelectPolicy

, Latency, Delay, PolicyPredicate

>>, >; /// A middleware that pre-emptively retries requests which have been outstanding /// for longer than a given latency percentile. If either of the original /// future or the retry future completes, that value is used. #[derive(Debug)] pub struct Hedge(Service); pin_project! { /// The [`Future`] returned by the [`Hedge`] service. /// /// [`Future`]: std::future::Future #[derive(Debug)] pub struct Future where S: tower_service::Service, { #[pin] inner: S::Future, } } /// A policy which describes which requests can be cloned and then whether those /// requests should be retried. pub trait Policy { /// Called when the request is first received to determine if the request is retryable. fn clone_request(&self, req: &Request) -> Option; /// Called after the hedge timeout to determine if the hedge retry should be issued. fn can_retry(&self, req: &Request) -> bool; } // NOTE: these are pub only because they appear inside a Future #[doc(hidden)] #[derive(Clone, Debug)] pub struct PolicyPredicate

(P); #[doc(hidden)] #[derive(Debug)] pub struct DelayPolicy { histo: Histo, latency_percentile: f32, } #[doc(hidden)] #[derive(Debug)] pub struct SelectPolicy

{ policy: P, histo: Histo, min_data_points: u64, } impl Hedge { /// Create a new hedge middleware. pub fn new( service: S, policy: P, min_data_points: u64, latency_percentile: f32, period: Duration, ) -> Hedge where S: tower_service::Service + Clone, S::Error: Into, P: Policy + Clone, { let histo = Arc::new(Mutex::new(RotatingHistogram::new(period))); Self::new_with_histo(service, policy, min_data_points, latency_percentile, histo) } /// A hedge middleware with a prepopulated latency histogram. This is usedful /// for integration tests. pub fn new_with_mock_latencies( service: S, policy: P, min_data_points: u64, latency_percentile: f32, period: Duration, latencies_ms: &[u64], ) -> Hedge where S: tower_service::Service + Clone, S::Error: Into, P: Policy + Clone, { let histo = Arc::new(Mutex::new(RotatingHistogram::new(period))); { let mut locked = histo.lock().unwrap(); for latency in latencies_ms.iter() { locked.read().record(*latency).unwrap(); } } Self::new_with_histo(service, policy, min_data_points, latency_percentile, histo) } fn new_with_histo( service: S, policy: P, min_data_points: u64, latency_percentile: f32, histo: Histo, ) -> Hedge where S: tower_service::Service + Clone, S::Error: Into, P: Policy + Clone, { // Clone the underlying service and wrap both copies in a middleware that // records the latencies in a rotating histogram. let recorded_a = Latency::new(histo.clone(), service.clone()); let recorded_b = Latency::new(histo.clone(), service); // Check policy to see if the hedge request should be issued. let filtered = AsyncFilter::new(recorded_b, PolicyPredicate(policy.clone())); // Delay the second request by a percentile of the recorded request latency // histogram. let delay_policy = DelayPolicy { histo: histo.clone(), latency_percentile, }; let delayed = Delay::new(delay_policy, filtered); // If the request is retryable, issue two requests -- the second one delayed // by a latency percentile. Use the first result to complete. let select_policy = SelectPolicy { policy, histo, min_data_points, }; Hedge(Select::new(select_policy, recorded_a, delayed)) } } impl tower_service::Service for Hedge where S: tower_service::Service + Clone, S::Error: Into, P: Policy + Clone, { type Response = S::Response; type Error = crate::BoxError; type Future = Future, Request>; fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { self.0.poll_ready(cx) } fn call(&mut self, request: Request) -> Self::Future { Future { inner: self.0.call(request), } } } impl std::future::Future for Future where S: tower_service::Service, S::Error: Into, { type Output = Result; fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { self.project().inner.poll(cx).map_err(Into::into) } } // TODO: Remove when Duration::as_millis() becomes stable. const NANOS_PER_MILLI: u32 = 1_000_000; const MILLIS_PER_SEC: u64 = 1_000; fn millis(duration: Duration) -> u64 { // Round up. let millis = (duration.subsec_nanos() + NANOS_PER_MILLI - 1) / NANOS_PER_MILLI; duration .as_secs() .saturating_mul(MILLIS_PER_SEC) .saturating_add(u64::from(millis)) } impl latency::Record for Histo { fn record(&mut self, latency: Duration) { let mut locked = self.lock().unwrap(); locked.write().record(millis(latency)).unwrap_or_else(|e| { error!("Failed to write to hedge histogram: {:?}", e); }) } } impl crate::filter::AsyncPredicate for PolicyPredicate

where P: Policy, { type Future = future::Either< future::Ready>, future::Pending>, >; type Request = Request; fn check(&mut self, request: Request) -> Self::Future { if self.0.can_retry(&request) { future::Either::Left(future::ready(Ok(request))) } else { // If the hedge retry should not be issued, we simply want to wait // for the result of the original request. Therefore we don't want // to return an error here. Instead, we use future::pending to ensure // that the original request wins the select. future::Either::Right(future::pending()) } } } impl delay::Policy for DelayPolicy { fn delay(&self, _req: &Request) -> Duration { let mut locked = self.histo.lock().unwrap(); let millis = locked .read() .value_at_quantile(self.latency_percentile.into()); Duration::from_millis(millis) } } impl select::Policy for SelectPolicy

where P: Policy, { fn clone_request(&self, req: &Request) -> Option { self.policy.clone_request(req).filter(|_| { let mut locked = self.histo.lock().unwrap(); // Do not attempt a retry if there are insufficiently many data // points in the histogram. locked.read().len() >= self.min_data_points }) } } tower-0.4.13/src/hedge/rotating_histogram.rs000064400000000000000000000045130072674642500172210ustar 00000000000000use hdrhistogram::Histogram; use std::time::Duration; use tokio::time::Instant; use tracing::trace; /// This represents a "rotating" histogram which stores two histogram, one which /// should be read and one which should be written to. Every period, the read /// histogram is discarded and replaced by the write histogram. The idea here /// is that the read histogram should always contain a full period (the previous /// period) of write operations. #[derive(Debug)] pub struct RotatingHistogram { read: Histogram, write: Histogram, last_rotation: Instant, period: Duration, } impl RotatingHistogram { pub fn new(period: Duration) -> RotatingHistogram { RotatingHistogram { // Use an auto-resizing histogram to avoid choosing // a maximum latency bound for all users. read: Histogram::::new(3).expect("Invalid histogram params"), write: Histogram::::new(3).expect("Invalid histogram params"), last_rotation: Instant::now(), period, } } pub fn read(&mut self) -> &mut Histogram { self.maybe_rotate(); &mut self.read } pub fn write(&mut self) -> &mut Histogram { self.maybe_rotate(); &mut self.write } fn maybe_rotate(&mut self) { let delta = Instant::now().saturating_duration_since(self.last_rotation); // TODO: replace with delta.duration_div when it becomes stable. let rotations = (nanos(delta) / nanos(self.period)) as u32; if rotations >= 2 { trace!("Time since last rotation is {:?}. clearing!", delta); self.clear(); } else if rotations == 1 { trace!("Time since last rotation is {:?}. rotating!", delta); self.rotate(); } self.last_rotation += self.period * rotations; } fn rotate(&mut self) { std::mem::swap(&mut self.read, &mut self.write); trace!("Rotated {:?} points into read", self.read.len()); self.write.clear(); } fn clear(&mut self) { self.read.clear(); self.write.clear(); } } const NANOS_PER_SEC: u64 = 1_000_000_000; fn nanos(duration: Duration) -> u64 { duration .as_secs() .saturating_mul(NANOS_PER_SEC) .saturating_add(u64::from(duration.subsec_nanos())) } tower-0.4.13/src/hedge/select.rs000064400000000000000000000057400072674642500145770ustar 00000000000000use pin_project_lite::pin_project; use std::{ future::Future, pin::Pin, task::{Context, Poll}, }; use tower_service::Service; /// A policy which decides which requests can be cloned and sent to the B /// service. pub trait Policy { fn clone_request(&self, req: &Request) -> Option; } /// Select is a middleware which attempts to clone the request and sends the /// original request to the A service and, if the request was able to be cloned, /// the cloned request to the B service. Both resulting futures will be polled /// and whichever future completes first will be used as the result. #[derive(Debug)] pub struct Select { policy: P, a: A, b: B, } pin_project! { #[derive(Debug)] pub struct ResponseFuture { #[pin] a_fut: AF, #[pin] b_fut: Option, } } impl Select { pub fn new(policy: P, a: A, b: B) -> Self where P: Policy, A: Service, A::Error: Into, B: Service, B::Error: Into, { Select { policy, a, b } } } impl Service for Select where P: Policy, A: Service, A::Error: Into, B: Service, B::Error: Into, { type Response = A::Response; type Error = crate::BoxError; type Future = ResponseFuture; fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { match (self.a.poll_ready(cx), self.b.poll_ready(cx)) { (Poll::Ready(Ok(())), Poll::Ready(Ok(()))) => Poll::Ready(Ok(())), (Poll::Ready(Err(e)), _) => Poll::Ready(Err(e.into())), (_, Poll::Ready(Err(e))) => Poll::Ready(Err(e.into())), _ => Poll::Pending, } } fn call(&mut self, request: Request) -> Self::Future { let b_fut = if let Some(cloned_req) = self.policy.clone_request(&request) { Some(self.b.call(cloned_req)) } else { None }; ResponseFuture { a_fut: self.a.call(request), b_fut, } } } impl Future for ResponseFuture where AF: Future>, AE: Into, BF: Future>, BE: Into, { type Output = Result; fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { let this = self.project(); if let Poll::Ready(r) = this.a_fut.poll(cx) { return Poll::Ready(Ok(r.map_err(Into::into)?)); } if let Some(b_fut) = this.b_fut.as_pin_mut() { if let Poll::Ready(r) = b_fut.poll(cx) { return Poll::Ready(Ok(r.map_err(Into::into)?)); } } Poll::Pending } } tower-0.4.13/src/layer.rs000064400000000000000000000005360072674642500133560ustar 00000000000000//! A collection of [`Layer`] based tower services //! //! [`Layer`]: crate::Layer pub use tower_layer::{layer_fn, Layer, LayerFn}; /// Utilities for combining layers /// /// [`Identity`]: crate::layer::util::Identity /// [`Layer`]: crate::Layer /// [`Stack`]: crate::layer::util::Stack pub mod util { pub use tower_layer::{Identity, Stack}; } tower-0.4.13/src/lib.rs000064400000000000000000000225320072674642500130100ustar 00000000000000#![warn( missing_debug_implementations, missing_docs, rust_2018_idioms, unreachable_pub )] #![forbid(unsafe_code)] #![allow(elided_lifetimes_in_paths, clippy::type_complexity)] #![cfg_attr(test, allow(clippy::float_cmp))] #![cfg_attr(docsrs, feature(doc_cfg))] // `rustdoc::broken_intra_doc_links` is checked on CI //! `async fn(Request) -> Result` //! //! # Overview //! //! Tower is a library of modular and reusable components for building //! robust networking clients and servers. //! //! Tower provides a simple core abstraction, the [`Service`] trait, which //! represents an asynchronous function taking a request and returning either a //! response or an error. This abstraction can be used to model both clients and //! servers. //! //! Generic components, like [timeouts], [rate limiting], and [load balancing], //! can be modeled as [`Service`]s that wrap some inner service and apply //! additional behavior before or after the inner service is called. This allows //! implementing these components in a protocol-agnostic, composable way. Typically, //! such services are referred to as _middleware_. //! //! An additional abstraction, the [`Layer`] trait, is used to compose //! middleware with [`Service`]s. If a [`Service`] can be thought of as an //! asynchronous function from a request type to a response type, a [`Layer`] is //! a function taking a [`Service`] of one type and returning a [`Service`] of a //! different type. The [`ServiceBuilder`] type is used to add middleware to a //! service by composing it with multiple [`Layer`]s. //! //! ## The Tower Ecosystem //! //! Tower is made up of the following crates: //! //! * [`tower`] (this crate) //! * [`tower-service`] //! * [`tower-layer`] //! * [`tower-test`] //! //! Since the [`Service`] and [`Layer`] traits are important integration points //! for all libraries using Tower, they are kept as stable as possible, and //! breaking changes are made rarely. Therefore, they are defined in separate //! crates, [`tower-service`] and [`tower-layer`]. This crate contains //! re-exports of those core traits, implementations of commonly-used //! middleware, and [utilities] for working with [`Service`]s and [`Layer`]s. //! Finally, the [`tower-test`] crate provides tools for testing programs using //! Tower. //! //! # Usage //! //! Tower provides an abstraction layer, and generic implementations of various //! middleware. This means that the `tower` crate on its own does *not* provide //! a working implementation of a network client or server. Instead, Tower's //! [`Service` trait][`Service`] provides an integration point between //! application code, libraries providing middleware implementations, and //! libraries that implement servers and/or clients for various network //! protocols. //! //! Depending on your particular use case, you might use Tower in several ways: //! //! * **Implementing application logic** for a networked program. You might //! use the [`Service`] trait to model your application's behavior, and use //! the middleware [provided by this crate](#modules) and by other libraries //! to add functionality to clients and servers provided by one or more //! protocol implementations. //! * **Implementing middleware** to add custom behavior to network clients and //! servers in a reusable manner. This might be general-purpose middleware //! (and if it is, please consider releasing your middleware as a library for //! other Tower users!) or application-specific behavior that needs to be //! shared between multiple clients or servers. //! * **Implementing a network protocol**. Libraries that implement network //! protocols (such as HTTP) can depend on `tower-service` to use the //! [`Service`] trait as an integration point between the protocol and user //! code. For example, a client for some protocol might implement [`Service`], //! allowing users to add arbitrary Tower middleware to those clients. //! Similarly, a server might be created from a user-provided [`Service`]. //! //! Additionally, when a network protocol requires functionality already //! provided by existing Tower middleware, a protocol implementation might use //! Tower middleware internally, as well as as an integration point. //! //! ## Library Support //! //! A number of third-party libraries support Tower and the [`Service`] trait. //! The following is an incomplete list of such libraries: //! //! * [`hyper`]: A fast and correct low-level HTTP implementation. //! * [`tonic`]: A [gRPC-over-HTTP/2][grpc] implementation built on top of //! [`hyper`]. See [here][tonic-examples] for examples of using [`tonic`] with //! Tower. //! * [`warp`]: A lightweight, composable web framework. See //! [here][warp-service] for details on using [`warp`] with Tower. //! * [`tower-lsp`] and its fork, [`lspower`]: implementations of the [Language //! Server Protocol][lsp] based on Tower. //! //! [`hyper`]: https://crates.io/crates/hyper //! [`tonic`]: https://crates.io/crates/tonic //! [tonic-examples]: https://github.com/hyperium/tonic/tree/master/examples/src/tower //! [grpc]: https://grpc.io //! [`warp`]: https://crates.io/crates/warp //! [warp-service]: https://docs.rs/warp/0.2.5/warp/fn.service.html //! [`tower-lsp`]: https://crates.io/crates/tower-lsp //! [`lspower`]: https://crates.io/crates/lspower //! [lsp]: https://microsoft.github.io/language-server-protocol/ //! //! If you're the maintainer of a crate that supports Tower, we'd love to add //! your crate to this list! Please [open a PR] adding a brief description of //! your library! //! //! ## Getting Started //! //! If you're brand new to Tower and want to start with the basics, we recommend you //! check out some of our [guides]. //! //! The various middleware implementations provided by this crate are feature //! flagged, so that users can only compile the parts of Tower they need. By //! default, all the optional middleware are disabled. //! //! To get started using all of Tower's optional middleware, add this to your //! `Cargo.toml`: //! //! ```toml //! tower = { version = "0.4", features = ["full"] } //! ``` //! //! Alternatively, you can only enable some features. For example, to enable //! only the [`retry`] and [`timeout`][timeouts] middleware, write: //! //! ```toml //! tower = { version = "0.4", features = ["retry", "timeout"] } //! ``` //! //! See [here](#modules) for a complete list of all middleware provided by //! Tower. //! //! //! ## Supported Rust Versions //! //! Tower will keep a rolling MSRV (minimum supported Rust version) policy of **at //! least** 6 months. When increasing the MSRV, the new Rust version must have been //! released at least six months ago. The current MSRV is 1.49.0. //! //! [`Service`]: crate::Service //! [`Layer`]: crate::Layer //! [timeouts]: crate::timeout //! [rate limiting]: crate::limit::rate //! [load balancing]: crate::balance //! [`ServiceBuilder`]: crate::ServiceBuilder //! [utilities]: crate::ServiceExt //! [`tower`]: https://crates.io/crates/tower //! [`tower-service`]: https://crates.io/crates/tower-service //! [`tower-layer`]: https://crates.io/crates/tower-layer //! [`tower-test`]: https://crates.io/crates/tower-test //! [`retry`]: crate::retry //! [open a PR]: https://github.com/tower-rs/tower/compare //! [guides]: https://github.com/tower-rs/tower/tree/master/guides #[macro_use] pub(crate) mod macros; #[cfg(feature = "balance")] #[cfg_attr(docsrs, doc(cfg(feature = "balance")))] pub mod balance; #[cfg(feature = "buffer")] #[cfg_attr(docsrs, doc(cfg(feature = "buffer")))] pub mod buffer; #[cfg(feature = "discover")] #[cfg_attr(docsrs, doc(cfg(feature = "discover")))] pub mod discover; #[cfg(feature = "filter")] #[cfg_attr(docsrs, doc(cfg(feature = "filter")))] pub mod filter; #[cfg(feature = "hedge")] #[cfg_attr(docsrs, doc(cfg(feature = "hedge")))] pub mod hedge; #[cfg(feature = "limit")] #[cfg_attr(docsrs, doc(cfg(feature = "limit")))] pub mod limit; #[cfg(feature = "load")] #[cfg_attr(docsrs, doc(cfg(feature = "load")))] pub mod load; #[cfg(feature = "load-shed")] #[cfg_attr(docsrs, doc(cfg(feature = "load-shed")))] pub mod load_shed; #[cfg(feature = "make")] #[cfg_attr(docsrs, doc(cfg(feature = "make")))] pub mod make; #[cfg(feature = "ready-cache")] #[cfg_attr(docsrs, doc(cfg(feature = "ready-cache")))] pub mod ready_cache; #[cfg(feature = "reconnect")] #[cfg_attr(docsrs, doc(cfg(feature = "reconnect")))] pub mod reconnect; #[cfg(feature = "retry")] #[cfg_attr(docsrs, doc(cfg(feature = "retry")))] pub mod retry; #[cfg(feature = "spawn-ready")] #[cfg_attr(docsrs, doc(cfg(feature = "spawn-ready")))] pub mod spawn_ready; #[cfg(feature = "steer")] #[cfg_attr(docsrs, doc(cfg(feature = "steer")))] pub mod steer; #[cfg(feature = "timeout")] #[cfg_attr(docsrs, doc(cfg(feature = "timeout")))] pub mod timeout; #[cfg(feature = "util")] #[cfg_attr(docsrs, doc(cfg(feature = "util")))] pub mod util; pub mod builder; pub mod layer; #[cfg(feature = "util")] #[cfg_attr(docsrs, doc(cfg(feature = "util")))] #[doc(inline)] pub use self::util::{service_fn, ServiceExt}; #[doc(inline)] pub use crate::builder::ServiceBuilder; #[cfg(feature = "make")] #[cfg_attr(docsrs, doc(cfg(feature = "make")))] #[doc(inline)] pub use crate::make::MakeService; #[doc(inline)] pub use tower_layer::Layer; #[doc(inline)] pub use tower_service::Service; #[allow(unreachable_pub)] mod sealed { pub trait Sealed {} } /// Alias for a type-erased error type. pub type BoxError = Box; tower-0.4.13/src/limit/concurrency/future.rs000064400000000000000000000017740072674642500172310ustar 00000000000000//! [`Future`] types //! //! [`Future`]: std::future::Future use futures_core::ready; use pin_project_lite::pin_project; use std::{ future::Future, pin::Pin, task::{Context, Poll}, }; use tokio::sync::OwnedSemaphorePermit; pin_project! { /// Future for the [`ConcurrencyLimit`] service. /// /// [`ConcurrencyLimit`]: crate::limit::ConcurrencyLimit #[derive(Debug)] pub struct ResponseFuture { #[pin] inner: T, // Keep this around so that it is dropped when the future completes _permit: OwnedSemaphorePermit, } } impl ResponseFuture { pub(crate) fn new(inner: T, _permit: OwnedSemaphorePermit) -> ResponseFuture { ResponseFuture { inner, _permit } } } impl Future for ResponseFuture where F: Future>, { type Output = Result; fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { Poll::Ready(ready!(self.project().inner.poll(cx))) } } tower-0.4.13/src/limit/concurrency/layer.rs000064400000000000000000000032450072674642500170260ustar 00000000000000use std::sync::Arc; use super::ConcurrencyLimit; use tokio::sync::Semaphore; use tower_layer::Layer; /// Enforces a limit on the concurrent number of requests the underlying /// service can handle. #[derive(Debug, Clone)] pub struct ConcurrencyLimitLayer { max: usize, } impl ConcurrencyLimitLayer { /// Create a new concurrency limit layer. pub fn new(max: usize) -> Self { ConcurrencyLimitLayer { max } } } impl Layer for ConcurrencyLimitLayer { type Service = ConcurrencyLimit; fn layer(&self, service: S) -> Self::Service { ConcurrencyLimit::new(service, self.max) } } /// Enforces a limit on the concurrent number of requests the underlying /// service can handle. /// /// Unlike [`ConcurrencyLimitLayer`], which enforces a per-service concurrency /// limit, this layer accepts a owned semaphore (`Arc`) which can be /// shared across multiple services. /// /// Cloning this layer will not create a new semaphore. #[derive(Debug, Clone)] pub struct GlobalConcurrencyLimitLayer { semaphore: Arc, } impl GlobalConcurrencyLimitLayer { /// Create a new `GlobalConcurrencyLimitLayer`. pub fn new(max: usize) -> Self { Self::with_semaphore(Arc::new(Semaphore::new(max))) } /// Create a new `GlobalConcurrencyLimitLayer` from a `Arc` pub fn with_semaphore(semaphore: Arc) -> Self { GlobalConcurrencyLimitLayer { semaphore } } } impl Layer for GlobalConcurrencyLimitLayer { type Service = ConcurrencyLimit; fn layer(&self, service: S) -> Self::Service { ConcurrencyLimit::with_semaphore(service, self.semaphore.clone()) } } tower-0.4.13/src/limit/concurrency/mod.rs000064400000000000000000000003400072674642500164620ustar 00000000000000//! Limit the max number of requests being concurrently processed. pub mod future; mod layer; mod service; pub use self::{ layer::{ConcurrencyLimitLayer, GlobalConcurrencyLimitLayer}, service::ConcurrencyLimit, }; tower-0.4.13/src/limit/concurrency/never.rs000064400000000000000000000003620072674642500170260ustar 00000000000000use std::fmt; #[derive(Debug)] /// An error that can never occur. pub enum Never {} impl fmt::Display for Never { fn fmt(&self, _: &mut fmt::Formatter) -> fmt::Result { match *self {} } } impl std::error::Error for Never {} tower-0.4.13/src/limit/concurrency/service.rs000064400000000000000000000065710072674642500173570ustar 00000000000000use super::future::ResponseFuture; use tokio::sync::{OwnedSemaphorePermit, Semaphore}; use tokio_util::sync::PollSemaphore; use tower_service::Service; use futures_core::ready; use std::{ sync::Arc, task::{Context, Poll}, }; /// Enforces a limit on the concurrent number of requests the underlying /// service can handle. #[derive(Debug)] pub struct ConcurrencyLimit { inner: T, semaphore: PollSemaphore, /// The currently acquired semaphore permit, if there is sufficient /// concurrency to send a new request. /// /// The permit is acquired in `poll_ready`, and taken in `call` when sending /// a new request. permit: Option, } impl ConcurrencyLimit { /// Create a new concurrency limiter. pub fn new(inner: T, max: usize) -> Self { Self::with_semaphore(inner, Arc::new(Semaphore::new(max))) } /// Create a new concurrency limiter with a provided shared semaphore pub fn with_semaphore(inner: T, semaphore: Arc) -> Self { ConcurrencyLimit { inner, semaphore: PollSemaphore::new(semaphore), permit: None, } } /// Get a reference to the inner service pub fn get_ref(&self) -> &T { &self.inner } /// Get a mutable reference to the inner service pub fn get_mut(&mut self) -> &mut T { &mut self.inner } /// Consume `self`, returning the inner service pub fn into_inner(self) -> T { self.inner } } impl Service for ConcurrencyLimit where S: Service, { type Response = S::Response; type Error = S::Error; type Future = ResponseFuture; fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { // If we haven't already acquired a permit from the semaphore, try to // acquire one first. if self.permit.is_none() { self.permit = ready!(self.semaphore.poll_acquire(cx)); debug_assert!( self.permit.is_some(), "ConcurrencyLimit semaphore is never closed, so `poll_acquire` \ should never fail", ); } // Once we've acquired a permit (or if we already had one), poll the // inner service. self.inner.poll_ready(cx) } fn call(&mut self, request: Request) -> Self::Future { // Take the permit let permit = self .permit .take() .expect("max requests in-flight; poll_ready must be called first"); // Call the inner service let future = self.inner.call(request); ResponseFuture::new(future, permit) } } impl Clone for ConcurrencyLimit { fn clone(&self) -> Self { // Since we hold an `OwnedSemaphorePermit`, we can't derive `Clone`. // Instead, when cloning the service, create a new service with the // same semaphore, but with the permit in the un-acquired state. Self { inner: self.inner.clone(), semaphore: self.semaphore.clone(), permit: None, } } } #[cfg(feature = "load")] #[cfg_attr(docsrs, doc(cfg(feature = "load")))] impl crate::load::Load for ConcurrencyLimit where S: crate::load::Load, { type Metric = S::Metric; fn load(&self) -> Self::Metric { self.inner.load() } } tower-0.4.13/src/limit/mod.rs000064400000000000000000000003440072674642500141340ustar 00000000000000//! Tower middleware for limiting requests. pub mod concurrency; pub mod rate; pub use self::{ concurrency::{ConcurrencyLimit, ConcurrencyLimitLayer, GlobalConcurrencyLimitLayer}, rate::{RateLimit, RateLimitLayer}, }; tower-0.4.13/src/limit/rate/layer.rs000064400000000000000000000011640072674642500154250ustar 00000000000000use super::{Rate, RateLimit}; use std::time::Duration; use tower_layer::Layer; /// Enforces a rate limit on the number of requests the underlying /// service can handle over a period of time. #[derive(Debug, Clone)] pub struct RateLimitLayer { rate: Rate, } impl RateLimitLayer { /// Create new rate limit layer. pub fn new(num: u64, per: Duration) -> Self { let rate = Rate::new(num, per); RateLimitLayer { rate } } } impl Layer for RateLimitLayer { type Service = RateLimit; fn layer(&self, service: S) -> Self::Service { RateLimit::new(service, self.rate) } } tower-0.4.13/src/limit/rate/mod.rs000064400000000000000000000003020072674642500150610ustar 00000000000000//! Limit the rate at which requests are processed. mod layer; #[allow(clippy::module_inception)] mod rate; mod service; pub use self::{layer::RateLimitLayer, rate::Rate, service::RateLimit}; tower-0.4.13/src/limit/rate/rate.rs000064400000000000000000000010640072674642500152430ustar 00000000000000use std::time::Duration; /// A rate of requests per time period. #[derive(Debug, Copy, Clone)] pub struct Rate { num: u64, per: Duration, } impl Rate { /// Create a new rate. /// /// # Panics /// /// This function panics if `num` or `per` is 0. pub fn new(num: u64, per: Duration) -> Self { assert!(num > 0); assert!(per > Duration::from_millis(0)); Rate { num, per } } pub(crate) fn num(&self) -> u64 { self.num } pub(crate) fn per(&self) -> Duration { self.per } } tower-0.4.13/src/limit/rate/service.rs000064400000000000000000000071160072674642500157540ustar 00000000000000use super::Rate; use futures_core::ready; use std::{ future::Future, pin::Pin, task::{Context, Poll}, }; use tokio::time::{Instant, Sleep}; use tower_service::Service; /// Enforces a rate limit on the number of requests the underlying /// service can handle over a period of time. #[derive(Debug)] pub struct RateLimit { inner: T, rate: Rate, state: State, sleep: Pin>, } #[derive(Debug)] enum State { // The service has hit its limit Limited, Ready { until: Instant, rem: u64 }, } impl RateLimit { /// Create a new rate limiter pub fn new(inner: T, rate: Rate) -> Self { let until = Instant::now(); let state = State::Ready { until, rem: rate.num(), }; RateLimit { inner, rate, state, // The sleep won't actually be used with this duration, but // we create it eagerly so that we can reset it in place rather than // `Box::pin`ning a new `Sleep` every time we need one. sleep: Box::pin(tokio::time::sleep_until(until)), } } /// Get a reference to the inner service pub fn get_ref(&self) -> &T { &self.inner } /// Get a mutable reference to the inner service pub fn get_mut(&mut self) -> &mut T { &mut self.inner } /// Consume `self`, returning the inner service pub fn into_inner(self) -> T { self.inner } } impl Service for RateLimit where S: Service, { type Response = S::Response; type Error = S::Error; type Future = S::Future; fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { match self.state { State::Ready { .. } => return Poll::Ready(ready!(self.inner.poll_ready(cx))), State::Limited => { if Pin::new(&mut self.sleep).poll(cx).is_pending() { tracing::trace!("rate limit exceeded; sleeping."); return Poll::Pending; } } } self.state = State::Ready { until: Instant::now() + self.rate.per(), rem: self.rate.num(), }; Poll::Ready(ready!(self.inner.poll_ready(cx))) } fn call(&mut self, request: Request) -> Self::Future { match self.state { State::Ready { mut until, mut rem } => { let now = Instant::now(); // If the period has elapsed, reset it. if now >= until { until = now + self.rate.per(); rem = self.rate.num(); } if rem > 1 { rem -= 1; self.state = State::Ready { until, rem }; } else { // The service is disabled until further notice // Reset the sleep future in place, so that we don't have to // deallocate the existing box and allocate a new one. self.sleep.as_mut().reset(until); self.state = State::Limited; } // Call the inner future self.inner.call(request) } State::Limited => panic!("service not ready; poll_ready must be called first"), } } } #[cfg(feature = "load")] #[cfg_attr(docsrs, doc(cfg(feature = "load")))] impl crate::load::Load for RateLimit where S: crate::load::Load, { type Metric = S::Metric; fn load(&self) -> Self::Metric { self.inner.load() } } tower-0.4.13/src/load/completion.rs000064400000000000000000000064320072674642500153330ustar 00000000000000//! Application-specific request completion semantics. use futures_core::ready; use pin_project_lite::pin_project; use std::{ future::Future, pin::Pin, task::{Context, Poll}, }; /// Attaches `H`-typed completion tracker to `V` typed values. /// /// Handles (of type `H`) are intended to be RAII guards that primarily implement [`Drop`] and update /// load metric state as they are dropped. This trait allows implementors to "forward" the handle /// to later parts of the request-handling pipeline, so that the handle is only dropped when the /// request has truly completed. /// /// This utility allows load metrics to have a protocol-agnostic means to track streams past their /// initial response future. For example, if `V` represents an HTTP response type, an /// implementation could add `H`-typed handles to each response's extensions to detect when all the /// response's extensions have been dropped. /// /// A base `impl TrackCompletion for CompleteOnResponse` is provided to drop the handle /// once the response future is resolved. This is appropriate when a response is discrete and /// cannot comprise multiple messages. /// /// In many cases, the `Output` type is simply `V`. However, [`TrackCompletion`] may alter the type /// in order to instrument it appropriately. For example, an HTTP [`TrackCompletion`] may modify /// the body type: so a [`TrackCompletion`] that takes values of type /// [`http::Response`][response] may output values of type [`http::Response`][response]. /// /// [response]: https://docs.rs/http/latest/http/response/struct.Response.html pub trait TrackCompletion: Clone { /// The instrumented value type. type Output; /// Attaches a `H`-typed handle to a `V`-typed value. fn track_completion(&self, handle: H, value: V) -> Self::Output; } /// A [`TrackCompletion`] implementation that considers the request completed when the response /// future is resolved. #[derive(Clone, Copy, Debug, Default)] #[non_exhaustive] pub struct CompleteOnResponse; pin_project! { /// Attaches a `C`-typed completion tracker to the result of an `F`-typed [`Future`]. #[derive(Debug)] pub struct TrackCompletionFuture { #[pin] future: F, handle: Option, completion: C, } } // ===== impl InstrumentFuture ===== impl TrackCompletionFuture { /// Wraps a future, propagating the tracker into its value if successful. pub fn new(completion: C, handle: H, future: F) -> Self { TrackCompletionFuture { future, completion, handle: Some(handle), } } } impl Future for TrackCompletionFuture where F: Future>, C: TrackCompletion, { type Output = Result; fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { let this = self.project(); let rsp = ready!(this.future.poll(cx))?; let h = this.handle.take().expect("handle"); Poll::Ready(Ok(this.completion.track_completion(h, rsp))) } } // ===== CompleteOnResponse ===== impl TrackCompletion for CompleteOnResponse { type Output = V; fn track_completion(&self, handle: H, value: V) -> V { drop(handle); value } } tower-0.4.13/src/load/constant.rs000064400000000000000000000042110072674642500150040ustar 00000000000000//! A constant [`Load`] implementation. #[cfg(feature = "discover")] use crate::discover::{Change, Discover}; #[cfg(feature = "discover")] use futures_core::{ready, Stream}; #[cfg(feature = "discover")] use std::pin::Pin; use super::Load; use pin_project_lite::pin_project; use std::task::{Context, Poll}; use tower_service::Service; pin_project! { #[derive(Debug)] /// Wraps a type so that it implements [`Load`] and returns a constant load metric. /// /// This load estimator is primarily useful for testing. pub struct Constant { inner: T, load: M, } } // ===== impl Constant ===== impl Constant { /// Wraps a `T`-typed service with a constant `M`-typed load metric. pub fn new(inner: T, load: M) -> Self { Self { inner, load } } } impl Load for Constant { type Metric = M; fn load(&self) -> M { self.load } } impl Service for Constant where S: Service, M: Copy, { type Response = S::Response; type Error = S::Error; type Future = S::Future; fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { self.inner.poll_ready(cx) } fn call(&mut self, req: Request) -> Self::Future { self.inner.call(req) } } /// Proxies [`Discover`] such that all changes are wrapped with a constant load. #[cfg(feature = "discover")] #[cfg_attr(docsrs, doc(cfg(feature = "discover")))] impl Stream for Constant { type Item = Result>, D::Error>; /// Yields the next discovery change set. fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { use self::Change::*; let this = self.project(); let change = match ready!(Pin::new(this.inner).poll_discover(cx)).transpose()? { None => return Poll::Ready(None), Some(Insert(k, svc)) => Insert(k, Constant::new(svc, *this.load)), Some(Remove(k)) => Remove(k), }; Poll::Ready(Some(Ok(change))) } } tower-0.4.13/src/load/mod.rs000064400000000000000000000071230072674642500137370ustar 00000000000000//! Service load measurement //! //! This module provides the [`Load`] trait, which allows measuring how loaded a service is. //! It also provides several wrapper types that measure load in different ways: //! //! - [`Constant`] — Always returns the same constant load value for a service. //! - [`PendingRequests`] — Measures load by tracking the number of in-flight requests. //! - [`PeakEwma`] — Measures load using a moving average of the peak latency for the service. //! //! In general, you will want to use one of these when using the types in [`tower::balance`] which //! balance services depending on their load. Which load metric to use depends on your exact //! use-case, but the ones above should get you quite far! //! //! When the `discover` feature is enabled, wrapper types for [`Discover`] that //! wrap the discovered services with the given load estimator are also provided. //! //! # When does a request complete? //! //! For many applications, the request life-cycle is relatively simple: when a service responds to //! a request, that request is done, and the system can forget about it. However, for some //! applications, the service may respond to the initial request while other parts of the system //! are still acting on that request. In such an application, the system load must take these //! requests into account as well, or risk the system underestimating its own load. //! //! To support these use-cases, the load estimators in this module are parameterized by the //! [`TrackCompletion`] trait, with [`CompleteOnResponse`] as the default type. The behavior of //! [`CompleteOnResponse`] is what you would normally expect for a request-response cycle: when the //! response is produced, the request is considered "finished", and load goes down. This can be //! overriden by your own user-defined type to track more complex request completion semantics. See //! the documentation for [`completion`] for more details. //! //! # Examples //! //! ```rust //! # #[cfg(feature = "util")] //! use tower::util::ServiceExt; //! # #[cfg(feature = "util")] //! use tower::{load::Load, Service}; //! # #[cfg(feature = "util")] //! async fn simple_balance( //! svc1: &mut S1, //! svc2: &mut S2, //! request: R //! ) -> Result //! where //! S1: Load + Service, //! S2: Load + Service //! { //! if svc1.load() < svc2.load() { //! svc1.ready().await?.call(request).await //! } else { //! svc2.ready().await?.call(request).await //! } //! } //! ``` //! //! [`tower::balance`]: crate::balance //! [`Discover`]: crate::discover::Discover //! [`CompleteOnResponse`]: crate::load::completion::CompleteOnResponse // TODO: a custom completion example would be good here pub mod completion; mod constant; pub mod peak_ewma; pub mod pending_requests; pub use self::{ completion::{CompleteOnResponse, TrackCompletion}, constant::Constant, peak_ewma::PeakEwma, pending_requests::PendingRequests, }; #[cfg(feature = "discover")] pub use self::{peak_ewma::PeakEwmaDiscover, pending_requests::PendingRequestsDiscover}; /// Types that implement this trait can give an estimate of how loaded they are. /// /// See the module documentation for more details. pub trait Load { /// A comparable load metric. /// /// Lesser values indicate that the service is less loaded, and should be preferred for new /// requests over another service with a higher value. type Metric: PartialOrd; /// Estimate the service's current load. fn load(&self) -> Self::Metric; } tower-0.4.13/src/load/peak_ewma.rs000064400000000000000000000311560072674642500151140ustar 00000000000000//! A `Load` implementation that measures load using the PeakEWMA response latency. #[cfg(feature = "discover")] use crate::discover::{Change, Discover}; #[cfg(feature = "discover")] use futures_core::{ready, Stream}; #[cfg(feature = "discover")] use pin_project_lite::pin_project; #[cfg(feature = "discover")] use std::pin::Pin; use super::completion::{CompleteOnResponse, TrackCompletion, TrackCompletionFuture}; use super::Load; use std::task::{Context, Poll}; use std::{ sync::{Arc, Mutex}, time::Duration, }; use tokio::time::Instant; use tower_service::Service; use tracing::trace; /// Measures the load of the underlying service using Peak-EWMA load measurement. /// /// [`PeakEwma`] implements [`Load`] with the [`Cost`] metric that estimates the amount of /// pending work to an endpoint. Work is calculated by multiplying the /// exponentially-weighted moving average (EWMA) of response latencies by the number of /// pending requests. The Peak-EWMA algorithm is designed to be especially sensitive to /// worst-case latencies. Over time, the peak latency value decays towards the moving /// average of latencies to the endpoint. /// /// When no latency information has been measured for an endpoint, an arbitrary default /// RTT of 1 second is used to prevent the endpoint from being overloaded before a /// meaningful baseline can be established.. /// /// ## Note /// /// This is derived from [Finagle][finagle], which is distributed under the Apache V2 /// license. Copyright 2017, Twitter Inc. /// /// [finagle]: /// https://github.com/twitter/finagle/blob/9cc08d15216497bb03a1cafda96b7266cfbbcff1/finagle-core/src/main/scala/com/twitter/finagle/loadbalancer/PeakEwma.scala #[derive(Debug)] pub struct PeakEwma { service: S, decay_ns: f64, rtt_estimate: Arc>, completion: C, } #[cfg(feature = "discover")] pin_project! { /// Wraps a `D`-typed stream of discovered services with `PeakEwma`. #[cfg_attr(docsrs, doc(cfg(feature = "discover")))] #[derive(Debug)] pub struct PeakEwmaDiscover { #[pin] discover: D, decay_ns: f64, default_rtt: Duration, completion: C, } } /// Represents the relative cost of communicating with a service. /// /// The underlying value estimates the amount of pending work to a service: the Peak-EWMA /// latency estimate multiplied by the number of pending requests. #[derive(Copy, Clone, Debug, PartialEq, PartialOrd)] pub struct Cost(f64); /// Tracks an in-flight request and updates the RTT-estimate on Drop. #[derive(Debug)] pub struct Handle { sent_at: Instant, decay_ns: f64, rtt_estimate: Arc>, } /// Holds the current RTT estimate and the last time this value was updated. #[derive(Debug)] struct RttEstimate { update_at: Instant, rtt_ns: f64, } const NANOS_PER_MILLI: f64 = 1_000_000.0; // ===== impl PeakEwma ===== impl PeakEwma { /// Wraps an `S`-typed service so that its load is tracked by the EWMA of its peak latency. pub fn new(service: S, default_rtt: Duration, decay_ns: f64, completion: C) -> Self { debug_assert!(decay_ns > 0.0, "decay_ns must be positive"); Self { service, decay_ns, rtt_estimate: Arc::new(Mutex::new(RttEstimate::new(nanos(default_rtt)))), completion, } } fn handle(&self) -> Handle { Handle { decay_ns: self.decay_ns, sent_at: Instant::now(), rtt_estimate: self.rtt_estimate.clone(), } } } impl Service for PeakEwma where S: Service, C: TrackCompletion, { type Response = C::Output; type Error = S::Error; type Future = TrackCompletionFuture; fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { self.service.poll_ready(cx) } fn call(&mut self, req: Request) -> Self::Future { TrackCompletionFuture::new( self.completion.clone(), self.handle(), self.service.call(req), ) } } impl Load for PeakEwma { type Metric = Cost; fn load(&self) -> Self::Metric { let pending = Arc::strong_count(&self.rtt_estimate) as u32 - 1; // Update the RTT estimate to account for decay since the last update. // If an estimate has not been established, a default is provided let estimate = self.update_estimate(); let cost = Cost(estimate * f64::from(pending + 1)); trace!( "load estimate={:.0}ms pending={} cost={:?}", estimate / NANOS_PER_MILLI, pending, cost, ); cost } } impl PeakEwma { fn update_estimate(&self) -> f64 { let mut rtt = self.rtt_estimate.lock().expect("peak ewma prior_estimate"); rtt.decay(self.decay_ns) } } // ===== impl PeakEwmaDiscover ===== #[cfg(feature = "discover")] impl PeakEwmaDiscover { /// Wraps a `D`-typed [`Discover`] so that services have a [`PeakEwma`] load metric. /// /// The provided `default_rtt` is used as the default RTT estimate for newly /// added services. /// /// They `decay` value determines over what time period a RTT estimate should /// decay. pub fn new(discover: D, default_rtt: Duration, decay: Duration, completion: C) -> Self where D: Discover, D::Service: Service, C: TrackCompletion>::Response>, { PeakEwmaDiscover { discover, decay_ns: nanos(decay), default_rtt, completion, } } } #[cfg(feature = "discover")] #[cfg_attr(docsrs, doc(cfg(feature = "discover")))] impl Stream for PeakEwmaDiscover where D: Discover, C: Clone, { type Item = Result>, D::Error>; fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { let this = self.project(); let change = match ready!(this.discover.poll_discover(cx)).transpose()? { None => return Poll::Ready(None), Some(Change::Remove(k)) => Change::Remove(k), Some(Change::Insert(k, svc)) => { let peak_ewma = PeakEwma::new( svc, *this.default_rtt, *this.decay_ns, this.completion.clone(), ); Change::Insert(k, peak_ewma) } }; Poll::Ready(Some(Ok(change))) } } // ===== impl RttEstimate ===== impl RttEstimate { fn new(rtt_ns: f64) -> Self { debug_assert!(0.0 < rtt_ns, "rtt must be positive"); Self { rtt_ns, update_at: Instant::now(), } } /// Decays the RTT estimate with a decay period of `decay_ns`. fn decay(&mut self, decay_ns: f64) -> f64 { // Updates with a 0 duration so that the estimate decays towards 0. let now = Instant::now(); self.update(now, now, decay_ns) } /// Updates the Peak-EWMA RTT estimate. /// /// The elapsed time from `sent_at` to `recv_at` is added fn update(&mut self, sent_at: Instant, recv_at: Instant, decay_ns: f64) -> f64 { debug_assert!( sent_at <= recv_at, "recv_at={:?} after sent_at={:?}", recv_at, sent_at ); let rtt = nanos(recv_at.saturating_duration_since(sent_at)); let now = Instant::now(); debug_assert!( self.update_at <= now, "update_at={:?} in the future", self.update_at ); self.rtt_ns = if self.rtt_ns < rtt { // For Peak-EWMA, always use the worst-case (peak) value as the estimate for // subsequent requests. trace!( "update peak rtt={}ms prior={}ms", rtt / NANOS_PER_MILLI, self.rtt_ns / NANOS_PER_MILLI, ); rtt } else { // When an RTT is observed that is less than the estimated RTT, we decay the // prior estimate according to how much time has elapsed since the last // update. The inverse of the decay is used to scale the estimate towards the // observed RTT value. let elapsed = nanos(now.saturating_duration_since(self.update_at)); let decay = (-elapsed / decay_ns).exp(); let recency = 1.0 - decay; let next_estimate = (self.rtt_ns * decay) + (rtt * recency); trace!( "update rtt={:03.0}ms decay={:06.0}ns; next={:03.0}ms", rtt / NANOS_PER_MILLI, self.rtt_ns - next_estimate, next_estimate / NANOS_PER_MILLI, ); next_estimate }; self.update_at = now; self.rtt_ns } } // ===== impl Handle ===== impl Drop for Handle { fn drop(&mut self) { let recv_at = Instant::now(); if let Ok(mut rtt) = self.rtt_estimate.lock() { rtt.update(self.sent_at, recv_at, self.decay_ns); } } } // ===== impl Cost ===== // Utility that converts durations to nanos in f64. // // Due to a lossy transformation, the maximum value that can be represented is ~585 years, // which, I hope, is more than enough to represent request latencies. fn nanos(d: Duration) -> f64 { const NANOS_PER_SEC: u64 = 1_000_000_000; let n = f64::from(d.subsec_nanos()); let s = d.as_secs().saturating_mul(NANOS_PER_SEC) as f64; n + s } #[cfg(test)] mod tests { use futures_util::future; use std::time::Duration; use tokio::time; use tokio_test::{assert_ready, assert_ready_ok, task}; use super::*; struct Svc; impl Service<()> for Svc { type Response = (); type Error = (); type Future = future::Ready>; fn poll_ready(&mut self, _: &mut Context<'_>) -> Poll> { Poll::Ready(Ok(())) } fn call(&mut self, (): ()) -> Self::Future { future::ok(()) } } /// The default RTT estimate decays, so that new nodes are considered if the /// default RTT is too high. #[tokio::test] async fn default_decay() { time::pause(); let svc = PeakEwma::new( Svc, Duration::from_millis(10), NANOS_PER_MILLI * 1_000.0, CompleteOnResponse, ); let Cost(load) = svc.load(); assert_eq!(load, 10.0 * NANOS_PER_MILLI); time::advance(Duration::from_millis(100)).await; let Cost(load) = svc.load(); assert!(9.0 * NANOS_PER_MILLI < load && load < 10.0 * NANOS_PER_MILLI); time::advance(Duration::from_millis(100)).await; let Cost(load) = svc.load(); assert!(8.0 * NANOS_PER_MILLI < load && load < 9.0 * NANOS_PER_MILLI); } // The default RTT estimate decays, so that new nodes are considered if the default RTT is too // high. #[tokio::test] async fn compound_decay() { time::pause(); let mut svc = PeakEwma::new( Svc, Duration::from_millis(20), NANOS_PER_MILLI * 1_000.0, CompleteOnResponse, ); assert_eq!(svc.load(), Cost(20.0 * NANOS_PER_MILLI)); time::advance(Duration::from_millis(100)).await; let mut rsp0 = task::spawn(svc.call(())); assert!(svc.load() > Cost(20.0 * NANOS_PER_MILLI)); time::advance(Duration::from_millis(100)).await; let mut rsp1 = task::spawn(svc.call(())); assert!(svc.load() > Cost(40.0 * NANOS_PER_MILLI)); time::advance(Duration::from_millis(100)).await; let () = assert_ready_ok!(rsp0.poll()); assert_eq!(svc.load(), Cost(400_000_000.0)); time::advance(Duration::from_millis(100)).await; let () = assert_ready_ok!(rsp1.poll()); assert_eq!(svc.load(), Cost(200_000_000.0)); // Check that values decay as time elapses time::advance(Duration::from_secs(1)).await; assert!(svc.load() < Cost(100_000_000.0)); time::advance(Duration::from_secs(10)).await; assert!(svc.load() < Cost(100_000.0)); } #[test] fn nanos() { assert_eq!(super::nanos(Duration::new(0, 0)), 0.0); assert_eq!(super::nanos(Duration::new(0, 123)), 123.0); assert_eq!(super::nanos(Duration::new(1, 23)), 1_000_000_023.0); assert_eq!( super::nanos(Duration::new(::std::u64::MAX, 999_999_999)), 18446744074709553000.0 ); } } tower-0.4.13/src/load/pending_requests.rs000064400000000000000000000135200072674642500165350ustar 00000000000000//! A [`Load`] implementation that measures load using the number of in-flight requests. #[cfg(feature = "discover")] use crate::discover::{Change, Discover}; #[cfg(feature = "discover")] use futures_core::{ready, Stream}; #[cfg(feature = "discover")] use pin_project_lite::pin_project; #[cfg(feature = "discover")] use std::pin::Pin; use super::completion::{CompleteOnResponse, TrackCompletion, TrackCompletionFuture}; use super::Load; use std::sync::Arc; use std::task::{Context, Poll}; use tower_service::Service; /// Measures the load of the underlying service using the number of currently-pending requests. #[derive(Debug)] pub struct PendingRequests { service: S, ref_count: RefCount, completion: C, } /// Shared between instances of [`PendingRequests`] and [`Handle`] to track active references. #[derive(Clone, Debug, Default)] struct RefCount(Arc<()>); #[cfg(feature = "discover")] pin_project! { /// Wraps a `D`-typed stream of discovered services with [`PendingRequests`]. #[cfg_attr(docsrs, doc(cfg(feature = "discover")))] #[derive(Debug)] pub struct PendingRequestsDiscover { #[pin] discover: D, completion: C, } } /// Represents the number of currently-pending requests to a given service. #[derive(Clone, Copy, Debug, Default, PartialOrd, PartialEq, Ord, Eq)] pub struct Count(usize); /// Tracks an in-flight request by reference count. #[derive(Debug)] pub struct Handle(RefCount); // ===== impl PendingRequests ===== impl PendingRequests { /// Wraps an `S`-typed service so that its load is tracked by the number of pending requests. pub fn new(service: S, completion: C) -> Self { Self { service, completion, ref_count: RefCount::default(), } } fn handle(&self) -> Handle { Handle(self.ref_count.clone()) } } impl Load for PendingRequests { type Metric = Count; fn load(&self) -> Count { // Count the number of references that aren't `self`. Count(self.ref_count.ref_count() - 1) } } impl Service for PendingRequests where S: Service, C: TrackCompletion, { type Response = C::Output; type Error = S::Error; type Future = TrackCompletionFuture; fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { self.service.poll_ready(cx) } fn call(&mut self, req: Request) -> Self::Future { TrackCompletionFuture::new( self.completion.clone(), self.handle(), self.service.call(req), ) } } // ===== impl PendingRequestsDiscover ===== #[cfg(feature = "discover")] impl PendingRequestsDiscover { /// Wraps a [`Discover`], wrapping all of its services with [`PendingRequests`]. pub fn new(discover: D, completion: C) -> Self where D: Discover, D::Service: Service, C: TrackCompletion>::Response>, { Self { discover, completion, } } } #[cfg(feature = "discover")] impl Stream for PendingRequestsDiscover where D: Discover, C: Clone, { type Item = Result>, D::Error>; /// Yields the next discovery change set. fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { use self::Change::*; let this = self.project(); let change = match ready!(this.discover.poll_discover(cx)).transpose()? { None => return Poll::Ready(None), Some(Insert(k, svc)) => Insert(k, PendingRequests::new(svc, this.completion.clone())), Some(Remove(k)) => Remove(k), }; Poll::Ready(Some(Ok(change))) } } // ==== RefCount ==== impl RefCount { pub(crate) fn ref_count(&self) -> usize { Arc::strong_count(&self.0) } } #[cfg(test)] mod tests { use super::*; use futures_util::future; use std::task::{Context, Poll}; struct Svc; impl Service<()> for Svc { type Response = (); type Error = (); type Future = future::Ready>; fn poll_ready(&mut self, _: &mut Context<'_>) -> Poll> { Poll::Ready(Ok(())) } fn call(&mut self, (): ()) -> Self::Future { future::ok(()) } } #[test] fn default() { let mut svc = PendingRequests::new(Svc, CompleteOnResponse); assert_eq!(svc.load(), Count(0)); let rsp0 = svc.call(()); assert_eq!(svc.load(), Count(1)); let rsp1 = svc.call(()); assert_eq!(svc.load(), Count(2)); let () = tokio_test::block_on(rsp0).unwrap(); assert_eq!(svc.load(), Count(1)); let () = tokio_test::block_on(rsp1).unwrap(); assert_eq!(svc.load(), Count(0)); } #[test] fn with_completion() { #[derive(Clone)] struct IntoHandle; impl TrackCompletion for IntoHandle { type Output = Handle; fn track_completion(&self, i: Handle, (): ()) -> Handle { i } } let mut svc = PendingRequests::new(Svc, IntoHandle); assert_eq!(svc.load(), Count(0)); let rsp = svc.call(()); assert_eq!(svc.load(), Count(1)); let i0 = tokio_test::block_on(rsp).unwrap(); assert_eq!(svc.load(), Count(1)); let rsp = svc.call(()); assert_eq!(svc.load(), Count(2)); let i1 = tokio_test::block_on(rsp).unwrap(); assert_eq!(svc.load(), Count(2)); drop(i1); assert_eq!(svc.load(), Count(1)); drop(i0); assert_eq!(svc.load(), Count(0)); } } tower-0.4.13/src/load_shed/error.rs000064400000000000000000000013240072674642500153110ustar 00000000000000//! Error types use std::fmt; /// An error returned by [`LoadShed`] when the underlying service /// is not ready to handle any requests at the time of being /// called. /// /// [`LoadShed`]: crate::load_shed::LoadShed #[derive(Default)] pub struct Overloaded { _p: (), } impl Overloaded { /// Construct a new overloaded error pub fn new() -> Self { Overloaded { _p: () } } } impl fmt::Debug for Overloaded { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { f.write_str("Overloaded") } } impl fmt::Display for Overloaded { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { f.write_str("service overloaded") } } impl std::error::Error for Overloaded {} tower-0.4.13/src/load_shed/future.rs000064400000000000000000000031270072674642500154750ustar 00000000000000//! Future types use std::fmt; use std::future::Future; use std::pin::Pin; use std::task::{Context, Poll}; use futures_core::ready; use pin_project_lite::pin_project; use super::error::Overloaded; pin_project! { /// Future for the [`LoadShed`] service. /// /// [`LoadShed`]: crate::load_shed::LoadShed pub struct ResponseFuture { #[pin] state: ResponseState, } } pin_project! { #[project = ResponseStateProj] enum ResponseState { Called { #[pin] fut: F }, Overloaded, } } impl ResponseFuture { pub(crate) fn called(fut: F) -> Self { ResponseFuture { state: ResponseState::Called { fut }, } } pub(crate) fn overloaded() -> Self { ResponseFuture { state: ResponseState::Overloaded, } } } impl Future for ResponseFuture where F: Future>, E: Into, { type Output = Result; fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { match self.project().state.project() { ResponseStateProj::Called { fut } => { Poll::Ready(ready!(fut.poll(cx)).map_err(Into::into)) } ResponseStateProj::Overloaded => Poll::Ready(Err(Overloaded::new().into())), } } } impl fmt::Debug for ResponseFuture where // bounds for future-proofing... F: fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { f.write_str("ResponseFuture") } } tower-0.4.13/src/load_shed/layer.rs000064400000000000000000000012160072674642500152740ustar 00000000000000use std::fmt; use tower_layer::Layer; use super::LoadShed; /// A [`Layer`] to wrap services in [`LoadShed`] middleware. /// /// [`Layer`]: crate::Layer #[derive(Clone, Default)] pub struct LoadShedLayer { _p: (), } impl LoadShedLayer { /// Creates a new layer. pub fn new() -> Self { LoadShedLayer { _p: () } } } impl Layer for LoadShedLayer { type Service = LoadShed; fn layer(&self, service: S) -> Self::Service { LoadShed::new(service) } } impl fmt::Debug for LoadShedLayer { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { f.debug_struct("LoadShedLayer").finish() } } tower-0.4.13/src/load_shed/mod.rs000064400000000000000000000040260072674642500147410ustar 00000000000000//! Middleware for shedding load when inner services aren't ready. use std::task::{Context, Poll}; use tower_service::Service; pub mod error; pub mod future; mod layer; use self::future::ResponseFuture; pub use self::layer::LoadShedLayer; /// A [`Service`] that sheds load when the inner service isn't ready. /// /// [`Service`]: crate::Service #[derive(Debug)] pub struct LoadShed { inner: S, is_ready: bool, } // ===== impl LoadShed ===== impl LoadShed { /// Wraps a service in [`LoadShed`] middleware. pub fn new(inner: S) -> Self { LoadShed { inner, is_ready: false, } } } impl Service for LoadShed where S: Service, S::Error: Into, { type Response = S::Response; type Error = crate::BoxError; type Future = ResponseFuture; fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { // We check for readiness here, so that we can know in `call` if // the inner service is overloaded or not. self.is_ready = match self.inner.poll_ready(cx) { Poll::Ready(Err(e)) => return Poll::Ready(Err(e.into())), r => r.is_ready(), }; // But we always report Ready, so that layers above don't wait until // the inner service is ready (the entire point of this layer!) Poll::Ready(Ok(())) } fn call(&mut self, req: Req) -> Self::Future { if self.is_ready { // readiness only counts once, you need to check again! self.is_ready = false; ResponseFuture::called(self.inner.call(req)) } else { ResponseFuture::overloaded() } } } impl Clone for LoadShed { fn clone(&self) -> Self { LoadShed { inner: self.inner.clone(), // new clones shouldn't carry the readiness state, as a cloneable // inner service likely tracks readiness per clone. is_ready: false, } } } tower-0.4.13/src/macros.rs000064400000000000000000000024030072674642500135210ustar 00000000000000#[cfg(any( feature = "util", feature = "spawn-ready", feature = "filter", feature = "make" ))] macro_rules! opaque_future { ($(#[$m:meta])* pub type $name:ident<$($param:ident),+> = $actual:ty;) => { pin_project_lite::pin_project! { $(#[$m])* pub struct $name<$($param),+> { #[pin] inner: $actual } } impl<$($param),+> $name<$($param),+> { pub(crate) fn new(inner: $actual) -> Self { Self { inner } } } impl<$($param),+> std::fmt::Debug for $name<$($param),+> { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_tuple(stringify!($name)).field(&format_args!("...")).finish() } } impl<$($param),+> std::future::Future for $name<$($param),+> where $actual: std::future::Future, { type Output = <$actual as std::future::Future>::Output; #[inline] fn poll(self: std::pin::Pin<&mut Self>, cx: &mut std::task::Context<'_>) -> std::task::Poll { self.project().inner.poll(cx) } } } } tower-0.4.13/src/make/make_connection.rs000064400000000000000000000031030072674642500163040ustar 00000000000000use crate::sealed::Sealed; use std::future::Future; use std::task::{Context, Poll}; use tokio::io::{AsyncRead, AsyncWrite}; use tower_service::Service; /// The [`MakeConnection`] trait is used to create transports. /// /// The goal of this service is to allow composable methods for creating /// `AsyncRead + AsyncWrite` transports. This could mean creating a TLS /// based connection or using some other method to authenticate the connection. pub trait MakeConnection: Sealed<(Target,)> { /// The transport provided by this service type Connection: AsyncRead + AsyncWrite; /// Errors produced by the connecting service type Error; /// The future that eventually produces the transport type Future: Future>; /// Returns `Poll::Ready(Ok(()))` when it is able to make more connections. fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll>; /// Connect and return a transport asynchronously fn make_connection(&mut self, target: Target) -> Self::Future; } impl Sealed<(Target,)> for S where S: Service {} impl MakeConnection for C where C: Service, C::Response: AsyncRead + AsyncWrite, { type Connection = C::Response; type Error = C::Error; type Future = C::Future; fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { Service::poll_ready(self, cx) } fn make_connection(&mut self, target: Target) -> Self::Future { Service::call(self, target) } } tower-0.4.13/src/make/make_service/shared.rs000064400000000000000000000074260072674642500170670ustar 00000000000000use std::convert::Infallible; use std::task::{Context, Poll}; use tower_service::Service; /// A [`MakeService`] that produces services by cloning an inner service. /// /// [`MakeService`]: super::MakeService /// /// # Example /// /// ``` /// # use std::task::{Context, Poll}; /// # use std::pin::Pin; /// # use std::convert::Infallible; /// use tower::make::{MakeService, Shared}; /// use tower::buffer::Buffer; /// use tower::Service; /// use futures::future::{Ready, ready}; /// /// // An example connection type /// struct Connection {} /// /// // An example request type /// struct Request {} /// /// // An example response type /// struct Response {} /// /// // Some service that doesn't implement `Clone` /// struct MyService; /// /// impl Service for MyService { /// type Response = Response; /// type Error = Infallible; /// type Future = Ready>; /// /// fn poll_ready(&mut self, _cx: &mut Context<'_>) -> Poll> { /// Poll::Ready(Ok(())) /// } /// /// fn call(&mut self, req: Request) -> Self::Future { /// ready(Ok(Response {})) /// } /// } /// /// // Example function that runs a service by accepting new connections and using /// // `Make` to create new services that might be bound to the connection. /// // /// // This is similar to what you might find in hyper. /// async fn serve_make_service(make: Make) /// where /// Make: MakeService /// { /// // ... /// } /// /// # async { /// // Our service /// let svc = MyService; /// /// // Make it `Clone` by putting a channel in front /// let buffered = Buffer::new(svc, 1024); /// /// // Convert it into a `MakeService` /// let make = Shared::new(buffered); /// /// // Run the service and just ignore the `Connection`s as `MyService` doesn't need them /// serve_make_service(make).await; /// # }; /// ``` #[derive(Debug, Clone, Copy)] pub struct Shared { service: S, } impl Shared { /// Create a new [`Shared`] from a service. pub fn new(service: S) -> Self { Self { service } } } impl Service for Shared where S: Clone, { type Response = S; type Error = Infallible; type Future = SharedFuture; fn poll_ready(&mut self, _cx: &mut Context<'_>) -> Poll> { Poll::Ready(Ok(())) } fn call(&mut self, _target: T) -> Self::Future { SharedFuture::new(futures_util::future::ready(Ok(self.service.clone()))) } } opaque_future! { /// Response future from [`Shared`] services. pub type SharedFuture = futures_util::future::Ready>; } #[cfg(test)] mod tests { use super::*; use crate::make::MakeService; use crate::service_fn; use futures::future::poll_fn; async fn echo(req: R) -> Result { Ok(req) } #[tokio::test] async fn as_make_service() { let mut shared = Shared::new(service_fn(echo::<&'static str>)); poll_fn(|cx| MakeService::<(), _>::poll_ready(&mut shared, cx)) .await .unwrap(); let mut svc = shared.make_service(()).await.unwrap(); poll_fn(|cx| svc.poll_ready(cx)).await.unwrap(); let res = svc.call("foo").await.unwrap(); assert_eq!(res, "foo"); } #[tokio::test] async fn as_make_service_into_service() { let shared = Shared::new(service_fn(echo::<&'static str>)); let mut shared = MakeService::<(), _>::into_service(shared); poll_fn(|cx| Service::<()>::poll_ready(&mut shared, cx)) .await .unwrap(); let mut svc = shared.call(()).await.unwrap(); poll_fn(|cx| svc.poll_ready(cx)).await.unwrap(); let res = svc.call("foo").await.unwrap(); assert_eq!(res, "foo"); } } tower-0.4.13/src/make/make_service.rs000064400000000000000000000155470072674642500156240ustar 00000000000000//! Contains [`MakeService`] which is a trait alias for a [`Service`] of [`Service`]s. use crate::sealed::Sealed; use std::fmt; use std::future::Future; use std::marker::PhantomData; use std::task::{Context, Poll}; use tower_service::Service; pub(crate) mod shared; /// Creates new [`Service`] values. /// /// Acts as a service factory. This is useful for cases where new [`Service`] /// values must be produced. One case is a TCP server listener. The listener /// accepts new TCP streams, obtains a new [`Service`] value using the /// [`MakeService`] trait, and uses that new [`Service`] value to process inbound /// requests on that new TCP stream. /// /// This is essentially a trait alias for a [`Service`] of [`Service`]s. pub trait MakeService: Sealed<(Target, Request)> { /// Responses given by the service type Response; /// Errors produced by the service type Error; /// The [`Service`] value created by this factory type Service: Service; /// Errors produced while building a service. type MakeError; /// The future of the [`Service`] instance. type Future: Future>; /// Returns [`Poll::Ready`] when the factory is able to create more services. /// /// If the service is at capacity, then [`Poll::Pending`] is returned and the task /// is notified when the service becomes ready again. This function is /// expected to be called while on a task. /// /// [`Poll::Ready`]: std::task::Poll::Ready /// [`Poll::Pending`]: std::task::Poll::Pending fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll>; /// Create and return a new service value asynchronously. fn make_service(&mut self, target: Target) -> Self::Future; /// Consume this [`MakeService`] and convert it into a [`Service`]. /// /// # Example /// ``` /// use std::convert::Infallible; /// use tower::Service; /// use tower::make::MakeService; /// use tower::service_fn; /// /// # fn main() { /// # async { /// // A `MakeService` /// let make_service = service_fn(|make_req: ()| async { /// Ok::<_, Infallible>(service_fn(|req: String| async { /// Ok::<_, Infallible>(req) /// })) /// }); /// /// // Convert the `MakeService` into a `Service` /// let mut svc = make_service.into_service(); /// /// // Make a new service /// let mut new_svc = svc.call(()).await.unwrap(); /// /// // Call the service /// let res = new_svc.call("foo".to_string()).await.unwrap(); /// # }; /// # } /// ``` fn into_service(self) -> IntoService where Self: Sized, { IntoService { make: self, _marker: PhantomData, } } /// Convert this [`MakeService`] into a [`Service`] without consuming the original [`MakeService`]. /// /// # Example /// ``` /// use std::convert::Infallible; /// use tower::Service; /// use tower::make::MakeService; /// use tower::service_fn; /// /// # fn main() { /// # async { /// // A `MakeService` /// let mut make_service = service_fn(|make_req: ()| async { /// Ok::<_, Infallible>(service_fn(|req: String| async { /// Ok::<_, Infallible>(req) /// })) /// }); /// /// // Convert the `MakeService` into a `Service` /// let mut svc = make_service.as_service(); /// /// // Make a new service /// let mut new_svc = svc.call(()).await.unwrap(); /// /// // Call the service /// let res = new_svc.call("foo".to_string()).await.unwrap(); /// /// // The original `MakeService` is still accessible /// let new_svc = make_service.make_service(()).await.unwrap(); /// # }; /// # } /// ``` fn as_service(&mut self) -> AsService where Self: Sized, { AsService { make: self, _marker: PhantomData, } } } impl Sealed<(Target, Request)> for M where M: Service, S: Service, { } impl MakeService for M where M: Service, S: Service, { type Response = S::Response; type Error = S::Error; type Service = S; type MakeError = M::Error; type Future = M::Future; fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { Service::poll_ready(self, cx) } fn make_service(&mut self, target: Target) -> Self::Future { Service::call(self, target) } } /// Service returned by [`MakeService::into_service`][into]. /// /// See the documentation on [`into_service`][into] for details. /// /// [into]: MakeService::into_service pub struct IntoService { make: M, _marker: PhantomData, } impl Clone for IntoService where M: Clone, { fn clone(&self) -> Self { Self { make: self.make.clone(), _marker: PhantomData, } } } impl fmt::Debug for IntoService where M: fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { f.debug_struct("IntoService") .field("make", &self.make) .finish() } } impl Service for IntoService where M: Service, S: Service, { type Response = M::Response; type Error = M::Error; type Future = M::Future; #[inline] fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { self.make.poll_ready(cx) } #[inline] fn call(&mut self, target: Target) -> Self::Future { self.make.make_service(target) } } /// Service returned by [`MakeService::as_service`][as]. /// /// See the documentation on [`as_service`][as] for details. /// /// [as]: MakeService::as_service pub struct AsService<'a, M, Request> { make: &'a mut M, _marker: PhantomData, } impl fmt::Debug for AsService<'_, M, Request> where M: fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { f.debug_struct("AsService") .field("make", &self.make) .finish() } } impl Service for AsService<'_, M, Request> where M: Service, S: Service, { type Response = M::Response; type Error = M::Error; type Future = M::Future; #[inline] fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { self.make.poll_ready(cx) } #[inline] fn call(&mut self, target: Target) -> Self::Future { self.make.make_service(target) } } tower-0.4.13/src/make/mod.rs000064400000000000000000000005610072674642500137340ustar 00000000000000//! Trait aliases for Services that produce specific types of Responses. mod make_connection; mod make_service; pub use self::make_connection::MakeConnection; pub use self::make_service::shared::Shared; pub use self::make_service::{AsService, IntoService, MakeService}; pub mod future { //! Future types pub use super::make_service::shared::SharedFuture; } tower-0.4.13/src/ready_cache/cache.rs000064400000000000000000000422040072674642500155320ustar 00000000000000//! A cache of services. use super::error; use futures_core::Stream; use futures_util::{stream::FuturesUnordered, task::AtomicWaker}; pub use indexmap::Equivalent; use indexmap::IndexMap; use std::fmt; use std::future::Future; use std::hash::Hash; use std::pin::Pin; use std::sync::atomic::{AtomicBool, Ordering}; use std::sync::Arc; use std::task::{Context, Poll}; use tower_service::Service; use tracing::{debug, trace}; /// Drives readiness over a set of services. /// /// The cache maintains two internal data structures: /// /// * a set of _pending_ services that have not yet become ready; and /// * a set of _ready_ services that have previously polled ready. /// /// As each `S` typed [`Service`] is added to the cache via [`ReadyCache::push`], it /// is added to the _pending set_. As [`ReadyCache::poll_pending`] is invoked, /// pending services are polled and added to the _ready set_. /// /// [`ReadyCache::call_ready`] (or [`ReadyCache::call_ready_index`]) dispatches a /// request to the specified service, but panics if the specified service is not /// in the ready set. The `ReadyCache::check_*` functions can be used to ensure /// that a service is ready before dispatching a request. /// /// The ready set can hold services for an abitrarily long time. During this /// time, the runtime may process events that invalidate that ready state (for /// instance, if a keepalive detects a lost connection). In such cases, callers /// should use [`ReadyCache::check_ready`] (or [`ReadyCache::check_ready_index`]) /// immediately before dispatching a request to ensure that the service has not /// become unavailable. /// /// Once `ReadyCache::call_ready*` is invoked, the service is placed back into /// the _pending_ set to be driven to readiness again. /// /// When `ReadyCache::check_ready*` returns `false`, it indicates that the /// specified service is _not_ ready. If an error is returned, this indicats that /// the server failed and has been removed from the cache entirely. /// /// [`ReadyCache::evict`] can be used to remove a service from the cache (by key), /// though the service may not be dropped (if it is currently pending) until /// [`ReadyCache::poll_pending`] is invoked. /// /// Note that the by-index accessors are provided to support use cases (like /// power-of-two-choices load balancing) where the caller does not care to keep /// track of each service's key. Instead, it needs only to access _some_ ready /// service. In such a case, it should be noted that calls to /// [`ReadyCache::poll_pending`] and [`ReadyCache::evict`] may perturb the order of /// the ready set, so any cached indexes should be discarded after such a call. pub struct ReadyCache where K: Eq + Hash, { /// A stream of services that are not yet ready. pending: FuturesUnordered>, /// An index of cancelation handles for pending streams. pending_cancel_txs: IndexMap, /// Services that have previously become ready. Readiness can become stale, /// so a given service should be polled immediately before use. /// /// The cancelation oneshot is preserved (though unused) while the service is /// ready so that it need not be reallocated each time a request is /// dispatched. ready: IndexMap, } // Safety: This is safe because we do not use `Pin::new_unchecked`. impl Unpin for ReadyCache {} #[derive(Debug)] struct Cancel { waker: AtomicWaker, canceled: AtomicBool, } #[derive(Debug)] struct CancelRx(Arc); #[derive(Debug)] struct CancelTx(Arc); type CancelPair = (CancelTx, CancelRx); #[derive(Debug)] enum PendingError { Canceled(K), Inner(K, E), } pin_project_lite::pin_project! { /// A [`Future`] that becomes satisfied when an `S`-typed service is ready. /// /// May fail due to cancelation, i.e. if the service is evicted from the balancer. struct Pending { key: Option, cancel: Option, ready: Option, _pd: std::marker::PhantomData, } } // === ReadyCache === impl Default for ReadyCache where K: Eq + Hash, S: Service, { fn default() -> Self { Self { ready: IndexMap::default(), pending: FuturesUnordered::new(), pending_cancel_txs: IndexMap::default(), } } } impl fmt::Debug for ReadyCache where K: fmt::Debug + Eq + Hash, S: fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { let Self { pending, pending_cancel_txs, ready, } = self; f.debug_struct("ReadyCache") .field("pending", pending) .field("pending_cancel_txs", pending_cancel_txs) .field("ready", ready) .finish() } } impl ReadyCache where K: Eq + Hash, { /// Returns the total number of services in the cache. pub fn len(&self) -> usize { self.ready_len() + self.pending_len() } /// Returns whether or not there are any services in the cache. pub fn is_empty(&self) -> bool { self.ready.is_empty() && self.pending.is_empty() } /// Returns the number of services in the ready set. pub fn ready_len(&self) -> usize { self.ready.len() } /// Returns the number of services in the unready set. pub fn pending_len(&self) -> usize { self.pending.len() } /// Returns true iff the given key is in the unready set. pub fn pending_contains>(&self, key: &Q) -> bool { self.pending_cancel_txs.contains_key(key) } /// Obtains a reference to a service in the ready set by key. pub fn get_ready>(&self, key: &Q) -> Option<(usize, &K, &S)> { self.ready.get_full(key).map(|(i, k, v)| (i, k, &v.0)) } /// Obtains a mutable reference to a service in the ready set by key. pub fn get_ready_mut>( &mut self, key: &Q, ) -> Option<(usize, &K, &mut S)> { self.ready .get_full_mut(key) .map(|(i, k, v)| (i, k, &mut v.0)) } /// Obtains a reference to a service in the ready set by index. pub fn get_ready_index(&self, idx: usize) -> Option<(&K, &S)> { self.ready.get_index(idx).map(|(k, v)| (k, &v.0)) } /// Obtains a mutable reference to a service in the ready set by index. pub fn get_ready_index_mut(&mut self, idx: usize) -> Option<(&mut K, &mut S)> { self.ready.get_index_mut(idx).map(|(k, v)| (k, &mut v.0)) } /// Evicts an item from the cache. /// /// Returns true if a service was marked for eviction. /// /// Services are dropped from the ready set immediately. Services in the /// pending set are marked for cancellation, but [`ReadyCache::poll_pending`] /// must be called to cause the service to be dropped. pub fn evict>(&mut self, key: &Q) -> bool { let canceled = if let Some(c) = self.pending_cancel_txs.swap_remove(key) { c.cancel(); true } else { false }; self.ready .swap_remove_full(key) .map(|_| true) .unwrap_or(canceled) } } impl ReadyCache where K: Clone + Eq + Hash, S: Service, >::Error: Into, S::Error: Into, { /// Pushes a new service onto the pending set. /// /// The service will be promoted to the ready set as [`poll_pending`] is invoked. /// /// Note that this does **not** remove services from the ready set. Once the /// old service is used, it will be dropped instead of being added back to /// the pending set; OR, when the new service becomes ready, it will replace /// the prior service in the ready set. /// /// [`poll_pending`]: crate::ready_cache::cache::ReadyCache::poll_pending pub fn push(&mut self, key: K, svc: S) { let cancel = cancelable(); self.push_pending(key, svc, cancel); } fn push_pending(&mut self, key: K, svc: S, (cancel_tx, cancel_rx): CancelPair) { if let Some(c) = self.pending_cancel_txs.insert(key.clone(), cancel_tx) { // If there is already a service for this key, cancel it. c.cancel(); } self.pending.push(Pending { key: Some(key), cancel: Some(cancel_rx), ready: Some(svc), _pd: std::marker::PhantomData, }); } /// Polls services pending readiness, adding ready services to the ready set. /// /// Returns [`Poll::Ready`] when there are no remaining unready services. /// [`poll_pending`] should be called again after [`push`] or /// [`call_ready_index`] are invoked. /// /// Failures indicate that an individual pending service failed to become /// ready (and has been removed from the cache). In such a case, /// [`poll_pending`] should typically be called again to continue driving /// pending services to readiness. /// /// [`poll_pending`]: crate::ready_cache::cache::ReadyCache::poll_pending /// [`push`]: crate::ready_cache::cache::ReadyCache::push /// [`call_ready_index`]: crate::ready_cache::cache::ReadyCache::call_ready_index pub fn poll_pending(&mut self, cx: &mut Context<'_>) -> Poll>> { loop { match Pin::new(&mut self.pending).poll_next(cx) { Poll::Pending => return Poll::Pending, Poll::Ready(None) => return Poll::Ready(Ok(())), Poll::Ready(Some(Ok((key, svc, cancel_rx)))) => { trace!("endpoint ready"); let cancel_tx = self.pending_cancel_txs.swap_remove(&key); if let Some(cancel_tx) = cancel_tx { // Keep track of the cancelation so that it need not be // recreated after the service is used. self.ready.insert(key, (svc, (cancel_tx, cancel_rx))); } else { assert!( cancel_tx.is_some(), "services that become ready must have a pending cancelation" ); } } Poll::Ready(Some(Err(PendingError::Canceled(_)))) => { debug!("endpoint canceled"); // The cancellation for this service was removed in order to // cause this cancellation. } Poll::Ready(Some(Err(PendingError::Inner(key, e)))) => { let cancel_tx = self.pending_cancel_txs.swap_remove(&key); assert!( cancel_tx.is_some(), "services that return an error must have a pending cancelation" ); return Err(error::Failed(key, e.into())).into(); } } } } /// Checks whether the referenced endpoint is ready. /// /// Returns true if the endpoint is ready and false if it is not. An error is /// returned if the endpoint fails. pub fn check_ready>( &mut self, cx: &mut Context<'_>, key: &Q, ) -> Result> { match self.ready.get_full_mut(key) { Some((index, _, _)) => self.check_ready_index(cx, index), None => Ok(false), } } /// Checks whether the referenced endpoint is ready. /// /// If the service is no longer ready, it is moved back into the pending set /// and `false` is returned. /// /// If the service errors, it is removed and dropped and the error is returned. pub fn check_ready_index( &mut self, cx: &mut Context<'_>, index: usize, ) -> Result> { let svc = match self.ready.get_index_mut(index) { None => return Ok(false), Some((_, (svc, _))) => svc, }; match svc.poll_ready(cx) { Poll::Ready(Ok(())) => Ok(true), Poll::Pending => { // became unready; so move it back there. let (key, (svc, cancel)) = self .ready .swap_remove_index(index) .expect("invalid ready index"); // If a new version of this service has been added to the // unready set, don't overwrite it. if !self.pending_contains(&key) { self.push_pending(key, svc, cancel); } Ok(false) } Poll::Ready(Err(e)) => { // failed, so drop it. let (key, _) = self .ready .swap_remove_index(index) .expect("invalid ready index"); Err(error::Failed(key, e.into())) } } } /// Calls a ready service by key. /// /// # Panics /// /// If the specified key does not exist in the ready pub fn call_ready>(&mut self, key: &Q, req: Req) -> S::Future { let (index, _, _) = self .ready .get_full_mut(key) .expect("check_ready was not called"); self.call_ready_index(index, req) } /// Calls a ready service by index. /// /// # Panics /// /// If the specified index is out of range. pub fn call_ready_index(&mut self, index: usize, req: Req) -> S::Future { let (key, (mut svc, cancel)) = self .ready .swap_remove_index(index) .expect("check_ready_index was not called"); let fut = svc.call(req); // If a new version of this service has been added to the // unready set, don't overwrite it. if !self.pending_contains(&key) { self.push_pending(key, svc, cancel); } fut } } // === impl Cancel === /// Creates a cancelation sender and receiver. /// /// A `tokio::sync::oneshot` is NOT used, as a `Receiver` is not guaranteed to /// observe results as soon as a `Sender` fires. Using an `AtomicBool` allows /// the state to be observed as soon as the cancelation is triggered. fn cancelable() -> CancelPair { let cx = Arc::new(Cancel { waker: AtomicWaker::new(), canceled: AtomicBool::new(false), }); (CancelTx(cx.clone()), CancelRx(cx)) } impl CancelTx { fn cancel(self) { self.0.canceled.store(true, Ordering::SeqCst); self.0.waker.wake(); } } // === Pending === impl Future for Pending where S: Service, { type Output = Result<(K, S, CancelRx), PendingError>; fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { let this = self.project(); // Before checking whether the service is ready, check to see whether // readiness has been canceled. let CancelRx(cancel) = this.cancel.as_mut().expect("polled after complete"); if cancel.canceled.load(Ordering::SeqCst) { let key = this.key.take().expect("polled after complete"); return Err(PendingError::Canceled(key)).into(); } match this .ready .as_mut() .expect("polled after ready") .poll_ready(cx) { Poll::Pending => { // Before returning Pending, register interest in cancelation so // that this future is polled again if the state changes. let CancelRx(cancel) = this.cancel.as_mut().expect("polled after complete"); cancel.waker.register(cx.waker()); // Because both the cancel receiver and cancel sender are held // by the `ReadyCache` (i.e., on a single task), then it must // not be possible for the cancelation state to change while // polling a `Pending` service. assert!( !cancel.canceled.load(Ordering::SeqCst), "cancelation cannot be notified while polling a pending service" ); Poll::Pending } Poll::Ready(Ok(())) => { let key = this.key.take().expect("polled after complete"); let cancel = this.cancel.take().expect("polled after complete"); Ok((key, this.ready.take().expect("polled after ready"), cancel)).into() } Poll::Ready(Err(e)) => { let key = this.key.take().expect("polled after compete"); Err(PendingError::Inner(key, e)).into() } } } } impl fmt::Debug for Pending where K: fmt::Debug, S: fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { let Self { key, cancel, ready, _pd, } = self; f.debug_struct("Pending") .field("key", key) .field("cancel", cancel) .field("ready", ready) .finish() } } tower-0.4.13/src/ready_cache/error.rs000064400000000000000000000013140072674642500156150ustar 00000000000000//! Errors /// An error indicating that the service with a `K`-typed key failed with an /// error. pub struct Failed(pub K, pub crate::BoxError); // === Failed === impl std::fmt::Debug for Failed { fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result { f.debug_tuple("Failed") .field(&self.0) .field(&self.1) .finish() } } impl std::fmt::Display for Failed { fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result { self.1.fmt(f) } } impl std::error::Error for Failed { fn source(&self) -> Option<&(dyn std::error::Error + 'static)> { Some(&*self.1) } } tower-0.4.13/src/ready_cache/mod.rs000064400000000000000000000001310072674642500152370ustar 00000000000000//! A cache of services pub mod cache; pub mod error; pub use self::cache::ReadyCache; tower-0.4.13/src/reconnect/future.rs000064400000000000000000000031050072674642500155270ustar 00000000000000use pin_project_lite::pin_project; use std::{ future::Future, pin::Pin, task::{Context, Poll}, }; pin_project! { /// Future that resolves to the response or failure to connect. #[derive(Debug)] pub struct ResponseFuture { #[pin] inner: Inner, } } pin_project! { #[project = InnerProj] #[derive(Debug)] enum Inner { Future { #[pin] fut: F, }, Error { error: Option, }, } } impl Inner { fn future(fut: F) -> Self { Self::Future { fut } } fn error(error: Option) -> Self { Self::Error { error } } } impl ResponseFuture { pub(crate) fn new(inner: F) -> Self { ResponseFuture { inner: Inner::future(inner), } } pub(crate) fn error(error: E) -> Self { ResponseFuture { inner: Inner::error(Some(error)), } } } impl Future for ResponseFuture where F: Future>, E: Into, ME: Into, { type Output = Result; fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { let me = self.project(); match me.inner.project() { InnerProj::Future { fut } => fut.poll(cx).map_err(Into::into), InnerProj::Error { error } => { let e = error.take().expect("Polled after ready.").into(); Poll::Ready(Err(e)) } } } } tower-0.4.13/src/reconnect/mod.rs000064400000000000000000000123510072674642500147770ustar 00000000000000//! Reconnect services when they fail. //! //! Reconnect takes some [`MakeService`] and transforms it into a //! [`Service`]. It then attempts to lazily connect and //! reconnect on failure. The `Reconnect` service becomes unavailable //! when the inner `MakeService::poll_ready` returns an error. When the //! connection future returned from `MakeService::call` fails this will be //! returned in the next call to `Reconnect::call`. This allows the user to //! call the service again even if the inner `MakeService` was unable to //! connect on the last call. //! //! [`MakeService`]: crate::make::MakeService //! [`Service`]: crate::Service mod future; pub use future::ResponseFuture; use crate::make::MakeService; use std::fmt; use std::{ future::Future, pin::Pin, task::{Context, Poll}, }; use tower_service::Service; use tracing::trace; /// Reconnect to failed services. pub struct Reconnect where M: Service, { mk_service: M, state: State, target: Target, error: Option, } #[derive(Debug)] enum State { Idle, Connecting(F), Connected(S), } impl Reconnect where M: Service, { /// Lazily connect and reconnect to a [`Service`]. pub fn new(mk_service: M, target: Target) -> Self { Reconnect { mk_service, state: State::Idle, target, error: None, } } /// Reconnect to a already connected [`Service`]. pub fn with_connection(init_conn: M::Response, mk_service: M, target: Target) -> Self { Reconnect { mk_service, state: State::Connected(init_conn), target, error: None, } } } impl Service for Reconnect where M: Service, S: Service, M::Future: Unpin, crate::BoxError: From + From, Target: Clone, { type Response = S::Response; type Error = crate::BoxError; type Future = ResponseFuture; fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { loop { match &mut self.state { State::Idle => { trace!("poll_ready; idle"); match self.mk_service.poll_ready(cx) { Poll::Ready(r) => r?, Poll::Pending => { trace!("poll_ready; MakeService not ready"); return Poll::Pending; } } let fut = self.mk_service.make_service(self.target.clone()); self.state = State::Connecting(fut); continue; } State::Connecting(ref mut f) => { trace!("poll_ready; connecting"); match Pin::new(f).poll(cx) { Poll::Ready(Ok(service)) => { self.state = State::Connected(service); } Poll::Pending => { trace!("poll_ready; not ready"); return Poll::Pending; } Poll::Ready(Err(e)) => { trace!("poll_ready; error"); self.state = State::Idle; self.error = Some(e); break; } } } State::Connected(ref mut inner) => { trace!("poll_ready; connected"); match inner.poll_ready(cx) { Poll::Ready(Ok(())) => { trace!("poll_ready; ready"); return Poll::Ready(Ok(())); } Poll::Pending => { trace!("poll_ready; not ready"); return Poll::Pending; } Poll::Ready(Err(_)) => { trace!("poll_ready; error"); self.state = State::Idle; } } } } } Poll::Ready(Ok(())) } fn call(&mut self, request: Request) -> Self::Future { if let Some(error) = self.error.take() { return ResponseFuture::error(error); } let service = match self.state { State::Connected(ref mut service) => service, _ => panic!("service not ready; poll_ready must be called first"), }; let fut = service.call(request); ResponseFuture::new(fut) } } impl fmt::Debug for Reconnect where M: Service + fmt::Debug, M::Future: fmt::Debug, M::Response: fmt::Debug, Target: fmt::Debug, { fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result { fmt.debug_struct("Reconnect") .field("mk_service", &self.mk_service) .field("state", &self.state) .field("target", &self.target) .finish() } } tower-0.4.13/src/retry/budget.rs000064400000000000000000000176340072674642500146700ustar 00000000000000//! A retry "budget" for allowing only a certain amount of retries over time. use std::{ fmt, sync::{ atomic::{AtomicIsize, Ordering}, Mutex, }, time::Duration, }; use tokio::time::Instant; /// Represents a "budget" for retrying requests. /// /// This is useful for limiting the amount of retries a service can perform /// over a period of time, or per a certain number of requests attempted. pub struct Budget { bucket: Bucket, deposit_amount: isize, withdraw_amount: isize, } /// Indicates that it is not currently allowed to "withdraw" another retry /// from the [`Budget`]. #[derive(Debug)] pub struct Overdrawn { _inner: (), } #[derive(Debug)] struct Bucket { generation: Mutex, /// Initial budget allowed for every second. reserve: isize, /// Slots of a the TTL divided evenly. slots: Box<[AtomicIsize]>, /// The amount of time represented by each slot. window: Duration, /// The changers for the current slot to be commited /// after the slot expires. writer: AtomicIsize, } #[derive(Debug)] struct Generation { /// Slot index of the last generation. index: usize, /// The timestamp since the last generation expired. time: Instant, } // ===== impl Budget ===== impl Budget { /// Create a [`Budget`] that allows for a certain percent of the total /// requests to be retried. /// /// - The `ttl` is the duration of how long a single `deposit` should be /// considered. Must be between 1 and 60 seconds. /// - The `min_per_sec` is the minimum rate of retries allowed to accomodate /// clients that have just started issuing requests, or clients that do /// not issue many requests per window. /// - The `retry_percent` is the percentage of calls to `deposit` that can /// be retried. This is in addition to any retries allowed for via /// `min_per_sec`. Must be between 0 and 1000. /// /// As an example, if `0.1` is used, then for every 10 calls to `deposit`, /// 1 retry will be allowed. If `2.0` is used, then every `deposit` /// allows for 2 retries. pub fn new(ttl: Duration, min_per_sec: u32, retry_percent: f32) -> Self { // assertions taken from finagle assert!(ttl >= Duration::from_secs(1)); assert!(ttl <= Duration::from_secs(60)); assert!(retry_percent >= 0.0); assert!(retry_percent <= 1000.0); assert!(min_per_sec < ::std::i32::MAX as u32); let (deposit_amount, withdraw_amount) = if retry_percent == 0.0 { // If there is no percent, then you gain nothing from deposits. // Withdrawals can only be made against the reserve, over time. (0, 1) } else if retry_percent <= 1.0 { (1, (1.0 / retry_percent) as isize) } else { // Support for when retry_percent is between 1.0 and 1000.0, // meaning for every deposit D, D*retry_percent withdrawals // can be made. (1000, (1000.0 / retry_percent) as isize) }; let reserve = (min_per_sec as isize) .saturating_mul(ttl.as_secs() as isize) // ttl is between 1 and 60 seconds .saturating_mul(withdraw_amount); // AtomicIsize isn't clone, so the slots need to be built in a loop... let windows = 10u32; let mut slots = Vec::with_capacity(windows as usize); for _ in 0..windows { slots.push(AtomicIsize::new(0)); } Budget { bucket: Bucket { generation: Mutex::new(Generation { index: 0, time: Instant::now(), }), reserve, slots: slots.into_boxed_slice(), window: ttl / windows, writer: AtomicIsize::new(0), }, deposit_amount, withdraw_amount, } } /// Store a "deposit" in the budget, which will be used to permit future /// withdrawals. pub fn deposit(&self) { self.bucket.put(self.deposit_amount); } /// Check whether there is enough "balance" in the budget to issue a new /// retry. /// /// If there is not enough, an `Err(Overdrawn)` is returned. pub fn withdraw(&self) -> Result<(), Overdrawn> { if self.bucket.try_get(self.withdraw_amount) { Ok(()) } else { Err(Overdrawn { _inner: () }) } } } impl Default for Budget { fn default() -> Budget { Budget::new(Duration::from_secs(10), 10, 0.2) } } impl fmt::Debug for Budget { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { f.debug_struct("Budget") .field("deposit", &self.deposit_amount) .field("withdraw", &self.withdraw_amount) .field("balance", &self.bucket.sum()) .finish() } } // ===== impl Bucket ===== impl Bucket { fn put(&self, amt: isize) { self.expire(); self.writer.fetch_add(amt, Ordering::SeqCst); } fn try_get(&self, amt: isize) -> bool { debug_assert!(amt >= 0); self.expire(); let sum = self.sum(); if sum >= amt { self.writer.fetch_add(-amt, Ordering::SeqCst); true } else { false } } fn expire(&self) { let mut gen = self.generation.lock().expect("generation lock"); let now = Instant::now(); let diff = now.saturating_duration_since(gen.time); if diff < self.window { // not expired yet return; } let to_commit = self.writer.swap(0, Ordering::SeqCst); self.slots[gen.index].store(to_commit, Ordering::SeqCst); let mut diff = diff; let mut idx = (gen.index + 1) % self.slots.len(); while diff > self.window { self.slots[idx].store(0, Ordering::SeqCst); diff -= self.window; idx = (idx + 1) % self.slots.len(); } gen.index = idx; gen.time = now; } fn sum(&self) -> isize { let current = self.writer.load(Ordering::SeqCst); let windowed_sum: isize = self .slots .iter() .map(|slot| slot.load(Ordering::SeqCst)) // fold() is used instead of sum() to determine overflow behavior .fold(0, isize::saturating_add); current .saturating_add(windowed_sum) .saturating_add(self.reserve) } } #[cfg(test)] mod tests { use super::*; use tokio::time; #[test] fn empty() { let bgt = Budget::new(Duration::from_secs(1), 0, 1.0); bgt.withdraw().unwrap_err(); } #[tokio::test] async fn leaky() { time::pause(); let bgt = Budget::new(Duration::from_secs(1), 0, 1.0); bgt.deposit(); time::advance(Duration::from_secs(3)).await; bgt.withdraw().unwrap_err(); } #[tokio::test] async fn slots() { time::pause(); let bgt = Budget::new(Duration::from_secs(1), 0, 0.5); bgt.deposit(); bgt.deposit(); time::advance(Duration::from_millis(901)).await; // 900ms later, the deposit should still be valid bgt.withdraw().unwrap(); // blank slate time::advance(Duration::from_millis(2001)).await; bgt.deposit(); time::advance(Duration::from_millis(301)).await; bgt.deposit(); time::advance(Duration::from_millis(801)).await; bgt.deposit(); // the first deposit is expired, but the 2nd should still be valid, // combining with the 3rd bgt.withdraw().unwrap(); } #[tokio::test] async fn reserve() { let bgt = Budget::new(Duration::from_secs(1), 5, 1.0); bgt.withdraw().unwrap(); bgt.withdraw().unwrap(); bgt.withdraw().unwrap(); bgt.withdraw().unwrap(); bgt.withdraw().unwrap(); bgt.withdraw().unwrap_err(); } } tower-0.4.13/src/retry/future.rs000064400000000000000000000075200072674642500147210ustar 00000000000000//! Future types use super::{Policy, Retry}; use futures_core::ready; use pin_project_lite::pin_project; use std::future::Future; use std::pin::Pin; use std::task::{Context, Poll}; use tower_service::Service; pin_project! { /// The [`Future`] returned by a [`Retry`] service. #[derive(Debug)] pub struct ResponseFuture where P: Policy, S: Service, { request: Option, #[pin] retry: Retry, #[pin] state: State, } } pin_project! { #[project = StateProj] #[derive(Debug)] enum State { // Polling the future from [`Service::call`] Called { #[pin] future: F }, // Polling the future from [`Policy::retry`] Checking { #[pin] checking: P }, // Polling [`Service::poll_ready`] after [`Checking`] was OK. Retrying, } } impl ResponseFuture where P: Policy, S: Service, { pub(crate) fn new( request: Option, retry: Retry, future: S::Future, ) -> ResponseFuture { ResponseFuture { request, retry, state: State::Called { future }, } } } impl Future for ResponseFuture where P: Policy + Clone, S: Service + Clone, { type Output = Result; fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { let mut this = self.project(); loop { match this.state.as_mut().project() { StateProj::Called { future } => { let result = ready!(future.poll(cx)); if let Some(ref req) = this.request { match this.retry.policy.retry(req, result.as_ref()) { Some(checking) => { this.state.set(State::Checking { checking }); } None => return Poll::Ready(result), } } else { // request wasn't cloned, so no way to retry it return Poll::Ready(result); } } StateProj::Checking { checking } => { this.retry .as_mut() .project() .policy .set(ready!(checking.poll(cx))); this.state.set(State::Retrying); } StateProj::Retrying => { // NOTE: we assume here that // // this.retry.poll_ready() // // is equivalent to // // this.retry.service.poll_ready() // // we need to make that assumption to avoid adding an Unpin bound to the Policy // in Ready to make it Unpin so that we can get &mut Ready as needed to call // poll_ready on it. ready!(this.retry.as_mut().project().service.poll_ready(cx))?; let req = this .request .take() .expect("retrying requires cloned request"); *this.request = this.retry.policy.clone_request(&req); this.state.set(State::Called { future: this.retry.as_mut().project().service.call(req), }); } } } } } tower-0.4.13/src/retry/layer.rs000064400000000000000000000010170072674642500145160ustar 00000000000000use super::Retry; use tower_layer::Layer; /// Retry requests based on a policy #[derive(Debug)] pub struct RetryLayer

{ policy: P, } impl

RetryLayer

{ /// Create a new [`RetryLayer`] from a retry policy pub fn new(policy: P) -> Self { RetryLayer { policy } } } impl Layer for RetryLayer

where P: Clone, { type Service = Retry; fn layer(&self, service: S) -> Self::Service { let policy = self.policy.clone(); Retry::new(policy, service) } } tower-0.4.13/src/retry/mod.rs000064400000000000000000000036350072674642500141710ustar 00000000000000//! Middleware for retrying "failed" requests. pub mod budget; pub mod future; mod layer; mod policy; pub use self::layer::RetryLayer; pub use self::policy::Policy; use self::future::ResponseFuture; use pin_project_lite::pin_project; use std::task::{Context, Poll}; use tower_service::Service; pin_project! { /// Configure retrying requests of "failed" responses. /// /// A [`Policy`] classifies what is a "failed" response. #[derive(Clone, Debug)] pub struct Retry { #[pin] policy: P, service: S, } } // ===== impl Retry ===== impl Retry { /// Retry the inner service depending on this [`Policy`]. pub fn new(policy: P, service: S) -> Self { Retry { policy, service } } /// Get a reference to the inner service pub fn get_ref(&self) -> &S { &self.service } /// Get a mutable reference to the inner service pub fn get_mut(&mut self) -> &mut S { &mut self.service } /// Consume `self`, returning the inner service pub fn into_inner(self) -> S { self.service } } impl Service for Retry where P: Policy + Clone, S: Service + Clone, { type Response = S::Response; type Error = S::Error; type Future = ResponseFuture; fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { // NOTE: the Future::poll impl for ResponseFuture assumes that Retry::poll_ready is // equivalent to Ready.service.poll_ready. If this ever changes, that code must be updated // as well. self.service.poll_ready(cx) } fn call(&mut self, request: Request) -> Self::Future { let cloned = self.policy.clone_request(&request); let future = self.service.call(request); ResponseFuture::new(cloned, self.clone(), future) } } tower-0.4.13/src/retry/never.rs000064400000000000000000000003630072674642500145240ustar 00000000000000use std::fmt; #[derive(Debug)] /// An error that can never occur. pub enum Never {} impl fmt::Display for Never { fn fmt(&self, _: &mut fmt::Formatter) -> fmt::Result { match *self {} } } impl std::error::Error for Never {} tower-0.4.13/src/retry/policy.rs000064400000000000000000000042450072674642500147070ustar 00000000000000use std::future::Future; /// A "retry policy" to classify if a request should be retried. /// /// # Example /// /// ``` /// use tower::retry::Policy; /// use futures_util::future; /// /// type Req = String; /// type Res = String; /// /// struct Attempts(usize); /// /// impl Policy for Attempts { /// type Future = future::Ready; /// /// fn retry(&self, req: &Req, result: Result<&Res, &E>) -> Option { /// match result { /// Ok(_) => { /// // Treat all `Response`s as success, /// // so don't retry... /// None /// }, /// Err(_) => { /// // Treat all errors as failures... /// // But we limit the number of attempts... /// if self.0 > 0 { /// // Try again! /// Some(future::ready(Attempts(self.0 - 1))) /// } else { /// // Used all our attempts, no retry... /// None /// } /// } /// } /// } /// /// fn clone_request(&self, req: &Req) -> Option { /// Some(req.clone()) /// } /// } /// ``` pub trait Policy: Sized { /// The [`Future`] type returned by [`Policy::retry`]. type Future: Future; /// Check the policy if a certain request should be retried. /// /// This method is passed a reference to the original request, and either /// the [`Service::Response`] or [`Service::Error`] from the inner service. /// /// If the request should **not** be retried, return `None`. /// /// If the request *should* be retried, return `Some` future of a new /// policy that would apply for the next request attempt. /// /// [`Service::Response`]: crate::Service::Response /// [`Service::Error`]: crate::Service::Error fn retry(&self, req: &Req, result: Result<&Res, &E>) -> Option; /// Tries to clone a request before being passed to the inner service. /// /// If the request cannot be cloned, return [`None`]. fn clone_request(&self, req: &Req) -> Option; } tower-0.4.13/src/spawn_ready/future.rs000064400000000000000000000004070072674642500160650ustar 00000000000000//! Background readiness types opaque_future! { /// Response future from [`SpawnReady`] services. /// /// [`SpawnReady`]: crate::spawn_ready::SpawnReady pub type ResponseFuture = futures_util::future::MapErr crate::BoxError>; } tower-0.4.13/src/spawn_ready/layer.rs000064400000000000000000000007420072674642500156710ustar 00000000000000use super::MakeSpawnReady; use tower_layer::Layer; /// Spawns tasks to drive its inner service to readiness. #[derive(Debug, Clone, Default)] pub struct SpawnReadyLayer; impl SpawnReadyLayer { /// Builds a [`SpawnReadyLayer`] with the default executor. pub fn new() -> Self { Self } } impl Layer for SpawnReadyLayer { type Service = MakeSpawnReady; fn layer(&self, service: S) -> Self::Service { MakeSpawnReady::new(service) } } tower-0.4.13/src/spawn_ready/make.rs000064400000000000000000000027660072674642500155020ustar 00000000000000use super::SpawnReady; use futures_core::ready; use pin_project_lite::pin_project; use std::{ future::Future, pin::Pin, task::{Context, Poll}, }; use tower_service::Service; /// Builds [`SpawnReady`] instances with the result of an inner [`Service`]. #[derive(Clone, Debug)] pub struct MakeSpawnReady { inner: S, } impl MakeSpawnReady { /// Creates a new [`MakeSpawnReady`] wrapping `service`. pub fn new(service: S) -> Self { Self { inner: service } } } pin_project! { /// Builds a [`SpawnReady`] with the result of an inner [`Future`]. #[derive(Debug)] pub struct MakeFuture { #[pin] inner: F, } } impl Service for MakeSpawnReady where S: Service, { type Response = SpawnReady; type Error = S::Error; type Future = MakeFuture; fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { self.inner.poll_ready(cx) } fn call(&mut self, target: Target) -> Self::Future { MakeFuture { inner: self.inner.call(target), } } } impl Future for MakeFuture where F: Future>, { type Output = Result, E>; fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { let this = self.project(); let inner = ready!(this.inner.poll(cx))?; let svc = SpawnReady::new(inner); Poll::Ready(Ok(svc)) } } tower-0.4.13/src/spawn_ready/mod.rs000064400000000000000000000004140072674642500153300ustar 00000000000000//! When an underlying service is not ready, drive it to readiness on a //! background task. pub mod future; mod layer; mod make; mod service; pub use self::layer::SpawnReadyLayer; pub use self::make::{MakeFuture, MakeSpawnReady}; pub use self::service::SpawnReady; tower-0.4.13/src/spawn_ready/service.rs000064400000000000000000000044200072674642500162120ustar 00000000000000use super::future::ResponseFuture; use crate::{util::ServiceExt, BoxError}; use futures_core::ready; use futures_util::future::TryFutureExt; use std::{ future::Future, pin::Pin, task::{Context, Poll}, }; use tower_service::Service; use tracing::Instrument; /// Spawns tasks to drive an inner service to readiness. /// /// See crate level documentation for more details. #[derive(Debug)] pub struct SpawnReady { inner: Inner, } #[derive(Debug)] enum Inner { Service(Option), Future(tokio::task::JoinHandle>), } impl SpawnReady { /// Creates a new [`SpawnReady`] wrapping `service`. pub fn new(service: S) -> Self { Self { inner: Inner::Service(Some(service)), } } } impl Drop for SpawnReady { fn drop(&mut self) { if let Inner::Future(ref mut task) = self.inner { task.abort(); } } } impl Service for SpawnReady where Req: 'static, S: Service + Send + 'static, S::Error: Into, { type Response = S::Response; type Error = BoxError; type Future = ResponseFuture; fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { loop { self.inner = match self.inner { Inner::Service(ref mut svc) => { if let Poll::Ready(r) = svc.as_mut().expect("illegal state").poll_ready(cx) { return Poll::Ready(r.map_err(Into::into)); } let svc = svc.take().expect("illegal state"); let rx = tokio::spawn(svc.ready_oneshot().map_err(Into::into).in_current_span()); Inner::Future(rx) } Inner::Future(ref mut fut) => { let svc = ready!(Pin::new(fut).poll(cx))??; Inner::Service(Some(svc)) } } } } fn call(&mut self, request: Req) -> Self::Future { match self.inner { Inner::Service(Some(ref mut svc)) => { ResponseFuture::new(svc.call(request).map_err(Into::into)) } _ => unreachable!("poll_ready must be called"), } } } tower-0.4.13/src/steer/mod.rs000064400000000000000000000152460072674642500141470ustar 00000000000000//! This module provides functionality to aid managing routing requests between [`Service`]s. //! //! # Example //! //! [`Steer`] can for example be used to create a router, akin to what you might find in web //! frameworks. //! //! Here, `GET /` will be sent to the `root` service, while all other requests go to `not_found`. //! //! ```rust //! # use std::task::{Context, Poll}; //! # use tower_service::Service; //! # use futures_util::future::{ready, Ready, poll_fn}; //! # use tower::steer::Steer; //! # use tower::service_fn; //! # use tower::util::BoxService; //! # use tower::ServiceExt; //! # use std::convert::Infallible; //! use http::{Request, Response, StatusCode, Method}; //! //! # #[tokio::main] //! # async fn main() -> Result<(), Box> { //! // Service that responds to `GET /` //! let root = service_fn(|req: Request| async move { //! # assert_eq!(req.uri().path(), "/"); //! let res = Response::new("Hello, World!".to_string()); //! Ok::<_, Infallible>(res) //! }); //! // We have to box the service so its type gets erased and we can put it in a `Vec` with other //! // services //! let root = BoxService::new(root); //! //! // Service that responds with `404 Not Found` to all requests //! let not_found = service_fn(|req: Request| async move { //! let res = Response::builder() //! .status(StatusCode::NOT_FOUND) //! .body(String::new()) //! .expect("response is valid"); //! Ok::<_, Infallible>(res) //! }); //! // Box that as well //! let not_found = BoxService::new(not_found); //! //! let mut svc = Steer::new( //! // All services we route between //! vec![root, not_found], //! // How we pick which service to send the request to //! |req: &Request, _services: &[_]| { //! if req.method() == Method::GET && req.uri().path() == "/" { //! 0 // Index of `root` //! } else { //! 1 // Index of `not_found` //! } //! }, //! ); //! //! // This request will get sent to `root` //! let req = Request::get("/").body(String::new()).unwrap(); //! let res = svc.ready().await?.call(req).await?; //! assert_eq!(res.into_body(), "Hello, World!"); //! //! // This request will get sent to `not_found` //! let req = Request::get("/does/not/exist").body(String::new()).unwrap(); //! let res = svc.ready().await?.call(req).await?; //! assert_eq!(res.status(), StatusCode::NOT_FOUND); //! assert_eq!(res.into_body(), ""); //! # //! # Ok(()) //! # } //! ``` use std::task::{Context, Poll}; use std::{collections::VecDeque, fmt, marker::PhantomData}; use tower_service::Service; /// This is how callers of [`Steer`] tell it which `Service` a `Req` corresponds to. pub trait Picker { /// Return an index into the iterator of `Service` passed to [`Steer::new`]. fn pick(&mut self, r: &Req, services: &[S]) -> usize; } impl Picker for F where F: Fn(&Req, &[S]) -> usize, { fn pick(&mut self, r: &Req, services: &[S]) -> usize { self(r, services) } } /// [`Steer`] manages a list of [`Service`]s which all handle the same type of request. /// /// An example use case is a sharded service. /// It accepts new requests, then: /// 1. Determines, via the provided [`Picker`], which [`Service`] the request coresponds to. /// 2. Waits (in [`Service::poll_ready`]) for *all* services to be ready. /// 3. Calls the correct [`Service`] with the request, and returns a future corresponding to the /// call. /// /// Note that [`Steer`] must wait for all services to be ready since it can't know ahead of time /// which [`Service`] the next message will arrive for, and is unwilling to buffer items /// indefinitely. This will cause head-of-line blocking unless paired with a [`Service`] that does /// buffer items indefinitely, and thus always returns [`Poll::Ready`]. For example, wrapping each /// component service with a [`Buffer`] with a high enough limit (the maximum number of concurrent /// requests) will prevent head-of-line blocking in [`Steer`]. /// /// [`Buffer`]: crate::buffer::Buffer pub struct Steer { router: F, services: Vec, not_ready: VecDeque, _phantom: PhantomData, } impl Steer { /// Make a new [`Steer`] with a list of [`Service`]'s and a [`Picker`]. /// /// Note: the order of the [`Service`]'s is significant for [`Picker::pick`]'s return value. pub fn new(services: impl IntoIterator, router: F) -> Self { let services: Vec<_> = services.into_iter().collect(); let not_ready: VecDeque<_> = services.iter().enumerate().map(|(i, _)| i).collect(); Self { router, services, not_ready, _phantom: PhantomData, } } } impl Service for Steer where S: Service, F: Picker, { type Response = S::Response; type Error = S::Error; type Future = S::Future; fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { loop { // must wait for *all* services to be ready. // this will cause head-of-line blocking unless the underlying services are always ready. if self.not_ready.is_empty() { return Poll::Ready(Ok(())); } else { if self.services[self.not_ready[0]] .poll_ready(cx)? .is_pending() { return Poll::Pending; } self.not_ready.pop_front(); } } } fn call(&mut self, req: Req) -> Self::Future { assert!( self.not_ready.is_empty(), "Steer must wait for all services to be ready. Did you forget to call poll_ready()?" ); let idx = self.router.pick(&req, &self.services[..]); let cl = &mut self.services[idx]; self.not_ready.push_back(idx); cl.call(req) } } impl Clone for Steer where S: Clone, F: Clone, { fn clone(&self) -> Self { Self { router: self.router.clone(), services: self.services.clone(), not_ready: self.not_ready.clone(), _phantom: PhantomData, } } } impl fmt::Debug for Steer where S: fmt::Debug, F: fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { let Self { router, services, not_ready, _phantom, } = self; f.debug_struct("Steer") .field("router", router) .field("services", services) .field("not_ready", not_ready) .finish() } } tower-0.4.13/src/timeout/error.rs000064400000000000000000000006310072674642500150550ustar 00000000000000//! Error types use std::{error, fmt}; /// The timeout elapsed. #[derive(Debug, Default)] pub struct Elapsed(pub(super) ()); impl Elapsed { /// Construct a new elapsed error pub fn new() -> Self { Elapsed(()) } } impl fmt::Display for Elapsed { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { f.pad("request timed out") } } impl error::Error for Elapsed {} tower-0.4.13/src/timeout/future.rs000064400000000000000000000023240072674642500152370ustar 00000000000000//! Future types use super::error::Elapsed; use pin_project_lite::pin_project; use std::{ future::Future, pin::Pin, task::{Context, Poll}, }; use tokio::time::Sleep; pin_project! { /// [`Timeout`] response future /// /// [`Timeout`]: crate::timeout::Timeout #[derive(Debug)] pub struct ResponseFuture { #[pin] response: T, #[pin] sleep: Sleep, } } impl ResponseFuture { pub(crate) fn new(response: T, sleep: Sleep) -> Self { ResponseFuture { response, sleep } } } impl Future for ResponseFuture where F: Future>, E: Into, { type Output = Result; fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { let this = self.project(); // First, try polling the future match this.response.poll(cx) { Poll::Ready(v) => return Poll::Ready(v.map_err(Into::into)), Poll::Pending => {} } // Now check the sleep match this.sleep.poll(cx) { Poll::Pending => Poll::Pending, Poll::Ready(_) => Poll::Ready(Err(Elapsed(()).into())), } } } tower-0.4.13/src/timeout/layer.rs000064400000000000000000000010220072674642500150330ustar 00000000000000use super::Timeout; use std::time::Duration; use tower_layer::Layer; /// Applies a timeout to requests via the supplied inner service. #[derive(Debug, Clone)] pub struct TimeoutLayer { timeout: Duration, } impl TimeoutLayer { /// Create a timeout from a duration pub fn new(timeout: Duration) -> Self { TimeoutLayer { timeout } } } impl Layer for TimeoutLayer { type Service = Timeout; fn layer(&self, service: S) -> Self::Service { Timeout::new(service, self.timeout) } } tower-0.4.13/src/timeout/mod.rs000064400000000000000000000032560072674642500145110ustar 00000000000000//! Middleware that applies a timeout to requests. //! //! If the response does not complete within the specified timeout, the response //! will be aborted. pub mod error; pub mod future; mod layer; pub use self::layer::TimeoutLayer; use self::future::ResponseFuture; use std::task::{Context, Poll}; use std::time::Duration; use tower_service::Service; /// Applies a timeout to requests. #[derive(Debug, Clone)] pub struct Timeout { inner: T, timeout: Duration, } // ===== impl Timeout ===== impl Timeout { /// Creates a new [`Timeout`] pub fn new(inner: T, timeout: Duration) -> Self { Timeout { inner, timeout } } /// Get a reference to the inner service pub fn get_ref(&self) -> &T { &self.inner } /// Get a mutable reference to the inner service pub fn get_mut(&mut self) -> &mut T { &mut self.inner } /// Consume `self`, returning the inner service pub fn into_inner(self) -> T { self.inner } } impl Service for Timeout where S: Service, S::Error: Into, { type Response = S::Response; type Error = crate::BoxError; type Future = ResponseFuture; fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { match self.inner.poll_ready(cx) { Poll::Pending => Poll::Pending, Poll::Ready(r) => Poll::Ready(r.map_err(Into::into)), } } fn call(&mut self, request: Request) -> Self::Future { let response = self.inner.call(request); let sleep = tokio::time::sleep(self.timeout); ResponseFuture::new(response, sleep) } } tower-0.4.13/src/util/and_then.rs000064400000000000000000000064350072674642500150030ustar 00000000000000use futures_core::TryFuture; use futures_util::{future, TryFutureExt}; use std::fmt; use std::future::Future; use std::pin::Pin; use std::task::{Context, Poll}; use tower_layer::Layer; use tower_service::Service; /// Service returned by the [`and_then`] combinator. /// /// [`and_then`]: crate::util::ServiceExt::and_then #[derive(Clone)] pub struct AndThen { inner: S, f: F, } impl fmt::Debug for AndThen where S: fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { f.debug_struct("AndThen") .field("inner", &self.inner) .field("f", &format_args!("{}", std::any::type_name::())) .finish() } } pin_project_lite::pin_project! { /// Response future from [`AndThen`] services. /// /// [`AndThen`]: crate::util::AndThen pub struct AndThenFuture { #[pin] inner: future::AndThen, F2, N>, } } impl AndThenFuture { pub(crate) fn new(inner: future::AndThen, F2, N>) -> Self { Self { inner } } } impl std::fmt::Debug for AndThenFuture { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_tuple("AndThenFuture") .field(&format_args!("...")) .finish() } } impl Future for AndThenFuture where future::AndThen, F2, N>: Future, { type Output = , F2, N> as Future>::Output; #[inline] fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { self.project().inner.poll(cx) } } /// A [`Layer`] that produces a [`AndThen`] service. /// /// [`Layer`]: tower_layer::Layer #[derive(Clone, Debug)] pub struct AndThenLayer { f: F, } impl AndThen { /// Creates a new `AndThen` service. pub fn new(inner: S, f: F) -> Self { AndThen { f, inner } } /// Returns a new [`Layer`] that produces [`AndThen`] services. /// /// This is a convenience function that simply calls [`AndThenLayer::new`]. /// /// [`Layer`]: tower_layer::Layer pub fn layer(f: F) -> AndThenLayer { AndThenLayer { f } } } impl Service for AndThen where S: Service, S::Error: Into, F: FnOnce(S::Response) -> Fut + Clone, Fut: TryFuture, { type Response = Fut::Ok; type Error = Fut::Error; type Future = AndThenFuture; fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { self.inner.poll_ready(cx).map_err(Into::into) } fn call(&mut self, request: Request) -> Self::Future { AndThenFuture::new(self.inner.call(request).err_into().and_then(self.f.clone())) } } impl AndThenLayer { /// Creates a new [`AndThenLayer`] layer. pub fn new(f: F) -> Self { AndThenLayer { f } } } impl Layer for AndThenLayer where F: Clone, { type Service = AndThen; fn layer(&self, inner: S) -> Self::Service { AndThen { f: self.f.clone(), inner, } } } tower-0.4.13/src/util/boxed/layer.rs000064400000000000000000000055020072674642500154320ustar 00000000000000use crate::util::BoxService; use std::{fmt, sync::Arc}; use tower_layer::{layer_fn, Layer}; use tower_service::Service; /// A boxed [`Layer`] trait object. /// /// [`BoxLayer`] turns a layer into a trait object, allowing both the [`Layer`] itself /// and the output [`Service`] to be dynamic, while having consistent types. /// /// This [`Layer`] produces [`BoxService`] instances erasing the type of the /// [`Service`] produced by the wrapped [`Layer`]. /// /// # Example /// /// `BoxLayer` can, for example, be useful to create layers dynamically that otherwise wouldn't have /// the same types. In this example, we include a [`Timeout`] layer /// only if an environment variable is set. We can use `BoxLayer` /// to return a consistent type regardless of runtime configuration: /// /// ``` /// use std::time::Duration; /// use tower::{Service, ServiceBuilder, BoxError, util::BoxLayer}; /// /// fn common_layer() -> BoxLayer /// where /// S: Service + Send + 'static, /// S::Future: Send + 'static, /// S::Error: Into + 'static, /// { /// let builder = ServiceBuilder::new() /// .concurrency_limit(100); /// /// if std::env::var("SET_TIMEOUT").is_ok() { /// let layer = builder /// .timeout(Duration::from_secs(30)) /// .into_inner(); /// /// BoxLayer::new(layer) /// } else { /// let layer = builder /// .map_err(Into::into) /// .into_inner(); /// /// BoxLayer::new(layer) /// } /// } /// ``` /// /// [`Layer`]: tower_layer::Layer /// [`Service`]: tower_service::Service /// [`BoxService`]: super::BoxService /// [`Timeout`]: crate::timeout pub struct BoxLayer { boxed: Arc> + Send + Sync + 'static>, } impl BoxLayer { /// Create a new [`BoxLayer`]. pub fn new(inner_layer: L) -> Self where L: Layer + Send + Sync + 'static, L::Service: Service + Send + 'static, >::Future: Send + 'static, { let layer = layer_fn(move |inner: In| { let out = inner_layer.layer(inner); BoxService::new(out) }); Self { boxed: Arc::new(layer), } } } impl Layer for BoxLayer { type Service = BoxService; fn layer(&self, inner: In) -> Self::Service { self.boxed.layer(inner) } } impl Clone for BoxLayer { fn clone(&self) -> Self { Self { boxed: Arc::clone(&self.boxed), } } } impl fmt::Debug for BoxLayer { fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result { fmt.debug_struct("BoxLayer").finish() } } tower-0.4.13/src/util/boxed/mod.rs000064400000000000000000000026160072674642500151000ustar 00000000000000//! Trait object [`Service`] instances //! //! Dynamically dispatched [`Service`] objects allow for erasing the underlying //! [`Service`] type and using the `Service` instances as opaque handles. This can //! be useful when the service instance cannot be explicitly named for whatever //! reason. //! //! There are two variants of service objects. [`BoxService`] requires both the //! service and the response future to be [`Send`]. These values can move freely //! across threads. [`UnsyncBoxService`] requires both the service and the //! response future to remain on the current thread. This is useful for //! representing services that are backed by [`Rc`] or other non-[`Send`] types. //! //! # Examples //! //! ``` //! use futures_util::future::ready; //! # use tower_service::Service; //! # use tower::util::{BoxService, service_fn}; //! // Respond to requests using a closure, but closures cannot be named... //! # pub fn main() { //! let svc = service_fn(|mut request: String| { //! request.push_str(" response"); //! ready(Ok(request)) //! }); //! //! let service: BoxService = BoxService::new(svc); //! # drop(service); //! } //! ``` //! //! [`Service`]: crate::Service //! [`Rc`]: std::rc::Rc mod layer; mod sync; mod unsync; #[allow(unreachable_pub)] // https://github.com/rust-lang/rust/issues/57411 pub use self::{layer::BoxLayer, sync::BoxService, unsync::UnsyncBoxService}; tower-0.4.13/src/util/boxed/sync.rs000064400000000000000000000041320072674642500152700ustar 00000000000000use crate::ServiceExt; use tower_layer::{layer_fn, LayerFn}; use tower_service::Service; use std::fmt; use std::{ future::Future, pin::Pin, task::{Context, Poll}, }; /// A boxed `Service + Send` trait object. /// /// [`BoxService`] turns a service into a trait object, allowing the response /// future type to be dynamic. This type requires both the service and the /// response future to be [`Send`]. /// /// If you need a boxed [`Service`] that implements [`Clone`] consider using /// [`BoxCloneService`](crate::util::BoxCloneService). /// /// See module level documentation for more details. pub struct BoxService { inner: Box> + Send>, } /// A boxed `Future + Send` trait object. /// /// This type alias represents a boxed future that is [`Send`] and can be moved /// across threads. type BoxFuture = Pin> + Send>>; impl BoxService { #[allow(missing_docs)] pub fn new(inner: S) -> Self where S: Service + Send + 'static, S::Future: Send + 'static, { let inner = Box::new(inner.map_future(|f: S::Future| Box::pin(f) as _)); BoxService { inner } } /// Returns a [`Layer`] for wrapping a [`Service`] in a [`BoxService`] /// middleware. /// /// [`Layer`]: crate::Layer pub fn layer() -> LayerFn Self> where S: Service + Send + 'static, S::Future: Send + 'static, { layer_fn(Self::new) } } impl Service for BoxService { type Response = U; type Error = E; type Future = BoxFuture; fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { self.inner.poll_ready(cx) } fn call(&mut self, request: T) -> BoxFuture { self.inner.call(request) } } impl fmt::Debug for BoxService { fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result { fmt.debug_struct("BoxService").finish() } } tower-0.4.13/src/util/boxed/unsync.rs000064400000000000000000000043450072674642500156410ustar 00000000000000use tower_layer::{layer_fn, LayerFn}; use tower_service::Service; use std::fmt; use std::{ future::Future, pin::Pin, task::{Context, Poll}, }; /// A boxed [`Service`] trait object. pub struct UnsyncBoxService { inner: Box>>, } /// A boxed [`Future`] trait object. /// /// This type alias represents a boxed future that is *not* [`Send`] and must /// remain on the current thread. type UnsyncBoxFuture = Pin>>>; #[derive(Debug)] struct UnsyncBoxed { inner: S, } impl UnsyncBoxService { #[allow(missing_docs)] pub fn new(inner: S) -> Self where S: Service + 'static, S::Future: 'static, { let inner = Box::new(UnsyncBoxed { inner }); UnsyncBoxService { inner } } /// Returns a [`Layer`] for wrapping a [`Service`] in an [`UnsyncBoxService`] middleware. /// /// [`Layer`]: crate::Layer pub fn layer() -> LayerFn Self> where S: Service + 'static, S::Future: 'static, { layer_fn(Self::new) } } impl Service for UnsyncBoxService { type Response = U; type Error = E; type Future = UnsyncBoxFuture; fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { self.inner.poll_ready(cx) } fn call(&mut self, request: T) -> UnsyncBoxFuture { self.inner.call(request) } } impl fmt::Debug for UnsyncBoxService { fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result { fmt.debug_struct("UnsyncBoxService").finish() } } impl Service for UnsyncBoxed where S: Service + 'static, S::Future: 'static, { type Response = S::Response; type Error = S::Error; type Future = Pin>>>; fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { self.inner.poll_ready(cx) } fn call(&mut self, request: Request) -> Self::Future { Box::pin(self.inner.call(request)) } } tower-0.4.13/src/util/boxed_clone.rs000064400000000000000000000071760072674642500155070ustar 00000000000000use super::ServiceExt; use futures_util::future::BoxFuture; use std::{ fmt, task::{Context, Poll}, }; use tower_layer::{layer_fn, LayerFn}; use tower_service::Service; /// A [`Clone`] + [`Send`] boxed [`Service`]. /// /// [`BoxCloneService`] turns a service into a trait object, allowing the /// response future type to be dynamic, and allowing the service to be cloned. /// /// This is similar to [`BoxService`](super::BoxService) except the resulting /// service implements [`Clone`]. /// /// # Example /// /// ``` /// use tower::{Service, ServiceBuilder, BoxError, util::BoxCloneService}; /// use std::time::Duration; /// # /// # struct Request; /// # struct Response; /// # impl Response { /// # fn new() -> Self { Self } /// # } /// /// // This service has a complex type that is hard to name /// let service = ServiceBuilder::new() /// .map_request(|req| { /// println!("received request"); /// req /// }) /// .map_response(|res| { /// println!("response produced"); /// res /// }) /// .load_shed() /// .concurrency_limit(64) /// .timeout(Duration::from_secs(10)) /// .service_fn(|req: Request| async { /// Ok::<_, BoxError>(Response::new()) /// }); /// # let service = assert_service(service); /// /// // `BoxCloneService` will erase the type so it's nameable /// let service: BoxCloneService = BoxCloneService::new(service); /// # let service = assert_service(service); /// /// // And we can still clone the service /// let cloned_service = service.clone(); /// # /// # fn assert_service(svc: S) -> S /// # where S: Service { svc } /// ``` pub struct BoxCloneService( Box< dyn CloneService>> + Send, >, ); impl BoxCloneService { /// Create a new `BoxCloneService`. pub fn new(inner: S) -> Self where S: Service + Clone + Send + 'static, S::Future: Send + 'static, { let inner = inner.map_future(|f| Box::pin(f) as _); BoxCloneService(Box::new(inner)) } /// Returns a [`Layer`] for wrapping a [`Service`] in a [`BoxCloneService`] /// middleware. /// /// [`Layer`]: crate::Layer pub fn layer() -> LayerFn Self> where S: Service + Clone + Send + 'static, S::Future: Send + 'static, { layer_fn(Self::new) } } impl Service for BoxCloneService { type Response = U; type Error = E; type Future = BoxFuture<'static, Result>; #[inline] fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { self.0.poll_ready(cx) } #[inline] fn call(&mut self, request: T) -> Self::Future { self.0.call(request) } } impl Clone for BoxCloneService { fn clone(&self) -> Self { Self(self.0.clone_box()) } } trait CloneService: Service { fn clone_box( &self, ) -> Box< dyn CloneService + Send, >; } impl CloneService for T where T: Service + Send + Clone + 'static, { fn clone_box( &self, ) -> Box + Send> { Box::new(self.clone()) } } impl fmt::Debug for BoxCloneService { fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result { fmt.debug_struct("BoxCloneService").finish() } } tower-0.4.13/src/util/call_all/common.rs000064400000000000000000000066400072674642500162540ustar 00000000000000use futures_core::{ready, Stream}; use pin_project_lite::pin_project; use std::{ future::Future, pin::Pin, task::{Context, Poll}, }; use tower_service::Service; pin_project! { /// The [`Future`] returned by the [`ServiceExt::call_all`] combinator. #[derive(Debug)] pub(crate) struct CallAll { service: Option, #[pin] stream: S, queue: Q, eof: bool, } } pub(crate) trait Drive { fn is_empty(&self) -> bool; fn push(&mut self, future: F); fn poll(&mut self, cx: &mut Context<'_>) -> Poll>; } impl CallAll where Svc: Service, Svc::Error: Into, S: Stream, Q: Drive, { pub(crate) fn new(service: Svc, stream: S, queue: Q) -> CallAll { CallAll { service: Some(service), stream, queue, eof: false, } } /// Extract the wrapped [`Service`]. pub(crate) fn into_inner(mut self) -> Svc { self.service.take().expect("Service already taken") } /// Extract the wrapped [`Service`]. pub(crate) fn take_service(self: Pin<&mut Self>) -> Svc { self.project() .service .take() .expect("Service already taken") } pub(crate) fn unordered(mut self) -> super::CallAllUnordered { assert!(self.queue.is_empty() && !self.eof); super::CallAllUnordered::new(self.service.take().unwrap(), self.stream) } } impl Stream for CallAll where Svc: Service, Svc::Error: Into, S: Stream, Q: Drive, { type Item = Result; fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { let mut this = self.project(); loop { // First, see if we have any responses to yield if let Poll::Ready(r) = this.queue.poll(cx) { if let Some(rsp) = r.transpose().map_err(Into::into)? { return Poll::Ready(Some(Ok(rsp))); } } // If there are no more requests coming, check if we're done if *this.eof { if this.queue.is_empty() { return Poll::Ready(None); } else { return Poll::Pending; } } // Then, see that the service is ready for another request let svc = this .service .as_mut() .expect("Using CallAll after extracing inner Service"); ready!(svc.poll_ready(cx)).map_err(Into::into)?; // If it is, gather the next request (if there is one), or return `Pending` if the // stream is not ready. // TODO: We probably want to "release" the slot we reserved in Svc if the // stream returns `Pending`. It may be a while until we get around to actually // using it. match ready!(this.stream.as_mut().poll_next(cx)) { Some(req) => { this.queue.push(svc.call(req)); } None => { // We're all done once any outstanding requests have completed *this.eof = true; } } } } } tower-0.4.13/src/util/call_all/mod.rs000064400000000000000000000006260072674642500155410ustar 00000000000000//! [`Stream`][stream] + [`Service`] => [`Stream`][stream]. //! //! [`Service`]: crate::Service //! [stream]: https://docs.rs/futures/latest/futures/stream/trait.Stream.html mod common; mod ordered; mod unordered; #[allow(unreachable_pub)] // https://github.com/rust-lang/rust/issues/57411 pub use self::{ordered::CallAll, unordered::CallAllUnordered}; tower-0.4.13/src/util/call_all/ordered.rs000064400000000000000000000133350072674642500164070ustar 00000000000000//! [`Stream`][stream] + [`Service`] => [`Stream`][stream]. //! //! [`Service`]: crate::Service //! [stream]: https://docs.rs/futures/latest/futures/stream/trait.Stream.html use super::common; use futures_core::Stream; use futures_util::stream::FuturesOrdered; use pin_project_lite::pin_project; use std::{ future::Future, pin::Pin, task::{Context, Poll}, }; use tower_service::Service; pin_project! { /// This is a [`Stream`] of responses resulting from calling the wrapped [`Service`] for each /// request received on the wrapped [`Stream`]. /// /// ```rust /// # use std::task::{Poll, Context}; /// # use std::cell::Cell; /// # use std::error::Error; /// # use std::rc::Rc; /// # /// use futures::future::{ready, Ready}; /// use futures::StreamExt; /// use futures::channel::mpsc; /// use tower_service::Service; /// use tower::util::ServiceExt; /// /// // First, we need to have a Service to process our requests. /// #[derive(Debug, Eq, PartialEq)] /// struct FirstLetter; /// impl Service<&'static str> for FirstLetter { /// type Response = &'static str; /// type Error = Box; /// type Future = Ready>; /// /// fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { /// Poll::Ready(Ok(())) /// } /// /// fn call(&mut self, req: &'static str) -> Self::Future { /// ready(Ok(&req[..1])) /// } /// } /// /// #[tokio::main] /// async fn main() { /// // Next, we need a Stream of requests. // TODO(eliza): when `tokio-util` has a nice way to convert MPSCs to streams, // tokio::sync::mpsc again? /// let (mut reqs, rx) = mpsc::unbounded(); /// // Note that we have to help Rust out here by telling it what error type to use. /// // Specifically, it has to be From + From. /// let mut rsps = FirstLetter.call_all(rx); /// /// // Now, let's send a few requests and then check that we get the corresponding responses. /// reqs.unbounded_send("one").unwrap(); /// reqs.unbounded_send("two").unwrap(); /// reqs.unbounded_send("three").unwrap(); /// drop(reqs); /// /// // We then loop over the response Strem that we get back from call_all. /// let mut i = 0usize; /// while let Some(rsp) = rsps.next().await { /// // Each response is a Result (we could also have used TryStream::try_next) /// match (i + 1, rsp.unwrap()) { /// (1, "o") | /// (2, "t") | /// (3, "t") => {} /// (n, i) => { /// unreachable!("{}. response was '{}'", n, i); /// } /// } /// i += 1; /// } /// /// // And at the end, we can get the Service back when there are no more requests. /// assert_eq!(rsps.into_inner(), FirstLetter); /// } /// ``` /// /// [`Stream`]: https://docs.rs/futures/latest/futures/stream/trait.Stream.html #[derive(Debug)] pub struct CallAll where Svc: Service, S: Stream, { #[pin] inner: common::CallAll>, } } impl CallAll where Svc: Service, Svc::Error: Into, S: Stream, { /// Create new [`CallAll`] combinator. /// /// Each request yielded by `stream` is passed to `svc`, and the resulting responses are /// yielded in the same order by the implementation of [`Stream`] for [`CallAll`]. /// /// [`Stream`]: https://docs.rs/futures/latest/futures/stream/trait.Stream.html pub fn new(service: Svc, stream: S) -> CallAll { CallAll { inner: common::CallAll::new(service, stream, FuturesOrdered::new()), } } /// Extract the wrapped [`Service`]. /// /// # Panics /// /// Panics if [`take_service`] was already called. /// /// [`take_service`]: crate::util::CallAll::take_service pub fn into_inner(self) -> Svc { self.inner.into_inner() } /// Extract the wrapped [`Service`]. /// /// This [`CallAll`] can no longer be used after this function has been called. /// /// # Panics /// /// Panics if [`take_service`] was already called. /// /// [`take_service`]: crate::util::CallAll::take_service pub fn take_service(self: Pin<&mut Self>) -> Svc { self.project().inner.take_service() } /// Return responses as they are ready, regardless of the initial order. /// /// This function must be called before the stream is polled. /// /// # Panics /// /// Panics if [`poll`] was called. /// /// [`poll`]: std::future::Future::poll pub fn unordered(self) -> super::CallAllUnordered { self.inner.unordered() } } impl Stream for CallAll where Svc: Service, Svc::Error: Into, S: Stream, { type Item = Result; fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { self.project().inner.poll_next(cx) } } impl common::Drive for FuturesOrdered { fn is_empty(&self) -> bool { FuturesOrdered::is_empty(self) } fn push(&mut self, future: F) { FuturesOrdered::push(self, future) } fn poll(&mut self, cx: &mut Context<'_>) -> Poll> { Stream::poll_next(Pin::new(self), cx) } } tower-0.4.13/src/util/call_all/unordered.rs000064400000000000000000000054160072674642500167530ustar 00000000000000//! [`Stream`][stream] + [`Service`] => [`Stream`][stream]. //! //! [`Service`]: crate::Service //! [stream]: https://docs.rs/futures/latest/futures/stream/trait.Stream.html use super::common; use futures_core::Stream; use futures_util::stream::FuturesUnordered; use pin_project_lite::pin_project; use std::{ future::Future, pin::Pin, task::{Context, Poll}, }; use tower_service::Service; pin_project! { /// A stream of responses received from the inner service in received order. /// /// Similar to [`CallAll`] except, instead of yielding responses in request order, /// responses are returned as they are available. /// /// [`CallAll`]: crate::util::CallAll #[derive(Debug)] pub struct CallAllUnordered where Svc: Service, S: Stream, { #[pin] inner: common::CallAll>, } } impl CallAllUnordered where Svc: Service, Svc::Error: Into, S: Stream, { /// Create new [`CallAllUnordered`] combinator. /// /// [`Stream`]: https://docs.rs/futures/latest/futures/stream/trait.Stream.html pub fn new(service: Svc, stream: S) -> CallAllUnordered { CallAllUnordered { inner: common::CallAll::new(service, stream, FuturesUnordered::new()), } } /// Extract the wrapped [`Service`]. /// /// # Panics /// /// Panics if [`take_service`] was already called. /// /// [`take_service`]: crate::util::CallAllUnordered::take_service pub fn into_inner(self) -> Svc { self.inner.into_inner() } /// Extract the wrapped `Service`. /// /// This [`CallAllUnordered`] can no longer be used after this function has been called. /// /// # Panics /// /// Panics if [`take_service`] was already called. /// /// [`take_service`]: crate::util::CallAllUnordered::take_service pub fn take_service(self: Pin<&mut Self>) -> Svc { self.project().inner.take_service() } } impl Stream for CallAllUnordered where Svc: Service, Svc::Error: Into, S: Stream, { type Item = Result; fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { self.project().inner.poll_next(cx) } } impl common::Drive for FuturesUnordered { fn is_empty(&self) -> bool { FuturesUnordered::is_empty(self) } fn push(&mut self, future: F) { FuturesUnordered::push(self, future) } fn poll(&mut self, cx: &mut Context<'_>) -> Poll> { Stream::poll_next(Pin::new(self), cx) } } tower-0.4.13/src/util/either.rs000064400000000000000000000050200072674642500144700ustar 00000000000000//! Contains [`Either`] and related types and functions. //! //! See [`Either`] documentation for more details. use futures_core::ready; use pin_project::pin_project; use std::{ future::Future, pin::Pin, task::{Context, Poll}, }; use tower_layer::Layer; use tower_service::Service; /// Combine two different service types into a single type. /// /// Both services must be of the same request, response, and error types. /// [`Either`] is useful for handling conditional branching in service middleware /// to different inner service types. #[pin_project(project = EitherProj)] #[derive(Clone, Debug)] pub enum Either { /// One type of backing [`Service`]. A(#[pin] A), /// The other type of backing [`Service`]. B(#[pin] B), } impl Service for Either where A: Service, A::Error: Into, B: Service, B::Error: Into, { type Response = A::Response; type Error = crate::BoxError; type Future = Either; fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { use self::Either::*; match self { A(service) => Poll::Ready(Ok(ready!(service.poll_ready(cx)).map_err(Into::into)?)), B(service) => Poll::Ready(Ok(ready!(service.poll_ready(cx)).map_err(Into::into)?)), } } fn call(&mut self, request: Request) -> Self::Future { use self::Either::*; match self { A(service) => A(service.call(request)), B(service) => B(service.call(request)), } } } impl Future for Either where A: Future>, AE: Into, B: Future>, BE: Into, { type Output = Result; fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { match self.project() { EitherProj::A(fut) => Poll::Ready(Ok(ready!(fut.poll(cx)).map_err(Into::into)?)), EitherProj::B(fut) => Poll::Ready(Ok(ready!(fut.poll(cx)).map_err(Into::into)?)), } } } impl Layer for Either where A: Layer, B: Layer, { type Service = Either; fn layer(&self, inner: S) -> Self::Service { match self { Either::A(layer) => Either::A(layer.layer(inner)), Either::B(layer) => Either::B(layer.layer(inner)), } } } tower-0.4.13/src/util/future_service.rs000064400000000000000000000152770072674642500162610ustar 00000000000000use std::fmt; use std::{ future::Future, pin::Pin, task::{Context, Poll}, }; use tower_service::Service; /// Returns a new [`FutureService`] for the given future. /// /// A [`FutureService`] allows you to treat a future that resolves to a service as a service. This /// can be useful for services that are created asynchronously. /// /// # Example /// ``` /// use tower::{service_fn, Service, ServiceExt}; /// use tower::util::future_service; /// use std::convert::Infallible; /// /// # fn main() { /// # async { /// // A future which outputs a type implementing `Service`. /// let future_of_a_service = async { /// let svc = service_fn(|_req: ()| async { Ok::<_, Infallible>("ok") }); /// Ok::<_, Infallible>(svc) /// }; /// /// // Wrap the future with a `FutureService`, allowing it to be used /// // as a service without awaiting the future's completion: /// let mut svc = future_service(Box::pin(future_of_a_service)); /// /// // Now, when we wait for the service to become ready, it will /// // drive the future to completion internally. /// let svc = svc.ready().await.unwrap(); /// let res = svc.call(()).await.unwrap(); /// # }; /// # } /// ``` /// /// # Regarding the [`Unpin`] bound /// /// The [`Unpin`] bound on `F` is necessary because the future will be polled in /// [`Service::poll_ready`] which doesn't have a pinned receiver (it takes `&mut self` and not `self: /// Pin<&mut Self>`). So we cannot put the future into a `Pin` without requiring `Unpin`. /// /// This will most likely come up if you're calling `future_service` with an async block. In that /// case you can use `Box::pin(async { ... })` as shown in the example. pub fn future_service(future: F) -> FutureService where F: Future> + Unpin, S: Service, { FutureService::new(future) } /// A type that implements [`Service`] for a [`Future`] that produces a [`Service`]. /// /// See [`future_service`] for more details. #[derive(Clone)] pub struct FutureService { state: State, } impl FutureService { /// Returns a new [`FutureService`] for the given future. /// /// A [`FutureService`] allows you to treat a future that resolves to a service as a service. This /// can be useful for services that are created asynchronously. /// /// # Example /// ``` /// use tower::{service_fn, Service, ServiceExt}; /// use tower::util::FutureService; /// use std::convert::Infallible; /// /// # fn main() { /// # async { /// // A future which outputs a type implementing `Service`. /// let future_of_a_service = async { /// let svc = service_fn(|_req: ()| async { Ok::<_, Infallible>("ok") }); /// Ok::<_, Infallible>(svc) /// }; /// /// // Wrap the future with a `FutureService`, allowing it to be used /// // as a service without awaiting the future's completion: /// let mut svc = FutureService::new(Box::pin(future_of_a_service)); /// /// // Now, when we wait for the service to become ready, it will /// // drive the future to completion internally. /// let svc = svc.ready().await.unwrap(); /// let res = svc.call(()).await.unwrap(); /// # }; /// # } /// ``` /// /// # Regarding the [`Unpin`] bound /// /// The [`Unpin`] bound on `F` is necessary because the future will be polled in /// [`Service::poll_ready`] which doesn't have a pinned receiver (it takes `&mut self` and not `self: /// Pin<&mut Self>`). So we cannot put the future into a `Pin` without requiring `Unpin`. /// /// This will most likely come up if you're calling `future_service` with an async block. In that /// case you can use `Box::pin(async { ... })` as shown in the example. pub fn new(future: F) -> Self { Self { state: State::Future(future), } } } impl fmt::Debug for FutureService where S: fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { f.debug_struct("FutureService") .field("state", &format_args!("{:?}", self.state)) .finish() } } #[derive(Clone)] enum State { Future(F), Service(S), } impl fmt::Debug for State where S: fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { match self { State::Future(_) => f .debug_tuple("State::Future") .field(&format_args!("<{}>", std::any::type_name::())) .finish(), State::Service(svc) => f.debug_tuple("State::Service").field(svc).finish(), } } } impl Service for FutureService where F: Future> + Unpin, S: Service, { type Response = S::Response; type Error = E; type Future = S::Future; fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { loop { self.state = match &mut self.state { State::Future(fut) => { let fut = Pin::new(fut); let svc = futures_core::ready!(fut.poll(cx)?); State::Service(svc) } State::Service(svc) => return svc.poll_ready(cx), }; } } fn call(&mut self, req: R) -> Self::Future { if let State::Service(svc) = &mut self.state { svc.call(req) } else { panic!("FutureService::call was called before FutureService::poll_ready") } } } #[cfg(test)] mod tests { use super::*; use crate::util::{future_service, ServiceExt}; use crate::Service; use futures::future::{ready, Ready}; use std::convert::Infallible; #[tokio::test] async fn pending_service_debug_impl() { let mut pending_svc = future_service(ready(Ok(DebugService))); assert_eq!( format!("{:?}", pending_svc), "FutureService { state: State::Future(>>) }" ); pending_svc.ready().await.unwrap(); assert_eq!( format!("{:?}", pending_svc), "FutureService { state: State::Service(DebugService) }" ); } #[derive(Debug)] struct DebugService; impl Service<()> for DebugService { type Response = (); type Error = Infallible; type Future = Ready>; fn poll_ready(&mut self, _cx: &mut Context<'_>) -> Poll> { Ok(()).into() } fn call(&mut self, _req: ()) -> Self::Future { ready(Ok(())) } } } tower-0.4.13/src/util/map_err.rs000064400000000000000000000043700072674642500146440ustar 00000000000000use futures_util::{future, TryFutureExt}; use std::fmt; use std::task::{Context, Poll}; use tower_layer::Layer; use tower_service::Service; /// Service returned by the [`map_err`] combinator. /// /// [`map_err`]: crate::util::ServiceExt::map_err #[derive(Clone)] pub struct MapErr { inner: S, f: F, } impl fmt::Debug for MapErr where S: fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { f.debug_struct("MapErr") .field("inner", &self.inner) .field("f", &format_args!("{}", std::any::type_name::())) .finish() } } /// A [`Layer`] that produces [`MapErr`] services. /// /// [`Layer`]: tower_layer::Layer #[derive(Clone, Debug)] pub struct MapErrLayer { f: F, } opaque_future! { /// Response future from [`MapErr`] services. /// /// [`MapErr`]: crate::util::MapErr pub type MapErrFuture = future::MapErr; } impl MapErr { /// Creates a new [`MapErr`] service. pub fn new(inner: S, f: F) -> Self { MapErr { f, inner } } /// Returns a new [`Layer`] that produces [`MapErr`] services. /// /// This is a convenience function that simply calls [`MapErrLayer::new`]. /// /// [`Layer`]: tower_layer::Layer pub fn layer(f: F) -> MapErrLayer { MapErrLayer { f } } } impl Service for MapErr where S: Service, F: FnOnce(S::Error) -> Error + Clone, { type Response = S::Response; type Error = Error; type Future = MapErrFuture; #[inline] fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { self.inner.poll_ready(cx).map_err(self.f.clone()) } #[inline] fn call(&mut self, request: Request) -> Self::Future { MapErrFuture::new(self.inner.call(request).map_err(self.f.clone())) } } impl MapErrLayer { /// Creates a new [`MapErrLayer`]. pub fn new(f: F) -> Self { MapErrLayer { f } } } impl Layer for MapErrLayer where F: Clone, { type Service = MapErr; fn layer(&self, inner: S) -> Self::Service { MapErr { f: self.f.clone(), inner, } } } tower-0.4.13/src/util/map_future.rs000064400000000000000000000051260072674642500153660ustar 00000000000000use std::{ fmt, future::Future, task::{Context, Poll}, }; use tower_layer::Layer; use tower_service::Service; /// [`Service`] returned by the [`map_future`] combinator. /// /// [`map_future`]: crate::util::ServiceExt::map_future #[derive(Clone)] pub struct MapFuture { inner: S, f: F, } impl MapFuture { /// Creates a new [`MapFuture`] service. pub fn new(inner: S, f: F) -> Self { Self { inner, f } } /// Returns a new [`Layer`] that produces [`MapFuture`] services. /// /// This is a convenience function that simply calls [`MapFutureLayer::new`]. /// /// [`Layer`]: tower_layer::Layer pub fn layer(f: F) -> MapFutureLayer { MapFutureLayer::new(f) } /// Get a reference to the inner service pub fn get_ref(&self) -> &S { &self.inner } /// Get a mutable reference to the inner service pub fn get_mut(&mut self) -> &mut S { &mut self.inner } /// Consume `self`, returning the inner service pub fn into_inner(self) -> S { self.inner } } impl Service for MapFuture where S: Service, F: FnMut(S::Future) -> Fut, E: From, Fut: Future>, { type Response = T; type Error = E; type Future = Fut; fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { self.inner.poll_ready(cx).map_err(From::from) } fn call(&mut self, req: R) -> Self::Future { (self.f)(self.inner.call(req)) } } impl fmt::Debug for MapFuture where S: fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { f.debug_struct("MapFuture") .field("inner", &self.inner) .field("f", &format_args!("{}", std::any::type_name::())) .finish() } } /// A [`Layer`] that produces a [`MapFuture`] service. /// /// [`Layer`]: tower_layer::Layer #[derive(Clone)] pub struct MapFutureLayer { f: F, } impl MapFutureLayer { /// Creates a new [`MapFutureLayer`] layer. pub fn new(f: F) -> Self { Self { f } } } impl Layer for MapFutureLayer where F: Clone, { type Service = MapFuture; fn layer(&self, inner: S) -> Self::Service { MapFuture::new(inner, self.f.clone()) } } impl fmt::Debug for MapFutureLayer { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { f.debug_struct("MapFutureLayer") .field("f", &format_args!("{}", std::any::type_name::())) .finish() } } tower-0.4.13/src/util/map_request.rs000064400000000000000000000040000072674642500155320ustar 00000000000000use std::fmt; use std::task::{Context, Poll}; use tower_layer::Layer; use tower_service::Service; /// Service returned by the [`MapRequest`] combinator. /// /// [`MapRequest`]: crate::util::ServiceExt::map_request #[derive(Clone)] pub struct MapRequest { inner: S, f: F, } impl fmt::Debug for MapRequest where S: fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { f.debug_struct("MapRequest") .field("inner", &self.inner) .field("f", &format_args!("{}", std::any::type_name::())) .finish() } } impl MapRequest { /// Creates a new [`MapRequest`] service. pub fn new(inner: S, f: F) -> Self { MapRequest { inner, f } } /// Returns a new [`Layer`] that produces [`MapRequest`] services. /// /// This is a convenience function that simply calls [`MapRequestLayer::new`]. /// /// [`Layer`]: tower_layer::Layer pub fn layer(f: F) -> MapRequestLayer { MapRequestLayer { f } } } impl Service for MapRequest where S: Service, F: FnMut(R1) -> R2, { type Response = S::Response; type Error = S::Error; type Future = S::Future; #[inline] fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { self.inner.poll_ready(cx) } #[inline] fn call(&mut self, request: R1) -> S::Future { self.inner.call((self.f)(request)) } } /// A [`Layer`] that produces [`MapRequest`] services. /// /// [`Layer`]: tower_layer::Layer #[derive(Clone, Debug)] pub struct MapRequestLayer { f: F, } impl MapRequestLayer { /// Creates a new [`MapRequestLayer`]. pub fn new(f: F) -> Self { MapRequestLayer { f } } } impl Layer for MapRequestLayer where F: Clone, { type Service = MapRequest; fn layer(&self, inner: S) -> Self::Service { MapRequest { f: self.f.clone(), inner, } } } tower-0.4.13/src/util/map_response.rs000064400000000000000000000045670072674642500157220ustar 00000000000000use futures_util::{future::MapOk, TryFutureExt}; use std::fmt; use std::task::{Context, Poll}; use tower_layer::Layer; use tower_service::Service; /// Service returned by the [`map_response`] combinator. /// /// [`map_response`]: crate::util::ServiceExt::map_response #[derive(Clone)] pub struct MapResponse { inner: S, f: F, } impl fmt::Debug for MapResponse where S: fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { f.debug_struct("MapResponse") .field("inner", &self.inner) .field("f", &format_args!("{}", std::any::type_name::())) .finish() } } /// A [`Layer`] that produces a [`MapResponse`] service. /// /// [`Layer`]: tower_layer::Layer #[derive(Debug, Clone)] pub struct MapResponseLayer { f: F, } opaque_future! { /// Response future from [`MapResponse`] services. /// /// [`MapResponse`]: crate::util::MapResponse pub type MapResponseFuture = MapOk; } impl MapResponse { /// Creates a new `MapResponse` service. pub fn new(inner: S, f: F) -> Self { MapResponse { f, inner } } /// Returns a new [`Layer`] that produces [`MapResponse`] services. /// /// This is a convenience function that simply calls [`MapResponseLayer::new`]. /// /// [`Layer`]: tower_layer::Layer pub fn layer(f: F) -> MapResponseLayer { MapResponseLayer { f } } } impl Service for MapResponse where S: Service, F: FnOnce(S::Response) -> Response + Clone, { type Response = Response; type Error = S::Error; type Future = MapResponseFuture; #[inline] fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { self.inner.poll_ready(cx) } #[inline] fn call(&mut self, request: Request) -> Self::Future { MapResponseFuture::new(self.inner.call(request).map_ok(self.f.clone())) } } impl MapResponseLayer { /// Creates a new [`MapResponseLayer`] layer. pub fn new(f: F) -> Self { MapResponseLayer { f } } } impl Layer for MapResponseLayer where F: Clone, { type Service = MapResponse; fn layer(&self, inner: S) -> Self::Service { MapResponse { f: self.f.clone(), inner, } } } tower-0.4.13/src/util/map_result.rs000064400000000000000000000046130072674642500153720ustar 00000000000000use futures_util::{future::Map, FutureExt}; use std::fmt; use std::task::{Context, Poll}; use tower_layer::Layer; use tower_service::Service; /// Service returned by the [`map_result`] combinator. /// /// [`map_result`]: crate::util::ServiceExt::map_result #[derive(Clone)] pub struct MapResult { inner: S, f: F, } impl fmt::Debug for MapResult where S: fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { f.debug_struct("MapResult") .field("inner", &self.inner) .field("f", &format_args!("{}", std::any::type_name::())) .finish() } } /// A [`Layer`] that produces a [`MapResult`] service. /// /// [`Layer`]: tower_layer::Layer #[derive(Debug, Clone)] pub struct MapResultLayer { f: F, } opaque_future! { /// Response future from [`MapResult`] services. /// /// [`MapResult`]: crate::util::MapResult pub type MapResultFuture = Map; } impl MapResult { /// Creates a new [`MapResult`] service. pub fn new(inner: S, f: F) -> Self { MapResult { f, inner } } /// Returns a new [`Layer`] that produces [`MapResult`] services. /// /// This is a convenience function that simply calls [`MapResultLayer::new`]. /// /// [`Layer`]: tower_layer::Layer pub fn layer(f: F) -> MapResultLayer { MapResultLayer { f } } } impl Service for MapResult where S: Service, Error: From, F: FnOnce(Result) -> Result + Clone, { type Response = Response; type Error = Error; type Future = MapResultFuture; #[inline] fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { self.inner.poll_ready(cx).map_err(Into::into) } #[inline] fn call(&mut self, request: Request) -> Self::Future { MapResultFuture::new(self.inner.call(request).map(self.f.clone())) } } impl MapResultLayer { /// Creates a new [`MapResultLayer`] layer. pub fn new(f: F) -> Self { MapResultLayer { f } } } impl Layer for MapResultLayer where F: Clone, { type Service = MapResult; fn layer(&self, inner: S) -> Self::Service { MapResult { f: self.f.clone(), inner, } } } tower-0.4.13/src/util/mod.rs000064400000000000000000001056300072674642500137770ustar 00000000000000//! Various utility types and functions that are generally used with Tower. mod and_then; mod boxed; mod boxed_clone; mod call_all; mod either; mod future_service; mod map_err; mod map_request; mod map_response; mod map_result; mod map_future; mod oneshot; mod optional; mod ready; mod service_fn; mod then; #[allow(deprecated)] pub use self::{ and_then::{AndThen, AndThenLayer}, boxed::{BoxLayer, BoxService, UnsyncBoxService}, boxed_clone::BoxCloneService, either::Either, future_service::{future_service, FutureService}, map_err::{MapErr, MapErrLayer}, map_future::{MapFuture, MapFutureLayer}, map_request::{MapRequest, MapRequestLayer}, map_response::{MapResponse, MapResponseLayer}, map_result::{MapResult, MapResultLayer}, oneshot::Oneshot, optional::Optional, ready::{Ready, ReadyAnd, ReadyOneshot}, service_fn::{service_fn, ServiceFn}, then::{Then, ThenLayer}, }; pub use self::call_all::{CallAll, CallAllUnordered}; use std::future::Future; use crate::layer::util::Identity; pub mod error { //! Error types pub use super::optional::error as optional; } pub mod future { //! Future types pub use super::and_then::AndThenFuture; pub use super::map_err::MapErrFuture; pub use super::map_response::MapResponseFuture; pub use super::map_result::MapResultFuture; pub use super::optional::future as optional; pub use super::then::ThenFuture; } /// An extension trait for `Service`s that provides a variety of convenient /// adapters pub trait ServiceExt: tower_service::Service { /// Yields a mutable reference to the service when it is ready to accept a request. fn ready(&mut self) -> Ready<'_, Self, Request> where Self: Sized, { Ready::new(self) } /// Yields a mutable reference to the service when it is ready to accept a request. #[deprecated( since = "0.4.6", note = "please use the `ServiceExt::ready` method instead" )] #[allow(deprecated)] fn ready_and(&mut self) -> ReadyAnd<'_, Self, Request> where Self: Sized, { ReadyAnd::new(self) } /// Yields the service when it is ready to accept a request. fn ready_oneshot(self) -> ReadyOneshot where Self: Sized, { ReadyOneshot::new(self) } /// Consume this `Service`, calling with the providing request once it is ready. fn oneshot(self, req: Request) -> Oneshot where Self: Sized, { Oneshot::new(self, req) } /// Process all requests from the given [`Stream`], and produce a [`Stream`] of their responses. /// /// This is essentially [`Stream`][stream] + `Self` => [`Stream`][stream]. See the documentation for [`CallAll`] for /// details. /// /// [`Stream`]: https://docs.rs/futures/latest/futures/stream/trait.Stream.html /// [stream]: https://docs.rs/futures/latest/futures/stream/trait.Stream.html fn call_all(self, reqs: S) -> CallAll where Self: Sized, Self::Error: Into, S: futures_core::Stream, { CallAll::new(self, reqs) } /// Executes a new future after this service's future resolves. This does /// not alter the behaviour of the [`poll_ready`] method. /// /// This method can be used to change the [`Response`] type of the service /// into a different type. You can use this method to chain along a computation once the /// service's response has been resolved. /// /// [`Response`]: crate::Service::Response /// [`poll_ready`]: crate::Service::poll_ready /// /// # Example /// ``` /// # use std::task::{Poll, Context}; /// # use tower::{Service, ServiceExt}; /// # /// # struct DatabaseService; /// # impl DatabaseService { /// # fn new(address: &str) -> Self { /// # DatabaseService /// # } /// # } /// # /// # struct Record { /// # pub name: String, /// # pub age: u16 /// # } /// # /// # impl Service for DatabaseService { /// # type Response = Record; /// # type Error = u8; /// # type Future = futures_util::future::Ready>; /// # /// # fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { /// # Poll::Ready(Ok(())) /// # } /// # /// # fn call(&mut self, request: u32) -> Self::Future { /// # futures_util::future::ready(Ok(Record { name: "Jack".into(), age: 32 })) /// # } /// # } /// # /// # async fn avatar_lookup(name: String) -> Result, u8> { Ok(vec![]) } /// # /// # fn main() { /// # async { /// // A service returning Result /// let service = DatabaseService::new("127.0.0.1:8080"); /// /// // Map the response into a new response /// let mut new_service = service.and_then(|record: Record| async move { /// let name = record.name; /// avatar_lookup(name).await /// }); /// /// // Call the new service /// let id = 13; /// let avatar = new_service.call(id).await.unwrap(); /// # }; /// # } /// ``` fn and_then(self, f: F) -> AndThen where Self: Sized, F: Clone, { AndThen::new(self, f) } /// Maps this service's response value to a different value. This does not /// alter the behaviour of the [`poll_ready`] method. /// /// This method can be used to change the [`Response`] type of the service /// into a different type. It is similar to the [`Result::map`] /// method. You can use this method to chain along a computation once the /// service's response has been resolved. /// /// [`Response`]: crate::Service::Response /// [`poll_ready`]: crate::Service::poll_ready /// /// # Example /// ``` /// # use std::task::{Poll, Context}; /// # use tower::{Service, ServiceExt}; /// # /// # struct DatabaseService; /// # impl DatabaseService { /// # fn new(address: &str) -> Self { /// # DatabaseService /// # } /// # } /// # /// # struct Record { /// # pub name: String, /// # pub age: u16 /// # } /// # /// # impl Service for DatabaseService { /// # type Response = Record; /// # type Error = u8; /// # type Future = futures_util::future::Ready>; /// # /// # fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { /// # Poll::Ready(Ok(())) /// # } /// # /// # fn call(&mut self, request: u32) -> Self::Future { /// # futures_util::future::ready(Ok(Record { name: "Jack".into(), age: 32 })) /// # } /// # } /// # /// # fn main() { /// # async { /// // A service returning Result /// let service = DatabaseService::new("127.0.0.1:8080"); /// /// // Map the response into a new response /// let mut new_service = service.map_response(|record| record.name); /// /// // Call the new service /// let id = 13; /// let name = new_service /// .ready() /// .await? /// .call(id) /// .await?; /// # Ok::<(), u8>(()) /// # }; /// # } /// ``` fn map_response(self, f: F) -> MapResponse where Self: Sized, F: FnOnce(Self::Response) -> Response + Clone, { MapResponse::new(self, f) } /// Maps this service's error value to a different value. This does not /// alter the behaviour of the [`poll_ready`] method. /// /// This method can be used to change the [`Error`] type of the service /// into a different type. It is similar to the [`Result::map_err`] method. /// /// [`Error`]: crate::Service::Error /// [`poll_ready`]: crate::Service::poll_ready /// /// # Example /// ``` /// # use std::task::{Poll, Context}; /// # use tower::{Service, ServiceExt}; /// # /// # struct DatabaseService; /// # impl DatabaseService { /// # fn new(address: &str) -> Self { /// # DatabaseService /// # } /// # } /// # /// # struct Error { /// # pub code: u32, /// # pub message: String /// # } /// # /// # impl Service for DatabaseService { /// # type Response = String; /// # type Error = Error; /// # type Future = futures_util::future::Ready>; /// # /// # fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { /// # Poll::Ready(Ok(())) /// # } /// # /// # fn call(&mut self, request: u32) -> Self::Future { /// # futures_util::future::ready(Ok(String::new())) /// # } /// # } /// # /// # fn main() { /// # async { /// // A service returning Result<_, Error> /// let service = DatabaseService::new("127.0.0.1:8080"); /// /// // Map the error to a new error /// let mut new_service = service.map_err(|err| err.code); /// /// // Call the new service /// let id = 13; /// let code = new_service /// .ready() /// .await? /// .call(id) /// .await /// .unwrap_err(); /// # Ok::<(), u32>(()) /// # }; /// # } /// ``` fn map_err(self, f: F) -> MapErr where Self: Sized, F: FnOnce(Self::Error) -> Error + Clone, { MapErr::new(self, f) } /// Maps this service's result type (`Result`) /// to a different value, regardless of whether the future succeeds or /// fails. /// /// This is similar to the [`map_response`] and [`map_err`] combinators, /// except that the *same* function is invoked when the service's future /// completes, whether it completes successfully or fails. This function /// takes the [`Result`] returned by the service's future, and returns a /// [`Result`]. /// /// Like the standard library's [`Result::and_then`], this method can be /// used to implement control flow based on `Result` values. For example, it /// may be used to implement error recovery, by turning some [`Err`] /// responses from the service into [`Ok`] responses. Similarly, some /// successful responses from the service could be rejected, by returning an /// [`Err`] conditionally, depending on the value inside the [`Ok`.] Finally, /// this method can also be used to implement behaviors that must run when a /// service's future completes, regardless of whether it succeeded or failed. /// /// This method can be used to change the [`Response`] type of the service /// into a different type. It can also be used to change the [`Error`] type /// of the service. However, because the [`map_result`] function is not applied /// to the errors returned by the service's [`poll_ready`] method, it must /// be possible to convert the service's [`Error`] type into the error type /// returned by the [`map_result`] function. This is trivial when the function /// returns the same error type as the service, but in other cases, it can /// be useful to use [`BoxError`] to erase differing error types. /// /// # Examples /// /// Recovering from certain errors: /// /// ``` /// # use std::task::{Poll, Context}; /// # use tower::{Service, ServiceExt}; /// # /// # struct DatabaseService; /// # impl DatabaseService { /// # fn new(address: &str) -> Self { /// # DatabaseService /// # } /// # } /// # /// # struct Record { /// # pub name: String, /// # pub age: u16 /// # } /// # #[derive(Debug)] /// # enum DbError { /// # Parse(std::num::ParseIntError), /// # NoRecordsFound, /// # } /// # /// # impl Service for DatabaseService { /// # type Response = Vec; /// # type Error = DbError; /// # type Future = futures_util::future::Ready, DbError>>; /// # /// # fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { /// # Poll::Ready(Ok(())) /// # } /// # /// # fn call(&mut self, request: u32) -> Self::Future { /// # futures_util::future::ready(Ok(vec![Record { name: "Jack".into(), age: 32 }])) /// # } /// # } /// # /// # fn main() { /// # async { /// // A service returning Result, DbError> /// let service = DatabaseService::new("127.0.0.1:8080"); /// /// // If the database returns no records for the query, we just want an empty `Vec`. /// let mut new_service = service.map_result(|result| match result { /// // If the error indicates that no records matched the query, return an empty /// // `Vec` instead. /// Err(DbError::NoRecordsFound) => Ok(Vec::new()), /// // Propagate all other responses (`Ok` and `Err`) unchanged /// x => x, /// }); /// /// // Call the new service /// let id = 13; /// let name = new_service /// .ready() /// .await? /// .call(id) /// .await?; /// # Ok::<(), DbError>(()) /// # }; /// # } /// ``` /// /// Rejecting some `Ok` responses: /// /// ``` /// # use std::task::{Poll, Context}; /// # use tower::{Service, ServiceExt}; /// # /// # struct DatabaseService; /// # impl DatabaseService { /// # fn new(address: &str) -> Self { /// # DatabaseService /// # } /// # } /// # /// # struct Record { /// # pub name: String, /// # pub age: u16 /// # } /// # type DbError = String; /// # type AppError = String; /// # /// # impl Service for DatabaseService { /// # type Response = Record; /// # type Error = DbError; /// # type Future = futures_util::future::Ready>; /// # /// # fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { /// # Poll::Ready(Ok(())) /// # } /// # /// # fn call(&mut self, request: u32) -> Self::Future { /// # futures_util::future::ready(Ok(Record { name: "Jack".into(), age: 32 })) /// # } /// # } /// # /// # fn main() { /// # async { /// use tower::BoxError; /// /// // A service returning Result /// let service = DatabaseService::new("127.0.0.1:8080"); /// /// // If the user is zero years old, return an error. /// let mut new_service = service.map_result(|result| { /// let record = result?; /// /// if record.age == 0 { /// // Users must have been born to use our app! /// let app_error = AppError::from("users cannot be 0 years old!"); /// /// // Box the error to erase its type (as it can be an `AppError` /// // *or* the inner service's `DbError`). /// return Err(BoxError::from(app_error)); /// } /// /// // Otherwise, return the record. /// Ok(record) /// }); /// /// // Call the new service /// let id = 13; /// let record = new_service /// .ready() /// .await? /// .call(id) /// .await?; /// # Ok::<(), BoxError>(()) /// # }; /// # } /// ``` /// /// Performing an action that must be run for both successes and failures: /// /// ``` /// # use std::convert::TryFrom; /// # use std::task::{Poll, Context}; /// # use tower::{Service, ServiceExt}; /// # /// # struct DatabaseService; /// # impl DatabaseService { /// # fn new(address: &str) -> Self { /// # DatabaseService /// # } /// # } /// # /// # impl Service for DatabaseService { /// # type Response = String; /// # type Error = u8; /// # type Future = futures_util::future::Ready>; /// # /// # fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { /// # Poll::Ready(Ok(())) /// # } /// # /// # fn call(&mut self, request: u32) -> Self::Future { /// # futures_util::future::ready(Ok(String::new())) /// # } /// # } /// # /// # fn main() { /// # async { /// // A service returning Result /// let service = DatabaseService::new("127.0.0.1:8080"); /// /// // Print a message whenever a query completes. /// let mut new_service = service.map_result(|result| { /// println!("query completed; success={}", result.is_ok()); /// result /// }); /// /// // Call the new service /// let id = 13; /// let response = new_service /// .ready() /// .await? /// .call(id) /// .await; /// # response /// # }; /// # } /// ``` /// /// [`map_response`]: ServiceExt::map_response /// [`map_err`]: ServiceExt::map_err /// [`map_result`]: ServiceExt::map_result /// [`Error`]: crate::Service::Error /// [`Response`]: crate::Service::Response /// [`poll_ready`]: crate::Service::poll_ready /// [`BoxError`]: crate::BoxError fn map_result(self, f: F) -> MapResult where Self: Sized, Error: From, F: FnOnce(Result) -> Result + Clone, { MapResult::new(self, f) } /// Composes a function *in front of* the service. /// /// This adapter produces a new service that passes each value through the /// given function `f` before sending it to `self`. /// /// # Example /// ``` /// # use std::convert::TryFrom; /// # use std::task::{Poll, Context}; /// # use tower::{Service, ServiceExt}; /// # /// # struct DatabaseService; /// # impl DatabaseService { /// # fn new(address: &str) -> Self { /// # DatabaseService /// # } /// # } /// # /// # impl Service for DatabaseService { /// # type Response = String; /// # type Error = u8; /// # type Future = futures_util::future::Ready>; /// # /// # fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { /// # Poll::Ready(Ok(())) /// # } /// # /// # fn call(&mut self, request: String) -> Self::Future { /// # futures_util::future::ready(Ok(String::new())) /// # } /// # } /// # /// # fn main() { /// # async { /// // A service taking a String as a request /// let service = DatabaseService::new("127.0.0.1:8080"); /// /// // Map the request to a new request /// let mut new_service = service.map_request(|id: u32| id.to_string()); /// /// // Call the new service /// let id = 13; /// let response = new_service /// .ready() /// .await? /// .call(id) /// .await; /// # response /// # }; /// # } /// ``` fn map_request(self, f: F) -> MapRequest where Self: Sized, F: FnMut(NewRequest) -> Request, { MapRequest::new(self, f) } /// Composes this service with a [`Filter`] that conditionally accepts or /// rejects requests based on a [predicate]. /// /// This adapter produces a new service that passes each value through the /// given function `predicate` before sending it to `self`. /// /// # Example /// ``` /// # use std::convert::TryFrom; /// # use std::task::{Poll, Context}; /// # use tower::{Service, ServiceExt}; /// # /// # struct DatabaseService; /// # impl DatabaseService { /// # fn new(address: &str) -> Self { /// # DatabaseService /// # } /// # } /// # /// # #[derive(Debug)] enum DbError { /// # Parse(std::num::ParseIntError) /// # } /// # /// # impl std::fmt::Display for DbError { /// # fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result { std::fmt::Debug::fmt(self, f) } /// # } /// # impl std::error::Error for DbError {} /// # impl Service for DatabaseService { /// # type Response = String; /// # type Error = DbError; /// # type Future = futures_util::future::Ready>; /// # /// # fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { /// # Poll::Ready(Ok(())) /// # } /// # /// # fn call(&mut self, request: u32) -> Self::Future { /// # futures_util::future::ready(Ok(String::new())) /// # } /// # } /// # /// # fn main() { /// # async { /// // A service taking a u32 as a request and returning Result<_, DbError> /// let service = DatabaseService::new("127.0.0.1:8080"); /// /// // Fallibly map the request to a new request /// let mut new_service = service /// .filter(|id_str: &str| id_str.parse().map_err(DbError::Parse)); /// /// // Call the new service /// let id = "13"; /// let response = new_service /// .ready() /// .await? /// .call(id) /// .await; /// # response /// # }; /// # } /// ``` /// /// [`Filter`]: crate::filter::Filter /// [predicate]: crate::filter::Predicate #[cfg(feature = "filter")] #[cfg_attr(docsrs, doc(cfg(feature = "filter")))] fn filter(self, filter: F) -> crate::filter::Filter where Self: Sized, F: crate::filter::Predicate, { crate::filter::Filter::new(self, filter) } /// Composes this service with an [`AsyncFilter`] that conditionally accepts or /// rejects requests based on an [async predicate]. /// /// This adapter produces a new service that passes each value through the /// given function `predicate` before sending it to `self`. /// /// # Example /// ``` /// # use std::convert::TryFrom; /// # use std::task::{Poll, Context}; /// # use tower::{Service, ServiceExt}; /// # /// # #[derive(Clone)] struct DatabaseService; /// # impl DatabaseService { /// # fn new(address: &str) -> Self { /// # DatabaseService /// # } /// # } /// # #[derive(Debug)] /// # enum DbError { /// # Rejected /// # } /// # impl std::fmt::Display for DbError { /// # fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result { std::fmt::Debug::fmt(self, f) } /// # } /// # impl std::error::Error for DbError {} /// # /// # impl Service for DatabaseService { /// # type Response = String; /// # type Error = DbError; /// # type Future = futures_util::future::Ready>; /// # /// # fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { /// # Poll::Ready(Ok(())) /// # } /// # /// # fn call(&mut self, request: u32) -> Self::Future { /// # futures_util::future::ready(Ok(String::new())) /// # } /// # } /// # /// # fn main() { /// # async { /// // A service taking a u32 as a request and returning Result<_, DbError> /// let service = DatabaseService::new("127.0.0.1:8080"); /// /// /// Returns `true` if we should query the database for an ID. /// async fn should_query(id: u32) -> bool { /// // ... /// # true /// } /// /// // Filter requests based on `should_query`. /// let mut new_service = service /// .filter_async(|id: u32| async move { /// if should_query(id).await { /// return Ok(id); /// } /// /// Err(DbError::Rejected) /// }); /// /// // Call the new service /// let id = 13; /// # let id: u32 = id; /// let response = new_service /// .ready() /// .await? /// .call(id) /// .await; /// # response /// # }; /// # } /// ``` /// /// [`AsyncFilter`]: crate::filter::AsyncFilter /// [asynchronous predicate]: crate::filter::AsyncPredicate #[cfg(feature = "filter")] #[cfg_attr(docsrs, doc(cfg(feature = "filter")))] fn filter_async(self, filter: F) -> crate::filter::AsyncFilter where Self: Sized, F: crate::filter::AsyncPredicate, { crate::filter::AsyncFilter::new(self, filter) } /// Composes an asynchronous function *after* this service. /// /// This takes a function or closure returning a future, and returns a new /// `Service` that chains that function after this service's [`Future`]. The /// new `Service`'s future will consist of this service's future, followed /// by the future returned by calling the chained function with the future's /// [`Output`] type. The chained function is called regardless of whether /// this service's future completes with a successful response or with an /// error. /// /// This method can be thought of as an equivalent to the [`futures` /// crate]'s [`FutureExt::then`] combinator, but acting on `Service`s that /// _return_ futures, rather than on an individual future. Similarly to that /// combinator, [`ServiceExt::then`] can be used to implement asynchronous /// error recovery, by calling some asynchronous function with errors /// returned by this service. Alternatively, it may also be used to call a /// fallible async function with the successful response of this service. /// /// This method can be used to change the [`Response`] type of the service /// into a different type. It can also be used to change the [`Error`] type /// of the service. However, because the `then` function is not applied /// to the errors returned by the service's [`poll_ready`] method, it must /// be possible to convert the service's [`Error`] type into the error type /// returned by the `then` future. This is trivial when the function /// returns the same error type as the service, but in other cases, it can /// be useful to use [`BoxError`] to erase differing error types. /// /// # Examples /// /// ``` /// # use std::task::{Poll, Context}; /// # use tower::{Service, ServiceExt}; /// # /// # struct DatabaseService; /// # impl DatabaseService { /// # fn new(address: &str) -> Self { /// # DatabaseService /// # } /// # } /// # /// # type Record = (); /// # type DbError = (); /// # /// # impl Service for DatabaseService { /// # type Response = Record; /// # type Error = DbError; /// # type Future = futures_util::future::Ready>; /// # /// # fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { /// # Poll::Ready(Ok(())) /// # } /// # /// # fn call(&mut self, request: u32) -> Self::Future { /// # futures_util::future::ready(Ok(())) /// # } /// # } /// # /// # fn main() { /// // A service returning Result /// let service = DatabaseService::new("127.0.0.1:8080"); /// /// // An async function that attempts to recover from errors returned by the /// // database. /// async fn recover_from_error(error: DbError) -> Result { /// // ... /// # Ok(()) /// } /// # async { /// /// // If the database service returns an error, attempt to recover by /// // calling `recover_from_error`. Otherwise, return the successful response. /// let mut new_service = service.then(|result| async move { /// match result { /// Ok(record) => Ok(record), /// Err(e) => recover_from_error(e).await, /// } /// }); /// /// // Call the new service /// let id = 13; /// let record = new_service /// .ready() /// .await? /// .call(id) /// .await?; /// # Ok::<(), DbError>(()) /// # }; /// # } /// ``` /// /// [`Future`]: crate::Service::Future /// [`Output`]: std::future::Future::Output /// [`futures` crate]: https://docs.rs/futures /// [`FutureExt::then`]: https://docs.rs/futures/latest/futures/future/trait.FutureExt.html#method.then /// [`Error`]: crate::Service::Error /// [`Response`]: crate::Service::Response /// [`poll_ready`]: crate::Service::poll_ready /// [`BoxError`]: crate::BoxError fn then(self, f: F) -> Then where Self: Sized, Error: From, F: FnOnce(Result) -> Fut + Clone, Fut: Future>, { Then::new(self, f) } /// Composes a function that transforms futures produced by the service. /// /// This takes a function or closure returning a future computed from the future returned by /// the service's [`call`] method, as opposed to the responses produced by the future. /// /// # Examples /// /// ``` /// # use std::task::{Poll, Context}; /// # use tower::{Service, ServiceExt, BoxError}; /// # /// # struct DatabaseService; /// # impl DatabaseService { /// # fn new(address: &str) -> Self { /// # DatabaseService /// # } /// # } /// # /// # type Record = (); /// # type DbError = crate::BoxError; /// # /// # impl Service for DatabaseService { /// # type Response = Record; /// # type Error = DbError; /// # type Future = futures_util::future::Ready>; /// # /// # fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { /// # Poll::Ready(Ok(())) /// # } /// # /// # fn call(&mut self, request: u32) -> Self::Future { /// # futures_util::future::ready(Ok(())) /// # } /// # } /// # /// # fn main() { /// use std::time::Duration; /// use tokio::time::timeout; /// /// // A service returning Result /// let service = DatabaseService::new("127.0.0.1:8080"); /// # async { /// /// let mut new_service = service.map_future(|future| async move { /// let res = timeout(Duration::from_secs(1), future).await?; /// Ok::<_, BoxError>(res) /// }); /// /// // Call the new service /// let id = 13; /// let record = new_service /// .ready() /// .await? /// .call(id) /// .await?; /// # Ok::<(), BoxError>(()) /// # }; /// # } /// ``` /// /// Note that normally you wouldn't implement timeouts like this and instead use [`Timeout`]. /// /// [`call`]: crate::Service::call /// [`Timeout`]: crate::timeout::Timeout fn map_future(self, f: F) -> MapFuture where Self: Sized, F: FnMut(Self::Future) -> Fut, Error: From, Fut: Future>, { MapFuture::new(self, f) } /// Convert the service into a [`Service`] + [`Send`] trait object. /// /// See [`BoxService`] for more details. /// /// If `Self` implements the [`Clone`] trait, the [`boxed_clone`] method /// can be used instead, to produce a boxed service which will also /// implement [`Clone`]. /// /// # Example /// /// ``` /// use tower::{Service, ServiceExt, BoxError, service_fn, util::BoxService}; /// # /// # struct Request; /// # struct Response; /// # impl Response { /// # fn new() -> Self { Self } /// # } /// /// let service = service_fn(|req: Request| async { /// Ok::<_, BoxError>(Response::new()) /// }); /// /// let service: BoxService = service /// .map_request(|req| { /// println!("received request"); /// req /// }) /// .map_response(|res| { /// println!("response produced"); /// res /// }) /// .boxed(); /// # let service = assert_service(service); /// # fn assert_service(svc: S) -> S /// # where S: Service { svc } /// ``` /// /// [`Service`]: crate::Service /// [`boxed_clone`]: Self::boxed_clone fn boxed(self) -> BoxService where Self: Sized + Send + 'static, Self::Future: Send + 'static, { BoxService::new(self) } /// Convert the service into a [`Service`] + [`Clone`] + [`Send`] trait object. /// /// This is similar to the [`boxed`] method, but it requires that `Self` implement /// [`Clone`], and the returned boxed service implements [`Clone`]. /// See [`BoxCloneService`] for more details. /// /// # Example /// /// ``` /// use tower::{Service, ServiceExt, BoxError, service_fn, util::BoxCloneService}; /// # /// # struct Request; /// # struct Response; /// # impl Response { /// # fn new() -> Self { Self } /// # } /// /// let service = service_fn(|req: Request| async { /// Ok::<_, BoxError>(Response::new()) /// }); /// /// let service: BoxCloneService = service /// .map_request(|req| { /// println!("received request"); /// req /// }) /// .map_response(|res| { /// println!("response produced"); /// res /// }) /// .boxed_clone(); /// /// // The boxed service can still be cloned. /// service.clone(); /// # let service = assert_service(service); /// # fn assert_service(svc: S) -> S /// # where S: Service { svc } /// ``` /// /// [`Service`]: crate::Service /// [`boxed`]: Self::boxed fn boxed_clone(self) -> BoxCloneService where Self: Clone + Sized + Send + 'static, Self::Future: Send + 'static, { BoxCloneService::new(self) } } impl ServiceExt for T where T: tower_service::Service {} /// Convert an `Option` into a [`Layer`]. /// /// ``` /// # use std::time::Duration; /// # use tower::Service; /// # use tower::builder::ServiceBuilder; /// use tower::util::option_layer; /// # use tower::timeout::TimeoutLayer; /// # async fn wrap(svc: S) where S: Service<(), Error = &'static str> + 'static + Send, S::Future: Send { /// # let timeout = Some(Duration::new(10, 0)); /// // Layer to apply a timeout if configured /// let maybe_timeout = option_layer(timeout.map(TimeoutLayer::new)); /// /// ServiceBuilder::new() /// .layer(maybe_timeout) /// .service(svc); /// # } /// ``` /// /// [`Layer`]: crate::layer::Layer pub fn option_layer(layer: Option) -> Either { if let Some(layer) = layer { Either::A(layer) } else { Either::B(Identity::new()) } } tower-0.4.13/src/util/oneshot.rs000064400000000000000000000052760072674642500147040ustar 00000000000000use futures_core::ready; use pin_project_lite::pin_project; use std::{ fmt, future::Future, pin::Pin, task::{Context, Poll}, }; use tower_service::Service; pin_project! { /// A [`Future`] consuming a [`Service`] and request, waiting until the [`Service`] /// is ready, and then calling [`Service::call`] with the request, and /// waiting for that [`Future`]. #[derive(Debug)] pub struct Oneshot, Req> { #[pin] state: State, } } pin_project! { #[project = StateProj] enum State, Req> { NotReady { svc: S, req: Option, }, Called { #[pin] fut: S::Future, }, Done, } } impl, Req> State { fn not_ready(svc: S, req: Option) -> Self { Self::NotReady { svc, req } } fn called(fut: S::Future) -> Self { Self::Called { fut } } } impl fmt::Debug for State where S: Service + fmt::Debug, Req: fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { match self { State::NotReady { svc, req: Some(req), } => f .debug_tuple("State::NotReady") .field(svc) .field(req) .finish(), State::NotReady { req: None, .. } => unreachable!(), State::Called { .. } => f.debug_tuple("State::Called").field(&"S::Future").finish(), State::Done => f.debug_tuple("State::Done").finish(), } } } impl Oneshot where S: Service, { #[allow(missing_docs)] pub fn new(svc: S, req: Req) -> Self { Oneshot { state: State::not_ready(svc, Some(req)), } } } impl Future for Oneshot where S: Service, { type Output = Result; fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { let mut this = self.project(); loop { match this.state.as_mut().project() { StateProj::NotReady { svc, req } => { let _ = ready!(svc.poll_ready(cx))?; let f = svc.call(req.take().expect("already called")); this.state.set(State::called(f)); } StateProj::Called { fut } => { let res = ready!(fut.poll(cx))?; this.state.set(State::Done); return Poll::Ready(Ok(res)); } StateProj::Done => panic!("polled after complete"), } } } } tower-0.4.13/src/util/optional/error.rs000064400000000000000000000006060072674642500161730ustar 00000000000000use std::{error, fmt}; /// Error returned if the inner [`Service`] has not been set. /// /// [`Service`]: crate::Service #[derive(Debug)] pub struct None(()); impl None { pub(crate) fn new() -> None { None(()) } } impl fmt::Display for None { fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result { write!(fmt, "None") } } impl error::Error for None {} tower-0.4.13/src/util/optional/future.rs000064400000000000000000000017220072674642500163540ustar 00000000000000use super::error; use futures_core::ready; use pin_project_lite::pin_project; use std::{ future::Future, pin::Pin, task::{Context, Poll}, }; pin_project! { /// Response future returned by [`Optional`]. /// /// [`Optional`]: crate::util::Optional #[derive(Debug)] pub struct ResponseFuture { #[pin] inner: Option, } } impl ResponseFuture { pub(crate) fn new(inner: Option) -> ResponseFuture { ResponseFuture { inner } } } impl Future for ResponseFuture where F: Future>, E: Into, { type Output = Result; fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { match self.project().inner.as_pin_mut() { Some(inner) => Poll::Ready(Ok(ready!(inner.poll(cx)).map_err(Into::into)?)), None => Poll::Ready(Err(error::None::new().into())), } } } tower-0.4.13/src/util/optional/mod.rs000064400000000000000000000031460072674642500156230ustar 00000000000000//! Contains [`Optional`] and related types and functions. //! //! See [`Optional`] documentation for more details. /// Error types for [`Optional`]. pub mod error; /// Future types for [`Optional`]. pub mod future; use self::future::ResponseFuture; use std::task::{Context, Poll}; use tower_service::Service; /// Optionally forwards requests to an inner service. /// /// If the inner service is [`None`], [`optional::None`] is returned as the response. /// /// [`optional::None`]: crate::util::error::optional::None #[derive(Debug)] pub struct Optional { inner: Option, } impl Optional { /// Create a new [`Optional`]. pub fn new(inner: Option) -> Optional where T: Service, T::Error: Into, { Optional { inner } } } impl Service for Optional where T: Service, T::Error: Into, { type Response = T::Response; type Error = crate::BoxError; type Future = ResponseFuture; fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { match self.inner { Some(ref mut inner) => match inner.poll_ready(cx) { Poll::Ready(r) => Poll::Ready(r.map_err(Into::into)), Poll::Pending => Poll::Pending, }, // None services are always ready None => Poll::Ready(Ok(())), } } fn call(&mut self, request: Request) -> Self::Future { let inner = self.inner.as_mut().map(|i| i.call(request)); ResponseFuture::new(inner) } } tower-0.4.13/src/util/ready.rs000064400000000000000000000057270072674642500143320ustar 00000000000000use std::{fmt, marker::PhantomData}; use futures_core::ready; use std::{ future::Future, pin::Pin, task::{Context, Poll}, }; use tower_service::Service; /// A [`Future`] that yields the service when it is ready to accept a request. /// /// [`ReadyOneshot`] values are produced by [`ServiceExt::ready_oneshot`]. /// /// [`ServiceExt::ready_oneshot`]: crate::util::ServiceExt::ready_oneshot pub struct ReadyOneshot { inner: Option, _p: PhantomData Request>, } // Safety: This is safe because `Services`'s are always `Unpin`. impl Unpin for ReadyOneshot {} impl ReadyOneshot where T: Service, { #[allow(missing_docs)] pub fn new(service: T) -> Self { Self { inner: Some(service), _p: PhantomData, } } } impl Future for ReadyOneshot where T: Service, { type Output = Result; fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { ready!(self .inner .as_mut() .expect("poll after Poll::Ready") .poll_ready(cx))?; Poll::Ready(Ok(self.inner.take().expect("poll after Poll::Ready"))) } } impl fmt::Debug for ReadyOneshot where T: fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { f.debug_struct("ReadyOneshot") .field("inner", &self.inner) .finish() } } /// A future that yields a mutable reference to the service when it is ready to accept a request. /// /// [`Ready`] values are produced by [`ServiceExt::ready`]. /// /// [`ServiceExt::ready`]: crate::util::ServiceExt::ready pub struct Ready<'a, T, Request>(ReadyOneshot<&'a mut T, Request>); /// A future that yields a mutable reference to the service when it is ready to accept a request. /// /// [`ReadyAnd`] values are produced by [`ServiceExt::ready_and`]. /// /// [`ServiceExt::ready_and`]: crate::util::ServiceExt::ready_and #[deprecated(since = "0.4.6", note = "Please use the Ready future instead")] pub type ReadyAnd<'a, T, Request> = Ready<'a, T, Request>; // Safety: This is safe for the same reason that the impl for ReadyOneshot is safe. impl<'a, T, Request> Unpin for Ready<'a, T, Request> {} impl<'a, T, Request> Ready<'a, T, Request> where T: Service, { #[allow(missing_docs)] pub fn new(service: &'a mut T) -> Self { Self(ReadyOneshot::new(service)) } } impl<'a, T, Request> Future for Ready<'a, T, Request> where T: Service, { type Output = Result<&'a mut T, T::Error>; fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { Pin::new(&mut self.0).poll(cx) } } impl<'a, T, Request> fmt::Debug for Ready<'a, T, Request> where T: fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { f.debug_tuple("Ready").field(&self.0).finish() } } tower-0.4.13/src/util/service_fn.rs000064400000000000000000000036440072674642500153450ustar 00000000000000use std::fmt; use std::future::Future; use std::task::{Context, Poll}; use tower_service::Service; /// Returns a new [`ServiceFn`] with the given closure. /// /// This lets you build a [`Service`] from an async function that returns a [`Result`]. /// /// # Example /// /// ``` /// use tower::{service_fn, Service, ServiceExt, BoxError}; /// # struct Request; /// # impl Request { /// # fn new() -> Self { Self } /// # } /// # struct Response(&'static str); /// # impl Response { /// # fn new(body: &'static str) -> Self { /// # Self(body) /// # } /// # fn into_body(self) -> &'static str { self.0 } /// # } /// /// # #[tokio::main] /// # async fn main() -> Result<(), BoxError> { /// async fn handle(request: Request) -> Result { /// let response = Response::new("Hello, World!"); /// Ok(response) /// } /// /// let mut service = service_fn(handle); /// /// let response = service /// .ready() /// .await? /// .call(Request::new()) /// .await?; /// /// assert_eq!("Hello, World!", response.into_body()); /// # /// # Ok(()) /// # } /// ``` pub fn service_fn(f: T) -> ServiceFn { ServiceFn { f } } /// A [`Service`] implemented by a closure. /// /// See [`service_fn`] for more details. #[derive(Copy, Clone)] pub struct ServiceFn { f: T, } impl fmt::Debug for ServiceFn { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { f.debug_struct("ServiceFn") .field("f", &format_args!("{}", std::any::type_name::())) .finish() } } impl Service for ServiceFn where T: FnMut(Request) -> F, F: Future>, { type Response = R; type Error = E; type Future = F; fn poll_ready(&mut self, _: &mut Context<'_>) -> Poll> { Ok(()).into() } fn call(&mut self, req: Request) -> Self::Future { (self.f)(req) } } tower-0.4.13/src/util/then.rs000064400000000000000000000045160072674642500141570ustar 00000000000000use futures_util::{future, FutureExt}; use std::{ fmt, future::Future, task::{Context, Poll}, }; use tower_layer::Layer; use tower_service::Service; /// [`Service`] returned by the [`then`] combinator. /// /// [`then`]: crate::util::ServiceExt::then #[derive(Clone)] pub struct Then { inner: S, f: F, } impl fmt::Debug for Then where S: fmt::Debug, { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { f.debug_struct("Then") .field("inner", &self.inner) .field("f", &format_args!("{}", std::any::type_name::())) .finish() } } /// A [`Layer`] that produces a [`Then`] service. /// /// [`Layer`]: tower_layer::Layer #[derive(Debug, Clone)] pub struct ThenLayer { f: F, } impl Then { /// Creates a new `Then` service. pub fn new(inner: S, f: F) -> Self { Then { f, inner } } /// Returns a new [`Layer`] that produces [`Then`] services. /// /// This is a convenience function that simply calls [`ThenLayer::new`]. /// /// [`Layer`]: tower_layer::Layer pub fn layer(f: F) -> ThenLayer { ThenLayer { f } } } opaque_future! { /// Response future from [`Then`] services. /// /// [`Then`]: crate::util::Then pub type ThenFuture = future::Then; } impl Service for Then where S: Service, S::Error: Into, F: FnOnce(Result) -> Fut + Clone, Fut: Future>, { type Response = Response; type Error = Error; type Future = ThenFuture; #[inline] fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { self.inner.poll_ready(cx).map_err(Into::into) } #[inline] fn call(&mut self, request: Request) -> Self::Future { ThenFuture::new(self.inner.call(request).then(self.f.clone())) } } impl ThenLayer { /// Creates a new [`ThenLayer`] layer. pub fn new(f: F) -> Self { ThenLayer { f } } } impl Layer for ThenLayer where F: Clone, { type Service = Then; fn layer(&self, inner: S) -> Self::Service { Then { f: self.f.clone(), inner, } } } tower-0.4.13/tests/balance/main.rs000064400000000000000000000156660072674642500151400ustar 00000000000000#![cfg(feature = "balance")] #[path = "../support.rs"] mod support; use std::future::Future; use std::task::{Context, Poll}; use tokio_test::{assert_pending, assert_ready, task}; use tower::balance::p2c::Balance; use tower::discover::Change; use tower_service::Service; use tower_test::mock; type Req = &'static str; struct Mock(mock::Mock); impl Service for Mock { type Response = as Service>::Response; type Error = as Service>::Error; type Future = as Service>::Future; fn poll_ready(&mut self, cx: &mut Context) -> Poll> { self.0.poll_ready(cx) } fn call(&mut self, req: Req) -> Self::Future { self.0.call(req) } } impl tower::load::Load for Mock { type Metric = usize; fn load(&self) -> Self::Metric { rand::random() } } #[test] fn stress() { let _t = support::trace_init(); let mut task = task::spawn(()); let (tx, rx) = tokio::sync::mpsc::unbounded_channel::>(); let mut cache = Balance::<_, Req>::new(support::IntoStream::new(rx)); let mut nready = 0; let mut services = slab::Slab::<(mock::Handle, bool)>::new(); let mut retired = Vec::>::new(); for _ in 0..100_000 { for _ in 0..(rand::random::() % 8) { if !services.is_empty() && rand::random() { if nready == 0 || rand::random::() > u8::max_value() / 4 { // ready a service // TODO: sometimes ready a removed service? for (_, (handle, ready)) in &mut services { if !*ready { handle.allow(1); *ready = true; nready += 1; break; } } } else { // use a service use std::task::Poll; match task.enter(|cx, _| cache.poll_ready(cx)) { Poll::Ready(Ok(())) => { assert_ne!(nready, 0, "got ready when no service is ready"); let mut fut = cache.call("hello"); let mut fut = std::pin::Pin::new(&mut fut); assert_pending!(task.enter(|cx, _| fut.as_mut().poll(cx))); let mut found = false; for (_, (handle, ready)) in &mut services { if *ready { if let Poll::Ready(Some((req, res))) = handle.poll_request() { assert_eq!(req, "hello"); res.send_response("world"); *ready = false; nready -= 1; found = true; break; } } } if !found { // we must have been given a retired service let mut at = None; for (i, handle) in retired.iter_mut().enumerate() { if let Poll::Ready(Some((req, res))) = handle.poll_request() { assert_eq!(req, "hello"); res.send_response("world"); at = Some(i); break; } } let _ = retired.swap_remove( at.expect("request was not sent to a ready service"), ); nready -= 1; } assert_ready!(task.enter(|cx, _| fut.as_mut().poll(cx))).unwrap(); } Poll::Ready(_) => unreachable!("discover stream has not failed"), Poll::Pending => { // assert_eq!(nready, 0, "got pending when a service is ready"); } } } } else if services.is_empty() || rand::random() { if services.is_empty() || nready == 0 || rand::random() { // add let (svc, mut handle) = mock::pair::(); let svc = Mock(svc); handle.allow(0); let k = services.insert((handle, false)); let ok = tx.send(Ok(Change::Insert(k, svc))); assert!(ok.is_ok()); } else { // remove while !services.is_empty() { let k = rand::random::() % (services.iter().last().unwrap().0 + 1); if services.contains(k) { let (handle, ready) = services.remove(k); if ready { retired.push(handle); } let ok = tx.send(Ok(Change::Remove(k))); assert!(ok.is_ok()); break; } } } } else { // fail a service while !services.is_empty() { let k = rand::random::() % (services.iter().last().unwrap().0 + 1); if services.contains(k) { let (mut handle, ready) = services.remove(k); if ready { nready -= 1; } handle.send_error("doom"); break; } } } } let r = task.enter(|cx, _| cache.poll_ready(cx)); // drop any retired services that the p2c has gotten rid of let mut removed = Vec::new(); for (i, handle) in retired.iter_mut().enumerate() { if let Poll::Ready(None) = handle.poll_request() { removed.push(i); } } for i in removed.into_iter().rev() { retired.swap_remove(i); nready -= 1; } use std::task::Poll; match r { Poll::Ready(Ok(())) => { assert_ne!(nready, 0, "got ready when no service is ready"); } Poll::Ready(_) => unreachable!("discover stream has not failed"), Poll::Pending => { assert_eq!(nready, 0, "got pending when a service is ready"); } } } } tower-0.4.13/tests/buffer/main.rs000064400000000000000000000301460072674642500150120ustar 00000000000000#![cfg(feature = "buffer")] #[path = "../support.rs"] mod support; use std::thread; use tokio_test::{assert_pending, assert_ready, assert_ready_err, assert_ready_ok, task}; use tower::buffer::{error, Buffer}; use tower::{util::ServiceExt, Service}; use tower_test::{assert_request_eq, mock}; fn let_worker_work() { // Allow the Buffer's executor to do work thread::sleep(::std::time::Duration::from_millis(100)); } #[tokio::test(flavor = "current_thread")] async fn req_and_res() { let _t = support::trace_init(); let (mut service, mut handle) = new_service(); assert_ready_ok!(service.poll_ready()); let mut response = task::spawn(service.call("hello")); assert_request_eq!(handle, "hello").send_response("world"); let_worker_work(); assert_eq!(assert_ready_ok!(response.poll()), "world"); } #[tokio::test(flavor = "current_thread")] async fn clears_canceled_requests() { let _t = support::trace_init(); let (mut service, mut handle) = new_service(); handle.allow(1); assert_ready_ok!(service.poll_ready()); let mut res1 = task::spawn(service.call("hello")); let send_response1 = assert_request_eq!(handle, "hello"); // don't respond yet, new requests will get buffered assert_ready_ok!(service.poll_ready()); let res2 = task::spawn(service.call("hello2")); assert_pending!(handle.poll_request()); assert_ready_ok!(service.poll_ready()); let mut res3 = task::spawn(service.call("hello3")); drop(res2); send_response1.send_response("world"); let_worker_work(); assert_eq!(assert_ready_ok!(res1.poll()), "world"); // res2 was dropped, so it should have been canceled in the buffer handle.allow(1); assert_request_eq!(handle, "hello3").send_response("world3"); let_worker_work(); assert_eq!(assert_ready_ok!(res3.poll()), "world3"); } #[tokio::test(flavor = "current_thread")] async fn when_inner_is_not_ready() { let _t = support::trace_init(); let (mut service, mut handle) = new_service(); // Make the service NotReady handle.allow(0); assert_ready_ok!(service.poll_ready()); let mut res1 = task::spawn(service.call("hello")); let_worker_work(); assert_pending!(res1.poll()); assert_pending!(handle.poll_request()); handle.allow(1); assert_request_eq!(handle, "hello").send_response("world"); let_worker_work(); assert_eq!(assert_ready_ok!(res1.poll()), "world"); } #[tokio::test(flavor = "current_thread")] async fn when_inner_fails() { use std::error::Error as StdError; let _t = support::trace_init(); let (mut service, mut handle) = new_service(); // Make the service NotReady handle.allow(0); handle.send_error("foobar"); assert_ready_ok!(service.poll_ready()); let mut res1 = task::spawn(service.call("hello")); let_worker_work(); let e = assert_ready_err!(res1.poll()); if let Some(e) = e.downcast_ref::() { let e = e.source().unwrap(); assert_eq!(e.to_string(), "foobar"); } else { panic!("unexpected error type: {:?}", e); } } #[tokio::test(flavor = "current_thread")] async fn poll_ready_when_worker_is_dropped_early() { let _t = support::trace_init(); let (service, _handle) = mock::pair::<(), ()>(); let (service, worker) = Buffer::pair(service, 1); let mut service = mock::Spawn::new(service); drop(worker); let err = assert_ready_err!(service.poll_ready()); assert!(err.is::(), "should be a Closed: {:?}", err); } #[tokio::test(flavor = "current_thread")] async fn response_future_when_worker_is_dropped_early() { let _t = support::trace_init(); let (service, mut handle) = mock::pair::<_, ()>(); let (service, worker) = Buffer::pair(service, 1); let mut service = mock::Spawn::new(service); // keep the request in the worker handle.allow(0); assert_ready_ok!(service.poll_ready()); let mut response = task::spawn(service.call("hello")); drop(worker); let_worker_work(); let err = assert_ready_err!(response.poll()); assert!(err.is::(), "should be a Closed: {:?}", err); } #[tokio::test(flavor = "current_thread")] async fn waits_for_channel_capacity() { let _t = support::trace_init(); let (service, mut handle) = mock::pair::<&'static str, &'static str>(); let (service, worker) = Buffer::pair(service, 3); let mut service = mock::Spawn::new(service); let mut worker = task::spawn(worker); // keep requests in the worker handle.allow(0); assert_ready_ok!(service.poll_ready()); let mut response1 = task::spawn(service.call("hello")); assert_pending!(worker.poll()); assert_ready_ok!(service.poll_ready()); let mut response2 = task::spawn(service.call("hello")); assert_pending!(worker.poll()); assert_ready_ok!(service.poll_ready()); let mut response3 = task::spawn(service.call("hello")); assert_pending!(service.poll_ready()); assert_pending!(worker.poll()); handle.allow(1); assert_pending!(worker.poll()); handle .next_request() .await .unwrap() .1 .send_response("world"); assert_pending!(worker.poll()); assert_ready_ok!(response1.poll()); assert_ready_ok!(service.poll_ready()); let mut response4 = task::spawn(service.call("hello")); assert_pending!(worker.poll()); handle.allow(3); assert_pending!(worker.poll()); handle .next_request() .await .unwrap() .1 .send_response("world"); assert_pending!(worker.poll()); assert_ready_ok!(response2.poll()); assert_pending!(worker.poll()); handle .next_request() .await .unwrap() .1 .send_response("world"); assert_pending!(worker.poll()); assert_ready_ok!(response3.poll()); assert_pending!(worker.poll()); handle .next_request() .await .unwrap() .1 .send_response("world"); assert_pending!(worker.poll()); assert_ready_ok!(response4.poll()); } #[tokio::test(flavor = "current_thread")] async fn wakes_pending_waiters_on_close() { let _t = support::trace_init(); let (service, mut handle) = mock::pair::<_, ()>(); let (mut service, worker) = Buffer::pair(service, 1); let mut worker = task::spawn(worker); // keep the request in the worker handle.allow(0); let service1 = service.ready().await.unwrap(); assert_pending!(worker.poll()); let mut response = task::spawn(service1.call("hello")); let mut service1 = service.clone(); let mut ready1 = task::spawn(service1.ready()); assert_pending!(worker.poll()); assert_pending!(ready1.poll(), "no capacity"); let mut service1 = service.clone(); let mut ready2 = task::spawn(service1.ready()); assert_pending!(worker.poll()); assert_pending!(ready2.poll(), "no capacity"); // kill the worker task drop(worker); let err = assert_ready_err!(response.poll()); assert!( err.is::(), "response should fail with a Closed, got: {:?}", err ); assert!( ready1.is_woken(), "dropping worker should wake ready task 1" ); let err = assert_ready_err!(ready1.poll()); assert!( err.is::(), "ready 1 should fail with a Closed, got: {:?}", err ); assert!( ready2.is_woken(), "dropping worker should wake ready task 2" ); let err = assert_ready_err!(ready1.poll()); assert!( err.is::(), "ready 2 should fail with a Closed, got: {:?}", err ); } #[tokio::test(flavor = "current_thread")] async fn wakes_pending_waiters_on_failure() { let _t = support::trace_init(); let (service, mut handle) = mock::pair::<_, ()>(); let (mut service, worker) = Buffer::pair(service, 1); let mut worker = task::spawn(worker); // keep the request in the worker handle.allow(0); let service1 = service.ready().await.unwrap(); assert_pending!(worker.poll()); let mut response = task::spawn(service1.call("hello")); let mut service1 = service.clone(); let mut ready1 = task::spawn(service1.ready()); assert_pending!(worker.poll()); assert_pending!(ready1.poll(), "no capacity"); let mut service1 = service.clone(); let mut ready2 = task::spawn(service1.ready()); assert_pending!(worker.poll()); assert_pending!(ready2.poll(), "no capacity"); // fail the inner service handle.send_error("foobar"); // worker task terminates assert_ready!(worker.poll()); let err = assert_ready_err!(response.poll()); assert!( err.is::(), "response should fail with a ServiceError, got: {:?}", err ); assert!( ready1.is_woken(), "dropping worker should wake ready task 1" ); let err = assert_ready_err!(ready1.poll()); assert!( err.is::(), "ready 1 should fail with a ServiceError, got: {:?}", err ); assert!( ready2.is_woken(), "dropping worker should wake ready task 2" ); let err = assert_ready_err!(ready1.poll()); assert!( err.is::(), "ready 2 should fail with a ServiceError, got: {:?}", err ); } #[tokio::test(flavor = "current_thread")] async fn propagates_trace_spans() { use tower::util::ServiceExt; use tracing::Instrument; let _t = support::trace_init(); let span = tracing::info_span!("my_span"); let service = support::AssertSpanSvc::new(span.clone()); let (service, worker) = Buffer::pair(service, 5); let worker = tokio::spawn(worker); let result = tokio::spawn(service.oneshot(()).instrument(span)); result.await.expect("service panicked").expect("failed"); worker.await.expect("worker panicked"); } #[tokio::test(flavor = "current_thread")] async fn doesnt_leak_permits() { let _t = support::trace_init(); let (service, mut handle) = mock::pair::<_, ()>(); let (mut service1, worker) = Buffer::pair(service, 2); let mut worker = task::spawn(worker); let mut service2 = service1.clone(); let mut service3 = service1.clone(); // Attempt to poll the first clone of the buffer to readiness multiple // times. These should all succeed, because the readiness is never // *consumed* --- no request is sent. assert_ready_ok!(task::spawn(service1.ready()).poll()); assert_ready_ok!(task::spawn(service1.ready()).poll()); assert_ready_ok!(task::spawn(service1.ready()).poll()); // It should also be possible to drive the second clone of the service to // readiness --- it should only acquire one permit, as well. assert_ready_ok!(task::spawn(service2.ready()).poll()); assert_ready_ok!(task::spawn(service2.ready()).poll()); assert_ready_ok!(task::spawn(service2.ready()).poll()); // The third clone *doesn't* poll ready, because the first two clones have // each acquired one permit. let mut ready3 = task::spawn(service3.ready()); assert_pending!(ready3.poll()); // Consume the first service's readiness. let mut response = task::spawn(service1.call(())); handle.allow(1); assert_pending!(worker.poll()); handle.next_request().await.unwrap().1.send_response(()); assert_pending!(worker.poll()); assert_ready_ok!(response.poll()); // Now, the third service should acquire a permit... assert!(ready3.is_woken()); assert_ready_ok!(ready3.poll()); } type Mock = mock::Mock<&'static str, &'static str>; type Handle = mock::Handle<&'static str, &'static str>; fn new_service() -> (mock::Spawn>, Handle) { // bound is >0 here because clears_canceled_requests needs multiple outstanding requests new_service_with_bound(10) } fn new_service_with_bound(bound: usize) -> (mock::Spawn>, Handle) { mock::spawn_with(|s| { let (svc, worker) = Buffer::pair(s, bound); thread::spawn(move || { let mut fut = tokio_test::task::spawn(worker); while fut.poll().is_pending() {} }); svc }) } tower-0.4.13/tests/builder.rs000064400000000000000000000030070072674642500142370ustar 00000000000000#![cfg(all(feature = "buffer", feature = "limit", feature = "retry"))] mod support; use futures_util::{future::Ready, pin_mut}; use std::time::Duration; use tower::builder::ServiceBuilder; use tower::retry::Policy; use tower::util::ServiceExt; use tower_service::*; use tower_test::{assert_request_eq, mock}; #[tokio::test(flavor = "current_thread")] async fn builder_service() { let _t = support::trace_init(); let (service, handle) = mock::pair(); pin_mut!(handle); let policy = MockPolicy::<&'static str, bool>::default(); let mut client = ServiceBuilder::new() .buffer(5) .concurrency_limit(5) .rate_limit(5, Duration::from_secs(5)) .retry(policy) .map_response(|r: &'static str| r == "world") .map_request(|r: &'static str| r == "hello") .service(service); // allow a request through handle.allow(1); let fut = client.ready().await.unwrap().call("hello"); assert_request_eq!(handle, true).send_response("world"); assert!(fut.await.unwrap()); } #[derive(Debug, Clone, Default)] struct MockPolicy { _pd: std::marker::PhantomData<(Req, Res)>, } impl Policy for MockPolicy where Req: Clone, E: Into>, { type Future = Ready; fn retry(&self, _req: &Req, _result: Result<&Res, &E>) -> Option { None } fn clone_request(&self, req: &Req) -> Option { Some(req.clone()) } } tower-0.4.13/tests/filter/async_filter.rs000064400000000000000000000034270072674642500165660ustar 00000000000000#![cfg(feature = "filter")] #[path = "../support.rs"] mod support; use futures_util::{future::poll_fn, pin_mut}; use std::future::Future; use tower::filter::{error::Error, AsyncFilter}; use tower_service::Service; use tower_test::{assert_request_eq, mock}; #[tokio::test(flavor = "current_thread")] async fn passthrough_sync() { let _t = support::trace_init(); let (mut service, handle) = new_service(|_| async { Ok(()) }); let th = tokio::spawn(async move { // Receive the requests and respond pin_mut!(handle); for i in 0..10usize { assert_request_eq!(handle, format!("ping-{}", i)).send_response(format!("pong-{}", i)); } }); let mut responses = vec![]; for i in 0usize..10 { let request = format!("ping-{}", i); poll_fn(|cx| service.poll_ready(cx)).await.unwrap(); let exchange = service.call(request); let exchange = async move { let response = exchange.await.unwrap(); let expect = format!("pong-{}", i); assert_eq!(response.as_str(), expect.as_str()); }; responses.push(exchange); } futures_util::future::join_all(responses).await; th.await.unwrap(); } #[tokio::test(flavor = "current_thread")] async fn rejected_sync() { let _t = support::trace_init(); let (mut service, _handle) = new_service(|_| async { Err(Error::rejected()) }); service.call("hello".into()).await.unwrap_err(); } type Mock = mock::Mock; type Handle = mock::Handle; fn new_service(f: F) -> (AsyncFilter, Handle) where F: Fn(&String) -> U, U: Future>, { let (service, handle) = mock::pair(); let service = AsyncFilter::new(service, f); (service, handle) } tower-0.4.13/tests/hedge/main.rs000064400000000000000000000126320072674642500146150ustar 00000000000000#![cfg(feature = "hedge")] #[path = "../support.rs"] mod support; use std::time::Duration; use tokio::time; use tokio_test::{assert_pending, assert_ready, assert_ready_ok, task}; use tower::hedge::{Hedge, Policy}; use tower_test::{assert_request_eq, mock}; #[tokio::test(flavor = "current_thread")] async fn hedge_orig_completes_first() { let _t = support::trace_init(); time::pause(); let (mut service, mut handle) = new_service(TestPolicy); assert_ready_ok!(service.poll_ready()); let mut fut = task::spawn(service.call("orig")); // Check that orig request has been issued. let req = assert_request_eq!(handle, "orig"); // Check fut is not ready. assert_pending!(fut.poll()); // Check hedge has not been issued. assert_pending!(handle.poll_request()); time::advance(Duration::from_millis(11)).await; // Check fut is not ready. assert_pending!(fut.poll()); // Check that the hedge has been issued. let _hedge_req = assert_request_eq!(handle, "orig"); req.send_response("orig-done"); // Check that fut gets orig response. assert_eq!(assert_ready_ok!(fut.poll()), "orig-done"); } #[tokio::test(flavor = "current_thread")] async fn hedge_hedge_completes_first() { let _t = support::trace_init(); time::pause(); let (mut service, mut handle) = new_service(TestPolicy); assert_ready_ok!(service.poll_ready()); let mut fut = task::spawn(service.call("orig")); // Check that orig request has been issued. let _req = assert_request_eq!(handle, "orig"); // Check fut is not ready. assert_pending!(fut.poll()); // Check hedge has not been issued. assert_pending!(handle.poll_request()); time::advance(Duration::from_millis(11)).await; // Check fut is not ready. assert_pending!(fut.poll()); // Check that the hedge has been issued. let hedge_req = assert_request_eq!(handle, "orig"); hedge_req.send_response("hedge-done"); // Check that fut gets hedge response. assert_eq!(assert_ready_ok!(fut.poll()), "hedge-done"); } #[tokio::test(flavor = "current_thread")] async fn completes_before_hedge() { let _t = support::trace_init(); let (mut service, mut handle) = new_service(TestPolicy); assert_ready_ok!(service.poll_ready()); let mut fut = task::spawn(service.call("orig")); // Check that orig request has been issued. let req = assert_request_eq!(handle, "orig"); // Check fut is not ready. assert_pending!(fut.poll()); req.send_response("orig-done"); // Check hedge has not been issued. assert_pending!(handle.poll_request()); // Check that fut gets orig response. assert_eq!(assert_ready_ok!(fut.poll()), "orig-done"); } #[tokio::test(flavor = "current_thread")] async fn request_not_retyable() { let _t = support::trace_init(); time::pause(); let (mut service, mut handle) = new_service(TestPolicy); assert_ready_ok!(service.poll_ready()); let mut fut = task::spawn(service.call(NOT_RETRYABLE)); // Check that orig request has been issued. let req = assert_request_eq!(handle, NOT_RETRYABLE); // Check fut is not ready. assert_pending!(fut.poll()); // Check hedge has not been issued. assert_pending!(handle.poll_request()); time::advance(Duration::from_millis(10)).await; // Check fut is not ready. assert_pending!(fut.poll()); // Check hedge has not been issued. assert_pending!(handle.poll_request()); req.send_response("orig-done"); // Check that fut gets orig response. assert_eq!(assert_ready_ok!(fut.poll()), "orig-done"); } #[tokio::test(flavor = "current_thread")] async fn request_not_clonable() { let _t = support::trace_init(); time::pause(); let (mut service, mut handle) = new_service(TestPolicy); assert_ready_ok!(service.poll_ready()); let mut fut = task::spawn(service.call(NOT_CLONABLE)); // Check that orig request has been issued. let req = assert_request_eq!(handle, NOT_CLONABLE); // Check fut is not ready. assert_pending!(fut.poll()); // Check hedge has not been issued. assert_pending!(handle.poll_request()); time::advance(Duration::from_millis(10)).await; // Check fut is not ready. assert_pending!(fut.poll()); // Check hedge has not been issued. assert_pending!(handle.poll_request()); req.send_response("orig-done"); // Check that fut gets orig response. assert_eq!(assert_ready_ok!(fut.poll()), "orig-done"); } type Req = &'static str; type Res = &'static str; type Mock = tower_test::mock::Mock; type Handle = tower_test::mock::Handle; static NOT_RETRYABLE: &str = "NOT_RETRYABLE"; static NOT_CLONABLE: &str = "NOT_CLONABLE"; #[derive(Clone)] struct TestPolicy; impl tower::hedge::Policy for TestPolicy { fn can_retry(&self, req: &Req) -> bool { *req != NOT_RETRYABLE } fn clone_request(&self, req: &Req) -> Option { if *req == NOT_CLONABLE { None } else { Some(req) } } } fn new_service + Clone>(policy: P) -> (mock::Spawn>, Handle) { let (service, handle) = tower_test::mock::pair(); let mock_latencies: [u64; 10] = [1, 1, 1, 1, 1, 1, 1, 1, 10, 10]; let service = Hedge::new_with_mock_latencies( service, policy, 10, 0.9, Duration::from_secs(60), &mock_latencies, ); (mock::Spawn::new(service), handle) } tower-0.4.13/tests/limit/concurrency.rs000064400000000000000000000133700072674642500162650ustar 00000000000000#[path = "../support.rs"] mod support; use tokio_test::{assert_pending, assert_ready, assert_ready_ok}; use tower::limit::concurrency::ConcurrencyLimitLayer; use tower_test::{assert_request_eq, mock}; #[tokio::test(flavor = "current_thread")] async fn basic_service_limit_functionality_with_poll_ready() { let _t = support::trace_init(); let limit = ConcurrencyLimitLayer::new(2); let (mut service, mut handle) = mock::spawn_layer(limit); assert_ready_ok!(service.poll_ready()); let r1 = service.call("hello 1"); assert_ready_ok!(service.poll_ready()); let r2 = service.call("hello 2"); assert_pending!(service.poll_ready()); assert!(!service.is_woken()); // The request gets passed through assert_request_eq!(handle, "hello 1").send_response("world 1"); // The next request gets passed through assert_request_eq!(handle, "hello 2").send_response("world 2"); // There are no more requests assert_pending!(handle.poll_request()); assert_eq!(r1.await.unwrap(), "world 1"); assert!(service.is_woken()); // Another request can be sent assert_ready_ok!(service.poll_ready()); let r3 = service.call("hello 3"); assert_pending!(service.poll_ready()); assert_eq!(r2.await.unwrap(), "world 2"); // The request gets passed through assert_request_eq!(handle, "hello 3").send_response("world 3"); assert_eq!(r3.await.unwrap(), "world 3"); } #[tokio::test(flavor = "current_thread")] async fn basic_service_limit_functionality_without_poll_ready() { let _t = support::trace_init(); let limit = ConcurrencyLimitLayer::new(2); let (mut service, mut handle) = mock::spawn_layer(limit); assert_ready_ok!(service.poll_ready()); let r1 = service.call("hello 1"); assert_ready_ok!(service.poll_ready()); let r2 = service.call("hello 2"); assert_pending!(service.poll_ready()); // The request gets passed through assert_request_eq!(handle, "hello 1").send_response("world 1"); assert!(!service.is_woken()); // The next request gets passed through assert_request_eq!(handle, "hello 2").send_response("world 2"); assert!(!service.is_woken()); // There are no more requests assert_pending!(handle.poll_request()); assert_eq!(r1.await.unwrap(), "world 1"); assert!(service.is_woken()); // One more request can be sent assert_ready_ok!(service.poll_ready()); let r4 = service.call("hello 4"); assert_pending!(service.poll_ready()); assert_eq!(r2.await.unwrap(), "world 2"); assert!(service.is_woken()); // The request gets passed through assert_request_eq!(handle, "hello 4").send_response("world 4"); assert_eq!(r4.await.unwrap(), "world 4"); } #[tokio::test(flavor = "current_thread")] async fn request_without_capacity() { let _t = support::trace_init(); let limit = ConcurrencyLimitLayer::new(0); let (mut service, _) = mock::spawn_layer::<(), (), _>(limit); assert_pending!(service.poll_ready()); } #[tokio::test(flavor = "current_thread")] async fn reserve_capacity_without_sending_request() { let _t = support::trace_init(); let limit = ConcurrencyLimitLayer::new(1); let (mut s1, mut handle) = mock::spawn_layer(limit); let mut s2 = s1.clone(); // Reserve capacity in s1 assert_ready_ok!(s1.poll_ready()); // Service 2 cannot get capacity assert_pending!(s2.poll_ready()); // s1 sends the request, then s2 is able to get capacity let r1 = s1.call("hello"); assert_request_eq!(handle, "hello").send_response("world"); assert_pending!(s2.poll_ready()); r1.await.unwrap(); assert_ready_ok!(s2.poll_ready()); } #[tokio::test(flavor = "current_thread")] async fn service_drop_frees_capacity() { let _t = support::trace_init(); let limit = ConcurrencyLimitLayer::new(1); let (mut s1, _handle) = mock::spawn_layer::<(), (), _>(limit); let mut s2 = s1.clone(); // Reserve capacity in s1 assert_ready_ok!(s1.poll_ready()); // Service 2 cannot get capacity assert_pending!(s2.poll_ready()); drop(s1); assert!(s2.is_woken()); assert_ready_ok!(s2.poll_ready()); } #[tokio::test(flavor = "current_thread")] async fn response_error_releases_capacity() { let _t = support::trace_init(); let limit = ConcurrencyLimitLayer::new(1); let (mut s1, mut handle) = mock::spawn_layer::<_, (), _>(limit); let mut s2 = s1.clone(); // Reserve capacity in s1 assert_ready_ok!(s1.poll_ready()); // s1 sends the request, then s2 is able to get capacity let r1 = s1.call("hello"); assert_request_eq!(handle, "hello").send_error("boom"); r1.await.unwrap_err(); assert_ready_ok!(s2.poll_ready()); } #[tokio::test(flavor = "current_thread")] async fn response_future_drop_releases_capacity() { let _t = support::trace_init(); let limit = ConcurrencyLimitLayer::new(1); let (mut s1, _handle) = mock::spawn_layer::<_, (), _>(limit); let mut s2 = s1.clone(); // Reserve capacity in s1 assert_ready_ok!(s1.poll_ready()); // s1 sends the request, then s2 is able to get capacity let r1 = s1.call("hello"); assert_pending!(s2.poll_ready()); drop(r1); assert_ready_ok!(s2.poll_ready()); } #[tokio::test(flavor = "current_thread")] async fn multi_waiters() { let _t = support::trace_init(); let limit = ConcurrencyLimitLayer::new(1); let (mut s1, _handle) = mock::spawn_layer::<(), (), _>(limit); let mut s2 = s1.clone(); let mut s3 = s1.clone(); // Reserve capacity in s1 assert_ready_ok!(s1.poll_ready()); // s2 and s3 are not ready assert_pending!(s2.poll_ready()); assert_pending!(s3.poll_ready()); drop(s1); assert!(s2.is_woken()); assert!(!s3.is_woken()); drop(s2); assert!(s3.is_woken()); } tower-0.4.13/tests/limit/main.rs000064400000000000000000000001500072674642500146470ustar 00000000000000#![cfg(feature = "limit")] mod concurrency; mod rate; #[path = "../support.rs"] pub(crate) mod support; tower-0.4.13/tests/limit/rate.rs000064400000000000000000000047020072674642500146650ustar 00000000000000use super::support; use std::time::Duration; use tokio::time; use tokio_test::{assert_pending, assert_ready, assert_ready_ok}; use tower::limit::rate::RateLimitLayer; use tower_test::{assert_request_eq, mock}; #[tokio::test(flavor = "current_thread")] async fn reaching_capacity() { let _t = support::trace_init(); time::pause(); let rate_limit = RateLimitLayer::new(1, Duration::from_millis(100)); let (mut service, mut handle) = mock::spawn_layer(rate_limit); assert_ready_ok!(service.poll_ready()); let response = service.call("hello"); assert_request_eq!(handle, "hello").send_response("world"); assert_eq!(response.await.unwrap(), "world"); assert_pending!(service.poll_ready()); assert_pending!(handle.poll_request()); time::advance(Duration::from_millis(101)).await; assert_ready_ok!(service.poll_ready()); let response = service.call("two"); assert_request_eq!(handle, "two").send_response("done"); assert_eq!(response.await.unwrap(), "done"); } #[tokio::test(flavor = "current_thread")] async fn remaining_gets_reset() { // This test checks for the case where the `until` state gets reset // but the `rem` does not. This was a bug found `cd7dd12315706fc0860a35646b1eb7b60c50a5c1`. // // The main premise here is that we can make one request which should initialize the state // as ready. Then we can advance the clock to put us beyond the current period. When we make // subsequent requests the `rem` for the next window is continued from the previous when // it should be totally reset. let _t = support::trace_init(); time::pause(); let rate_limit = RateLimitLayer::new(3, Duration::from_millis(100)); let (mut service, mut handle) = mock::spawn_layer(rate_limit); assert_ready_ok!(service.poll_ready()); let response = service.call("hello"); assert_request_eq!(handle, "hello").send_response("world"); assert_eq!(response.await.unwrap(), "world"); time::advance(Duration::from_millis(100)).await; assert_ready_ok!(service.poll_ready()); let response = service.call("hello"); assert_request_eq!(handle, "hello").send_response("world"); assert_eq!(response.await.unwrap(), "world"); assert_ready_ok!(service.poll_ready()); let response = service.call("hello"); assert_request_eq!(handle, "hello").send_response("world"); assert_eq!(response.await.unwrap(), "world"); assert_ready_ok!(service.poll_ready()); } tower-0.4.13/tests/load_shed/main.rs000064400000000000000000000022140072674642500154560ustar 00000000000000#![cfg(feature = "load-shed")] #[path = "../support.rs"] mod support; use tokio_test::{assert_ready_err, assert_ready_ok, task}; use tower::load_shed::LoadShedLayer; use tower_test::{assert_request_eq, mock}; #[tokio::test(flavor = "current_thread")] async fn when_ready() { let _t = support::trace_init(); let layer = LoadShedLayer::new(); let (mut service, mut handle) = mock::spawn_layer(layer); assert_ready_ok!(service.poll_ready(), "overload always reports ready"); let mut response = task::spawn(service.call("hello")); assert_request_eq!(handle, "hello").send_response("world"); assert_eq!(assert_ready_ok!(response.poll()), "world"); } #[tokio::test(flavor = "current_thread")] async fn when_not_ready() { let _t = support::trace_init(); let layer = LoadShedLayer::new(); let (mut service, mut handle) = mock::spawn_layer::<_, (), _>(layer); handle.allow(0); assert_ready_ok!(service.poll_ready(), "overload always reports ready"); let mut fut = task::spawn(service.call("hello")); let err = assert_ready_err!(fut.poll()); assert!(err.is::()); } tower-0.4.13/tests/ready_cache/main.rs000064400000000000000000000143270072674642500157730ustar 00000000000000#![cfg(feature = "ready-cache")] #[path = "../support.rs"] mod support; use std::pin::Pin; use tokio_test::{assert_pending, assert_ready, task}; use tower::ready_cache::{error, ReadyCache}; use tower_test::mock; type Req = &'static str; type Mock = mock::Mock; #[test] fn poll_ready_inner_failure() { let _t = support::trace_init(); let mut task = task::spawn(()); let mut cache = ReadyCache::::default(); let (service0, mut handle0) = mock::pair::(); handle0.send_error("doom"); cache.push(0, service0); let (service1, mut handle1) = mock::pair::(); handle1.allow(1); cache.push(1, service1); let failed = assert_ready!(task.enter(|cx, _| cache.poll_pending(cx))).unwrap_err(); assert_eq!(failed.0, 0); assert_eq!(format!("{}", failed.1), "doom"); assert_eq!(cache.len(), 1); } #[test] fn poll_ready_not_ready() { let _t = support::trace_init(); let mut task = task::spawn(()); let mut cache = ReadyCache::::default(); let (service0, mut handle0) = mock::pair::(); handle0.allow(0); cache.push(0, service0); let (service1, mut handle1) = mock::pair::(); handle1.allow(0); cache.push(1, service1); assert_pending!(task.enter(|cx, _| cache.poll_pending(cx))); assert_eq!(cache.ready_len(), 0); assert_eq!(cache.pending_len(), 2); assert_eq!(cache.len(), 2); } #[test] fn poll_ready_promotes_inner() { let _t = support::trace_init(); let mut task = task::spawn(()); let mut cache = ReadyCache::::default(); let (service0, mut handle0) = mock::pair::(); handle0.allow(1); cache.push(0, service0); let (service1, mut handle1) = mock::pair::(); handle1.allow(1); cache.push(1, service1); assert_eq!(cache.ready_len(), 0); assert_eq!(cache.pending_len(), 2); assert_eq!(cache.len(), 2); assert_ready!(task.enter(|cx, _| cache.poll_pending(cx))).unwrap(); assert_eq!(cache.ready_len(), 2); assert_eq!(cache.pending_len(), 0); assert_eq!(cache.len(), 2); } #[test] fn evict_ready_then_error() { let _t = support::trace_init(); let mut task = task::spawn(()); let mut cache = ReadyCache::::default(); let (service, mut handle) = mock::pair::(); handle.allow(0); cache.push(0, service); assert_pending!(task.enter(|cx, _| cache.poll_pending(cx))); handle.allow(1); assert_ready!(task.enter(|cx, _| cache.poll_pending(cx))).unwrap(); handle.send_error("doom"); assert!(cache.evict(&0)); assert_ready!(task.enter(|cx, _| cache.poll_pending(cx))).unwrap(); } #[test] fn evict_pending_then_error() { let _t = support::trace_init(); let mut task = task::spawn(()); let mut cache = ReadyCache::::default(); let (service, mut handle) = mock::pair::(); handle.allow(0); cache.push(0, service); assert_pending!(task.enter(|cx, _| cache.poll_pending(cx))); handle.send_error("doom"); assert!(cache.evict(&0)); assert_ready!(task.enter(|cx, _| cache.poll_pending(cx))).unwrap(); } #[test] fn push_then_evict() { let _t = support::trace_init(); let mut task = task::spawn(()); let mut cache = ReadyCache::::default(); let (service, mut handle) = mock::pair::(); handle.allow(0); cache.push(0, service); handle.send_error("doom"); assert!(cache.evict(&0)); assert_ready!(task.enter(|cx, _| cache.poll_pending(cx))).unwrap(); } #[test] fn error_after_promote() { let _t = support::trace_init(); let mut task = task::spawn(()); let mut cache = ReadyCache::::default(); let (service, mut handle) = mock::pair::(); handle.allow(0); cache.push(0, service); assert_pending!(task.enter(|cx, _| cache.poll_pending(cx))); handle.allow(1); assert_ready!(task.enter(|cx, _| cache.poll_pending(cx))).unwrap(); handle.send_error("doom"); assert_ready!(task.enter(|cx, _| cache.poll_pending(cx))).unwrap(); } #[test] fn duplicate_key_by_index() { let _t = support::trace_init(); let mut task = task::spawn(()); let mut cache = ReadyCache::::default(); let (service0, mut handle0) = mock::pair::(); handle0.allow(1); cache.push(0, service0); let (service1, mut handle1) = mock::pair::(); handle1.allow(1); // this push should replace the old service (service0) cache.push(0, service1); // this evict should evict service1 cache.evict(&0); // poll_pending should complete (there are no remaining pending services) assert_ready!(task.enter(|cx, _| cache.poll_pending(cx))).unwrap(); // but service 0 should not be ready (1 replaced, 1 evicted) assert!(!task.enter(|cx, _| cache.check_ready(cx, &0)).unwrap()); let (service2, mut handle2) = mock::pair::(); handle2.allow(1); // this push should ensure replace the evicted service1 cache.push(0, service2); // there should be no more pending assert_ready!(task.enter(|cx, _| cache.poll_pending(cx))).unwrap(); // _and_ service 0 should now be callable assert!(task.enter(|cx, _| cache.check_ready(cx, &0)).unwrap()); } // Tests https://github.com/tower-rs/tower/issues/415 #[tokio::test(flavor = "current_thread")] async fn cancelation_observed() { let mut cache = ReadyCache::default(); let mut handles = vec![]; // NOTE This test passes at 129 items, but fails at 130 items (if coop // schedulding interferes with cancelation). for _ in 0..130 { let (svc, mut handle) = tower_test::mock::pair::<(), ()>(); handle.allow(1); cache.push("ep0", svc); handles.push(handle); } struct Ready(ReadyCache<&'static str, tower_test::mock::Mock<(), ()>, ()>); impl Unpin for Ready {} impl std::future::Future for Ready { type Output = Result<(), error::Failed<&'static str>>; fn poll( self: Pin<&mut Self>, cx: &mut std::task::Context<'_>, ) -> std::task::Poll { self.get_mut().0.poll_pending(cx) } } Ready(cache).await.unwrap(); } tower-0.4.13/tests/retry/main.rs000064400000000000000000000114540072674642500147070ustar 00000000000000#![cfg(feature = "retry")] #[path = "../support.rs"] mod support; use futures_util::future; use tokio_test::{assert_pending, assert_ready_err, assert_ready_ok, task}; use tower::retry::Policy; use tower_test::{assert_request_eq, mock}; #[tokio::test(flavor = "current_thread")] async fn retry_errors() { let _t = support::trace_init(); let (mut service, mut handle) = new_service(RetryErrors); assert_ready_ok!(service.poll_ready()); let mut fut = task::spawn(service.call("hello")); assert_request_eq!(handle, "hello").send_error("retry me"); assert_pending!(fut.poll()); assert_request_eq!(handle, "hello").send_response("world"); assert_eq!(fut.into_inner().await.unwrap(), "world"); } #[tokio::test(flavor = "current_thread")] async fn retry_limit() { let _t = support::trace_init(); let (mut service, mut handle) = new_service(Limit(2)); assert_ready_ok!(service.poll_ready()); let mut fut = task::spawn(service.call("hello")); assert_request_eq!(handle, "hello").send_error("retry 1"); assert_pending!(fut.poll()); assert_request_eq!(handle, "hello").send_error("retry 2"); assert_pending!(fut.poll()); assert_request_eq!(handle, "hello").send_error("retry 3"); assert_eq!(assert_ready_err!(fut.poll()).to_string(), "retry 3"); } #[tokio::test(flavor = "current_thread")] async fn retry_error_inspection() { let _t = support::trace_init(); let (mut service, mut handle) = new_service(UnlessErr("reject")); assert_ready_ok!(service.poll_ready()); let mut fut = task::spawn(service.call("hello")); assert_request_eq!(handle, "hello").send_error("retry 1"); assert_pending!(fut.poll()); assert_request_eq!(handle, "hello").send_error("reject"); assert_eq!(assert_ready_err!(fut.poll()).to_string(), "reject"); } #[tokio::test(flavor = "current_thread")] async fn retry_cannot_clone_request() { let _t = support::trace_init(); let (mut service, mut handle) = new_service(CannotClone); assert_ready_ok!(service.poll_ready()); let mut fut = task::spawn(service.call("hello")); assert_request_eq!(handle, "hello").send_error("retry 1"); assert_eq!(assert_ready_err!(fut.poll()).to_string(), "retry 1"); } #[tokio::test(flavor = "current_thread")] async fn success_with_cannot_clone() { let _t = support::trace_init(); // Even though the request couldn't be cloned, if the first request succeeds, // it should succeed overall. let (mut service, mut handle) = new_service(CannotClone); assert_ready_ok!(service.poll_ready()); let mut fut = task::spawn(service.call("hello")); assert_request_eq!(handle, "hello").send_response("world"); assert_ready_ok!(fut.poll(), "world"); } type Req = &'static str; type Res = &'static str; type InnerError = &'static str; type Error = Box; type Mock = mock::Mock; type Handle = mock::Handle; #[derive(Clone)] struct RetryErrors; impl Policy for RetryErrors { type Future = future::Ready; fn retry(&self, _: &Req, result: Result<&Res, &Error>) -> Option { if result.is_err() { Some(future::ready(RetryErrors)) } else { None } } fn clone_request(&self, req: &Req) -> Option { Some(*req) } } #[derive(Clone)] struct Limit(usize); impl Policy for Limit { type Future = future::Ready; fn retry(&self, _: &Req, result: Result<&Res, &Error>) -> Option { if result.is_err() && self.0 > 0 { Some(future::ready(Limit(self.0 - 1))) } else { None } } fn clone_request(&self, req: &Req) -> Option { Some(*req) } } #[derive(Clone)] struct UnlessErr(InnerError); impl Policy for UnlessErr { type Future = future::Ready; fn retry(&self, _: &Req, result: Result<&Res, &Error>) -> Option { result.err().and_then(|err| { if err.to_string() != self.0 { Some(future::ready(self.clone())) } else { None } }) } fn clone_request(&self, req: &Req) -> Option { Some(*req) } } #[derive(Clone)] struct CannotClone; impl Policy for CannotClone { type Future = future::Ready; fn retry(&self, _: &Req, _: Result<&Res, &Error>) -> Option { unreachable!("retry cannot be called since request isn't cloned"); } fn clone_request(&self, _req: &Req) -> Option { None } } fn new_service + Clone>( policy: P, ) -> (mock::Spawn>, Handle) { let retry = tower::retry::RetryLayer::new(policy); mock::spawn_layer(retry) } tower-0.4.13/tests/spawn_ready/main.rs000064400000000000000000000047210072674642500160550ustar 00000000000000#![cfg(feature = "spawn-ready")] #[path = "../support.rs"] mod support; use tokio::time; use tokio_test::{assert_pending, assert_ready, assert_ready_err, assert_ready_ok}; use tower::spawn_ready::{SpawnReady, SpawnReadyLayer}; use tower::util::ServiceExt; use tower_test::mock; #[tokio::test(flavor = "current_thread")] async fn when_inner_is_not_ready() { time::pause(); let _t = support::trace_init(); let layer = SpawnReadyLayer::new(); let (mut service, mut handle) = mock::spawn_layer::<(), (), _>(layer); // Make the service NotReady handle.allow(0); assert_pending!(service.poll_ready()); // Make the service is Ready handle.allow(1); time::sleep(time::Duration::from_millis(100)).await; assert_ready_ok!(service.poll_ready()); } #[tokio::test(flavor = "current_thread")] async fn when_inner_fails() { let _t = support::trace_init(); let layer = SpawnReadyLayer::new(); let (mut service, mut handle) = mock::spawn_layer::<(), (), _>(layer); // Make the service NotReady handle.allow(0); handle.send_error("foobar"); assert_eq!( assert_ready_err!(service.poll_ready()).to_string(), "foobar" ); } #[tokio::test(flavor = "current_thread")] async fn propagates_trace_spans() { use tracing::Instrument; let _t = support::trace_init(); let span = tracing::info_span!("my_span"); let service = support::AssertSpanSvc::new(span.clone()); let service = SpawnReady::new(service); let result = tokio::spawn(service.oneshot(()).instrument(span)); result.await.expect("service panicked").expect("failed"); } #[cfg(test)] #[tokio::test(flavor = "current_thread")] async fn abort_on_drop() { let (mock, mut handle) = mock::pair::<(), ()>(); let mut svc = SpawnReady::new(mock); handle.allow(0); // Drive the service to readiness until we signal a drop. let (drop_tx, drop_rx) = tokio::sync::oneshot::channel(); let mut task = tokio_test::task::spawn(async move { tokio::select! { _ = drop_rx => {} _ = svc.ready() => unreachable!("Service must not become ready"), } }); assert_pending!(task.poll()); assert_pending!(handle.poll_request()); // End the task and ensure that the inner service has been dropped. assert!(drop_tx.send(()).is_ok()); tokio_test::assert_ready!(task.poll()); tokio::task::yield_now().await; assert!(tokio_test::assert_ready!(handle.poll_request()).is_none()); } tower-0.4.13/tests/steer/main.rs000064400000000000000000000031230072674642500146560ustar 00000000000000#![cfg(feature = "steer")] #[path = "../support.rs"] mod support; use futures_util::future::{ready, Ready}; use std::task::{Context, Poll}; use tower::steer::Steer; use tower_service::Service; type StdError = Box; struct MyService(u8, bool); impl Service for MyService { type Response = u8; type Error = StdError; type Future = Ready>; fn poll_ready(&mut self, _cx: &mut Context<'_>) -> Poll> { if !self.1 { Poll::Pending } else { Poll::Ready(Ok(())) } } fn call(&mut self, _req: String) -> Self::Future { ready(Ok(self.0)) } } #[tokio::test(flavor = "current_thread")] async fn pick_correctly() { let _t = support::trace_init(); let srvs = vec![MyService(42, true), MyService(57, true)]; let mut st = Steer::new(srvs, |_: &_, _: &[_]| 1); futures_util::future::poll_fn(|cx| st.poll_ready(cx)) .await .unwrap(); let r = st.call(String::from("foo")).await.unwrap(); assert_eq!(r, 57); } #[tokio::test(flavor = "current_thread")] async fn pending_all_ready() { let _t = support::trace_init(); let srvs = vec![MyService(42, true), MyService(57, false)]; let mut st = Steer::new(srvs, |_: &_, _: &[_]| 0); let p = futures_util::poll!(futures_util::future::poll_fn(|cx| st.poll_ready(cx))); match p { Poll::Pending => (), _ => panic!( "Steer should not return poll_ready if at least one component service is not ready" ), } } tower-0.4.13/tests/support.rs000064400000000000000000000052440072674642500143320ustar 00000000000000#![allow(dead_code)] use futures::future; use std::fmt; use std::pin::Pin; use std::task::{Context, Poll}; use tokio::sync::mpsc; use tokio_stream::Stream; use tower::Service; pub(crate) fn trace_init() -> tracing::subscriber::DefaultGuard { let subscriber = tracing_subscriber::fmt() .with_test_writer() .with_max_level(tracing::Level::TRACE) .with_thread_names(true) .finish(); tracing::subscriber::set_default(subscriber) } pin_project_lite::pin_project! { #[derive(Clone, Debug)] pub struct IntoStream { #[pin] inner: S } } impl IntoStream { pub fn new(inner: S) -> Self { Self { inner } } } impl Stream for IntoStream> { type Item = I; fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { self.project().inner.poll_recv(cx) } } impl Stream for IntoStream> { type Item = I; fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { self.project().inner.poll_recv(cx) } } #[derive(Clone, Debug)] pub struct AssertSpanSvc { span: tracing::Span, polled: bool, } pub struct AssertSpanError(String); impl fmt::Debug for AssertSpanError { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { fmt::Display::fmt(&self.0, f) } } impl fmt::Display for AssertSpanError { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { fmt::Display::fmt(&self.0, f) } } impl std::error::Error for AssertSpanError {} impl AssertSpanSvc { pub fn new(span: tracing::Span) -> Self { Self { span, polled: false, } } fn check(&self, func: &str) -> Result<(), AssertSpanError> { let current_span = tracing::Span::current(); tracing::debug!(?current_span, ?self.span, %func); if current_span == self.span { return Ok(()); } Err(AssertSpanError(format!( "{} called outside expected span\n expected: {:?}\n current: {:?}", func, self.span, current_span ))) } } impl Service<()> for AssertSpanSvc { type Response = (); type Error = AssertSpanError; type Future = future::Ready>; fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { if self.polled { return Poll::Ready(self.check("poll_ready")); } cx.waker().wake_by_ref(); self.polled = true; Poll::Pending } fn call(&mut self, _: ()) -> Self::Future { future::ready(self.check("call")) } } tower-0.4.13/tests/util/call_all.rs000064400000000000000000000116170072674642500153370ustar 00000000000000use super::support; use futures_core::Stream; use futures_util::{ future::{ready, Ready}, pin_mut, }; use std::task::{Context, Poll}; use std::{cell::Cell, rc::Rc}; use tokio_test::{assert_pending, assert_ready, task}; use tower::util::ServiceExt; use tower_service::*; use tower_test::{assert_request_eq, mock}; type Error = Box; #[derive(Debug, Eq, PartialEq)] struct Srv { admit: Rc>, count: Rc>, } impl Service<&'static str> for Srv { type Response = &'static str; type Error = Error; type Future = Ready>; fn poll_ready(&mut self, _: &mut Context<'_>) -> Poll> { if !self.admit.get() { return Poll::Pending; } self.admit.set(false); Poll::Ready(Ok(())) } fn call(&mut self, req: &'static str) -> Self::Future { self.count.set(self.count.get() + 1); ready(Ok(req)) } } #[test] fn ordered() { let _t = support::trace_init(); let mut mock = task::spawn(()); let admit = Rc::new(Cell::new(false)); let count = Rc::new(Cell::new(0)); let srv = Srv { count: count.clone(), admit: admit.clone(), }; let (tx, rx) = tokio::sync::mpsc::unbounded_channel(); let ca = srv.call_all(support::IntoStream::new(rx)); pin_mut!(ca); assert_pending!(mock.enter(|cx, _| ca.as_mut().poll_next(cx))); tx.send("one").unwrap(); mock.is_woken(); assert_pending!(mock.enter(|cx, _| ca.as_mut().poll_next(cx))); admit.set(true); let v = assert_ready!(mock.enter(|cx, _| ca.as_mut().poll_next(cx))) .transpose() .unwrap(); assert_eq!(v, Some("one")); assert_pending!(mock.enter(|cx, _| ca.as_mut().poll_next(cx))); admit.set(true); tx.send("two").unwrap(); mock.is_woken(); tx.send("three").unwrap(); let v = assert_ready!(mock.enter(|cx, _| ca.as_mut().poll_next(cx))) .transpose() .unwrap(); assert_eq!(v, Some("two")); assert_pending!(mock.enter(|cx, _| ca.as_mut().poll_next(cx))); admit.set(true); let v = assert_ready!(mock.enter(|cx, _| ca.as_mut().poll_next(cx))) .transpose() .unwrap(); assert_eq!(v, Some("three")); admit.set(true); assert_pending!(mock.enter(|cx, _| ca.as_mut().poll_next(cx))); admit.set(true); tx.send("four").unwrap(); mock.is_woken(); let v = assert_ready!(mock.enter(|cx, _| ca.as_mut().poll_next(cx))) .transpose() .unwrap(); assert_eq!(v, Some("four")); assert_pending!(mock.enter(|cx, _| ca.as_mut().poll_next(cx))); // need to be ready since impl doesn't know it'll get EOF admit.set(true); // When we drop the request stream, CallAll should return None. drop(tx); mock.is_woken(); let v = assert_ready!(mock.enter(|cx, _| ca.as_mut().poll_next(cx))) .transpose() .unwrap(); assert!(v.is_none()); assert_eq!(count.get(), 4); // We should also be able to recover the wrapped Service. assert_eq!(ca.take_service(), Srv { count, admit }); } #[tokio::test(flavor = "current_thread")] async fn unordered() { let _t = support::trace_init(); let (mock, handle) = mock::pair::<_, &'static str>(); pin_mut!(handle); let mut task = task::spawn(()); let requests = futures_util::stream::iter(&["one", "two"]); let svc = mock.call_all(requests).unordered(); pin_mut!(svc); assert_pending!(task.enter(|cx, _| svc.as_mut().poll_next(cx))); let resp1 = assert_request_eq!(handle, &"one"); let resp2 = assert_request_eq!(handle, &"two"); resp2.send_response("resp 1"); let v = assert_ready!(task.enter(|cx, _| svc.as_mut().poll_next(cx))) .transpose() .unwrap(); assert_eq!(v, Some("resp 1")); assert_pending!(task.enter(|cx, _| svc.as_mut().poll_next(cx))); resp1.send_response("resp 2"); let v = assert_ready!(task.enter(|cx, _| svc.as_mut().poll_next(cx))) .transpose() .unwrap(); assert_eq!(v, Some("resp 2")); let v = assert_ready!(task.enter(|cx, _| svc.as_mut().poll_next(cx))) .transpose() .unwrap(); assert!(v.is_none()); } #[tokio::test] async fn pending() { let _t = support::trace_init(); let (mock, mut handle) = mock::pair::<_, &'static str>(); let mut task = task::spawn(()); let (tx, rx) = tokio::sync::mpsc::unbounded_channel(); let ca = mock.call_all(support::IntoStream::new(rx)); pin_mut!(ca); assert_pending!(task.enter(|cx, _| ca.as_mut().poll_next(cx))); tx.send("req").unwrap(); assert_pending!(task.enter(|cx, _| ca.as_mut().poll_next(cx))); assert_request_eq!(handle, "req").send_response("res"); let res = assert_ready!(task.enter(|cx, _| ca.as_mut().poll_next(cx))); assert_eq!(res.transpose().unwrap(), Some("res")); assert_pending!(task.enter(|cx, _| ca.as_mut().poll_next(cx))); } tower-0.4.13/tests/util/main.rs000064400000000000000000000002330072674642500145100ustar 00000000000000#![cfg(feature = "util")] #![allow(clippy::type_complexity)] mod call_all; mod oneshot; mod service_fn; #[path = "../support.rs"] pub(crate) mod support; tower-0.4.13/tests/util/oneshot.rs000064400000000000000000000022620072674642500152470ustar 00000000000000use std::task::{Context, Poll}; use std::{future::Future, pin::Pin}; use tower::util::ServiceExt; use tower_service::Service; #[tokio::test(flavor = "current_thread")] async fn service_driven_to_readiness() { // This test ensures that `oneshot` will repeatedly call `poll_ready` until // the service is ready. let _t = super::support::trace_init(); struct PollMeTwice { ready: bool, } impl Service<()> for PollMeTwice { type Error = (); type Response = (); type Future = Pin< Box> + Send + Sync + 'static>, >; fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { if self.ready { Poll::Ready(Ok(())) } else { self.ready = true; cx.waker().wake_by_ref(); Poll::Pending } } fn call(&mut self, _: ()) -> Self::Future { assert!(self.ready, "service not driven to readiness!"); Box::pin(async { Ok(()) }) } } let svc = PollMeTwice { ready: false }; svc.oneshot(()).await.unwrap(); } tower-0.4.13/tests/util/service_fn.rs000064400000000000000000000005300072674642500157070ustar 00000000000000use futures_util::future::ready; use tower::util::service_fn; use tower_service::Service; #[tokio::test(flavor = "current_thread")] async fn simple() { let _t = super::support::trace_init(); let mut add_one = service_fn(|req| ready(Ok::<_, ()>(req + 1))); let answer = add_one.call(1).await.unwrap(); assert_eq!(answer, 2); }