actix-http-3.9.0/.cargo_vcs_info.json

{
  "git": {
    "sha1": "9ba326aed0dfe1a992c43a07fcb74958e4cfb3c6"
  },
  "path_in_vcs": "actix-http"
}

actix-http-3.9.0/CHANGES.md

# Changes

## Unreleased

## 3.9.0

### Added

- Implement `FromIterator<(HeaderName, HeaderValue)>` for `HeaderMap`.

## 3.8.0

### Added

- Add `error::InvalidStatusCode` re-export.

## 3.7.0

### Added

- Add `rustls-0_23` crate feature.
- Add `{h1::H1Service, h2::H2Service, HttpService}::rustls_0_23()` and `HttpService::rustls_0_23_with_config()` service constructors.

### Changed

- Update `brotli` dependency to `6`.
- Minimum supported Rust version (MSRV) is now 1.72.

## 3.6.0

### Added

- Add `rustls-0_22` crate feature.
- Add `{h1::H1Service, h2::H2Service, HttpService}::rustls_0_22()` and `HttpService::rustls_0_22_with_config()` service constructors.
- Implement `From<&HeaderMap>` for `http::HeaderMap`.

## 3.5.1

### Fixed

- Prevent hang when returning zero-sized response bodies through compression layer.

## 3.5.0

### Added

- Implement `From<HeaderMap>` for `http::HeaderMap`.

### Changed

- Updated `zstd` dependency to `0.13`.

### Fixed

- Prevent compression of zero-sized response bodies.

## 3.4.0

### Added

- Add `rustls-0_21` crate feature.
- Add `{h1::H1Service, h2::H2Service, HttpService}::rustls_021()` and `HttpService::rustls_021_with_config()` service constructors.
- Add `body::to_bytes_limited()` function.
- Add `body::BodyLimitExceeded` error type.

### Changed

- Minimum supported Rust version (MSRV) is now 1.68 due to transitive `time` dependency.

## 3.3.1

### Fixed

- Use correct `http` version requirement to ensure support for const `HeaderName` definitions.

## 3.3.0

### Added

- Implement `MessageBody` for `Cow<'static, str>` and `Cow<'static, [u8]>`. [#2959]
- Implement `MessageBody` for `&mut B` where `B: MessageBody + Unpin`. [#2868]
- Implement `MessageBody` for `Pin<B>` where `B::Target: MessageBody`. [#2868]
- Automatic h2c detection via new service finalizer `HttpService::tcp_auto_h2c()`. [#2957]
- `HeaderMap::retain()` (see the sketch after this section). [#2955]
- Header name constants in `header` module. [#2956] [#2968]
  - `CACHE_STATUS`
  - `CDN_CACHE_CONTROL`
  - `CROSS_ORIGIN_EMBEDDER_POLICY`
  - `CROSS_ORIGIN_OPENER_POLICY`
  - `PERMISSIONS_POLICY`
  - `X_FORWARDED_FOR`
  - `X_FORWARDED_HOST`
  - `X_FORWARDED_PROTO`

### Fixed

- Fix non-empty body of HTTP/2 HEAD responses. [#2920]

### Performance

- Improve overall performance of operations on `Extensions`. [#2890]

[#2959]: https://github.com/actix/actix-web/pull/2959
[#2868]: https://github.com/actix/actix-web/pull/2868
[#2890]: https://github.com/actix/actix-web/pull/2890
[#2920]: https://github.com/actix/actix-web/pull/2920
[#2957]: https://github.com/actix/actix-web/pull/2957
[#2955]: https://github.com/actix/actix-web/pull/2955
[#2956]: https://github.com/actix/actix-web/pull/2956
[#2968]: https://github.com/actix/actix-web/pull/2968
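A minimal sketch of the `HeaderMap` additions mentioned above: the `FromIterator<(HeaderName, HeaderValue)>` impl from 3.9.0 and `retain()` plus the forwarding-header constants from 3.3.0. The predicate shape for `retain()` is assumed here (mirroring `HashMap::retain`); it is not spelled out in the entries themselves.

```rust
use actix_http::header::{self, HeaderMap, HeaderValue};

/// Hypothetical helper: build a header map from pairs, then drop
/// forwarding headers before re-proxying a request.
fn sanitized_headers() -> HeaderMap {
    // `FromIterator<(HeaderName, HeaderValue)>` (added in 3.9.0).
    let mut headers: HeaderMap = [
        (header::CONTENT_TYPE, HeaderValue::from_static("text/plain")),
        (header::X_FORWARDED_FOR, HeaderValue::from_static("203.0.113.7")),
    ]
    .into_iter()
    .collect();

    // `retain()` (added in 3.3.0); predicate signature assumed to be
    // `(&HeaderName, &mut HeaderValue) -> bool`.
    headers.retain(|name, _value| *name != header::X_FORWARDED_FOR);

    headers
}
```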
## 3.2.2

### Changed

- Minimum supported Rust version (MSRV) is now 1.59 due to transitive `time` dependency.

### Fixed

- Avoid possibility of dispatcher getting stuck while back-pressuring I/O. [#2369]

[#2369]: https://github.com/actix/actix-web/pull/2369

## 3.2.1

### Fixed

- Fix parsing ambiguity in Transfer-Encoding and Content-Length headers for HTTP/1.0 requests. [#2794]

[#2794]: https://github.com/actix/actix-web/pull/2794

## 3.2.0

### Changed

- Minimum supported Rust version (MSRV) is now 1.57 due to transitive `time` dependency.

### Fixed

- Websocket parser no longer throws endless overflow errors after receiving an oversized frame. [#2790]
- Retain previously set Vary headers when using compression encoder. [#2798]

[#2790]: https://github.com/actix/actix-web/pull/2790
[#2798]: https://github.com/actix/actix-web/pull/2798

## 3.1.0

### Changed

- Minimum supported Rust version (MSRV) is now 1.56 due to transitive `hashbrown` dependency.

### Fixed

- Revert broken fix in [#2624] that caused erroneous 500 error responses. Temporarily re-introduces [#2357] bug. [#2779]

[#2624]: https://github.com/actix/actix-web/pull/2624
[#2357]: https://github.com/actix/actix-web/issues/2357
[#2779]: https://github.com/actix/actix-web/pull/2779

## 3.0.4

### Fixed

- Document on docs.rs with `ws` feature enabled.

## 3.0.3

### Fixed

- Allow spaces between header name and colon when parsing responses. [#2684]

[#2684]: https://github.com/actix/actix-web/pull/2684

## 3.0.2

### Fixed

- Fix encoding camel-case header names with more than one hyphen. [#2683]

[#2683]: https://github.com/actix/actix-web/pull/2683

## 3.0.1

- Fix panic in H1 dispatcher when pipelining is used with keep-alive. [#2678]

[#2678]: https://github.com/actix/actix-web/issues/2678

## 3.0.0

### Dependencies

- Updated `actix-*` to Tokio v1-based versions. [#1813]
- Updated `bytes` to `1.0`. [#1813]
- Updated `h2` to `0.3`. [#1813]
- Updated `rustls` to `0.20.0`. [#2414]
- Updated `language-tags` to `0.3`.
- Updated `tokio` to `1`.

### Added

- Crate Features:
  - `ws`; disabled by default. [#2618]
  - `http2`; disabled by default. [#2618]
  - `compress-brotli`; disabled by default. [#2618]
  - `compress-gzip`; disabled by default. [#2618]
  - `compress-zstd`; disabled by default. [#2618]
- Functions:
  - `body::to_bytes` for async collecting message body into Bytes (see the sketch after this section). [#2158]
- Traits:
  - `TryIntoHeaderPair`; allows using typed and untyped headers in the same methods. [#1869]
- Types:
  - `body::BoxBody`; a boxed message body with boxed errors. [#2183]
  - `body::EitherBody` enum. [#2468]
  - `body::None` struct. [#2468]
  - Re-export `http` crate's `Error` type as `error::HttpError`. [#2171]
- Variants:
  - `ContentEncoding::Zstd` along with . [#2244]
  - `Protocol::Http3` for future compatibility and also mark `#[non_exhaustive]`. [00ba8d55]
- Methods:
  - `ContentEncoding::to_header_value()`. [#2501]
  - `header::QualityItem::{max, min}()`. [#2486]
  - `header::QualityItem::zero()` that uses `Quality::ZERO`. [#2501]
  - `HeaderMap::drain()` as an efficient draining iterator. [#1964]
  - `HeaderMap::len_keys()` has the behavior of the old `len` method. [#1964]
  - `MessageBody::boxed` trait method for wrapping body types efficiently. [#2520]
  - `MessageBody::try_into_bytes` trait method, with default implementation, for optimizations on body types that complete in exactly one poll. [#2522]
  - `Request::conn_data()`. [#2491]
  - `Request::take_conn_data()`. [#2491]
  - `Request::take_req_data()`. [#2487]
  - `Response::{ok, bad_request, not_found, internal_server_error}()`. [#2159]
  - `Response::into_body()` that consumes response and returns body type. [#2201]
  - `Response::map_into_boxed_body()`. [#2468]
  - `ResponseBuilder::append_header()` method which allows using typed and untyped headers. [#1869]
  - `ResponseBuilder::insert_header()` method which allows using typed and untyped headers. [#1869]
  - `ResponseHead::set_camel_case_headers()`. [#2587]
  - `TestRequest::insert_header()` method which allows using typed and untyped headers. [#1869]
- Implementations:
  - Implement `Clone` for `ws::HandshakeError`. [#2468]
  - Implement `Clone` for `body::AnyBody<S> where S: Clone`. [#2448]
  - Implement `Clone` for `RequestHead`. [#2487]
  - Implement `Clone` for `ResponseHead`. [#2585]
  - Implement `Copy` for `QualityItem<T> where T: Copy`. [#2501]
  - Implement `Default` for `ContentEncoding`. [#1912]
  - Implement `Default` for `HttpServiceBuilder`. [#2611]
  - Implement `Default` for `KeepAlive`. [#2611]
  - Implement `Default` for `Response`. [#2201]
  - Implement `Default` for `ws::Codec`. [#1920]
  - Implement `Display` for `header::Quality`. [#2486]
  - Implement `Eq` for `header::ContentEncoding`. [#2501]
  - Implement `ExactSizeIterator` and `FusedIterator` for all `HeaderMap` iterators. [#2470]
  - Implement `From<Duration>` for `KeepAlive`. [#2611]
  - Implement `From<Option<Duration>>` for `KeepAlive`. [#2611]
  - Implement `From>` for `Response>`. [#2625]
  - Implement `FromStr` for `ContentEncoding`. [#1912]
  - Implement `Header` for `ContentEncoding`. [#1912]
  - Implement `IntoHeaderValue` for `ContentEncoding`. [#1912]
  - Implement `IntoIterator` for `HeaderMap`. [#1964]
  - Implement `MessageBody` for `bytestring::ByteString`. [#2468]
  - Implement `MessageBody` for `Pin<Box<T>> where T: MessageBody`. [#2152]
- Misc:
  - Re-export `StatusCode`, `Method`, `Version` and `Uri` at the crate root. [#2171]
  - Re-export `ContentEncoding` and `ConnectionType` at the crate root. [#2171]
  - `Quality::ZERO` associated constant equivalent to `q=0`. [#2501]
  - `header::Quality::{MAX, MIN}` associated constants equivalent to `q=1` and `q=0.001`, respectively. [#2486]
  - Timeout for canceling HTTP/2 server side connection handshake. Configurable with `ServiceConfig::client_timeout`; defaults to 5 seconds. [#2483]
  - `#[must_use]` for `ws::Codec` to prevent subtle bugs. [#1920]

### Changed

- Traits:
  - Rename `IntoHeaderValue => TryIntoHeaderValue`. [#2510]
  - `MessageBody` now has an associated `Error` type. [#2183]
- Types:
  - `Protocol` enum is now marked `#[non_exhaustive]`.
  - `error::DispatcherError` enum is now marked `#[non_exhaustive]`. [#2624]
  - `ContentEncoding` is now marked `#[non_exhaustive]`. [#2377]
  - Error enums are marked `#[non_exhaustive]`. [#2161]
  - Rename `PayloadStream` to `BoxedPayloadStream`. [#2545]
  - The body type parameter of `Response` no longer has a default. [#2152]
- Enum Variants:
  - Rename `ContentEncoding::{Br => Brotli}`. [#2501]
  - `Payload` inner fields are now named. [#2545]
  - `ws::Message::Text` now contains a `bytestring::ByteString`. [#1864]
- Methods:
  - Rename `ServiceConfig::{client_timer_expire => client_request_deadline}`. [#2611]
  - Rename `ServiceConfig::{client_disconnect_timer => client_disconnect_deadline}`. [#2611]
  - Rename `h1::Codec::{keepalive => keep_alive}`. [#2611]
  - Rename `h1::Codec::{keepalive_enabled => keep_alive_enabled}`. [#2611]
  - Rename `h1::ClientCodec::{keepalive => keep_alive}`. [#2611]
  - Rename `h1::ClientPayloadCodec::{keepalive => keep_alive}`. [#2611]
  - Rename `header::EntityTag::{weak => new_weak, strong => new_strong}`. [#2565]
  - Rename `TryIntoHeaderValue::{try_into => try_into_value}` to avoid ambiguity with std `TryInto` trait. [#1894]
  - Deadline methods in `ServiceConfig` now return `std::time::Instant`s instead of Tokio's wrapper type. [#2611]
  - Places in `Response` where `ResponseBody` was received or returned now simply use `B`. [#2201]
  - `encoding::Encoder::response` now returns `AnyBody<Encoder<B>>`. [#2448]
  - `Extensions::insert` returns replaced item. [#1904]
  - `HeaderMap::get_all` now returns a `std::slice::Iter`. [#2527]
  - `HeaderMap::insert` now returns iterator of removed values. [#1964]
  - `HeaderMap::len` now returns number of values instead of number of keys. [#1964]
  - `HeaderMap::remove` now returns iterator of removed values. [#1964]
  - `ResponseBuilder::body(B)` now returns `Response<EitherBody<B>>`. [#2468]
  - `ResponseBuilder::content_type` now takes an `impl TryIntoHeaderValue` to support using typed `mime` types. [#1894]
  - `ResponseBuilder::finish()` now returns `Response<EitherBody<()>>`. [#2468]
  - `ResponseBuilder::json` now takes `impl Serialize`. [#2052]
  - `ResponseBuilder::message_body` now returns a `Result`. [#2201]
  - `ServiceConfig::keep_alive` now returns a `KeepAlive`. [#2611]
  - `ws::hash_key` now returns array. [#2035]
- Trait Implementations:
  - Implementation of `Stream` for `Payload` no longer requires the `Stream` variant be `Unpin`. [#2545]
  - Implementation of `Future` for `h1::SendResponse` no longer requires the body type be `Unpin`. [#2545]
  - Implementation of `Stream` for `encoding::Decoder` no longer requires the stream type be `Unpin`. [#2545]
  - Implementations of `From` for error types now return a `Response`. [#2468]
- Misc:
  - `header` module is now public. [#2171]
  - `uri` module is now public. [#2171]
  - Request-local data container is no longer part of a `RequestHead`. Instead it is a distinct part of a `Request`. [#2487]
  - All error trait bounds in server service builders have changed from `Into<Error>` to `Into<Box<dyn std::error::Error>>`. [#2253]
  - All error trait bounds in message body and stream impls changed from `Into<Error>` to `Into<Box<dyn std::error::Error>>`. [#2253]
  - Guarantee ordering of `header::GetAll` iterator to be same as insertion order. [#2467]
  - Connection data set through the `on_connect_ext` callbacks is now accessible only from the new `Request::conn_data()` method. [#2491]
  - Brotli (de)compression support is now provided by the `brotli` crate. [#2538]
  - Minimum supported Rust version (MSRV) is now 1.54.

### Fixed

- A `Vary` header is now correctly sent along with compressed content. [#2501]
- HTTP/1.1 dispatcher correctly uses client request timeout. [#2611]
- Fixed issue where handlers that took a payload but then dropped it without reading it to EOF would cause keep-alive connections to become stuck. [#2624]
- `ContentEncoding`'s `Identity` variant can now be parsed from a string. [#2501]
- `HttpServer::{listen_rustls(), bind_rustls()}` now honor the ALPN protocols in the configuration parameter. [#2226]
- Remove unnecessary `Into` bound on `Encoder` body types. [#2375]
- Remove unnecessary `Unpin` bound on `ResponseBuilder::streaming`. [#2253]
- `BodyStream` and `SizedStream` are no longer restricted to `Unpin` types. [#2152]
- Fixed slice creation pointing to potential uninitialized data on h1 encoder. [#2364]
- Fixed quality parse error in Accept-Encoding header. [#2344]

### Removed

- Crate Features:
  - `compress` feature. [#2065]
  - `cookies` feature. [#2065]
  - `trust-dns` feature. [#2425]
  - `actors` optional feature and trait implementation for `actix` types. [#1969]
- Functions:
  - `header::qitem` helper. Replaced with `header::QualityItem::max`. [#2486]
- Types:
  - `body::Body`; replaced with `EitherBody` and `BoxBody`. [#2468]
  - `body::ResponseBody`. [#2446]
  - `ConnectError::SslHandshakeError` and re-export of `HandshakeError`. Due to the removal of this type from `tokio-openssl` crate. OpenSSL handshake error now returns `ConnectError::SslError`. [#1813]
  - `error::Canceled` re-export. [#1994]
  - `error::Result` type alias. [#2201]
  - `error::BlockingError`. [#2660]
  - `InternalError` and all the error types it constructed were moved up to `actix-web`. [#2215]
  - Typed HTTP headers; they have moved up to `actix-web`. [#2094]
  - Re-export of `http` crate's `HeaderMap` types in addition to ours. [#2171]
- Enum Variants:
  - `body::BodySize::Empty`; an empty body can now only be represented as a `Sized(0)` variant. [#2446]
  - `ContentEncoding::Auto`. [#2501]
  - `EncoderError::Boxed`. [#2446]
- Methods:
  - `ContentEncoding::is_compression()`. [#2501]
  - `h1::Payload::readany()`. [#2545]
  - `HttpMessage::cookie[s]()` trait methods. [#2065]
  - `HttpServiceBuilder::new()`; use `default` instead. [#2611]
  - `on_connect` (previously deprecated) methods have been removed; use `on_connect_ext`. [#1857]
  - `Response::build_from()`. [#2159]
  - `Response::error()`. [#2205]
  - `Response::take_body()` and old `Response::into_body()` method that casted body type. [#2201]
  - `Response`'s status code builders. [#2159]
  - `ResponseBuilder::{if_true, if_some}()` (previously deprecated). [#2148]
  - `ResponseBuilder::{set, set_header}()`; use `ResponseBuilder::insert_header()`. [#1869]
  - `ResponseBuilder::extensions[_mut]()`. [#2585]
  - `ResponseBuilder::header()`; use `ResponseBuilder::append_header()`. [#1869]
  - `ResponseBuilder::json()`. [#2148]
  - `ResponseBuilder::json2()`. [#1903]
  - `ResponseBuilder::streaming()`. [#2468]
  - `ResponseHead::extensions[_mut]()`. [#2585]
  - `ServiceConfig::{client_timer, keep_alive_timer}()`. [#2611]
  - `TestRequest::with_hdr()`; use `TestRequest::default().insert_header()`. [#1869]
  - `TestRequest::with_header()`; use `TestRequest::default().insert_header()`. [#1869]
- Trait implementations:
  - Implementation of `Copy` for `ws::Codec`. [#1920]
  - Implementation of `From<Option<usize>>` for `KeepAlive`; use `Duration`s instead. [#2611]
  - Implementation of `From` for `Body`. [#2148]
  - Implementation of `From<usize>` for `KeepAlive`; use `Duration`s instead. [#2611]
  - Implementation of `Future` for `Response`. [#2201]
  - Implementation of `Future` for `ResponseBuilder`. [#2468]
  - Implementation of `Into` for `Response`. [#2215]
  - Implementation of `Into` for `ResponseBuilder`. [#2215]
  - Implementation of `ResponseError` for `actix_utils::timeout::TimeoutError`. [#2127]
  - Implementation of `ResponseError` for `CookieParseError`. [#2065]
  - Implementation of `TryFrom` for `header::Quality`. [#2486]
- Misc:
  - `http` module; most everything it contained is exported at the crate root. [#2488]
  - `cookies` module (re-export). [#2065]
  - `client` module. Connector types now live in `awc`. [#2425]
  - `error` field from `Response`. [#2205]
  - `downcast` and `downcast_get_type_id` macros. [#2291]
  - Down-casting for `MessageBody` types; use standard `Any` trait. [#2183]
[#1813]: https://github.com/actix/actix-web/pull/1813
[#1845]: https://github.com/actix/actix-web/pull/1845
[#1857]: https://github.com/actix/actix-web/pull/1857
[#1864]: https://github.com/actix/actix-web/pull/1864
[#1869]: https://github.com/actix/actix-web/pull/1869
[#1878]: https://github.com/actix/actix-web/pull/1878
[#1894]: https://github.com/actix/actix-web/pull/1894
[#1903]: https://github.com/actix/actix-web/pull/1903
[#1904]: https://github.com/actix/actix-web/pull/1904
[#1912]: https://github.com/actix/actix-web/pull/1912
[#1920]: https://github.com/actix/actix-web/pull/1920
[#1964]: https://github.com/actix/actix-web/pull/1964
[#1969]: https://github.com/actix/actix-web/pull/1969
[#1981]: https://github.com/actix/actix-web/pull/1981
[#1994]: https://github.com/actix/actix-web/pull/1994
[#2035]: https://github.com/actix/actix-web/pull/2035
[#2052]: https://github.com/actix/actix-web/pull/2052
[#2065]: https://github.com/actix/actix-web/pull/2065
[#2094]: https://github.com/actix/actix-web/pull/2094
[#2127]: https://github.com/actix/actix-web/pull/2127
[#2148]: https://github.com/actix/actix-web/pull/2148
[#2152]: https://github.com/actix/actix-web/pull/2152
[#2158]: https://github.com/actix/actix-web/pull/2158
[#2159]: https://github.com/actix/actix-web/pull/2159
[#2161]: https://github.com/actix/actix-web/pull/2161
[#2171]: https://github.com/actix/actix-web/pull/2171
[#2183]: https://github.com/actix/actix-web/pull/2183
[#2196]: https://github.com/actix/actix-web/pull/2196
[#2201]: https://github.com/actix/actix-web/pull/2201
[#2205]: https://github.com/actix/actix-web/pull/2205
[#2215]: https://github.com/actix/actix-web/pull/2215
[#2244]: https://github.com/actix/actix-web/pull/2244
[#2250]: https://github.com/actix/actix-web/pull/2250
[#2253]: https://github.com/actix/actix-web/pull/2253
[#2291]: https://github.com/actix/actix-web/pull/2291
[#2344]: https://github.com/actix/actix-web/pull/2344
[#2364]: https://github.com/actix/actix-web/pull/2364
[#2375]: https://github.com/actix/actix-web/pull/2375
[#2377]: https://github.com/actix/actix-web/pull/2377
[#2414]: https://github.com/actix/actix-web/pull/2414
[#2425]: https://github.com/actix/actix-web/pull/2425
[#2442]: https://github.com/actix/actix-web/pull/2442
[#2446]: https://github.com/actix/actix-web/pull/2446
[#2448]: https://github.com/actix/actix-web/pull/2448
[#2456]: https://github.com/actix/actix-web/pull/2456
[#2467]: https://github.com/actix/actix-web/pull/2467
[#2468]: https://github.com/actix/actix-web/pull/2468
[#2470]: https://github.com/actix/actix-web/pull/2470
[#2474]: https://github.com/actix/actix-web/pull/2474
[#2483]: https://github.com/actix/actix-web/pull/2483
[#2486]: https://github.com/actix/actix-web/pull/2486
[#2487]: https://github.com/actix/actix-web/pull/2487
[#2488]: https://github.com/actix/actix-web/pull/2488
[#2491]: https://github.com/actix/actix-web/pull/2491
[#2497]: https://github.com/actix/actix-web/pull/2497
[#2501]: https://github.com/actix/actix-web/pull/2501
[#2510]: https://github.com/actix/actix-web/pull/2510
[#2520]: https://github.com/actix/actix-web/pull/2520
[#2522]: https://github.com/actix/actix-web/pull/2522
[#2527]: https://github.com/actix/actix-web/pull/2527
[#2538]: https://github.com/actix/actix-web/pull/2538
[#2545]: https://github.com/actix/actix-web/pull/2545
[#2565]: https://github.com/actix/actix-web/pull/2565
[#2585]: https://github.com/actix/actix-web/pull/2585
[#2587]: https://github.com/actix/actix-web/pull/2587
[#2611]: https://github.com/actix/actix-web/pull/2611
[#2618]: https://github.com/actix/actix-web/pull/2618
[#2624]: https://github.com/actix/actix-web/pull/2624
[#2625]: https://github.com/actix/actix-web/pull/2625
[#2660]: https://github.com/actix/actix-web/pull/2660
[00ba8d55]: https://github.com/actix/actix-web/commit/00ba8d55492284581695d824648590715a8bd386
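As a companion to the body API entries in the 3.0.0 section above, here is a minimal sketch of `MessageBody::boxed()` and the `body::to_bytes()` helper. It assumes the `MessageBody` impl for `&'static str` and simplifies error handling; it is an illustration, not text from the release notes.

```rust
use actix_http::body::{self, MessageBody as _};

/// Box a static-str body and collect it back into `Bytes`.
async fn collect_demo() -> Result<(), Box<dyn std::error::Error>> {
    // `MessageBody::boxed()` (added in #2520) yields a type-erased `BoxBody`.
    let body = "hello world".boxed();

    // `body::to_bytes()` (added in #2158) drives the body to completion.
    let bytes = body::to_bytes(body).await?;
    assert_eq!(bytes.as_ref(), b"hello world");
    Ok(())
}
```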
## 3.0.0 Pre-Releases

## 3.0.0-rc.4

### Fixed

- Fix h1 dispatcher panic. [1ce58ecb]

[1ce58ecb]: https://github.com/actix/actix-web/commit/1ce58ecb305c60e51db06e6c913b7a1344e229ca

## 3.0.0-rc.3

- No significant changes since `3.0.0-rc.2`.

## 3.0.0-rc.2

### Added

- Implement `From>` for `Response>`. [#2625]

### Changed

- `error::DispatcherError` enum is now marked `#[non_exhaustive]`. [#2624]

### Fixed

- Issue where handlers that took a payload but then dropped it without reading it to EOF would cause keep-alive connections to become stuck. [#2624]

[#2624]: https://github.com/actix/actix-web/pull/2624
[#2625]: https://github.com/actix/actix-web/pull/2625

## 3.0.0-rc.1

### Added

- Implement `Default` for `KeepAlive`. [#2611]
- Implement `From<Duration>` for `KeepAlive`. [#2611]
- Implement `From<Option<Duration>>` for `KeepAlive`. [#2611]
- Implement `Default` for `HttpServiceBuilder`. [#2611]
- Crate `ws` feature flag, disabled by default. [#2618]
- Crate `http2` feature flag, disabled by default. [#2618]

### Changed

- Rename `ServiceConfig::{client_timer_expire => client_request_deadline}`. [#2611]
- Rename `ServiceConfig::{client_disconnect_timer => client_disconnect_deadline}`. [#2611]
- Deadline methods in `ServiceConfig` now return `std::time::Instant`s instead of Tokio's wrapper type. [#2611]
- Rename `h1::Codec::{keepalive => keep_alive}`. [#2611]
- Rename `h1::Codec::{keepalive_enabled => keep_alive_enabled}`. [#2611]
- Rename `h1::ClientCodec::{keepalive => keep_alive}`. [#2611]
- Rename `h1::ClientPayloadCodec::{keepalive => keep_alive}`. [#2611]
- `ServiceConfig::keep_alive` now returns a `KeepAlive`. [#2611]

### Fixed

- HTTP/1.1 dispatcher correctly uses client request timeout. [#2611]

### Removed

- `ServiceConfig::{client_timer, keep_alive_timer}`. [#2611]
- `impl From<usize> for KeepAlive`; use `Duration`s instead. [#2611]
- `impl From<Option<usize>> for KeepAlive`; use `Duration`s instead. [#2611]
- `HttpServiceBuilder::new`; use `default` instead. [#2611]

[#2611]: https://github.com/actix/actix-web/pull/2611
[#2618]: https://github.com/actix/actix-web/pull/2618

## 3.0.0-beta.19

### Added

- Response headers can be sent as camel case using `res.head_mut().set_camel_case_headers(true)` (see the sketch after this section). [#2587]
- `ResponseHead` now implements `Clone`. [#2585]

### Changed

- Brotli (de)compression support is now provided by the `brotli` crate. [#2538]

### Removed

- `ResponseHead::extensions[_mut]()`. [#2585]
- `ResponseBuilder::extensions[_mut]()`. [#2585]

[#2538]: https://github.com/actix/actix-web/pull/2538
[#2585]: https://github.com/actix/actix-web/pull/2585
[#2587]: https://github.com/actix/actix-web/pull/2587
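A short sketch of the camel-case opt-in described in the beta.19 entry above. The `Response::new` and `head_mut()` usage is assumed; the setter call itself is quoted from the changelog entry.

```rust
use actix_http::{body::BoxBody, Response, StatusCode};

/// Build a response whose headers will be written as `Camel-Case` on the wire.
fn camel_case_response() -> Response<BoxBody> {
    let mut res = Response::new(StatusCode::OK);
    // Opt in to camel-case header serialization (beta.19 / #2587).
    res.head_mut().set_camel_case_headers(true);
    res
}
```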
## 3.0.0-beta.18

### Added

- `impl Eq` for `header::ContentEncoding`. [#2501]
- `impl Copy` for `QualityItem<T>` where `T: Copy`. [#2501]
- `Quality::ZERO` equivalent to `q=0`. [#2501]
- `QualityItem::zero` that uses `Quality::ZERO`. [#2501]
- `ContentEncoding::to_header_value()`. [#2501]

### Changed

- `Quality::MIN` is now the smallest non-zero value. [#2501]
- `QualityItem::min` semantics changed with `QualityItem::MIN`. [#2501]
- Rename `ContentEncoding::{Br => Brotli}`. [#2501]
- Rename `header::EntityTag::{weak => new_weak, strong => new_strong}`. [#2565]
- Minimum supported Rust version (MSRV) is now 1.54.

### Fixed

- `ContentEncoding::Identity` can now be parsed from a string. [#2501]
- A `Vary` header is now correctly sent along with compressed content. [#2501]

### Removed

- `ContentEncoding::Auto` variant. [#2501]
- `ContentEncoding::is_compression()`. [#2501]

[#2501]: https://github.com/actix/actix-web/pull/2501
[#2565]: https://github.com/actix/actix-web/pull/2565

## 3.0.0-beta.17

### Changed

- `HeaderMap::get_all` now returns a `std::slice::Iter`. [#2527]
- `Payload` inner fields are now named. [#2545]
- `impl Stream` for `Payload` no longer requires the `Stream` variant be `Unpin`. [#2545]
- `impl Future` for `h1::SendResponse` no longer requires the body type be `Unpin`. [#2545]
- `impl Stream` for `encoding::Decoder` no longer requires the stream type be `Unpin`. [#2545]
- Rename `PayloadStream` to `BoxedPayloadStream`. [#2545]

### Removed

- `h1::Payload::readany`. [#2545]

[#2527]: https://github.com/actix/actix-web/pull/2527
[#2545]: https://github.com/actix/actix-web/pull/2545

## 3.0.0-beta.16

### Added

- New method on `MessageBody` trait, `try_into_bytes`, with default implementation, for optimizations on body types that complete in exactly one poll. Replaces `is_complete_body` and `take_complete_body`. [#2522]

### Changed

- Rename trait `IntoHeaderPair => TryIntoHeaderPair`. [#2510]
- Rename `TryIntoHeaderPair::{try_into_header_pair => try_into_pair}`. [#2510]
- Rename trait `IntoHeaderValue => TryIntoHeaderValue`. [#2510]

### Removed

- `MessageBody::{is_complete_body,take_complete_body}`. [#2522]

[#2510]: https://github.com/actix/actix-web/pull/2510
[#2522]: https://github.com/actix/actix-web/pull/2522

## 3.0.0-beta.15

### Added

- Add timeout for canceling HTTP/2 server side connection handshake. Defaults to 5 seconds. [#2483]
- HTTP/2 handshake timeout can be configured with `ServiceConfig::client_timeout`. [#2483]
- `Response::map_into_boxed_body`. [#2468]
- `body::EitherBody` enum. [#2468]
- `body::None` struct. [#2468]
- Impl `MessageBody` for `bytestring::ByteString`. [#2468]
- `impl Clone` for `ws::HandshakeError`. [#2468]
- `#[must_use]` for `ws::Codec` to prevent subtle bugs. [#1920]
- `impl Default` for `ws::Codec`. [#1920]
- `header::QualityItem::{max, min}`. [#2486]
- `header::Quality::{MAX, MIN}`. [#2486]
- `impl Display` for `header::Quality`. [#2486]
- Connection data set through the `on_connect_ext` callbacks is now accessible only from the new `Request::conn_data()` method. [#2491]
- `Request::take_conn_data()`. [#2491]
- `Request::take_req_data()`. [#2487]
- `impl Clone` for `RequestHead`. [#2487]
- New methods on `MessageBody` trait, `is_complete_body` and `take_complete_body`, both with default implementations, for optimizations on body types that are done in exactly one poll/chunk. [#2497]
- New `boxed` method on `MessageBody` trait for wrapping body type. [#2520]

### Changed

- Rename `body::BoxBody::{from_body => new}`. [#2468]
- Body type for `Responses` returned from `Response::{new, ok, etc...}` is now `BoxBody`. [#2468]
- The `Error` associated type on `MessageBody` now requires `impl Error` (or similar). [#2468]
- Error types used in service builders now require `Into<Box<dyn std::error::Error>>`. [#2468]
- `From` implementations on error types now return a `Response`. [#2468]
- `ResponseBuilder::body(B)` now returns `Response<EitherBody<B>>`. [#2468]
- `ResponseBuilder::finish()` now returns `Response<EitherBody<()>>`. [#2468]

### Removed

- `ResponseBuilder::streaming`. [#2468]
- `impl Future` for `ResponseBuilder`. [#2468]
- Remove unnecessary `MessageBody` bound on types passed to `body::AnyBody::new`. [#2468]
- Move `body::AnyBody` to `awc`. Replaced with `EitherBody` and `BoxBody`. [#2468]
- `impl Copy` for `ws::Codec`. [#1920]
- `header::qitem` helper. Replaced with `header::QualityItem::max`. [#2486]
- `impl TryFrom` for `header::Quality`. [#2486]
- `http` module. Most everything it contained is exported at the crate root. [#2488]

[#2483]: https://github.com/actix/actix-web/pull/2483
[#2468]: https://github.com/actix/actix-web/pull/2468
[#1920]: https://github.com/actix/actix-web/pull/1920
[#2486]: https://github.com/actix/actix-web/pull/2486
[#2487]: https://github.com/actix/actix-web/pull/2487
[#2488]: https://github.com/actix/actix-web/pull/2488
[#2491]: https://github.com/actix/actix-web/pull/2491
[#2497]: https://github.com/actix/actix-web/pull/2497
[#2520]: https://github.com/actix/actix-web/pull/2520

## 3.0.0-beta.14

### Changed

- Guarantee ordering of `header::GetAll` iterator to be same as insertion order. [#2467]
- Expose `header::map` module. [#2467]
- Implement `ExactSizeIterator` and `FusedIterator` for all `HeaderMap` iterators. [#2470]
- Update `actix-tls` to `3.0.0-rc.1`. [#2474]

[#2467]: https://github.com/actix/actix-web/pull/2467
[#2470]: https://github.com/actix/actix-web/pull/2470
[#2474]: https://github.com/actix/actix-web/pull/2474

## 3.0.0-beta.13

### Added

- `body::AnyBody::empty` for quickly creating an empty body. [#2446]
- `body::AnyBody::none` for quickly creating a "none" body. [#2456]
- `impl Clone` for `body::AnyBody<S> where S: Clone`. [#2448]
- `body::AnyBody::into_boxed` for quickly converting to a type-erased, boxed body type. [#2448]

### Changed

- Rename `body::AnyBody::{Message => Body}`. [#2446]
- Rename `body::AnyBody::{from_message => new_boxed}`. [#2448]
- Rename `body::AnyBody::{from_slice => copy_from_slice}`. [#2448]
- Rename `body::{BoxAnyBody => BoxBody}`. [#2448]
- Change representation of `AnyBody` to include a type parameter in `Body` variant. Defaults to `BoxBody`. [#2448]
- `Encoder::response` now returns `AnyBody<Encoder<B>>`. [#2448]

### Removed

- `body::AnyBody::Empty`; an empty body can now only be represented as a zero-length `Bytes` variant. [#2446]
- `body::BodySize::Empty`; an empty body can now only be represented as a `Sized(0)` variant. [#2446]
- `EncoderError::Boxed`; it is no longer required. [#2446]
- `body::ResponseBody`; its function is replaced by the new `body::AnyBody` enum. [#2446]

[#2446]: https://github.com/actix/actix-web/pull/2446
[#2448]: https://github.com/actix/actix-web/pull/2448
[#2456]: https://github.com/actix/actix-web/pull/2456

## 3.0.0-beta.12

### Changed

- Update `actix-server` to `2.0.0-beta.9`. [#2442]

### Removed

- `client` module. [#2425]
- `trust-dns` feature. [#2425]

[#2425]: https://github.com/actix/actix-web/pull/2425
[#2442]: https://github.com/actix/actix-web/pull/2442

## 3.0.0-beta.11

### Changed

- Updated rustls to v0.20. [#2414]
- Minimum supported Rust version (MSRV) is now 1.52.

[#2414]: https://github.com/actix/actix-web/pull/2414

## 3.0.0-beta.10

### Changed

- `ContentEncoding` is now marked `#[non_exhaustive]`. [#2377]
- Minimum supported Rust version (MSRV) is now 1.51.

### Fixed

- Remove slice creation pointing to potential uninitialized data on h1 encoder. [#2364]
- Remove `Into` bound on `Encoder` body types. [#2375]
- Fix quality parse error in Accept-Encoding header. [#2344]

[#2364]: https://github.com/actix/actix-web/pull/2364
[#2375]: https://github.com/actix/actix-web/pull/2375
[#2344]: https://github.com/actix/actix-web/pull/2344
[#2377]: https://github.com/actix/actix-web/pull/2377

## 3.0.0-beta.9

### Fixed

- Potential HTTP request smuggling vulnerabilities. [RUSTSEC-2021-0081](https://github.com/rustsec/advisory-db/pull/977)

## 3.0.0-beta.8

### Changed

- Change compression algorithm features flags. [#2250]
### Removed

- `downcast` and `downcast_get_type_id` macros. [#2291]

[#2291]: https://github.com/actix/actix-web/pull/2291
[#2250]: https://github.com/actix/actix-web/pull/2250

## 3.0.0-beta.7

### Added

- Alias `body::Body` as `body::AnyBody`. [#2215]
- `BoxAnyBody`: a boxed message body with boxed errors. [#2183]
- Re-export `http` crate's `Error` type as `error::HttpError`. [#2171]
- Re-export `StatusCode`, `Method`, `Version` and `Uri` at the crate root. [#2171]
- Re-export `ContentEncoding` and `ConnectionType` at the crate root. [#2171]
- `Response::into_body` that consumes response and returns body type. [#2201]
- `impl Default` for `Response`. [#2201]
- Add zstd support for `ContentEncoding`. [#2244]

### Changed

- The `MessageBody` trait now has an associated `Error` type. [#2183]
- All error trait bounds in server service builders have changed from `Into<Error>` to `Into<Box<dyn std::error::Error>>`. [#2253]
- All error trait bounds in message body and stream impls changed from `Into<Error>` to `Into<Box<dyn std::error::Error>>`. [#2253]
- Places in `Response` where `ResponseBody` was received or returned now simply use `B`. [#2201]
- `header` mod is now public. [#2171]
- `uri` mod is now public. [#2171]
- Update `language-tags` to `0.3`.
- Reduce the level from `error` to `debug` for the log line that is emitted when a `500 Internal Server Error` is built using `HttpResponse::from_error`. [#2201]
- `ResponseBuilder::message_body` now returns a `Result`. [#2201]
- Remove `Unpin` bound on `ResponseBuilder::streaming`. [#2253]
- `HttpServer::{listen_rustls(), bind_rustls()}` now honor the ALPN protocols in the configuration parameter. [#2226]

### Removed

- Stop re-exporting `http` crate's `HeaderMap` types in addition to ours. [#2171]
- Down-casting for `MessageBody` types. [#2183]
- `error::Result` alias. [#2201]
- Error field from `Response` and `Response::error`. [#2205]
- `impl Future` for `Response`. [#2201]
- `Response::take_body` and old `Response::into_body` method that casted body type. [#2201]
- `InternalError` and all the error types it constructed. [#2215]
- Conversion (`impl Into`) of `Response` and `ResponseBuilder` to `Error`. [#2215]

[#2171]: https://github.com/actix/actix-web/pull/2171
[#2183]: https://github.com/actix/actix-web/pull/2183
[#2196]: https://github.com/actix/actix-web/pull/2196
[#2201]: https://github.com/actix/actix-web/pull/2201
[#2205]: https://github.com/actix/actix-web/pull/2205
[#2215]: https://github.com/actix/actix-web/pull/2215
[#2253]: https://github.com/actix/actix-web/pull/2253
[#2244]: https://github.com/actix/actix-web/pull/2244

## 3.0.0-beta.6

### Added

- `impl MessageBody for Pin<Box<T>>`. [#2152]
- `Response::{ok, bad_request, not_found, internal_server_error}`. [#2159]
- Helper `body::to_bytes` for async collecting message body into Bytes. [#2158]

### Changed

- The type parameter of `Response` no longer has a default. [#2152]
- The `Message` variant of `body::Body` is now `Pin<Box<dyn MessageBody>>`. [#2152]
- `BodyStream` and `SizedStream` are no longer restricted to Unpin types. [#2152]
- Error enum types are marked `#[non_exhaustive]`. [#2161]

### Removed

- `cookies` feature flag. [#2065]
- Top-level `cookies` mod (re-export). [#2065]
- `HttpMessage` trait loses the `cookies` and `cookie` methods. [#2065]
- `impl ResponseError for CookieParseError`. [#2065]
- Deprecated methods on `ResponseBuilder`: `if_true`, `if_some`. [#2148]
- `ResponseBuilder::json`. [#2148]
- `ResponseBuilder::{set_header, header}`. [#2148]
- `impl From for Body`. [#2148]
- `Response::build_from`. [#2159]
- Most of the status code builders on `Response`. [#2159]

[#2065]: https://github.com/actix/actix-web/pull/2065
[#2148]: https://github.com/actix/actix-web/pull/2148
[#2152]: https://github.com/actix/actix-web/pull/2152
[#2159]: https://github.com/actix/actix-web/pull/2159
[#2158]: https://github.com/actix/actix-web/pull/2158
[#2161]: https://github.com/actix/actix-web/pull/2161

## 3.0.0-beta.5

### Added

- `client::Connector::handshake_timeout` method for customizing TLS connection handshake timeout. [#2081]
- `client::ConnectorService` as `client::Connector::finish` method's return type. [#2081]
- `client::ConnectionIo` trait alias. [#2081]

### Changed

- `client::Connector` type now only has one generic type for `actix_service::Service`. [#2063]

### Removed

- Common typed HTTP headers were moved to actix-web. [#2094]
- `ResponseError` impl for `actix_utils::timeout::TimeoutError`. [#2127]

[#2063]: https://github.com/actix/actix-web/pull/2063
[#2081]: https://github.com/actix/actix-web/pull/2081
[#2094]: https://github.com/actix/actix-web/pull/2094
[#2127]: https://github.com/actix/actix-web/pull/2127

## 3.0.0-beta.4

### Changed

- Feature `cookies` is now optional and disabled by default. [#1981]
- `ws::hash_key` now returns array. [#2035]
- `ResponseBuilder::json` now takes `impl Serialize`. [#2052]

### Removed

- Re-export of `futures_channel::oneshot::Canceled` is removed from `error` mod. [#1994]
- `ResponseError` impl for `futures_channel::oneshot::Canceled` is removed. [#1994]

[#1981]: https://github.com/actix/actix-web/pull/1981
[#1994]: https://github.com/actix/actix-web/pull/1994
[#2035]: https://github.com/actix/actix-web/pull/2035
[#2052]: https://github.com/actix/actix-web/pull/2052

## 3.0.0-beta.3

- No notable changes.

## 3.0.0-beta.2

### Added

- `TryIntoHeaderPair` trait that allows using typed and untyped headers in the same methods. [#1869]
- `ResponseBuilder::insert_header` method which allows using typed headers. [#1869]
- `ResponseBuilder::append_header` method which allows using typed headers. [#1869]
- `TestRequest::insert_header` method which allows using typed headers. [#1869]
- `ContentEncoding` implements all necessary header traits. [#1912]
- `HeaderMap::len_keys` has the behavior of the old `len` method. [#1964]
- `HeaderMap::drain` as an efficient draining iterator. [#1964]
- Implement `IntoIterator` for owned `HeaderMap`. [#1964]
- `trust-dns` optional feature to enable `trust-dns-resolver` as client dns resolver. [#1969]

### Changed

- `ResponseBuilder::content_type` now takes an `impl TryIntoHeaderValue` to support using typed `mime` types. [#1894]
- Renamed `TryIntoHeaderValue::{try_into => try_into_value}` to avoid ambiguity with std `TryInto` trait. [#1894]
- `Extensions::insert` returns Option of replaced item. [#1904]
- Remove `HttpResponseBuilder::json2()`. [#1903]
- Enable `HttpResponseBuilder::json()` to receive data by value and reference. [#1903]
- `client::error::ConnectError` Resolver variant contains `Box` type. [#1905]
- `client::ConnectorConfig` default timeout changed to 5 seconds. [#1905]
- Simplify `BlockingError` type to a unit struct. It's now only triggered when blocking thread pool is dead. [#1957]
- `HeaderMap::len` now returns number of values instead of number of keys. [#1964]
- `HeaderMap::insert` now returns iterator of removed values. [#1964]
- `HeaderMap::remove` now returns iterator of removed values. [#1964]

### Removed

- `ResponseBuilder::set`; use `ResponseBuilder::insert_header`. [#1869]
- `ResponseBuilder::set_header`; use `ResponseBuilder::insert_header`. [#1869]
- `ResponseBuilder::header`; use `ResponseBuilder::append_header`. [#1869]
- `TestRequest::with_hdr`; use `TestRequest::default().insert_header()`. [#1869]
- `TestRequest::with_header`; use `TestRequest::default().insert_header()`. [#1869]
- `actors` optional feature. [#1969]
- `ResponseError` impl for `actix::MailboxError`. [#1969]

### Documentation

- Vastly improve docs and add examples for `HeaderMap`. [#1964]

[#1869]: https://github.com/actix/actix-web/pull/1869
[#1894]: https://github.com/actix/actix-web/pull/1894
[#1903]: https://github.com/actix/actix-web/pull/1903
[#1904]: https://github.com/actix/actix-web/pull/1904
[#1905]: https://github.com/actix/actix-web/pull/1905
[#1912]: https://github.com/actix/actix-web/pull/1912
[#1957]: https://github.com/actix/actix-web/pull/1957
[#1964]: https://github.com/actix/actix-web/pull/1964
[#1969]: https://github.com/actix/actix-web/pull/1969

## 3.0.0-beta.1

### Added

- Add `Http3` to `Protocol` enum for future compatibility and also mark `#[non_exhaustive]`.

### Changed

- Update `actix-*` dependencies to tokio `1.0` based versions. [#1813]
- Bumped `rand` to `0.8`.
- Update `bytes` to `1.0`. [#1813]
- Update `h2` to `0.3`. [#1813]
- The `ws::Message::Text` enum variant now contains a `bytestring::ByteString` (see the sketch after this section). [#1864]

### Removed

- Deprecated `on_connect` methods have been removed. Prefer the new `on_connect_ext` technique. [#1857]
- Remove `ResponseError` impl for `actix::actors::resolver::ResolverError` due to deprecation of the resolver actor. [#1813]
- Remove `ConnectError::SslHandshakeError` and re-export of `HandshakeError` due to the removal of this type from the `tokio-openssl` crate. OpenSSL handshake errors are now returned as `ConnectError::SslError`. [#1813]
- Remove `actix-threadpool` dependency. Use `actix_rt::task::spawn_blocking`. Due to this change the `actix_threadpool::BlockingError` type is moved into the `actix_http::error` module. [#1878]

[#1813]: https://github.com/actix/actix-web/pull/1813
[#1857]: https://github.com/actix/actix-web/pull/1857
[#1864]: https://github.com/actix/actix-web/pull/1864
[#1878]: https://github.com/actix/actix-web/pull/1878
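A minimal sketch of the `ws::Message::Text` change noted in the beta.1 entry above. It assumes the `ws` crate feature is enabled and that `bytestring` is available as a direct dependency; it is illustrative only.

```rust
use actix_http::ws;
use bytestring::ByteString;

/// Text frames now carry a cheaply-cloneable `ByteString` instead of a `String`.
fn text_frame() -> ws::Message {
    ws::Message::Text(ByteString::from_static("ping"))
}
```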
## 2.2.2

### Changed

- Migrate to `brotli` crate. [ad7e3c06]

[ad7e3c06]: https://github.com/actix/actix-web/commit/ad7e3c06

## 2.2.1

### Fixed

- Potential HTTP request smuggling vulnerabilities. [RUSTSEC-2021-0081](https://github.com/rustsec/advisory-db/pull/977)

## 2.2.0

### Added

- HttpResponse builders for 1xx status codes. [#1768]
- `Accept::mime_precedence` and `Accept::mime_preference`. [#1793]
- `TryFrom<u16>` and `TryFrom<f32>` for `http::header::Quality`. [#1797]

### Fixed

- Started dropping `transfer-encoding: chunked` and `Content-Length` for 1XX and 204 responses. [#1767]

### Changed

- Upgrade `serde_urlencoded` to `0.7`. [#1773]

[#1773]: https://github.com/actix/actix-web/pull/1773
[#1767]: https://github.com/actix/actix-web/pull/1767
[#1768]: https://github.com/actix/actix-web/pull/1768
[#1793]: https://github.com/actix/actix-web/pull/1793
[#1797]: https://github.com/actix/actix-web/pull/1797

## 2.1.0

### Added

- Added more flexible `on_connect_ext` methods for on-connect handling. [#1754]

### Changed

- Upgrade `base64` to `0.13`. [#1744]
- Upgrade `pin-project` to `1.0`. [#1733]
- Deprecate `ResponseBuilder::{if_some, if_true}`. [#1760]

[#1760]: https://github.com/actix/actix-web/pull/1760
[#1754]: https://github.com/actix/actix-web/pull/1754
[#1733]: https://github.com/actix/actix-web/pull/1733
[#1744]: https://github.com/actix/actix-web/pull/1744

## 2.0.0

- No significant changes from `2.0.0-beta.4`.

## 2.0.0-beta.4

### Changed

- Update actix-codec and actix-utils dependencies.
- Update actix-connect and actix-tls dependencies.

## 2.0.0-beta.3

### Fixed

- Memory leak of `client::pool::ConnectorPoolSupport`. [#1626]

[#1626]: https://github.com/actix/actix-web/pull/1626

## 2.0.0-beta.2

### Fixed

- Potential UB in h1 decoder using uninitialized memory. [#1614]

### Changed

- Fix illegal chunked encoding. [#1615]

[#1614]: https://github.com/actix/actix-web/pull/1614
[#1615]: https://github.com/actix/actix-web/pull/1615

## 2.0.0-beta.1

### Changed

- Migrate cookie handling to `cookie` crate. [#1558]
- Update `sha-1` to 0.9. [#1586]
- Fix leak in client pool. [#1580]
- MSRV is now 1.41.1.

[#1558]: https://github.com/actix/actix-web/pull/1558
[#1586]: https://github.com/actix/actix-web/pull/1586
[#1580]: https://github.com/actix/actix-web/pull/1580

## 2.0.0-alpha.4

### Changed

- Bump minimum supported Rust version to 1.40.
- The `content_length` function is removed; Content-Length can be set by calling the `no_chunking` function. [#1439]
- `BodySize::Sized64` variant has been removed. `BodySize::Sized` now receives a `u64` instead of a `usize`.
- Update `base64` dependency to 0.12.

### Fixed

- Support parsing of `SameSite=None`. [#1503]

[#1439]: https://github.com/actix/actix-web/pull/1439
[#1503]: https://github.com/actix/actix-web/pull/1503

## 2.0.0-alpha.3

### Fixed

- Correct spelling of ConnectError::Unresolved. [#1487]
- Fix a mistake in the encoding of websocket continuation messages wherein Item::FirstText and Item::FirstBinary are each encoded as the other.

### Changed

- Implement `std::error::Error` for our custom errors. [#1422]
- Remove `failure` support for `ResponseError` since that crate will be deprecated in the near future.

[#1422]: https://github.com/actix/actix-web/pull/1422
[#1487]: https://github.com/actix/actix-web/pull/1487

## 2.0.0-alpha.2

### Changed

- Update `actix-connect` and `actix-tls` dependency to 2.0.0-alpha.1. [#1395]
- Change default initial window size and connection window size for HTTP2 to 2MB and 1MB respectively to improve download speed for awc when downloading large objects. [#1394]
- `client::Connector` accepts `initial_window_size` and `initial_connection_window_size` HTTP2 configuration. [#1394]
- `client::Connector` allows setting `max_http_version` to limit the HTTP version used. [#1394]

[#1394]: https://github.com/actix/actix-web/pull/1394
[#1395]: https://github.com/actix/actix-web/pull/1395

## 2.0.0-alpha.1

### Changed

- Update the `time` dependency to 0.2.7.
- Moved actors messages support from actix crate, enabled with feature `actors`.
- Breaking change: trait `MessageBody` requires `Unpin` and accepts `Pin<&mut Self>` instead of `&mut self` in `poll_next()`.
- `MessageBody` is not implemented for `&'static [u8]` anymore.

### Fixed

- Allow `SameSite=None` cookies to be sent in a response.

## 1.0.1

### Fixed

- Poll upgrade service's readiness from HTTP service handlers
- Replace brotli with brotli2 #1224

## 1.0.0

### Added

- Add websockets continuation frame support

### Changed

- Replace `flate2-xxx` features with `compress`

## 1.0.0-alpha.5

### Fixed

- Check `Upgrade` service readiness before calling it
- Fix buffer remaining capacity calculation

### Changed

- Websockets: Ping and Pong should have binary data #1049

## 1.0.0-alpha.4

### Added

- Add impl ResponseBuilder for Error

### Changed

- Use rust based brotli compression library

## 1.0.0-alpha.3

### Changed

- Migrate to tokio 0.2
- Migrate to `std::future`

## 0.2.11

### Added

- Add support for serde_json::Value to be passed as argument to ResponseBuilder.body()
- Add an additional `filename*` param in the `Content-Disposition` header of `actix_files::NamedFile` to be more compatible. (#1151)
- Allow to use `std::convert::Infallible` as `actix_http::error::Error`

### Fixed

- To be compatible with non-English error responses, `ResponseError` rendered with `text/plain; charset=utf-8` header [#1118]

[#1878]: https://github.com/actix/actix-web/pull/1878

## 0.2.10

### Added

- Add support for sending HTTP requests with `Rc<RequestHead>` in addition to sending HTTP requests with `RequestHead`

### Fixed

- h2 will use error response #1080
- on_connect result isn't added to request extensions for http2 requests #1009

## 0.2.9

### Changed

- Dropped the `byteorder` dependency in favor of a stdlib implementation
- Update percent-encoding to 2.1
- Update serde_urlencoded to 0.6.1

### Fixed

- Fixed a panic in the HTTP2 handshake in client HTTP requests (#1031)

## 0.2.8

### Added

- Add `rustls` support
- Add `Clone` impl for `HeaderMap`

### Fixed

- awc client panic #1016
- Invalid response with compression middleware enabled, but compression-related features disabled #997

## 0.2.7

### Added

- Add support for downcasting response errors #986

## 0.2.6

### Changed

- Replace `ClonableService` with local copy
- Upgrade `rand` dependency version to 0.7

## 0.2.5

### Added

- Add `on-connect` callback, `HttpServiceBuilder::on_connect()` #946

### Changed

- Use `encoding_rs` crate instead of unmaintained `encoding` crate
- Add `Copy` and `Clone` impls for `ws::Codec`

## 0.2.4

### Fixed

- Do not compress NoContent (204) responses #918

## 0.2.3

### Added

- Debug impl for ResponseBuilder
- From SizedStream and BodyStream for Body

### Changed

- SizedStream uses u64

## 0.2.2

### Fixed

- Parse incoming stream before closing stream on disconnect #868

## 0.2.1

### Fixed

- Handle socket read disconnect

## 0.2.0

### Changed

- Update actix-service to 0.4
- Expect and upgrade services accept `ServerConfig` config.
### Deleted

- `OneRequest` service

## 0.1.5

### Fixed

- Clean up response extensions in response pool #817

## 0.1.4

### Added

- Allow to render h1 request headers in `Camel-Case`

### Fixed

- Read until eof for http/1.0 responses #771

## 0.1.3

### Fixed

- Fix http client pool management
- Fix http client wait queue management #794

## 0.1.2

### Fixed

- Fix BorrowMutError panic in client connector #793

## 0.1.1

### Changed

- Cookie::max_age() accepts value in seconds
- Cookie::max_age_time() accepts value in time::Duration
- Allow to specify server address for client connector

## 0.1.0

### Added

- Expose peer addr via `Request::peer_addr()` and `RequestHead::peer_addr`

### Changed

- `actix_http::encoding` always available
- use trust-dns-resolver 0.11.0

## 0.1.0-alpha.5

### Added

- Allow to use custom service for upgrade requests
- Added `h1::SendResponse` future.

### Changed

- MessageBody::length() renamed to MessageBody::size() for consistency
- ws handshake verification functions take RequestHead instead of Request

## 0.1.0-alpha.4

### Added

- Allow to use custom `Expect` handler
- Add minimal `std::error::Error` impl for `Error`

### Changed

- Export IntoHeaderValue
- Render error and return as response body
- Use thread pool for response body compression

### Deleted

- Removed PayloadBuffer

## 0.1.0-alpha.3

### Added

- Warn when an unsealed private cookie isn't valid UTF-8

### Fixed

- Rust 1.31.0 compatibility
- Preallocate read buffer for h1 codec
- Detect socket disconnection during protocol selection

## 0.1.0-alpha.2

### Added

- Added ws::Message::Nop, no-op websockets message

### Changed

- Do not use thread pool for decompression if chunk size is smaller than 2048.

## 0.1.0-alpha.1

- Initial impl

actix-http-3.9.0/Cargo.lock

# This file is automatically @generated by Cargo.
# It is not intended for manual editing.
version = 3 [[package]] name = "actix-codec" version = "0.5.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5f7b0a21988c1bf877cf4759ef5ddaac04c1c9fe808c9142ecb78ba97d97a28a" dependencies = [ "bitflags", "bytes", "futures-core", "futures-sink", "memchr", "pin-project-lite", "tokio", "tokio-util", "tracing", ] [[package]] name = "actix-http" version = "3.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3ae682f693a9cd7b058f2b0b5d9a6d7728a8555779bedbbc35dd88528611d020" dependencies = [ "actix-codec", "actix-rt", "actix-service", "actix-utils", "ahash", "base64", "bitflags", "brotli", "bytes", "bytestring", "derive_more", "encoding_rs", "flate2", "futures-core", "h2", "http 0.2.12", "httparse", "httpdate", "itoa", "language-tags", "local-channel", "mime", "percent-encoding", "pin-project-lite", "rand", "sha1", "smallvec", "tokio", "tokio-util", "tracing", "zstd", ] [[package]] name = "actix-http" version = "3.9.0" dependencies = [ "actix-codec", "actix-http-test", "actix-rt", "actix-server", "actix-service", "actix-tls", "actix-utils", "actix-web", "ahash", "async-stream", "base64", "bitflags", "brotli", "bytes", "bytestring", "criterion", "derive_more", "divan", "encoding_rs", "env_logger", "flate2", "futures-core", "futures-util", "h2", "http 0.2.12", "httparse", "httpdate", "itoa", "language-tags", "local-channel", "memchr", "mime", "once_cell", "openssl", "percent-encoding", "pin-project-lite", "rand", "rcgen", "regex", "rustls 0.23.12", "rustls-pemfile", "rustversion", "serde", "serde_json", "sha1", "smallvec", "static_assertions", "tokio", "tokio-util", "tracing", "zstd", ] [[package]] name = "actix-http-test" version = "3.2.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "061d27c2a6fea968fdaca0961ff429d23a4ec878c4f68f5d08626663ade69c80" dependencies = [ "actix-codec", "actix-rt", "actix-server", "actix-service", "actix-tls", "actix-utils", "awc", "bytes", "futures-core", "http 0.2.12", "log", "openssl", "serde", "serde_json", "serde_urlencoded", "slab", "socket2", "tokio", ] [[package]] name = "actix-macros" version = "0.2.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e01ed3140b2f8d422c68afa1ed2e85d996ea619c988ac834d255db32138655cb" dependencies = [ "quote", "syn", ] [[package]] name = "actix-router" version = "0.5.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "13d324164c51f63867b57e73ba5936ea151b8a41a1d23d1031eeb9f70d0236f8" dependencies = [ "bytestring", "cfg-if", "http 0.2.12", "regex", "regex-lite", "serde", "tracing", ] [[package]] name = "actix-rt" version = "2.10.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "24eda4e2a6e042aa4e55ac438a2ae052d3b5da0ecf83d7411e1a368946925208" dependencies = [ "actix-macros", "futures-core", "tokio", ] [[package]] name = "actix-server" version = "2.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7ca2549781d8dd6d75c40cf6b6051260a2cc2f3c62343d761a969a0640646894" dependencies = [ "actix-rt", "actix-service", "actix-utils", "futures-core", "futures-util", "mio", "socket2", "tokio", "tracing", ] [[package]] name = "actix-service" version = "2.0.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3b894941f818cfdc7ccc4b9e60fa7e53b5042a2e8567270f9147d5591893373a" dependencies = [ "futures-core", "paste", "pin-project-lite", ] [[package]] name = "actix-tls" version = "3.4.0" source = 
"registry+https://github.com/rust-lang/crates.io-index" checksum = "ac453898d866cdbecdbc2334fe1738c747b4eba14a677261f2b768ba05329389" dependencies = [ "actix-rt", "actix-service", "actix-utils", "futures-core", "http 0.2.12", "http 1.1.0", "impl-more", "openssl", "pin-project-lite", "rustls-pki-types", "tokio", "tokio-openssl", "tokio-rustls 0.23.4", "tokio-rustls 0.24.1", "tokio-rustls 0.25.0", "tokio-rustls 0.26.0", "tokio-util", "tracing", "webpki-roots 0.22.6", "webpki-roots 0.25.4", "webpki-roots 0.26.3", ] [[package]] name = "actix-utils" version = "3.0.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "88a1dcdff1466e3c2488e1cb5c36a71822750ad43839937f85d2f4d9f8b705d8" dependencies = [ "local-waker", "pin-project-lite", ] [[package]] name = "actix-web" version = "4.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1988c02af8d2b718c05bc4aeb6a66395b7cdf32858c2c71131e5637a8c05a9ff" dependencies = [ "actix-codec", "actix-http 3.8.0", "actix-macros", "actix-router", "actix-rt", "actix-server", "actix-service", "actix-utils", "actix-web-codegen", "ahash", "bytes", "bytestring", "cfg-if", "cookie", "derive_more", "encoding_rs", "futures-core", "futures-util", "itoa", "language-tags", "log", "mime", "once_cell", "pin-project-lite", "regex", "regex-lite", "serde", "serde_json", "serde_urlencoded", "smallvec", "socket2", "time", "url", ] [[package]] name = "actix-web-codegen" version = "4.3.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f591380e2e68490b5dfaf1dd1aa0ebe78d84ba7067078512b4ea6e4492d622b8" dependencies = [ "actix-router", "proc-macro2", "quote", "syn", ] [[package]] name = "addr2line" version = "0.22.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6e4503c46a5c0c7844e948c9a4d6acd9f50cccb4de1c48eb9e291ea17470c678" dependencies = [ "gimli", ] [[package]] name = "adler" version = "1.0.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f26201604c87b1e01bd3d98f8d5d9a8fcbb815e8cedb41ffccbeb4bf593a35fe" [[package]] name = "ahash" version = "0.8.11" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e89da841a80418a9b391ebaea17f5c112ffaaa96f621d2c285b5174da76b9011" dependencies = [ "cfg-if", "getrandom", "once_cell", "version_check", "zerocopy", ] [[package]] name = "aho-corasick" version = "1.1.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8e60d3430d3a69478ad0993f19238d2df97c507009a52b3c10addcd7f6bcb916" dependencies = [ "memchr", ] [[package]] name = "alloc-no-stdlib" version = "2.0.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "cc7bb162ec39d46ab1ca8c77bf72e890535becd1751bb45f64c597edb4c8c6b3" [[package]] name = "alloc-stdlib" version = "0.2.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "94fb8275041c72129eb51b7d0322c29b8387a0386127718b096429201a5d6ece" dependencies = [ "alloc-no-stdlib", ] [[package]] name = "anes" version = "0.1.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "4b46cbb362ab8752921c97e041f5e366ee6297bd428a31275b9fcf1e380f7299" [[package]] name = "anstream" version = "0.6.15" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "64e15c1ab1f89faffbf04a634d5e1962e9074f2741eef6d97f3c4e322426d526" dependencies = [ "anstyle", "anstyle-parse", "anstyle-query", "anstyle-wincon", "colorchoice", "is_terminal_polyfill", "utf8parse", ] [[package]] name = 
"anstyle" version = "1.0.8" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1bec1de6f59aedf83baf9ff929c98f2ad654b97c9510f4e70cf6f661d49fd5b1" [[package]] name = "anstyle-parse" version = "0.2.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "eb47de1e80c2b463c735db5b217a0ddc39d612e7ac9e2e96a5aed1f57616c1cb" dependencies = [ "utf8parse", ] [[package]] name = "anstyle-query" version = "1.1.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6d36fc52c7f6c869915e99412912f22093507da8d9e942ceaf66fe4b7c14422a" dependencies = [ "windows-sys 0.52.0", ] [[package]] name = "anstyle-wincon" version = "3.0.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5bf74e1b6e971609db8ca7a9ce79fd5768ab6ae46441c572e46cf596f59e57f8" dependencies = [ "anstyle", "windows-sys 0.52.0", ] [[package]] name = "async-stream" version = "0.3.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "cd56dd203fef61ac097dd65721a419ddccb106b2d2b70ba60a6b529f03961a51" dependencies = [ "async-stream-impl", "futures-core", "pin-project-lite", ] [[package]] name = "async-stream-impl" version = "0.3.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "16e62a023e7c117e27523144c5d2459f4397fcc3cab0085af8e2224f643a0193" dependencies = [ "proc-macro2", "quote", "syn", ] [[package]] name = "autocfg" version = "1.3.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0c4b4d0bd25bd0b74681c0ad21497610ce1b7c91b1022cd21c80c6fbdd9476b0" [[package]] name = "awc" version = "3.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "fe6b67e44fb95d1dc9467e3930383e115f9b4ed60ca689db41409284e967a12d" dependencies = [ "actix-codec", "actix-http 3.8.0", "actix-rt", "actix-service", "actix-tls", "actix-utils", "base64", "bytes", "cfg-if", "derive_more", "futures-core", "futures-util", "h2", "http 0.2.12", "itoa", "log", "mime", "openssl", "percent-encoding", "pin-project-lite", "rand", "serde", "serde_json", "serde_urlencoded", "tokio", ] [[package]] name = "aws-lc-rs" version = "1.8.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "4ae74d9bd0a7530e8afd1770739ad34b36838829d6ad61818f9230f683f5ad77" dependencies = [ "aws-lc-sys", "mirai-annotations", "paste", "zeroize", ] [[package]] name = "aws-lc-sys" version = "0.20.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0f0e249228c6ad2d240c2dc94b714d711629d52bad946075d8e9b2f5391f0703" dependencies = [ "bindgen", "cc", "cmake", "dunce", "fs_extra", "libc", "paste", ] [[package]] name = "backtrace" version = "0.3.73" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5cc23269a4f8976d0a4d2e7109211a419fe30e8d88d677cd60b6bc79c5732e0a" dependencies = [ "addr2line", "cc", "cfg-if", "libc", "miniz_oxide", "object", "rustc-demangle", ] [[package]] name = "base64" version = "0.22.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "72b3254f16251a8381aa12e40e3c4d2f0199f8c6508fbecb9d91f575e0fbb8c6" [[package]] name = "bindgen" version = "0.69.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a00dc851838a2120612785d195287475a3ac45514741da670b735818822129a0" dependencies = [ "bitflags", "cexpr", "clang-sys", "itertools 0.12.1", "lazy_static", "lazycell", "log", "prettyplease", "proc-macro2", "quote", "regex", "rustc-hash", "shlex", "syn", "which", ] [[package]] name = 
"bitflags" version = "2.6.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b048fb63fd8b5923fc5aa7b340d8e156aec7ec02f0c78fa8a6ddc2613f6f71de" [[package]] name = "block-buffer" version = "0.10.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3078c7629b62d3f0439517fa394996acacc5cbc91c5a20d8c658e77abd503a71" dependencies = [ "generic-array", ] [[package]] name = "brotli" version = "6.0.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "74f7971dbd9326d58187408ab83117d8ac1bb9c17b085fdacd1cf2f598719b6b" dependencies = [ "alloc-no-stdlib", "alloc-stdlib", "brotli-decompressor", ] [[package]] name = "brotli-decompressor" version = "4.0.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9a45bd2e4095a8b518033b128020dd4a55aab1c0a381ba4404a472630f4bc362" dependencies = [ "alloc-no-stdlib", "alloc-stdlib", ] [[package]] name = "bumpalo" version = "3.16.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "79296716171880943b8470b5f8d03aa55eb2e645a4874bdbb28adb49162e012c" [[package]] name = "byteorder" version = "1.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1fd0f2584146f6f2ef48085050886acf353beff7305ebd1ae69500e27c67f64b" [[package]] name = "bytes" version = "1.7.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8318a53db07bb3f8dca91a600466bdb3f2eaadeedfdbcf02e1accbad9271ba50" [[package]] name = "bytestring" version = "1.3.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "74d80203ea6b29df88012294f62733de21cfeab47f17b41af3a38bc30a03ee72" dependencies = [ "bytes", ] [[package]] name = "cast" version = "0.3.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "37b2a672a2cb129a2e41c10b1224bb368f9f37a2b16b612598138befd7b37eb5" [[package]] name = "cc" version = "1.1.8" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "504bdec147f2cc13c8b57ed9401fd8a147cc66b67ad5cb241394244f2c947549" dependencies = [ "jobserver", "libc", ] [[package]] name = "cexpr" version = "0.6.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6fac387a98bb7c37292057cffc56d62ecb629900026402633ae9160df93a8766" dependencies = [ "nom", ] [[package]] name = "cfg-if" version = "1.0.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "baf1de4339761588bc0619e3cbc0120ee582ebb74b53b4efbf79117bd2da40fd" [[package]] name = "ciborium" version = "0.2.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "42e69ffd6f0917f5c029256a24d0161db17cea3997d185db0d35926308770f0e" dependencies = [ "ciborium-io", "ciborium-ll", "serde", ] [[package]] name = "ciborium-io" version = "0.2.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "05afea1e0a06c9be33d539b876f1ce3692f4afea2cb41f740e7743225ed1c757" [[package]] name = "ciborium-ll" version = "0.2.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "57663b653d948a338bfb3eeba9bb2fd5fcfaecb9e199e87e1eda4d9e8b240fd9" dependencies = [ "ciborium-io", "half", ] [[package]] name = "clang-sys" version = "1.8.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0b023947811758c97c59bf9d1c188fd619ad4718dcaa767947df1cadb14f39f4" dependencies = [ "glob", "libc", "libloading", ] [[package]] name = "clap" version = "4.5.15" source = 
"registry+https://github.com/rust-lang/crates.io-index" checksum = "11d8838454fda655dafd3accb2b6e2bea645b9e4078abe84a22ceb947235c5cc" dependencies = [ "clap_builder", ] [[package]] name = "clap_builder" version = "4.5.15" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "216aec2b177652e3846684cbfe25c9964d18ec45234f0f5da5157b207ed1aab6" dependencies = [ "anstyle", "clap_lex", "terminal_size", ] [[package]] name = "clap_lex" version = "0.7.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1462739cb27611015575c0c11df5df7601141071f07518d56fcc1be504cbec97" [[package]] name = "cmake" version = "0.1.50" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a31c789563b815f77f4250caee12365734369f942439b7defd71e18a48197130" dependencies = [ "cc", ] [[package]] name = "colorchoice" version = "1.0.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d3fd119d74b830634cea2a0f58bbd0d54540518a14397557951e79340abc28c0" [[package]] name = "condtype" version = "1.3.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "baf0a07a401f374238ab8e2f11a104d2851bf9ce711ec69804834de8af45c7af" [[package]] name = "convert_case" version = "0.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6245d59a3e82a7fc217c5828a6692dbc6dfb63a0c8c90495621f7b9d79704a0e" [[package]] name = "cookie" version = "0.16.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e859cd57d0710d9e06c381b550c06e76992472a8c6d527aecd2fc673dcc231fb" dependencies = [ "percent-encoding", "time", "version_check", ] [[package]] name = "cpufeatures" version = "0.2.12" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "53fe5e26ff1b7aef8bca9c6080520cfb8d9333c7568e1829cef191a9723e5504" dependencies = [ "libc", ] [[package]] name = "crc32fast" version = "1.4.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a97769d94ddab943e4510d138150169a2758b5ef3eb191a9ee688de3e23ef7b3" dependencies = [ "cfg-if", ] [[package]] name = "criterion" version = "0.5.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f2b12d017a929603d80db1831cd3a24082f8137ce19c69e6447f54f5fc8d692f" dependencies = [ "anes", "cast", "ciborium", "clap", "criterion-plot", "is-terminal", "itertools 0.10.5", "num-traits", "once_cell", "oorandom", "plotters", "rayon", "regex", "serde", "serde_derive", "serde_json", "tinytemplate", "walkdir", ] [[package]] name = "criterion-plot" version = "0.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6b50826342786a51a89e2da3a28f1c32b06e387201bc2d19791f622c673706b1" dependencies = [ "cast", "itertools 0.10.5", ] [[package]] name = "crossbeam-deque" version = "0.8.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "613f8cc01fe9cf1a3eb3d7f488fd2fa8388403e97039e2f73692932e291a770d" dependencies = [ "crossbeam-epoch", "crossbeam-utils", ] [[package]] name = "crossbeam-epoch" version = "0.9.18" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5b82ac4a3c2ca9c3460964f020e1402edd5753411d7737aa39c3714ad1b5420e" dependencies = [ "crossbeam-utils", ] [[package]] name = "crossbeam-utils" version = "0.8.20" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "22ec99545bb0ed0ea7bb9b8e1e9122ea386ff8a48c0922e43f36d45ab09e0e80" [[package]] name = "crunchy" version = "0.2.2" source = 
"registry+https://github.com/rust-lang/crates.io-index" checksum = "7a81dae078cea95a014a339291cec439d2f232ebe854a9d672b796c6afafa9b7" [[package]] name = "crypto-common" version = "0.1.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1bfb12502f3fc46cca1bb51ac28df9d618d813cdc3d2f25b9fe775a34af26bb3" dependencies = [ "generic-array", "typenum", ] [[package]] name = "deranged" version = "0.3.11" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b42b6fa04a440b495c8b04d0e71b707c585f83cb9cb28cf8cd0d976c315e31b4" dependencies = [ "powerfmt", ] [[package]] name = "derive_more" version = "0.99.18" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5f33878137e4dafd7fa914ad4e259e18a4e8e532b9617a2d0150262bf53abfce" dependencies = [ "convert_case", "proc-macro2", "quote", "rustc_version", "syn", ] [[package]] name = "digest" version = "0.10.7" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9ed9a281f7bc9b7576e61468ba615a66a5c8cfdff42420a70aa82701a3b1e292" dependencies = [ "block-buffer", "crypto-common", ] [[package]] name = "divan" version = "0.1.14" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a0d567df2c9c2870a43f3f2bd65aaeb18dbce1c18f217c3e564b4fbaeb3ee56c" dependencies = [ "cfg-if", "clap", "condtype", "divan-macros", "libc", "regex-lite", ] [[package]] name = "divan-macros" version = "0.1.14" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "27540baf49be0d484d8f0130d7d8da3011c32a44d4fc873368154f1510e574a2" dependencies = [ "proc-macro2", "quote", "syn", ] [[package]] name = "dunce" version = "1.0.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "92773504d58c093f6de2459af4af33faa518c13451eb8f2b5698ed3d36e7c813" [[package]] name = "either" version = "1.13.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "60b1af1c220855b6ceac025d3f6ecdd2b7c4894bfe9cd9bda4fbb4bc7c0d4cf0" [[package]] name = "encoding_rs" version = "0.8.34" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b45de904aa0b010bce2ab45264d0631681847fa7b6f2eaa7dab7619943bc4f59" dependencies = [ "cfg-if", ] [[package]] name = "env_filter" version = "0.1.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "4f2c92ceda6ceec50f43169f9ee8424fe2db276791afde7b2cd8bc084cb376ab" dependencies = [ "log", "regex", ] [[package]] name = "env_logger" version = "0.11.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e13fa619b91fb2381732789fc5de83b45675e882f66623b7d8cb4f643017018d" dependencies = [ "anstream", "anstyle", "env_filter", "humantime", "log", ] [[package]] name = "equivalent" version = "1.0.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5443807d6dff69373d433ab9ef5378ad8df50ca6298caf15de6e52e24aaf54d5" [[package]] name = "errno" version = "0.3.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "534c5cf6194dfab3db3242765c03bbe257cf92f22b38f6bc0c58d59108a820ba" dependencies = [ "libc", "windows-sys 0.52.0", ] [[package]] name = "flate2" version = "1.0.31" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7f211bbe8e69bbd0cfdea405084f128ae8b4aaa6b0b522fc8f2b009084797920" dependencies = [ "crc32fast", "miniz_oxide", ] [[package]] name = "fnv" version = "1.0.7" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = 
"3f9eec918d3f24069decb9af1554cad7c880e2da24a9afd88aca000531ab82c1" [[package]] name = "foreign-types" version = "0.3.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f6f339eb8adc052cd2ca78910fda869aefa38d22d5cb648e6485e4d3fc06f3b1" dependencies = [ "foreign-types-shared", ] [[package]] name = "foreign-types-shared" version = "0.1.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "00b0228411908ca8685dba7fc2cdd70ec9990a6e753e89b6ac91a84c40fbaf4b" [[package]] name = "form_urlencoded" version = "1.2.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e13624c2627564efccf4934284bdd98cbaa14e79b0b5a141218e507b3a823456" dependencies = [ "percent-encoding", ] [[package]] name = "fs_extra" version = "1.3.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "42703706b716c37f96a77aea830392ad231f44c9e9a67872fa5548707e11b11c" [[package]] name = "futures-core" version = "0.3.30" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "dfc6580bb841c5a68e9ef15c77ccc837b40a7504914d52e47b8b0e9bbda25a1d" [[package]] name = "futures-sink" version = "0.3.30" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9fb8e00e87438d937621c1c6269e53f536c14d3fbd6a042bb24879e57d474fb5" [[package]] name = "futures-task" version = "0.3.30" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "38d84fa142264698cdce1a9f9172cf383a0c82de1bddcf3092901442c4097004" [[package]] name = "futures-util" version = "0.3.30" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3d6401deb83407ab3da39eba7e33987a73c3df0c82b4bb5813ee871c19c41d48" dependencies = [ "futures-core", "futures-sink", "futures-task", "pin-project-lite", "pin-utils", ] [[package]] name = "generic-array" version = "0.14.7" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "85649ca51fd72272d7821adaf274ad91c288277713d9c18820d8499a7ff69e9a" dependencies = [ "typenum", "version_check", ] [[package]] name = "getrandom" version = "0.2.15" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c4567c8db10ae91089c99af84c68c38da3ec2f087c3f82960bcdbf3656b6f4d7" dependencies = [ "cfg-if", "libc", "wasi", ] [[package]] name = "gimli" version = "0.29.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "40ecd4077b5ae9fd2e9e169b102c6c330d0605168eb0e8bf79952b256dbefffd" [[package]] name = "glob" version = "0.3.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d2fabcfbdc87f4758337ca535fb41a6d701b65693ce38287d856d1674551ec9b" [[package]] name = "h2" version = "0.3.26" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "81fe527a889e1532da5c525686d96d4c2e74cdd345badf8dfef9f6b39dd5f5e8" dependencies = [ "bytes", "fnv", "futures-core", "futures-sink", "futures-util", "http 0.2.12", "indexmap", "slab", "tokio", "tokio-util", "tracing", ] [[package]] name = "half" version = "2.4.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6dd08c532ae367adf81c312a4580bc67f1d0fe8bc9c460520283f4c0ff277888" dependencies = [ "cfg-if", "crunchy", ] [[package]] name = "hashbrown" version = "0.14.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e5274423e17b7c9fc20b6e7e208532f9b19825d82dfd615708b70edd83df41f1" [[package]] name = "hermit-abi" version = "0.3.9" source = "registry+https://github.com/rust-lang/crates.io-index" 
checksum = "d231dfb89cfffdbc30e7fc41579ed6066ad03abda9e567ccafae602b97ec5024" [[package]] name = "home" version = "0.5.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e3d1354bf6b7235cb4a0576c2619fd4ed18183f689b12b006a0ee7329eeff9a5" dependencies = [ "windows-sys 0.52.0", ] [[package]] name = "http" version = "0.2.12" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "601cbb57e577e2f5ef5be8e7b83f0f63994f25aa94d673e54a92d5c516d101f1" dependencies = [ "bytes", "fnv", "itoa", ] [[package]] name = "http" version = "1.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "21b9ddb458710bc376481b842f5da65cdf31522de232c1ca8146abce2a358258" dependencies = [ "bytes", "fnv", "itoa", ] [[package]] name = "httparse" version = "1.9.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0fcc0b4a115bf80b728eb8ea024ad5bd707b615bfed49e0665b6e0f86fd082d9" [[package]] name = "httpdate" version = "1.0.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "df3b46402a9d5adb4c86a0cf463f42e19994e3ee891101b1841f30a545cb49a9" [[package]] name = "humantime" version = "2.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9a3a5bfb195931eeb336b2a7b4d761daec841b97f947d34394601737a7bba5e4" [[package]] name = "idna" version = "0.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "634d9b1461af396cad843f47fdba5597a4f9e6ddd4bfb6ff5d85028c25cb12f6" dependencies = [ "unicode-bidi", "unicode-normalization", ] [[package]] name = "impl-more" version = "0.1.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "206ca75c9c03ba3d4ace2460e57b189f39f43de612c2f85836e65c929701bb2d" [[package]] name = "indexmap" version = "2.3.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "de3fc2e30ba82dd1b3911c8de1ffc143c74a914a14e99514d7637e3099df5ea0" dependencies = [ "equivalent", "hashbrown", ] [[package]] name = "is-terminal" version = "0.4.12" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f23ff5ef2b80d608d61efee834934d862cd92461afc0560dedf493e4c033738b" dependencies = [ "hermit-abi", "libc", "windows-sys 0.52.0", ] [[package]] name = "is_terminal_polyfill" version = "1.70.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7943c866cc5cd64cbc25b2e01621d07fa8eb2a1a23160ee81ce38704e97b8ecf" [[package]] name = "itertools" version = "0.10.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b0fd2260e829bddf4cb6ea802289de2f86d6a7a690192fbe91b3f46e0f2c8473" dependencies = [ "either", ] [[package]] name = "itertools" version = "0.12.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ba291022dbbd398a455acf126c1e341954079855bc60dfdda641363bd6922569" dependencies = [ "either", ] [[package]] name = "itoa" version = "1.0.11" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "49f1f14873335454500d59611f1cf4a4b0f786f9ac11f4312a78e4cf2566695b" [[package]] name = "jobserver" version = "0.1.32" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "48d1dbcbbeb6a7fec7e059840aa538bd62aaccf972c7346c4d9d2059312853d0" dependencies = [ "libc", ] [[package]] name = "js-sys" version = "0.3.69" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "29c15563dc2726973df627357ce0c9ddddbea194836909d655df6a75d2cf296d" dependencies = [ "wasm-bindgen", 
] [[package]] name = "language-tags" version = "0.3.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d4345964bb142484797b161f473a503a434de77149dd8c7427788c6e13379388" [[package]] name = "lazy_static" version = "1.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "bbd2bcb4c963f2ddae06a2efc7e9f3591312473c50c6685e1f298068316e66fe" [[package]] name = "lazycell" version = "1.3.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "830d08ce1d1d941e6b30645f1a0eb5643013d835ce3779a5fc208261dbe10f55" [[package]] name = "libc" version = "0.2.155" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "97b3888a4aecf77e811145cadf6eef5901f4782c53886191b2f693f24761847c" [[package]] name = "libloading" version = "0.8.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "4979f22fdb869068da03c9f7528f8297c6fd2606bc3a4affe42e6a823fdb8da4" dependencies = [ "cfg-if", "windows-targets 0.52.6", ] [[package]] name = "linux-raw-sys" version = "0.4.14" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "78b3ae25bc7c8c38cec158d1f2757ee79e9b3740fbc7ccf0e59e4b08d793fa89" [[package]] name = "local-channel" version = "0.1.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b6cbc85e69b8df4b8bb8b89ec634e7189099cea8927a276b7384ce5488e53ec8" dependencies = [ "futures-core", "futures-sink", "local-waker", ] [[package]] name = "local-waker" version = "0.1.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "4d873d7c67ce09b42110d801813efbc9364414e356be9935700d368351657487" [[package]] name = "lock_api" version = "0.4.12" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "07af8b9cdd281b7915f413fa73f29ebd5d55d0d3f0155584dade1ff18cea1b17" dependencies = [ "autocfg", "scopeguard", ] [[package]] name = "log" version = "0.4.22" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a7a70ba024b9dc04c27ea2f0c0548feb474ec5c54bba33a7f72f873a39d07b24" [[package]] name = "memchr" version = "2.7.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "78ca9ab1a0babb1e7d5695e3530886289c18cf2f87ec19a575a0abdce112e3a3" [[package]] name = "mime" version = "0.3.17" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6877bb514081ee2a7ff5ef9de3281f14a4dd4bceac4c09388074a6b5df8a139a" [[package]] name = "minimal-lexical" version = "0.2.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "68354c5c6bd36d73ff3feceb05efa59b6acb7626617f4962be322a825e61f79a" [[package]] name = "miniz_oxide" version = "0.7.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b8a240ddb74feaf34a79a7add65a741f3167852fba007066dcac1ca548d89c08" dependencies = [ "adler", ] [[package]] name = "mio" version = "1.0.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "4569e456d394deccd22ce1c1913e6ea0e54519f577285001215d33557431afe4" dependencies = [ "hermit-abi", "libc", "log", "wasi", "windows-sys 0.52.0", ] [[package]] name = "mirai-annotations" version = "1.12.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c9be0862c1b3f26a88803c4a49de6889c10e608b3ee9344e6ef5b45fb37ad3d1" [[package]] name = "nom" version = "7.1.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d273983c5a657a70a3e8f2a01329822f3b8c8172b73826411a55751e404a0a4a" 
dependencies = [ "memchr", "minimal-lexical", ] [[package]] name = "num-conv" version = "0.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "51d515d32fb182ee37cda2ccdcb92950d6a3c2893aa280e540671c2cd0f3b1d9" [[package]] name = "num-traits" version = "0.2.19" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "071dfc062690e90b734c0b2273ce72ad0ffa95f0c74596bc250dcfd960262841" dependencies = [ "autocfg", ] [[package]] name = "object" version = "0.36.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "27b64972346851a39438c60b341ebc01bba47464ae329e55cf343eb93964efd9" dependencies = [ "memchr", ] [[package]] name = "once_cell" version = "1.19.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3fdb12b2476b595f9358c5161aa467c2438859caa136dec86c26fdd2efe17b92" [[package]] name = "oorandom" version = "11.1.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b410bbe7e14ab526a0e86877eb47c6996a2bd7746f027ba551028c925390e4e9" [[package]] name = "openssl" version = "0.10.66" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9529f4786b70a3e8c61e11179af17ab6188ad8d0ded78c5529441ed39d4bd9c1" dependencies = [ "bitflags", "cfg-if", "foreign-types", "libc", "once_cell", "openssl-macros", "openssl-sys", ] [[package]] name = "openssl-macros" version = "0.1.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a948666b637a0f465e8564c73e89d4dde00d72d4d473cc972f390fc3dcee7d9c" dependencies = [ "proc-macro2", "quote", "syn", ] [[package]] name = "openssl-sys" version = "0.9.103" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7f9e8deee91df40a943c71b917e5874b951d32a802526c85721ce3b776c929d6" dependencies = [ "cc", "libc", "pkg-config", "vcpkg", ] [[package]] name = "parking_lot" version = "0.12.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f1bf18183cf54e8d6059647fc3063646a1801cf30896933ec2311622cc4b9a27" dependencies = [ "lock_api", "parking_lot_core", ] [[package]] name = "parking_lot_core" version = "0.9.10" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1e401f977ab385c9e4e3ab30627d6f26d00e2c73eef317493c4ec6d468726cf8" dependencies = [ "cfg-if", "libc", "redox_syscall", "smallvec", "windows-targets 0.52.6", ] [[package]] name = "paste" version = "1.0.15" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "57c0d7b74b563b49d38dae00a0c37d4d6de9b432382b2892f0574ddcae73fd0a" [[package]] name = "pem" version = "3.0.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8e459365e590736a54c3fa561947c84837534b8e9af6fc5bf781307e82658fae" dependencies = [ "base64", "serde", ] [[package]] name = "percent-encoding" version = "2.3.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e3148f5046208a5d56bcfc03053e3ca6334e51da8dfb19b6cdc8b306fae3283e" [[package]] name = "pin-project-lite" version = "0.2.14" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "bda66fc9667c18cb2758a2ac84d1167245054bcf85d5d1aaa6923f45801bdd02" [[package]] name = "pin-utils" version = "0.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8b870d8c151b6f2fb93e84a13146138f05d02ed11c7e7c54f8826aaaf7c9f184" [[package]] name = "pkg-config" version = "0.3.30" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = 
"d231b230927b5e4ad203db57bbcbee2802f6bce620b1e4a9024a07d94e2907ec" [[package]] name = "plotters" version = "0.3.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a15b6eccb8484002195a3e44fe65a4ce8e93a625797a063735536fd59cb01cf3" dependencies = [ "num-traits", "plotters-backend", "plotters-svg", "wasm-bindgen", "web-sys", ] [[package]] name = "plotters-backend" version = "0.3.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "414cec62c6634ae900ea1c56128dfe87cf63e7caece0852ec76aba307cebadb7" [[package]] name = "plotters-svg" version = "0.3.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "81b30686a7d9c3e010b84284bdd26a29f2138574f52f5eb6f794fc0ad924e705" dependencies = [ "plotters-backend", ] [[package]] name = "powerfmt" version = "0.2.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "439ee305def115ba05938db6eb1644ff94165c5ab5e9420d1c1bcedbba909391" [[package]] name = "ppv-lite86" version = "0.2.20" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "77957b295656769bb8ad2b6a6b09d897d94f05c41b069aede1fcdaa675eaea04" dependencies = [ "zerocopy", ] [[package]] name = "prettyplease" version = "0.2.20" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5f12335488a2f3b0a83b14edad48dca9879ce89b2edd10e80237e4e852dd645e" dependencies = [ "proc-macro2", "syn", ] [[package]] name = "proc-macro2" version = "1.0.86" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5e719e8df665df0d1c8fbfd238015744736151d4445ec0836b8e628aae103b77" dependencies = [ "unicode-ident", ] [[package]] name = "quote" version = "1.0.36" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0fa76aaf39101c457836aec0ce2316dbdc3ab723cdda1c6bd4e6ad4208acaca7" dependencies = [ "proc-macro2", ] [[package]] name = "rand" version = "0.8.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "34af8d1a0e25924bc5b7c43c079c942339d8f0a8b57c39049bef581b46327404" dependencies = [ "libc", "rand_chacha", "rand_core", ] [[package]] name = "rand_chacha" version = "0.3.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e6c10a63a0fa32252be49d21e7709d4d4baf8d231c2dbce1eaa8141b9b127d88" dependencies = [ "ppv-lite86", "rand_core", ] [[package]] name = "rand_core" version = "0.6.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ec0be4795e2f6a28069bec0b5ff3e2ac9bafc99e6a9a7dc3547996c5c816922c" dependencies = [ "getrandom", ] [[package]] name = "rayon" version = "1.10.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b418a60154510ca1a002a752ca9714984e21e4241e804d32555251faf8b78ffa" dependencies = [ "either", "rayon-core", ] [[package]] name = "rayon-core" version = "1.12.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1465873a3dfdaa8ae7cb14b4383657caab0b3e8a0aa9ae8e04b044854c8dfce2" dependencies = [ "crossbeam-deque", "crossbeam-utils", ] [[package]] name = "rcgen" version = "0.13.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "54077e1872c46788540de1ea3d7f4ccb1983d12f9aa909b234468676c1a36779" dependencies = [ "pem", "ring 0.17.8", "rustls-pki-types", "time", "yasna", ] [[package]] name = "redox_syscall" version = "0.5.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "2a908a6e00f1fdd0dfd9c0eb08ce85126f6d8bbda50017e74bc4a4b7d4a926a4" 
dependencies = [ "bitflags", ] [[package]] name = "regex" version = "1.10.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "4219d74c6b67a3654a9fbebc4b419e22126d13d2f3c4a07ee0cb61ff79a79619" dependencies = [ "aho-corasick", "memchr", "regex-automata", "regex-syntax", ] [[package]] name = "regex-automata" version = "0.4.7" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "38caf58cc5ef2fed281f89292ef23f6365465ed9a41b7a7754eb4e26496c92df" dependencies = [ "aho-corasick", "memchr", "regex-syntax", ] [[package]] name = "regex-lite" version = "0.1.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "53a49587ad06b26609c52e423de037e7f57f20d53535d66e08c695f347df952a" [[package]] name = "regex-syntax" version = "0.8.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7a66a03ae7c801facd77a29370b4faec201768915ac14a721ba36f20bc9c209b" [[package]] name = "ring" version = "0.16.20" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3053cf52e236a3ed746dfc745aa9cacf1b791d846bdaf412f60a8d7d6e17c8fc" dependencies = [ "cc", "libc", "once_cell", "spin 0.5.2", "untrusted 0.7.1", "web-sys", "winapi", ] [[package]] name = "ring" version = "0.17.8" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c17fa4cb658e3583423e915b9f3acc01cceaee1860e33d59ebae66adc3a2dc0d" dependencies = [ "cc", "cfg-if", "getrandom", "libc", "spin 0.9.8", "untrusted 0.9.0", "windows-sys 0.52.0", ] [[package]] name = "rustc-demangle" version = "0.1.24" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "719b953e2095829ee67db738b3bfa9fa368c94900df327b3f07fe6e794d2fe1f" [[package]] name = "rustc-hash" version = "1.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "08d43f7aa6b08d49f382cde6a7982047c3426db949b1424bc4b7ec9ae12c6ce2" [[package]] name = "rustc_version" version = "0.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "bfa0f585226d2e68097d4f95d113b15b83a82e819ab25717ec0590d9584ef366" dependencies = [ "semver", ] [[package]] name = "rustix" version = "0.38.34" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "70dc5ec042f7a43c4a73241207cecc9873a06d45debb38b329f8541d85c2730f" dependencies = [ "bitflags", "errno", "libc", "linux-raw-sys", "windows-sys 0.52.0", ] [[package]] name = "rustls" version = "0.20.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1b80e3dec595989ea8510028f30c408a4630db12c9cbb8de34203b89d6577e99" dependencies = [ "log", "ring 0.16.20", "sct", "webpki", ] [[package]] name = "rustls" version = "0.21.12" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3f56a14d1f48b391359b22f731fd4bd7e43c97f3c50eee276f3aa09c94784d3e" dependencies = [ "log", "ring 0.17.8", "rustls-webpki 0.101.7", "sct", ] [[package]] name = "rustls" version = "0.22.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "bf4ef73721ac7bcd79b2b315da7779d8fc09718c6b3d2d1b2d94850eb8c18432" dependencies = [ "log", "ring 0.17.8", "rustls-pki-types", "rustls-webpki 0.102.6", "subtle", "zeroize", ] [[package]] name = "rustls" version = "0.23.12" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c58f8c84392efc0a126acce10fa59ff7b3d2ac06ab451a33f2741989b806b044" dependencies = [ "aws-lc-rs", "log", "once_cell", "rustls-pki-types", "rustls-webpki 0.102.6", "subtle", "zeroize", ] 
[[package]] name = "rustls-pemfile" version = "2.1.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "196fe16b00e106300d3e45ecfcb764fa292a535d7326a29a5875c579c7417425" dependencies = [ "base64", "rustls-pki-types", ] [[package]] name = "rustls-pki-types" version = "1.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "fc0a2ce646f8655401bb81e7927b812614bd5d91dbc968696be50603510fcaf0" [[package]] name = "rustls-webpki" version = "0.101.7" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8b6275d1ee7a1cd780b64aca7726599a1dbc893b1e64144529e55c3c2f745765" dependencies = [ "ring 0.17.8", "untrusted 0.9.0", ] [[package]] name = "rustls-webpki" version = "0.102.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8e6b52d4fda176fd835fdc55a835d4a89b8499cad995885a21149d5ad62f852e" dependencies = [ "aws-lc-rs", "ring 0.17.8", "rustls-pki-types", "untrusted 0.9.0", ] [[package]] name = "rustversion" version = "1.0.17" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "955d28af4278de8121b7ebeb796b6a45735dc01436d898801014aced2773a3d6" [[package]] name = "ryu" version = "1.0.18" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f3cb5ba0dc43242ce17de99c180e96db90b235b8a9fdc9543c96d2209116bd9f" [[package]] name = "same-file" version = "1.0.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "93fc1dc3aaa9bfed95e02e6eadabb4baf7e3078b0bd1b4d7b6b0b68378900502" dependencies = [ "winapi-util", ] [[package]] name = "scopeguard" version = "1.2.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49" [[package]] name = "sct" version = "0.7.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "da046153aa2352493d6cb7da4b6e5c0c057d8a1d0a9aa8560baffdd945acd414" dependencies = [ "ring 0.17.8", "untrusted 0.9.0", ] [[package]] name = "semver" version = "1.0.23" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "61697e0a1c7e512e84a621326239844a24d8207b4669b41bc18b32ea5cbf988b" [[package]] name = "serde" version = "1.0.205" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e33aedb1a7135da52b7c21791455563facbbcc43d0f0f66165b42c21b3dfb150" dependencies = [ "serde_derive", ] [[package]] name = "serde_derive" version = "1.0.205" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "692d6f5ac90220161d6774db30c662202721e64aed9058d2c394f451261420c1" dependencies = [ "proc-macro2", "quote", "syn", ] [[package]] name = "serde_json" version = "1.0.122" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "784b6203951c57ff748476b126ccb5e8e2959a5c19e5c617ab1956be3dbc68da" dependencies = [ "itoa", "memchr", "ryu", "serde", ] [[package]] name = "serde_urlencoded" version = "0.7.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d3491c14715ca2294c4d6a88f15e84739788c1d030eed8c110436aafdaa2f3fd" dependencies = [ "form_urlencoded", "itoa", "ryu", "serde", ] [[package]] name = "sha1" version = "0.10.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e3bf829a2d51ab4a5ddf1352d8470c140cadc8301b2ae1789db023f01cedd6ba" dependencies = [ "cfg-if", "cpufeatures", "digest", ] [[package]] name = "shlex" version = "1.3.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = 
"0fda2ff0d084019ba4d7c6f371c95d8fd75ce3524c3cb8fb653a3023f6323e64" [[package]] name = "signal-hook-registry" version = "1.4.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a9e9e0b4211b72e7b8b6e85c807d36c212bdb33ea8587f7569562a84df5465b1" dependencies = [ "libc", ] [[package]] name = "slab" version = "0.4.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8f92a496fb766b417c996b9c5e57daf2f7ad3b0bebe1ccfca4856390e3d3bb67" dependencies = [ "autocfg", ] [[package]] name = "smallvec" version = "1.13.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3c5e1a9a646d36c3599cd173a41282daf47c44583ad367b8e6837255952e5c67" [[package]] name = "socket2" version = "0.5.7" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ce305eb0b4296696835b71df73eb912e0f1ffd2556a501fcede6e0c50349191c" dependencies = [ "libc", "windows-sys 0.52.0", ] [[package]] name = "spin" version = "0.5.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6e63cff320ae2c57904679ba7cb63280a3dc4613885beafb148ee7bf9aa9042d" [[package]] name = "spin" version = "0.9.8" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6980e8d7511241f8acf4aebddbb1ff938df5eebe98691418c4468d0b72a96a67" [[package]] name = "static_assertions" version = "1.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a2eb9349b6444b326872e140eb1cf5e7c522154d69e7a0ffb0fb81c06b37543f" [[package]] name = "subtle" version = "2.6.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "13c2bddecc57b384dee18652358fb23172facb8a2c51ccc10d74c157bdea3292" [[package]] name = "syn" version = "2.0.72" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "dc4b9b9bf2add8093d3f2c0204471e951b2285580335de42f9d2534f3ae7a8af" dependencies = [ "proc-macro2", "quote", "unicode-ident", ] [[package]] name = "terminal_size" version = "0.3.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "21bebf2b7c9e0a515f6e0f8c51dc0f8e4696391e6f1ff30379559f8365fb0df7" dependencies = [ "rustix", "windows-sys 0.48.0", ] [[package]] name = "time" version = "0.3.36" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5dfd88e563464686c916c7e46e623e520ddc6d79fa6641390f2e3fa86e83e885" dependencies = [ "deranged", "itoa", "num-conv", "powerfmt", "serde", "time-core", "time-macros", ] [[package]] name = "time-core" version = "0.1.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ef927ca75afb808a4d64dd374f00a2adf8d0fcff8e7b184af886c3c87ec4a3f3" [[package]] name = "time-macros" version = "0.2.18" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3f252a68540fde3a3877aeea552b832b40ab9a69e318efd078774a01ddee1ccf" dependencies = [ "num-conv", "time-core", ] [[package]] name = "tinytemplate" version = "1.2.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "be4d6b5f19ff7664e8c98d03e2139cb510db9b0a60b55f8e8709b689d939b6bc" dependencies = [ "serde", "serde_json", ] [[package]] name = "tinyvec" version = "1.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "445e881f4f6d382d5f27c034e25eb92edd7c784ceab92a0937db7f2e9471b938" dependencies = [ "tinyvec_macros", ] [[package]] name = "tinyvec_macros" version = "0.1.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = 
"1f3ccbac311fea05f86f61904b462b55fb3df8837a366dfc601a0161d0532f20" [[package]] name = "tokio" version = "1.39.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "daa4fb1bc778bd6f04cbfc4bb2d06a7396a8f299dc33ea1900cedaa316f467b1" dependencies = [ "backtrace", "bytes", "libc", "mio", "parking_lot", "pin-project-lite", "signal-hook-registry", "socket2", "tokio-macros", "windows-sys 0.52.0", ] [[package]] name = "tokio-macros" version = "2.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "693d596312e88961bc67d7f1f97af8a70227d9f90c31bba5806eec004978d752" dependencies = [ "proc-macro2", "quote", "syn", ] [[package]] name = "tokio-openssl" version = "0.6.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6ffab79df67727f6acf57f1ff743091873c24c579b1e2ce4d8f53e47ded4d63d" dependencies = [ "futures-util", "openssl", "openssl-sys", "tokio", ] [[package]] name = "tokio-rustls" version = "0.23.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c43ee83903113e03984cb9e5cebe6c04a5116269e900e3ddba8f068a62adda59" dependencies = [ "rustls 0.20.9", "tokio", "webpki", ] [[package]] name = "tokio-rustls" version = "0.24.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c28327cf380ac148141087fbfb9de9d7bd4e84ab5d2c28fbc911d753de8a7081" dependencies = [ "rustls 0.21.12", "tokio", ] [[package]] name = "tokio-rustls" version = "0.25.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "775e0c0f0adb3a2f22a00c4745d728b479985fc15ee7ca6a2608388c5569860f" dependencies = [ "rustls 0.22.4", "rustls-pki-types", "tokio", ] [[package]] name = "tokio-rustls" version = "0.26.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0c7bc40d0e5a97695bb96e27995cd3a08538541b0a846f65bba7a359f36700d4" dependencies = [ "rustls 0.23.12", "rustls-pki-types", "tokio", ] [[package]] name = "tokio-util" version = "0.7.11" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9cf6b47b3771c49ac75ad09a6162f53ad4b8088b76ac60e8ec1455b31a189fe1" dependencies = [ "bytes", "futures-core", "futures-sink", "pin-project-lite", "tokio", ] [[package]] name = "tracing" version = "0.1.40" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c3523ab5a71916ccf420eebdf5521fcef02141234bbc0b8a49f2fdc4544364ef" dependencies = [ "log", "pin-project-lite", "tracing-core", ] [[package]] name = "tracing-core" version = "0.1.32" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c06d3da6113f116aaee68e4d601191614c9053067f9ab7f6edbcb161237daa54" dependencies = [ "once_cell", ] [[package]] name = "typenum" version = "1.17.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "42ff0bf0c66b8238c6f3b578df37d0b7848e55df8577b3f74f92a69acceeb825" [[package]] name = "unicode-bidi" version = "0.3.15" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "08f95100a766bf4f8f28f90d77e0a5461bbdb219042e7679bebe79004fed8d75" [[package]] name = "unicode-ident" version = "1.0.12" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3354b9ac3fae1ff6755cb6db53683adb661634f67557942dea4facebec0fee4b" [[package]] name = "unicode-normalization" version = "0.1.23" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a56d1686db2308d901306f92a263857ef59ea39678a5458e7cb17f01415101f5" dependencies = [ "tinyvec", ] [[package]] name = 
"untrusted" version = "0.7.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a156c684c91ea7d62626509bce3cb4e1d9ed5c4d978f7b4352658f96a4c26b4a" [[package]] name = "untrusted" version = "0.9.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8ecb6da28b8a351d773b68d5825ac39017e680750f980f3a1a85cd8dd28a47c1" [[package]] name = "url" version = "2.5.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "22784dbdf76fdde8af1aeda5622b546b422b6fc585325248a2bf9f5e41e94d6c" dependencies = [ "form_urlencoded", "idna", "percent-encoding", ] [[package]] name = "utf8parse" version = "0.2.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "06abde3611657adf66d383f00b093d7faecc7fa57071cce2578660c9f1010821" [[package]] name = "vcpkg" version = "0.2.15" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "accd4ea62f7bb7a82fe23066fb0957d48ef677f6eeb8215f372f52e48bb32426" [[package]] name = "version_check" version = "0.9.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0b928f33d975fc6ad9f86c8f283853ad26bdd5b10b7f1542aa2fa15e2289105a" [[package]] name = "walkdir" version = "2.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "29790946404f91d9c5d06f9874efddea1dc06c5efe94541a7d6863108e3a5e4b" dependencies = [ "same-file", "winapi-util", ] [[package]] name = "wasi" version = "0.11.0+wasi-snapshot-preview1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9c8d87e72b64a3b4db28d11ce29237c246188f4f51057d65a7eab63b7987e423" [[package]] name = "wasm-bindgen" version = "0.2.92" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "4be2531df63900aeb2bca0daaaddec08491ee64ceecbee5076636a3b026795a8" dependencies = [ "cfg-if", "wasm-bindgen-macro", ] [[package]] name = "wasm-bindgen-backend" version = "0.2.92" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "614d787b966d3989fa7bb98a654e369c762374fd3213d212cfc0251257e747da" dependencies = [ "bumpalo", "log", "once_cell", "proc-macro2", "quote", "syn", "wasm-bindgen-shared", ] [[package]] name = "wasm-bindgen-macro" version = "0.2.92" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a1f8823de937b71b9460c0c34e25f3da88250760bec0ebac694b49997550d726" dependencies = [ "quote", "wasm-bindgen-macro-support", ] [[package]] name = "wasm-bindgen-macro-support" version = "0.2.92" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e94f17b526d0a461a191c78ea52bbce64071ed5c04c9ffe424dcb38f74171bb7" dependencies = [ "proc-macro2", "quote", "syn", "wasm-bindgen-backend", "wasm-bindgen-shared", ] [[package]] name = "wasm-bindgen-shared" version = "0.2.92" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "af190c94f2773fdb3729c55b007a722abb5384da03bc0986df4c289bf5567e96" [[package]] name = "web-sys" version = "0.3.69" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "77afa9a11836342370f4817622a2f0f418b134426d91a82dfb48f532d2ec13ef" dependencies = [ "js-sys", "wasm-bindgen", ] [[package]] name = "webpki" version = "0.22.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ed63aea5ce73d0ff405984102c42de94fc55a6b75765d621c65262469b3c9b53" dependencies = [ "ring 0.17.8", "untrusted 0.9.0", ] [[package]] name = "webpki-roots" version = "0.22.6" source = 
"registry+https://github.com/rust-lang/crates.io-index" checksum = "b6c71e40d7d2c34a5106301fb632274ca37242cd0c9d3e64dbece371a40a2d87" dependencies = [ "webpki", ] [[package]] name = "webpki-roots" version = "0.25.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5f20c57d8d7db6d3b86154206ae5d8fba62dd39573114de97c2cb0578251f8e1" [[package]] name = "webpki-roots" version = "0.26.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "bd7c23921eeb1713a4e851530e9b9756e4fb0e89978582942612524cf09f01cd" dependencies = [ "rustls-pki-types", ] [[package]] name = "which" version = "4.4.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "87ba24419a2078cd2b0f2ede2691b6c66d8e47836da3b6db8265ebad47afbfc7" dependencies = [ "either", "home", "once_cell", "rustix", ] [[package]] name = "winapi" version = "0.3.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5c839a674fcd7a98952e593242ea400abe93992746761e38641405d28b00f419" dependencies = [ "winapi-i686-pc-windows-gnu", "winapi-x86_64-pc-windows-gnu", ] [[package]] name = "winapi-i686-pc-windows-gnu" version = "0.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6" [[package]] name = "winapi-util" version = "0.1.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "cf221c93e13a30d793f7645a0e7762c55d169dbb0a49671918a2319d289b10bb" dependencies = [ "windows-sys 0.59.0", ] [[package]] name = "winapi-x86_64-pc-windows-gnu" version = "0.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f" [[package]] name = "windows-sys" version = "0.48.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "677d2418bec65e3338edb076e806bc1ec15693c5d0104683f2efe857f61056a9" dependencies = [ "windows-targets 0.48.5", ] [[package]] name = "windows-sys" version = "0.52.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "282be5f36a8ce781fad8c8ae18fa3f9beff57ec1b52cb3de0789201425d9a33d" dependencies = [ "windows-targets 0.52.6", ] [[package]] name = "windows-sys" version = "0.59.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1e38bc4d79ed67fd075bcc251a1c39b32a1776bbe92e5bef1f0bf1f8c531853b" dependencies = [ "windows-targets 0.52.6", ] [[package]] name = "windows-targets" version = "0.48.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9a2fa6e2155d7247be68c096456083145c183cbbbc2764150dda45a87197940c" dependencies = [ "windows_aarch64_gnullvm 0.48.5", "windows_aarch64_msvc 0.48.5", "windows_i686_gnu 0.48.5", "windows_i686_msvc 0.48.5", "windows_x86_64_gnu 0.48.5", "windows_x86_64_gnullvm 0.48.5", "windows_x86_64_msvc 0.48.5", ] [[package]] name = "windows-targets" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9b724f72796e036ab90c1021d4780d4d3d648aca59e491e6b98e725b84e99973" dependencies = [ "windows_aarch64_gnullvm 0.52.6", "windows_aarch64_msvc 0.52.6", "windows_i686_gnu 0.52.6", "windows_i686_gnullvm", "windows_i686_msvc 0.52.6", "windows_x86_64_gnu 0.52.6", "windows_x86_64_gnullvm 0.52.6", "windows_x86_64_msvc 0.52.6", ] [[package]] name = "windows_aarch64_gnullvm" version = "0.48.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = 
"2b38e32f0abccf9987a4e3079dfb67dcd799fb61361e53e2882c3cbaf0d905d8" [[package]] name = "windows_aarch64_gnullvm" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "32a4622180e7a0ec044bb555404c800bc9fd9ec262ec147edd5989ccd0c02cd3" [[package]] name = "windows_aarch64_msvc" version = "0.48.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "dc35310971f3b2dbbf3f0690a219f40e2d9afcf64f9ab7cc1be722937c26b4bc" [[package]] name = "windows_aarch64_msvc" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "09ec2a7bb152e2252b53fa7803150007879548bc709c039df7627cabbd05d469" [[package]] name = "windows_i686_gnu" version = "0.48.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a75915e7def60c94dcef72200b9a8e58e5091744960da64ec734a6c6e9b3743e" [[package]] name = "windows_i686_gnu" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8e9b5ad5ab802e97eb8e295ac6720e509ee4c243f69d781394014ebfe8bbfa0b" [[package]] name = "windows_i686_gnullvm" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0eee52d38c090b3caa76c563b86c3a4bd71ef1a819287c19d586d7334ae8ed66" [[package]] name = "windows_i686_msvc" version = "0.48.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8f55c233f70c4b27f66c523580f78f1004e8b5a8b659e05a4eb49d4166cca406" [[package]] name = "windows_i686_msvc" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "240948bc05c5e7c6dabba28bf89d89ffce3e303022809e73deaefe4f6ec56c66" [[package]] name = "windows_x86_64_gnu" version = "0.48.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "53d40abd2583d23e4718fddf1ebec84dbff8381c07cae67ff7768bbf19c6718e" [[package]] name = "windows_x86_64_gnu" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "147a5c80aabfbf0c7d901cb5895d1de30ef2907eb21fbbab29ca94c5b08b1a78" [[package]] name = "windows_x86_64_gnullvm" version = "0.48.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0b7b52767868a23d5bab768e390dc5f5c55825b6d30b86c844ff2dc7414044cc" [[package]] name = "windows_x86_64_gnullvm" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "24d5b23dc417412679681396f2b49f3de8c1473deb516bd34410872eff51ed0d" [[package]] name = "windows_x86_64_msvc" version = "0.48.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ed94fce61571a4006852b7389a063ab983c02eb1bb37b47f8272ce92d06d9538" [[package]] name = "windows_x86_64_msvc" version = "0.52.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "589f6da84c646204747d1270a2a5661ea66ed1cced2631d546fdfb155959f9ec" [[package]] name = "yasna" version = "0.5.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e17bb3549cc1321ae1296b9cdc2698e2b6cb1992adfa19a8c72e5b7a738f44cd" dependencies = [ "time", ] [[package]] name = "zerocopy" version = "0.7.35" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1b9b4fd18abc82b8136838da5d50bae7bdea537c574d8dc1a34ed098d6c166f0" dependencies = [ "byteorder", "zerocopy-derive", ] [[package]] name = "zerocopy-derive" version = "0.7.35" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = 
"fa4f8080344d4671fb4e831a13ad1e68092748387dfc4f55e356242fae12ce3e" dependencies = [ "proc-macro2", "quote", "syn", ] [[package]] name = "zeroize" version = "1.8.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ced3678a2879b30306d323f4542626697a464a97c0a07c9aebf7ebca65cd4dde" dependencies = [ "zeroize_derive", ] [[package]] name = "zeroize_derive" version = "1.4.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ce36e65b0d2999d2aafac989fb249189a141aee1f53c612c1f37d72631959f69" dependencies = [ "proc-macro2", "quote", "syn", ] [[package]] name = "zstd" version = "0.13.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "fcf2b778a664581e31e389454a7072dab1647606d44f7feea22cd5abb9c9f3f9" dependencies = [ "zstd-safe", ] [[package]] name = "zstd-safe" version = "7.2.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "54a3ab4db68cea366acc5c897c7b4d4d1b8994a9cd6e6f841f8964566a419059" dependencies = [ "zstd-sys", ] [[package]] name = "zstd-sys" version = "2.0.13+zstd.1.5.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "38ff0f21cfee8f97d94cef41359e0c89aa6113028ab0291aa8ca0038995a95aa" dependencies = [ "cc", "pkg-config", ] actix-http-3.9.0/Cargo.toml0000644000000157350000000000100111430ustar # THIS FILE IS AUTOMATICALLY GENERATED BY CARGO # # When uploading crates to the registry Cargo will automatically # "normalize" Cargo.toml files for maximal compatibility # with all versions of Cargo and also rewrite `path` dependencies # to registry (e.g., crates.io) dependencies. # # If you are reading this file be aware that the original Cargo.toml # will likely look very different (and much more reasonable). # See Cargo.toml.orig for the original contents. 
[package] edition = "2021" rust-version = "1.72" name = "actix-http" version = "3.9.0" authors = [ "Nikolay Kim ", "Rob Ede ", ] build = false autobins = false autoexamples = false autotests = false autobenches = false description = "HTTP types and services for the Actix ecosystem" homepage = "https://actix.rs" readme = "README.md" keywords = [ "actix", "http", "framework", "async", "futures", ] categories = [ "network-programming", "asynchronous", "web-programming::http-server", "web-programming::websocket", ] license = "MIT OR Apache-2.0" repository = "https://github.com/actix/actix-web" [package.metadata.cargo_check_external_types] allowed_external_types = [ "actix_codec::*", "actix_service::*", "actix_tls::*", "actix_utils::*", "bytes::*", "bytestring::*", "encoding_rs::*", "futures_core::*", "h2::*", "http::*", "httparse::*", "language_tags::*", "mime::*", "openssl::*", "rustls::*", "tokio_util::*", "tokio::*", ] [package.metadata.docs.rs] features = [ "http2", "ws", "openssl", "rustls-0_20", "rustls-0_21", "rustls-0_22", "rustls-0_23", "compress-brotli", "compress-gzip", "compress-zstd", ] rustdoc-args = [ "--cfg", "docsrs", ] [lib] name = "actix_http" path = "src/lib.rs" [[example]] name = "actix-web" path = "examples/actix-web.rs" [[example]] name = "bench" path = "examples/bench.rs" [[example]] name = "echo" path = "examples/echo.rs" [[example]] name = "echo2" path = "examples/echo2.rs" [[example]] name = "h2c-detect" path = "examples/h2c-detect.rs" [[example]] name = "h2spec" path = "examples/h2spec.rs" [[example]] name = "hello-world" path = "examples/hello-world.rs" [[example]] name = "streaming-error" path = "examples/streaming-error.rs" [[example]] name = "tls_rustls" path = "examples/tls_rustls.rs" required-features = [ "http2", "rustls-0_23", ] [[example]] name = "ws" path = "examples/ws.rs" required-features = [ "ws", "rustls-0_23", ] [[test]] name = "test_client" path = "tests/test_client.rs" [[test]] name = "test_h2_timer" path = "tests/test_h2_timer.rs" [[test]] name = "test_openssl" path = "tests/test_openssl.rs" [[test]] name = "test_rustls" path = "tests/test_rustls.rs" [[test]] name = "test_server" path = "tests/test_server.rs" [[test]] name = "test_ws" path = "tests/test_ws.rs" [[bench]] name = "date-formatting" path = "benches/date-formatting.rs" harness = false [[bench]] name = "response-body-compression" path = "benches/response-body-compression.rs" harness = false required-features = [ "compress-brotli", "compress-gzip", "compress-zstd", ] [dependencies.actix-codec] version = "0.5" [dependencies.actix-rt] version = "2.2" default-features = false [dependencies.actix-service] version = "2" [dependencies.actix-tls] version = "3.4" optional = true default-features = false [dependencies.actix-utils] version = "3" [dependencies.ahash] version = "0.8" [dependencies.base64] version = "0.22" optional = true [dependencies.bitflags] version = "2" [dependencies.brotli] version = "6" optional = true [dependencies.bytes] version = "1" [dependencies.bytestring] version = "1" [dependencies.derive_more] version = "0.99.5" [dependencies.encoding_rs] version = "0.8" [dependencies.flate2] version = "1.0.13" optional = true [dependencies.futures-core] version = "0.3.17" features = ["alloc"] default-features = false [dependencies.h2] version = "0.3.26" optional = true [dependencies.http] version = "0.2.7" [dependencies.httparse] version = "1.5.1" [dependencies.httpdate] version = "1.0.1" [dependencies.itoa] version = "1" [dependencies.language-tags] version = "0.3" 
[dependencies.local-channel] version = "0.1" optional = true [dependencies.mime] version = "0.3.4" [dependencies.percent-encoding] version = "2.1" [dependencies.pin-project-lite] version = "0.2" [dependencies.rand] version = "0.8" optional = true [dependencies.sha1] version = "0.10" optional = true [dependencies.smallvec] version = "1.6.1" [dependencies.tokio] version = "1.24.2" features = [] [dependencies.tokio-util] version = "0.7" features = [ "io", "codec", ] [dependencies.tracing] version = "0.1.30" features = ["log"] default-features = false [dependencies.zstd] version = "0.13" optional = true [dev-dependencies.actix-http-test] version = "3" features = ["openssl"] [dev-dependencies.actix-server] version = "2" [dev-dependencies.actix-tls] version = "3.4" features = [ "openssl", "rustls-0_23-webpki-roots", ] [dev-dependencies.actix-web] version = "4" [dev-dependencies.async-stream] version = "0.3" [dev-dependencies.criterion] version = "0.5" features = ["html_reports"] [dev-dependencies.divan] version = "0.1.8" [dev-dependencies.env_logger] version = "0.11" [dev-dependencies.futures-util] version = "0.3.17" features = ["alloc"] default-features = false [dev-dependencies.memchr] version = "2.4" [dev-dependencies.once_cell] version = "1.9" [dev-dependencies.rcgen] version = "0.13" [dev-dependencies.regex] version = "1.3" [dev-dependencies.rustls-pemfile] version = "2" [dev-dependencies.rustversion] version = "1" [dev-dependencies.serde] version = "1.0" features = ["derive"] [dev-dependencies.serde_json] version = "1.0" [dev-dependencies.static_assertions] version = "1" [dev-dependencies.tls-openssl] version = "0.10.55" package = "openssl" [dev-dependencies.tls-rustls_023] version = "0.23" package = "rustls" [dev-dependencies.tokio] version = "1.24.2" features = [ "net", "rt", "macros", ] [features] __compress = [] __tls = [] compress-brotli = [ "__compress", "dep:brotli", ] compress-gzip = [ "__compress", "dep:flate2", ] compress-zstd = [ "__compress", "dep:zstd", ] default = [] http2 = ["dep:h2"] openssl = [ "__tls", "actix-tls/accept", "actix-tls/openssl", ] rustls = [ "__tls", "rustls-0_20", ] rustls-0_20 = [ "__tls", "actix-tls/accept", "actix-tls/rustls-0_20", ] rustls-0_21 = [ "__tls", "actix-tls/accept", "actix-tls/rustls-0_21", ] rustls-0_22 = [ "__tls", "actix-tls/accept", "actix-tls/rustls-0_22", ] rustls-0_23 = [ "__tls", "actix-tls/accept", "actix-tls/rustls-0_23", ] ws = [ "dep:local-channel", "dep:base64", "dep:rand", "dep:sha1", ] [lints.clippy] [lints.rust.future_incompatible] level = "deny" priority = 0 [lints.rust.nonstandard_style] level = "deny" priority = 0 [lints.rust.rust_2018_idioms] level = "deny" priority = 0 actix-http-3.9.0/Cargo.toml.orig000064400000000000000000000111731046102023000146140ustar 00000000000000[package] name = "actix-http" version = "3.9.0" authors = [ "Nikolay Kim ", "Rob Ede ", ] description = "HTTP types and services for the Actix ecosystem" keywords = ["actix", "http", "framework", "async", "futures"] homepage = "https://actix.rs" repository = "https://github.com/actix/actix-web" categories = [ "network-programming", "asynchronous", "web-programming::http-server", "web-programming::websocket", ] license.workspace = true edition.workspace = true rust-version.workspace = true [package.metadata.docs.rs] rustdoc-args = ["--cfg", "docsrs"] features = [ "http2", "ws", "openssl", "rustls-0_20", "rustls-0_21", "rustls-0_22", "rustls-0_23", "compress-brotli", "compress-gzip", "compress-zstd", ] [package.metadata.cargo_check_external_types] 
allowed_external_types = [ "actix_codec::*", "actix_service::*", "actix_tls::*", "actix_utils::*", "bytes::*", "bytestring::*", "encoding_rs::*", "futures_core::*", "h2::*", "http::*", "httparse::*", "language_tags::*", "mime::*", "openssl::*", "rustls::*", "tokio_util::*", "tokio::*", ] [features] default = [] # HTTP/2 protocol support http2 = ["dep:h2"] # WebSocket protocol implementation ws = [ "dep:local-channel", "dep:base64", "dep:rand", "dep:sha1", ] # TLS via OpenSSL openssl = ["__tls", "actix-tls/accept", "actix-tls/openssl"] # TLS via Rustls v0.20 rustls = ["__tls", "rustls-0_20"] # TLS via Rustls v0.20 rustls-0_20 = ["__tls", "actix-tls/accept", "actix-tls/rustls-0_20"] # TLS via Rustls v0.21 rustls-0_21 = ["__tls", "actix-tls/accept", "actix-tls/rustls-0_21"] # TLS via Rustls v0.22 rustls-0_22 = ["__tls", "actix-tls/accept", "actix-tls/rustls-0_22"] # TLS via Rustls v0.23 rustls-0_23 = ["__tls", "actix-tls/accept", "actix-tls/rustls-0_23"] # Compression codecs compress-brotli = ["__compress", "dep:brotli"] compress-gzip = ["__compress", "dep:flate2"] compress-zstd = ["__compress", "dep:zstd"] # Internal (PRIVATE!) features used to aid testing and checking feature status. # Don't rely on these whatsoever. They are semver-exempt and may disappear at anytime. __compress = [] # Internal (PRIVATE!) features used to aid checking feature status. # Don't rely on these whatsoever. They may disappear at anytime. __tls = [] [dependencies] actix-service = "2" actix-codec = "0.5" actix-utils = "3" actix-rt = { version = "2.2", default-features = false } ahash = "0.8" bitflags = "2" bytes = "1" bytestring = "1" derive_more = "0.99.5" encoding_rs = "0.8" futures-core = { version = "0.3.17", default-features = false, features = ["alloc"] } http = "0.2.7" httparse = "1.5.1" httpdate = "1.0.1" itoa = "1" language-tags = "0.3" mime = "0.3.4" percent-encoding = "2.1" pin-project-lite = "0.2" smallvec = "1.6.1" tokio = { version = "1.24.2", features = [] } tokio-util = { version = "0.7", features = ["io", "codec"] } tracing = { version = "0.1.30", default-features = false, features = ["log"] } # http2 h2 = { version = "0.3.26", optional = true } # websockets local-channel = { version = "0.1", optional = true } base64 = { version = "0.22", optional = true } rand = { version = "0.8", optional = true } sha1 = { version = "0.10", optional = true } # openssl/rustls actix-tls = { version = "3.4", default-features = false, optional = true } # compress-* brotli = { version = "6", optional = true } flate2 = { version = "1.0.13", optional = true } zstd = { version = "0.13", optional = true } [dev-dependencies] actix-http-test = { version = "3", features = ["openssl"] } actix-server = "2" actix-tls = { version = "3.4", features = ["openssl", "rustls-0_23-webpki-roots"] } actix-web = "4" async-stream = "0.3" criterion = { version = "0.5", features = ["html_reports"] } divan = "0.1.8" env_logger = "0.11" futures-util = { version = "0.3.17", default-features = false, features = ["alloc"] } memchr = "2.4" once_cell = "1.9" rcgen = "0.13" regex = "1.3" rustversion = "1" rustls-pemfile = "2" serde = { version = "1.0", features = ["derive"] } serde_json = "1.0" static_assertions = "1" tls-openssl = { package = "openssl", version = "0.10.55" } tls-rustls_023 = { package = "rustls", version = "0.23" } tokio = { version = "1.24.2", features = ["net", "rt", "macros"] } [lints] workspace = true [[example]] name = "ws" required-features = ["ws", "rustls-0_23"] [[example]] name = "tls_rustls" required-features = ["http2", 
"rustls-0_23"] [[bench]] name = "response-body-compression" harness = false required-features = ["compress-brotli", "compress-gzip", "compress-zstd"] [[bench]] name = "date-formatting" harness = false actix-http-3.9.0/LICENSE-APACHE000064400000000000000000000261201046102023000136470ustar 00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. 
Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "{}" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright 2017-NOW Actix Team Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. actix-http-3.9.0/LICENSE-MIT000064400000000000000000000020421046102023000133540ustar 00000000000000Copyright (c) 2017-NOW Actix Team Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. actix-http-3.9.0/README.md000064400000000000000000000032411046102023000132010ustar 00000000000000# `actix-http` > HTTP types and services for the Actix ecosystem. [![crates.io](https://img.shields.io/crates/v/actix-http?label=latest)](https://crates.io/crates/actix-http) [![Documentation](https://docs.rs/actix-http/badge.svg?version=3.9.0)](https://docs.rs/actix-http/3.9.0) ![Version](https://img.shields.io/badge/rustc-1.72+-ab6000.svg) ![MIT or Apache 2.0 licensed](https://img.shields.io/crates/l/actix-http.svg)
[![dependency status](https://deps.rs/crate/actix-http/3.9.0/status.svg)](https://deps.rs/crate/actix-http/3.9.0) [![Download](https://img.shields.io/crates/d/actix-http.svg)](https://crates.io/crates/actix-http) [![Chat on Discord](https://img.shields.io/discord/771444961383153695?label=chat&logo=discord)](https://discord.gg/NWpN5mmg3x)

## Examples

```rust
use std::{convert::Infallible, io, time::Duration};

use actix_http::{header::HeaderValue, HttpService, Request, Response, StatusCode};
use actix_server::Server;
use tracing::info;

#[actix_rt::main]
async fn main() -> io::Result<()> {
    env_logger::init_from_env(env_logger::Env::new().default_filter_or("info"));

    Server::build()
        .bind("hello-world", "127.0.0.1:8080", || {
            HttpService::build()
                .client_request_timeout(Duration::from_secs(1))
                .client_disconnect_timeout(Duration::from_secs(1))
                .finish(|req: Request| async move {
                    info!("{:?}", req);

                    let mut res = Response::build(StatusCode::OK);
                    res.insert_header(("x-head", HeaderValue::from_static("dummy value!")));

                    Ok::<_, Infallible>(res.body("Hello world!"))
                })
                .tcp()
        })?
        .run()
        .await
}
```
actix-http-3.9.0/benches/date-formatting.rs000064400000000000000000000006211046102023000167630ustar 00000000000000use std::time::SystemTime; use actix_http::header::HttpDate; use divan::{black_box, AllocProfiler, Bencher}; #[global_allocator] static ALLOC: AllocProfiler = AllocProfiler::system(); #[divan::bench] fn date_formatting(b: Bencher<'_, '_>) { let now = SystemTime::now(); b.bench(|| { black_box(HttpDate::from(black_box(now)).to_string()); }) } fn main() { divan::main(); }
actix-http-3.9.0/benches/response-body-compression.rs000064400000000000000000000054211046102023000210310ustar 00000000000000use std::convert::Infallible; use actix_http::{encoding::Encoder, ContentEncoding, Request, Response, StatusCode}; use actix_service::{fn_service, Service as _}; use criterion::{black_box, criterion_group, criterion_main, Criterion}; static BODY: &[u8] = include_bytes!("../Cargo.toml"); fn compression_responses(c: &mut Criterion) { let mut group = c.benchmark_group("compression responses"); group.bench_function("identity", |b| { let rt = actix_rt::Runtime::new().unwrap(); let identity_svc = fn_service(|_: Request| async move { let mut res = Response::with_body(StatusCode::OK, ()); let body = black_box(Encoder::response( ContentEncoding::Identity, res.head_mut(), BODY, )); Ok::<_, Infallible>(black_box(res.set_body(black_box(body)))) }); b.iter(|| { rt.block_on(identity_svc.call(Request::new())).unwrap(); }); }); group.bench_function("gzip", |b| { let rt = actix_rt::Runtime::new().unwrap(); let identity_svc = fn_service(|_: Request| async move { let mut res = Response::with_body(StatusCode::OK, ()); let body = black_box(Encoder::response( ContentEncoding::Gzip, res.head_mut(), BODY, )); Ok::<_, Infallible>(black_box(res.set_body(black_box(body)))) }); b.iter(|| { rt.block_on(identity_svc.call(Request::new())).unwrap(); }); }); group.bench_function("br", |b| { let rt = actix_rt::Runtime::new().unwrap(); let identity_svc = fn_service(|_: Request| async move { let mut res = Response::with_body(StatusCode::OK, ()); let body = black_box(Encoder::response( ContentEncoding::Brotli, res.head_mut(), BODY, )); Ok::<_, Infallible>(black_box(res.set_body(black_box(body)))) }); b.iter(|| { rt.block_on(identity_svc.call(Request::new())).unwrap(); }); }); group.bench_function("zstd", |b| { let rt = actix_rt::Runtime::new().unwrap(); let identity_svc = fn_service(|_: Request| async move { let mut res = Response::with_body(StatusCode::OK, ()); let body = black_box(Encoder::response( ContentEncoding::Zstd, res.head_mut(), BODY, )); Ok::<_,
Infallible>(black_box(res.set_body(black_box(body)))) }); b.iter(|| { rt.block_on(identity_svc.call(Request::new())).unwrap(); }); }); group.finish(); } criterion_group!(benches, compression_responses); criterion_main!(benches); actix-http-3.9.0/examples/actix-web.rs000064400000000000000000000014661046102023000160000ustar 00000000000000use actix_http::HttpService; use actix_server::Server; use actix_service::map_config; use actix_web::{dev::AppConfig, get, App}; #[get("/")] async fn index() -> &'static str { "Hello, world. From Actix Web!" } #[tokio::main(flavor = "current_thread")] async fn main() -> std::io::Result<()> { Server::build() .bind("hello-world", "127.0.0.1:8080", || { // construct actix-web app let app = App::new().service(index); HttpService::build() // pass the app to service builder // map_config is used to map App's configuration to ServiceBuilder // h1 will configure server to only use HTTP/1.1 .h1(map_config(app, |_| AppConfig::default())) .tcp() })? .run() .await } actix-http-3.9.0/examples/bench.rs000064400000000000000000000016401046102023000151660ustar 00000000000000use std::{convert::Infallible, io, time::Duration}; use actix_http::{HttpService, Request, Response, StatusCode}; use actix_server::Server; use once_cell::sync::Lazy; static STR: Lazy = Lazy::new(|| "HELLO WORLD ".repeat(20)); #[actix_rt::main] async fn main() -> io::Result<()> { env_logger::init_from_env(env_logger::Env::new().default_filter_or("info")); Server::build() .bind("dispatcher-benchmark", ("127.0.0.1", 8080), || { HttpService::build() .client_request_timeout(Duration::from_secs(1)) .finish(|_: Request| async move { let mut res = Response::build(StatusCode::OK); Ok::<_, Infallible>(res.body(&**STR)) }) .tcp() })? // limiting number of workers so that bench client is not sharing as many resources .workers(4) .run() .await } actix-http-3.9.0/examples/echo.rs000064400000000000000000000024341046102023000150270ustar 00000000000000use std::{io, time::Duration}; use actix_http::{Error, HttpService, Request, Response, StatusCode}; use actix_server::Server; use bytes::BytesMut; use futures_util::StreamExt as _; use http::header::HeaderValue; use tracing::info; #[actix_rt::main] async fn main() -> io::Result<()> { env_logger::init_from_env(env_logger::Env::new().default_filter_or("info")); Server::build() .bind("echo", ("127.0.0.1", 8080), || { HttpService::build() .client_request_timeout(Duration::from_secs(1)) .client_disconnect_timeout(Duration::from_secs(1)) // handles HTTP/1.1 and HTTP/2 .finish(|mut req: Request| async move { let mut body = BytesMut::new(); while let Some(item) = req.payload().next().await { body.extend_from_slice(&item?); } info!("request body: {:?}", body); let res = Response::build(StatusCode::OK) .insert_header(("x-head", HeaderValue::from_static("dummy value!"))) .body(body); Ok::<_, Error>(res) }) // No TLS .tcp() })? 
.run() .await } actix-http-3.9.0/examples/echo2.rs000064400000000000000000000017331046102023000151120ustar 00000000000000use std::io; use actix_http::{ body::{BodyStream, MessageBody}, header, Error, HttpMessage, HttpService, Request, Response, StatusCode, }; async fn handle_request(mut req: Request) -> Result, Error> { let mut res = Response::build(StatusCode::OK); if let Some(ct) = req.headers().get(header::CONTENT_TYPE) { res.insert_header((header::CONTENT_TYPE, ct)); } // echo request payload stream as (chunked) response body let res = res.message_body(BodyStream::new(req.payload().take()))?; Ok(res) } #[actix_rt::main] async fn main() -> io::Result<()> { env_logger::init_from_env(env_logger::Env::new().default_filter_or("info")); actix_server::Server::build() .bind("echo", ("127.0.0.1", 8080), || { HttpService::build() // handles HTTP/1.1 only .h1(handle_request) // No TLS .tcp() })? .run() .await } actix-http-3.9.0/examples/h2c-detect.rs000064400000000000000000000022251046102023000160310ustar 00000000000000//! An example that supports automatic selection of plaintext h1/h2c connections. //! //! Notably, both the following commands will work. //! ```console //! $ curl --http1.1 'http://localhost:8080/' //! $ curl --http2-prior-knowledge 'http://localhost:8080/' //! ``` use std::{convert::Infallible, io}; use actix_http::{body::BodyStream, HttpService, Request, Response, StatusCode}; use actix_server::Server; #[tokio::main(flavor = "current_thread")] async fn main() -> io::Result<()> { env_logger::init_from_env(env_logger::Env::new().default_filter_or("info")); Server::build() .bind("h2c-detect", ("127.0.0.1", 8080), || { HttpService::build() .finish(|_req: Request| async move { Ok::<_, Infallible>(Response::build(StatusCode::OK).body(BodyStream::new( futures_util::stream::iter([ Ok::<_, String>("123".into()), Err("wertyuikmnbvcxdfty6t".to_owned()), ]), ))) }) .tcp_auto_h2c() })? .workers(2) .run() .await } actix-http-3.9.0/examples/h2spec.rs000064400000000000000000000013431046102023000152730ustar 00000000000000use std::{convert::Infallible, io}; use actix_http::{HttpService, Request, Response, StatusCode}; use actix_server::Server; use once_cell::sync::Lazy; static STR: Lazy = Lazy::new(|| "HELLO WORLD ".repeat(100)); #[actix_rt::main] async fn main() -> io::Result<()> { env_logger::init_from_env(env_logger::Env::new().default_filter_or("info")); Server::build() .bind("h2spec", ("127.0.0.1", 8080), || { HttpService::build() .h2(|_: Request| async move { let mut res = Response::build(StatusCode::OK); Ok::<_, Infallible>(res.body(&**STR)) }) .tcp() })? 
.workers(4) .run() .await } actix-http-3.9.0/examples/hello-world.rs000064400000000000000000000023361046102023000163420ustar 00000000000000use std::{convert::Infallible, io, time::Duration}; use actix_http::{header::HeaderValue, HttpService, Request, Response, StatusCode}; use actix_server::Server; use tracing::info; #[actix_rt::main] async fn main() -> io::Result<()> { env_logger::init_from_env(env_logger::Env::new().default_filter_or("info")); Server::build() .bind("hello-world", ("127.0.0.1", 8080), || { HttpService::build() .client_request_timeout(Duration::from_secs(1)) .client_disconnect_timeout(Duration::from_secs(1)) .on_connect_ext(|_, ext| { ext.insert(42u32); }) .finish(|req: Request| async move { info!("{:?}", req); let mut res = Response::build(StatusCode::OK); res.insert_header(("x-head", HeaderValue::from_static("dummy value!"))); let forty_two = req.conn_data::().unwrap().to_string(); res.insert_header(("x-forty-two", HeaderValue::from_str(&forty_two).unwrap())); Ok::<_, Infallible>(res.body("Hello world!")) }) .tcp() })? .run() .await } actix-http-3.9.0/examples/streaming-error.rs000064400000000000000000000023241046102023000172270ustar 00000000000000//! Example showing response body (chunked) stream erroring. //! //! Test using `nc` or `curl`. //! ```sh //! $ curl -vN 127.0.0.1:8080 //! $ echo 'GET / HTTP/1.1\n\n' | nc 127.0.0.1 8080 //! ``` use std::{convert::Infallible, io, time::Duration}; use actix_http::{body::BodyStream, HttpService, Response}; use actix_server::Server; use async_stream::stream; use bytes::Bytes; use tracing::info; #[actix_rt::main] async fn main() -> io::Result<()> { env_logger::init_from_env(env_logger::Env::new().default_filter_or("info")); Server::build() .bind("streaming-error", ("127.0.0.1", 8080), || { HttpService::build() .finish(|req| async move { info!("{:?}", req); let res = Response::ok(); Ok::<_, Infallible>(res.set_body(BodyStream::new(stream! { yield Ok(Bytes::from("123")); yield Ok(Bytes::from("456")); actix_rt::time::sleep(Duration::from_millis(1000)).await; yield Err(io::Error::new(io::ErrorKind::Other, "")); }))) }) .tcp() })? .run() .await } actix-http-3.9.0/examples/tls_rustls.rs000064400000000000000000000042721046102023000163310ustar 00000000000000//! Demonstrates TLS configuration (via Rustls) for HTTP/1.1 and HTTP/2 connections. //! //! Test using cURL: //! //! ```console //! $ curl --insecure https://127.0.0.1:8443 //! Hello World! //! Protocol: HTTP/2.0 //! //! $ curl --insecure --http1.1 https://127.0.0.1:8443 //! Hello World! //! Protocol: HTTP/1.1 //! ``` extern crate tls_rustls_023 as rustls; use std::io; use actix_http::{Error, HttpService, Request, Response}; use actix_utils::future::ok; #[actix_rt::main] async fn main() -> io::Result<()> { env_logger::init_from_env(env_logger::Env::new().default_filter_or("info")); tracing::info!("starting HTTP server at https://127.0.0.1:8443"); actix_server::Server::build() .bind("echo", ("127.0.0.1", 8443), || { HttpService::build() .finish(|req: Request| { let body = format!( "Hello World!\n\ Protocol: {:?}", req.head().version ); ok::<_, Error>(Response::ok().set_body(body)) }) .rustls_0_23(rustls_config()) })? 
.run() .await } fn rustls_config() -> rustls::ServerConfig { let rcgen::CertifiedKey { cert, key_pair } = rcgen::generate_simple_self_signed(["localhost".to_owned()]).unwrap(); let cert_file = cert.pem(); let key_file = key_pair.serialize_pem(); let cert_file = &mut io::BufReader::new(cert_file.as_bytes()); let key_file = &mut io::BufReader::new(key_file.as_bytes()); let cert_chain = rustls_pemfile::certs(cert_file) .collect::, _>>() .unwrap(); let mut keys = rustls_pemfile::pkcs8_private_keys(key_file) .collect::, _>>() .unwrap(); let mut config = rustls::ServerConfig::builder() .with_no_client_auth() .with_single_cert( cert_chain, rustls::pki_types::PrivateKeyDer::Pkcs8(keys.remove(0)), ) .unwrap(); const H1_ALPN: &[u8] = b"http/1.1"; const H2_ALPN: &[u8] = b"h2"; config.alpn_protocols.push(H2_ALPN.to_vec()); config.alpn_protocols.push(H1_ALPN.to_vec()); config } actix-http-3.9.0/examples/ws.rs000064400000000000000000000060531046102023000145430ustar 00000000000000//! Sets up a WebSocket server over TCP and TLS. //! Sends a heartbeat message every 4 seconds but does not respond to any incoming frames. extern crate tls_rustls_023 as rustls; use std::{ io, pin::Pin, task::{Context, Poll}, time::Duration, }; use actix_http::{body::BodyStream, error::Error, ws, HttpService, Request, Response}; use actix_rt::time::{interval, Interval}; use actix_server::Server; use bytes::{Bytes, BytesMut}; use bytestring::ByteString; use futures_core::{ready, Stream}; use tokio_util::codec::Encoder; use tracing::{info, trace}; #[actix_rt::main] async fn main() -> io::Result<()> { env_logger::init_from_env(env_logger::Env::new().default_filter_or("info")); Server::build() .bind("tcp", ("127.0.0.1", 8080), || { HttpService::build().h1(handler).tcp() })? .bind("tls", ("127.0.0.1", 8443), || { HttpService::build() .finish(handler) .rustls_0_23(tls_config()) })? 
.run() .await } async fn handler(req: Request) -> Result>, Error> { info!("handshaking"); let mut res = ws::handshake(req.head())?; // handshake will always fail under HTTP/2 info!("responding"); res.message_body(BodyStream::new(Heartbeat::new(ws::Codec::new()))) } struct Heartbeat { codec: ws::Codec, interval: Interval, } impl Heartbeat { fn new(codec: ws::Codec) -> Self { Self { codec, interval: interval(Duration::from_secs(4)), } } } impl Stream for Heartbeat { type Item = Result; fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { trace!("poll"); ready!(self.as_mut().interval.poll_tick(cx)); let mut buffer = BytesMut::new(); self.as_mut() .codec .encode( ws::Message::Text(ByteString::from_static("hello world")), &mut buffer, ) .unwrap(); Poll::Ready(Some(Ok(buffer.freeze()))) } } fn tls_config() -> rustls::ServerConfig { use std::io::BufReader; use rustls_pemfile::{certs, pkcs8_private_keys}; let rcgen::CertifiedKey { cert, key_pair } = rcgen::generate_simple_self_signed(["localhost".to_owned()]).unwrap(); let cert_file = cert.pem(); let key_file = key_pair.serialize_pem(); let cert_file = &mut BufReader::new(cert_file.as_bytes()); let key_file = &mut BufReader::new(key_file.as_bytes()); let cert_chain = certs(cert_file).collect::, _>>().unwrap(); let mut keys = pkcs8_private_keys(key_file) .collect::, _>>() .unwrap(); let mut config = rustls::ServerConfig::builder() .with_no_client_auth() .with_single_cert( cert_chain, rustls::pki_types::PrivateKeyDer::Pkcs8(keys.remove(0)), ) .unwrap(); config.alpn_protocols.push(b"http/1.1".to_vec()); config.alpn_protocols.push(b"h2".to_vec()); config } actix-http-3.9.0/src/body/body_stream.rs000064400000000000000000000145171046102023000163340ustar 00000000000000use std::{ error::Error as StdError, pin::Pin, task::{Context, Poll}, }; use bytes::Bytes; use futures_core::{ready, Stream}; use pin_project_lite::pin_project; use super::{BodySize, MessageBody}; pin_project! { /// Streaming response wrapper. /// /// Response does not contain `Content-Length` header and appropriate transfer encoding is used. pub struct BodyStream { #[pin] stream: S, } } // TODO: from_infallible method impl BodyStream where S: Stream>, E: Into> + 'static, { #[inline] pub fn new(stream: S) -> Self { BodyStream { stream } } } impl MessageBody for BodyStream where S: Stream>, E: Into> + 'static, { type Error = E; #[inline] fn size(&self) -> BodySize { BodySize::Stream } /// Attempts to pull out the next value of the underlying [`Stream`]. /// /// Empty values are skipped to prevent [`BodyStream`]'s transmission being ended on a /// zero-length chunk, but rather proceed until the underlying [`Stream`] ends. 
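    ///
    /// For illustration, a minimal sketch of constructing such a body over a stream that contains
    /// a zero-length chunk; `futures_util` is assumed to be available here purely for building the
    /// stream:
    ///
    /// ```
    /// use std::convert::Infallible;
    ///
    /// use actix_http::body::{BodySize, BodyStream, MessageBody as _};
    /// use bytes::Bytes;
    /// use futures_util::stream;
    ///
    /// // the middle, zero-length chunk is skipped when this body is polled,
    /// // rather than being treated as the end of the stream
    /// let body = BodyStream::new(stream::iter(
    ///     ["1", "", "2"].map(|v| Ok::<_, Infallible>(Bytes::from(v))),
    /// ));
    /// assert_eq!(body.size(), BodySize::Stream);
    /// ```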
fn poll_next( mut self: Pin<&mut Self>, cx: &mut Context<'_>, ) -> Poll>> { loop { let stream = self.as_mut().project().stream; let chunk = match ready!(stream.poll_next(cx)) { Some(Ok(ref bytes)) if bytes.is_empty() => continue, opt => opt, }; return Poll::Ready(chunk); } } } #[cfg(test)] mod tests { use std::{convert::Infallible, time::Duration}; use actix_rt::{ pin, time::{sleep, Sleep}, }; use actix_utils::future::poll_fn; use derive_more::{Display, Error}; use futures_core::ready; use futures_util::{stream, FutureExt as _}; use pin_project_lite::pin_project; use static_assertions::{assert_impl_all, assert_not_impl_any}; use super::*; use crate::body::to_bytes; assert_impl_all!(BodyStream>>: MessageBody); assert_impl_all!(BodyStream>>: MessageBody); assert_impl_all!(BodyStream>>: MessageBody); assert_impl_all!(BodyStream>>: MessageBody); assert_impl_all!(BodyStream>>: MessageBody); assert_not_impl_any!(BodyStream>: MessageBody); assert_not_impl_any!(BodyStream>: MessageBody); // crate::Error is not Clone assert_not_impl_any!(BodyStream>>: MessageBody); #[actix_rt::test] async fn skips_empty_chunks() { let body = BodyStream::new(stream::iter( ["1", "", "2"] .iter() .map(|&v| Ok::<_, Infallible>(Bytes::from(v))), )); pin!(body); assert_eq!( poll_fn(|cx| body.as_mut().poll_next(cx)) .await .unwrap() .ok(), Some(Bytes::from("1")), ); assert_eq!( poll_fn(|cx| body.as_mut().poll_next(cx)) .await .unwrap() .ok(), Some(Bytes::from("2")), ); } #[actix_rt::test] async fn read_to_bytes() { let body = BodyStream::new(stream::iter( ["1", "", "2"] .iter() .map(|&v| Ok::<_, Infallible>(Bytes::from(v))), )); assert_eq!(to_bytes(body).await.ok(), Some(Bytes::from("12"))); } #[derive(Debug, Display, Error)] #[display(fmt = "stream error")] struct StreamErr; #[actix_rt::test] async fn stream_immediate_error() { let body = BodyStream::new(stream::once(async { Err(StreamErr) })); assert!(matches!(to_bytes(body).await, Err(StreamErr))); } #[actix_rt::test] async fn stream_string_error() { // `&'static str` does not impl `Error` // but it does impl `Into>` let body = BodyStream::new(stream::once(async { Err("stringy error") })); assert!(matches!(to_bytes(body).await, Err("stringy error"))); } #[actix_rt::test] async fn stream_boxed_error() { // `Box` does not impl `Error` // but it does impl `Into>` let body = BodyStream::new(stream::once(async { Err(Box::::from("stringy error")) })); assert_eq!( to_bytes(body).await.unwrap_err().to_string(), "stringy error" ); } #[actix_rt::test] async fn stream_delayed_error() { let body = BodyStream::new(stream::iter(vec![Ok(Bytes::from("1")), Err(StreamErr)])); assert!(matches!(to_bytes(body).await, Err(StreamErr))); pin_project! 
{ #[derive(Debug)] #[project = TimeDelayStreamProj] enum TimeDelayStream { Start, Sleep { delay: Pin> }, Done, } } impl Stream for TimeDelayStream { type Item = Result; fn poll_next( mut self: Pin<&mut Self>, cx: &mut Context<'_>, ) -> Poll> { match self.as_mut().get_mut() { TimeDelayStream::Start => { let sleep = sleep(Duration::from_millis(1)); self.as_mut().set(TimeDelayStream::Sleep { delay: Box::pin(sleep), }); cx.waker().wake_by_ref(); Poll::Pending } TimeDelayStream::Sleep { ref mut delay } => { ready!(delay.poll_unpin(cx)); self.set(TimeDelayStream::Done); cx.waker().wake_by_ref(); Poll::Pending } TimeDelayStream::Done => Poll::Ready(Some(Err(StreamErr))), } } } let body = BodyStream::new(TimeDelayStream::Start); assert!(matches!(to_bytes(body).await, Err(StreamErr))); } } actix-http-3.9.0/src/body/boxed.rs000064400000000000000000000064731046102023000151270ustar 00000000000000use std::{ error::Error as StdError, fmt, pin::Pin, task::{Context, Poll}, }; use bytes::Bytes; use super::{BodySize, MessageBody, MessageBodyMapErr}; use crate::body; /// A boxed message body with boxed errors. #[derive(Debug)] pub struct BoxBody(BoxBodyInner); enum BoxBodyInner { None(body::None), Bytes(Bytes), Stream(Pin>>>), } impl fmt::Debug for BoxBodyInner { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { match self { Self::None(arg0) => f.debug_tuple("None").field(arg0).finish(), Self::Bytes(arg0) => f.debug_tuple("Bytes").field(arg0).finish(), Self::Stream(_) => f.debug_tuple("Stream").field(&"dyn MessageBody").finish(), } } } impl BoxBody { /// Boxes body type, erasing type information. /// /// If the body type to wrap is unknown or generic it is better to use [`MessageBody::boxed`] to /// avoid double boxing. #[inline] pub fn new(body: B) -> Self where B: MessageBody + 'static, { match body.size() { BodySize::None => Self(BoxBodyInner::None(body::None)), _ => match body.try_into_bytes() { Ok(bytes) => Self(BoxBodyInner::Bytes(bytes)), Err(body) => { let body = MessageBodyMapErr::new(body, Into::into); Self(BoxBodyInner::Stream(Box::pin(body))) } }, } } /// Returns a mutable pinned reference to the inner message body type. 
#[inline] pub fn as_pin_mut(&mut self) -> Pin<&mut Self> { Pin::new(self) } } impl MessageBody for BoxBody { type Error = Box; #[inline] fn size(&self) -> BodySize { match &self.0 { BoxBodyInner::None(none) => none.size(), BoxBodyInner::Bytes(bytes) => bytes.size(), BoxBodyInner::Stream(stream) => stream.size(), } } #[inline] fn poll_next( mut self: Pin<&mut Self>, cx: &mut Context<'_>, ) -> Poll>> { match &mut self.0 { BoxBodyInner::None(body) => Pin::new(body).poll_next(cx).map_err(|err| match err {}), BoxBodyInner::Bytes(body) => Pin::new(body).poll_next(cx).map_err(|err| match err {}), BoxBodyInner::Stream(body) => Pin::new(body).poll_next(cx), } } #[inline] fn try_into_bytes(self) -> Result { match self.0 { BoxBodyInner::None(body) => Ok(body.try_into_bytes().unwrap()), BoxBodyInner::Bytes(body) => Ok(body.try_into_bytes().unwrap()), _ => Err(self), } } #[inline] fn boxed(self) -> BoxBody { self } } #[cfg(test)] mod tests { use static_assertions::{assert_impl_all, assert_not_impl_any}; use super::*; use crate::body::to_bytes; assert_impl_all!(BoxBody: fmt::Debug, MessageBody, Unpin); assert_not_impl_any!(BoxBody: Send, Sync); #[actix_rt::test] async fn nested_boxed_body() { let body = Bytes::from_static(&[1, 2, 3]); let boxed_body = BoxBody::new(BoxBody::new(body)); assert_eq!( to_bytes(boxed_body).await.unwrap(), Bytes::from(vec![1, 2, 3]), ); } } actix-http-3.9.0/src/body/either.rs000064400000000000000000000067041046102023000153030ustar 00000000000000use std::{ pin::Pin, task::{Context, Poll}, }; use bytes::Bytes; use pin_project_lite::pin_project; use super::{BodySize, BoxBody, MessageBody}; use crate::Error; pin_project! { /// An "either" type specialized for body types. /// /// It is common, in middleware especially, to conditionally return an inner service's unknown/ /// generic body `B` type or return early with a new response. This type's "right" variant /// defaults to `BoxBody` since error responses are the common case. /// /// For example, middleware will often have `type Response = ServiceResponse>`. /// This means that the inner service's response body type maps to the `Left` variant and the /// middleware's own error responses use the default `Right` variant of `BoxBody`. Of course, /// there's no reason it couldn't use `EitherBody` instead if its alternative /// responses have a known type. #[project = EitherBodyProj] #[derive(Debug, Clone)] pub enum EitherBody { /// A body of type `L`. Left { #[pin] body: L }, /// A body of type `R`. Right { #[pin] body: R }, } } impl EitherBody { /// Creates new `EitherBody` left variant with a boxed right variant. /// /// If the expected `R` type will be inferred and is not `BoxBody` then use the /// [`left`](Self::left) constructor instead. #[inline] pub fn new(body: L) -> Self { Self::Left { body } } } impl EitherBody { /// Creates new `EitherBody` using left variant. #[inline] pub fn left(body: L) -> Self { Self::Left { body } } /// Creates new `EitherBody` using right variant. 
#[inline] pub fn right(body: R) -> Self { Self::Right { body } } } impl MessageBody for EitherBody where L: MessageBody + 'static, R: MessageBody + 'static, { type Error = Error; #[inline] fn size(&self) -> BodySize { match self { EitherBody::Left { body } => body.size(), EitherBody::Right { body } => body.size(), } } #[inline] fn poll_next( self: Pin<&mut Self>, cx: &mut Context<'_>, ) -> Poll>> { match self.project() { EitherBodyProj::Left { body } => body .poll_next(cx) .map_err(|err| Error::new_body().with_cause(err)), EitherBodyProj::Right { body } => body .poll_next(cx) .map_err(|err| Error::new_body().with_cause(err)), } } #[inline] fn try_into_bytes(self) -> Result { match self { EitherBody::Left { body } => body .try_into_bytes() .map_err(|body| EitherBody::Left { body }), EitherBody::Right { body } => body .try_into_bytes() .map_err(|body| EitherBody::Right { body }), } } #[inline] fn boxed(self) -> BoxBody { match self { EitherBody::Left { body } => body.boxed(), EitherBody::Right { body } => body.boxed(), } } } #[cfg(test)] mod tests { use super::*; #[test] fn type_parameter_inference() { let _body: EitherBody<(), _> = EitherBody::new(()); let _body: EitherBody<_, ()> = EitherBody::left(()); let _body: EitherBody<(), _> = EitherBody::right(()); } } actix-http-3.9.0/src/body/message_body.rs000064400000000000000000000515611046102023000164650ustar 00000000000000//! [`MessageBody`] trait and foreign implementations. use std::{ convert::Infallible, error::Error as StdError, mem, pin::Pin, task::{Context, Poll}, }; use bytes::{Bytes, BytesMut}; use futures_core::ready; use pin_project_lite::pin_project; use super::{BodySize, BoxBody}; /// An interface for types that can be used as a response body. /// /// It is not usually necessary to create custom body types, this trait is already [implemented for /// a large number of sensible body types](#foreign-impls) including: /// - Empty body: `()` /// - Text-based: `String`, `&'static str`, [`ByteString`](https://docs.rs/bytestring/1). /// - Byte-based: `Bytes`, `BytesMut`, `Vec`, `&'static [u8]`; /// - Streams: [`BodyStream`](super::BodyStream), [`SizedStream`](super::SizedStream) /// /// # Examples /// ``` /// # use std::convert::Infallible; /// # use std::task::{Poll, Context}; /// # use std::pin::Pin; /// # use bytes::Bytes; /// # use actix_http::body::{BodySize, MessageBody}; /// struct Repeat { /// chunk: String, /// n_times: usize, /// } /// /// impl MessageBody for Repeat { /// type Error = Infallible; /// /// fn size(&self) -> BodySize { /// BodySize::Sized((self.chunk.len() * self.n_times) as u64) /// } /// /// fn poll_next( /// self: Pin<&mut Self>, /// _cx: &mut Context<'_>, /// ) -> Poll>> { /// let payload_string = self.chunk.repeat(self.n_times); /// let payload_bytes = Bytes::from(payload_string); /// Poll::Ready(Some(Ok(payload_bytes))) /// } /// } /// ``` pub trait MessageBody { /// The type of error that will be returned if streaming body fails. /// /// Since it is not appropriate to generate a response mid-stream, it only requires `Error` for /// internal use and logging. type Error: Into>; /// Body size hint. /// /// If [`BodySize::None`] is returned, optimizations that skip reading the body are allowed. fn size(&self) -> BodySize; /// Attempt to pull out the next chunk of body bytes. /// /// # Return Value /// Similar to the `Stream` interface, there are several possible return values, each indicating /// a distinct state: /// - `Poll::Pending` means that this body's next chunk is not ready yet. 
Implementations must /// ensure that the current task will be notified when the next chunk may be ready. /// - `Poll::Ready(Some(val))` means that the body has successfully produced a chunk, `val`, /// and may produce further values on subsequent `poll_next` calls. /// - `Poll::Ready(None)` means that the body is complete, and `poll_next` should not be /// invoked again. /// /// # Panics /// Once a body is complete (i.e., `poll_next` returned `Ready(None)`), calling its `poll_next` /// method again may panic, block forever, or cause other kinds of problems; this trait places /// no requirements on the effects of such a call. However, as the `poll_next` method is not /// marked unsafe, Rust’s usual rules apply: calls must never cause UB, regardless of its state. fn poll_next( self: Pin<&mut Self>, cx: &mut Context<'_>, ) -> Poll>>; /// Try to convert into the complete chunk of body bytes. /// /// Override this method if the complete body can be trivially extracted. This is useful for /// optimizations where `poll_next` calls can be avoided. /// /// Body types with [`BodySize::None`] are allowed to return empty `Bytes`. Although, if calling /// this method, it is recommended to check `size` first and return early. /// /// # Errors /// The default implementation will error and return the original type back to the caller for /// further use. #[inline] fn try_into_bytes(self) -> Result where Self: Sized, { Err(self) } /// Wraps this body into a `BoxBody`. /// /// No-op when called on a `BoxBody`, meaning there is no risk of double boxing when calling /// this on a generic `MessageBody`. Prefer this over [`BoxBody::new`] when a boxed body /// is required. #[inline] fn boxed(self) -> BoxBody where Self: Sized + 'static, { BoxBody::new(self) } } mod foreign_impls { use std::{borrow::Cow, ops::DerefMut}; use super::*; impl MessageBody for &mut B where B: MessageBody + Unpin + ?Sized, { type Error = B::Error; fn size(&self) -> BodySize { (**self).size() } fn poll_next( mut self: Pin<&mut Self>, cx: &mut Context<'_>, ) -> Poll>> { Pin::new(&mut **self).poll_next(cx) } } impl MessageBody for Infallible { type Error = Infallible; fn size(&self) -> BodySize { match *self {} } fn poll_next( self: Pin<&mut Self>, _cx: &mut Context<'_>, ) -> Poll>> { match *self {} } } impl MessageBody for () { type Error = Infallible; #[inline] fn size(&self) -> BodySize { BodySize::Sized(0) } #[inline] fn poll_next( self: Pin<&mut Self>, _cx: &mut Context<'_>, ) -> Poll>> { Poll::Ready(None) } #[inline] fn try_into_bytes(self) -> Result { Ok(Bytes::new()) } } impl MessageBody for Box where B: MessageBody + Unpin + ?Sized, { type Error = B::Error; #[inline] fn size(&self) -> BodySize { self.as_ref().size() } #[inline] fn poll_next( self: Pin<&mut Self>, cx: &mut Context<'_>, ) -> Poll>> { Pin::new(self.get_mut().as_mut()).poll_next(cx) } } impl MessageBody for Pin where T: DerefMut + Unpin, B: MessageBody + ?Sized, { type Error = B::Error; #[inline] fn size(&self) -> BodySize { self.as_ref().size() } #[inline] fn poll_next( self: Pin<&mut Self>, cx: &mut Context<'_>, ) -> Poll>> { self.get_mut().as_mut().poll_next(cx) } } impl MessageBody for &'static [u8] { type Error = Infallible; #[inline] fn size(&self) -> BodySize { BodySize::Sized(self.len() as u64) } #[inline] fn poll_next( self: Pin<&mut Self>, _cx: &mut Context<'_>, ) -> Poll>> { if self.is_empty() { Poll::Ready(None) } else { Poll::Ready(Some(Ok(Bytes::from_static(mem::take(self.get_mut()))))) } } #[inline] fn try_into_bytes(self) -> Result { 
Ok(Bytes::from_static(self)) } } impl MessageBody for Bytes { type Error = Infallible; #[inline] fn size(&self) -> BodySize { BodySize::Sized(self.len() as u64) } #[inline] fn poll_next( self: Pin<&mut Self>, _cx: &mut Context<'_>, ) -> Poll>> { if self.is_empty() { Poll::Ready(None) } else { Poll::Ready(Some(Ok(mem::take(self.get_mut())))) } } #[inline] fn try_into_bytes(self) -> Result { Ok(self) } } impl MessageBody for BytesMut { type Error = Infallible; #[inline] fn size(&self) -> BodySize { BodySize::Sized(self.len() as u64) } #[inline] fn poll_next( self: Pin<&mut Self>, _cx: &mut Context<'_>, ) -> Poll>> { if self.is_empty() { Poll::Ready(None) } else { Poll::Ready(Some(Ok(mem::take(self.get_mut()).freeze()))) } } #[inline] fn try_into_bytes(self) -> Result { Ok(self.freeze()) } } impl MessageBody for Vec { type Error = Infallible; #[inline] fn size(&self) -> BodySize { BodySize::Sized(self.len() as u64) } #[inline] fn poll_next( self: Pin<&mut Self>, _cx: &mut Context<'_>, ) -> Poll>> { if self.is_empty() { Poll::Ready(None) } else { Poll::Ready(Some(Ok(mem::take(self.get_mut()).into()))) } } #[inline] fn try_into_bytes(self) -> Result { Ok(Bytes::from(self)) } } impl MessageBody for Cow<'static, [u8]> { type Error = Infallible; #[inline] fn size(&self) -> BodySize { BodySize::Sized(self.len() as u64) } #[inline] fn poll_next( self: Pin<&mut Self>, _cx: &mut Context<'_>, ) -> Poll>> { if self.is_empty() { Poll::Ready(None) } else { let bytes = match mem::take(self.get_mut()) { Cow::Borrowed(b) => Bytes::from_static(b), Cow::Owned(b) => Bytes::from(b), }; Poll::Ready(Some(Ok(bytes))) } } #[inline] fn try_into_bytes(self) -> Result { match self { Cow::Borrowed(b) => Ok(Bytes::from_static(b)), Cow::Owned(b) => Ok(Bytes::from(b)), } } } impl MessageBody for &'static str { type Error = Infallible; #[inline] fn size(&self) -> BodySize { BodySize::Sized(self.len() as u64) } #[inline] fn poll_next( self: Pin<&mut Self>, _cx: &mut Context<'_>, ) -> Poll>> { if self.is_empty() { Poll::Ready(None) } else { let string = mem::take(self.get_mut()); let bytes = Bytes::from_static(string.as_bytes()); Poll::Ready(Some(Ok(bytes))) } } #[inline] fn try_into_bytes(self) -> Result { Ok(Bytes::from_static(self.as_bytes())) } } impl MessageBody for String { type Error = Infallible; #[inline] fn size(&self) -> BodySize { BodySize::Sized(self.len() as u64) } #[inline] fn poll_next( self: Pin<&mut Self>, _cx: &mut Context<'_>, ) -> Poll>> { if self.is_empty() { Poll::Ready(None) } else { let string = mem::take(self.get_mut()); Poll::Ready(Some(Ok(Bytes::from(string)))) } } #[inline] fn try_into_bytes(self) -> Result { Ok(Bytes::from(self)) } } impl MessageBody for Cow<'static, str> { type Error = Infallible; #[inline] fn size(&self) -> BodySize { BodySize::Sized(self.len() as u64) } #[inline] fn poll_next( self: Pin<&mut Self>, _cx: &mut Context<'_>, ) -> Poll>> { if self.is_empty() { Poll::Ready(None) } else { let bytes = match mem::take(self.get_mut()) { Cow::Borrowed(s) => Bytes::from_static(s.as_bytes()), Cow::Owned(s) => Bytes::from(s.into_bytes()), }; Poll::Ready(Some(Ok(bytes))) } } #[inline] fn try_into_bytes(self) -> Result { match self { Cow::Borrowed(s) => Ok(Bytes::from_static(s.as_bytes())), Cow::Owned(s) => Ok(Bytes::from(s.into_bytes())), } } } impl MessageBody for bytestring::ByteString { type Error = Infallible; #[inline] fn size(&self) -> BodySize { BodySize::Sized(self.len() as u64) } #[inline] fn poll_next( self: Pin<&mut Self>, _cx: &mut Context<'_>, ) -> Poll>> { let string = 
mem::take(self.get_mut()); Poll::Ready(Some(Ok(string.into_bytes()))) } #[inline] fn try_into_bytes(self) -> Result { Ok(self.into_bytes()) } } } pin_project! { pub(crate) struct MessageBodyMapErr { #[pin] body: B, mapper: Option, } } impl MessageBodyMapErr where B: MessageBody, F: FnOnce(B::Error) -> E, { pub(crate) fn new(body: B, mapper: F) -> Self { Self { body, mapper: Some(mapper), } } } impl MessageBody for MessageBodyMapErr where B: MessageBody, F: FnOnce(B::Error) -> E, E: Into>, { type Error = E; #[inline] fn size(&self) -> BodySize { self.body.size() } fn poll_next( mut self: Pin<&mut Self>, cx: &mut Context<'_>, ) -> Poll>> { let this = self.as_mut().project(); match ready!(this.body.poll_next(cx)) { Some(Err(err)) => { let f = self.as_mut().project().mapper.take().unwrap(); let mapped_err = (f)(err); Poll::Ready(Some(Err(mapped_err))) } Some(Ok(val)) => Poll::Ready(Some(Ok(val))), None => Poll::Ready(None), } } #[inline] fn try_into_bytes(self) -> Result { let Self { body, mapper } = self; body.try_into_bytes().map_err(|body| Self { body, mapper }) } } #[cfg(test)] mod tests { use actix_rt::pin; use actix_utils::future::poll_fn; use futures_util::stream; use super::*; use crate::body::{self, EitherBody}; macro_rules! assert_poll_next { ($pin:expr, $exp:expr) => { assert_eq!( poll_fn(|cx| $pin.as_mut().poll_next(cx)) .await .unwrap() // unwrap option .unwrap(), // unwrap result $exp ); }; } macro_rules! assert_poll_next_none { ($pin:expr) => { assert!(poll_fn(|cx| $pin.as_mut().poll_next(cx)).await.is_none()); }; } #[allow(unused_allocation)] // triggered by `Box::new(()).size()` #[actix_rt::test] async fn boxing_equivalence() { assert_eq!(().size(), BodySize::Sized(0)); assert_eq!(().size(), Box::new(()).size()); assert_eq!(().size(), Box::pin(()).size()); let pl = Box::new(()); pin!(pl); assert_poll_next_none!(pl); let mut pl = Box::pin(()); assert_poll_next_none!(pl); } #[actix_rt::test] async fn mut_equivalence() { assert_eq!(().size(), BodySize::Sized(0)); assert_eq!(().size(), (&(&mut ())).size()); let pl = &mut (); pin!(pl); assert_poll_next_none!(pl); let pl = &mut Box::new(()); pin!(pl); assert_poll_next_none!(pl); let mut body = body::SizedStream::new( 8, stream::iter([ Ok::<_, std::io::Error>(Bytes::from("1234")), Ok(Bytes::from("5678")), ]), ); let body = &mut body; assert_eq!(body.size(), BodySize::Sized(8)); pin!(body); assert_poll_next!(body, Bytes::from_static(b"1234")); assert_poll_next!(body, Bytes::from_static(b"5678")); assert_poll_next_none!(body); } #[allow(clippy::let_unit_value)] #[actix_rt::test] async fn test_unit() { let pl = (); assert_eq!(pl.size(), BodySize::Sized(0)); pin!(pl); assert_poll_next_none!(pl); } #[actix_rt::test] async fn test_static_str() { assert_eq!("".size(), BodySize::Sized(0)); assert_eq!("test".size(), BodySize::Sized(4)); let pl = "test"; pin!(pl); assert_poll_next!(pl, Bytes::from("test")); } #[actix_rt::test] async fn test_static_bytes() { assert_eq!(b"".as_ref().size(), BodySize::Sized(0)); assert_eq!(b"test".as_ref().size(), BodySize::Sized(4)); let pl = b"test".as_ref(); pin!(pl); assert_poll_next!(pl, Bytes::from("test")); } #[actix_rt::test] async fn test_vec() { assert_eq!(vec![0; 0].size(), BodySize::Sized(0)); assert_eq!(Vec::from("test").size(), BodySize::Sized(4)); let pl = Vec::from("test"); pin!(pl); assert_poll_next!(pl, Bytes::from("test")); } #[actix_rt::test] async fn test_bytes() { assert_eq!(Bytes::new().size(), BodySize::Sized(0)); assert_eq!(Bytes::from_static(b"test").size(), BodySize::Sized(4)); let pl = 
Bytes::from_static(b"test"); pin!(pl); assert_poll_next!(pl, Bytes::from("test")); } #[actix_rt::test] async fn test_bytes_mut() { assert_eq!(BytesMut::new().size(), BodySize::Sized(0)); assert_eq!(BytesMut::from(b"test".as_ref()).size(), BodySize::Sized(4)); let pl = BytesMut::from("test"); pin!(pl); assert_poll_next!(pl, Bytes::from("test")); } #[actix_rt::test] async fn test_string() { assert_eq!(String::new().size(), BodySize::Sized(0)); assert_eq!("test".to_owned().size(), BodySize::Sized(4)); let pl = "test".to_owned(); pin!(pl); assert_poll_next!(pl, Bytes::from("test")); } #[actix_rt::test] async fn complete_body_combinators() { let body = Bytes::from_static(b"test"); let body = BoxBody::new(body); let body = EitherBody::<_, ()>::left(body); let body = EitherBody::<(), _>::right(body); // Do not support try_into_bytes: // let body = Box::new(body); // let body = Box::pin(body); assert_eq!(body.try_into_bytes().unwrap(), Bytes::from("test")); } #[actix_rt::test] async fn complete_body_combinators_poll() { let body = Bytes::from_static(b"test"); let body = BoxBody::new(body); let body = EitherBody::<_, ()>::left(body); let body = EitherBody::<(), _>::right(body); let mut body = body; assert_eq!(body.size(), BodySize::Sized(4)); assert_poll_next!(Pin::new(&mut body), Bytes::from("test")); assert_poll_next_none!(Pin::new(&mut body)); } #[actix_rt::test] async fn none_body_combinators() { fn none_body() -> BoxBody { let body = body::None; let body = BoxBody::new(body); let body = EitherBody::<_, ()>::left(body); let body = EitherBody::<(), _>::right(body); body.boxed() } assert_eq!(none_body().size(), BodySize::None); assert_eq!(none_body().try_into_bytes().unwrap(), Bytes::new()); assert_poll_next_none!(Pin::new(&mut none_body())); } // down-casting used to be done with a method on MessageBody trait // test is kept to demonstrate equivalence of Any trait #[actix_rt::test] async fn test_body_casting() { let mut body = String::from("hello cast"); // let mut resp_body: &mut dyn MessageBody = &mut body; let resp_body: &mut dyn std::any::Any = &mut body; let body = resp_body.downcast_ref::().unwrap(); assert_eq!(body, "hello cast"); let body = &mut resp_body.downcast_mut::().unwrap(); body.push('!'); let body = resp_body.downcast_ref::().unwrap(); assert_eq!(body, "hello cast!"); let not_body = resp_body.downcast_ref::<()>(); assert!(not_body.is_none()); } #[actix_rt::test] async fn non_owning_to_bytes() { let mut body = BoxBody::new(()); let bytes = body::to_bytes(&mut body).await.unwrap(); assert_eq!(bytes, Bytes::new()); let mut body = body::BodyStream::new(stream::iter([ Ok::<_, std::io::Error>(Bytes::from("1234")), Ok(Bytes::from("5678")), ])); let bytes = body::to_bytes(&mut body).await.unwrap(); assert_eq!(bytes, Bytes::from_static(b"12345678")); } } actix-http-3.9.0/src/body/mod.rs000064400000000000000000000013411046102023000145720ustar 00000000000000//! Traits and structures to aid consuming and writing HTTP payloads. //! //! "Body" and "payload" are used somewhat interchangeably in this documentation. // Though the spec kinda reads like "payload" is the possibly-transfer-encoded part of the message // and the "body" is the intended possibly-decoded version of that. 
mod body_stream;
mod boxed;
mod either;
mod message_body;
mod none;
mod size;
mod sized_stream;
mod utils;

pub(crate) use self::message_body::MessageBodyMapErr;
pub use self::{
    body_stream::BodyStream,
    boxed::BoxBody,
    either::EitherBody,
    message_body::MessageBody,
    none::None,
    size::BodySize,
    sized_stream::SizedStream,
    utils::{to_bytes, to_bytes_limited, BodyLimitExceeded},
};
actix-http-3.9.0/src/body/none.rs000064400000000000000000000022301046102023000147500ustar 00000000000000
use std::{
    convert::Infallible,
    pin::Pin,
    task::{Context, Poll},
};

use bytes::Bytes;

use super::{BodySize, MessageBody};

/// Body type for responses that forbid payloads.
///
/// This is distinct from an "empty" response which _would_ contain a `Content-Length` header.
/// For an "empty" body, use `()` or `Bytes::new()`.
///
/// For example, the HTTP spec forbids a payload to be sent with a `204 No Content` response.
/// In this case, the payload (or lack thereof) is implicit from the status code, so a
/// `Content-Length` header is not required.
#[derive(Debug, Clone, Copy, Default)]
#[non_exhaustive]
pub struct None;

impl None {
    /// Constructs new "none" body.
    #[inline]
    pub fn new() -> Self {
        None
    }
}

impl MessageBody for None {
    type Error = Infallible;

    #[inline]
    fn size(&self) -> BodySize {
        BodySize::None
    }

    #[inline]
    fn poll_next(
        self: Pin<&mut Self>,
        _cx: &mut Context<'_>,
    ) -> Poll<Option<Result<Bytes, Self::Error>>> {
        Poll::Ready(Option::None)
    }

    #[inline]
    fn try_into_bytes(self) -> Result<Bytes, Self> {
        Ok(Bytes::new())
    }
}
actix-http-3.9.0/src/body/size.rs000064400000000000000000000024211046102023000147650ustar 00000000000000
/// Body size hint.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum BodySize {
    /// Implicitly empty body.
    ///
    /// Will omit the Content-Length header. Used for responses to certain methods (e.g., `HEAD`) or
    /// with particular status codes (e.g., 204 No Content). Consumers that read this as a body size
    /// hint are allowed to make optimizations that skip reading or writing the payload.
    None,

    /// Known size body.
    ///
    /// Will write `Content-Length: N` header.
    Sized(u64),

    /// Unknown size body.
    ///
    /// Will not write Content-Length header. Can be used with chunked Transfer-Encoding.
    Stream,
}

impl BodySize {
    /// Equivalent to `BodySize::Sized(0)`.
    pub const ZERO: Self = Self::Sized(0);

    /// Returns true if size hint indicates omitted or empty body.
    ///
    /// Streams will return false because it cannot be known without reading the stream.
    ///
    /// ```
    /// # use actix_http::body::BodySize;
    /// assert!(BodySize::None.is_eof());
    /// assert!(BodySize::Sized(0).is_eof());
    ///
    /// assert!(!BodySize::Sized(64).is_eof());
    /// assert!(!BodySize::Stream.is_eof());
    /// ```
    pub fn is_eof(&self) -> bool {
        matches!(self, BodySize::None | BodySize::Sized(0))
    }
}
actix-http-3.9.0/src/body/sized_stream.rs000064400000000000000000000114751046102023000165150ustar 00000000000000
use std::{
    error::Error as StdError,
    pin::Pin,
    task::{Context, Poll},
};

use bytes::Bytes;
use futures_core::{ready, Stream};
use pin_project_lite::pin_project;

use super::{BodySize, MessageBody};

pin_project! {
    /// Known sized streaming response wrapper.
    ///
    /// This body implementation should be used if total size of stream is known. Data is sent as-is
    /// without using chunked transfer encoding.
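    ///
    /// # Examples
    ///
    /// A minimal sketch of constructing a sized body from an in-memory stream (the chunk values
    /// are illustrative; the declared size must match the total number of bytes the stream
    /// yields, mirroring this module's tests):
    ///
    /// ```
    /// use std::convert::Infallible;
    ///
    /// use actix_http::body::SizedStream;
    /// use bytes::Bytes;
    /// use futures_util::stream;
    ///
    /// // 8 bytes total: "1234" + "5678"
    /// let body = SizedStream::new(
    ///     8,
    ///     stream::iter([
    ///         Ok::<_, Infallible>(Bytes::from_static(b"1234")),
    ///         Ok(Bytes::from_static(b"5678")),
    ///     ]),
    /// );
    /// ```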
pub struct SizedStream { size: u64, #[pin] stream: S, } } impl SizedStream where S: Stream>, E: Into> + 'static, { #[inline] pub fn new(size: u64, stream: S) -> Self { SizedStream { size, stream } } } // TODO: from_infallible method impl MessageBody for SizedStream where S: Stream>, E: Into> + 'static, { type Error = E; #[inline] fn size(&self) -> BodySize { BodySize::Sized(self.size) } /// Attempts to pull out the next value of the underlying [`Stream`]. /// /// Empty values are skipped to prevent [`SizedStream`]'s transmission being /// ended on a zero-length chunk, but rather proceed until the underlying /// [`Stream`] ends. fn poll_next( mut self: Pin<&mut Self>, cx: &mut Context<'_>, ) -> Poll>> { loop { let stream = self.as_mut().project().stream; let chunk = match ready!(stream.poll_next(cx)) { Some(Ok(ref bytes)) if bytes.is_empty() => continue, val => val, }; return Poll::Ready(chunk); } } } #[cfg(test)] mod tests { use std::convert::Infallible; use actix_rt::pin; use actix_utils::future::poll_fn; use futures_util::stream; use static_assertions::{assert_impl_all, assert_not_impl_any}; use super::*; use crate::body::to_bytes; assert_impl_all!(SizedStream>>: MessageBody); assert_impl_all!(SizedStream>>: MessageBody); assert_impl_all!(SizedStream>>: MessageBody); assert_impl_all!(SizedStream>>: MessageBody); assert_impl_all!(SizedStream>>: MessageBody); assert_not_impl_any!(SizedStream>: MessageBody); assert_not_impl_any!(SizedStream>: MessageBody); // crate::Error is not Clone assert_not_impl_any!(SizedStream>>: MessageBody); #[actix_rt::test] async fn skips_empty_chunks() { let body = SizedStream::new( 2, stream::iter( ["1", "", "2"] .iter() .map(|&v| Ok::<_, Infallible>(Bytes::from(v))), ), ); pin!(body); assert_eq!( poll_fn(|cx| body.as_mut().poll_next(cx)) .await .unwrap() .ok(), Some(Bytes::from("1")), ); assert_eq!( poll_fn(|cx| body.as_mut().poll_next(cx)) .await .unwrap() .ok(), Some(Bytes::from("2")), ); } #[actix_rt::test] async fn read_to_bytes() { let body = SizedStream::new( 2, stream::iter( ["1", "", "2"] .iter() .map(|&v| Ok::<_, Infallible>(Bytes::from(v))), ), ); assert_eq!(to_bytes(body).await.ok(), Some(Bytes::from("12"))); } #[actix_rt::test] async fn stream_string_error() { // `&'static str` does not impl `Error` // but it does impl `Into>` let body = SizedStream::new(0, stream::once(async { Err("stringy error") })); assert_eq!(to_bytes(body).await, Ok(Bytes::new())); let body = SizedStream::new(1, stream::once(async { Err("stringy error") })); assert!(matches!(to_bytes(body).await, Err("stringy error"))); } #[actix_rt::test] async fn stream_boxed_error() { // `Box` does not impl `Error` // but it does impl `Into>` let body = SizedStream::new( 0, stream::once(async { Err(Box::::from("stringy error")) }), ); assert_eq!(to_bytes(body).await.unwrap(), Bytes::new()); let body = SizedStream::new( 1, stream::once(async { Err(Box::::from("stringy error")) }), ); assert_eq!( to_bytes(body).await.unwrap_err().to_string(), "stringy error" ); } } actix-http-3.9.0/src/body/utils.rs000064400000000000000000000145041046102023000151600ustar 00000000000000use std::task::Poll; use actix_rt::pin; use actix_utils::future::poll_fn; use bytes::{Bytes, BytesMut}; use derive_more::{Display, Error}; use futures_core::ready; use super::{BodySize, MessageBody}; /// Collects all the bytes produced by `body`. /// /// Any errors produced by the body stream are returned immediately. /// /// Consider using [`to_bytes_limited`] instead to protect against memory exhaustion. 
/// /// # Examples /// /// ``` /// use actix_http::body::{self, to_bytes}; /// use bytes::Bytes; /// /// # actix_rt::System::new().block_on(async { /// let body = body::None::new(); /// let bytes = to_bytes(body).await.unwrap(); /// assert!(bytes.is_empty()); /// /// let body = Bytes::from_static(b"123"); /// let bytes = to_bytes(body).await.unwrap(); /// assert_eq!(bytes, "123"); /// # }); /// ``` pub async fn to_bytes(body: B) -> Result { to_bytes_limited(body, usize::MAX) .await .expect("body should never yield more than usize::MAX bytes") } /// Error type returned from [`to_bytes_limited`] when body produced exceeds limit. #[derive(Debug, Display, Error)] #[display(fmt = "limit exceeded while collecting body bytes")] #[non_exhaustive] pub struct BodyLimitExceeded; /// Collects the bytes produced by `body`, up to `limit` bytes. /// /// If a chunk read from `poll_next` causes the total number of bytes read to exceed `limit`, an /// `Err(BodyLimitExceeded)` is returned. /// /// Any errors produced by the body stream are returned immediately as `Ok(Err(B::Error))`. /// /// # Examples /// /// ``` /// use actix_http::body::{self, to_bytes_limited}; /// use bytes::Bytes; /// /// # actix_rt::System::new().block_on(async { /// let body = body::None::new(); /// let bytes = to_bytes_limited(body, 10).await.unwrap().unwrap(); /// assert!(bytes.is_empty()); /// /// let body = Bytes::from_static(b"123"); /// let bytes = to_bytes_limited(body, 10).await.unwrap().unwrap(); /// assert_eq!(bytes, "123"); /// /// let body = Bytes::from_static(b"123"); /// assert!(to_bytes_limited(body, 2).await.is_err()); /// # }); /// ``` pub async fn to_bytes_limited( body: B, limit: usize, ) -> Result, BodyLimitExceeded> { /// Sensible default (32kB) for initial, bounded allocation when collecting body bytes. const INITIAL_ALLOC_BYTES: usize = 32 * 1024; let cap = match body.size() { BodySize::None | BodySize::Sized(0) => return Ok(Ok(Bytes::new())), BodySize::Sized(size) if size as usize > limit => return Err(BodyLimitExceeded), BodySize::Sized(size) => (size as usize).min(INITIAL_ALLOC_BYTES), BodySize::Stream => INITIAL_ALLOC_BYTES, }; let mut exceeded_limit = false; let mut buf = BytesMut::with_capacity(cap); pin!(body); match poll_fn(|cx| loop { let body = body.as_mut(); match ready!(body.poll_next(cx)) { Some(Ok(bytes)) => { // if limit is exceeded... 
if buf.len() + bytes.len() > limit { // ...set flag to true and break out of poll_fn exceeded_limit = true; return Poll::Ready(Ok(())); } buf.extend_from_slice(&bytes) } None => return Poll::Ready(Ok(())), Some(Err(err)) => return Poll::Ready(Err(err)), } }) .await { // propagate error returned from body poll Err(err) => Ok(Err(err)), // limit was exceeded while reading body Ok(()) if exceeded_limit => Err(BodyLimitExceeded), // otherwise return body buffer Ok(()) => Ok(Ok(buf.freeze())), } } #[cfg(test)] mod tests { use std::io; use futures_util::{stream, StreamExt as _}; use super::*; use crate::{ body::{BodyStream, SizedStream}, Error, }; #[actix_rt::test] async fn to_bytes_complete() { let bytes = to_bytes(()).await.unwrap(); assert!(bytes.is_empty()); let body = Bytes::from_static(b"123"); let bytes = to_bytes(body).await.unwrap(); assert_eq!(bytes, b"123"[..]); } #[actix_rt::test] async fn to_bytes_streams() { let stream = stream::iter(vec![Bytes::from_static(b"123"), Bytes::from_static(b"abc")]) .map(Ok::<_, Error>); let body = BodyStream::new(stream); let bytes = to_bytes(body).await.unwrap(); assert_eq!(bytes, b"123abc"[..]); } #[actix_rt::test] async fn to_bytes_limited_complete() { let bytes = to_bytes_limited((), 0).await.unwrap().unwrap(); assert!(bytes.is_empty()); let bytes = to_bytes_limited((), 1).await.unwrap().unwrap(); assert!(bytes.is_empty()); assert!(to_bytes_limited(Bytes::from_static(b"12"), 0) .await .is_err()); assert!(to_bytes_limited(Bytes::from_static(b"12"), 1) .await .is_err()); assert!(to_bytes_limited(Bytes::from_static(b"12"), 2).await.is_ok()); assert!(to_bytes_limited(Bytes::from_static(b"12"), 3).await.is_ok()); } #[actix_rt::test] async fn to_bytes_limited_streams() { // hinting a larger body fails let body = SizedStream::new(8, stream::empty().map(Ok::<_, Error>)); assert!(to_bytes_limited(body, 3).await.is_err()); // hinting a smaller body is okay let body = SizedStream::new(3, stream::empty().map(Ok::<_, Error>)); assert!(to_bytes_limited(body, 3).await.unwrap().unwrap().is_empty()); // hinting a smaller body then returning a larger one fails let stream = stream::iter(vec![Bytes::from_static(b"1234")]).map(Ok::<_, Error>); let body = SizedStream::new(3, stream); assert!(to_bytes_limited(body, 3).await.is_err()); let stream = stream::iter(vec![Bytes::from_static(b"123"), Bytes::from_static(b"abc")]) .map(Ok::<_, Error>); let body = BodyStream::new(stream); assert!(to_bytes_limited(body, 3).await.is_err()); } #[actix_rt::test] async fn to_body_limit_error() { let err_stream = stream::once(async { Err(io::Error::new(io::ErrorKind::Other, "")) }); let body = SizedStream::new(8, err_stream); // not too big, but propagates error from body stream assert!(to_bytes_limited(body, 10).await.unwrap().is_err()); } } actix-http-3.9.0/src/builder.rs000064400000000000000000000214361046102023000145130ustar 00000000000000use std::{fmt, marker::PhantomData, net, rc::Rc, time::Duration}; use actix_codec::Framed; use actix_service::{IntoServiceFactory, Service, ServiceFactory}; use crate::{ body::{BoxBody, MessageBody}, h1::{self, ExpectHandler, H1Service, UpgradeHandler}, service::HttpService, ConnectCallback, Extensions, KeepAlive, Request, Response, ServiceConfig, }; /// An HTTP service builder. /// /// This type can construct an instance of [`HttpService`] through a builder-like pattern. 
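///
/// # Examples
///
/// A minimal sketch of the builder flow, modeled on this crate's "hello world" example; the
/// handler closure, keep-alive value, and timeout shown here are illustrative:
///
/// ```
/// use std::time::Duration;
///
/// use actix_http::{Error, HttpService, Request, Response, StatusCode};
///
/// # fn configure_service() {
/// let _service = HttpService::build()
///     .keep_alive(Duration::from_secs(75))
///     .client_request_timeout(Duration::from_secs(5))
///     .finish(|_req: Request| async move {
///         Ok::<_, Error>(Response::build(StatusCode::OK).body("Hello world!"))
///     })
///     // finalize for plain TCP streams
///     .tcp();
/// # }
/// ```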
pub struct HttpServiceBuilder { keep_alive: KeepAlive, client_request_timeout: Duration, client_disconnect_timeout: Duration, secure: bool, local_addr: Option, expect: X, upgrade: Option, on_connect_ext: Option>>, _phantom: PhantomData, } impl Default for HttpServiceBuilder where S: ServiceFactory, S::Error: Into> + 'static, S::InitError: fmt::Debug, >::Future: 'static, { fn default() -> Self { HttpServiceBuilder { // ServiceConfig parts (make sure defaults match) keep_alive: KeepAlive::default(), client_request_timeout: Duration::from_secs(5), client_disconnect_timeout: Duration::ZERO, secure: false, local_addr: None, // dispatcher parts expect: ExpectHandler, upgrade: None, on_connect_ext: None, _phantom: PhantomData, } } } impl HttpServiceBuilder where S: ServiceFactory, S::Error: Into> + 'static, S::InitError: fmt::Debug, >::Future: 'static, X: ServiceFactory, X::Error: Into>, X::InitError: fmt::Debug, U: ServiceFactory<(Request, Framed), Config = (), Response = ()>, U::Error: fmt::Display, U::InitError: fmt::Debug, { /// Set connection keep-alive setting. /// /// Applies to HTTP/1.1 keep-alive and HTTP/2 ping-pong. /// /// By default keep-alive is 5 seconds. pub fn keep_alive>(mut self, val: W) -> Self { self.keep_alive = val.into(); self } /// Set connection secure state pub fn secure(mut self) -> Self { self.secure = true; self } /// Set the local address that this service is bound to. pub fn local_addr(mut self, addr: net::SocketAddr) -> Self { self.local_addr = Some(addr); self } /// Set client request timeout (for first request). /// /// Defines a timeout for reading client request header. If the client does not transmit the /// request head within this duration, the connection is terminated with a `408 Request Timeout` /// response error. /// /// A duration of zero disables the timeout. /// /// By default, the client timeout is 5 seconds. pub fn client_request_timeout(mut self, dur: Duration) -> Self { self.client_request_timeout = dur; self } #[doc(hidden)] #[deprecated(since = "3.0.0", note = "Renamed to `client_request_timeout`.")] pub fn client_timeout(self, dur: Duration) -> Self { self.client_request_timeout(dur) } /// Set client connection disconnect timeout. /// /// Defines a timeout for disconnect connection. If a disconnect procedure does not complete /// within this time, the request get dropped. This timeout affects secure connections. /// /// A duration of zero disables the timeout. /// /// By default, the disconnect timeout is disabled. pub fn client_disconnect_timeout(mut self, dur: Duration) -> Self { self.client_disconnect_timeout = dur; self } #[doc(hidden)] #[deprecated(since = "3.0.0", note = "Renamed to `client_disconnect_timeout`.")] pub fn client_disconnect(self, dur: Duration) -> Self { self.client_disconnect_timeout(dur) } /// Provide service for `EXPECT: 100-Continue` support. /// /// Service get called with request that contains `EXPECT` header. /// Service must return request in case of success, in that case /// request will be forwarded to main service. 
pub fn expect(self, expect: F) -> HttpServiceBuilder where F: IntoServiceFactory, X1: ServiceFactory, X1::Error: Into>, X1::InitError: fmt::Debug, { HttpServiceBuilder { keep_alive: self.keep_alive, client_request_timeout: self.client_request_timeout, client_disconnect_timeout: self.client_disconnect_timeout, secure: self.secure, local_addr: self.local_addr, expect: expect.into_factory(), upgrade: self.upgrade, on_connect_ext: self.on_connect_ext, _phantom: PhantomData, } } /// Provide service for custom `Connection: UPGRADE` support. /// /// If service is provided then normal requests handling get halted /// and this service get called with original request and framed object. pub fn upgrade(self, upgrade: F) -> HttpServiceBuilder where F: IntoServiceFactory)>, U1: ServiceFactory<(Request, Framed), Config = (), Response = ()>, U1::Error: fmt::Display, U1::InitError: fmt::Debug, { HttpServiceBuilder { keep_alive: self.keep_alive, client_request_timeout: self.client_request_timeout, client_disconnect_timeout: self.client_disconnect_timeout, secure: self.secure, local_addr: self.local_addr, expect: self.expect, upgrade: Some(upgrade.into_factory()), on_connect_ext: self.on_connect_ext, _phantom: PhantomData, } } /// Sets the callback to be run on connection establishment. /// /// Has mutable access to a data container that will be merged into request extensions. /// This enables transport layer data (like client certificates) to be accessed in middleware /// and handlers. pub fn on_connect_ext(mut self, f: F) -> Self where F: Fn(&T, &mut Extensions) + 'static, { self.on_connect_ext = Some(Rc::new(f)); self } /// Finish service configuration and create a service for the HTTP/1 protocol. pub fn h1(self, service: F) -> H1Service where B: MessageBody, F: IntoServiceFactory, S::Error: Into>, S::InitError: fmt::Debug, S::Response: Into>, { let cfg = ServiceConfig::new( self.keep_alive, self.client_request_timeout, self.client_disconnect_timeout, self.secure, self.local_addr, ); H1Service::with_config(cfg, service.into_factory()) .expect(self.expect) .upgrade(self.upgrade) .on_connect_ext(self.on_connect_ext) } /// Finish service configuration and create a service for the HTTP/2 protocol. #[cfg(feature = "http2")] pub fn h2(self, service: F) -> crate::h2::H2Service where F: IntoServiceFactory, S::Error: Into> + 'static, S::InitError: fmt::Debug, S::Response: Into> + 'static, B: MessageBody + 'static, { let cfg = ServiceConfig::new( self.keep_alive, self.client_request_timeout, self.client_disconnect_timeout, self.secure, self.local_addr, ); crate::h2::H2Service::with_config(cfg, service.into_factory()) .on_connect_ext(self.on_connect_ext) } /// Finish service configuration and create `HttpService` instance. pub fn finish(self, service: F) -> HttpService where F: IntoServiceFactory, S::Error: Into> + 'static, S::InitError: fmt::Debug, S::Response: Into> + 'static, B: MessageBody + 'static, { let cfg = ServiceConfig::new( self.keep_alive, self.client_request_timeout, self.client_disconnect_timeout, self.secure, self.local_addr, ); HttpService::with_config(cfg, service.into_factory()) .expect(self.expect) .upgrade(self.upgrade) .on_connect_ext(self.on_connect_ext) } } actix-http-3.9.0/src/config.rs000064400000000000000000000155461046102023000143370ustar 00000000000000use std::{ net, rc::Rc, time::{Duration, Instant}, }; use bytes::BytesMut; use crate::{date::DateService, KeepAlive}; /// HTTP service configuration. 
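///
/// # Examples
///
/// A minimal sketch of constructing a config by hand; the values shown mirror the defaults used
/// elsewhere in this module and are illustrative:
///
/// ```
/// use std::time::Duration;
///
/// use actix_http::{KeepAlive, ServiceConfig};
///
/// # actix_rt::System::new().block_on(async {
/// let config = ServiceConfig::new(
///     KeepAlive::default(),   // connection keep-alive setting
///     Duration::from_secs(5), // client request (head read) timeout
///     Duration::ZERO,         // client disconnect timeout (disabled)
///     false,                  // secure (TLS) connection?
///     None,                   // local socket address, if known
/// );
///
/// assert!(!config.secure());
/// assert!(config.local_addr().is_none());
/// # });
/// ```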
#[derive(Debug, Clone)] pub struct ServiceConfig(Rc); #[derive(Debug)] struct Inner { keep_alive: KeepAlive, client_request_timeout: Duration, client_disconnect_timeout: Duration, secure: bool, local_addr: Option, date_service: DateService, } impl Default for ServiceConfig { fn default() -> Self { Self::new( KeepAlive::default(), Duration::from_secs(5), Duration::ZERO, false, None, ) } } impl ServiceConfig { /// Create instance of `ServiceConfig`. pub fn new( keep_alive: KeepAlive, client_request_timeout: Duration, client_disconnect_timeout: Duration, secure: bool, local_addr: Option, ) -> ServiceConfig { ServiceConfig(Rc::new(Inner { keep_alive: keep_alive.normalize(), client_request_timeout, client_disconnect_timeout, secure, local_addr, date_service: DateService::new(), })) } /// Returns `true` if connection is secure (i.e., using TLS / HTTPS). #[inline] pub fn secure(&self) -> bool { self.0.secure } /// Returns the local address that this server is bound to. /// /// Returns `None` for connections via UDS (Unix Domain Socket). #[inline] pub fn local_addr(&self) -> Option { self.0.local_addr } /// Connection keep-alive setting. #[inline] pub fn keep_alive(&self) -> KeepAlive { self.0.keep_alive } /// Creates a time object representing the deadline for this connection's keep-alive period, if /// enabled. /// /// When [`KeepAlive::Os`] or [`KeepAlive::Disabled`] is set, this will return `None`. pub fn keep_alive_deadline(&self) -> Option { match self.keep_alive() { KeepAlive::Timeout(dur) => Some(self.now() + dur), KeepAlive::Os => None, KeepAlive::Disabled => None, } } /// Creates a time object representing the deadline for the client to finish sending the head of /// its first request. /// /// Returns `None` if this `ServiceConfig was` constructed with `client_request_timeout: 0`. pub fn client_request_deadline(&self) -> Option { let timeout = self.0.client_request_timeout; (timeout != Duration::ZERO).then(|| self.now() + timeout) } /// Creates a time object representing the deadline for the client to disconnect. pub fn client_disconnect_deadline(&self) -> Option { let timeout = self.0.client_disconnect_timeout; (timeout != Duration::ZERO).then(|| self.now() + timeout) } pub(crate) fn now(&self) -> Instant { self.0.date_service.now() } /// Writes date header to `dst` buffer. /// /// Low-level method that utilizes the built-in efficient date service, requiring fewer syscalls /// than normal. Note that a CRLF (`\r\n`) is included in what is written. 
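    ///
    /// # Examples
    ///
    /// A small sketch mirroring this module's tests; the exact date value written depends on the
    /// current time:
    ///
    /// ```
    /// use actix_http::ServiceConfig;
    /// use bytes::BytesMut;
    ///
    /// # actix_rt::System::new().block_on(async {
    /// let config = ServiceConfig::default();
    ///
    /// let mut buf = BytesMut::new();
    /// config.write_date_header(&mut buf, false);
    /// assert!(buf.starts_with(b"date: "));
    /// assert!(buf.ends_with(b"\r\n"));
    ///
    /// let mut buf = BytesMut::new();
    /// config.write_date_header(&mut buf, true);
    /// assert!(buf.starts_with(b"Date: "));
    /// # });
    /// ```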
#[doc(hidden)] pub fn write_date_header(&self, dst: &mut BytesMut, camel_case: bool) { let mut buf: [u8; 37] = [0; 37]; buf[..6].copy_from_slice(if camel_case { b"Date: " } else { b"date: " }); self.0 .date_service .with_date(|date| buf[6..35].copy_from_slice(&date.bytes)); buf[35..].copy_from_slice(b"\r\n"); dst.extend_from_slice(&buf); } #[allow(unused)] // used with `http2` feature flag pub(crate) fn write_date_header_value(&self, dst: &mut BytesMut) { self.0 .date_service .with_date(|date| dst.extend_from_slice(&date.bytes)); } } #[cfg(test)] mod tests { use actix_rt::{ task::yield_now, time::{sleep, sleep_until}, }; use memchr::memmem; use super::*; use crate::{date::DATE_VALUE_LENGTH, notify_on_drop}; #[actix_rt::test] async fn test_date_service_update() { let settings = ServiceConfig::new(KeepAlive::Os, Duration::ZERO, Duration::ZERO, false, None); yield_now().await; let mut buf1 = BytesMut::with_capacity(DATE_VALUE_LENGTH + 10); settings.write_date_header(&mut buf1, false); let now1 = settings.now(); sleep_until((Instant::now() + Duration::from_secs(2)).into()).await; yield_now().await; let now2 = settings.now(); let mut buf2 = BytesMut::with_capacity(DATE_VALUE_LENGTH + 10); settings.write_date_header(&mut buf2, false); assert_ne!(now1, now2); assert_ne!(buf1, buf2); drop(settings); // Ensure the task will drop eventually let mut times = 0; while !notify_on_drop::is_dropped() { sleep(Duration::from_millis(100)).await; times += 1; assert!(times < 10, "Timeout waiting for task drop"); } } #[actix_rt::test] async fn test_date_service_drop() { let service = Rc::new(DateService::new()); // yield so date service have a chance to register the spawned timer update task. yield_now().await; let clone1 = service.clone(); let clone2 = service.clone(); let clone3 = service.clone(); drop(clone1); assert!(!notify_on_drop::is_dropped()); drop(clone2); assert!(!notify_on_drop::is_dropped()); drop(clone3); assert!(!notify_on_drop::is_dropped()); drop(service); // Ensure the task will drop eventually let mut times = 0; while !notify_on_drop::is_dropped() { sleep(Duration::from_millis(100)).await; times += 1; assert!(times < 10, "Timeout waiting for task drop"); } } #[test] fn test_date_len() { assert_eq!(DATE_VALUE_LENGTH, "Sun, 06 Nov 1994 08:49:37 GMT".len()); } #[actix_rt::test] async fn test_date() { let settings = ServiceConfig::default(); let mut buf1 = BytesMut::with_capacity(DATE_VALUE_LENGTH + 10); settings.write_date_header(&mut buf1, false); let mut buf2 = BytesMut::with_capacity(DATE_VALUE_LENGTH + 10); settings.write_date_header(&mut buf2, false); assert_eq!(buf1, buf2); } #[actix_rt::test] async fn test_date_camel_case() { let settings = ServiceConfig::default(); let mut buf = BytesMut::with_capacity(DATE_VALUE_LENGTH + 10); settings.write_date_header(&mut buf, false); assert!(memmem::find(&buf, b"date:").is_some()); let mut buf = BytesMut::with_capacity(DATE_VALUE_LENGTH + 10); settings.write_date_header(&mut buf, true); assert!(memmem::find(&buf, b"Date:").is_some()); } } actix-http-3.9.0/src/date.rs000064400000000000000000000046311046102023000140000ustar 00000000000000use std::{ cell::Cell, fmt::{self, Write}, rc::Rc, time::{Duration, Instant, SystemTime}, }; use actix_rt::{task::JoinHandle, time::interval}; /// "Thu, 01 Jan 1970 00:00:00 GMT".len() pub(crate) const DATE_VALUE_LENGTH: usize = 29; #[derive(Clone, Copy)] pub(crate) struct Date { pub(crate) bytes: [u8; DATE_VALUE_LENGTH], pos: usize, } impl Date { fn new() -> Date { let mut date = Date { bytes: [0; DATE_VALUE_LENGTH], 
pos: 0, }; date.update(); date } fn update(&mut self) { self.pos = 0; write!(self, "{}", httpdate::HttpDate::from(SystemTime::now())).unwrap(); } } impl fmt::Write for Date { fn write_str(&mut self, s: &str) -> fmt::Result { let len = s.len(); self.bytes[self.pos..self.pos + len].copy_from_slice(s.as_bytes()); self.pos += len; Ok(()) } } /// Service for update Date and Instant periodically at 500 millis interval. pub(crate) struct DateService { current: Rc>, handle: JoinHandle<()>, } impl DateService { pub(crate) fn new() -> Self { // shared date and timer for DateService and update async task. let current = Rc::new(Cell::new((Date::new(), Instant::now()))); let current_clone = Rc::clone(¤t); // spawn an async task sleep for 500 millis and update current date/timer in a loop. // handle is used to stop the task on DateService drop. let handle = actix_rt::spawn(async move { #[cfg(test)] let _notify = crate::notify_on_drop::NotifyOnDrop::new(); let mut interval = interval(Duration::from_millis(500)); loop { let now = interval.tick().await; let date = Date::new(); current_clone.set((date, now.into_std())); } }); DateService { current, handle } } pub(crate) fn now(&self) -> Instant { self.current.get().1 } pub(crate) fn with_date(&self, mut f: F) { f(&self.current.get().0); } } impl fmt::Debug for DateService { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { f.debug_struct("DateService").finish_non_exhaustive() } } impl Drop for DateService { fn drop(&mut self) { // stop the timer update async task on drop. self.handle.abort(); } } actix-http-3.9.0/src/encoding/decoder.rs000064400000000000000000000225631046102023000162620ustar 00000000000000//! Stream decoders. use std::{ future::Future, io::{self, Write as _}, pin::Pin, task::{Context, Poll}, }; use actix_rt::task::{spawn_blocking, JoinHandle}; use bytes::Bytes; #[cfg(feature = "compress-gzip")] use flate2::write::{GzDecoder, ZlibDecoder}; use futures_core::{ready, Stream}; #[cfg(feature = "compress-zstd")] use zstd::stream::write::Decoder as ZstdDecoder; use crate::{ encoding::Writer, error::PayloadError, header::{ContentEncoding, HeaderMap, CONTENT_ENCODING}, }; const MAX_CHUNK_SIZE_DECODE_IN_PLACE: usize = 2049; pin_project_lite::pin_project! { pub struct Decoder { decoder: Option, #[pin] stream: S, eof: bool, fut: Option, ContentDecoder), io::Error>>>, } } impl Decoder where S: Stream>, { /// Construct a decoder. #[inline] pub fn new(stream: S, encoding: ContentEncoding) -> Decoder { let decoder = match encoding { #[cfg(feature = "compress-brotli")] ContentEncoding::Brotli => Some(ContentDecoder::Brotli(Box::new( brotli::DecompressorWriter::new(Writer::new(), 8_096), ))), #[cfg(feature = "compress-gzip")] ContentEncoding::Deflate => Some(ContentDecoder::Deflate(Box::new(ZlibDecoder::new( Writer::new(), )))), #[cfg(feature = "compress-gzip")] ContentEncoding::Gzip => Some(ContentDecoder::Gzip(Box::new(GzDecoder::new( Writer::new(), )))), #[cfg(feature = "compress-zstd")] ContentEncoding::Zstd => Some(ContentDecoder::Zstd(Box::new( ZstdDecoder::new(Writer::new()).expect( "Failed to create zstd decoder. This is a bug. \ Please report it to the actix-web repository.", ), ))), _ => None, }; Decoder { decoder, stream, fut: None, eof: false, } } /// Construct decoder based on headers. 
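    ///
    /// # Examples
    ///
    /// A minimal sketch; the `identity` encoding and the single-chunk payload stream stand in
    /// for a real request payload and its headers:
    ///
    /// ```
    /// use actix_http::{
    ///     encoding::Decoder,
    ///     error::PayloadError,
    ///     header::{HeaderMap, HeaderValue, CONTENT_ENCODING},
    /// };
    /// use bytes::Bytes;
    /// use futures_util::stream;
    ///
    /// let mut headers = HeaderMap::new();
    /// headers.insert(CONTENT_ENCODING, HeaderValue::from_static("identity"));
    ///
    /// // in a real dispatcher this would be the request payload stream
    /// let payload = stream::once(async { Ok::<_, PayloadError>(Bytes::from_static(b"data")) });
    /// let decoder = Decoder::from_headers(payload, &headers);
    /// ```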
#[inline] pub fn from_headers(stream: S, headers: &HeaderMap) -> Decoder { // check content-encoding let encoding = headers .get(&CONTENT_ENCODING) .and_then(|val| val.to_str().ok()) .and_then(|x| x.parse().ok()) .unwrap_or(ContentEncoding::Identity); Self::new(stream, encoding) } } impl Stream for Decoder where S: Stream>, { type Item = Result; fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { let mut this = self.project(); loop { if let Some(ref mut fut) = this.fut { let (chunk, decoder) = ready!(Pin::new(fut).poll(cx)).map_err(|_| { PayloadError::Io(io::Error::new( io::ErrorKind::Other, "Blocking task was cancelled unexpectedly", )) })??; *this.decoder = Some(decoder); this.fut.take(); if let Some(chunk) = chunk { return Poll::Ready(Some(Ok(chunk))); } } if *this.eof { return Poll::Ready(None); } match ready!(this.stream.as_mut().poll_next(cx)) { Some(Err(err)) => return Poll::Ready(Some(Err(err))), Some(Ok(chunk)) => { if let Some(mut decoder) = this.decoder.take() { if chunk.len() < MAX_CHUNK_SIZE_DECODE_IN_PLACE { let chunk = decoder.feed_data(chunk)?; *this.decoder = Some(decoder); if let Some(chunk) = chunk { return Poll::Ready(Some(Ok(chunk))); } } else { *this.fut = Some(spawn_blocking(move || { let chunk = decoder.feed_data(chunk)?; Ok((chunk, decoder)) })); } continue; } else { return Poll::Ready(Some(Ok(chunk))); } } None => { *this.eof = true; return if let Some(mut decoder) = this.decoder.take() { match decoder.feed_eof() { Ok(Some(res)) => Poll::Ready(Some(Ok(res))), Ok(None) => Poll::Ready(None), Err(err) => Poll::Ready(Some(Err(err.into()))), } } else { Poll::Ready(None) }; } } } } } enum ContentDecoder { #[cfg(feature = "compress-gzip")] Deflate(Box>), #[cfg(feature = "compress-gzip")] Gzip(Box>), #[cfg(feature = "compress-brotli")] Brotli(Box>), // We need explicit 'static lifetime here because ZstdDecoder need lifetime // argument, and we use `spawn_blocking` in `Decoder::poll_next` that require `FnOnce() -> R + Send + 'static` #[cfg(feature = "compress-zstd")] Zstd(Box>), } impl ContentDecoder { fn feed_eof(&mut self) -> io::Result> { match self { #[cfg(feature = "compress-brotli")] ContentDecoder::Brotli(ref mut decoder) => match decoder.flush() { Ok(()) => { let b = decoder.get_mut().take(); if !b.is_empty() { Ok(Some(b)) } else { Ok(None) } } Err(err) => Err(err), }, #[cfg(feature = "compress-gzip")] ContentDecoder::Gzip(ref mut decoder) => match decoder.try_finish() { Ok(_) => { let b = decoder.get_mut().take(); if !b.is_empty() { Ok(Some(b)) } else { Ok(None) } } Err(err) => Err(err), }, #[cfg(feature = "compress-gzip")] ContentDecoder::Deflate(ref mut decoder) => match decoder.try_finish() { Ok(_) => { let b = decoder.get_mut().take(); if !b.is_empty() { Ok(Some(b)) } else { Ok(None) } } Err(err) => Err(err), }, #[cfg(feature = "compress-zstd")] ContentDecoder::Zstd(ref mut decoder) => match decoder.flush() { Ok(_) => { let b = decoder.get_mut().take(); if !b.is_empty() { Ok(Some(b)) } else { Ok(None) } } Err(err) => Err(err), }, } } fn feed_data(&mut self, data: Bytes) -> io::Result> { match self { #[cfg(feature = "compress-brotli")] ContentDecoder::Brotli(ref mut decoder) => match decoder.write_all(&data) { Ok(_) => { decoder.flush()?; let b = decoder.get_mut().take(); if !b.is_empty() { Ok(Some(b)) } else { Ok(None) } } Err(err) => Err(err), }, #[cfg(feature = "compress-gzip")] ContentDecoder::Gzip(ref mut decoder) => match decoder.write_all(&data) { Ok(_) => { decoder.flush()?; let b = decoder.get_mut().take(); if !b.is_empty() { 
Ok(Some(b)) } else { Ok(None) } } Err(err) => Err(err), }, #[cfg(feature = "compress-gzip")] ContentDecoder::Deflate(ref mut decoder) => match decoder.write_all(&data) { Ok(_) => { decoder.flush()?; let b = decoder.get_mut().take(); if !b.is_empty() { Ok(Some(b)) } else { Ok(None) } } Err(err) => Err(err), }, #[cfg(feature = "compress-zstd")] ContentDecoder::Zstd(ref mut decoder) => match decoder.write_all(&data) { Ok(_) => { decoder.flush()?; let b = decoder.get_mut().take(); if !b.is_empty() { Ok(Some(b)) } else { Ok(None) } } Err(err) => Err(err), }, } } } actix-http-3.9.0/src/encoding/encoder.rs000064400000000000000000000315511046102023000162710ustar 00000000000000//! Stream encoders. use std::{ error::Error as StdError, future::Future, io::{self, Write as _}, pin::Pin, task::{Context, Poll}, }; use actix_rt::task::{spawn_blocking, JoinHandle}; use bytes::Bytes; use derive_more::Display; #[cfg(feature = "compress-gzip")] use flate2::write::{GzEncoder, ZlibEncoder}; use futures_core::ready; use pin_project_lite::pin_project; use tracing::trace; #[cfg(feature = "compress-zstd")] use zstd::stream::write::Encoder as ZstdEncoder; use super::Writer; use crate::{ body::{self, BodySize, MessageBody}, header::{self, ContentEncoding, HeaderValue, CONTENT_ENCODING}, ResponseHead, StatusCode, }; const MAX_CHUNK_SIZE_ENCODE_IN_PLACE: usize = 1024; pin_project! { pub struct Encoder { #[pin] body: EncoderBody, encoder: Option, fut: Option>>, eof: bool, } } impl Encoder { fn none() -> Self { Encoder { body: EncoderBody::None { body: body::None::new(), }, encoder: None, fut: None, eof: true, } } fn empty() -> Self { Encoder { body: EncoderBody::Full { body: Bytes::new() }, encoder: None, fut: None, eof: true, } } pub fn response(encoding: ContentEncoding, head: &mut ResponseHead, body: B) -> Self { // no need to compress empty bodies match body.size() { BodySize::None => return Self::none(), BodySize::Sized(0) => return Self::empty(), _ => {} } let should_encode = !(head.headers().contains_key(&CONTENT_ENCODING) || head.status == StatusCode::SWITCHING_PROTOCOLS || head.status == StatusCode::NO_CONTENT || encoding == ContentEncoding::Identity); let body = match body.try_into_bytes() { Ok(body) => EncoderBody::Full { body }, Err(body) => EncoderBody::Stream { body }, }; if should_encode { // wrap body only if encoder is feature-enabled if let Some(enc) = ContentEncoder::select(encoding) { update_head(encoding, head); return Encoder { body, encoder: Some(enc), fut: None, eof: false, }; } } Encoder { body, encoder: None, fut: None, eof: false, } } } pin_project! 
{ #[project = EncoderBodyProj] enum EncoderBody { None { body: body::None }, Full { body: Bytes }, Stream { #[pin] body: B }, } } impl MessageBody for EncoderBody where B: MessageBody, { type Error = EncoderError; #[inline] fn size(&self) -> BodySize { match self { EncoderBody::None { body } => body.size(), EncoderBody::Full { body } => body.size(), EncoderBody::Stream { body } => body.size(), } } fn poll_next( self: Pin<&mut Self>, cx: &mut Context<'_>, ) -> Poll>> { match self.project() { EncoderBodyProj::None { body } => { Pin::new(body).poll_next(cx).map_err(|err| match err {}) } EncoderBodyProj::Full { body } => { Pin::new(body).poll_next(cx).map_err(|err| match err {}) } EncoderBodyProj::Stream { body } => body .poll_next(cx) .map_err(|err| EncoderError::Body(err.into())), } } #[inline] fn try_into_bytes(self) -> Result where Self: Sized, { match self { EncoderBody::None { body } => Ok(body.try_into_bytes().unwrap()), EncoderBody::Full { body } => Ok(body.try_into_bytes().unwrap()), _ => Err(self), } } } impl MessageBody for Encoder where B: MessageBody, { type Error = EncoderError; #[inline] fn size(&self) -> BodySize { if self.encoder.is_some() { BodySize::Stream } else { self.body.size() } } fn poll_next( self: Pin<&mut Self>, cx: &mut Context<'_>, ) -> Poll>> { let mut this = self.project(); loop { if *this.eof { return Poll::Ready(None); } if let Some(ref mut fut) = this.fut { let mut encoder = ready!(Pin::new(fut).poll(cx)) .map_err(|_| { EncoderError::Io(io::Error::new( io::ErrorKind::Other, "Blocking task was cancelled unexpectedly", )) })? .map_err(EncoderError::Io)?; let chunk = encoder.take(); *this.encoder = Some(encoder); this.fut.take(); if !chunk.is_empty() { return Poll::Ready(Some(Ok(chunk))); } } let result = ready!(this.body.as_mut().poll_next(cx)); match result { Some(Err(err)) => return Poll::Ready(Some(Err(err))), Some(Ok(chunk)) => { if let Some(mut encoder) = this.encoder.take() { if chunk.len() < MAX_CHUNK_SIZE_ENCODE_IN_PLACE { encoder.write(&chunk).map_err(EncoderError::Io)?; let chunk = encoder.take(); *this.encoder = Some(encoder); if !chunk.is_empty() { return Poll::Ready(Some(Ok(chunk))); } } else { *this.fut = Some(spawn_blocking(move || { encoder.write(&chunk)?; Ok(encoder) })); } } else { return Poll::Ready(Some(Ok(chunk))); } } None => { if let Some(encoder) = this.encoder.take() { let chunk = encoder.finish().map_err(EncoderError::Io)?; if chunk.is_empty() { return Poll::Ready(None); } else { *this.eof = true; return Poll::Ready(Some(Ok(chunk))); } } else { return Poll::Ready(None); } } } } } #[inline] fn try_into_bytes(mut self) -> Result where Self: Sized, { if self.encoder.is_some() { Err(self) } else { match self.body.try_into_bytes() { Ok(body) => Ok(body), Err(body) => { self.body = body; Err(self) } } } } } fn update_head(encoding: ContentEncoding, head: &mut ResponseHead) { head.headers_mut() .insert(header::CONTENT_ENCODING, encoding.to_header_value()); head.headers_mut() .append(header::VARY, HeaderValue::from_static("accept-encoding")); head.no_chunking(false); } enum ContentEncoder { #[cfg(feature = "compress-gzip")] Deflate(ZlibEncoder), #[cfg(feature = "compress-gzip")] Gzip(GzEncoder), #[cfg(feature = "compress-brotli")] Brotli(Box>), // Wwe need explicit 'static lifetime here because ZstdEncoder needs a lifetime argument and we // use `spawn_blocking` in `Encoder::poll_next` that requires `FnOnce() -> R + Send + 'static`. 
#[cfg(feature = "compress-zstd")] Zstd(ZstdEncoder<'static, Writer>), } impl ContentEncoder { fn select(encoding: ContentEncoding) -> Option { match encoding { #[cfg(feature = "compress-gzip")] ContentEncoding::Deflate => Some(ContentEncoder::Deflate(ZlibEncoder::new( Writer::new(), flate2::Compression::fast(), ))), #[cfg(feature = "compress-gzip")] ContentEncoding::Gzip => Some(ContentEncoder::Gzip(GzEncoder::new( Writer::new(), flate2::Compression::fast(), ))), #[cfg(feature = "compress-brotli")] ContentEncoding::Brotli => Some(ContentEncoder::Brotli(new_brotli_compressor())), #[cfg(feature = "compress-zstd")] ContentEncoding::Zstd => { let encoder = ZstdEncoder::new(Writer::new(), 3).ok()?; Some(ContentEncoder::Zstd(encoder)) } _ => None, } } #[inline] pub(crate) fn take(&mut self) -> Bytes { match *self { #[cfg(feature = "compress-brotli")] ContentEncoder::Brotli(ref mut encoder) => encoder.get_mut().take(), #[cfg(feature = "compress-gzip")] ContentEncoder::Deflate(ref mut encoder) => encoder.get_mut().take(), #[cfg(feature = "compress-gzip")] ContentEncoder::Gzip(ref mut encoder) => encoder.get_mut().take(), #[cfg(feature = "compress-zstd")] ContentEncoder::Zstd(ref mut encoder) => encoder.get_mut().take(), } } fn finish(self) -> Result { match self { #[cfg(feature = "compress-brotli")] ContentEncoder::Brotli(mut encoder) => match encoder.flush() { Ok(()) => Ok(encoder.into_inner().buf.freeze()), Err(err) => Err(err), }, #[cfg(feature = "compress-gzip")] ContentEncoder::Gzip(encoder) => match encoder.finish() { Ok(writer) => Ok(writer.buf.freeze()), Err(err) => Err(err), }, #[cfg(feature = "compress-gzip")] ContentEncoder::Deflate(encoder) => match encoder.finish() { Ok(writer) => Ok(writer.buf.freeze()), Err(err) => Err(err), }, #[cfg(feature = "compress-zstd")] ContentEncoder::Zstd(encoder) => match encoder.finish() { Ok(writer) => Ok(writer.buf.freeze()), Err(err) => Err(err), }, } } fn write(&mut self, data: &[u8]) -> Result<(), io::Error> { match *self { #[cfg(feature = "compress-brotli")] ContentEncoder::Brotli(ref mut encoder) => match encoder.write_all(data) { Ok(_) => Ok(()), Err(err) => { trace!("Error decoding br encoding: {}", err); Err(err) } }, #[cfg(feature = "compress-gzip")] ContentEncoder::Gzip(ref mut encoder) => match encoder.write_all(data) { Ok(_) => Ok(()), Err(err) => { trace!("Error decoding gzip encoding: {}", err); Err(err) } }, #[cfg(feature = "compress-gzip")] ContentEncoder::Deflate(ref mut encoder) => match encoder.write_all(data) { Ok(_) => Ok(()), Err(err) => { trace!("Error decoding deflate encoding: {}", err); Err(err) } }, #[cfg(feature = "compress-zstd")] ContentEncoder::Zstd(ref mut encoder) => match encoder.write_all(data) { Ok(_) => Ok(()), Err(err) => { trace!("Error decoding ztsd encoding: {}", err); Err(err) } }, } } } #[cfg(feature = "compress-brotli")] fn new_brotli_compressor() -> Box> { Box::new(brotli::CompressorWriter::new( Writer::new(), 32 * 1024, // 32 KiB buffer 3, // BROTLI_PARAM_QUALITY 22, // BROTLI_PARAM_LGWIN )) } #[derive(Debug, Display)] #[non_exhaustive] pub enum EncoderError { /// Wrapped body stream error. #[display(fmt = "body")] Body(Box), /// Generic I/O error. 
#[display(fmt = "io")] Io(io::Error), } impl StdError for EncoderError { fn source(&self) -> Option<&(dyn StdError + 'static)> { match self { EncoderError::Body(err) => Some(&**err), EncoderError::Io(err) => Some(err), } } } impl From for crate::Error { fn from(err: EncoderError) -> Self { crate::Error::new_encoder().with_cause(err) } } actix-http-3.9.0/src/encoding/mod.rs000064400000000000000000000013131046102023000154220ustar 00000000000000//! Content-Encoding support. use std::io; use bytes::{Bytes, BytesMut}; mod decoder; mod encoder; pub use self::{decoder::Decoder, encoder::Encoder}; /// Special-purpose writer for streaming (de-)compression. /// /// Pre-allocates 8KiB of capacity. struct Writer { buf: BytesMut, } impl Writer { fn new() -> Writer { Writer { buf: BytesMut::with_capacity(8192), } } fn take(&mut self) -> Bytes { self.buf.split().freeze() } } impl io::Write for Writer { fn write(&mut self, buf: &[u8]) -> io::Result { self.buf.extend_from_slice(buf); Ok(buf.len()) } fn flush(&mut self) -> io::Result<()> { Ok(()) } } actix-http-3.9.0/src/error.rs000064400000000000000000000322241046102023000142130ustar 00000000000000//! Error and Result module use std::{error::Error as StdError, fmt, io, str::Utf8Error, string::FromUtf8Error}; use derive_more::{Display, Error, From}; pub use http::{status::InvalidStatusCode, Error as HttpError}; use http::{uri::InvalidUri, StatusCode}; use crate::{body::BoxBody, Response}; pub struct Error { inner: Box, } pub(crate) struct ErrorInner { #[allow(dead_code)] kind: Kind, cause: Option>, } impl Error { fn new(kind: Kind) -> Self { Self { inner: Box::new(ErrorInner { kind, cause: None }), } } pub(crate) fn with_cause(mut self, cause: impl Into>) -> Self { self.inner.cause = Some(cause.into()); self } pub(crate) fn new_http() -> Self { Self::new(Kind::Http) } pub(crate) fn new_parse() -> Self { Self::new(Kind::Parse) } pub(crate) fn new_payload() -> Self { Self::new(Kind::Payload) } pub(crate) fn new_body() -> Self { Self::new(Kind::Body) } pub(crate) fn new_send_response() -> Self { Self::new(Kind::SendResponse) } #[allow(unused)] // available for future use pub(crate) fn new_io() -> Self { Self::new(Kind::Io) } #[allow(unused)] // used in encoder behind feature flag so ignore unused warning pub(crate) fn new_encoder() -> Self { Self::new(Kind::Encoder) } #[allow(unused)] // used with `ws` feature flag pub(crate) fn new_ws() -> Self { Self::new(Kind::Ws) } } impl From for Response { fn from(err: Error) -> Self { // TODO: more appropriate error status codes, usage assessment needed let status_code = match err.inner.kind { Kind::Parse => StatusCode::BAD_REQUEST, _ => StatusCode::INTERNAL_SERVER_ERROR, }; Response::new(status_code).set_body(BoxBody::new(err.to_string())) } } #[derive(Debug, Clone, Copy, PartialEq, Eq, Display)] pub(crate) enum Kind { #[display(fmt = "error processing HTTP")] Http, #[display(fmt = "error parsing HTTP message")] Parse, #[display(fmt = "request payload read error")] Payload, #[display(fmt = "response body write error")] Body, #[display(fmt = "send response error")] SendResponse, #[display(fmt = "error in WebSocket process")] Ws, #[display(fmt = "connection error")] Io, #[display(fmt = "encoder error")] Encoder, } impl fmt::Debug for Error { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { f.debug_struct("actix_http::Error") .field("kind", &self.inner.kind) .field("cause", &self.inner.cause) .finish() } } impl fmt::Display for Error { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { match 
self.inner.cause.as_ref() { Some(err) => write!(f, "{}: {}", &self.inner.kind, err), None => write!(f, "{}", &self.inner.kind), } } } impl StdError for Error { fn source(&self) -> Option<&(dyn StdError + 'static)> { self.inner.cause.as_ref().map(Box::as_ref) } } impl From for Error { fn from(err: std::convert::Infallible) -> Self { match err {} } } impl From for Error { fn from(err: HttpError) -> Self { Self::new_http().with_cause(err) } } #[cfg(feature = "ws")] impl From for Error { fn from(err: crate::ws::HandshakeError) -> Self { Self::new_ws().with_cause(err) } } #[cfg(feature = "ws")] impl From for Error { fn from(err: crate::ws::ProtocolError) -> Self { Self::new_ws().with_cause(err) } } /// A set of errors that can occur during parsing HTTP streams. #[derive(Debug, Display, Error)] #[non_exhaustive] pub enum ParseError { /// An invalid `Method`, such as `GE.T`. #[display(fmt = "invalid method specified")] Method, /// An invalid `Uri`, such as `exam ple.domain`. #[display(fmt = "URI error: {}", _0)] Uri(InvalidUri), /// An invalid `HttpVersion`, such as `HTP/1.1` #[display(fmt = "invalid HTTP version specified")] Version, /// An invalid `Header`. #[display(fmt = "invalid Header provided")] Header, /// A message head is too large to be reasonable. #[display(fmt = "message head is too large")] TooLarge, /// A message reached EOF, but is not complete. #[display(fmt = "message is incomplete")] Incomplete, /// An invalid `Status`, such as `1337 ELITE`. #[display(fmt = "invalid status provided")] Status, /// A timeout occurred waiting for an IO event. #[allow(dead_code)] #[display(fmt = "timeout")] Timeout, /// An I/O error that occurred while trying to read or write to a network stream. #[display(fmt = "I/O error: {}", _0)] Io(io::Error), /// Parsing a field as string failed. #[display(fmt = "UTF-8 error: {}", _0)] Utf8(Utf8Error), } impl From for ParseError { fn from(err: io::Error) -> ParseError { ParseError::Io(err) } } impl From for ParseError { fn from(err: InvalidUri) -> ParseError { ParseError::Uri(err) } } impl From for ParseError { fn from(err: Utf8Error) -> ParseError { ParseError::Utf8(err) } } impl From for ParseError { fn from(err: FromUtf8Error) -> ParseError { ParseError::Utf8(err.utf8_error()) } } impl From for ParseError { fn from(err: httparse::Error) -> ParseError { match err { httparse::Error::HeaderName | httparse::Error::HeaderValue | httparse::Error::NewLine | httparse::Error::Token => ParseError::Header, httparse::Error::Status => ParseError::Status, httparse::Error::TooManyHeaders => ParseError::TooLarge, httparse::Error::Version => ParseError::Version, } } } impl From for Error { fn from(err: ParseError) -> Self { Self::new_parse().with_cause(err) } } impl From for Response { fn from(err: ParseError) -> Self { Error::from(err).into() } } /// A set of errors that can occur during payload parsing. #[derive(Debug, Display)] #[non_exhaustive] pub enum PayloadError { /// A payload reached EOF, but is not complete. #[display(fmt = "payload reached EOF before completing: {:?}", _0)] Incomplete(Option), /// Content encoding stream corruption. #[display(fmt = "can not decode content-encoding")] EncodingCorrupted, /// Payload reached size limit. #[display(fmt = "payload reached size limit")] Overflow, /// Payload length is unknown. #[display(fmt = "payload length is unknown")] UnknownLength, /// HTTP/2 payload error. #[cfg(feature = "http2")] #[display(fmt = "{}", _0)] Http2Payload(::h2::Error), /// Generic I/O error. 
#[display(fmt = "{}", _0)] Io(io::Error), } impl std::error::Error for PayloadError { fn source(&self) -> Option<&(dyn std::error::Error + 'static)> { match self { PayloadError::Incomplete(None) => None, PayloadError::Incomplete(Some(err)) => Some(err), PayloadError::EncodingCorrupted => None, PayloadError::Overflow => None, PayloadError::UnknownLength => None, #[cfg(feature = "http2")] PayloadError::Http2Payload(err) => Some(err), PayloadError::Io(err) => Some(err), } } } #[cfg(feature = "http2")] impl From<::h2::Error> for PayloadError { fn from(err: ::h2::Error) -> Self { PayloadError::Http2Payload(err) } } impl From> for PayloadError { fn from(err: Option) -> Self { PayloadError::Incomplete(err) } } impl From for PayloadError { fn from(err: io::Error) -> Self { PayloadError::Incomplete(Some(err)) } } impl From for Error { fn from(err: PayloadError) -> Self { Self::new_payload().with_cause(err) } } /// A set of errors that can occur during dispatching HTTP requests. #[derive(Debug, Display, From)] #[non_exhaustive] pub enum DispatchError { /// Service error. #[display(fmt = "service error")] Service(Response), /// Body streaming error. #[display(fmt = "body error: {}", _0)] Body(Box), /// Upgrade service error. #[display(fmt = "upgrade error")] Upgrade, /// An `io::Error` that occurred while trying to read or write to a network stream. #[display(fmt = "I/O error: {}", _0)] Io(io::Error), /// Request parse error. #[display(fmt = "request parse error: {}", _0)] Parse(ParseError), /// HTTP/2 error. #[display(fmt = "{}", _0)] #[cfg(feature = "http2")] H2(h2::Error), /// The first request did not complete within the specified timeout. #[display(fmt = "request did not complete within the specified timeout")] SlowRequestTimeout, /// Disconnect timeout. Makes sense for TLS streams. #[display(fmt = "connection shutdown timeout")] DisconnectTimeout, /// Handler dropped payload before reading EOF. #[display(fmt = "handler dropped payload before reading EOF")] HandlerDroppedPayload, /// Internal error. #[display(fmt = "internal error")] InternalError, } impl StdError for DispatchError { fn source(&self) -> Option<&(dyn StdError + 'static)> { match self { DispatchError::Service(_res) => None, DispatchError::Body(err) => Some(&**err), DispatchError::Io(err) => Some(err), DispatchError::Parse(err) => Some(err), #[cfg(feature = "http2")] DispatchError::H2(err) => Some(err), _ => None, } } } /// A set of error that can occur during parsing content type. #[derive(Debug, Display, Error)] #[cfg_attr(test, derive(PartialEq, Eq))] #[non_exhaustive] pub enum ContentTypeError { /// Can not parse content type. #[display(fmt = "could not parse content type")] ParseError, /// Unknown content encoding. 
#[display(fmt = "unknown content encoding")] UnknownEncoding, } #[cfg(test)] mod tests { use http::Error as HttpError; use super::*; #[test] fn test_into_response() { let resp: Response = ParseError::Incomplete.into(); assert_eq!(resp.status(), StatusCode::BAD_REQUEST); let err: HttpError = StatusCode::from_u16(10000).err().unwrap().into(); let resp: Response = Error::new_http().with_cause(err).into(); assert_eq!(resp.status(), StatusCode::INTERNAL_SERVER_ERROR); } #[test] fn test_as_response() { let orig = io::Error::new(io::ErrorKind::Other, "other"); let err: Error = ParseError::Io(orig).into(); assert_eq!( format!("{}", err), "error parsing HTTP message: I/O error: other" ); } #[test] fn test_error_display() { let orig = io::Error::new(io::ErrorKind::Other, "other"); let err = Error::new_io().with_cause(orig); assert_eq!("connection error: other", err.to_string()); } #[test] fn test_error_http_response() { let orig = io::Error::new(io::ErrorKind::Other, "other"); let err = Error::new_io().with_cause(orig); let resp: Response = err.into(); assert_eq!(resp.status(), StatusCode::INTERNAL_SERVER_ERROR); } #[test] fn test_payload_error() { let err: PayloadError = io::Error::new(io::ErrorKind::Other, "ParseError").into(); assert!(err.to_string().contains("ParseError")); let err = PayloadError::Incomplete(None); assert_eq!( err.to_string(), "payload reached EOF before completing: None" ); } macro_rules! from { ($from:expr => $error:pat) => { match ParseError::from($from) { err @ $error => { assert!(err.to_string().len() >= 5); } err => unreachable!("{:?}", err), } }; } macro_rules! from_and_cause { ($from:expr => $error:pat) => { match ParseError::from($from) { e @ $error => { let desc = format!("{}", e); assert_eq!(desc, format!("I/O error: {}", $from)); } _ => unreachable!("{:?}", $from), } }; } #[test] fn test_from() { from_and_cause!(io::Error::new(io::ErrorKind::Other, "other") => ParseError::Io(..)); from!(httparse::Error::HeaderName => ParseError::Header); from!(httparse::Error::HeaderName => ParseError::Header); from!(httparse::Error::HeaderValue => ParseError::Header); from!(httparse::Error::NewLine => ParseError::Header); from!(httparse::Error::Status => ParseError::Status); from!(httparse::Error::Token => ParseError::Header); from!(httparse::Error::TooManyHeaders => ParseError::TooLarge); from!(httparse::Error::Version => ParseError::Version); } } actix-http-3.9.0/src/extensions.rs000064400000000000000000000177631046102023000152740ustar 00000000000000use std::{ any::{Any, TypeId}, collections::HashMap, fmt, hash::{BuildHasherDefault, Hasher}, }; /// A hasher for `TypeId`s that takes advantage of its known characteristics. /// /// Author of `anymap` crate has done research on the topic: /// https://github.com/chris-morgan/anymap/blob/2e9a5704/src/lib.rs#L599 #[derive(Debug, Default)] struct NoOpHasher(u64); impl Hasher for NoOpHasher { fn write(&mut self, _bytes: &[u8]) { unimplemented!("This NoOpHasher can only handle u64s") } fn write_u64(&mut self, i: u64) { self.0 = i; } fn finish(&self) -> u64 { self.0 } } /// A type map for request extensions. /// /// All entries into this map must be owned types (or static references). #[derive(Default)] pub struct Extensions { /// Use AHasher with a std HashMap with for faster lookups on the small `TypeId` keys. map: HashMap, BuildHasherDefault>, } impl Extensions { /// Creates an empty `Extensions`. #[inline] pub fn new() -> Extensions { Extensions { map: HashMap::default(), } } /// Insert an item into the map. 
/// /// If an item of this type was already stored, it will be replaced and returned. /// /// ``` /// # use actix_http::Extensions; /// let mut map = Extensions::new(); /// assert_eq!(map.insert(""), None); /// assert_eq!(map.insert(1u32), None); /// assert_eq!(map.insert(2u32), Some(1u32)); /// assert_eq!(*map.get::().unwrap(), 2u32); /// ``` pub fn insert(&mut self, val: T) -> Option { self.map .insert(TypeId::of::(), Box::new(val)) .and_then(downcast_owned) } /// Check if map contains an item of a given type. /// /// ``` /// # use actix_http::Extensions; /// let mut map = Extensions::new(); /// assert!(!map.contains::()); /// /// assert_eq!(map.insert(1u32), None); /// assert!(map.contains::()); /// ``` pub fn contains(&self) -> bool { self.map.contains_key(&TypeId::of::()) } /// Get a reference to an item of a given type. /// /// ``` /// # use actix_http::Extensions; /// let mut map = Extensions::new(); /// map.insert(1u32); /// assert_eq!(map.get::(), Some(&1u32)); /// ``` pub fn get(&self) -> Option<&T> { self.map .get(&TypeId::of::()) .and_then(|boxed| boxed.downcast_ref()) } /// Get a mutable reference to an item of a given type. /// /// ``` /// # use actix_http::Extensions; /// let mut map = Extensions::new(); /// map.insert(1u32); /// assert_eq!(map.get_mut::(), Some(&mut 1u32)); /// ``` pub fn get_mut(&mut self) -> Option<&mut T> { self.map .get_mut(&TypeId::of::()) .and_then(|boxed| boxed.downcast_mut()) } /// Remove an item from the map of a given type. /// /// If an item of this type was already stored, it will be returned. /// /// ``` /// # use actix_http::Extensions; /// let mut map = Extensions::new(); /// /// map.insert(1u32); /// assert_eq!(map.get::(), Some(&1u32)); /// /// assert_eq!(map.remove::(), Some(1u32)); /// assert!(!map.contains::()); /// ``` pub fn remove(&mut self) -> Option { self.map.remove(&TypeId::of::()).and_then(downcast_owned) } /// Clear the `Extensions` of all inserted extensions. /// /// ``` /// # use actix_http::Extensions; /// let mut map = Extensions::new(); /// /// map.insert(1u32); /// assert!(map.contains::()); /// /// map.clear(); /// assert!(!map.contains::()); /// ``` #[inline] pub fn clear(&mut self) { self.map.clear(); } /// Extends self with the items from another `Extensions`. 
pub fn extend(&mut self, other: Extensions) { self.map.extend(other.map); } } impl fmt::Debug for Extensions { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { f.debug_struct("Extensions").finish() } } fn downcast_owned(boxed: Box) -> Option { boxed.downcast().ok().map(|boxed| *boxed) } #[cfg(test)] mod tests { use super::*; #[test] fn test_remove() { let mut map = Extensions::new(); map.insert::(123); assert!(map.get::().is_some()); map.remove::(); assert!(map.get::().is_none()); } #[test] fn test_clear() { let mut map = Extensions::new(); map.insert::(8); map.insert::(16); map.insert::(32); assert!(map.contains::()); assert!(map.contains::()); assert!(map.contains::()); map.clear(); assert!(!map.contains::()); assert!(!map.contains::()); assert!(!map.contains::()); map.insert::(10); assert_eq!(*map.get::().unwrap(), 10); } #[test] fn test_integers() { static A: u32 = 8; let mut map = Extensions::new(); map.insert::(8); map.insert::(16); map.insert::(32); map.insert::(64); map.insert::(128); map.insert::(8); map.insert::(16); map.insert::(32); map.insert::(64); map.insert::(128); map.insert::<&'static u32>(&A); assert!(map.get::().is_some()); assert!(map.get::().is_some()); assert!(map.get::().is_some()); assert!(map.get::().is_some()); assert!(map.get::().is_some()); assert!(map.get::().is_some()); assert!(map.get::().is_some()); assert!(map.get::().is_some()); assert!(map.get::().is_some()); assert!(map.get::().is_some()); assert!(map.get::<&'static u32>().is_some()); } #[test] fn test_composition() { struct Magi(pub T); struct Madoka { pub god: bool, } struct Homura { pub attempts: usize, } struct Mami { pub guns: usize, } let mut map = Extensions::new(); map.insert(Magi(Madoka { god: false })); map.insert(Magi(Homura { attempts: 0 })); map.insert(Magi(Mami { guns: 999 })); assert!(!map.get::>().unwrap().0.god); assert_eq!(0, map.get::>().unwrap().0.attempts); assert_eq!(999, map.get::>().unwrap().0.guns); } #[test] fn test_extensions() { #[derive(Debug, PartialEq)] struct MyType(i32); let mut extensions = Extensions::new(); extensions.insert(5i32); extensions.insert(MyType(10)); assert_eq!(extensions.get(), Some(&5i32)); assert_eq!(extensions.get_mut(), Some(&mut 5i32)); assert_eq!(extensions.remove::(), Some(5i32)); assert!(extensions.get::().is_none()); assert_eq!(extensions.get::(), None); assert_eq!(extensions.get(), Some(&MyType(10))); } #[test] fn test_extend() { #[derive(Debug, PartialEq)] struct MyType(i32); let mut extensions = Extensions::new(); extensions.insert(5i32); extensions.insert(MyType(10)); let mut other = Extensions::new(); other.insert(15i32); other.insert(20u8); extensions.extend(other); assert_eq!(extensions.get(), Some(&15i32)); assert_eq!(extensions.get_mut(), Some(&mut 15i32)); assert_eq!(extensions.remove::(), Some(15i32)); assert!(extensions.get::().is_none()); assert_eq!(extensions.get::(), None); assert_eq!(extensions.get(), Some(&MyType(10))); assert_eq!(extensions.get(), Some(&20u8)); assert_eq!(extensions.get_mut(), Some(&mut 20u8)); } } actix-http-3.9.0/src/h1/chunked.rs000064400000000000000000000331741046102023000150200ustar 00000000000000use std::{io, task::Poll}; use bytes::{Buf as _, Bytes, BytesMut}; use tracing::{debug, trace}; macro_rules! 
byte ( ($rdr:ident) => ({ if $rdr.len() > 0 { let b = $rdr[0]; $rdr.advance(1); b } else { return Poll::Pending } }) ); #[derive(Debug, Clone, PartialEq, Eq)] pub(super) enum ChunkedState { Size, SizeLws, Extension, SizeLf, Body, BodyCr, BodyLf, EndCr, EndLf, End, } impl ChunkedState { pub(super) fn step( &self, body: &mut BytesMut, size: &mut u64, buf: &mut Option, ) -> Poll> { use self::ChunkedState::*; match *self { Size => ChunkedState::read_size(body, size), SizeLws => ChunkedState::read_size_lws(body), Extension => ChunkedState::read_extension(body), SizeLf => ChunkedState::read_size_lf(body, *size), Body => ChunkedState::read_body(body, size, buf), BodyCr => ChunkedState::read_body_cr(body), BodyLf => ChunkedState::read_body_lf(body), EndCr => ChunkedState::read_end_cr(body), EndLf => ChunkedState::read_end_lf(body), End => Poll::Ready(Ok(ChunkedState::End)), } } fn read_size(rdr: &mut BytesMut, size: &mut u64) -> Poll> { let radix = 16; let rem = match byte!(rdr) { b @ b'0'..=b'9' => b - b'0', b @ b'a'..=b'f' => b + 10 - b'a', b @ b'A'..=b'F' => b + 10 - b'A', b'\t' | b' ' => return Poll::Ready(Ok(ChunkedState::SizeLws)), b';' => return Poll::Ready(Ok(ChunkedState::Extension)), b'\r' => return Poll::Ready(Ok(ChunkedState::SizeLf)), _ => { return Poll::Ready(Err(io::Error::new( io::ErrorKind::InvalidInput, "Invalid chunk size line: Invalid Size", ))); } }; match size.checked_mul(radix) { Some(n) => { *size = n; *size += rem as u64; Poll::Ready(Ok(ChunkedState::Size)) } None => { debug!("chunk size would overflow u64"); Poll::Ready(Err(io::Error::new( io::ErrorKind::InvalidInput, "Invalid chunk size line: Size is too big", ))) } } } fn read_size_lws(rdr: &mut BytesMut) -> Poll> { match byte!(rdr) { // LWS can follow the chunk size, but no more digits can come b'\t' | b' ' => Poll::Ready(Ok(ChunkedState::SizeLws)), b';' => Poll::Ready(Ok(ChunkedState::Extension)), b'\r' => Poll::Ready(Ok(ChunkedState::SizeLf)), _ => Poll::Ready(Err(io::Error::new( io::ErrorKind::InvalidInput, "Invalid chunk size linear white space", ))), } } fn read_extension(rdr: &mut BytesMut) -> Poll> { match byte!(rdr) { b'\r' => Poll::Ready(Ok(ChunkedState::SizeLf)), // strictly 0x20 (space) should be disallowed but we don't parse quoted strings here 0x00..=0x08 | 0x0a..=0x1f | 0x7f => Poll::Ready(Err(io::Error::new( io::ErrorKind::InvalidInput, "Invalid character in chunk extension", ))), _ => Poll::Ready(Ok(ChunkedState::Extension)), // no supported extensions } } fn read_size_lf(rdr: &mut BytesMut, size: u64) -> Poll> { match byte!(rdr) { b'\n' if size > 0 => Poll::Ready(Ok(ChunkedState::Body)), b'\n' if size == 0 => Poll::Ready(Ok(ChunkedState::EndCr)), _ => Poll::Ready(Err(io::Error::new( io::ErrorKind::InvalidInput, "Invalid chunk size LF", ))), } } fn read_body( rdr: &mut BytesMut, rem: &mut u64, buf: &mut Option, ) -> Poll> { trace!("Chunked read, remaining={:?}", rem); let len = rdr.len() as u64; if len == 0 { Poll::Ready(Ok(ChunkedState::Body)) } else { let slice; if *rem > len { slice = rdr.split().freeze(); *rem -= len; } else { slice = rdr.split_to(*rem as usize).freeze(); *rem = 0; } *buf = Some(slice); if *rem > 0 { Poll::Ready(Ok(ChunkedState::Body)) } else { Poll::Ready(Ok(ChunkedState::BodyCr)) } } } fn read_body_cr(rdr: &mut BytesMut) -> Poll> { match byte!(rdr) { b'\r' => Poll::Ready(Ok(ChunkedState::BodyLf)), _ => Poll::Ready(Err(io::Error::new( io::ErrorKind::InvalidInput, "Invalid chunk body CR", ))), } } fn read_body_lf(rdr: &mut BytesMut) -> Poll> { match byte!(rdr) { b'\n' => 
Poll::Ready(Ok(ChunkedState::Size)), _ => Poll::Ready(Err(io::Error::new( io::ErrorKind::InvalidInput, "Invalid chunk body LF", ))), } } fn read_end_cr(rdr: &mut BytesMut) -> Poll> { match byte!(rdr) { b'\r' => Poll::Ready(Ok(ChunkedState::EndLf)), _ => Poll::Ready(Err(io::Error::new( io::ErrorKind::InvalidInput, "Invalid chunk end CR", ))), } } fn read_end_lf(rdr: &mut BytesMut) -> Poll> { match byte!(rdr) { b'\n' => Poll::Ready(Ok(ChunkedState::End)), _ => Poll::Ready(Err(io::Error::new( io::ErrorKind::InvalidInput, "Invalid chunk end LF", ))), } } } #[cfg(test)] mod tests { use actix_codec::Decoder as _; use bytes::{Bytes, BytesMut}; use http::Method; use crate::{ error::ParseError, h1::decoder::{MessageDecoder, PayloadItem}, HttpMessage as _, Request, }; macro_rules! parse_ready { ($e:expr) => {{ match MessageDecoder::::default().decode($e) { Ok(Some((msg, _))) => msg, Ok(_) => unreachable!("Eof during parsing http request"), Err(err) => unreachable!("Error during parsing http request: {:?}", err), } }}; } macro_rules! expect_parse_err { ($e:expr) => {{ match MessageDecoder::::default().decode($e) { Err(err) => match err { ParseError::Io(_) => unreachable!("Parse error expected"), _ => {} }, _ => unreachable!("Error expected"), } }}; } #[test] fn test_parse_chunked_payload_chunk_extension() { let mut buf = BytesMut::from( "GET /test HTTP/1.1\r\n\ transfer-encoding: chunked\r\n\ \r\n", ); let mut reader = MessageDecoder::::default(); let (msg, pl) = reader.decode(&mut buf).unwrap().unwrap(); let mut pl = pl.unwrap(); assert!(msg.chunked().unwrap()); buf.extend(b"4;test\r\ndata\r\n4\r\nline\r\n0\r\n\r\n"); // test: test\r\n\r\n") let chunk = pl.decode(&mut buf).unwrap().unwrap().chunk(); assert_eq!(chunk, Bytes::from_static(b"data")); let chunk = pl.decode(&mut buf).unwrap().unwrap().chunk(); assert_eq!(chunk, Bytes::from_static(b"line")); let msg = pl.decode(&mut buf).unwrap().unwrap(); assert!(msg.eof()); } #[test] fn test_request_chunked() { let mut buf = BytesMut::from( "GET /test HTTP/1.1\r\n\ transfer-encoding: chunked\r\n\r\n", ); let req = parse_ready!(&mut buf); if let Ok(val) = req.chunked() { assert!(val); } else { unreachable!("Error"); } // intentional typo in "chunked" let mut buf = BytesMut::from( "GET /test HTTP/1.1\r\n\ transfer-encoding: chnked\r\n\r\n", ); expect_parse_err!(&mut buf); } #[test] fn test_http_request_chunked_payload() { let mut buf = BytesMut::from( "GET /test HTTP/1.1\r\n\ transfer-encoding: chunked\r\n\r\n", ); let mut reader = MessageDecoder::::default(); let (req, pl) = reader.decode(&mut buf).unwrap().unwrap(); let mut pl = pl.unwrap(); assert!(req.chunked().unwrap()); buf.extend(b"4\r\ndata\r\n4\r\nline\r\n0\r\n\r\n"); assert_eq!( pl.decode(&mut buf).unwrap().unwrap().chunk().as_ref(), b"data" ); assert_eq!( pl.decode(&mut buf).unwrap().unwrap().chunk().as_ref(), b"line" ); assert!(pl.decode(&mut buf).unwrap().unwrap().eof()); } #[test] fn test_http_request_chunked_payload_and_next_message() { let mut buf = BytesMut::from( "GET /test HTTP/1.1\r\n\ transfer-encoding: chunked\r\n\r\n", ); let mut reader = MessageDecoder::::default(); let (req, pl) = reader.decode(&mut buf).unwrap().unwrap(); let mut pl = pl.unwrap(); assert!(req.chunked().unwrap()); buf.extend( b"4\r\ndata\r\n4\r\nline\r\n0\r\n\r\n\ POST /test2 HTTP/1.1\r\n\ transfer-encoding: chunked\r\n\r\n" .iter(), ); let msg = pl.decode(&mut buf).unwrap().unwrap(); assert_eq!(msg.chunk().as_ref(), b"data"); let msg = pl.decode(&mut buf).unwrap().unwrap(); assert_eq!(msg.chunk().as_ref(), 
b"line"); let msg = pl.decode(&mut buf).unwrap().unwrap(); assert!(msg.eof()); let (req, _) = reader.decode(&mut buf).unwrap().unwrap(); assert!(req.chunked().unwrap()); assert_eq!(*req.method(), Method::POST); assert!(req.chunked().unwrap()); } #[test] fn test_http_request_chunked_payload_chunks() { let mut buf = BytesMut::from( "GET /test HTTP/1.1\r\n\ transfer-encoding: chunked\r\n\r\n", ); let mut reader = MessageDecoder::::default(); let (req, pl) = reader.decode(&mut buf).unwrap().unwrap(); let mut pl = pl.unwrap(); assert!(req.chunked().unwrap()); buf.extend(b"4\r\n1111\r\n"); let msg = pl.decode(&mut buf).unwrap().unwrap(); assert_eq!(msg.chunk().as_ref(), b"1111"); buf.extend(b"4\r\ndata\r"); let msg = pl.decode(&mut buf).unwrap().unwrap(); assert_eq!(msg.chunk().as_ref(), b"data"); buf.extend(b"\n4"); assert!(pl.decode(&mut buf).unwrap().is_none()); buf.extend(b"\r"); assert!(pl.decode(&mut buf).unwrap().is_none()); buf.extend(b"\n"); assert!(pl.decode(&mut buf).unwrap().is_none()); buf.extend(b"li"); let msg = pl.decode(&mut buf).unwrap().unwrap(); assert_eq!(msg.chunk().as_ref(), b"li"); //trailers //buf.feed_data("test: test\r\n"); //not_ready!(reader.parse(&mut buf, &mut readbuf)); buf.extend(b"ne\r\n0\r\n"); let msg = pl.decode(&mut buf).unwrap().unwrap(); assert_eq!(msg.chunk().as_ref(), b"ne"); assert!(pl.decode(&mut buf).unwrap().is_none()); buf.extend(b"\r\n"); assert!(pl.decode(&mut buf).unwrap().unwrap().eof()); } #[test] fn chunk_extension_quoted() { let mut buf = BytesMut::from( "GET /test HTTP/1.1\r\n\ Host: localhost:8080\r\n\ Transfer-Encoding: chunked\r\n\ \r\n\ 2;hello=b;one=\"1 2 3\"\r\n\ xx", ); let mut reader = MessageDecoder::::default(); let (_msg, pl) = reader.decode(&mut buf).unwrap().unwrap(); let mut pl = pl.unwrap(); let chunk = pl.decode(&mut buf).unwrap().unwrap(); assert_eq!(chunk, PayloadItem::Chunk(Bytes::from_static(b"xx"))); } #[test] fn hrs_chunk_extension_invalid() { let mut buf = BytesMut::from( "GET / HTTP/1.1\r\n\ Host: localhost:8080\r\n\ Transfer-Encoding: chunked\r\n\ \r\n\ 2;x\nx\r\n\ 4c\r\n\ 0\r\n", ); let mut reader = MessageDecoder::::default(); let (_msg, pl) = reader.decode(&mut buf).unwrap().unwrap(); let mut pl = pl.unwrap(); let err = pl.decode(&mut buf).unwrap_err(); assert!(err .to_string() .contains("Invalid character in chunk extension")); } #[test] fn hrs_chunk_size_overflow() { let mut buf = BytesMut::from( "GET / HTTP/1.1\r\n\ Host: example.com\r\n\ Transfer-Encoding: chunked\r\n\ \r\n\ f0000000000000003\r\n\ abc\r\n\ 0\r\n", ); let mut reader = MessageDecoder::::default(); let (_msg, pl) = reader.decode(&mut buf).unwrap().unwrap(); let mut pl = pl.unwrap(); let err = pl.decode(&mut buf).unwrap_err(); assert!(err .to_string() .contains("Invalid chunk size line: Size is too big")); } } actix-http-3.9.0/src/h1/client.rs000064400000000000000000000155331046102023000146540ustar 00000000000000use std::{fmt, io}; use bitflags::bitflags; use bytes::{Bytes, BytesMut}; use http::{Method, Version}; use tokio_util::codec::{Decoder, Encoder}; use super::{ decoder::{self, PayloadDecoder, PayloadItem, PayloadType}, encoder, reserve_readbuf, Message, MessageType, }; use crate::{ body::BodySize, error::{ParseError, PayloadError}, ConnectionType, RequestHeadType, ResponseHead, ServiceConfig, }; bitflags! 
{ #[derive(Debug, Clone, Copy)] struct Flags: u8 { const HEAD = 0b0000_0001; const KEEP_ALIVE_ENABLED = 0b0000_1000; const STREAM = 0b0001_0000; } } /// HTTP/1 Codec pub struct ClientCodec { inner: ClientCodecInner, } /// HTTP/1 Payload Codec pub struct ClientPayloadCodec { inner: ClientCodecInner, } struct ClientCodecInner { config: ServiceConfig, decoder: decoder::MessageDecoder, payload: Option, version: Version, conn_type: ConnectionType, // encoder part flags: Flags, encoder: encoder::MessageEncoder, } impl Default for ClientCodec { fn default() -> Self { ClientCodec::new(ServiceConfig::default()) } } impl fmt::Debug for ClientCodec { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { f.debug_struct("h1::ClientCodec") .field("flags", &self.inner.flags) .finish_non_exhaustive() } } impl ClientCodec { /// Create HTTP/1 codec. /// /// `keepalive_enabled` how response `connection` header get generated. pub fn new(config: ServiceConfig) -> Self { let flags = if config.keep_alive().enabled() { Flags::KEEP_ALIVE_ENABLED } else { Flags::empty() }; ClientCodec { inner: ClientCodecInner { config, decoder: decoder::MessageDecoder::default(), payload: None, version: Version::HTTP_11, conn_type: ConnectionType::Close, flags, encoder: encoder::MessageEncoder::default(), }, } } /// Check if request is upgrade pub fn upgrade(&self) -> bool { self.inner.conn_type == ConnectionType::Upgrade } /// Check if last response is keep-alive pub fn keep_alive(&self) -> bool { self.inner.conn_type == ConnectionType::KeepAlive } /// Check last request's message type pub fn message_type(&self) -> MessageType { if self.inner.flags.contains(Flags::STREAM) { MessageType::Stream } else if self.inner.payload.is_none() { MessageType::None } else { MessageType::Payload } } /// Convert message codec to a payload codec pub fn into_payload_codec(self) -> ClientPayloadCodec { ClientPayloadCodec { inner: self.inner } } } impl ClientPayloadCodec { /// Check if last response is keep-alive pub fn keep_alive(&self) -> bool { self.inner.conn_type == ConnectionType::KeepAlive } /// Transform payload codec to a message codec pub fn into_message_codec(self) -> ClientCodec { ClientCodec { inner: self.inner } } } impl Decoder for ClientCodec { type Item = ResponseHead; type Error = ParseError; fn decode(&mut self, src: &mut BytesMut) -> Result, Self::Error> { debug_assert!( self.inner.payload.is_none(), "Payload decoder should not be set" ); if let Some((req, payload)) = self.inner.decoder.decode(src)? { if let Some(conn_type) = req.conn_type() { // do not use peer's keep-alive self.inner.conn_type = if conn_type == ConnectionType::KeepAlive { self.inner.conn_type } else { conn_type }; } if !self.inner.flags.contains(Flags::HEAD) { match payload { PayloadType::None => self.inner.payload = None, PayloadType::Payload(pl) => self.inner.payload = Some(pl), PayloadType::Stream(pl) => { self.inner.payload = Some(pl); self.inner.flags.insert(Flags::STREAM); } } } else { self.inner.payload = None; } reserve_readbuf(src); Ok(Some(req)) } else { Ok(None) } } } impl Decoder for ClientPayloadCodec { type Item = Option; type Error = PayloadError; fn decode(&mut self, src: &mut BytesMut) -> Result, Self::Error> { debug_assert!( self.inner.payload.is_some(), "Payload decoder is not specified" ); Ok(match self.inner.payload.as_mut().unwrap().decode(src)? 
{ Some(PayloadItem::Chunk(chunk)) => { reserve_readbuf(src); Some(Some(chunk)) } Some(PayloadItem::Eof) => { self.inner.payload.take(); Some(None) } None => None, }) } } impl Encoder> for ClientCodec { type Error = io::Error; fn encode( &mut self, item: Message<(RequestHeadType, BodySize)>, dst: &mut BytesMut, ) -> Result<(), Self::Error> { match item { Message::Item((mut head, length)) => { let inner = &mut self.inner; inner.version = head.as_ref().version; inner .flags .set(Flags::HEAD, head.as_ref().method == Method::HEAD); // connection status inner.conn_type = match head.as_ref().connection_type() { ConnectionType::KeepAlive => { if inner.flags.contains(Flags::KEEP_ALIVE_ENABLED) { ConnectionType::KeepAlive } else { ConnectionType::Close } } ConnectionType::Upgrade => ConnectionType::Upgrade, ConnectionType::Close => ConnectionType::Close, }; inner.encoder.encode( dst, &mut head, false, false, inner.version, length, inner.conn_type, &inner.config, )?; } Message::Chunk(Some(bytes)) => { self.inner.encoder.encode_chunk(bytes.as_ref(), dst)?; } Message::Chunk(None) => { self.inner.encoder.encode_eof(dst)?; } } Ok(()) } } actix-http-3.9.0/src/h1/codec.rs000064400000000000000000000153261046102023000144530ustar 00000000000000use std::{fmt, io}; use bitflags::bitflags; use bytes::BytesMut; use http::{Method, Version}; use tokio_util::codec::{Decoder, Encoder}; use super::{ decoder::{self, PayloadDecoder, PayloadItem, PayloadType}, encoder, Message, MessageType, }; use crate::{body::BodySize, error::ParseError, ConnectionType, Request, Response, ServiceConfig}; bitflags! { #[derive(Debug, Clone, Copy)] struct Flags: u8 { const HEAD = 0b0000_0001; const KEEP_ALIVE_ENABLED = 0b0000_0010; const STREAM = 0b0000_0100; } } /// HTTP/1 Codec pub struct Codec { config: ServiceConfig, decoder: decoder::MessageDecoder, payload: Option, version: Version, conn_type: ConnectionType, // encoder part flags: Flags, encoder: encoder::MessageEncoder>, } impl Default for Codec { fn default() -> Self { Codec::new(ServiceConfig::default()) } } impl fmt::Debug for Codec { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { f.debug_struct("h1::Codec") .field("flags", &self.flags) .finish_non_exhaustive() } } impl Codec { /// Create HTTP/1 codec. /// /// `keepalive_enabled` how response `connection` header get generated. pub fn new(config: ServiceConfig) -> Self { let flags = if config.keep_alive().enabled() { Flags::KEEP_ALIVE_ENABLED } else { Flags::empty() }; Codec { config, flags, decoder: decoder::MessageDecoder::default(), payload: None, version: Version::HTTP_11, conn_type: ConnectionType::Close, encoder: encoder::MessageEncoder::default(), } } /// Check if request is upgrade. #[inline] pub fn upgrade(&self) -> bool { self.conn_type == ConnectionType::Upgrade } /// Check if last response is keep-alive. #[inline] pub fn keep_alive(&self) -> bool { self.conn_type == ConnectionType::KeepAlive } /// Check if keep-alive enabled on server level. #[inline] pub fn keep_alive_enabled(&self) -> bool { self.flags.contains(Flags::KEEP_ALIVE_ENABLED) } /// Check last request's message type. 
#[inline] pub fn message_type(&self) -> MessageType { if self.flags.contains(Flags::STREAM) { MessageType::Stream } else if self.payload.is_none() { MessageType::None } else { MessageType::Payload } } #[inline] pub fn config(&self) -> &ServiceConfig { &self.config } } impl Decoder for Codec { type Item = Message; type Error = ParseError; fn decode(&mut self, src: &mut BytesMut) -> Result, Self::Error> { if let Some(ref mut payload) = self.payload { Ok(match payload.decode(src)? { Some(PayloadItem::Chunk(chunk)) => Some(Message::Chunk(Some(chunk))), Some(PayloadItem::Eof) => { self.payload.take(); Some(Message::Chunk(None)) } None => None, }) } else if let Some((req, payload)) = self.decoder.decode(src)? { let head = req.head(); self.flags.set(Flags::HEAD, head.method == Method::HEAD); self.version = head.version; self.conn_type = head.connection_type(); if self.conn_type == ConnectionType::KeepAlive && !self.flags.contains(Flags::KEEP_ALIVE_ENABLED) { self.conn_type = ConnectionType::Close } match payload { PayloadType::None => self.payload = None, PayloadType::Payload(pl) => self.payload = Some(pl), PayloadType::Stream(pl) => { self.payload = Some(pl); self.flags.insert(Flags::STREAM); } } Ok(Some(Message::Item(req))) } else { Ok(None) } } } impl Encoder, BodySize)>> for Codec { type Error = io::Error; fn encode( &mut self, item: Message<(Response<()>, BodySize)>, dst: &mut BytesMut, ) -> Result<(), Self::Error> { match item { Message::Item((mut res, length)) => { // set response version res.head_mut().version = self.version; // connection status self.conn_type = if let Some(ct) = res.head().conn_type() { if ct == ConnectionType::KeepAlive { self.conn_type } else { ct } } else { self.conn_type }; // encode message self.encoder.encode( dst, &mut res, self.flags.contains(Flags::HEAD), self.flags.contains(Flags::STREAM), self.version, length, self.conn_type, &self.config, )?; } Message::Chunk(Some(bytes)) => { self.encoder.encode_chunk(bytes.as_ref(), dst)?; } Message::Chunk(None) => { self.encoder.encode_eof(dst)?; } } Ok(()) } } #[cfg(test)] mod tests { use super::*; use crate::HttpMessage as _; #[actix_rt::test] async fn test_http_request_chunked_payload_and_next_message() { let mut codec = Codec::default(); let mut buf = BytesMut::from( "GET /test HTTP/1.1\r\n\ transfer-encoding: chunked\r\n\r\n", ); let item = codec.decode(&mut buf).unwrap().unwrap(); let req = item.message(); assert_eq!(req.method(), Method::GET); assert!(req.chunked().unwrap()); buf.extend( b"4\r\ndata\r\n4\r\nline\r\n0\r\n\r\n\ POST /test2 HTTP/1.1\r\n\ transfer-encoding: chunked\r\n\r\n" .iter(), ); let msg = codec.decode(&mut buf).unwrap().unwrap(); assert_eq!(msg.chunk().as_ref(), b"data"); let msg = codec.decode(&mut buf).unwrap().unwrap(); assert_eq!(msg.chunk().as_ref(), b"line"); let msg = codec.decode(&mut buf).unwrap().unwrap(); assert!(msg.eof()); // decode next message let item = codec.decode(&mut buf).unwrap().unwrap(); let req = item.message(); assert_eq!(*req.method(), Method::POST); assert!(req.chunked().unwrap()); } } actix-http-3.9.0/src/h1/decoder.rs000064400000000000000000001135511046102023000150020ustar 00000000000000use std::{io, marker::PhantomData, mem::MaybeUninit, task::Poll}; use actix_codec::Decoder; use bytes::{Bytes, BytesMut}; use http::{ header::{self, HeaderName, HeaderValue}, Method, StatusCode, Uri, Version, }; use tracing::{debug, error, trace}; use super::chunked::ChunkedState; use crate::{error::ParseError, header::HeaderMap, ConnectionType, Request, ResponseHead}; pub(crate) 
const MAX_BUFFER_SIZE: usize = 131_072; const MAX_HEADERS: usize = 96; /// Incoming message decoder pub(crate) struct MessageDecoder(PhantomData); #[derive(Debug)] /// Incoming request type pub(crate) enum PayloadType { None, Payload(PayloadDecoder), Stream(PayloadDecoder), } impl Default for MessageDecoder { fn default() -> Self { MessageDecoder(PhantomData) } } impl Decoder for MessageDecoder { type Item = (T, PayloadType); type Error = ParseError; fn decode(&mut self, src: &mut BytesMut) -> Result, Self::Error> { T::decode(src) } } pub(crate) enum PayloadLength { Payload(PayloadType), UpgradeWebSocket, None, } impl PayloadLength { /// Returns true if variant is `None`. fn is_none(&self) -> bool { matches!(self, Self::None) } /// Returns true if variant is represents zero-length (not none) payload. fn is_zero(&self) -> bool { matches!( self, PayloadLength::Payload(PayloadType::Payload(PayloadDecoder { kind: Kind::Length(0) })) ) } } pub(crate) trait MessageType: Sized { fn set_connection_type(&mut self, conn_type: Option); fn set_expect(&mut self); fn headers_mut(&mut self) -> &mut HeaderMap; fn decode(src: &mut BytesMut) -> Result, ParseError>; fn set_headers( &mut self, slice: &Bytes, raw_headers: &[HeaderIndex], version: Version, ) -> Result { let mut ka = None; let mut has_upgrade_websocket = false; let mut expect = false; let mut chunked = false; let mut seen_te = false; let mut content_length = None; { let headers = self.headers_mut(); for idx in raw_headers.iter() { let name = HeaderName::from_bytes(&slice[idx.name.0..idx.name.1]).unwrap(); // SAFETY: httparse already checks header value is only visible ASCII bytes // from_maybe_shared_unchecked contains debug assertions so they are omitted here let value = unsafe { HeaderValue::from_maybe_shared_unchecked(slice.slice(idx.value.0..idx.value.1)) }; match name { header::CONTENT_LENGTH if content_length.is_some() => { debug!("multiple Content-Length"); return Err(ParseError::Header); } header::CONTENT_LENGTH => match value.to_str().map(str::trim) { Ok(val) if val.starts_with('+') => { debug!("illegal Content-Length: {:?}", val); return Err(ParseError::Header); } Ok(val) => { if let Ok(len) = val.parse::() { // accept 0 lengths here and remove them in `decode` after all // headers have been processed to prevent request smuggling issues content_length = Some(len); } else { debug!("illegal Content-Length: {:?}", val); return Err(ParseError::Header); } } Err(_) => { debug!("illegal Content-Length: {:?}", value); return Err(ParseError::Header); } }, // transfer-encoding header::TRANSFER_ENCODING if seen_te => { debug!("multiple Transfer-Encoding not allowed"); return Err(ParseError::Header); } header::TRANSFER_ENCODING if version == Version::HTTP_11 => { seen_te = true; if let Ok(val) = value.to_str().map(str::trim) { if val.eq_ignore_ascii_case("chunked") { chunked = true; } else if val.eq_ignore_ascii_case("identity") { // allow silently since multiple TE headers are already checked } else { debug!("illegal Transfer-Encoding: {:?}", val); return Err(ParseError::Header); } } else { return Err(ParseError::Header); } } // connection keep-alive state header::CONNECTION => { ka = if let Ok(conn) = value.to_str().map(str::trim) { if conn.eq_ignore_ascii_case("keep-alive") { Some(ConnectionType::KeepAlive) } else if conn.eq_ignore_ascii_case("close") { Some(ConnectionType::Close) } else if conn.eq_ignore_ascii_case("upgrade") { Some(ConnectionType::Upgrade) } else { None } } else { None }; } header::UPGRADE => { if let Ok(val) = 
value.to_str().map(str::trim) { if val.eq_ignore_ascii_case("websocket") { has_upgrade_websocket = true; } } } header::EXPECT => { let bytes = value.as_bytes(); if bytes.len() >= 4 && &bytes[0..4] == b"100-" { expect = true; } } _ => {} } headers.append(name, value); } } self.set_connection_type(ka); if expect { self.set_expect() } // https://datatracker.ietf.org/doc/html/rfc7230#section-3.3.3 if chunked { // Chunked encoding Ok(PayloadLength::Payload(PayloadType::Payload( PayloadDecoder::chunked(), ))) } else if has_upgrade_websocket { Ok(PayloadLength::UpgradeWebSocket) } else if let Some(len) = content_length { // Content-Length Ok(PayloadLength::Payload(PayloadType::Payload( PayloadDecoder::length(len), ))) } else { Ok(PayloadLength::None) } } } impl MessageType for Request { fn set_connection_type(&mut self, conn_type: Option) { if let Some(ctype) = conn_type { self.head_mut().set_connection_type(ctype); } } fn set_expect(&mut self) { self.head_mut().set_expect(); } fn headers_mut(&mut self) -> &mut HeaderMap { &mut self.head_mut().headers } fn decode(src: &mut BytesMut) -> Result, ParseError> { let mut headers: [HeaderIndex; MAX_HEADERS] = EMPTY_HEADER_INDEX_ARRAY; let (len, method, uri, ver, h_len) = { // SAFETY: // Create an uninitialized array of `MaybeUninit`. The `assume_init` is safe because the // type we are claiming to have initialized here is a bunch of `MaybeUninit`s, which // do not require initialization. let mut parsed = unsafe { MaybeUninit::<[MaybeUninit>; MAX_HEADERS]>::uninit() .assume_init() }; let mut req = httparse::Request::new(&mut []); match req.parse_with_uninit_headers(src, &mut parsed)? { httparse::Status::Complete(len) => { let method = Method::from_bytes(req.method.unwrap().as_bytes()) .map_err(|_| ParseError::Method)?; let uri = Uri::try_from(req.path.unwrap())?; let version = if req.version.unwrap() == 1 { Version::HTTP_11 } else { Version::HTTP_10 }; HeaderIndex::record(src, req.headers, &mut headers); (len, method, uri, version, req.headers.len()) } httparse::Status::Partial => { return if src.len() >= MAX_BUFFER_SIZE { trace!("MAX_BUFFER_SIZE unprocessed data reached, closing"); Err(ParseError::TooLarge) } else { // Return None to notify more read are needed for parsing request Ok(None) }; } } }; let mut msg = Request::new(); // convert headers let mut length = msg.set_headers(&src.split_to(len).freeze(), &headers[..h_len], ver)?; // disallow HTTP/1.0 POST requests that do not contain a Content-Length headers // see https://datatracker.ietf.org/doc/html/rfc1945#section-7.2.2 if ver == Version::HTTP_10 && method == Method::POST && length.is_none() { debug!("no Content-Length specified for HTTP/1.0 POST request"); return Err(ParseError::Header); } // Remove CL value if 0 now that all headers and HTTP/1.0 special cases are processed. // Protects against some request smuggling attacks. // See https://github.com/actix/actix-web/issues/2767. 
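// Once the zero value is discarded, a request that declared `Content-Length: 0`
// is treated exactly like one that carried no body-length headers at all:
// `PayloadLength::None` maps to `PayloadType::None` below (CONNECT requests are
// the exception and get an EOF stream), so no payload decoder is attached.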
if length.is_zero() { length = PayloadLength::None; } // payload decoder let decoder = match length { PayloadLength::Payload(pl) => pl, PayloadLength::UpgradeWebSocket => { // upgrade (WebSocket) PayloadType::Stream(PayloadDecoder::eof()) } PayloadLength::None => { if method == Method::CONNECT { PayloadType::Stream(PayloadDecoder::eof()) } else { PayloadType::None } } }; let head = msg.head_mut(); head.uri = uri; head.method = method; head.version = ver; Ok(Some((msg, decoder))) } } impl MessageType for ResponseHead { fn set_connection_type(&mut self, conn_type: Option) { if let Some(ctype) = conn_type { ResponseHead::set_connection_type(self, ctype); } } fn set_expect(&mut self) {} fn headers_mut(&mut self) -> &mut HeaderMap { &mut self.headers } fn decode(src: &mut BytesMut) -> Result, ParseError> { let mut headers: [HeaderIndex; MAX_HEADERS] = EMPTY_HEADER_INDEX_ARRAY; let (len, ver, status, h_len) = { // SAFETY: // Create an uninitialized array of `MaybeUninit`. The `assume_init` is safe because the // type we are claiming to have initialized here is a bunch of `MaybeUninit`s, which // do not require initialization. let mut parsed = unsafe { MaybeUninit::<[MaybeUninit>; MAX_HEADERS]>::uninit() .assume_init() }; let mut res = httparse::Response::new(&mut []); let mut config = httparse::ParserConfig::default(); config.allow_spaces_after_header_name_in_responses(true); match config.parse_response_with_uninit_headers(&mut res, src, &mut parsed)? { httparse::Status::Complete(len) => { let version = if res.version.unwrap() == 1 { Version::HTTP_11 } else { Version::HTTP_10 }; let status = StatusCode::from_u16(res.code.unwrap()).map_err(|_| ParseError::Status)?; HeaderIndex::record(src, res.headers, &mut headers); (len, version, status, res.headers.len()) } httparse::Status::Partial => { return if src.len() >= MAX_BUFFER_SIZE { error!("MAX_BUFFER_SIZE unprocessed data reached, closing"); Err(ParseError::TooLarge) } else { Ok(None) } } } }; let mut msg = ResponseHead::new(status); msg.version = ver; // convert headers let mut length = msg.set_headers(&src.split_to(len).freeze(), &headers[..h_len], ver)?; // Remove CL value if 0 now that all headers and HTTP/1.0 special cases are processed. // Protects against some request smuggling attacks. // See https://github.com/actix/actix-web/issues/2767. 
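// As with requests, a declared zero length is discarded here; what differs for
// responses is the fallback below: with no usable length, a `101 Switching
// Protocols` response or an HTTP/1.0 response is still read to EOF, while an
// HTTP/1.1 response ends up with no payload decoder at all.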
if length.is_zero() { length = PayloadLength::None; } // message payload let decoder = if let PayloadLength::Payload(pl) = length { pl } else if status == StatusCode::SWITCHING_PROTOCOLS { // switching protocol or connect PayloadType::Stream(PayloadDecoder::eof()) } else { // for HTTP/1.0 read to eof and close connection if msg.version == Version::HTTP_10 { msg.set_connection_type(ConnectionType::Close); PayloadType::Payload(PayloadDecoder::eof()) } else { PayloadType::None } }; Ok(Some((msg, decoder))) } } #[derive(Clone, Copy)] pub(crate) struct HeaderIndex { pub(crate) name: (usize, usize), pub(crate) value: (usize, usize), } pub(crate) const EMPTY_HEADER_INDEX: HeaderIndex = HeaderIndex { name: (0, 0), value: (0, 0), }; pub(crate) const EMPTY_HEADER_INDEX_ARRAY: [HeaderIndex; MAX_HEADERS] = [EMPTY_HEADER_INDEX; MAX_HEADERS]; impl HeaderIndex { pub(crate) fn record( bytes: &[u8], headers: &[httparse::Header<'_>], indices: &mut [HeaderIndex], ) { let bytes_ptr = bytes.as_ptr() as usize; for (header, indices) in headers.iter().zip(indices.iter_mut()) { let name_start = header.name.as_ptr() as usize - bytes_ptr; let name_end = name_start + header.name.len(); indices.name = (name_start, name_end); let value_start = header.value.as_ptr() as usize - bytes_ptr; let value_end = value_start + header.value.len(); indices.value = (value_start, value_end); } } } #[derive(Debug, Clone, PartialEq, Eq)] /// Chunk type yielded while decoding a payload. pub enum PayloadItem { Chunk(Bytes), Eof, } /// Decoder that can handle different payload types. /// /// If a message body does not use `Transfer-Encoding`, it should include a `Content-Length`. #[derive(Debug, Clone, PartialEq, Eq)] pub struct PayloadDecoder { kind: Kind, } impl PayloadDecoder { /// Constructs a fixed-length payload decoder. pub fn length(x: u64) -> PayloadDecoder { PayloadDecoder { kind: Kind::Length(x), } } /// Constructs a chunked encoding decoder. pub fn chunked() -> PayloadDecoder { PayloadDecoder { kind: Kind::Chunked(ChunkedState::Size, 0), } } /// Creates an decoder that yields chunks until the stream returns EOF. pub fn eof() -> PayloadDecoder { PayloadDecoder { kind: Kind::Eof } } } #[derive(Debug, Clone, PartialEq, Eq)] enum Kind { /// A reader used when a `Content-Length` header is passed with a positive integer. Length(u64), /// A reader used when `Transfer-Encoding` is `chunked`. Chunked(ChunkedState, u64), /// A reader used for responses that don't indicate a length or chunked. /// /// Note: This should only used for `Response`s. It is illegal for a `Request` to be made /// without either of `Content-Length` and `Transfer-Encoding: chunked` missing, as explained /// in [RFC 7230 §3.3.3]: /// /// > If a Transfer-Encoding header field is present in a response and the chunked transfer /// > coding is not the final encoding, the message body length is determined by reading the /// > connection until it is closed by the server. If a Transfer-Encoding header field is /// > present in a request and the chunked transfer coding is not the final encoding, the /// > message body length cannot be determined reliably; the server MUST respond with the 400 /// > (Bad Request) status code and then close the connection. 
/// /// [RFC 7230 §3.3.3]: https://datatracker.ietf.org/doc/html/rfc7230#section-3.3.3 Eof, } impl Decoder for PayloadDecoder { type Item = PayloadItem; type Error = io::Error; fn decode(&mut self, src: &mut BytesMut) -> Result, Self::Error> { match self.kind { Kind::Length(ref mut remaining) => { if *remaining == 0 { Ok(Some(PayloadItem::Eof)) } else { if src.is_empty() { return Ok(None); } let len = src.len() as u64; let buf; if *remaining > len { buf = src.split().freeze(); *remaining -= len; } else { buf = src.split_to(*remaining as usize).freeze(); *remaining = 0; }; trace!("Length read: {}", buf.len()); Ok(Some(PayloadItem::Chunk(buf))) } } Kind::Chunked(ref mut state, ref mut size) => { loop { let mut buf = None; // advances the chunked state *state = match state.step(src, size, &mut buf) { Poll::Pending => return Ok(None), Poll::Ready(Ok(state)) => state, Poll::Ready(Err(err)) => return Err(err), }; if *state == ChunkedState::End { trace!("End of chunked stream"); return Ok(Some(PayloadItem::Eof)); } if let Some(buf) = buf { return Ok(Some(PayloadItem::Chunk(buf))); } if src.is_empty() { return Ok(None); } } } Kind::Eof => { if src.is_empty() { Ok(None) } else { Ok(Some(PayloadItem::Chunk(src.split().freeze()))) } } } } } #[cfg(test)] mod tests { use super::*; use crate::{header::SET_COOKIE, HttpMessage as _}; impl PayloadType { pub(crate) fn unwrap(self) -> PayloadDecoder { match self { PayloadType::Payload(pl) => pl, _ => panic!(), } } pub(crate) fn is_unhandled(&self) -> bool { matches!(self, PayloadType::Stream(_)) } } impl PayloadItem { pub(crate) fn chunk(self) -> Bytes { match self { PayloadItem::Chunk(chunk) => chunk, _ => panic!("error"), } } pub(crate) fn eof(&self) -> bool { matches!(*self, PayloadItem::Eof) } } macro_rules! parse_ready { ($e:expr) => {{ match MessageDecoder::::default().decode($e) { Ok(Some((msg, _))) => msg, Ok(_) => unreachable!("Eof during parsing http request"), Err(err) => unreachable!("Error during parsing http request: {:?}", err), } }}; } macro_rules! 
expect_parse_err { ($e:expr) => {{ match MessageDecoder::::default().decode($e) { Err(err) => match err { ParseError::Io(_) => unreachable!("Parse error expected"), _ => {} }, _ => unreachable!("Error expected"), } }}; } #[test] fn test_parse() { let mut buf = BytesMut::from("GET /test HTTP/1.1\r\n\r\n"); let mut reader = MessageDecoder::::default(); match reader.decode(&mut buf) { Ok(Some((req, _))) => { assert_eq!(req.version(), Version::HTTP_11); assert_eq!(*req.method(), Method::GET); assert_eq!(req.path(), "/test"); } Ok(_) | Err(_) => unreachable!("Error during parsing http request"), } } #[test] fn test_parse_partial() { let mut buf = BytesMut::from("PUT /test HTTP/1"); let mut reader = MessageDecoder::::default(); assert!(reader.decode(&mut buf).unwrap().is_none()); buf.extend(b".1\r\n\r\n"); let (req, _) = reader.decode(&mut buf).unwrap().unwrap(); assert_eq!(req.version(), Version::HTTP_11); assert_eq!(*req.method(), Method::PUT); assert_eq!(req.path(), "/test"); } #[test] fn parse_h09_reject() { let mut buf = BytesMut::from( "GET /test1 HTTP/0.9\r\n\ \r\n", ); let mut reader = MessageDecoder::::default(); reader.decode(&mut buf).unwrap_err(); let mut buf = BytesMut::from( "POST /test2 HTTP/0.9\r\n\ Content-Length: 3\r\n\ \r\n abc", ); let mut reader = MessageDecoder::::default(); reader.decode(&mut buf).unwrap_err(); } #[test] fn parse_h10_get() { let mut buf = BytesMut::from( "GET /test1 HTTP/1.0\r\n\ \r\n", ); let mut reader = MessageDecoder::::default(); let (req, _) = reader.decode(&mut buf).unwrap().unwrap(); assert_eq!(req.version(), Version::HTTP_10); assert_eq!(*req.method(), Method::GET); assert_eq!(req.path(), "/test1"); let mut buf = BytesMut::from( "GET /test2 HTTP/1.0\r\n\ Content-Length: 0\r\n\ \r\n", ); let mut reader = MessageDecoder::::default(); let (req, _) = reader.decode(&mut buf).unwrap().unwrap(); assert_eq!(req.version(), Version::HTTP_10); assert_eq!(*req.method(), Method::GET); assert_eq!(req.path(), "/test2"); let mut buf = BytesMut::from( "GET /test3 HTTP/1.0\r\n\ Content-Length: 3\r\n\ \r\n abc", ); let mut reader = MessageDecoder::::default(); let (req, _) = reader.decode(&mut buf).unwrap().unwrap(); assert_eq!(req.version(), Version::HTTP_10); assert_eq!(*req.method(), Method::GET); assert_eq!(req.path(), "/test3"); } #[test] fn parse_h10_post() { let mut buf = BytesMut::from( "POST /test1 HTTP/1.0\r\n\ Content-Length: 3\r\n\ \r\n\ abc", ); let mut reader = MessageDecoder::::default(); let (req, _) = reader.decode(&mut buf).unwrap().unwrap(); assert_eq!(req.version(), Version::HTTP_10); assert_eq!(*req.method(), Method::POST); assert_eq!(req.path(), "/test1"); let mut buf = BytesMut::from( "POST /test2 HTTP/1.0\r\n\ Content-Length: 0\r\n\ \r\n", ); let mut reader = MessageDecoder::::default(); let (req, _) = reader.decode(&mut buf).unwrap().unwrap(); assert_eq!(req.version(), Version::HTTP_10); assert_eq!(*req.method(), Method::POST); assert_eq!(req.path(), "/test2"); let mut buf = BytesMut::from( "POST /test3 HTTP/1.0\r\n\ \r\n", ); let mut reader = MessageDecoder::::default(); let err = reader.decode(&mut buf).unwrap_err(); assert!(err.to_string().contains("Header")) } #[test] fn test_parse_body() { let mut buf = BytesMut::from("GET /test HTTP/1.1\r\nContent-Length: 4\r\n\r\nbody"); let mut reader = MessageDecoder::::default(); let (req, pl) = reader.decode(&mut buf).unwrap().unwrap(); let mut pl = pl.unwrap(); assert_eq!(req.version(), Version::HTTP_11); assert_eq!(*req.method(), Method::GET); assert_eq!(req.path(), "/test"); assert_eq!( 
pl.decode(&mut buf).unwrap().unwrap().chunk().as_ref(), b"body" ); } #[test] fn test_parse_body_crlf() { let mut buf = BytesMut::from("\r\nGET /test HTTP/1.1\r\nContent-Length: 4\r\n\r\nbody"); let mut reader = MessageDecoder::::default(); let (req, pl) = reader.decode(&mut buf).unwrap().unwrap(); let mut pl = pl.unwrap(); assert_eq!(req.version(), Version::HTTP_11); assert_eq!(*req.method(), Method::GET); assert_eq!(req.path(), "/test"); assert_eq!( pl.decode(&mut buf).unwrap().unwrap().chunk().as_ref(), b"body" ); } #[test] fn test_parse_partial_eof() { let mut buf = BytesMut::from("GET /test HTTP/1.1\r\n"); let mut reader = MessageDecoder::::default(); assert!(reader.decode(&mut buf).unwrap().is_none()); buf.extend(b"\r\n"); let (req, _) = reader.decode(&mut buf).unwrap().unwrap(); assert_eq!(req.version(), Version::HTTP_11); assert_eq!(*req.method(), Method::GET); assert_eq!(req.path(), "/test"); } #[test] fn test_headers_split_field() { let mut buf = BytesMut::from("GET /test HTTP/1.1\r\n"); let mut reader = MessageDecoder::::default(); assert! { reader.decode(&mut buf).unwrap().is_none() } buf.extend(b"t"); assert! { reader.decode(&mut buf).unwrap().is_none() } buf.extend(b"es"); assert! { reader.decode(&mut buf).unwrap().is_none() } buf.extend(b"t: value\r\n\r\n"); let (req, _) = reader.decode(&mut buf).unwrap().unwrap(); assert_eq!(req.version(), Version::HTTP_11); assert_eq!(*req.method(), Method::GET); assert_eq!(req.path(), "/test"); assert_eq!( req.headers() .get(HeaderName::try_from("test").unwrap()) .unwrap() .as_bytes(), b"value" ); } #[test] fn test_headers_multi_value() { let mut buf = BytesMut::from( "GET /test HTTP/1.1\r\n\ Set-Cookie: c1=cookie1\r\n\ Set-Cookie: c2=cookie2\r\n\r\n", ); let mut reader = MessageDecoder::::default(); let (req, _) = reader.decode(&mut buf).unwrap().unwrap(); let val: Vec<_> = req .headers() .get_all(SET_COOKIE) .map(|v| v.to_str().unwrap().to_owned()) .collect(); assert_eq!(val[0], "c1=cookie1"); assert_eq!(val[1], "c2=cookie2"); } #[test] fn test_conn_default_1_0() { let req = parse_ready!(&mut BytesMut::from("GET /test HTTP/1.0\r\n\r\n")); assert_eq!(req.head().connection_type(), ConnectionType::Close); } #[test] fn test_conn_default_1_1() { let req = parse_ready!(&mut BytesMut::from("GET /test HTTP/1.1\r\n\r\n")); assert_eq!(req.head().connection_type(), ConnectionType::KeepAlive); } #[test] fn test_conn_close() { let req = parse_ready!(&mut BytesMut::from( "GET /test HTTP/1.1\r\n\ connection: close\r\n\r\n", )); assert_eq!(req.head().connection_type(), ConnectionType::Close); let req = parse_ready!(&mut BytesMut::from( "GET /test HTTP/1.1\r\n\ connection: Close\r\n\r\n", )); assert_eq!(req.head().connection_type(), ConnectionType::Close); } #[test] fn test_conn_close_1_0() { let req = parse_ready!(&mut BytesMut::from( "GET /test HTTP/1.0\r\n\ connection: close\r\n\r\n", )); assert_eq!(req.head().connection_type(), ConnectionType::Close); } #[test] fn test_conn_keep_alive_1_0() { let req = parse_ready!(&mut BytesMut::from( "GET /test HTTP/1.0\r\n\ connection: keep-alive\r\n\r\n", )); assert_eq!(req.head().connection_type(), ConnectionType::KeepAlive); let req = parse_ready!(&mut BytesMut::from( "GET /test HTTP/1.0\r\n\ connection: Keep-Alive\r\n\r\n", )); assert_eq!(req.head().connection_type(), ConnectionType::KeepAlive); } #[test] fn test_conn_keep_alive_1_1() { let req = parse_ready!(&mut BytesMut::from( "GET /test HTTP/1.1\r\n\ connection: keep-alive\r\n\r\n", )); assert_eq!(req.head().connection_type(), ConnectionType::KeepAlive); } 
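// The two tests below are illustrative additions, not part of the upstream
// suite; they only use items already defined in this module (`MessageDecoder`,
// `PayloadDecoder`, `PayloadType`, `Request`, and the `chunk()`/`eof()` test
// helpers above).

#[test]
fn zero_content_length_yields_no_payload_decoder() {
    // a declared `Content-Length: 0` is stripped by the smuggling guard in
    // `decode`, so no payload decoder is attached to the request
    let mut buf = BytesMut::from(
        "GET /test HTTP/1.1\r\n\
         Content-Length: 0\r\n\r\n",
    );

    let mut reader = MessageDecoder::<Request>::default();
    let (_req, pl) = reader.decode(&mut buf).unwrap().unwrap();

    assert!(matches!(pl, PayloadType::None));
}

#[test]
fn length_decoder_yields_chunks_then_eof() {
    // drive the fixed-length decoder directly: chunks are yielded as bytes
    // arrive and `Eof` is returned once the declared length has been consumed
    let mut pl = PayloadDecoder::length(8);
    let mut buf = BytesMut::from(&b"abcd"[..]);

    assert_eq!(
        pl.decode(&mut buf).unwrap().unwrap().chunk().as_ref(),
        b"abcd"
    );

    // only half the body has arrived; decoder asks for more data
    assert!(pl.decode(&mut buf).unwrap().is_none());

    buf.extend_from_slice(b"efgh");
    assert_eq!(
        pl.decode(&mut buf).unwrap().unwrap().chunk().as_ref(),
        b"efgh"
    );
    assert!(pl.decode(&mut buf).unwrap().unwrap().eof());
}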
#[test] fn test_conn_other_1_0() { let req = parse_ready!(&mut BytesMut::from( "GET /test HTTP/1.0\r\n\ connection: other\r\n\r\n", )); assert_eq!(req.head().connection_type(), ConnectionType::Close); } #[test] fn test_conn_other_1_1() { let req = parse_ready!(&mut BytesMut::from( "GET /test HTTP/1.1\r\n\ connection: other\r\n\r\n", )); assert_eq!(req.head().connection_type(), ConnectionType::KeepAlive); } #[test] fn test_conn_upgrade() { let req = parse_ready!(&mut BytesMut::from( "GET /test HTTP/1.1\r\n\ upgrade: websockets\r\n\ connection: upgrade\r\n\r\n", )); assert!(req.upgrade()); assert_eq!(req.head().connection_type(), ConnectionType::Upgrade); let req = parse_ready!(&mut BytesMut::from( "GET /test HTTP/1.1\r\n\ upgrade: Websockets\r\n\ connection: Upgrade\r\n\r\n", )); assert!(req.upgrade()); assert_eq!(req.head().connection_type(), ConnectionType::Upgrade); } #[test] fn test_conn_upgrade_connect_method() { let req = parse_ready!(&mut BytesMut::from( "CONNECT /test HTTP/1.1\r\n\ content-type: text/plain\r\n\r\n", )); assert!(req.upgrade()); } #[test] fn test_headers_bad_content_length() { // string CL expect_parse_err!(&mut BytesMut::from( "GET /test HTTP/1.1\r\n\ content-length: line\r\n\r\n", )); // negative CL expect_parse_err!(&mut BytesMut::from( "GET /test HTTP/1.1\r\n\ content-length: -1\r\n\r\n", )); } #[test] fn octal_ish_cl_parsed_as_decimal() { let mut buf = BytesMut::from( "POST /test HTTP/1.1\r\n\ content-length: 011\r\n\r\n", ); let mut reader = MessageDecoder::::default(); let (_req, pl) = reader.decode(&mut buf).unwrap().unwrap(); assert!(matches!( pl, PayloadType::Payload(pl) if pl == PayloadDecoder::length(11) )); } #[test] fn test_invalid_header() { expect_parse_err!(&mut BytesMut::from( "GET /test HTTP/1.1\r\n\ test line\r\n\r\n", )); } #[test] fn test_invalid_name() { expect_parse_err!(&mut BytesMut::from( "GET /test HTTP/1.1\r\n\ test[]: line\r\n\r\n", )); } #[test] fn test_http_request_bad_status_line() { expect_parse_err!(&mut BytesMut::from("getpath \r\n\r\n")); } #[test] fn test_http_request_upgrade_websocket() { let mut buf = BytesMut::from( "GET /test HTTP/1.1\r\n\ connection: upgrade\r\n\ upgrade: websocket\r\n\r\n\ some raw data", ); let mut reader = MessageDecoder::::default(); let (req, pl) = reader.decode(&mut buf).unwrap().unwrap(); assert_eq!(req.head().connection_type(), ConnectionType::Upgrade); assert!(req.upgrade()); assert!(pl.is_unhandled()); } #[test] fn test_http_request_upgrade_h2c() { let mut buf = BytesMut::from( "GET /test HTTP/1.1\r\n\ connection: upgrade, http2-settings\r\n\ upgrade: h2c\r\n\ http2-settings: dummy\r\n\r\n", ); let mut reader = MessageDecoder::::default(); let (req, pl) = reader.decode(&mut buf).unwrap().unwrap(); // `connection: upgrade, http2-settings` doesn't work properly.. // see MessageType::set_headers(). 
// // The line below should be: // assert_eq!(req.head().connection_type(), ConnectionType::Upgrade); assert_eq!(req.head().connection_type(), ConnectionType::KeepAlive); assert!(req.upgrade()); assert!(!pl.is_unhandled()); } #[test] fn test_http_request_parser_utf8() { let req = parse_ready!(&mut BytesMut::from( "GET /test HTTP/1.1\r\n\ x-test: тест\r\n\r\n", )); assert_eq!( req.headers().get("x-test").unwrap().as_bytes(), "тест".as_bytes() ); } #[test] fn test_http_request_parser_two_slashes() { let req = parse_ready!(&mut BytesMut::from("GET //path HTTP/1.1\r\n\r\n")); assert_eq!(req.path(), "//path"); } #[test] fn test_http_request_parser_bad_method() { expect_parse_err!(&mut BytesMut::from("!12%()+=~$ /get HTTP/1.1\r\n\r\n")); } #[test] fn test_http_request_parser_bad_version() { expect_parse_err!(&mut BytesMut::from("GET //get HT/11\r\n\r\n")); } #[test] fn test_response_http10_read_until_eof() { let mut buf = BytesMut::from("HTTP/1.0 200 Ok\r\n\r\ntest data"); let mut reader = MessageDecoder::::default(); let (_msg, pl) = reader.decode(&mut buf).unwrap().unwrap(); let mut pl = pl.unwrap(); let chunk = pl.decode(&mut buf).unwrap().unwrap(); assert_eq!(chunk, PayloadItem::Chunk(Bytes::from_static(b"test data"))); } #[test] fn hrs_multiple_content_length() { expect_parse_err!(&mut BytesMut::from( "GET / HTTP/1.1\r\n\ Host: example.com\r\n\ Content-Length: 4\r\n\ Content-Length: 2\r\n\ \r\n\ abcd", )); expect_parse_err!(&mut BytesMut::from( "GET / HTTP/1.1\r\n\ Host: example.com\r\n\ Content-Length: 0\r\n\ Content-Length: 2\r\n\ \r\n\ ab", )); } #[test] fn hrs_content_length_plus() { expect_parse_err!(&mut BytesMut::from( "GET / HTTP/1.1\r\n\ Host: example.com\r\n\ Content-Length: +3\r\n\ \r\n\ 000", )); } #[test] fn hrs_te_http10() { // in HTTP/1.0 transfer encoding is ignored and must therefore contain a CL header expect_parse_err!(&mut BytesMut::from( "POST / HTTP/1.0\r\n\ Host: example.com\r\n\ Transfer-Encoding: chunked\r\n\ \r\n\ 3\r\n\ aaa\r\n\ 0\r\n\ ", )); } #[test] fn hrs_cl_and_te_http10() { // in HTTP/1.0 transfer encoding is simply ignored so it's fine to have both let mut buf = BytesMut::from( "GET / HTTP/1.0\r\n\ Host: example.com\r\n\ Content-Length: 3\r\n\ Transfer-Encoding: chunked\r\n\ \r\n\ 000", ); parse_ready!(&mut buf); } #[test] fn hrs_unknown_transfer_encoding() { let mut buf = BytesMut::from( "GET / HTTP/1.1\r\n\ Host: example.com\r\n\ Transfer-Encoding: JUNK\r\n\ Transfer-Encoding: chunked\r\n\ \r\n\ 5\r\n\ hello\r\n\ 0", ); expect_parse_err!(&mut buf); } #[test] fn hrs_multiple_transfer_encoding() { let mut buf = BytesMut::from( "GET / HTTP/1.1\r\n\ Host: example.com\r\n\ Content-Length: 51\r\n\ Transfer-Encoding: identity\r\n\ Transfer-Encoding: chunked\r\n\ \r\n\ 0\r\n\ \r\n\ GET /forbidden HTTP/1.1\r\n\ Host: example.com\r\n\r\n", ); expect_parse_err!(&mut buf); } #[test] fn transfer_encoding_agrees() { let mut buf = BytesMut::from( "GET /test HTTP/1.1\r\n\ Host: example.com\r\n\ Content-Length: 3\r\n\ Transfer-Encoding: identity\r\n\ \r\n\ 0\r\n", ); let mut reader = MessageDecoder::::default(); let (_msg, pl) = reader.decode(&mut buf).unwrap().unwrap(); let mut pl = pl.unwrap(); let chunk = pl.decode(&mut buf).unwrap().unwrap(); assert_eq!(chunk, PayloadItem::Chunk(Bytes::from_static(b"0\r\n"))); } } actix-http-3.9.0/src/h1/dispatcher.rs000064400000000000000000001333701046102023000155240ustar 00000000000000use std::{ collections::VecDeque, fmt, future::Future, io, mem, net, pin::Pin, rc::Rc, task::{Context, Poll}, }; use actix_codec::{Framed, 
FramedParts}; use actix_rt::time::sleep_until; use actix_service::Service; use bitflags::bitflags; use bytes::{Buf, BytesMut}; use futures_core::ready; use pin_project_lite::pin_project; use tokio::io::{AsyncRead, AsyncWrite}; use tokio_util::codec::{Decoder as _, Encoder as _}; use tracing::{error, trace}; use super::{ codec::Codec, decoder::MAX_BUFFER_SIZE, payload::{Payload, PayloadSender, PayloadStatus}, timer::TimerState, Message, MessageType, }; use crate::{ body::{BodySize, BoxBody, MessageBody}, config::ServiceConfig, error::{DispatchError, ParseError, PayloadError}, service::HttpFlow, Error, Extensions, OnConnectData, Request, Response, StatusCode, }; const LW_BUFFER_SIZE: usize = 1024; const HW_BUFFER_SIZE: usize = 1024 * 8; const MAX_PIPELINED_MESSAGES: usize = 16; bitflags! { #[derive(Debug, Clone, Copy)] pub struct Flags: u8 { /// Set when stream is read for first time. const STARTED = 0b0000_0001; /// Set when full request-response cycle has occurred. const FINISHED = 0b0000_0010; /// Set if connection is in keep-alive (inactive) state. const KEEP_ALIVE = 0b0000_0100; /// Set if in shutdown procedure. const SHUTDOWN = 0b0000_1000; /// Set if read-half is disconnected. const READ_DISCONNECT = 0b0001_0000; /// Set if write-half is disconnected. const WRITE_DISCONNECT = 0b0010_0000; } } // there's 2 versions of Dispatcher state because of: // https://github.com/taiki-e/pin-project-lite/issues/3 // // tl;dr: pin-project-lite doesn't play well with other attribute macros #[cfg(not(test))] pin_project! { /// Dispatcher for HTTP/1.1 protocol pub struct Dispatcher where S: Service, S::Error: Into>, B: MessageBody, X: Service, X::Error: Into>, U: Service<(Request, Framed), Response = ()>, U::Error: fmt::Display, { #[pin] inner: DispatcherState, } } #[cfg(test)] pin_project! { /// Dispatcher for HTTP/1.1 protocol pub struct Dispatcher where S: Service, S::Error: Into>, B: MessageBody, X: Service, X::Error: Into>, U: Service<(Request, Framed), Response = ()>, U::Error: fmt::Display, { #[pin] pub(super) inner: DispatcherState, // used in tests pub(super) poll_count: u64, } } pin_project! { #[project = DispatcherStateProj] pub(super) enum DispatcherState where S: Service, S::Error: Into>, B: MessageBody, X: Service, X::Error: Into>, U: Service<(Request, Framed), Response = ()>, U::Error: fmt::Display, { Normal { #[pin] inner: InnerDispatcher }, Upgrade { #[pin] fut: U::Future }, } } pin_project! { #[project = InnerDispatcherProj] pub(super) struct InnerDispatcher where S: Service, S::Error: Into>, B: MessageBody, X: Service, X::Error: Into>, U: Service<(Request, Framed), Response = ()>, U::Error: fmt::Display, { flow: Rc>, pub(super) flags: Flags, peer_addr: Option, conn_data: Option>, config: ServiceConfig, error: Option, #[pin] pub(super) state: State, // when Some(_) dispatcher is in state of receiving request payload payload: Option, messages: VecDeque, head_timer: TimerState, ka_timer: TimerState, shutdown_timer: TimerState, pub(super) io: Option, read_buf: BytesMut, write_buf: BytesMut, codec: Codec, } } enum DispatcherMessage { Item(Request), Upgrade(Request), Error(Response<()>), } pin_project! 
{ #[project = StateProj] pub(super) enum State where S: Service, X: Service, B: MessageBody, { None, ExpectCall { #[pin] fut: X::Future }, ServiceCall { #[pin] fut: S::Future }, SendPayload { #[pin] body: B }, SendErrorPayload { #[pin] body: BoxBody }, } } impl State where S: Service, X: Service, B: MessageBody, { pub(super) fn is_none(&self) -> bool { matches!(self, State::None) } } impl fmt::Debug for State where S: Service, X: Service, B: MessageBody, { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { match self { Self::None => write!(f, "State::None"), Self::ExpectCall { .. } => f.debug_struct("State::ExpectCall").finish_non_exhaustive(), Self::ServiceCall { .. } => { f.debug_struct("State::ServiceCall").finish_non_exhaustive() } Self::SendPayload { .. } => { f.debug_struct("State::SendPayload").finish_non_exhaustive() } Self::SendErrorPayload { .. } => f .debug_struct("State::SendErrorPayload") .finish_non_exhaustive(), } } } #[derive(Debug)] enum PollResponse { Upgrade(Request), DoNothing, DrainWriteBuf, } impl Dispatcher where T: AsyncRead + AsyncWrite + Unpin, S: Service, S::Error: Into>, S::Response: Into>, B: MessageBody, X: Service, X::Error: Into>, U: Service<(Request, Framed), Response = ()>, U::Error: fmt::Display, { /// Create HTTP/1 dispatcher. pub(crate) fn new( io: T, flow: Rc>, config: ServiceConfig, peer_addr: Option, conn_data: OnConnectData, ) -> Self { Dispatcher { inner: DispatcherState::Normal { inner: InnerDispatcher { flow, flags: Flags::empty(), peer_addr, conn_data: conn_data.0.map(Rc::new), config: config.clone(), error: None, state: State::None, payload: None, messages: VecDeque::new(), head_timer: TimerState::new(config.client_request_deadline().is_some()), ka_timer: TimerState::new(config.keep_alive().enabled()), shutdown_timer: TimerState::new(config.client_disconnect_deadline().is_some()), io: Some(io), read_buf: BytesMut::with_capacity(HW_BUFFER_SIZE), write_buf: BytesMut::with_capacity(HW_BUFFER_SIZE), codec: Codec::new(config), }, }, #[cfg(test)] poll_count: 0, } } } impl InnerDispatcher where T: AsyncRead + AsyncWrite + Unpin, S: Service, S::Error: Into>, S::Response: Into>, B: MessageBody, X: Service, X::Error: Into>, U: Service<(Request, Framed), Response = ()>, U::Error: fmt::Display, { fn can_read(&self, cx: &mut Context<'_>) -> bool { if self.flags.contains(Flags::READ_DISCONNECT) { false } else if let Some(ref info) = self.payload { info.need_read(cx) == PayloadStatus::Read } else { true } } fn client_disconnected(self: Pin<&mut Self>) { let this = self.project(); this.flags .insert(Flags::READ_DISCONNECT | Flags::WRITE_DISCONNECT); if let Some(mut payload) = this.payload.take() { payload.set_error(PayloadError::Incomplete(None)); } } fn poll_flush(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { let InnerDispatcherProj { io, write_buf, .. } = self.project(); let mut io = Pin::new(io.as_mut().unwrap()); let len = write_buf.len(); let mut written = 0; while written < len { match io.as_mut().poll_write(cx, &write_buf[written..])? 
{ Poll::Ready(0) => { error!("write zero; closing"); return Poll::Ready(Err(io::Error::new(io::ErrorKind::WriteZero, ""))); } Poll::Ready(n) => written += n, Poll::Pending => { write_buf.advance(written); return Poll::Pending; } } } // everything has written to I/O; clear buffer write_buf.clear(); // flush the I/O and check if get blocked io.poll_flush(cx) } fn send_response_inner( self: Pin<&mut Self>, res: Response<()>, body: &impl MessageBody, ) -> Result { let this = self.project(); let size = body.size(); this.codec .encode(Message::Item((res, size)), this.write_buf) .map_err(|err| { if let Some(mut payload) = this.payload.take() { payload.set_error(PayloadError::Incomplete(None)); } DispatchError::Io(err) })?; Ok(size) } fn send_response( mut self: Pin<&mut Self>, res: Response<()>, body: B, ) -> Result<(), DispatchError> { let size = self.as_mut().send_response_inner(res, &body)?; let mut this = self.project(); this.state.set(match size { BodySize::None | BodySize::Sized(0) => { this.flags.insert(Flags::FINISHED); State::None } _ => State::SendPayload { body }, }); Ok(()) } fn send_error_response( mut self: Pin<&mut Self>, res: Response<()>, body: BoxBody, ) -> Result<(), DispatchError> { let size = self.as_mut().send_response_inner(res, &body)?; let mut this = self.project(); this.state.set(match size { BodySize::None | BodySize::Sized(0) => { this.flags.insert(Flags::FINISHED); State::None } _ => State::SendErrorPayload { body }, }); Ok(()) } fn send_continue(self: Pin<&mut Self>) { self.project() .write_buf .extend_from_slice(b"HTTP/1.1 100 Continue\r\n\r\n"); } fn poll_response( mut self: Pin<&mut Self>, cx: &mut Context<'_>, ) -> Result { 'res: loop { let mut this = self.as_mut().project(); match this.state.as_mut().project() { // no future is in InnerDispatcher state; pop next message StateProj::None => match this.messages.pop_front() { // handle request message Some(DispatcherMessage::Item(req)) => { // Handle `EXPECT: 100-Continue` header if req.head().expect() { // set InnerDispatcher state and continue loop to poll it let fut = this.flow.expect.call(req); this.state.set(State::ExpectCall { fut }); } else { // set InnerDispatcher state and continue loop to poll it let fut = this.flow.service.call(req); this.state.set(State::ServiceCall { fut }); }; } // handle error message Some(DispatcherMessage::Error(res)) => { // send_response would update InnerDispatcher state to SendPayload or None // (If response body is empty) // continue loop to poll it self.as_mut().send_error_response(res, BoxBody::new(()))?; } // return with upgrade request and poll it exclusively Some(DispatcherMessage::Upgrade(req)) => return Ok(PollResponse::Upgrade(req)), // all messages are dealt with None => { // start keep-alive if last request allowed it this.flags.set(Flags::KEEP_ALIVE, this.codec.keep_alive()); return Ok(PollResponse::DoNothing); } }, StateProj::ServiceCall { fut } => { match fut.poll(cx) { // service call resolved. send response. 
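// For reference, `send_response` (defined above) chooses the follow-up state
// from the size of the body it was handed, roughly:
//
//     BodySize::None | BodySize::Sized(0) -> State::None (FINISHED is set)
//     anything else                       -> State::SendPayload { body }
//
// and in the SendPayload state the body is encoded into `write_buf` only
// until it passes `h1::payload::MAX_BUFFER_SIZE`, at which point
// `PollResponse::DrainWriteBuf` asks the caller to flush to the socket
// before any more chunks are encoded.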
Poll::Ready(Ok(res)) => { let (res, body) = res.into().replace_body(()); self.as_mut().send_response(res, body)?; } // send service call error as response Poll::Ready(Err(err)) => { let res: Response = err.into(); let (res, body) = res.replace_body(()); self.as_mut().send_error_response(res, body)?; } // service call pending and could be waiting for more chunk messages // (pipeline message limit and/or payload can_read limit) Poll::Pending => { // no new message is decoded and no new payload is fed // nothing to do except waiting for new incoming data from client if !self.as_mut().poll_request(cx)? { return Ok(PollResponse::DoNothing); } // else loop } } } StateProj::SendPayload { mut body } => { // keep populate writer buffer until buffer size limit hit, // get blocked or finished. while this.write_buf.len() < super::payload::MAX_BUFFER_SIZE { match body.as_mut().poll_next(cx) { Poll::Ready(Some(Ok(item))) => { this.codec .encode(Message::Chunk(Some(item)), this.write_buf)?; } Poll::Ready(None) => { this.codec.encode(Message::Chunk(None), this.write_buf)?; // payload stream finished. // set state to None and handle next message this.state.set(State::None); this.flags.insert(Flags::FINISHED); continue 'res; } Poll::Ready(Some(Err(err))) => { let err = err.into(); tracing::error!("Response payload stream error: {err:?}"); this.flags.insert(Flags::FINISHED); return Err(DispatchError::Body(err)); } Poll::Pending => return Ok(PollResponse::DoNothing), } } // buffer is beyond max size // return and try to write the whole buffer to I/O stream. return Ok(PollResponse::DrainWriteBuf); } StateProj::SendErrorPayload { mut body } => { // TODO: de-dupe impl with SendPayload // keep populate writer buffer until buffer size limit hit, // get blocked or finished. while this.write_buf.len() < super::payload::MAX_BUFFER_SIZE { match body.as_mut().poll_next(cx) { Poll::Ready(Some(Ok(item))) => { this.codec .encode(Message::Chunk(Some(item)), this.write_buf)?; } Poll::Ready(None) => { this.codec.encode(Message::Chunk(None), this.write_buf)?; // payload stream finished // set state to None and handle next message this.state.set(State::None); this.flags.insert(Flags::FINISHED); continue 'res; } Poll::Ready(Some(Err(err))) => { tracing::error!("Response payload stream error: {err:?}"); this.flags.insert(Flags::FINISHED); return Err(DispatchError::Body( Error::new_body().with_cause(err).into(), )); } Poll::Pending => return Ok(PollResponse::DoNothing), } } // buffer is beyond max size // return and try to write the whole buffer to stream return Ok(PollResponse::DrainWriteBuf); } StateProj::ExpectCall { fut } => { trace!(" calling expect service"); match fut.poll(cx) { // expect resolved. write continue to buffer and set InnerDispatcher state // to service call. Poll::Ready(Ok(req)) => { this.write_buf .extend_from_slice(b"HTTP/1.1 100 Continue\r\n\r\n"); let fut = this.flow.service.call(req); this.state.set(State::ServiceCall { fut }); } // send expect error as response Poll::Ready(Err(err)) => { let res: Response = err.into(); let (res, body) = res.replace_body(()); self.as_mut().send_error_response(res, body)?; } // expect must be solved before progress can be made. 
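// Until the expect service resolves, nothing is written back. Once it
// accepts, the interim response appended to `write_buf` above is the fixed
// byte sequence
//
//     HTTP/1.1 100 Continue\r\n\r\n
//
// which signals the client to start sending the request body; only then is
// the real service future created and polled.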
Poll::Pending => return Ok(PollResponse::DoNothing), } } } } } fn handle_request( mut self: Pin<&mut Self>, req: Request, cx: &mut Context<'_>, ) -> Result<(), DispatchError> { // initialize dispatcher state { let mut this = self.as_mut().project(); // Handle `EXPECT: 100-Continue` header if req.head().expect() { // set dispatcher state to call expect handler let fut = this.flow.expect.call(req); this.state.set(State::ExpectCall { fut }); } else { // set dispatcher state to call service handler let fut = this.flow.service.call(req); this.state.set(State::ServiceCall { fut }); }; }; // eagerly poll the future once (or twice if expect is resolved immediately). loop { match self.as_mut().project().state.project() { StateProj::ExpectCall { fut } => { match fut.poll(cx) { // expect is resolved; continue loop and poll the service call branch. Poll::Ready(Ok(req)) => { self.as_mut().send_continue(); let mut this = self.as_mut().project(); let fut = this.flow.service.call(req); this.state.set(State::ServiceCall { fut }); continue; } // future is error; send response and return a result // on success to notify the dispatcher a new state is set and the outer loop // should be continued Poll::Ready(Err(err)) => { let res: Response = err.into(); let (res, body) = res.replace_body(()); return self.send_error_response(res, body); } // future is pending; return Ok(()) to notify that a new state is // set and the outer loop should be continue. Poll::Pending => return Ok(()), } } StateProj::ServiceCall { fut } => { // return no matter the service call future's result. return match fut.poll(cx) { // Future is resolved. Send response and return a result. On success // to notify the dispatcher a new state is set and the outer loop // should be continue. Poll::Ready(Ok(res)) => { let (res, body) = res.into().replace_body(()); self.as_mut().send_response(res, body) } // see the comment on ExpectCall state branch's Pending Poll::Pending => Ok(()), // see the comment on ExpectCall state branch's Ready(Err(_)) Poll::Ready(Err(err)) => { let res: Response = err.into(); let (res, body) = res.replace_body(()); self.as_mut().send_error_response(res, body) } }; } _ => { unreachable!("State must be set to ServiceCall or ExceptCall in handle_request") } } } } /// Process one incoming request. /// /// Returns true if any meaningful work was done. fn poll_request(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Result { let pipeline_queue_full = self.messages.len() >= MAX_PIPELINED_MESSAGES; let can_not_read = !self.can_read(cx); // limit amount of non-processed requests if pipeline_queue_full || can_not_read { return Ok(false); } let mut this = self.as_mut().project(); let mut updated = false; // decode from read buf as many full requests as possible loop { match this.codec.decode(this.read_buf) { Ok(Some(msg)) => { updated = true; match msg { Message::Item(mut req) => { // head timer only applies to first request on connection this.head_timer.clear(line!()); req.head_mut().peer_addr = *this.peer_addr; req.conn_data.clone_from(this.conn_data); match this.codec.message_type() { // request has no payload MessageType::None => {} // Request is upgradable. Add upgrade message and break. // Everything remaining in read buffer will be handed to // upgraded Request. 
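// Note on input back-pressure: the guard at the top of `poll_request`
// (above) skips decoding entirely when either
//
//     messages.len() >= MAX_PIPELINED_MESSAGES  // 16 queued requests
//     !can_read(cx)                             // payload consumer is full
//                                               // or read half disconnected
//
// so a slow service naturally limits how much pipelined input gets parsed.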
MessageType::Stream if this.flow.upgrade.is_some() => { this.messages.push_back(DispatcherMessage::Upgrade(req)); break; } // request is not upgradable MessageType::Payload | MessageType::Stream => { // PayloadSender and Payload are smart pointers share the // same state. PayloadSender is attached to dispatcher and used // to sink new chunked request data to state. Payload is // attached to Request and passed to Service::call where the // state can be collected and consumed. let (sender, payload) = Payload::create(false); *req.payload() = crate::Payload::H1 { payload }; *this.payload = Some(sender); } } // handle request early when no future in InnerDispatcher state. if this.state.is_none() { self.as_mut().handle_request(req, cx)?; this = self.as_mut().project(); } else { this.messages.push_back(DispatcherMessage::Item(req)); } } Message::Chunk(Some(chunk)) => { if let Some(ref mut payload) = this.payload { payload.feed_data(chunk); } else { error!("Internal server error: unexpected payload chunk"); this.flags.insert(Flags::READ_DISCONNECT); this.messages.push_back(DispatcherMessage::Error( Response::internal_server_error().drop_body(), )); *this.error = Some(DispatchError::InternalError); break; } } Message::Chunk(None) => { if let Some(mut payload) = this.payload.take() { payload.feed_eof(); } else { error!("Internal server error: unexpected eof"); this.flags.insert(Flags::READ_DISCONNECT); this.messages.push_back(DispatcherMessage::Error( Response::internal_server_error().drop_body(), )); *this.error = Some(DispatchError::InternalError); break; } } } } // decode is partial and buffer is not full yet // break and wait for more read Ok(None) => break, Err(ParseError::Io(err)) => { trace!("I/O error: {}", &err); self.as_mut().client_disconnected(); this = self.as_mut().project(); *this.error = Some(DispatchError::Io(err)); break; } Err(ParseError::TooLarge) => { trace!("request head was too big; returning 431 response"); if let Some(mut payload) = this.payload.take() { payload.set_error(PayloadError::Overflow); } // request heads that overflow buffer size return a 431 error this.messages .push_back(DispatcherMessage::Error(Response::with_body( StatusCode::REQUEST_HEADER_FIELDS_TOO_LARGE, (), ))); this.flags.insert(Flags::READ_DISCONNECT); *this.error = Some(ParseError::TooLarge.into()); break; } Err(err) => { trace!("parse error {}", &err); if let Some(mut payload) = this.payload.take() { payload.set_error(PayloadError::EncodingCorrupted); } // malformed requests should be responded with 400 this.messages.push_back(DispatcherMessage::Error( Response::bad_request().drop_body(), )); this.flags.insert(Flags::READ_DISCONNECT); *this.error = Some(err.into()); break; } } } Ok(updated) } fn poll_head_timer( mut self: Pin<&mut Self>, cx: &mut Context<'_>, ) -> Result<(), DispatchError> { let this = self.as_mut().project(); if let TimerState::Active { timer } = this.head_timer { if timer.as_mut().poll(cx).is_ready() { // timeout on first request (slow request) return 408 trace!("timed out on slow request; replying with 408 and closing connection"); let _ = self.as_mut().send_error_response( Response::with_body(StatusCode::REQUEST_TIMEOUT, ()), BoxBody::new(()), ); self.project().flags.insert(Flags::SHUTDOWN); } }; Ok(()) } fn poll_ka_timer(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Result<(), DispatchError> { let this = self.as_mut().project(); if let TimerState::Active { timer } = this.ka_timer { debug_assert!( this.flags.contains(Flags::KEEP_ALIVE), "keep-alive flag should be set when 
timer is active", ); debug_assert!( this.state.is_none(), "dispatcher should not be in keep-alive phase if state is not none: {:?}", this.state, ); // Assert removed by @robjtede on account of issue #2655. There are cases where an I/O // flush can be pending after entering the keep-alive state causing the subsequent flush // wake up to panic here. This appears to be a Linux-only problem. Leaving original code // below for posterity because a simple and reliable test could not be found to trigger // the behavior. // debug_assert!( // this.write_buf.is_empty(), // "dispatcher should not be in keep-alive phase if write_buf is not empty", // ); // keep-alive timer has timed out if timer.as_mut().poll(cx).is_ready() { // no tasks at hand trace!("timer timed out; closing connection"); this.flags.insert(Flags::SHUTDOWN); if let Some(deadline) = this.config.client_disconnect_deadline() { // start shutdown timeout if enabled this.shutdown_timer .set_and_init(cx, sleep_until(deadline.into()), line!()); } else { // no shutdown timeout, drop socket this.flags.insert(Flags::WRITE_DISCONNECT); } } } Ok(()) } fn poll_shutdown_timer( mut self: Pin<&mut Self>, cx: &mut Context<'_>, ) -> Result<(), DispatchError> { let this = self.as_mut().project(); if let TimerState::Active { timer } = this.shutdown_timer { debug_assert!( this.flags.contains(Flags::SHUTDOWN), "shutdown flag should be set when timer is active", ); // timed-out during shutdown; drop connection if timer.as_mut().poll(cx).is_ready() { trace!("timed-out during shutdown"); return Err(DispatchError::DisconnectTimeout); } } Ok(()) } /// Poll head, keep-alive, and disconnect timer. fn poll_timers(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Result<(), DispatchError> { self.as_mut().poll_head_timer(cx)?; self.as_mut().poll_ka_timer(cx)?; self.as_mut().poll_shutdown_timer(cx)?; Ok(()) } /// Returns true when I/O stream can be disconnected after write to it. /// /// It covers these conditions: /// - `std::io::ErrorKind::ConnectionReset` after partial read; /// - all data read done. #[inline(always)] // TODO: bench this inline fn read_available(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Result { let this = self.project(); if this.flags.contains(Flags::READ_DISCONNECT) { return Ok(false); }; let mut io = Pin::new(this.io.as_mut().unwrap()); let mut read_some = false; loop { // Return early when read buf exceed decoder's max buffer size. if this.read_buf.len() >= MAX_BUFFER_SIZE { // At this point it's not known IO stream is still scheduled to be waked up so // force wake up dispatcher just in case. // // Reason: // AsyncRead mostly would only have guarantee wake up when the poll_read // return Poll::Pending. // // Case: // When read_buf is beyond max buffer size the early return could be successfully // be parsed as a new Request. This case would not generate ParseError::TooLarge and // at this point IO stream is not fully read to Pending and would result in // dispatcher stuck until timeout (keep-alive). // // Note: // This is a perf choice to reduce branch on ::decode. // // A Request head too large to parse is only checked on `httparse::Status::Partial`. match this.payload { // When dispatcher has a payload the responsibility of wake ups is shifted to // `h1::payload::Payload` unless the payload is needing a read, in which case it // might not have access to the waker and could result in the dispatcher // getting stuck until timeout. // // Reason: // Self wake up when there is payload would waste poll and/or result in // over read. 
// // Case: // When payload is (partial) dropped by user there is no need to do // read anymore. At this case read_buf could always remain beyond // MAX_BUFFER_SIZE and self wake up would be busy poll dispatcher and // waste resources. Some(ref p) if p.need_read(cx) != PayloadStatus::Read => {} _ => cx.waker().wake_by_ref(), } return Ok(false); } // grow buffer if necessary. let remaining = this.read_buf.capacity() - this.read_buf.len(); if remaining < LW_BUFFER_SIZE { this.read_buf.reserve(HW_BUFFER_SIZE - remaining); } match tokio_util::io::poll_read_buf(io.as_mut(), cx, this.read_buf) { Poll::Ready(Ok(n)) => { this.flags.remove(Flags::FINISHED); if n == 0 { return Ok(true); } read_some = true; } Poll::Pending => { return Ok(false); } Poll::Ready(Err(err)) => { return match err.kind() { // convert WouldBlock error to the same as Pending return io::ErrorKind::WouldBlock => Ok(false), // connection reset after partial read io::ErrorKind::ConnectionReset if read_some => Ok(true), _ => Err(DispatchError::Io(err)), }; } } } } /// call upgrade service with request. fn upgrade(self: Pin<&mut Self>, req: Request) -> U::Future { let this = self.project(); let mut parts = FramedParts::with_read_buf( this.io.take().unwrap(), mem::take(this.codec), mem::take(this.read_buf), ); parts.write_buf = mem::take(this.write_buf); let framed = Framed::from_parts(parts); this.flow.upgrade.as_ref().unwrap().call((req, framed)) } } impl Future for Dispatcher where T: AsyncRead + AsyncWrite + Unpin, S: Service, S::Error: Into>, S::Response: Into>, B: MessageBody, X: Service, X::Error: Into>, U: Service<(Request, Framed), Response = ()>, U::Error: fmt::Display, { type Output = Result<(), DispatchError>; #[inline] fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { let this = self.as_mut().project(); #[cfg(test)] { *this.poll_count += 1; } match this.inner.project() { DispatcherStateProj::Upgrade { fut: upgrade } => upgrade.poll(cx).map_err(|err| { error!("Upgrade handler error: {}", err); DispatchError::Upgrade }), DispatcherStateProj::Normal { mut inner } => { trace!("start flags: {:?}", &inner.flags); trace_timer_states( "start", &inner.head_timer, &inner.ka_timer, &inner.shutdown_timer, ); inner.as_mut().poll_timers(cx)?; let poll = if inner.flags.contains(Flags::SHUTDOWN) { if inner.flags.contains(Flags::WRITE_DISCONNECT) { Poll::Ready(Ok(())) } else { // flush buffer and wait on blocked ready!(inner.as_mut().poll_flush(cx))?; Pin::new(inner.as_mut().project().io.as_mut().unwrap()) .poll_shutdown(cx) .map_err(DispatchError::from) } } else { // read from I/O stream and fill read buffer let should_disconnect = inner.as_mut().read_available(cx)?; // after reading something from stream, clear keep-alive timer if !inner.read_buf.is_empty() && inner.flags.contains(Flags::KEEP_ALIVE) { let inner = inner.as_mut().project(); inner.flags.remove(Flags::KEEP_ALIVE); inner.ka_timer.clear(line!()); } if !inner.flags.contains(Flags::STARTED) { inner.as_mut().project().flags.insert(Flags::STARTED); if let Some(deadline) = inner.config.client_request_deadline() { inner.as_mut().project().head_timer.set_and_init( cx, sleep_until(deadline.into()), line!(), ); } } inner.as_mut().poll_request(cx)?; if should_disconnect { // I/O stream should to be closed let inner = inner.as_mut().project(); inner.flags.insert(Flags::READ_DISCONNECT); if let Some(mut payload) = inner.payload.take() { payload.feed_eof(); } }; loop { // poll response to populate write buffer // drain indicates whether write buffer should be emptied 
before next run let drain = match inner.as_mut().poll_response(cx)? { PollResponse::DrainWriteBuf => true, PollResponse::DoNothing => { // KEEP_ALIVE is set in send_response_inner if client allows it // FINISHED is set after writing last chunk of response if inner.flags.contains(Flags::KEEP_ALIVE | Flags::FINISHED) { if let Some(timer) = inner.config.keep_alive_deadline() { inner.as_mut().project().ka_timer.set_and_init( cx, sleep_until(timer.into()), line!(), ); } } false } // upgrade request and goes Upgrade variant of DispatcherState. PollResponse::Upgrade(req) => { let upgrade = inner.upgrade(req); self.as_mut() .project() .inner .set(DispatcherState::Upgrade { fut: upgrade }); return self.poll(cx); } }; // we didn't get WouldBlock from write operation, so data get written to // kernel completely (macOS) and we have to write again otherwise response // can get stuck // // TODO: want to find a reference for this behavior // see introduced commit: 3872d3ba let flush_was_ready = inner.as_mut().poll_flush(cx)?.is_ready(); // this assert seems to always be true but not willing to commit to it until // we understand what Nikolay meant when writing the above comment // debug_assert!(flush_was_ready); if !flush_was_ready || !drain { break; } } // client is gone if inner.flags.contains(Flags::WRITE_DISCONNECT) { trace!("client is gone; disconnecting"); return Poll::Ready(Ok(())); } let inner_p = inner.as_mut().project(); let state_is_none = inner_p.state.is_none(); // read half is closed; we do not process any responses if inner_p.flags.contains(Flags::READ_DISCONNECT) && state_is_none { trace!("read half closed; start shutdown"); inner_p.flags.insert(Flags::SHUTDOWN); } // keep-alive and stream errors if state_is_none && inner_p.write_buf.is_empty() { if let Some(err) = inner_p.error.take() { error!("stream error: {}", &err); return Poll::Ready(Err(err)); } // disconnect if keep-alive is not enabled if inner_p.flags.contains(Flags::FINISHED) && !inner_p.flags.contains(Flags::KEEP_ALIVE) { inner_p.flags.remove(Flags::FINISHED); inner_p.flags.insert(Flags::SHUTDOWN); return self.poll(cx); } // disconnect if shutdown if inner_p.flags.contains(Flags::SHUTDOWN) { return self.poll(cx); } } trace_timer_states( "end", inner_p.head_timer, inner_p.ka_timer, inner_p.shutdown_timer, ); Poll::Pending }; trace!("end flags: {:?}", &inner.flags); poll } } } } #[allow(dead_code)] fn trace_timer_states( label: &str, head_timer: &TimerState, ka_timer: &TimerState, shutdown_timer: &TimerState, ) { trace!("{} timers:", label); if head_timer.is_enabled() { trace!(" head {}", &head_timer); } if ka_timer.is_enabled() { trace!(" keep-alive {}", &ka_timer); } if shutdown_timer.is_enabled() { trace!(" shutdown {}", &shutdown_timer); } } actix-http-3.9.0/src/h1/dispatcher_tests.rs000064400000000000000000000634011046102023000167430ustar 00000000000000use std::{future::Future, str, task::Poll, time::Duration}; use actix_codec::Framed; use actix_rt::{pin, time::sleep}; use actix_service::{fn_service, Service}; use actix_utils::future::{ready, Ready}; use bytes::{Buf, Bytes, BytesMut}; use futures_util::future::lazy; use super::dispatcher::{Dispatcher, DispatcherState, DispatcherStateProj, Flags}; use crate::{ body::MessageBody, config::ServiceConfig, h1::{Codec, ExpectHandler, UpgradeHandler}, service::HttpFlow, test::{TestBuffer, TestSeqBuffer}, Error, HttpMessage, KeepAlive, Method, OnConnectData, Request, Response, StatusCode, }; fn find_slice(haystack: &[u8], needle: &[u8], from: usize) -> Option { 
memchr::memmem::find(&haystack[from..], needle) } fn stabilize_date_header(payload: &mut [u8]) { let mut from = 0; while let Some(pos) = find_slice(payload, b"date", from) { payload[(from + pos)..(from + pos + 35)] .copy_from_slice(b"date: Thu, 01 Jan 1970 12:34:56 UTC"); from += 35; } } fn ok_service() -> impl Service, Error = Error> { status_service(StatusCode::OK) } fn status_service( status: StatusCode, ) -> impl Service, Error = Error> { fn_service(move |_req: Request| ready(Ok::<_, Error>(Response::new(status)))) } fn echo_path_service() -> impl Service, Error = Error> { fn_service(|req: Request| { let path = req.path().as_bytes(); ready(Ok::<_, Error>( Response::ok().set_body(Bytes::copy_from_slice(path)), )) }) } fn drop_payload_service() -> impl Service, Error = Error> { fn_service(|mut req: Request| async move { let _ = req.take_payload(); Ok::<_, Error>(Response::with_body(StatusCode::OK, "payload dropped")) }) } fn echo_payload_service() -> impl Service, Error = Error> { fn_service(|mut req: Request| { Box::pin(async move { use futures_util::StreamExt as _; let mut pl = req.take_payload(); let mut body = BytesMut::new(); while let Some(chunk) = pl.next().await { body.extend_from_slice(chunk.unwrap().chunk()) } Ok::<_, Error>(Response::ok().set_body(body.freeze())) }) }) } #[actix_rt::test] async fn late_request() { let mut buf = TestBuffer::empty(); let cfg = ServiceConfig::new( KeepAlive::Disabled, Duration::from_millis(100), Duration::ZERO, false, None, ); let services = HttpFlow::new(ok_service(), ExpectHandler, None); let h1 = Dispatcher::<_, _, _, _, UpgradeHandler>::new( buf.clone(), services, cfg, None, OnConnectData::default(), ); pin!(h1); lazy(|cx| { assert!(matches!(&h1.inner, DispatcherState::Normal { .. })); match h1.as_mut().poll(cx) { Poll::Ready(_) => panic!("first poll should not be ready"), Poll::Pending => {} } // polls: initial assert_eq!(h1.poll_count, 1); buf.extend_read_buf("GET /abcd HTTP/1.1\r\nConnection: close\r\n\r\n"); match h1.as_mut().poll(cx) { Poll::Pending => panic!("second poll should not be pending"), Poll::Ready(res) => assert!(res.is_ok()), } // polls: initial pending => handle req => shutdown assert_eq!(h1.poll_count, 3); let mut res = buf.take_write_buf().to_vec(); stabilize_date_header(&mut res); let res = &res[..]; let exp = b"\ HTTP/1.1 200 OK\r\n\ content-length: 0\r\n\ connection: close\r\n\ date: Thu, 01 Jan 1970 12:34:56 UTC\r\n\r\n\ "; assert_eq!( res, exp, "\nexpected response not in write buffer:\n\ response: {:?}\n\ expected: {:?}", String::from_utf8_lossy(res), String::from_utf8_lossy(exp) ); }) .await; } #[actix_rt::test] async fn oneshot_connection() { let buf = TestBuffer::new("GET /abcd HTTP/1.1\r\n\r\n"); let cfg = ServiceConfig::new( KeepAlive::Disabled, Duration::from_millis(100), Duration::ZERO, false, None, ); let services = HttpFlow::new(echo_path_service(), ExpectHandler, None); let h1 = Dispatcher::<_, _, _, _, UpgradeHandler>::new( buf.clone(), services, cfg, None, OnConnectData::default(), ); pin!(h1); lazy(|cx| { assert!(matches!(&h1.inner, DispatcherState::Normal { .. 
})); match h1.as_mut().poll(cx) { Poll::Pending => panic!("first poll should not be pending"), Poll::Ready(res) => assert!(res.is_ok()), } // polls: initial => shutdown assert_eq!(h1.poll_count, 2); let mut res = buf.take_write_buf().to_vec(); stabilize_date_header(&mut res); let res = &res[..]; let exp = http_msg( r" HTTP/1.1 200 OK content-length: 5 connection: close date: Thu, 01 Jan 1970 12:34:56 UTC /abcd ", ); assert_eq!( res, exp, "\nexpected response not in write buffer:\n\ response: {:?}\n\ expected: {:?}", String::from_utf8_lossy(res), String::from_utf8_lossy(&exp) ); }) .await; } #[actix_rt::test] async fn keep_alive_timeout() { let buf = TestBuffer::new("GET /abcd HTTP/1.1\r\n\r\n"); let cfg = ServiceConfig::new( KeepAlive::Timeout(Duration::from_millis(200)), Duration::from_millis(100), Duration::ZERO, false, None, ); let services = HttpFlow::new(echo_path_service(), ExpectHandler, None); let h1 = Dispatcher::<_, _, _, _, UpgradeHandler>::new( buf.clone(), services, cfg, None, OnConnectData::default(), ); pin!(h1); lazy(|cx| { assert!(matches!(&h1.inner, DispatcherState::Normal { .. })); assert!( h1.as_mut().poll(cx).is_pending(), "keep-alive should prevent poll from resolving" ); // polls: initial assert_eq!(h1.poll_count, 1); let mut res = buf.take_write_buf().to_vec(); stabilize_date_header(&mut res); let res = &res[..]; let exp = b"\ HTTP/1.1 200 OK\r\n\ content-length: 5\r\n\ date: Thu, 01 Jan 1970 12:34:56 UTC\r\n\r\n\ /abcd\ "; assert_eq!( res, exp, "\nexpected response not in write buffer:\n\ response: {:?}\n\ expected: {:?}", String::from_utf8_lossy(res), String::from_utf8_lossy(exp) ); }) .await; // sleep slightly longer than keep-alive timeout sleep(Duration::from_millis(250)).await; lazy(|cx| { assert!( h1.as_mut().poll(cx).is_ready(), "keep-alive should have resolved", ); // polls: initial => keep-alive wake-up shutdown assert_eq!(h1.poll_count, 2); if let DispatcherStateProj::Normal { inner } = h1.project().inner.project() { // connection closed assert!(inner.flags.contains(Flags::SHUTDOWN)); assert!(inner.flags.contains(Flags::WRITE_DISCONNECT)); // and nothing added to write buffer assert!(buf.write_buf_slice().is_empty()); } }) .await; } #[actix_rt::test] async fn keep_alive_follow_up_req() { let mut buf = TestBuffer::new("GET /abcd HTTP/1.1\r\n\r\n"); let cfg = ServiceConfig::new( KeepAlive::Timeout(Duration::from_millis(500)), Duration::from_millis(100), Duration::ZERO, false, None, ); let services = HttpFlow::new(echo_path_service(), ExpectHandler, None); let h1 = Dispatcher::<_, _, _, _, UpgradeHandler>::new( buf.clone(), services, cfg, None, OnConnectData::default(), ); pin!(h1); lazy(|cx| { assert!(matches!(&h1.inner, DispatcherState::Normal { .. 
})); assert!( h1.as_mut().poll(cx).is_pending(), "keep-alive should prevent poll from resolving" ); // polls: initial assert_eq!(h1.poll_count, 1); let mut res = buf.take_write_buf().to_vec(); stabilize_date_header(&mut res); let res = &res[..]; let exp = b"\ HTTP/1.1 200 OK\r\n\ content-length: 5\r\n\ date: Thu, 01 Jan 1970 12:34:56 UTC\r\n\r\n\ /abcd\ "; assert_eq!( res, exp, "\nexpected response not in write buffer:\n\ response: {:?}\n\ expected: {:?}", String::from_utf8_lossy(res), String::from_utf8_lossy(exp) ); }) .await; // sleep for less than KA timeout sleep(Duration::from_millis(100)).await; lazy(|cx| { assert!( h1.as_mut().poll(cx).is_pending(), "keep-alive should not have resolved dispatcher yet", ); // polls: initial => manual assert_eq!(h1.poll_count, 2); if let DispatcherStateProj::Normal { inner } = h1.as_mut().project().inner.project() { // connection not closed assert!(!inner.flags.contains(Flags::SHUTDOWN)); assert!(!inner.flags.contains(Flags::WRITE_DISCONNECT)); // and nothing added to write buffer assert!(buf.write_buf_slice().is_empty()); } }) .await; lazy(|cx| { buf.extend_read_buf( "\ GET /efg HTTP/1.1\r\n\ Connection: close\r\n\ \r\n\r\n", ); assert!( h1.as_mut().poll(cx).is_ready(), "connection close header should override keep-alive setting", ); // polls: initial => manual => follow-up req => shutdown assert_eq!(h1.poll_count, 4); if let DispatcherStateProj::Normal { inner } = h1.as_mut().project().inner.project() { // connection closed assert!(inner.flags.contains(Flags::SHUTDOWN)); assert!(!inner.flags.contains(Flags::WRITE_DISCONNECT)); } let mut res = buf.take_write_buf().to_vec(); stabilize_date_header(&mut res); let res = &res[..]; let exp = b"\ HTTP/1.1 200 OK\r\n\ content-length: 4\r\n\ connection: close\r\n\ date: Thu, 01 Jan 1970 12:34:56 UTC\r\n\r\n\ /efg\ "; assert_eq!( res, exp, "\nexpected response not in write buffer:\n\ response: {:?}\n\ expected: {:?}", String::from_utf8_lossy(res), String::from_utf8_lossy(exp) ); }) .await; } #[actix_rt::test] async fn req_parse_err() { lazy(|cx| { let buf = TestBuffer::new("GET /test HTTP/1\r\n\r\n"); let services = HttpFlow::new(ok_service(), ExpectHandler, None); let h1 = Dispatcher::<_, _, _, _, UpgradeHandler>::new( buf.clone(), services, ServiceConfig::default(), None, OnConnectData::default(), ); pin!(h1); match h1.as_mut().poll(cx) { Poll::Pending => panic!(), Poll::Ready(res) => assert!(res.is_err()), } if let DispatcherStateProj::Normal { inner } = h1.project().inner.project() { assert!(inner.flags.contains(Flags::READ_DISCONNECT)); assert_eq!( &buf.write_buf_slice()[..26], b"HTTP/1.1 400 Bad Request\r\n" ); } }) .await; } #[actix_rt::test] async fn pipelining_ok_then_ok() { lazy(|cx| { let buf = TestBuffer::new( "\ GET /abcd HTTP/1.1\r\n\r\n\ GET /def HTTP/1.1\r\n\r\n\ ", ); let cfg = ServiceConfig::new( KeepAlive::Disabled, Duration::from_millis(1), Duration::from_millis(1), false, None, ); let services = HttpFlow::new(echo_path_service(), ExpectHandler, None); let h1 = Dispatcher::<_, _, _, _, UpgradeHandler>::new( buf.clone(), services, cfg, None, OnConnectData::default(), ); pin!(h1); assert!(matches!(&h1.inner, DispatcherState::Normal { .. 
})); match h1.as_mut().poll(cx) { Poll::Pending => panic!("first poll should not be pending"), Poll::Ready(res) => assert!(res.is_ok()), } // polls: initial => shutdown assert_eq!(h1.poll_count, 2); let mut res = buf.write_buf_slice_mut(); stabilize_date_header(&mut res); let res = &res[..]; let exp = b"\ HTTP/1.1 200 OK\r\n\ content-length: 5\r\n\ connection: close\r\n\ date: Thu, 01 Jan 1970 12:34:56 UTC\r\n\r\n\ /abcd\ HTTP/1.1 200 OK\r\n\ content-length: 4\r\n\ connection: close\r\n\ date: Thu, 01 Jan 1970 12:34:56 UTC\r\n\r\n\ /def\ "; assert_eq!( res, exp, "\nexpected response not in write buffer:\n\ response: {:?}\n\ expected: {:?}", String::from_utf8_lossy(res), String::from_utf8_lossy(exp) ); }) .await; } #[actix_rt::test] async fn pipelining_ok_then_bad() { lazy(|cx| { let buf = TestBuffer::new( "\ GET /abcd HTTP/1.1\r\n\r\n\ GET /def HTTP/1\r\n\r\n\ ", ); let cfg = ServiceConfig::new( KeepAlive::Disabled, Duration::from_millis(1), Duration::from_millis(1), false, None, ); let services = HttpFlow::new(echo_path_service(), ExpectHandler, None); let h1 = Dispatcher::<_, _, _, _, UpgradeHandler>::new( buf.clone(), services, cfg, None, OnConnectData::default(), ); pin!(h1); assert!(matches!(&h1.inner, DispatcherState::Normal { .. })); match h1.as_mut().poll(cx) { Poll::Pending => panic!("first poll should not be pending"), Poll::Ready(res) => assert!(res.is_err()), } // polls: initial => shutdown assert_eq!(h1.poll_count, 1); let mut res = buf.write_buf_slice_mut(); stabilize_date_header(&mut res); let res = &res[..]; let exp = b"\ HTTP/1.1 200 OK\r\n\ content-length: 5\r\n\ connection: close\r\n\ date: Thu, 01 Jan 1970 12:34:56 UTC\r\n\r\n\ /abcd\ HTTP/1.1 400 Bad Request\r\n\ content-length: 0\r\n\ connection: close\r\n\ date: Thu, 01 Jan 1970 12:34:56 UTC\r\n\r\n\ "; assert_eq!( res, exp, "\nexpected response not in write buffer:\n\ response: {:?}\n\ expected: {:?}", String::from_utf8_lossy(res), String::from_utf8_lossy(exp) ); }) .await; } #[actix_rt::test] async fn expect_handling() { lazy(|cx| { let mut buf = TestSeqBuffer::empty(); let cfg = ServiceConfig::new( KeepAlive::Disabled, Duration::ZERO, Duration::ZERO, false, None, ); let services = HttpFlow::new(echo_payload_service(), ExpectHandler, None); let h1 = Dispatcher::<_, _, _, _, UpgradeHandler>::new( buf.clone(), services, cfg, None, OnConnectData::default(), ); buf.extend_read_buf( "\ POST /upload HTTP/1.1\r\n\ Content-Length: 5\r\n\ Expect: 100-continue\r\n\ \r\n\ ", ); pin!(h1); assert!(h1.as_mut().poll(cx).is_pending()); assert!(matches!(&h1.inner, DispatcherState::Normal { .. 
})); // polls: manual assert_eq!(h1.poll_count, 1); if let DispatcherState::Normal { ref inner } = h1.inner { let io = inner.io.as_ref().unwrap(); let res = &io.write_buf()[..]; assert_eq!( str::from_utf8(res).unwrap(), "HTTP/1.1 100 Continue\r\n\r\n" ); } buf.extend_read_buf("12345"); assert!(h1.as_mut().poll(cx).is_ready()); // polls: manual manual shutdown assert_eq!(h1.poll_count, 3); if let DispatcherState::Normal { ref inner } = h1.inner { let io = inner.io.as_ref().unwrap(); let mut res = io.write_buf()[..].to_owned(); stabilize_date_header(&mut res); assert_eq!( str::from_utf8(&res).unwrap(), "\ HTTP/1.1 100 Continue\r\n\ \r\n\ HTTP/1.1 200 OK\r\n\ content-length: 5\r\n\ connection: close\r\n\ date: Thu, 01 Jan 1970 12:34:56 UTC\r\n\ \r\n\ 12345\ " ); } }) .await; } #[actix_rt::test] async fn expect_eager() { lazy(|cx| { let mut buf = TestSeqBuffer::empty(); let cfg = ServiceConfig::new( KeepAlive::Disabled, Duration::ZERO, Duration::ZERO, false, None, ); let services = HttpFlow::new(echo_path_service(), ExpectHandler, None); let h1 = Dispatcher::<_, _, _, _, UpgradeHandler>::new( buf.clone(), services, cfg, None, OnConnectData::default(), ); buf.extend_read_buf( "\ POST /upload HTTP/1.1\r\n\ Content-Length: 5\r\n\ Expect: 100-continue\r\n\ \r\n\ ", ); pin!(h1); assert!(h1.as_mut().poll(cx).is_ready()); assert!(matches!(&h1.inner, DispatcherState::Normal { .. })); // polls: manual shutdown assert_eq!(h1.poll_count, 2); if let DispatcherState::Normal { ref inner } = h1.inner { let io = inner.io.as_ref().unwrap(); let mut res = io.write_buf()[..].to_owned(); stabilize_date_header(&mut res); // Despite the content-length header and even though the request payload has not // been sent, this test expects a complete service response since the payload // is not used at all. The service passed to dispatcher is path echo and doesn't // consume payload bytes. assert_eq!( str::from_utf8(&res).unwrap(), "\ HTTP/1.1 100 Continue\r\n\ \r\n\ HTTP/1.1 200 OK\r\n\ content-length: 7\r\n\ connection: close\r\n\ date: Thu, 01 Jan 1970 12:34:56 UTC\r\n\ \r\n\ /upload\ " ); } }) .await; } #[actix_rt::test] async fn upgrade_handling() { struct TestUpgrade; impl Service<(Request, Framed)> for TestUpgrade { type Response = (); type Error = Error; type Future = Ready>; actix_service::always_ready!(); fn call(&self, (req, _framed): (Request, Framed)) -> Self::Future { assert_eq!(req.method(), Method::GET); assert!(req.upgrade()); assert_eq!(req.headers().get("upgrade").unwrap(), "websocket"); ready(Ok(())) } } lazy(|cx| { let mut buf = TestSeqBuffer::empty(); let cfg = ServiceConfig::new( KeepAlive::Disabled, Duration::ZERO, Duration::ZERO, false, None, ); let services = HttpFlow::new(ok_service(), ExpectHandler, Some(TestUpgrade)); let h1 = Dispatcher::<_, _, _, _, TestUpgrade>::new( buf.clone(), services, cfg, None, OnConnectData::default(), ); buf.extend_read_buf( "\ GET /ws HTTP/1.1\r\n\ Connection: Upgrade\r\n\ Upgrade: websocket\r\n\ \r\n\ ", ); pin!(h1); assert!(h1.as_mut().poll(cx).is_ready()); assert!(matches!(&h1.inner, DispatcherState::Upgrade { .. 
})); // polls: manual shutdown assert_eq!(h1.poll_count, 2); }) .await; } // fix in #2624 reverted temporarily // complete fix tracked in #2745 #[ignore] #[actix_rt::test] async fn handler_drop_payload() { let _ = env_logger::try_init(); let mut buf = TestBuffer::new(http_msg( r" POST /drop-payload HTTP/1.1 Content-Length: 3 abc ", )); let services = HttpFlow::new( drop_payload_service(), ExpectHandler, None::, ); let h1 = Dispatcher::new( buf.clone(), services, ServiceConfig::default(), None, OnConnectData::default(), ); pin!(h1); lazy(|cx| { assert!(h1.as_mut().poll(cx).is_pending()); // polls: manual assert_eq!(h1.poll_count, 1); let mut res = BytesMut::from(buf.take_write_buf().as_ref()); stabilize_date_header(&mut res); let res = &res[..]; let exp = http_msg( r" HTTP/1.1 200 OK content-length: 15 date: Thu, 01 Jan 1970 12:34:56 UTC payload dropped ", ); assert_eq!( res, exp, "\nexpected response not in write buffer:\n\ response: {:?}\n\ expected: {:?}", String::from_utf8_lossy(res), String::from_utf8_lossy(&exp) ); if let DispatcherStateProj::Normal { inner } = h1.as_mut().project().inner.project() { assert!(inner.state.is_none()); } }) .await; lazy(|cx| { // add message that claims to have payload longer than provided buf.extend_read_buf(http_msg( r" POST /drop-payload HTTP/1.1 Content-Length: 200 abc ", )); assert!(h1.as_mut().poll(cx).is_pending()); // polls: manual => manual assert_eq!(h1.poll_count, 2); let mut res = BytesMut::from(buf.take_write_buf().as_ref()); stabilize_date_header(&mut res); let res = &res[..]; // expect response immediately even though request side has not finished reading payload let exp = http_msg( r" HTTP/1.1 200 OK content-length: 15 date: Thu, 01 Jan 1970 12:34:56 UTC payload dropped ", ); assert_eq!( res, exp, "\nexpected response not in write buffer:\n\ response: {:?}\n\ expected: {:?}", String::from_utf8_lossy(res), String::from_utf8_lossy(&exp) ); }) .await; lazy(|cx| { assert!(h1.as_mut().poll(cx).is_ready()); // polls: manual => manual => manual assert_eq!(h1.poll_count, 3); let mut res = BytesMut::from(buf.take_write_buf().as_ref()); stabilize_date_header(&mut res); let res = &res[..]; // expect that unrequested error response is sent back since connection could not be cleaned let exp = http_msg( r" HTTP/1.1 500 Internal Server Error content-length: 0 connection: close date: Thu, 01 Jan 1970 12:34:56 UTC ", ); assert_eq!( res, exp, "\nexpected response not in write buffer:\n\ response: {:?}\n\ expected: {:?}", String::from_utf8_lossy(res), String::from_utf8_lossy(&exp) ); }) .await; } fn http_msg(msg: impl AsRef) -> BytesMut { let mut msg = msg .as_ref() .trim() .split('\n') .map(|line| [line.trim_start(), "\r"].concat()) .collect::>() .join("\n"); // remove trailing \r msg.pop(); if !msg.is_empty() && !msg.contains("\r\n\r\n") { msg.push_str("\r\n\r\n"); } BytesMut::from(msg.as_bytes()) } #[test] fn http_msg_creates_msg() { assert_eq!(http_msg(r""), ""); assert_eq!( http_msg( r" POST / HTTP/1.1 Content-Length: 3 abc " ), "POST / HTTP/1.1\r\nContent-Length: 3\r\n\r\nabc" ); assert_eq!( http_msg( r" GET / HTTP/1.1 Content-Length: 3 " ), "GET / HTTP/1.1\r\nContent-Length: 3\r\n\r\n" ); } actix-http-3.9.0/src/h1/encoder.rs000064400000000000000000000514741046102023000150210ustar 00000000000000use std::{ cmp, io::{self, Write as _}, marker::PhantomData, ptr::copy_nonoverlapping, slice::from_raw_parts_mut, }; use bytes::{BufMut, BytesMut}; use crate::{ body::BodySize, header::{ map::Value, HeaderMap, HeaderName, CONNECTION, CONTENT_LENGTH, DATE, 
TRANSFER_ENCODING, }, helpers, ConnectionType, RequestHeadType, Response, ServiceConfig, StatusCode, Version, }; const AVERAGE_HEADER_SIZE: usize = 30; #[derive(Debug)] pub(crate) struct MessageEncoder { #[allow(dead_code)] pub length: BodySize, pub te: TransferEncoding, _phantom: PhantomData, } impl Default for MessageEncoder { fn default() -> Self { MessageEncoder { length: BodySize::None, te: TransferEncoding::empty(), _phantom: PhantomData, } } } pub(crate) trait MessageType: Sized { fn status(&self) -> Option; fn headers(&self) -> &HeaderMap; fn extra_headers(&self) -> Option<&HeaderMap>; fn camel_case(&self) -> bool { false } fn chunked(&self) -> bool; fn encode_status(&mut self, dst: &mut BytesMut) -> io::Result<()>; fn encode_headers( &mut self, dst: &mut BytesMut, version: Version, mut length: BodySize, conn_type: ConnectionType, config: &ServiceConfig, ) -> io::Result<()> { let chunked = self.chunked(); let mut skip_len = length != BodySize::Stream; let camel_case = self.camel_case(); // Content length if let Some(status) = self.status() { match status { StatusCode::CONTINUE | StatusCode::SWITCHING_PROTOCOLS | StatusCode::PROCESSING | StatusCode::NO_CONTENT => { // skip content-length and transfer-encoding headers // see https://datatracker.ietf.org/doc/html/rfc7230#section-3.3.1 // and https://datatracker.ietf.org/doc/html/rfc7230#section-3.3.2 skip_len = true; length = BodySize::None } StatusCode::NOT_MODIFIED => { // 304 responses should never have a body but should retain a manually set // content-length header // see https://datatracker.ietf.org/doc/html/rfc7232#section-4.1 skip_len = false; length = BodySize::None; } _ => {} } } match length { BodySize::Stream => { if chunked { skip_len = true; if camel_case { dst.put_slice(b"\r\nTransfer-Encoding: chunked\r\n") } else { dst.put_slice(b"\r\ntransfer-encoding: chunked\r\n") } } else { skip_len = false; dst.put_slice(b"\r\n"); } } BodySize::Sized(0) if camel_case => dst.put_slice(b"\r\nContent-Length: 0\r\n"), BodySize::Sized(0) => dst.put_slice(b"\r\ncontent-length: 0\r\n"), BodySize::Sized(len) => helpers::write_content_length(len, dst, camel_case), BodySize::None => dst.put_slice(b"\r\n"), } // Connection match conn_type { ConnectionType::Upgrade => dst.put_slice(b"connection: upgrade\r\n"), ConnectionType::KeepAlive if version < Version::HTTP_11 => { if camel_case { dst.put_slice(b"Connection: keep-alive\r\n") } else { dst.put_slice(b"connection: keep-alive\r\n") } } ConnectionType::Close if version >= Version::HTTP_11 => { if camel_case { dst.put_slice(b"Connection: close\r\n") } else { dst.put_slice(b"connection: close\r\n") } } _ => {} } // write headers let mut has_date = false; let mut buf = dst.chunk_mut().as_mut_ptr(); let mut remaining = dst.capacity() - dst.len(); // tracks bytes written since last buffer resize // since buf is a raw pointer to a bytes container storage but is written to without the // container's knowledge, this is used to sync the containers cursor after data is written let mut pos = 0; self.write_headers(|key, value| { match *key { CONNECTION => return, TRANSFER_ENCODING | CONTENT_LENGTH if skip_len => return, DATE => has_date = true, _ => {} } let k = key.as_str().as_bytes(); let k_len = k.len(); for val in value.iter() { let v = val.as_ref(); let v_len = v.len(); // key length + value length + colon + space + \r\n let len = k_len + v_len + 4; if len > remaining { // SAFETY: all the bytes written up to position "pos" are initialized // the written byte count and pointer advancement are kept 
in sync unsafe { dst.advance_mut(pos); } pos = 0; dst.reserve(len * 2); remaining = dst.capacity() - dst.len(); // re-assign buf raw pointer since it's possible that the buffer was // reallocated and/or resized buf = dst.chunk_mut().as_mut_ptr(); } // SAFETY: on each write, it is enough to ensure that the advancement of // the cursor matches the number of bytes written unsafe { if camel_case { // use Camel-Case headers write_camel_case(k, buf, k_len); } else { write_data(k, buf, k_len); } buf = buf.add(k_len); write_data(b": ", buf, 2); buf = buf.add(2); write_data(v, buf, v_len); buf = buf.add(v_len); write_data(b"\r\n", buf, 2); buf = buf.add(2); }; pos += len; remaining -= len; } }); // final cursor synchronization with the bytes container // // SAFETY: all the bytes written up to position "pos" are initialized // the written byte count and pointer advancement are kept in sync unsafe { dst.advance_mut(pos); } if !has_date { // optimized date header, write_date_header writes its own \r\n config.write_date_header(dst, camel_case); } // end-of-headers marker dst.extend_from_slice(b"\r\n"); Ok(()) } fn write_headers(&mut self, mut f: F) where F: FnMut(&HeaderName, &Value), { match self.extra_headers() { Some(headers) => { // merging headers from head and extra headers. self.headers() .inner .iter() .filter(|(name, _)| !headers.contains_key(*name)) .chain(headers.inner.iter()) .for_each(|(k, v)| f(k, v)) } None => self.headers().inner.iter().for_each(|(k, v)| f(k, v)), } } } impl MessageType for Response<()> { fn status(&self) -> Option { Some(self.head().status) } fn chunked(&self) -> bool { self.head().chunked() } fn headers(&self) -> &HeaderMap { &self.head().headers } fn extra_headers(&self) -> Option<&HeaderMap> { None } fn camel_case(&self) -> bool { self.head() .flags .contains(crate::message::Flags::CAMEL_CASE) } fn encode_status(&mut self, dst: &mut BytesMut) -> io::Result<()> { let head = self.head(); let reason = head.reason().as_bytes(); dst.reserve(256 + head.headers.len() * AVERAGE_HEADER_SIZE + reason.len()); // status line helpers::write_status_line(head.version, head.status.as_u16(), dst); dst.put_slice(reason); Ok(()) } } impl MessageType for RequestHeadType { fn status(&self) -> Option { None } fn chunked(&self) -> bool { self.as_ref().chunked() } fn camel_case(&self) -> bool { self.as_ref().camel_case_headers() } fn headers(&self) -> &HeaderMap { self.as_ref().headers() } fn extra_headers(&self) -> Option<&HeaderMap> { self.extra_headers() } fn encode_status(&mut self, dst: &mut BytesMut) -> io::Result<()> { let head = self.as_ref(); dst.reserve(256 + head.headers.len() * AVERAGE_HEADER_SIZE); write!( helpers::MutWriter(dst), "{} {} {}", head.method, head.uri.path_and_query().map(|u| u.as_str()).unwrap_or("/"), match head.version { Version::HTTP_09 => "HTTP/0.9", Version::HTTP_10 => "HTTP/1.0", Version::HTTP_11 => "HTTP/1.1", Version::HTTP_2 => "HTTP/2.0", Version::HTTP_3 => "HTTP/3.0", _ => return Err(io::Error::new(io::ErrorKind::Other, "unsupported version")), } ) .map_err(|e| io::Error::new(io::ErrorKind::Other, e)) } } impl MessageEncoder { /// Encode chunk. pub fn encode_chunk(&mut self, msg: &[u8], buf: &mut BytesMut) -> io::Result { self.te.encode(msg, buf) } /// Encode EOF. pub fn encode_eof(&mut self, buf: &mut BytesMut) -> io::Result<()> { self.te.encode_eof(buf) } /// Encode message. 
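// The header loop in `encode_headers` above is a performance-oriented path:
// it writes key/value pairs straight into the BytesMut's spare capacity
// through a raw pointer and synchronizes the cursor with `advance_mut` only
// when the buffer has to grow. Ignoring camel-casing, a safe equivalent is
// roughly (for illustration only, not used here):
//
//     dst.extend_from_slice(key.as_str().as_bytes());
//     dst.extend_from_slice(b": ");
//     dst.extend_from_slice(val.as_ref());
//     dst.extend_from_slice(b"\r\n");
//
// at the cost of a length/capacity check on every call instead of one per
// buffer resize.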
pub fn encode( &mut self, dst: &mut BytesMut, message: &mut T, head: bool, stream: bool, version: Version, length: BodySize, conn_type: ConnectionType, config: &ServiceConfig, ) -> io::Result<()> { // transfer encoding if !head { self.te = match length { BodySize::Sized(0) => TransferEncoding::empty(), BodySize::Sized(len) => TransferEncoding::length(len), BodySize::Stream => { if message.chunked() && !stream { TransferEncoding::chunked() } else { TransferEncoding::eof() } } BodySize::None => TransferEncoding::empty(), }; } else { self.te = TransferEncoding::empty(); } message.encode_status(dst)?; message.encode_headers(dst, version, length, conn_type, config) } } /// Encoders to handle different Transfer-Encodings. #[derive(Debug)] pub(crate) struct TransferEncoding { kind: TransferEncodingKind, } #[derive(Debug, PartialEq, Clone)] enum TransferEncodingKind { /// An Encoder for when Transfer-Encoding includes `chunked`. Chunked(bool), /// An Encoder for when Content-Length is set. /// /// Enforces that the body is not longer than the Content-Length header. Length(u64), /// An Encoder for when Content-Length is not known. /// /// Application decides when to stop writing. Eof, } impl TransferEncoding { #[inline] pub fn empty() -> TransferEncoding { TransferEncoding { kind: TransferEncodingKind::Length(0), } } #[inline] pub fn eof() -> TransferEncoding { TransferEncoding { kind: TransferEncodingKind::Eof, } } #[inline] pub fn chunked() -> TransferEncoding { TransferEncoding { kind: TransferEncodingKind::Chunked(false), } } #[inline] pub fn length(len: u64) -> TransferEncoding { TransferEncoding { kind: TransferEncodingKind::Length(len), } } /// Encode message. Return `EOF` state of encoder #[inline] pub fn encode(&mut self, msg: &[u8], buf: &mut BytesMut) -> io::Result { match self.kind { TransferEncodingKind::Eof => { let eof = msg.is_empty(); buf.extend_from_slice(msg); Ok(eof) } TransferEncodingKind::Chunked(ref mut eof) => { if *eof { return Ok(true); } if msg.is_empty() { *eof = true; buf.extend_from_slice(b"0\r\n\r\n"); } else { writeln!(helpers::MutWriter(buf), "{:X}\r", msg.len()) .map_err(|e| io::Error::new(io::ErrorKind::Other, e))?; buf.reserve(msg.len() + 2); buf.extend_from_slice(msg); buf.extend_from_slice(b"\r\n"); } Ok(*eof) } TransferEncodingKind::Length(ref mut remaining) => { if *remaining > 0 { if msg.is_empty() { return Ok(*remaining == 0); } let len = cmp::min(*remaining, msg.len() as u64); buf.extend_from_slice(&msg[..len as usize]); *remaining -= len; Ok(*remaining == 0) } else { Ok(true) } } } } /// Encode eof. Return `EOF` state of encoder #[inline] pub fn encode_eof(&mut self, buf: &mut BytesMut) -> io::Result<()> { match self.kind { TransferEncodingKind::Eof => Ok(()), TransferEncodingKind::Length(rem) => { if rem != 0 { Err(io::Error::new(io::ErrorKind::UnexpectedEof, "")) } else { Ok(()) } } TransferEncodingKind::Chunked(ref mut eof) => { if !*eof { *eof = true; buf.extend_from_slice(b"0\r\n\r\n"); } Ok(()) } } } } /// # Safety /// Callers must ensure that the given `len` matches the given `value` length and that `buf` is /// valid for writes of at least `len` bytes. unsafe fn write_data(value: &[u8], buf: *mut u8, len: usize) { debug_assert_eq!(value.len(), len); copy_nonoverlapping(value.as_ptr(), buf, len); } /// # Safety /// Callers must ensure that the given `len` matches the given `value` length and that `buf` is /// valid for writes of at least `len` bytes. 
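// `write_camel_case` below upper-cases the first letter and every letter
// that follows a hyphen by clearing the ASCII lower-case bit, e.g.
//
//     b'c' (0x63) & 0b1101_1111 == b'C' (0x43)
//
// so `content-type` is emitted as `Content-Type` with no allocation. A
// minimal safe sketch of the same idea, for illustration only (the
// `camel_case` helper is hypothetical, not part of this module):
//
//     fn camel_case(name: &str) -> String {
//         let mut out = String::with_capacity(name.len());
//         let mut upper = true;
//         for c in name.chars() {
//             out.push(if upper { c.to_ascii_uppercase() } else { c });
//             upper = c == '-';
//         }
//         out
//     }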
unsafe fn write_camel_case(value: &[u8], buf: *mut u8, len: usize) { // first copy entire (potentially wrong) slice to output write_data(value, buf, len); // SAFETY: We just initialized the buffer with `value` let buffer = from_raw_parts_mut(buf, len); let mut iter = value.iter(); // first character should be uppercase if let Some(c @ b'a'..=b'z') = iter.next() { buffer[0] = c & 0b1101_1111; } // track 1 ahead of the current position since that's the location being assigned to let mut index = 2; // remaining characters after hyphens should also be uppercase while let Some(&c) = iter.next() { if c == b'-' { // advance iter by one and uppercase if needed if let Some(c @ b'a'..=b'z') = iter.next() { buffer[index] = c & 0b1101_1111; } index += 1; } index += 1; } } #[cfg(test)] mod tests { use std::rc::Rc; use bytes::Bytes; use http::header::{AUTHORIZATION, UPGRADE_INSECURE_REQUESTS}; use super::*; use crate::{ header::{HeaderValue, CONTENT_TYPE}, RequestHead, }; #[test] fn test_chunked_te() { let mut bytes = BytesMut::new(); let mut enc = TransferEncoding::chunked(); { assert!(!enc.encode(b"test", &mut bytes).ok().unwrap()); assert!(enc.encode(b"", &mut bytes).ok().unwrap()); } assert_eq!( bytes.split().freeze(), Bytes::from_static(b"4\r\ntest\r\n0\r\n\r\n") ); } #[actix_rt::test] async fn test_camel_case() { let mut bytes = BytesMut::with_capacity(2048); let mut head = RequestHead::default(); head.set_camel_case_headers(true); head.headers.insert(DATE, HeaderValue::from_static("date")); head.headers .insert(CONTENT_TYPE, HeaderValue::from_static("plain/text")); head.headers .insert(UPGRADE_INSECURE_REQUESTS, HeaderValue::from_static("1")); let mut head = RequestHeadType::Owned(head); let _ = head.encode_headers( &mut bytes, Version::HTTP_11, BodySize::Sized(0), ConnectionType::Close, &ServiceConfig::default(), ); let data = String::from_utf8(Vec::from(bytes.split().freeze().as_ref())).unwrap(); assert!(data.contains("Content-Length: 0\r\n")); assert!(data.contains("Connection: close\r\n")); assert!(data.contains("Content-Type: plain/text\r\n")); assert!(data.contains("Date: date\r\n")); assert!(data.contains("Upgrade-Insecure-Requests: 1\r\n")); let _ = head.encode_headers( &mut bytes, Version::HTTP_11, BodySize::Stream, ConnectionType::KeepAlive, &ServiceConfig::default(), ); let data = String::from_utf8(Vec::from(bytes.split().freeze().as_ref())).unwrap(); assert!(data.contains("Transfer-Encoding: chunked\r\n")); assert!(data.contains("Content-Type: plain/text\r\n")); assert!(data.contains("Date: date\r\n")); let mut head = RequestHead::default(); head.set_camel_case_headers(false); head.headers.insert(DATE, HeaderValue::from_static("date")); head.headers .insert(CONTENT_TYPE, HeaderValue::from_static("plain/text")); head.headers .append(CONTENT_TYPE, HeaderValue::from_static("xml")); let mut head = RequestHeadType::Owned(head); let _ = head.encode_headers( &mut bytes, Version::HTTP_11, BodySize::Stream, ConnectionType::KeepAlive, &ServiceConfig::default(), ); let data = String::from_utf8(Vec::from(bytes.split().freeze().as_ref())).unwrap(); assert!(data.contains("transfer-encoding: chunked\r\n")); assert!(data.contains("content-type: xml\r\n")); assert!(data.contains("content-type: plain/text\r\n")); assert!(data.contains("date: date\r\n")); } #[actix_rt::test] async fn test_extra_headers() { let mut bytes = BytesMut::with_capacity(2048); let mut head = RequestHead::default(); head.headers.insert( AUTHORIZATION, HeaderValue::from_static("some authorization"), ); let mut extra_headers = 
HeaderMap::new(); extra_headers.insert( AUTHORIZATION, HeaderValue::from_static("another authorization"), ); extra_headers.insert(DATE, HeaderValue::from_static("date")); let mut head = RequestHeadType::Rc(Rc::new(head), Some(extra_headers)); let _ = head.encode_headers( &mut bytes, Version::HTTP_11, BodySize::Sized(0), ConnectionType::Close, &ServiceConfig::default(), ); let data = String::from_utf8(Vec::from(bytes.split().freeze().as_ref())).unwrap(); assert!(data.contains("content-length: 0\r\n")); assert!(data.contains("connection: close\r\n")); assert!(data.contains("authorization: another authorization\r\n")); assert!(data.contains("date: date\r\n")); } #[actix_rt::test] async fn test_no_content_length() { let mut bytes = BytesMut::with_capacity(2048); let mut res = Response::with_body(StatusCode::SWITCHING_PROTOCOLS, ()); res.headers_mut().insert(DATE, HeaderValue::from_static("")); res.headers_mut() .insert(CONTENT_LENGTH, HeaderValue::from_static("0")); let _ = res.encode_headers( &mut bytes, Version::HTTP_11, BodySize::Stream, ConnectionType::Upgrade, &ServiceConfig::default(), ); let data = String::from_utf8(Vec::from(bytes.split().freeze().as_ref())).unwrap(); assert!(!data.contains("content-length: 0\r\n")); assert!(!data.contains("transfer-encoding: chunked\r\n")); } } actix-http-3.9.0/src/h1/expect.rs000064400000000000000000000015561046102023000146660ustar 00000000000000use actix_service::{Service, ServiceFactory}; use actix_utils::future::{ready, Ready}; use crate::{Error, Request}; pub struct ExpectHandler; impl ServiceFactory for ExpectHandler { type Response = Request; type Error = Error; type Config = (); type Service = ExpectHandler; type InitError = Error; type Future = Ready>; fn new_service(&self, _: Self::Config) -> Self::Future { ready(Ok(ExpectHandler)) } } impl Service for ExpectHandler { type Response = Request; type Error = Error; type Future = Ready>; actix_service::always_ready!(); fn call(&self, req: Request) -> Self::Future { ready(Ok(req)) // TODO: add some way to trigger error // Err(error::ErrorExpectationFailed("test")) } } actix-http-3.9.0/src/h1/mod.rs000064400000000000000000000033441046102023000141520ustar 00000000000000//! HTTP/1 protocol implementation. use bytes::{Bytes, BytesMut}; mod chunked; mod client; mod codec; mod decoder; mod dispatcher; #[cfg(test)] mod dispatcher_tests; mod encoder; mod expect; mod payload; mod service; mod timer; mod upgrade; mod utils; pub use self::{ client::{ClientCodec, ClientPayloadCodec}, codec::Codec, dispatcher::Dispatcher, expect::ExpectHandler, payload::Payload, service::{H1Service, H1ServiceHandler}, upgrade::UpgradeHandler, utils::SendResponse, }; #[derive(Debug)] /// Codec message pub enum Message { /// HTTP message. Item(T), /// Payload chunk. 
Chunk(Option), } impl From for Message { fn from(item: T) -> Self { Message::Item(item) } } /// Incoming request type #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub enum MessageType { None, Payload, Stream, } const LW: usize = 2 * 1024; const HW: usize = 32 * 1024; pub(crate) fn reserve_readbuf(src: &mut BytesMut) { let cap = src.capacity(); if cap < LW { src.reserve(HW - cap); } } #[cfg(test)] mod tests { use super::*; use crate::Request; impl Message { pub fn message(self) -> Request { match self { Message::Item(req) => req, _ => panic!("error"), } } pub fn chunk(self) -> Bytes { match self { Message::Chunk(Some(data)) => data, _ => panic!("error"), } } pub fn eof(self) -> bool { match self { Message::Chunk(None) => true, Message::Chunk(Some(_)) => false, _ => panic!("error"), } } } } actix-http-3.9.0/src/h1/payload.rs000064400000000000000000000154001046102023000150200ustar 00000000000000//! Payload stream use std::{ cell::RefCell, collections::VecDeque, pin::Pin, rc::{Rc, Weak}, task::{Context, Poll, Waker}, }; use bytes::Bytes; use futures_core::Stream; use crate::error::PayloadError; /// max buffer size 32k pub(crate) const MAX_BUFFER_SIZE: usize = 32_768; #[derive(Debug, PartialEq, Eq)] pub enum PayloadStatus { Read, Pause, Dropped, } /// Buffered stream of bytes chunks /// /// Payload stores chunks in a vector. First chunk can be received with `poll_next`. Payload does /// not notify current task when new data is available. /// /// Payload can be used as `Response` body stream. #[derive(Debug)] pub struct Payload { inner: Rc>, } impl Payload { /// Creates a payload stream. /// /// This method construct two objects responsible for bytes stream generation: /// - `PayloadSender` - *Sender* side of the stream /// - `Payload` - *Receiver* side of the stream pub fn create(eof: bool) -> (PayloadSender, Payload) { let shared = Rc::new(RefCell::new(Inner::new(eof))); ( PayloadSender::new(Rc::downgrade(&shared)), Payload { inner: shared }, ) } /// Creates an empty payload. 
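    /// The returned payload is already at end-of-stream, so polling it yields `None` immediately.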
pub(crate) fn empty() -> Payload { Payload { inner: Rc::new(RefCell::new(Inner::new(true))), } } /// Length of the data in this payload #[cfg(test)] pub fn len(&self) -> usize { self.inner.borrow().len() } /// Is payload empty #[cfg(test)] pub fn is_empty(&self) -> bool { self.inner.borrow().len() == 0 } /// Put unused data back to payload #[inline] pub fn unread_data(&mut self, data: Bytes) { self.inner.borrow_mut().unread_data(data); } } impl Stream for Payload { type Item = Result; fn poll_next( self: Pin<&mut Self>, cx: &mut Context<'_>, ) -> Poll>> { Pin::new(&mut *self.inner.borrow_mut()).poll_next(cx) } } /// Sender part of the payload stream pub struct PayloadSender { inner: Weak>, } impl PayloadSender { fn new(inner: Weak>) -> Self { Self { inner } } #[inline] pub fn set_error(&mut self, err: PayloadError) { if let Some(shared) = self.inner.upgrade() { shared.borrow_mut().set_error(err) } } #[inline] pub fn feed_eof(&mut self) { if let Some(shared) = self.inner.upgrade() { shared.borrow_mut().feed_eof() } } #[inline] pub fn feed_data(&mut self, data: Bytes) { if let Some(shared) = self.inner.upgrade() { shared.borrow_mut().feed_data(data) } } #[allow(clippy::needless_pass_by_ref_mut)] #[inline] pub fn need_read(&self, cx: &mut Context<'_>) -> PayloadStatus { // we check need_read only if Payload (other side) is alive, // otherwise always return true (consume payload) if let Some(shared) = self.inner.upgrade() { if shared.borrow().need_read { PayloadStatus::Read } else { shared.borrow_mut().register_io(cx); PayloadStatus::Pause } } else { PayloadStatus::Dropped } } } #[derive(Debug)] struct Inner { len: usize, eof: bool, err: Option, need_read: bool, items: VecDeque, task: Option, io_task: Option, } impl Inner { fn new(eof: bool) -> Self { Inner { eof, len: 0, err: None, items: VecDeque::new(), need_read: true, task: None, io_task: None, } } /// Wake up future waiting for payload data to be available. fn wake(&mut self) { if let Some(waker) = self.task.take() { waker.wake(); } } /// Wake up future feeding data to Payload. fn wake_io(&mut self) { if let Some(waker) = self.io_task.take() { waker.wake(); } } /// Register future waiting data from payload. /// Waker would be used in `Inner::wake` fn register(&mut self, cx: &Context<'_>) { if self .task .as_ref() .map_or(true, |w| !cx.waker().will_wake(w)) { self.task = Some(cx.waker().clone()); } } // Register future feeding data to payload. 
/// Waker would be used in `Inner::wake_io` fn register_io(&mut self, cx: &Context<'_>) { if self .io_task .as_ref() .map_or(true, |w| !cx.waker().will_wake(w)) { self.io_task = Some(cx.waker().clone()); } } #[inline] fn set_error(&mut self, err: PayloadError) { self.err = Some(err); } #[inline] fn feed_eof(&mut self) { self.eof = true; } #[inline] fn feed_data(&mut self, data: Bytes) { self.len += data.len(); self.items.push_back(data); self.need_read = self.len < MAX_BUFFER_SIZE; self.wake(); } #[cfg(test)] fn len(&self) -> usize { self.len } fn poll_next( mut self: Pin<&mut Self>, cx: &Context<'_>, ) -> Poll>> { if let Some(data) = self.items.pop_front() { self.len -= data.len(); self.need_read = self.len < MAX_BUFFER_SIZE; if self.need_read && !self.eof { self.register(cx); } self.wake_io(); Poll::Ready(Some(Ok(data))) } else if let Some(err) = self.err.take() { Poll::Ready(Some(Err(err))) } else if self.eof { Poll::Ready(None) } else { self.need_read = true; self.register(cx); self.wake_io(); Poll::Pending } } fn unread_data(&mut self, data: Bytes) { self.len += data.len(); self.items.push_front(data); } } #[cfg(test)] mod tests { use actix_utils::future::poll_fn; use static_assertions::{assert_impl_all, assert_not_impl_any}; use super::*; assert_impl_all!(Payload: Unpin); assert_not_impl_any!(Payload: Send, Sync); assert_impl_all!(Inner: Unpin, Send, Sync); #[actix_rt::test] async fn test_unread_data() { let (_, mut payload) = Payload::create(false); payload.unread_data(Bytes::from("data")); assert!(!payload.is_empty()); assert_eq!(payload.len(), 4); assert_eq!( Bytes::from("data"), poll_fn(|cx| Pin::new(&mut payload).poll_next(cx)) .await .unwrap() .unwrap() ); } } actix-http-3.9.0/src/h1/service.rs000064400000000000000000000375421046102023000150420ustar 00000000000000use std::{ fmt, marker::PhantomData, net, rc::Rc, task::{Context, Poll}, }; use actix_codec::{AsyncRead, AsyncWrite, Framed}; use actix_rt::net::TcpStream; use actix_service::{ fn_service, IntoServiceFactory, Service, ServiceFactory, ServiceFactoryExt as _, }; use actix_utils::future::ready; use futures_core::future::LocalBoxFuture; use tracing::error; use super::{codec::Codec, dispatcher::Dispatcher, ExpectHandler, UpgradeHandler}; use crate::{ body::{BoxBody, MessageBody}, config::ServiceConfig, error::DispatchError, service::HttpServiceHandler, ConnectCallback, OnConnectData, Request, Response, }; /// `ServiceFactory` implementation for HTTP1 transport pub struct H1Service { srv: S, cfg: ServiceConfig, expect: X, upgrade: Option, on_connect_ext: Option>>, _phantom: PhantomData, } impl H1Service where S: ServiceFactory, S::Error: Into>, S::InitError: fmt::Debug, S::Response: Into>, B: MessageBody, { /// Create new `HttpService` instance with config. 
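    /// Crate-private constructor; it is typically reached via `HttpServiceBuilder::h1()`, which
    /// forwards its accumulated `ServiceConfig` here.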
pub(crate) fn with_config>( cfg: ServiceConfig, service: F, ) -> Self { H1Service { cfg, srv: service.into_factory(), expect: ExpectHandler, upgrade: None, on_connect_ext: None, _phantom: PhantomData, } } } impl H1Service where S: ServiceFactory, S::Future: 'static, S::Error: Into>, S::InitError: fmt::Debug, S::Response: Into>, B: MessageBody, X: ServiceFactory, X::Future: 'static, X::Error: Into>, X::InitError: fmt::Debug, U: ServiceFactory<(Request, Framed), Config = (), Response = ()>, U::Future: 'static, U::Error: fmt::Display + Into>, U::InitError: fmt::Debug, { /// Create simple tcp stream service pub fn tcp( self, ) -> impl ServiceFactory { fn_service(|io: TcpStream| { let peer_addr = io.peer_addr().ok(); ready(Ok((io, peer_addr))) }) .and_then(self) } } #[cfg(feature = "openssl")] mod openssl { use actix_tls::accept::{ openssl::{ reexports::{Error as SslError, SslAcceptor}, Acceptor, TlsStream, }, TlsError, }; use super::*; impl H1Service, S, B, X, U> where S: ServiceFactory, S::Future: 'static, S::Error: Into>, S::InitError: fmt::Debug, S::Response: Into>, B: MessageBody, X: ServiceFactory, X::Future: 'static, X::Error: Into>, X::InitError: fmt::Debug, U: ServiceFactory< (Request, Framed, Codec>), Config = (), Response = (), >, U::Future: 'static, U::Error: fmt::Display + Into>, U::InitError: fmt::Debug, { /// Create OpenSSL based service. pub fn openssl( self, acceptor: SslAcceptor, ) -> impl ServiceFactory< TcpStream, Config = (), Response = (), Error = TlsError, InitError = (), > { Acceptor::new(acceptor) .map_init_err(|_| { unreachable!("TLS acceptor service factory does not error on init") }) .map_err(TlsError::into_service_error) .map(|io: TlsStream| { let peer_addr = io.get_ref().peer_addr().ok(); (io, peer_addr) }) .and_then(self.map_err(TlsError::Service)) } } } #[cfg(feature = "rustls-0_20")] mod rustls_0_20 { use std::io; use actix_service::ServiceFactoryExt as _; use actix_tls::accept::{ rustls_0_20::{reexports::ServerConfig, Acceptor, TlsStream}, TlsError, }; use super::*; impl H1Service, S, B, X, U> where S: ServiceFactory, S::Future: 'static, S::Error: Into>, S::InitError: fmt::Debug, S::Response: Into>, B: MessageBody, X: ServiceFactory, X::Future: 'static, X::Error: Into>, X::InitError: fmt::Debug, U: ServiceFactory< (Request, Framed, Codec>), Config = (), Response = (), >, U::Future: 'static, U::Error: fmt::Display + Into>, U::InitError: fmt::Debug, { /// Create Rustls v0.20 based service. pub fn rustls( self, config: ServerConfig, ) -> impl ServiceFactory< TcpStream, Config = (), Response = (), Error = TlsError, InitError = (), > { Acceptor::new(config) .map_init_err(|_| { unreachable!("TLS acceptor service factory does not error on init") }) .map_err(TlsError::into_service_error) .map(|io: TlsStream| { let peer_addr = io.get_ref().0.peer_addr().ok(); (io, peer_addr) }) .and_then(self.map_err(TlsError::Service)) } } } #[cfg(feature = "rustls-0_21")] mod rustls_0_21 { use std::io; use actix_service::ServiceFactoryExt as _; use actix_tls::accept::{ rustls_0_21::{reexports::ServerConfig, Acceptor, TlsStream}, TlsError, }; use super::*; impl H1Service, S, B, X, U> where S: ServiceFactory, S::Future: 'static, S::Error: Into>, S::InitError: fmt::Debug, S::Response: Into>, B: MessageBody, X: ServiceFactory, X::Future: 'static, X::Error: Into>, X::InitError: fmt::Debug, U: ServiceFactory< (Request, Framed, Codec>), Config = (), Response = (), >, U::Future: 'static, U::Error: fmt::Display + Into>, U::InitError: fmt::Debug, { /// Create Rustls v0.21 based service. 
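        /// Accepts TLS connections using the given rustls `ServerConfig`, then serves HTTP/1.x
        /// over the accepted streams.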
pub fn rustls_021( self, config: ServerConfig, ) -> impl ServiceFactory< TcpStream, Config = (), Response = (), Error = TlsError, InitError = (), > { Acceptor::new(config) .map_init_err(|_| { unreachable!("TLS acceptor service factory does not error on init") }) .map_err(TlsError::into_service_error) .map(|io: TlsStream| { let peer_addr = io.get_ref().0.peer_addr().ok(); (io, peer_addr) }) .and_then(self.map_err(TlsError::Service)) } } } #[cfg(feature = "rustls-0_22")] mod rustls_0_22 { use std::io; use actix_service::ServiceFactoryExt as _; use actix_tls::accept::{ rustls_0_22::{reexports::ServerConfig, Acceptor, TlsStream}, TlsError, }; use super::*; impl H1Service, S, B, X, U> where S: ServiceFactory, S::Future: 'static, S::Error: Into>, S::InitError: fmt::Debug, S::Response: Into>, B: MessageBody, X: ServiceFactory, X::Future: 'static, X::Error: Into>, X::InitError: fmt::Debug, U: ServiceFactory< (Request, Framed, Codec>), Config = (), Response = (), >, U::Future: 'static, U::Error: fmt::Display + Into>, U::InitError: fmt::Debug, { /// Create Rustls v0.22 based service. pub fn rustls_0_22( self, config: ServerConfig, ) -> impl ServiceFactory< TcpStream, Config = (), Response = (), Error = TlsError, InitError = (), > { Acceptor::new(config) .map_init_err(|_| { unreachable!("TLS acceptor service factory does not error on init") }) .map_err(TlsError::into_service_error) .map(|io: TlsStream| { let peer_addr = io.get_ref().0.peer_addr().ok(); (io, peer_addr) }) .and_then(self.map_err(TlsError::Service)) } } } #[cfg(feature = "rustls-0_23")] mod rustls_0_23 { use std::io; use actix_service::ServiceFactoryExt as _; use actix_tls::accept::{ rustls_0_23::{reexports::ServerConfig, Acceptor, TlsStream}, TlsError, }; use super::*; impl H1Service, S, B, X, U> where S: ServiceFactory, S::Future: 'static, S::Error: Into>, S::InitError: fmt::Debug, S::Response: Into>, B: MessageBody, X: ServiceFactory, X::Future: 'static, X::Error: Into>, X::InitError: fmt::Debug, U: ServiceFactory< (Request, Framed, Codec>), Config = (), Response = (), >, U::Future: 'static, U::Error: fmt::Display + Into>, U::InitError: fmt::Debug, { /// Create Rustls v0.23 based service. pub fn rustls_0_23( self, config: ServerConfig, ) -> impl ServiceFactory< TcpStream, Config = (), Response = (), Error = TlsError, InitError = (), > { Acceptor::new(config) .map_init_err(|_| { unreachable!("TLS acceptor service factory does not error on init") }) .map_err(TlsError::into_service_error) .map(|io: TlsStream| { let peer_addr = io.get_ref().0.peer_addr().ok(); (io, peer_addr) }) .and_then(self.map_err(TlsError::Service)) } } } impl H1Service where S: ServiceFactory, S::Error: Into>, S::Response: Into>, S::InitError: fmt::Debug, B: MessageBody, { pub fn expect(self, expect: X1) -> H1Service where X1: ServiceFactory, X1::Error: Into>, X1::InitError: fmt::Debug, { H1Service { expect, cfg: self.cfg, srv: self.srv, upgrade: self.upgrade, on_connect_ext: self.on_connect_ext, _phantom: PhantomData, } } pub fn upgrade(self, upgrade: Option) -> H1Service where U1: ServiceFactory<(Request, Framed), Response = ()>, U1::Error: fmt::Display, U1::InitError: fmt::Debug, { H1Service { upgrade, cfg: self.cfg, srv: self.srv, expect: self.expect, on_connect_ext: self.on_connect_ext, _phantom: PhantomData, } } /// Set on connect callback. 
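    /// The callback runs once per accepted connection; any `Extensions` it populates are later
    /// exposed to handlers as the request's connection data.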
pub(crate) fn on_connect_ext(mut self, f: Option>>) -> Self { self.on_connect_ext = f; self } } impl ServiceFactory<(T, Option)> for H1Service where T: AsyncRead + AsyncWrite + Unpin + 'static, S: ServiceFactory, S::Future: 'static, S::Error: Into>, S::Response: Into>, S::InitError: fmt::Debug, B: MessageBody, X: ServiceFactory, X::Future: 'static, X::Error: Into>, X::InitError: fmt::Debug, U: ServiceFactory<(Request, Framed), Config = (), Response = ()>, U::Future: 'static, U::Error: fmt::Display + Into>, U::InitError: fmt::Debug, { type Response = (); type Error = DispatchError; type Config = (); type Service = H1ServiceHandler; type InitError = (); type Future = LocalBoxFuture<'static, Result>; fn new_service(&self, _: ()) -> Self::Future { let service = self.srv.new_service(()); let expect = self.expect.new_service(()); let upgrade = self.upgrade.as_ref().map(|s| s.new_service(())); let on_connect_ext = self.on_connect_ext.clone(); let cfg = self.cfg.clone(); Box::pin(async move { let expect = expect .await .map_err(|e| error!("Init http expect service error: {:?}", e))?; let upgrade = match upgrade { Some(upgrade) => { let upgrade = upgrade .await .map_err(|e| error!("Init http upgrade service error: {:?}", e))?; Some(upgrade) } None => None, }; let service = service .await .map_err(|e| error!("Init http service error: {:?}", e))?; Ok(H1ServiceHandler::new( cfg, service, expect, upgrade, on_connect_ext, )) }) } } /// `Service` implementation for HTTP/1 transport pub type H1ServiceHandler = HttpServiceHandler; impl Service<(T, Option)> for HttpServiceHandler where T: AsyncRead + AsyncWrite + Unpin, S: Service, S::Error: Into>, S::Response: Into>, B: MessageBody, X: Service, X::Error: Into>, U: Service<(Request, Framed), Response = ()>, U::Error: fmt::Display + Into>, { type Response = (); type Error = DispatchError; type Future = Dispatcher; fn poll_ready(&self, cx: &mut Context<'_>) -> Poll> { self._poll_ready(cx).map_err(|err| { error!("HTTP/1 service readiness error: {:?}", err); DispatchError::Service(err) }) } fn call(&self, (io, addr): (T, Option)) -> Self::Future { let conn_data = OnConnectData::from_io(&io, self.on_connect_ext.as_deref()); Dispatcher::new(io, Rc::clone(&self.flow), self.cfg.clone(), addr, conn_data) } } actix-http-3.9.0/src/h1/timer.rs000064400000000000000000000043341046102023000145130ustar 00000000000000use std::{fmt, future::Future, pin::Pin, task::Context}; use actix_rt::time::{Instant, Sleep}; use tracing::trace; #[derive(Debug)] pub(super) enum TimerState { Disabled, Inactive, Active { timer: Pin> }, } impl TimerState { pub(super) fn new(enabled: bool) -> Self { if enabled { Self::Inactive } else { Self::Disabled } } pub(super) fn is_enabled(&self) -> bool { matches!(self, Self::Active { .. 
} | Self::Inactive) } pub(super) fn set(&mut self, timer: Sleep, line: u32) { if matches!(self, Self::Disabled) { trace!("setting disabled timer from line {}", line); } *self = Self::Active { timer: Box::pin(timer), }; } pub(super) fn set_and_init(&mut self, cx: &mut Context<'_>, timer: Sleep, line: u32) { self.set(timer, line); self.init(cx); } pub(super) fn clear(&mut self, line: u32) { if matches!(self, Self::Disabled) { trace!("trying to clear a disabled timer from line {}", line); } if matches!(self, Self::Inactive) { trace!("trying to clear an inactive timer from line {}", line); } *self = Self::Inactive; } pub(super) fn init(&mut self, cx: &mut Context<'_>) { if let TimerState::Active { timer } = self { let _ = timer.as_mut().poll(cx); } } } impl fmt::Display for TimerState { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { match self { TimerState::Disabled => f.write_str("timer is disabled"), TimerState::Inactive => f.write_str("timer is inactive"), TimerState::Active { timer } => { let deadline = timer.deadline(); let now = Instant::now(); if deadline < now { f.write_str("timer is active and has reached deadline") } else { write!( f, "timer is active and due to expire in {} milliseconds", ((deadline - now).as_secs_f32() * 1000.0) ) } } } } } actix-http-3.9.0/src/h1/upgrade.rs000064400000000000000000000015741046102023000150250ustar 00000000000000use actix_codec::Framed; use actix_service::{Service, ServiceFactory}; use futures_core::future::LocalBoxFuture; use crate::{h1::Codec, Error, Request}; pub struct UpgradeHandler; impl ServiceFactory<(Request, Framed)> for UpgradeHandler { type Response = (); type Error = Error; type Config = (); type Service = UpgradeHandler; type InitError = Error; type Future = LocalBoxFuture<'static, Result>; fn new_service(&self, _: ()) -> Self::Future { unimplemented!() } } impl Service<(Request, Framed)> for UpgradeHandler { type Response = (); type Error = Error; type Future = LocalBoxFuture<'static, Result>; actix_service::always_ready!(); fn call(&self, _: (Request, Framed)) -> Self::Future { unimplemented!() } } actix-http-3.9.0/src/h1/utils.rs000064400000000000000000000100141046102023000145230ustar 00000000000000use std::{ future::Future, pin::Pin, task::{Context, Poll}, }; use actix_codec::{AsyncRead, AsyncWrite, Framed}; use pin_project_lite::pin_project; use crate::{ body::{BodySize, MessageBody}, h1::{Codec, Message}, Error, Response, }; pin_project! 
{ /// Send HTTP/1 response pub struct SendResponse { res: Option, BodySize)>>, #[pin] body: Option, #[pin] framed: Option>, } } impl SendResponse where B: MessageBody, B::Error: Into, { pub fn new(framed: Framed, response: Response) -> Self { let (res, body) = response.into_parts(); SendResponse { res: Some((res, body.size()).into()), body: Some(body), framed: Some(framed), } } } impl Future for SendResponse where T: AsyncRead + AsyncWrite + Unpin, B: MessageBody, B::Error: Into, { type Output = Result, Error>; // TODO: rethink if we need loops in polls fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { let mut this = self.as_mut().project(); let mut body_done = this.body.is_none(); loop { let mut body_ready = !body_done; // send body if this.res.is_none() && body_ready { while body_ready && !body_done && !this .framed .as_ref() .as_pin_ref() .unwrap() .is_write_buf_full() { let next = match this.body.as_mut().as_pin_mut().unwrap().poll_next(cx) { Poll::Ready(Some(Ok(item))) => Poll::Ready(Some(item)), Poll::Ready(Some(Err(err))) => return Poll::Ready(Err(err.into())), Poll::Ready(None) => Poll::Ready(None), Poll::Pending => Poll::Pending, }; match next { Poll::Ready(item) => { // body is done when item is None body_done = item.is_none(); if body_done { this.body.set(None); } let framed = this.framed.as_mut().as_pin_mut().unwrap(); framed .write(Message::Chunk(item)) .map_err(|err| Error::new_send_response().with_cause(err))?; } Poll::Pending => body_ready = false, } } } let framed = this.framed.as_mut().as_pin_mut().unwrap(); // flush write buffer if !framed.is_write_buf_empty() { match framed .flush(cx) .map_err(|err| Error::new_send_response().with_cause(err))? { Poll::Ready(_) => { if body_ready { continue; } else { return Poll::Pending; } } Poll::Pending => return Poll::Pending, } } // send response if let Some(res) = this.res.take() { framed .write(res) .map_err(|err| Error::new_send_response().with_cause(err))?; continue; } if !body_done { if body_ready { continue; } else { return Poll::Pending; } } else { break; } } let framed = this.framed.take().unwrap(); Poll::Ready(Ok(framed)) } } actix-http-3.9.0/src/h2/dispatcher.rs000064400000000000000000000267471046102023000155360ustar 00000000000000use std::{ cmp, error::Error as StdError, future::Future, marker::PhantomData, net, pin::{pin, Pin}, rc::Rc, task::{Context, Poll}, }; use actix_codec::{AsyncRead, AsyncWrite}; use actix_rt::time::{sleep, Sleep}; use actix_service::Service; use actix_utils::future::poll_fn; use bytes::{Bytes, BytesMut}; use futures_core::ready; use h2::{ server::{Connection, SendResponse}, Ping, PingPong, }; use pin_project_lite::pin_project; use crate::{ body::{BodySize, BoxBody, MessageBody}, config::ServiceConfig, header::{ HeaderName, HeaderValue, CONNECTION, CONTENT_LENGTH, DATE, TRANSFER_ENCODING, UPGRADE, }, service::HttpFlow, Extensions, Method, OnConnectData, Payload, Request, Response, ResponseHead, }; const CHUNK_SIZE: usize = 16_384; pin_project! { /// Dispatcher for HTTP/2 protocol. 
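    ///
    /// Accepts request streams from an established `h2` server connection, spawns a task per
    /// request so streams are handled concurrently, and (when keep-alive is configured) drives a
    /// PING/PONG timer to detect unresponsive peers.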
pub struct Dispatcher { flow: Rc>, connection: Connection, conn_data: Option>, config: ServiceConfig, peer_addr: Option, ping_pong: Option, _phantom: PhantomData } } impl Dispatcher where T: AsyncRead + AsyncWrite + Unpin, { pub(crate) fn new( mut conn: Connection, flow: Rc>, config: ServiceConfig, peer_addr: Option, conn_data: OnConnectData, timer: Option>>, ) -> Self { let ping_pong = config.keep_alive().duration().map(|dur| H2PingPong { timer: timer .map(|mut timer| { // reuse timer slot if it was initialized for handshake timer.as_mut().reset((config.now() + dur).into()); timer }) .unwrap_or_else(|| Box::pin(sleep(dur))), in_flight: false, ping_pong: conn.ping_pong().unwrap(), }); Self { flow, config, peer_addr, connection: conn, conn_data: conn_data.0.map(Rc::new), ping_pong, _phantom: PhantomData, } } } struct H2PingPong { /// Handle to send ping frames from the peer. ping_pong: PingPong, /// True when a ping has been sent and is waiting for a reply. in_flight: bool, /// Timeout for pong response. timer: Pin>, } impl Future for Dispatcher where T: AsyncRead + AsyncWrite + Unpin, S: Service, S::Error: Into>, S::Future: 'static, S::Response: Into>, B: MessageBody, { type Output = Result<(), crate::error::DispatchError>; #[inline] fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { let this = self.get_mut(); loop { match Pin::new(&mut this.connection).poll_accept(cx)? { Poll::Ready(Some((req, tx))) => { let (parts, body) = req.into_parts(); let payload = crate::h2::Payload::new(body); let pl = Payload::H2 { payload }; let mut req = Request::with_payload(pl); let head_req = parts.method == Method::HEAD; let head = req.head_mut(); head.uri = parts.uri; head.method = parts.method; head.version = parts.version; head.headers = parts.headers.into(); head.peer_addr = this.peer_addr; req.conn_data.clone_from(&this.conn_data); let fut = this.flow.service.call(req); let config = this.config.clone(); // multiplex request handling with spawn task actix_rt::spawn(async move { // resolve service call and send response. let res = match fut.await { Ok(res) => handle_response(res.into(), tx, config, head_req).await, Err(err) => { let res: Response = err.into(); handle_response(res, tx, config, head_req).await } }; // log error. if let Err(err) = res { match err { DispatchError::SendResponse(err) => { tracing::trace!("Error sending response: {err:?}"); } DispatchError::SendData(err) => { tracing::warn!("Send data error: {err:?}"); } DispatchError::ResponseBody(err) => { tracing::error!("Response payload stream error: {err:?}"); } } } }); } Poll::Ready(None) => return Poll::Ready(Ok(())), Poll::Pending => match this.ping_pong.as_mut() { Some(ping_pong) => loop { if ping_pong.in_flight { // When there is an in-flight ping-pong, poll pong and and keep-alive // timer. On successful pong received, update keep-alive timer to // determine the next timing of ping pong. match ping_pong.ping_pong.poll_pong(cx)? { Poll::Ready(_) => { ping_pong.in_flight = false; let dead_line = this.config.keep_alive_deadline().unwrap(); ping_pong.timer.as_mut().reset(dead_line.into()); } Poll::Pending => { return ping_pong.timer.as_mut().poll(cx).map(|_| Ok(())); } } } else { // When there is no in-flight ping-pong, keep-alive timer is used to // wait for next timing of ping-pong. Therefore, at this point it serves // as an interval instead. 
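                        // Once the interval elapses, send a new opaque PING, push the deadline
                        // forward by the keep-alive duration, and flag the ping as in-flight so
                        // the next poll waits on the pong instead.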
ready!(ping_pong.timer.as_mut().poll(cx)); ping_pong.ping_pong.send_ping(Ping::opaque())?; let dead_line = this.config.keep_alive_deadline().unwrap(); ping_pong.timer.as_mut().reset(dead_line.into()); ping_pong.in_flight = true; } }, None => return Poll::Pending, }, } } } } enum DispatchError { SendResponse(h2::Error), SendData(h2::Error), ResponseBody(Box), } async fn handle_response( res: Response, mut tx: SendResponse, config: ServiceConfig, head_req: bool, ) -> Result<(), DispatchError> where B: MessageBody, { let (res, body) = res.replace_body(()); // prepare response. let mut size = body.size(); let res = prepare_response(config, res.head(), &mut size); let eof_or_head = size.is_eof() || head_req; // send response head and return on eof. let mut stream = tx .send_response(res, eof_or_head) .map_err(DispatchError::SendResponse)?; if eof_or_head { return Ok(()); } let mut body = pin!(body); // poll response body and send chunks to client while let Some(res) = poll_fn(|cx| body.as_mut().poll_next(cx)).await { let mut chunk = res.map_err(|err| DispatchError::ResponseBody(err.into()))?; 'send: loop { let chunk_size = cmp::min(chunk.len(), CHUNK_SIZE); // reserve enough space and wait for stream ready. stream.reserve_capacity(chunk_size); match poll_fn(|cx| stream.poll_capacity(cx)).await { // No capacity left. drop body and return. None => return Ok(()), Some(Err(err)) => return Err(DispatchError::SendData(err)), Some(Ok(cap)) => { // split chunk to writeable size and send to client let len = chunk.len(); let bytes = chunk.split_to(cmp::min(len, cap)); stream .send_data(bytes, false) .map_err(DispatchError::SendData)?; // Current chuck completely sent. break send loop and poll next one. if chunk.is_empty() { break 'send; } } } } } // response body streaming finished. send end of stream and return. 
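    // An empty DATA frame with the END_STREAM flag set tells the peer the response body is
    // complete.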
stream .send_data(Bytes::new(), true) .map_err(DispatchError::SendData)?; Ok(()) } fn prepare_response( config: ServiceConfig, head: &ResponseHead, size: &mut BodySize, ) -> http::Response<()> { let mut has_date = false; let mut skip_len = size != &BodySize::Stream; let mut res = http::Response::new(()); *res.status_mut() = head.status; *res.version_mut() = http::Version::HTTP_2; // Content length match head.status { http::StatusCode::NO_CONTENT | http::StatusCode::CONTINUE | http::StatusCode::PROCESSING => *size = BodySize::None, http::StatusCode::SWITCHING_PROTOCOLS => { skip_len = true; *size = BodySize::Stream; } _ => {} } match size { BodySize::None | BodySize::Stream => {} BodySize::Sized(0) => { #[allow(clippy::declare_interior_mutable_const)] const HV_ZERO: HeaderValue = HeaderValue::from_static("0"); res.headers_mut().insert(CONTENT_LENGTH, HV_ZERO); } BodySize::Sized(len) => { let mut buf = itoa::Buffer::new(); res.headers_mut().insert( CONTENT_LENGTH, HeaderValue::from_str(buf.format(*len)).unwrap(), ); } }; // copy headers for (key, value) in head.headers.iter() { match key { // omit HTTP/1.x only headers according to: // https://datatracker.ietf.org/doc/html/rfc7540#section-8.1.2.2 &CONNECTION | &TRANSFER_ENCODING | &UPGRADE => continue, &CONTENT_LENGTH if skip_len => continue, &DATE => has_date = true, // omit HTTP/1.x only headers according to: // https://datatracker.ietf.org/doc/html/rfc7540#section-8.1.2.2 hdr if hdr == HeaderName::from_static("keep-alive") || hdr == HeaderName::from_static("proxy-connection") => { continue } _ => {} } res.headers_mut().append(key, value.clone()); } // set date header if !has_date { let mut bytes = BytesMut::with_capacity(29); config.write_date_header_value(&mut bytes); res.headers_mut().insert( DATE, // SAFETY: serialized date-times are known ASCII strings unsafe { HeaderValue::from_maybe_shared_unchecked(bytes.freeze()) }, ); } res } actix-http-3.9.0/src/h2/mod.rs000064400000000000000000000054011046102023000141470ustar 00000000000000//! HTTP/2 protocol. use std::{ future::Future, pin::Pin, task::{Context, Poll}, }; use actix_codec::{AsyncRead, AsyncWrite}; use actix_rt::time::{sleep_until, Sleep}; use bytes::Bytes; use futures_core::{ready, Stream}; use h2::{ server::{handshake, Connection, Handshake}, RecvStream, }; use crate::{ config::ServiceConfig, error::{DispatchError, PayloadError}, }; mod dispatcher; mod service; pub use self::{dispatcher::Dispatcher, service::H2Service}; /// HTTP/2 peer stream. 
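///
/// Wraps an `h2::RecvStream` and implements `Stream`, yielding `Bytes` chunks and releasing
/// flow-control capacity back to the peer as each chunk is consumed.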
pub struct Payload { stream: RecvStream, } impl Payload { pub(crate) fn new(stream: RecvStream) -> Self { Self { stream } } } impl Stream for Payload { type Item = Result; fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { let this = self.get_mut(); match ready!(Pin::new(&mut this.stream).poll_data(cx)) { Some(Ok(chunk)) => { let len = chunk.len(); match this.stream.flow_control().release_capacity(len) { Ok(()) => Poll::Ready(Some(Ok(chunk))), Err(err) => Poll::Ready(Some(Err(err.into()))), } } Some(Err(err)) => Poll::Ready(Some(Err(err.into()))), None => Poll::Ready(None), } } } pub(crate) fn handshake_with_timeout(io: T, config: &ServiceConfig) -> HandshakeWithTimeout where T: AsyncRead + AsyncWrite + Unpin, { HandshakeWithTimeout { handshake: handshake(io), timer: config .client_request_deadline() .map(|deadline| Box::pin(sleep_until(deadline.into()))), } } pub(crate) struct HandshakeWithTimeout { handshake: Handshake, timer: Option>>, } impl Future for HandshakeWithTimeout where T: AsyncRead + AsyncWrite + Unpin, { type Output = Result<(Connection, Option>>), DispatchError>; fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { let this = self.get_mut(); match Pin::new(&mut this.handshake).poll(cx)? { // return the timer on success handshake; its slot can be re-used for h2 ping-pong Poll::Ready(conn) => Poll::Ready(Ok((conn, this.timer.take()))), Poll::Pending => match this.timer.as_mut() { Some(timer) => { ready!(timer.as_mut().poll(cx)); Poll::Ready(Err(DispatchError::SlowRequestTimeout)) } None => Poll::Pending, }, } } } #[cfg(test)] mod tests { use static_assertions::assert_impl_all; use super::*; assert_impl_all!(Payload: Unpin, Send, Sync); } actix-http-3.9.0/src/h2/service.rs000064400000000000000000000357641046102023000150470ustar 00000000000000use std::{ future::Future, marker::PhantomData, mem, net, pin::Pin, rc::Rc, task::{Context, Poll}, }; use actix_codec::{AsyncRead, AsyncWrite}; use actix_rt::net::TcpStream; use actix_service::{ fn_factory, fn_service, IntoServiceFactory, Service, ServiceFactory, ServiceFactoryExt as _, }; use actix_utils::future::ready; use futures_core::{future::LocalBoxFuture, ready}; use tracing::{error, trace}; use super::{dispatcher::Dispatcher, handshake_with_timeout, HandshakeWithTimeout}; use crate::{ body::{BoxBody, MessageBody}, config::ServiceConfig, error::DispatchError, service::HttpFlow, ConnectCallback, OnConnectData, Request, Response, }; /// `ServiceFactory` implementation for HTTP/2 transport pub struct H2Service { srv: S, cfg: ServiceConfig, on_connect_ext: Option>>, _phantom: PhantomData<(T, B)>, } impl H2Service where S: ServiceFactory, S::Error: Into> + 'static, S::Response: Into> + 'static, >::Future: 'static, B: MessageBody + 'static, { /// Create new `H2Service` instance with config. pub(crate) fn with_config>( cfg: ServiceConfig, service: F, ) -> Self { H2Service { cfg, on_connect_ext: None, srv: service.into_factory(), _phantom: PhantomData, } } /// Set on connect callback. 
pub(crate) fn on_connect_ext(mut self, f: Option>>) -> Self { self.on_connect_ext = f; self } } impl H2Service where S: ServiceFactory, S::Future: 'static, S::Error: Into> + 'static, S::Response: Into> + 'static, >::Future: 'static, B: MessageBody + 'static, { /// Create plain TCP based service pub fn tcp( self, ) -> impl ServiceFactory< TcpStream, Config = (), Response = (), Error = DispatchError, InitError = S::InitError, > { fn_factory(|| { ready(Ok::<_, S::InitError>(fn_service(|io: TcpStream| { let peer_addr = io.peer_addr().ok(); ready(Ok::<_, DispatchError>((io, peer_addr))) }))) }) .and_then(self) } } #[cfg(feature = "openssl")] mod openssl { use actix_service::ServiceFactoryExt as _; use actix_tls::accept::{ openssl::{ reexports::{Error as SslError, SslAcceptor}, Acceptor, TlsStream, }, TlsError, }; use super::*; impl H2Service, S, B> where S: ServiceFactory, S::Future: 'static, S::Error: Into> + 'static, S::Response: Into> + 'static, >::Future: 'static, B: MessageBody + 'static, { /// Create OpenSSL based service. pub fn openssl( self, acceptor: SslAcceptor, ) -> impl ServiceFactory< TcpStream, Config = (), Response = (), Error = TlsError, InitError = S::InitError, > { Acceptor::new(acceptor) .map_init_err(|_| { unreachable!("TLS acceptor service factory does not error on init") }) .map_err(TlsError::into_service_error) .map(|io: TlsStream| { let peer_addr = io.get_ref().peer_addr().ok(); (io, peer_addr) }) .and_then(self.map_err(TlsError::Service)) } } } #[cfg(feature = "rustls-0_20")] mod rustls_0_20 { use std::io; use actix_service::ServiceFactoryExt as _; use actix_tls::accept::{ rustls::{reexports::ServerConfig, Acceptor, TlsStream}, TlsError, }; use super::*; impl H2Service, S, B> where S: ServiceFactory, S::Future: 'static, S::Error: Into> + 'static, S::Response: Into> + 'static, >::Future: 'static, B: MessageBody + 'static, { /// Create Rustls v0.20 based service. pub fn rustls( self, mut config: ServerConfig, ) -> impl ServiceFactory< TcpStream, Config = (), Response = (), Error = TlsError, InitError = S::InitError, > { let mut protos = vec![b"h2".to_vec()]; protos.extend_from_slice(&config.alpn_protocols); config.alpn_protocols = protos; Acceptor::new(config) .map_init_err(|_| { unreachable!("TLS acceptor service factory does not error on init") }) .map_err(TlsError::into_service_error) .map(|io: TlsStream| { let peer_addr = io.get_ref().0.peer_addr().ok(); (io, peer_addr) }) .and_then(self.map_err(TlsError::Service)) } } } #[cfg(feature = "rustls-0_21")] mod rustls_0_21 { use std::io; use actix_service::ServiceFactoryExt as _; use actix_tls::accept::{ rustls_0_21::{reexports::ServerConfig, Acceptor, TlsStream}, TlsError, }; use super::*; impl H2Service, S, B> where S: ServiceFactory, S::Future: 'static, S::Error: Into> + 'static, S::Response: Into> + 'static, >::Future: 'static, B: MessageBody + 'static, { /// Create Rustls v0.21 based service. 
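        /// `h2` is prepended to the config's ALPN protocol list before the acceptor is built, so
        /// clients can negotiate HTTP/2 during the TLS handshake.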
pub fn rustls_021( self, mut config: ServerConfig, ) -> impl ServiceFactory< TcpStream, Config = (), Response = (), Error = TlsError, InitError = S::InitError, > { let mut protos = vec![b"h2".to_vec()]; protos.extend_from_slice(&config.alpn_protocols); config.alpn_protocols = protos; Acceptor::new(config) .map_init_err(|_| { unreachable!("TLS acceptor service factory does not error on init") }) .map_err(TlsError::into_service_error) .map(|io: TlsStream| { let peer_addr = io.get_ref().0.peer_addr().ok(); (io, peer_addr) }) .and_then(self.map_err(TlsError::Service)) } } } #[cfg(feature = "rustls-0_22")] mod rustls_0_22 { use std::io; use actix_service::ServiceFactoryExt as _; use actix_tls::accept::{ rustls_0_22::{reexports::ServerConfig, Acceptor, TlsStream}, TlsError, }; use super::*; impl H2Service, S, B> where S: ServiceFactory, S::Future: 'static, S::Error: Into> + 'static, S::Response: Into> + 'static, >::Future: 'static, B: MessageBody + 'static, { /// Create Rustls v0.22 based service. pub fn rustls_0_22( self, mut config: ServerConfig, ) -> impl ServiceFactory< TcpStream, Config = (), Response = (), Error = TlsError, InitError = S::InitError, > { let mut protos = vec![b"h2".to_vec()]; protos.extend_from_slice(&config.alpn_protocols); config.alpn_protocols = protos; Acceptor::new(config) .map_init_err(|_| { unreachable!("TLS acceptor service factory does not error on init") }) .map_err(TlsError::into_service_error) .map(|io: TlsStream| { let peer_addr = io.get_ref().0.peer_addr().ok(); (io, peer_addr) }) .and_then(self.map_err(TlsError::Service)) } } } #[cfg(feature = "rustls-0_23")] mod rustls_0_23 { use std::io; use actix_service::ServiceFactoryExt as _; use actix_tls::accept::{ rustls_0_23::{reexports::ServerConfig, Acceptor, TlsStream}, TlsError, }; use super::*; impl H2Service, S, B> where S: ServiceFactory, S::Future: 'static, S::Error: Into> + 'static, S::Response: Into> + 'static, >::Future: 'static, B: MessageBody + 'static, { /// Create Rustls v0.23 based service. 
pub fn rustls_0_23( self, mut config: ServerConfig, ) -> impl ServiceFactory< TcpStream, Config = (), Response = (), Error = TlsError, InitError = S::InitError, > { let mut protos = vec![b"h2".to_vec()]; protos.extend_from_slice(&config.alpn_protocols); config.alpn_protocols = protos; Acceptor::new(config) .map_init_err(|_| { unreachable!("TLS acceptor service factory does not error on init") }) .map_err(TlsError::into_service_error) .map(|io: TlsStream| { let peer_addr = io.get_ref().0.peer_addr().ok(); (io, peer_addr) }) .and_then(self.map_err(TlsError::Service)) } } } impl ServiceFactory<(T, Option)> for H2Service where T: AsyncRead + AsyncWrite + Unpin + 'static, S: ServiceFactory, S::Future: 'static, S::Error: Into> + 'static, S::Response: Into> + 'static, >::Future: 'static, B: MessageBody + 'static, { type Response = (); type Error = DispatchError; type Config = (); type Service = H2ServiceHandler; type InitError = S::InitError; type Future = LocalBoxFuture<'static, Result>; fn new_service(&self, _: ()) -> Self::Future { let service = self.srv.new_service(()); let cfg = self.cfg.clone(); let on_connect_ext = self.on_connect_ext.clone(); Box::pin(async move { let service = service.await?; Ok(H2ServiceHandler::new(cfg, on_connect_ext, service)) }) } } /// `Service` implementation for HTTP/2 transport pub struct H2ServiceHandler where S: Service, { flow: Rc>, cfg: ServiceConfig, on_connect_ext: Option>>, _phantom: PhantomData, } impl H2ServiceHandler where S: Service, S::Error: Into> + 'static, S::Future: 'static, S::Response: Into> + 'static, B: MessageBody + 'static, { fn new( cfg: ServiceConfig, on_connect_ext: Option>>, service: S, ) -> H2ServiceHandler { H2ServiceHandler { flow: HttpFlow::new(service, (), None), cfg, on_connect_ext, _phantom: PhantomData, } } } impl Service<(T, Option)> for H2ServiceHandler where T: AsyncRead + AsyncWrite + Unpin, S: Service, S::Error: Into> + 'static, S::Future: 'static, S::Response: Into> + 'static, B: MessageBody + 'static, { type Response = (); type Error = DispatchError; type Future = H2ServiceHandlerResponse; fn poll_ready(&self, cx: &mut Context<'_>) -> Poll> { self.flow.service.poll_ready(cx).map_err(|err| { let err = err.into(); error!("Service readiness error: {:?}", err); DispatchError::Service(err) }) } fn call(&self, (io, addr): (T, Option)) -> Self::Future { let on_connect_data = OnConnectData::from_io(&io, self.on_connect_ext.as_deref()); H2ServiceHandlerResponse { state: State::Handshake( Some(Rc::clone(&self.flow)), Some(self.cfg.clone()), addr, on_connect_data, handshake_with_timeout(io, &self.cfg), ), } } } enum State, B: MessageBody> where T: AsyncRead + AsyncWrite + Unpin, S::Future: 'static, { Handshake( Option>>, Option, Option, OnConnectData, HandshakeWithTimeout, ), Established(Dispatcher), } pub struct H2ServiceHandlerResponse where T: AsyncRead + AsyncWrite + Unpin, S: Service, S::Error: Into> + 'static, S::Future: 'static, S::Response: Into> + 'static, B: MessageBody + 'static, { state: State, } impl Future for H2ServiceHandlerResponse where T: AsyncRead + AsyncWrite + Unpin, S: Service, S::Error: Into> + 'static, S::Future: 'static, S::Response: Into> + 'static, B: MessageBody, { type Output = Result<(), DispatchError>; fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { match self.state { State::Handshake( ref mut srv, ref mut config, ref peer_addr, ref mut conn_data, ref mut handshake, ) => match ready!(Pin::new(handshake).poll(cx)) { Ok((conn, timer)) => { let on_connect_data = mem::take(conn_data); 
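                    // Handshake succeeded: hand the negotiated connection (and the leftover
                    // handshake timer, reused for the ping-pong keep-alive) to the dispatcher,
                    // then poll again immediately so request handling starts on this wakeup.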
self.state = State::Established(Dispatcher::new( conn, srv.take().unwrap(), config.take().unwrap(), *peer_addr, on_connect_data, timer, )); self.poll(cx) } Err(err) => { trace!("H2 handshake error: {}", err); Poll::Ready(Err(err)) } }, State::Established(ref mut disp) => Pin::new(disp).poll(cx), } } } actix-http-3.9.0/src/header/as_name.rs000064400000000000000000000027621046102023000157210ustar 00000000000000//! Sealed [`AsHeaderName`] trait and implementations. use std::{borrow::Cow, str::FromStr as _}; use http::header::{HeaderName, InvalidHeaderName}; /// Sealed trait implemented for types that can be effectively borrowed as a [`HeaderValue`]. /// /// [`HeaderValue`]: super::HeaderValue pub trait AsHeaderName: Sealed {} pub struct Seal; pub trait Sealed { fn try_as_name(&self, seal: Seal) -> Result, InvalidHeaderName>; } impl Sealed for HeaderName { #[inline] fn try_as_name(&self, _: Seal) -> Result, InvalidHeaderName> { Ok(Cow::Borrowed(self)) } } impl AsHeaderName for HeaderName {} impl Sealed for &HeaderName { #[inline] fn try_as_name(&self, _: Seal) -> Result, InvalidHeaderName> { Ok(Cow::Borrowed(*self)) } } impl AsHeaderName for &HeaderName {} impl Sealed for &str { #[inline] fn try_as_name(&self, _: Seal) -> Result, InvalidHeaderName> { HeaderName::from_str(self).map(Cow::Owned) } } impl AsHeaderName for &str {} impl Sealed for String { #[inline] fn try_as_name(&self, _: Seal) -> Result, InvalidHeaderName> { HeaderName::from_str(self).map(Cow::Owned) } } impl AsHeaderName for String {} impl Sealed for &String { #[inline] fn try_as_name(&self, _: Seal) -> Result, InvalidHeaderName> { HeaderName::from_str(self).map(Cow::Owned) } } impl AsHeaderName for &String {} actix-http-3.9.0/src/header/common.rs000064400000000000000000000053571046102023000156110ustar 00000000000000//! Common header names not defined in [`http`]. //! //! Any headers added to this file will need to be re-exported from the list at `crate::headers`. use http::header::HeaderName; /// Response header field that indicates how caches have handled that response and its corresponding /// request. /// /// See [RFC 9211](https://www.rfc-editor.org/rfc/rfc9211) for full semantics. // TODO(breaking): replace with http's version pub const CACHE_STATUS: HeaderName = HeaderName::from_static("cache-status"); /// Response header field that allows origin servers to control the behavior of CDN caches /// interposed between them and clients separately from other caches that might handle the response. /// /// See [RFC 9213](https://www.rfc-editor.org/rfc/rfc9213) for full semantics. // TODO(breaking): replace with http's version pub const CDN_CACHE_CONTROL: HeaderName = HeaderName::from_static("cdn-cache-control"); /// Response header that prevents a document from loading any cross-origin resources that don't /// explicitly grant the document permission (using [CORP] or [CORS]). /// /// [CORP]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Cross-Origin_Resource_Policy_(CORP) /// [CORS]: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS pub const CROSS_ORIGIN_EMBEDDER_POLICY: HeaderName = HeaderName::from_static("cross-origin-embedder-policy"); /// Response header that allows you to ensure a top-level document does not share a browsing context /// group with cross-origin documents. pub const CROSS_ORIGIN_OPENER_POLICY: HeaderName = HeaderName::from_static("cross-origin-opener-policy"); /// Response header that conveys a desire that the browser blocks no-cors cross-origin/cross-site /// requests to the given resource. 
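///
/// A minimal usage sketch (assuming this constant, `HeaderMap`, and `HeaderValue` are re-exported
/// from `actix_http::header`; `same-origin` is one of the accepted directive values):
///
/// ```ignore
/// use actix_http::header::{HeaderMap, HeaderValue, CROSS_ORIGIN_RESOURCE_POLICY};
///
/// let mut headers = HeaderMap::new();
/// headers.insert(CROSS_ORIGIN_RESOURCE_POLICY, HeaderValue::from_static("same-origin"));
/// ```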
pub const CROSS_ORIGIN_RESOURCE_POLICY: HeaderName = HeaderName::from_static("cross-origin-resource-policy"); /// Response header that provides a mechanism to allow and deny the use of browser features in a /// document or within any `