tokio-postgres-0.7.12/.cargo_vcs_info.json
{
  "git": {
    "sha1": "8d08adb62bd96a29106800a798883a234a4c7e1f"
  },
  "path_in_vcs": "tokio-postgres"
}

tokio-postgres-0.7.12/CHANGELOG.md
# Change Log

## Unreleased

## v0.7.12 - 2024-09-15

### Fixed

* Fixed `query_typed` queries that return no rows.

### Added

* Added support for `jiff` 0.1 via the `with-jiff-01` feature.
* Added support for TCP keepalive on AIX.

## v0.7.11 - 2024-07-21

### Fixed

* Fixed handling of non-UTF8 error fields which can be sent after failed handshakes.
* Fixed cancellation handling of `TransactionBuilder::start` futures.

### Added

* Added `table_oid` and `field_id` fields to the `Columns` struct of prepared statements.
* Added `GenericClient::simple_query`.
* Added `#[track_caller]` to `Row::get` and `SimpleQueryRow::get`.
* Added `TargetSessionAttrs::ReadOnly`.
* Added `Debug` implementation for `Statement`.
* Added `Clone` implementation for `Row`.
* Added `SimpleQueryMessage::RowDescription`.
* Added `{Client, Transaction, GenericClient}::query_typed`.

### Changed

* Disabled `rustc-serialize` compatibility of the `eui48-1` dependency.
* Config setters now take `impl Into<String>`.

## v0.7.10 - 2023-08-25

### Fixed

* Deferred default username lookup to avoid regressing `Config` behavior.

## v0.7.9 - 2023-08-19

### Fixed

* Fixed builds on OpenBSD.

### Added

* Added the `js` feature for WASM support.
* Added support for the `hostaddr` config option to bypass DNS lookups.
* Added support for the `load_balance_hosts` config option to randomize connection ordering.
* The `user` config option now defaults to the executing process's user.

## v0.7.8 - 2023-05-27

### Added

* Added `keepalives_interval` and `keepalives_retries` config options.
* Added new `SqlState` variants.
* Added more `Debug` impls.
* Added `GenericClient::batch_execute`.
* Added `RowStream::rows_affected`.
* Added the `tcp_user_timeout` config option.

### Changed

* Passing an incorrect number of parameters to a query method now returns an error instead of panicking.
* Upgraded `socket2`.

## v0.7.7 - 2022-08-21

### Added

* Added `ToSql` and `FromSql` implementations for `[u8; N]` via the `array-impls` feature.
* Added support for `smol_str` 0.1 via the `with-smol_str-01` feature.
* Added `ToSql::encode_format` to support text encodings of parameters.

## v0.7.6 - 2022-04-30

### Added

* Added support for `uuid` 1.0 via the `with-uuid-1` feature.

### Changed

* Upgraded to `tokio-util` 0.7.
* Upgraded to `parking_lot` 0.12.

## v0.7.5 - 2021-10-29

### Fixed

* Fixed a bug where the client could enter into a transaction if the `Client::transaction` future was dropped before completion.

## v0.7.4 - 2021-10-19

### Fixed

* Fixed reporting of commit-time errors triggered by deferred constraints.

## v0.7.3 - 2021-09-29

### Fixed

* Fixed a deadlock when pipelined requests concurrently prepare cached typeinfo queries.

### Added

* Added `SimpleQueryRow::columns`.
* Added support for `eui48` 1.0 via the `with-eui48-1` feature.
* Added `FromSql` and `ToSql` implementations for arrays via the `array-impls` feature.
* Added support for `time` 0.3 via the `with-time-0_3` feature.

## v0.7.2 - 2021-04-25

### Fixed

* `SqlState` constants can now be used in `match` patterns.

## v0.7.1 - 2021-04-03

### Added

* Added support for `geo-types` 0.7 via the `with-geo-types-0_7` feature.
* Added `Client::clear_type_cache`.
* Added `Error::as_db_error` and `Error::is_closed`.

## v0.7.0 - 2020-12-25

### Changed

* Upgraded to `tokio` 1.0.
* Upgraded to `postgres-types` 0.2.

### Added

* Methods taking iterators of `ToSql` values can now take both `&dyn ToSql` and `T: ToSql` values.

## v0.6.0 - 2020-10-17

### Changed

* Upgraded to `tokio` 0.3.
* Added the detail and hint fields to `DbError`'s `Display` implementation.

## v0.5.5 - 2020-07-03

### Added

* Added support for `geo-types` 0.6.
## v0.5.4 - 2020-05-01

### Added

* Added `Transaction::savepoint`, which can be used to create a savepoint with a custom name.

## v0.5.3 - 2020-03-05

### Added

* Added `Debug` implementations for `Client`, `Row`, and `Column`.
* Added `time` 0.2 support.

## v0.5.2 - 2020-01-31

### Fixed

* Notice messages sent during the initial connection process are now collected and returned first from `Connection::poll_message`.

### Deprecated

* Deprecated `Client::cancel_query` and `Client::cancel_query_raw` in favor of `Client::cancel_token`.

### Added

* Added `Client::build_transaction` to allow configuration of various transaction options.
* Added `Client::cancel_token`, which returns a separate owned object that can be used to cancel queries.
* Added accessors for `Config` fields.
* Added a `GenericClient` trait implemented for `Client` and `Transaction` and covering shared functionality.

## v0.5.1 - 2019-12-25

### Fixed

* Removed some stray `println!`s from `copy_out` internals.

## v0.5.0 - 2019-12-23

### Changed

* `Client::copy_in` now returns a `Sink` rather than taking in a `Stream`.
* `CopyStream` has been renamed to `CopyOutStream`.
* `Client::copy_in` and `Client::copy_out` no longer take query parameters, as PostgreSQL doesn't support parameters in COPY queries.
* `TargetSessionAttrs`, `SslMode`, and `ChannelBinding` are now true non-exhaustive enums.

### Added

* Added `Client::query_opt` for queries expected to return zero or one rows.
* Added binary copy format support to the `binary_copy` module.
* Added back query logging.

### Removed

* Removed `uuid` 0.7 support.

## v0.5.0-alpha.2 - 2019-11-27

### Changed

* Upgraded `bytes` to 0.5.
* Upgraded `tokio` to 0.2.
* The TLS interface uses a trait to obtain channel binding information rather than returning it after the handshake.
* Changed the value of the `timezone` property from `GMT` to `UTC`.
* Returned `Stream` implementations are now `!Unpin`.

### Added

* Added support for `uuid` 0.8.
* Added the column to `Row::try_get` errors.

## v0.5.0-alpha.1 - 2019-10-14

### Changed

* The library now uses `std::future::Future` and async/await syntax.
* Most methods now take `&self` rather than `&mut self`.
* The transaction API has changed to more closely resemble the synchronous API and is significantly more ergonomic.
* Methods now take `&[&(dyn ToSql + Sync)]` rather than `&[&dyn ToSql]` to allow futures to be `Send`.
* Methods are now "normal" async functions that no longer do work up-front.
* Statements are no longer required to be prepared explicitly before use. Methods taking `&Statement` can now also take `&str`, and will internally prepare the statement.
* `ToSql` now serializes its value into a `BytesMut` rather than `Vec<u8>`.
* Methods that previously returned `Stream`s now return `Vec<T>`. New `*_raw` methods still provide a `Stream` interface.

### Added

* Added the `channel_binding=disable/allow/require` configuration to control use of channel binding.
* Added the `Client::query_one` method to cover the common case of a query that returns exactly one row.

## v0.4.0-rc.3 - 2019-06-29

### Fixed

* Significantly improved the performance of `query` and `copy_in`.

### Changed

* The items of the stream passed to `copy_in` must be `'static`.

## v0.4.0-rc.2 - 2019-03-05

### Fixed

* Fixed Cargo features to actually enable the functionality they claim to.

## v0.4.0-rc.1 - 2019-03-05

### Changed

* The client API has been significantly overhauled. It now resembles `hyper`'s, with separate `Connection` and `Client` objects. See the crate-level documentation for more details.
* The TLS connection mode (e.g. `prefer`) is now part of the connection configuration rather than being passed in separately.
* The Cargo features enabling `ToSql` and `FromSql` implementations for external crates are now versioned. For example, `with-uuid` is now `with-uuid-0_7`.
  This enables us to add support for new major versions of the crates in parallel without breaking backwards compatibility.
* Upgraded from `tokio-core` to `tokio`.

### Added

* Connection string configuration now more fully mirrors libpq's syntax, and supports both URL-style and key-value style strings.
* `FromSql` implementations can now borrow from the data buffer. In particular, this means that you can deserialize values as `&str`. The `FromSqlOwned` trait can be used as a bound to restrict code to deserializing owned values.
* Added support for channel binding with SCRAM authentication.
* Added multi-host support in connection configuration.
* The client now supports query pipelining, which can be used as a latency hiding measure.
* While the crate uses `tokio` by default, the base API can be used with any asynchronous stream type on any reactor.
* Added support for simple query requests returning row data.

### Removed

* The `with-openssl` feature has been removed. Use the `tokio-postgres-openssl` crate instead.
* The `with-rustc_serialize` and `with-time` features have been removed. Use `serde` and `SystemTime` or `chrono` instead.

## Older

Look at the [release tags] for information about older releases.

[release tags]: https://github.com/sfackler/rust-postgres/releases

tokio-postgres-0.7.12/Cargo.toml
# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO
#
# When uploading crates to the registry Cargo will automatically
# "normalize" Cargo.toml files for maximal compatibility
# with all versions of Cargo and also rewrite `path` dependencies
# to registry (e.g., crates.io) dependencies.
#
# If you are reading this file be aware that the original Cargo.toml
# will likely look very different (and much more reasonable).
# See Cargo.toml.orig for the original contents.
[package]
edition = "2018"
name = "tokio-postgres"
version = "0.7.12"
authors = ["Steven Fackler <sfackler@gmail.com>"]
description = "A native, asynchronous PostgreSQL client"
readme = "README.md"
keywords = [
    "database",
    "postgres",
    "postgresql",
    "sql",
    "async",
]
categories = ["database"]
license = "MIT OR Apache-2.0"
repository = "https://github.com/sfackler/rust-postgres"
resolver = "2"

[package.metadata.docs.rs]
all-features = true

[lib]
test = false

[[bench]]
name = "bench"
harness = false

[dependencies.async-trait]
version = "0.1"

[dependencies.byteorder]
version = "1.0"

[dependencies.bytes]
version = "1.0"

[dependencies.fallible-iterator]
version = "0.2"

[dependencies.futures-channel]
version = "0.3"
features = ["sink"]

[dependencies.futures-util]
version = "0.3"
features = ["sink"]

[dependencies.log]
version = "0.4"

[dependencies.parking_lot]
version = "0.12"

[dependencies.percent-encoding]
version = "2.0"

[dependencies.phf]
version = "0.11"

[dependencies.pin-project-lite]
version = "0.2"

[dependencies.postgres-protocol]
version = "0.6.7"

[dependencies.postgres-types]
version = "0.2.8"

[dependencies.rand]
version = "0.8.5"

[dependencies.tokio]
version = "1.27"
features = ["io-util"]

[dependencies.tokio-util]
version = "0.7"
features = ["codec"]

[dependencies.whoami]
version = "1.4.1"

[dev-dependencies.bit-vec-06]
version = "0.6"
package = "bit-vec"

[dev-dependencies.chrono-04]
version = "0.4"
default-features = false
package = "chrono"

[dev-dependencies.criterion]
version = "0.5"

[dev-dependencies.env_logger]
version = "0.11"

[dev-dependencies.eui48-1]
version = "1.0"
default-features = false
package = "eui48"

[dev-dependencies.futures-executor]
version = "0.3"

[dev-dependencies.geo-types-06]
version = "0.6"
package = "geo-types"

[dev-dependencies.geo-types-07]
version = "0.7"
package = "geo-types"

[dev-dependencies.jiff-01]
version = "0.1"
package = "jiff"

[dev-dependencies.serde-1]
version = "1.0"
package = "serde"

[dev-dependencies.serde_json-1]
version = "1.0"
package = "serde_json"

[dev-dependencies.smol_str-01]
version = "0.1"
package = "smol_str"

[dev-dependencies.time-02]
version = "0.2"
package = "time"

[dev-dependencies.time-03]
version = "0.3"
features = ["parsing"]
package = "time"

[dev-dependencies.tokio]
version = "1.0"
features = [
    "macros",
    "net",
    "rt",
    "rt-multi-thread",
    "time",
]

[dev-dependencies.uuid-08]
version = "0.8"
package = "uuid"

[dev-dependencies.uuid-1]
version = "1.0"
package = "uuid"

[features]
array-impls = ["postgres-types/array-impls"]
default = ["runtime"]
js = [
    "postgres-protocol/js",
    "postgres-types/js",
]
runtime = [
    "tokio/net",
    "tokio/time",
]
with-bit-vec-0_6 = ["postgres-types/with-bit-vec-0_6"]
with-chrono-0_4 = ["postgres-types/with-chrono-0_4"]
with-eui48-0_4 = ["postgres-types/with-eui48-0_4"]
with-eui48-1 = ["postgres-types/with-eui48-1"]
with-geo-types-0_6 = ["postgres-types/with-geo-types-0_6"]
with-geo-types-0_7 = ["postgres-types/with-geo-types-0_7"]
with-jiff-0_1 = ["postgres-types/with-jiff-0_1"]
with-serde_json-1 = ["postgres-types/with-serde_json-1"]
with-smol_str-01 = ["postgres-types/with-smol_str-01"]
with-time-0_2 = ["postgres-types/with-time-0_2"]
with-time-0_3 = ["postgres-types/with-time-0_3"]
with-uuid-0_8 = ["postgres-types/with-uuid-0_8"]
with-uuid-1 = ["postgres-types/with-uuid-1"]

[target."cfg(not(target_arch = \"wasm32\"))".dependencies.socket2]
version = "0.5"
features = ["all"]

[badges.circle-ci]
repository = "sfackler/rust-postgres"

tokio-postgres-0.7.12/Cargo.toml.orig
[package]
name = "tokio-postgres"
version = "0.7.12"
authors = ["Steven Fackler <sfackler@gmail.com>"]
edition = "2018"
license = "MIT OR Apache-2.0"
description = "A native, asynchronous PostgreSQL client"
repository = "https://github.com/sfackler/rust-postgres"
readme = "../README.md"
keywords = ["database", "postgres", "postgresql", "sql", "async"]
categories = ["database"]

[lib]
test = false

[[bench]]
name = "bench"
harness = false
[package.metadata.docs.rs] all-features = true [badges] circle-ci = { repository = "sfackler/rust-postgres" } [features] default = ["runtime"] runtime = ["tokio/net", "tokio/time"] array-impls = ["postgres-types/array-impls"] with-bit-vec-0_6 = ["postgres-types/with-bit-vec-0_6"] with-chrono-0_4 = ["postgres-types/with-chrono-0_4"] with-eui48-0_4 = ["postgres-types/with-eui48-0_4"] with-eui48-1 = ["postgres-types/with-eui48-1"] with-geo-types-0_6 = ["postgres-types/with-geo-types-0_6"] with-geo-types-0_7 = ["postgres-types/with-geo-types-0_7"] with-jiff-0_1 = ["postgres-types/with-jiff-0_1"] with-serde_json-1 = ["postgres-types/with-serde_json-1"] with-smol_str-01 = ["postgres-types/with-smol_str-01"] with-uuid-0_8 = ["postgres-types/with-uuid-0_8"] with-uuid-1 = ["postgres-types/with-uuid-1"] with-time-0_2 = ["postgres-types/with-time-0_2"] with-time-0_3 = ["postgres-types/with-time-0_3"] js = ["postgres-protocol/js", "postgres-types/js"] [dependencies] async-trait = "0.1" bytes = "1.0" byteorder = "1.0" fallible-iterator = "0.2" futures-channel = { version = "0.3", features = ["sink"] } futures-util = { version = "0.3", features = ["sink"] } log = "0.4" parking_lot = "0.12" percent-encoding = "2.0" pin-project-lite = "0.2" phf = "0.11" postgres-protocol = { version = "0.6.7", path = "../postgres-protocol" } postgres-types = { version = "0.2.8", path = "../postgres-types" } tokio = { version = "1.27", features = ["io-util"] } tokio-util = { version = "0.7", features = ["codec"] } rand = "0.8.5" whoami = "1.4.1" [target.'cfg(not(target_arch = "wasm32"))'.dependencies] socket2 = { version = "0.5", features = ["all"] } [dev-dependencies] futures-executor = "0.3" criterion = "0.5" env_logger = "0.11" tokio = { version = "1.0", features = [ "macros", "net", "rt", "rt-multi-thread", "time", ] } bit-vec-06 = { version = "0.6", package = "bit-vec" } chrono-04 = { version = "0.4", package = "chrono", default-features = false } eui48-1 = { version = "1.0", package = 
"eui48", default-features = false } geo-types-06 = { version = "0.6", package = "geo-types" } geo-types-07 = { version = "0.7", package = "geo-types" } jiff-01 = { version = "0.1", package = "jiff" } serde-1 = { version = "1.0", package = "serde" } serde_json-1 = { version = "1.0", package = "serde_json" } smol_str-01 = { version = "0.1", package = "smol_str" } uuid-08 = { version = "0.8", package = "uuid" } uuid-1 = { version = "1.0", package = "uuid" } time-02 = { version = "0.2", package = "time" } time-03 = { version = "0.3", package = "time", features = ["parsing"] } tokio-postgres-0.7.12/LICENSE-APACHE000064400000000000000000000251371046102023000146400ustar 00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. 
"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. 
Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. 
Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. 
Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. tokio-postgres-0.7.12/LICENSE-MIT000064400000000000000000000020721046102023000143410ustar 00000000000000The MIT License (MIT) Copyright (c) 2016 Steven Fackler Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. tokio-postgres-0.7.12/README.md000064400000000000000000000032371046102023000141700ustar 00000000000000# Rust-Postgres PostgreSQL support for Rust. 
## postgres [![Latest Version](https://img.shields.io/crates/v/postgres.svg)](https://crates.io/crates/postgres)

[Documentation](https://docs.rs/postgres)

A native, synchronous PostgreSQL client.

## tokio-postgres [![Latest Version](https://img.shields.io/crates/v/tokio-postgres.svg)](https://crates.io/crates/tokio-postgres)

[Documentation](https://docs.rs/tokio-postgres)

A native, asynchronous PostgreSQL client.

## postgres-types [![Latest Version](https://img.shields.io/crates/v/postgres-types.svg)](https://crates.io/crates/postgres-types)

[Documentation](https://docs.rs/postgres-types)

Conversions between Rust and Postgres types.

## postgres-native-tls [![Latest Version](https://img.shields.io/crates/v/postgres-native-tls.svg)](https://crates.io/crates/postgres-native-tls)

[Documentation](https://docs.rs/postgres-native-tls)

TLS support for postgres and tokio-postgres via native-tls.

## postgres-openssl [![Latest Version](https://img.shields.io/crates/v/postgres-openssl.svg)](https://crates.io/crates/postgres-openssl)

[Documentation](https://docs.rs/postgres-openssl)

TLS support for postgres and tokio-postgres via openssl.

# Running the test suite

The test suite requires postgres to be running in the correct configuration. The easiest way to do this is with docker:

1. Install `docker` and `docker-compose`.
   1. On ubuntu: `sudo apt install docker.io docker-compose`.
1. Make sure your user has permissions for docker.
   1. On ubuntu: `sudo usermod -aG docker $USER`
1. Change to the top-level directory of the `rust-postgres` repo.
1. Run `docker-compose up -d`.
1. Run `cargo test`.
1. Run `docker-compose stop`.
tokio-postgres-0.7.12/benches/bench.rs
use criterion::{criterion_group, criterion_main, Criterion};
use futures_channel::oneshot;
use std::sync::Arc;
use std::time::Instant;
use tokio::runtime::Runtime;
use tokio_postgres::{Client, NoTls};

fn setup() -> (Client, Runtime) {
    let runtime = Runtime::new().unwrap();
    let (client, conn) = runtime
        .block_on(tokio_postgres::connect(
            "host=localhost port=5433 user=postgres",
            NoTls,
        ))
        .unwrap();
    runtime.spawn(async { conn.await.unwrap() });
    (client, runtime)
}

fn query_prepared(c: &mut Criterion) {
    let (client, runtime) = setup();
    let statement = runtime.block_on(client.prepare("SELECT $1::INT8")).unwrap();
    c.bench_function("runtime_block_on", move |b| {
        b.iter(|| {
            runtime
                .block_on(client.query(&statement, &[&1i64]))
                .unwrap()
        })
    });

    let (client, runtime) = setup();
    let statement = runtime.block_on(client.prepare("SELECT $1::INT8")).unwrap();
    c.bench_function("executor_block_on", move |b| {
        b.iter(|| futures_executor::block_on(client.query(&statement, &[&1i64])).unwrap())
    });

    let (client, runtime) = setup();
    let client = Arc::new(client);
    let statement = runtime.block_on(client.prepare("SELECT $1::INT8")).unwrap();
    c.bench_function("spawned", move |b| {
        b.iter_custom(|iters| {
            let (tx, rx) = oneshot::channel();
            let client = client.clone();
            let statement = statement.clone();
            runtime.spawn(async move {
                let start = Instant::now();
                for _ in 0..iters {
                    client.query(&statement, &[&1i64]).await.unwrap();
                }
                tx.send(start.elapsed()).unwrap();
            });
            futures_executor::block_on(rx).unwrap()
        })
    });
}

criterion_group!(benches, query_prepared);
criterion_main!(benches);

tokio-postgres-0.7.12/src/binary_copy.rs
//! Utilities for working with the PostgreSQL binary copy format.
use crate::types::{FromSql, IsNull, ToSql, Type, WrongType};
use crate::{slice_iter, CopyInSink, CopyOutStream, Error};
use byteorder::{BigEndian, ByteOrder};
use bytes::{Buf, BufMut, Bytes, BytesMut};
use futures_util::{ready, SinkExt, Stream};
use pin_project_lite::pin_project;
use postgres_types::BorrowToSql;
use std::convert::TryFrom;
use std::io;
use std::io::Cursor;
use std::ops::Range;
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll};

const MAGIC: &[u8] = b"PGCOPY\n\xff\r\n\0";
const HEADER_LEN: usize = MAGIC.len() + 4 + 4;

pin_project! {
    /// A type which serializes rows into the PostgreSQL binary copy format.
    ///
    /// The copy *must* be explicitly completed via the `finish` method. If it is not, the copy will be aborted.
    pub struct BinaryCopyInWriter {
        #[pin]
        sink: CopyInSink<Bytes>,
        types: Vec<Type>,
        buf: BytesMut,
    }
}

impl BinaryCopyInWriter {
    /// Creates a new writer which will write rows of the provided types to the provided sink.
    pub fn new(sink: CopyInSink<Bytes>, types: &[Type]) -> BinaryCopyInWriter {
        let mut buf = BytesMut::new();
        buf.put_slice(MAGIC);
        buf.put_i32(0); // flags
        buf.put_i32(0); // header extension

        BinaryCopyInWriter {
            sink,
            types: types.to_vec(),
            buf,
        }
    }

    /// Writes a single row.
    ///
    /// # Panics
    ///
    /// Panics if the number of values provided does not match the number expected.
    pub async fn write(self: Pin<&mut Self>, values: &[&(dyn ToSql + Sync)]) -> Result<(), Error> {
        self.write_raw(slice_iter(values)).await
    }

    /// A maximally-flexible version of `write`.
    ///
    /// # Panics
    ///
    /// Panics if the number of values provided does not match the number expected.
pub async fn write_raw(self: Pin<&mut Self>, values: I) -> Result<(), Error> where P: BorrowToSql, I: IntoIterator, I::IntoIter: ExactSizeIterator, { let mut this = self.project(); let values = values.into_iter(); assert!( values.len() == this.types.len(), "expected {} values but got {}", this.types.len(), values.len(), ); this.buf.put_i16(this.types.len() as i16); for (i, (value, type_)) in values.zip(this.types).enumerate() { let idx = this.buf.len(); this.buf.put_i32(0); let len = match value .borrow_to_sql() .to_sql_checked(type_, this.buf) .map_err(|e| Error::to_sql(e, i))? { IsNull::Yes => -1, IsNull::No => i32::try_from(this.buf.len() - idx - 4) .map_err(|e| Error::encode(io::Error::new(io::ErrorKind::InvalidInput, e)))?, }; BigEndian::write_i32(&mut this.buf[idx..], len); } if this.buf.len() > 4096 { this.sink.send(this.buf.split().freeze()).await?; } Ok(()) } /// Completes the copy, returning the number of rows added. /// /// This method *must* be used to complete the copy process. If it is not, the copy will be aborted. pub async fn finish(self: Pin<&mut Self>) -> Result { let mut this = self.project(); this.buf.put_i16(-1); this.sink.send(this.buf.split().freeze()).await?; this.sink.finish().await } } struct Header { has_oids: bool, } pin_project! { /// A stream of rows deserialized from the PostgreSQL binary copy format. pub struct BinaryCopyOutStream { #[pin] stream: CopyOutStream, types: Arc>, header: Option
, } } impl BinaryCopyOutStream { /// Creates a stream from a raw copy out stream and the types of the columns being returned. pub fn new(stream: CopyOutStream, types: &[Type]) -> BinaryCopyOutStream { BinaryCopyOutStream { stream, types: Arc::new(types.to_vec()), header: None, } } } impl Stream for BinaryCopyOutStream { type Item = Result; fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { let this = self.project(); let chunk = match ready!(this.stream.poll_next(cx)) { Some(Ok(chunk)) => chunk, Some(Err(e)) => return Poll::Ready(Some(Err(e))), None => return Poll::Ready(Some(Err(Error::closed()))), }; let mut chunk = Cursor::new(chunk); let has_oids = match &this.header { Some(header) => header.has_oids, None => { check_remaining(&chunk, HEADER_LEN)?; if !chunk.chunk().starts_with(MAGIC) { return Poll::Ready(Some(Err(Error::parse(io::Error::new( io::ErrorKind::InvalidData, "invalid magic value", ))))); } chunk.advance(MAGIC.len()); let flags = chunk.get_i32(); let has_oids = (flags & (1 << 16)) != 0; let header_extension = chunk.get_u32() as usize; check_remaining(&chunk, header_extension)?; chunk.advance(header_extension); *this.header = Some(Header { has_oids }); has_oids } }; check_remaining(&chunk, 2)?; let mut len = chunk.get_i16(); if len == -1 { return Poll::Ready(None); } if has_oids { len += 1; } if len as usize != this.types.len() { return Poll::Ready(Some(Err(Error::parse(io::Error::new( io::ErrorKind::InvalidInput, format!("expected {} values but got {}", this.types.len(), len), ))))); } let mut ranges = vec![]; for _ in 0..len { check_remaining(&chunk, 4)?; let len = chunk.get_i32(); if len == -1 { ranges.push(None); } else { let len = len as usize; check_remaining(&chunk, len)?; let start = chunk.position() as usize; ranges.push(Some(start..start + len)); chunk.advance(len); } } Poll::Ready(Some(Ok(BinaryCopyOutRow { buf: chunk.into_inner(), ranges, types: this.types.clone(), }))) } } fn check_remaining(buf: &Cursor, len: usize) -> 
Result<(), Error> { if buf.remaining() < len { Err(Error::parse(io::Error::new( io::ErrorKind::UnexpectedEof, "unexpected EOF", ))) } else { Ok(()) } } /// A row of data parsed from a binary copy out stream. pub struct BinaryCopyOutRow { buf: Bytes, ranges: Vec>>, types: Arc>, } impl BinaryCopyOutRow { /// Like `get`, but returns a `Result` rather than panicking. pub fn try_get<'a, T>(&'a self, idx: usize) -> Result where T: FromSql<'a>, { let type_ = match self.types.get(idx) { Some(type_) => type_, None => return Err(Error::column(idx.to_string())), }; if !T::accepts(type_) { return Err(Error::from_sql( Box::new(WrongType::new::(type_.clone())), idx, )); } let r = match &self.ranges[idx] { Some(range) => T::from_sql(type_, &self.buf[range.clone()]), None => T::from_sql_null(type_), }; r.map_err(|e| Error::from_sql(e, idx)) } /// Deserializes a value from the row. /// /// # Panics /// /// Panics if the index is out of bounds or if the value cannot be converted to the specified type. pub fn get<'a, T>(&'a self, idx: usize) -> T where T: FromSql<'a>, { match self.try_get(idx) { Ok(value) => value, Err(e) => panic!("error retrieving column {}: {}", idx, e), } } } tokio-postgres-0.7.12/src/bind.rs000064400000000000000000000021511046102023000147540ustar 00000000000000use crate::client::InnerClient; use crate::codec::FrontendMessage; use crate::connection::RequestMessages; use crate::types::BorrowToSql; use crate::{query, Error, Portal, Statement}; use postgres_protocol::message::backend::Message; use postgres_protocol::message::frontend; use std::sync::atomic::{AtomicUsize, Ordering}; use std::sync::Arc; static NEXT_ID: AtomicUsize = AtomicUsize::new(0); pub async fn bind( client: &Arc, statement: Statement, params: I, ) -> Result where P: BorrowToSql, I: IntoIterator, I::IntoIter: ExactSizeIterator, { let name = format!("p{}", NEXT_ID.fetch_add(1, Ordering::SeqCst)); let buf = client.with_buf(|buf| { query::encode_bind(&statement, params, &name, buf)?; 
frontend::sync(buf); Ok(buf.split().freeze()) })?; let mut responses = client.send(RequestMessages::Single(FrontendMessage::Raw(buf)))?; match responses.next().await? { Message::BindComplete => {} _ => return Err(Error::unexpected_message()), } Ok(Portal::new(client, name, statement)) } tokio-postgres-0.7.12/src/cancel_query.rs000064400000000000000000000021651046102023000165170ustar 00000000000000use crate::client::SocketConfig; use crate::config::SslMode; use crate::tls::MakeTlsConnect; use crate::{cancel_query_raw, connect_socket, Error, Socket}; use std::io; pub(crate) async fn cancel_query( config: Option, ssl_mode: SslMode, mut tls: T, process_id: i32, secret_key: i32, ) -> Result<(), Error> where T: MakeTlsConnect, { let config = match config { Some(config) => config, None => { return Err(Error::connect(io::Error::new( io::ErrorKind::InvalidInput, "unknown host", ))) } }; let tls = tls .make_tls_connect(config.hostname.as_deref().unwrap_or("")) .map_err(|e| Error::tls(e.into()))?; let has_hostname = config.hostname.is_some(); let socket = connect_socket::connect_socket( &config.addr, config.port, config.connect_timeout, config.tcp_user_timeout, config.keepalive.as_ref(), ) .await?; cancel_query_raw::cancel_query_raw(socket, ssl_mode, tls, has_hostname, process_id, secret_key) .await } tokio-postgres-0.7.12/src/cancel_query_raw.rs000064400000000000000000000014401046102023000173630ustar 00000000000000use crate::config::SslMode; use crate::tls::TlsConnect; use crate::{connect_tls, Error}; use bytes::BytesMut; use postgres_protocol::message::frontend; use tokio::io::{AsyncRead, AsyncWrite, AsyncWriteExt}; pub async fn cancel_query_raw( stream: S, mode: SslMode, tls: T, has_hostname: bool, process_id: i32, secret_key: i32, ) -> Result<(), Error> where S: AsyncRead + AsyncWrite + Unpin, T: TlsConnect, { let mut stream = connect_tls::connect_tls(stream, mode, tls, has_hostname).await?; let mut buf = BytesMut::new(); frontend::cancel_request(process_id, secret_key, 
&mut buf); stream.write_all(&buf).await.map_err(Error::io)?; stream.flush().await.map_err(Error::io)?; stream.shutdown().await.map_err(Error::io)?; Ok(()) } tokio-postgres-0.7.12/src/cancel_token.rs000064400000000000000000000040731046102023000164720ustar 00000000000000use crate::config::SslMode; use crate::tls::TlsConnect; #[cfg(feature = "runtime")] use crate::{cancel_query, client::SocketConfig, tls::MakeTlsConnect, Socket}; use crate::{cancel_query_raw, Error}; use tokio::io::{AsyncRead, AsyncWrite}; /// The capability to request cancellation of in-progress queries on a /// connection. #[derive(Clone)] pub struct CancelToken { #[cfg(feature = "runtime")] pub(crate) socket_config: Option, pub(crate) ssl_mode: SslMode, pub(crate) process_id: i32, pub(crate) secret_key: i32, } impl CancelToken { /// Attempts to cancel the in-progress query on the connection associated /// with this `CancelToken`. /// /// The server provides no information about whether a cancellation attempt was successful or not. An error will /// only be returned if the client was unable to connect to the database. /// /// Cancellation is inherently racy. There is no guarantee that the /// cancellation request will reach the server before the query terminates /// normally, or that the connection associated with this token is still /// active. /// /// Requires the `runtime` Cargo feature (enabled by default). #[cfg(feature = "runtime")] pub async fn cancel_query(&self, tls: T) -> Result<(), Error> where T: MakeTlsConnect, { cancel_query::cancel_query( self.socket_config.clone(), self.ssl_mode, tls, self.process_id, self.secret_key, ) .await } /// Like `cancel_query`, but uses a stream which is already connected to the server rather than opening a new /// connection itself. 
pub async fn cancel_query_raw(&self, stream: S, tls: T) -> Result<(), Error> where S: AsyncRead + AsyncWrite + Unpin, T: TlsConnect, { cancel_query_raw::cancel_query_raw( stream, self.ssl_mode, tls, true, self.process_id, self.secret_key, ) .await } } tokio-postgres-0.7.12/src/client.rs000064400000000000000000000544071046102023000153310ustar 00000000000000use crate::codec::BackendMessages; use crate::config::SslMode; use crate::connection::{Request, RequestMessages}; use crate::copy_out::CopyOutStream; #[cfg(feature = "runtime")] use crate::keepalive::KeepaliveConfig; use crate::query::RowStream; use crate::simple_query::SimpleQueryStream; #[cfg(feature = "runtime")] use crate::tls::MakeTlsConnect; use crate::tls::TlsConnect; use crate::types::{Oid, ToSql, Type}; #[cfg(feature = "runtime")] use crate::Socket; use crate::{ copy_in, copy_out, prepare, query, simple_query, slice_iter, CancelToken, CopyInSink, Error, Row, SimpleQueryMessage, Statement, ToStatement, Transaction, TransactionBuilder, }; use bytes::{Buf, BytesMut}; use fallible_iterator::FallibleIterator; use futures_channel::mpsc; use futures_util::{future, pin_mut, ready, StreamExt, TryStreamExt}; use parking_lot::Mutex; use postgres_protocol::message::backend::Message; use postgres_types::BorrowToSql; use std::collections::HashMap; use std::fmt; #[cfg(feature = "runtime")] use std::net::IpAddr; #[cfg(feature = "runtime")] use std::path::PathBuf; use std::sync::Arc; use std::task::{Context, Poll}; #[cfg(feature = "runtime")] use std::time::Duration; use tokio::io::{AsyncRead, AsyncWrite}; pub struct Responses { receiver: mpsc::Receiver, cur: BackendMessages, } impl Responses { pub fn poll_next(&mut self, cx: &mut Context<'_>) -> Poll> { loop { match self.cur.next().map_err(Error::parse)? 
{ Some(Message::ErrorResponse(body)) => return Poll::Ready(Err(Error::db(body))), Some(message) => return Poll::Ready(Ok(message)), None => {} } match ready!(self.receiver.poll_next_unpin(cx)) { Some(messages) => self.cur = messages, None => return Poll::Ready(Err(Error::closed())), } } } pub async fn next(&mut self) -> Result { future::poll_fn(|cx| self.poll_next(cx)).await } } /// A cache of type info and prepared statements for fetching type info /// (corresponding to the queries in the [prepare](prepare) module). #[derive(Default)] struct CachedTypeInfo { /// A statement for basic information for a type from its /// OID. Corresponds to [TYPEINFO_QUERY](prepare::TYPEINFO_QUERY) (or its /// fallback). typeinfo: Option, /// A statement for getting information for a composite type from its OID. /// Corresponds to [TYPEINFO_QUERY](prepare::TYPEINFO_COMPOSITE_QUERY). typeinfo_composite: Option, /// A statement for getting information for a composite type from its OID. /// Corresponds to [TYPEINFO_QUERY](prepare::TYPEINFO_COMPOSITE_QUERY) (or /// its fallback). typeinfo_enum: Option, /// Cache of types already looked up. types: HashMap, } pub struct InnerClient { sender: mpsc::UnboundedSender, cached_typeinfo: Mutex, /// A buffer to use when writing out postgres commands. 
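The buffer-reuse pattern behind `InnerClient::with_buf` can be shown in miniature with `std` types: a mutex-guarded scratch buffer is handed to a closure that encodes a command, and is always cleared afterwards so its allocation is reused across requests. `Scratch` here is a hypothetical stand-in using `Vec<u8>` where the crate uses `BytesMut` and `split().freeze()`.

```rust
use std::sync::Mutex;

struct Scratch {
    buffer: Mutex<Vec<u8>>,
}

impl Scratch {
    /// Runs `f` with exclusive access to the scratch buffer, then clears it
    /// (keeping its capacity) so the next caller starts from an empty buffer.
    fn with_buf<F, R>(&self, f: F) -> R
    where
        F: FnOnce(&mut Vec<u8>) -> R,
    {
        let mut buffer = self.buffer.lock().unwrap();
        let r = f(&mut buffer);
        buffer.clear();
        r
    }
}

fn main() {
    let scratch = Scratch { buffer: Mutex::new(Vec::new()) };
    let encoded = scratch.with_buf(|buf| {
        buf.extend_from_slice(b"QUERY");
        buf.clone() // take a copy out, as the crate does with split().freeze()
    });
    assert_eq!(encoded, b"QUERY");
    // the shared buffer was cleared for the next caller
    assert!(scratch.with_buf(|buf| buf.is_empty()));
}
```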
buffer: Mutex, } impl InnerClient { pub fn send(&self, messages: RequestMessages) -> Result { let (sender, receiver) = mpsc::channel(1); let request = Request { messages, sender }; self.sender .unbounded_send(request) .map_err(|_| Error::closed())?; Ok(Responses { receiver, cur: BackendMessages::empty(), }) } pub fn typeinfo(&self) -> Option { self.cached_typeinfo.lock().typeinfo.clone() } pub fn set_typeinfo(&self, statement: &Statement) { self.cached_typeinfo.lock().typeinfo = Some(statement.clone()); } pub fn typeinfo_composite(&self) -> Option { self.cached_typeinfo.lock().typeinfo_composite.clone() } pub fn set_typeinfo_composite(&self, statement: &Statement) { self.cached_typeinfo.lock().typeinfo_composite = Some(statement.clone()); } pub fn typeinfo_enum(&self) -> Option { self.cached_typeinfo.lock().typeinfo_enum.clone() } pub fn set_typeinfo_enum(&self, statement: &Statement) { self.cached_typeinfo.lock().typeinfo_enum = Some(statement.clone()); } pub fn type_(&self, oid: Oid) -> Option { self.cached_typeinfo.lock().types.get(&oid).cloned() } pub fn set_type(&self, oid: Oid, type_: &Type) { self.cached_typeinfo.lock().types.insert(oid, type_.clone()); } pub fn clear_type_cache(&self) { self.cached_typeinfo.lock().types.clear(); } /// Call the given function with a buffer to be used when writing out /// postgres commands. pub fn with_buf(&self, f: F) -> R where F: FnOnce(&mut BytesMut) -> R, { let mut buffer = self.buffer.lock(); let r = f(&mut buffer); buffer.clear(); r } } #[cfg(feature = "runtime")] #[derive(Clone)] pub(crate) struct SocketConfig { pub addr: Addr, pub hostname: Option, pub port: u16, pub connect_timeout: Option, pub tcp_user_timeout: Option, pub keepalive: Option, } #[cfg(feature = "runtime")] #[derive(Clone)] pub(crate) enum Addr { Tcp(IpAddr), #[cfg(unix)] Unix(PathBuf), } /// An asynchronous PostgreSQL client. /// /// The client is one half of what is returned when a connection is established. 
Users interact with the database /// through this client object. pub struct Client { inner: Arc, #[cfg(feature = "runtime")] socket_config: Option, ssl_mode: SslMode, process_id: i32, secret_key: i32, } impl Client { pub(crate) fn new( sender: mpsc::UnboundedSender, ssl_mode: SslMode, process_id: i32, secret_key: i32, ) -> Client { Client { inner: Arc::new(InnerClient { sender, cached_typeinfo: Default::default(), buffer: Default::default(), }), #[cfg(feature = "runtime")] socket_config: None, ssl_mode, process_id, secret_key, } } pub(crate) fn inner(&self) -> &Arc { &self.inner } #[cfg(feature = "runtime")] pub(crate) fn set_socket_config(&mut self, socket_config: SocketConfig) { self.socket_config = Some(socket_config); } /// Creates a new prepared statement. /// /// Prepared statements can be executed repeatedly, and may contain query parameters (indicated by `$1`, `$2`, etc), /// which are set when executed. Prepared statements can only be used with the connection that created them. pub async fn prepare(&self, query: &str) -> Result { self.prepare_typed(query, &[]).await } /// Like `prepare`, but allows the types of query parameters to be explicitly specified. /// /// The list of types may be smaller than the number of parameters - the types of the remaining parameters will be /// inferred. For example, `client.prepare_typed(query, &[])` is equivalent to `client.prepare(query)`. pub async fn prepare_typed( &self, query: &str, parameter_types: &[Type], ) -> Result { prepare::prepare(&self.inner, query, parameter_types).await } /// Executes a statement, returning a vector of the resulting rows. /// /// A statement may contain parameters, specified by `$n`, where `n` is the index of the parameter of the list /// provided, 1-indexed. /// /// The `statement` argument can either be a `Statement`, or a raw query string. 
If the same statement will be /// repeatedly executed (perhaps with different query parameters), consider preparing the statement up front /// with the `prepare` method. pub async fn query( &self, statement: &T, params: &[&(dyn ToSql + Sync)], ) -> Result, Error> where T: ?Sized + ToStatement, { self.query_raw(statement, slice_iter(params)) .await? .try_collect() .await } /// Executes a statement which returns a single row, returning it. /// /// Returns an error if the query does not return exactly one row. /// /// A statement may contain parameters, specified by `$n`, where `n` is the index of the parameter of the list /// provided, 1-indexed. /// /// The `statement` argument can either be a `Statement`, or a raw query string. If the same statement will be /// repeatedly executed (perhaps with different query parameters), consider preparing the statement up front /// with the `prepare` method. pub async fn query_one( &self, statement: &T, params: &[&(dyn ToSql + Sync)], ) -> Result where T: ?Sized + ToStatement, { self.query_opt(statement, params) .await .and_then(|res| res.ok_or_else(Error::row_count)) } /// Executes a statements which returns zero or one rows, returning it. /// /// Returns an error if the query returns more than one row. /// /// A statement may contain parameters, specified by `$n`, where `n` is the index of the parameter of the list /// provided, 1-indexed. /// /// The `statement` argument can either be a `Statement`, or a raw query string. If the same statement will be /// repeatedly executed (perhaps with different query parameters), consider preparing the statement up front /// with the `prepare` method. 
pub async fn query_opt( &self, statement: &T, params: &[&(dyn ToSql + Sync)], ) -> Result, Error> where T: ?Sized + ToStatement, { let stream = self.query_raw(statement, slice_iter(params)).await?; pin_mut!(stream); let mut first = None; // Originally this was two calls to `try_next().await?`, // once for the first element, and second to error if more than one. // // However, this new form with only one .await in a loop generates // slightly smaller codegen/stack usage for the resulting future. while let Some(row) = stream.try_next().await? { if first.is_some() { return Err(Error::row_count()); } first = Some(row); } Ok(first) } /// The maximally flexible version of [`query`]. /// /// A statement may contain parameters, specified by `$n`, where `n` is the index of the parameter of the list /// provided, 1-indexed. /// /// The `statement` argument can either be a `Statement`, or a raw query string. If the same statement will be /// repeatedly executed (perhaps with different query parameters), consider preparing the statement up front /// with the `prepare` method. /// /// [`query`]: #method.query /// /// # Examples /// /// ```no_run /// # async fn async_main(client: &tokio_postgres::Client) -> Result<(), tokio_postgres::Error> { /// use futures_util::{pin_mut, TryStreamExt}; /// /// let params: Vec = vec![ /// "first param".into(), /// "second param".into(), /// ]; /// let mut it = client.query_raw( /// "SELECT foo FROM bar WHERE biz = $1 AND baz = $2", /// params, /// ).await?; /// /// pin_mut!(it); /// while let Some(row) = it.try_next().await? 
{ /// let foo: i32 = row.get("foo"); /// println!("foo: {}", foo); /// } /// # Ok(()) /// # } /// ``` pub async fn query_raw(&self, statement: &T, params: I) -> Result where T: ?Sized + ToStatement, P: BorrowToSql, I: IntoIterator, I::IntoIter: ExactSizeIterator, { let statement = statement.__convert().into_statement(self).await?; query::query(&self.inner, statement, params).await } /// Like `query`, but requires the types of query parameters to be explicitly specified. /// /// Compared to `query`, this method allows performing queries without three round trips (for /// prepare, execute, and close) by requiring the caller to specify parameter values along with /// their Postgres type. Thus, this is suitable in environments where prepared statements aren't /// supported (such as Cloudflare Workers with Hyperdrive). /// /// A statement may contain parameters, specified by `$n`, where `n` is the index of the /// parameter of the list provided, 1-indexed. pub async fn query_typed( &self, query: &str, params: &[(&(dyn ToSql + Sync), Type)], ) -> Result, Error> { self.query_typed_raw(query, params.iter().map(|(v, t)| (*v, t.clone()))) .await? .try_collect() .await } /// The maximally flexible version of [`query_typed`]. /// /// Compared to `query`, this method allows performing queries without three round trips (for /// prepare, execute, and close) by requiring the caller to specify parameter values along with /// their Postgres type. Thus, this is suitable in environments where prepared statements aren't /// supported (such as Cloudflare Workers with Hyperdrive). /// /// A statement may contain parameters, specified by `$n`, where `n` is the index of the /// parameter of the list provided, 1-indexed. 
/// /// [`query_typed`]: #method.query_typed /// /// # Examples /// /// ```no_run /// # async fn async_main(client: &tokio_postgres::Client) -> Result<(), tokio_postgres::Error> { /// use futures_util::{pin_mut, TryStreamExt}; /// use tokio_postgres::types::Type; /// /// let params: Vec<(String, Type)> = vec![ /// ("first param".into(), Type::TEXT), /// ("second param".into(), Type::TEXT), /// ]; /// let mut it = client.query_typed_raw( /// "SELECT foo FROM bar WHERE biz = $1 AND baz = $2", /// params, /// ).await?; /// /// pin_mut!(it); /// while let Some(row) = it.try_next().await? { /// let foo: i32 = row.get("foo"); /// println!("foo: {}", foo); /// } /// # Ok(()) /// # } /// ``` pub async fn query_typed_raw(&self, query: &str, params: I) -> Result where P: BorrowToSql, I: IntoIterator, { query::query_typed(&self.inner, query, params).await } /// Executes a statement, returning the number of rows modified. /// /// A statement may contain parameters, specified by `$n`, where `n` is the index of the parameter of the list /// provided, 1-indexed. /// /// The `statement` argument can either be a `Statement`, or a raw query string. If the same statement will be /// repeatedly executed (perhaps with different query parameters), consider preparing the statement up front /// with the `prepare` method. /// /// If the statement does not modify any rows (e.g. `SELECT`), 0 is returned. pub async fn execute( &self, statement: &T, params: &[&(dyn ToSql + Sync)], ) -> Result where T: ?Sized + ToStatement, { self.execute_raw(statement, slice_iter(params)).await } /// The maximally flexible version of [`execute`]. /// /// A statement may contain parameters, specified by `$n`, where `n` is the index of the parameter of the list /// provided, 1-indexed. /// /// The `statement` argument can either be a `Statement`, or a raw query string. 
If the same statement will be /// repeatedly executed (perhaps with different query parameters), consider preparing the statement up front /// with the `prepare` method. /// /// [`execute`]: #method.execute pub async fn execute_raw(&self, statement: &T, params: I) -> Result where T: ?Sized + ToStatement, P: BorrowToSql, I: IntoIterator, I::IntoIter: ExactSizeIterator, { let statement = statement.__convert().into_statement(self).await?; query::execute(self.inner(), statement, params).await } /// Executes a `COPY FROM STDIN` statement, returning a sink used to write the copy data. /// /// PostgreSQL does not support parameters in `COPY` statements, so this method does not take any. The copy *must* /// be explicitly completed via the `Sink::close` or `finish` methods. If it is not, the copy will be aborted. pub async fn copy_in(&self, statement: &T) -> Result, Error> where T: ?Sized + ToStatement, U: Buf + 'static + Send, { let statement = statement.__convert().into_statement(self).await?; copy_in::copy_in(self.inner(), statement).await } /// Executes a `COPY TO STDOUT` statement, returning a stream of the resulting data. /// /// PostgreSQL does not support parameters in `COPY` statements, so this method does not take any. pub async fn copy_out(&self, statement: &T) -> Result where T: ?Sized + ToStatement, { let statement = statement.__convert().into_statement(self).await?; copy_out::copy_out(self.inner(), statement).await } /// Executes a sequence of SQL statements using the simple query protocol, returning the resulting rows. /// /// Statements should be separated by semicolons. If an error occurs, execution of the sequence will stop at that /// point. The simple query protocol returns the values in rows as strings rather than in their binary encodings, /// so the associated row type doesn't work with the `FromSql` trait. 
Rather than simply returning a list of the /// rows, this method returns a list of an enum which indicates either the completion of one of the commands, /// or a row of data. This preserves the framing between the separate statements in the request. /// /// # Warning /// /// Prepared statements should be use for any query which contains user-specified data, as they provided the /// functionality to safely embed that data in the request. Do not form statements via string concatenation and pass /// them to this method! pub async fn simple_query(&self, query: &str) -> Result, Error> { self.simple_query_raw(query).await?.try_collect().await } pub(crate) async fn simple_query_raw(&self, query: &str) -> Result { simple_query::simple_query(self.inner(), query).await } /// Executes a sequence of SQL statements using the simple query protocol. /// /// Statements should be separated by semicolons. If an error occurs, execution of the sequence will stop at that /// point. This is intended for use when, for example, initializing a database schema. /// /// # Warning /// /// Prepared statements should be use for any query which contains user-specified data, as they provided the /// functionality to safely embed that data in the request. Do not form statements via string concatenation and pass /// them to this method! pub async fn batch_execute(&self, query: &str) -> Result<(), Error> { simple_query::batch_execute(self.inner(), query).await } /// Begins a new database transaction. /// /// The transaction will roll back by default - use the `commit` method to commit it. pub async fn transaction(&mut self) -> Result, Error> { self.build_transaction().start().await } /// Returns a builder for a transaction with custom settings. /// /// Unlike the `transaction` method, the builder can be used to control the transaction's isolation level and other /// attributes. 
pub fn build_transaction(&mut self) -> TransactionBuilder<'_> { TransactionBuilder::new(self) } /// Constructs a cancellation token that can later be used to request cancellation of a query running on the /// connection associated with this client. pub fn cancel_token(&self) -> CancelToken { CancelToken { #[cfg(feature = "runtime")] socket_config: self.socket_config.clone(), ssl_mode: self.ssl_mode, process_id: self.process_id, secret_key: self.secret_key, } } /// Attempts to cancel an in-progress query. /// /// The server provides no information about whether a cancellation attempt was successful or not. An error will /// only be returned if the client was unable to connect to the database. /// /// Requires the `runtime` Cargo feature (enabled by default). #[cfg(feature = "runtime")] #[deprecated(since = "0.6.0", note = "use Client::cancel_token() instead")] pub async fn cancel_query(&self, tls: T) -> Result<(), Error> where T: MakeTlsConnect, { self.cancel_token().cancel_query(tls).await } /// Like `cancel_query`, but uses a stream which is already connected to the server rather than opening a new /// connection itself. #[deprecated(since = "0.6.0", note = "use Client::cancel_token() instead")] pub async fn cancel_query_raw(&self, stream: S, tls: T) -> Result<(), Error> where S: AsyncRead + AsyncWrite + Unpin, T: TlsConnect, { self.cancel_token().cancel_query_raw(stream, tls).await } /// Clears the client's type information cache. /// /// When user-defined types are used in a query, the client loads their definitions from the database and caches /// them for the lifetime of the client. If those definitions are changed in the database, this method can be used /// to flush the local cache and allow the new, updated definitions to be loaded. pub fn clear_type_cache(&self) { self.inner().clear_type_cache(); } /// Determines if the connection to the server has already closed. /// /// In that case, all future queries will fail. 
pub fn is_closed(&self) -> bool { self.inner.sender.is_closed() } #[doc(hidden)] pub fn __private_api_close(&mut self) { self.inner.sender.close_channel() } } impl fmt::Debug for Client { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { f.debug_struct("Client").finish() } } tokio-postgres-0.7.12/src/codec.rs000064400000000000000000000050441046102023000151210ustar 00000000000000use bytes::{Buf, Bytes, BytesMut}; use fallible_iterator::FallibleIterator; use postgres_protocol::message::backend; use postgres_protocol::message::frontend::CopyData; use std::io; use tokio_util::codec::{Decoder, Encoder}; pub enum FrontendMessage { Raw(Bytes), CopyData(CopyData>), } pub enum BackendMessage { Normal { messages: BackendMessages, request_complete: bool, }, Async(backend::Message), } pub struct BackendMessages(BytesMut); impl BackendMessages { pub fn empty() -> BackendMessages { BackendMessages(BytesMut::new()) } } impl FallibleIterator for BackendMessages { type Item = backend::Message; type Error = io::Error; fn next(&mut self) -> io::Result> { backend::Message::parse(&mut self.0) } } pub struct PostgresCodec; impl Encoder for PostgresCodec { type Error = io::Error; fn encode(&mut self, item: FrontendMessage, dst: &mut BytesMut) -> io::Result<()> { match item { FrontendMessage::Raw(buf) => dst.extend_from_slice(&buf), FrontendMessage::CopyData(data) => data.write(dst), } Ok(()) } } impl Decoder for PostgresCodec { type Item = BackendMessage; type Error = io::Error; fn decode(&mut self, src: &mut BytesMut) -> Result, io::Error> { let mut idx = 0; let mut request_complete = false; while let Some(header) = backend::Header::parse(&src[idx..])? 
{ let len = header.len() as usize + 1; if src[idx..].len() < len { break; } match header.tag() { backend::NOTICE_RESPONSE_TAG | backend::NOTIFICATION_RESPONSE_TAG | backend::PARAMETER_STATUS_TAG => { if idx == 0 { let message = backend::Message::parse(src)?.unwrap(); return Ok(Some(BackendMessage::Async(message))); } else { break; } } _ => {} } idx += len; if header.tag() == backend::READY_FOR_QUERY_TAG { request_complete = true; break; } } if idx == 0 { Ok(None) } else { Ok(Some(BackendMessage::Normal { messages: BackendMessages(src.split_to(idx)), request_complete, })) } } } tokio-postgres-0.7.12/src/config.rs000064400000000000000000001154001046102023000153070ustar 00000000000000//! Connection configuration. #[cfg(feature = "runtime")] use crate::connect::connect; use crate::connect_raw::connect_raw; #[cfg(not(target_arch = "wasm32"))] use crate::keepalive::KeepaliveConfig; #[cfg(feature = "runtime")] use crate::tls::MakeTlsConnect; use crate::tls::TlsConnect; #[cfg(feature = "runtime")] use crate::Socket; use crate::{Client, Connection, Error}; use std::borrow::Cow; #[cfg(unix)] use std::ffi::OsStr; use std::net::IpAddr; use std::ops::Deref; #[cfg(unix)] use std::os::unix::ffi::OsStrExt; #[cfg(unix)] use std::path::{Path, PathBuf}; use std::str; use std::str::FromStr; use std::time::Duration; use std::{error, fmt, iter, mem}; use tokio::io::{AsyncRead, AsyncWrite}; /// Properties required of a session. #[derive(Debug, Copy, Clone, PartialEq, Eq)] #[non_exhaustive] pub enum TargetSessionAttrs { /// No special properties are required. Any, /// The session must allow writes. ReadWrite, /// The session allow only reads. ReadOnly, } /// TLS configuration. #[derive(Debug, Copy, Clone, PartialEq, Eq)] #[non_exhaustive] pub enum SslMode { /// Do not use TLS. Disable, /// Attempt to connect with TLS but allow sessions without. Prefer, /// Require the use of TLS. Require, } /// Channel binding configuration. 
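The frame-splitting loop in `PostgresCodec::decode` above relies on the backend message layout: a 1-byte tag followed by a big-endian `i32` length that counts the length field itself but not the tag. A std-only sketch of splitting one complete frame off the front of a read buffer (`split_frame` is a hypothetical helper, not the crate's code):

```rust
/// Returns (tag, body, bytes consumed) for the first complete frame in
/// `src`, or None if the header or body has not fully arrived yet.
fn split_frame(src: &[u8]) -> Option<(u8, &[u8], usize)> {
    if src.len() < 5 {
        return None; // tag + length header not yet complete
    }
    let tag = src[0];
    let len = i32::from_be_bytes(src[1..5].try_into().unwrap()) as usize;
    let total = len + 1; // the tag byte is not counted in `len`
    if src.len() < total {
        return None; // body not yet complete
    }
    Some((tag, &src[5..total], total))
}

fn main() {
    // A ReadyForQuery message: tag 'Z', length 5, one status byte ('I').
    let mut buf = vec![b'Z'];
    buf.extend_from_slice(&5i32.to_be_bytes());
    buf.push(b'I');
    assert_eq!(split_frame(&buf), Some((b'Z', &b"I"[..], 6)));
    // A partial frame yields None until more bytes arrive.
    assert!(split_frame(&buf[..4]).is_none());
}
```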
#[derive(Debug, Copy, Clone, PartialEq, Eq)] #[non_exhaustive] pub enum ChannelBinding { /// Do not use channel binding. Disable, /// Attempt to use channel binding but allow sessions without. Prefer, /// Require the use of channel binding. Require, } /// Load balancing configuration. #[derive(Debug, Copy, Clone, PartialEq, Eq)] #[non_exhaustive] pub enum LoadBalanceHosts { /// Make connection attempts to hosts in the order provided. Disable, /// Make connection attempts to hosts in a random order. Random, } /// A host specification. #[derive(Debug, Clone, PartialEq, Eq)] pub enum Host { /// A TCP hostname. Tcp(String), /// A path to a directory containing the server's Unix socket. /// /// This variant is only available on Unix platforms. #[cfg(unix)] Unix(PathBuf), } /// Connection configuration. /// /// Configuration can be parsed from libpq-style connection strings. These strings come in two formats: /// /// # Key-Value /// /// This format consists of space-separated key-value pairs. Values which are either the empty string or contain /// whitespace should be wrapped in `'`. `'` and `\` characters should be backslash-escaped. /// /// ## Keys /// /// * `user` - The username to authenticate with. Defaults to the user executing this process. /// * `password` - The password to authenticate with. /// * `dbname` - The name of the database to connect to. Defaults to the username. /// * `options` - Command line options used to configure the server. /// * `application_name` - Sets the `application_name` parameter on the server. /// * `sslmode` - Controls usage of TLS. If set to `disable`, TLS will not be used. If set to `prefer`, TLS will be used /// if available, but not used otherwise. If set to `require`, TLS will be forced to be used. Defaults to `prefer`. /// * `host` - The host to connect to. On Unix platforms, if the host starts with a `/` character it is treated as the /// path to the directory containing Unix domain sockets. 
Otherwise, it is treated as a hostname. Multiple hosts /// can be specified, separated by commas. Each host will be tried in turn when connecting. Required if connecting /// with the `connect` method. /// * `hostaddr` - Numeric IP address of host to connect to. This should be in the standard IPv4 address format, /// e.g., 172.28.40.9. If your machine supports IPv6, you can also use those addresses. /// If this parameter is not specified, the value of `host` will be looked up to find the corresponding IP address, /// or if host specifies an IP address, that value will be used directly. /// Using `hostaddr` allows the application to avoid a host name look-up, which might be important in applications /// with time constraints. However, a host name is required for TLS certificate verification. /// Specifically: /// * If `hostaddr` is specified without `host`, the value for `hostaddr` gives the server network address. /// The connection attempt will fail if the authentication method requires a host name; /// * If `host` is specified without `hostaddr`, a host name lookup occurs; /// * If both `host` and `hostaddr` are specified, the value for `hostaddr` gives the server network address. /// The value for `host` is ignored unless the authentication method requires it, /// in which case it will be used as the host name. /// * `port` - The port to connect to. Multiple ports can be specified, separated by commas. The number of ports must be /// either 1, in which case it will be used for all hosts, or the same as the number of hosts. Defaults to 5432 if /// omitted or the empty string. /// * `connect_timeout` - The time limit in seconds applied to each socket-level connection attempt. Note that hostnames /// can resolve to multiple IP addresses, and this limit is applied to each address. Defaults to no timeout. /// * `tcp_user_timeout` - The time limit that transmitted data may remain unacknowledged before a connection is forcibly closed. 
/// This is ignored for Unix domain socket connections. It is only supported on systems where TCP_USER_TIMEOUT is available
///     and will default to the system default if omitted or set to 0; on other systems, it has no effect.
/// * `keepalives` - Controls the use of TCP keepalive. A value of 0 disables keepalive and nonzero integers enable it.
///     This option is ignored when connecting with Unix sockets. Defaults to on.
/// * `keepalives_idle` - The number of seconds of inactivity after which a keepalive message is sent to the server.
///     This option is ignored when connecting with Unix sockets. Defaults to 2 hours.
/// * `keepalives_interval` - The time interval between TCP keepalive probes.
///     This option is ignored when connecting with Unix sockets.
/// * `keepalives_retries` - The maximum number of TCP keepalive probes that will be sent before dropping a connection.
///     This option is ignored when connecting with Unix sockets.
/// * `target_session_attrs` - Specifies requirements of the session. If set to `read-write`, the client will check that
///     the `transaction_read_write` session parameter is set to `on`. This can be used to connect to the primary server
///     in a database cluster as opposed to the secondary read-only mirrors. Defaults to `any`.
/// * `channel_binding` - Controls usage of channel binding in the authentication process. If set to `disable`, channel
///     binding will not be used. If set to `prefer`, channel binding will be used if available, but not used otherwise.
///     If set to `require`, the authentication process will fail if channel binding is not used. Defaults to `prefer`.
/// * `load_balance_hosts` - Controls the order in which the client tries to connect to the available hosts and
///     addresses. Once a connection attempt is successful no other hosts and addresses will be tried. This parameter
///     is typically used in combination with multiple host names or a DNS record that returns multiple IPs.
If set to /// `disable`, hosts and addresses will be tried in the order provided. If set to `random`, hosts will be tried /// in a random order, and the IP addresses resolved from a hostname will also be tried in a random order. Defaults /// to `disable`. /// /// ## Examples /// /// ```not_rust /// host=localhost user=postgres connect_timeout=10 keepalives=0 /// ``` /// /// ```not_rust /// host=/var/lib/postgresql,localhost port=1234 user=postgres password='password with spaces' /// ``` /// /// ```not_rust /// host=host1,host2,host3 port=1234,,5678 hostaddr=127.0.0.1,127.0.0.2,127.0.0.3 user=postgres target_session_attrs=read-write /// ``` /// /// ```not_rust /// host=host1,host2,host3 port=1234,,5678 user=postgres target_session_attrs=read-write /// ``` /// /// # Url /// /// This format resembles a URL with a scheme of either `postgres://` or `postgresql://`. All components are optional, /// and the format accepts query parameters for all of the key-value pairs described in the section above. Multiple /// host/port pairs can be comma-separated. Unix socket paths in the host section of the URL should be percent-encoded, /// as the path component of the URL specifies the database name. 
///
/// ## Examples
///
/// ```not_rust
/// postgresql://user@localhost
/// ```
///
/// ```not_rust
/// postgresql://user:password@%2Fvar%2Flib%2Fpostgresql/mydb?connect_timeout=10
/// ```
///
/// ```not_rust
/// postgresql://user@host1:1234,host2,host3:5678?target_session_attrs=read-write
/// ```
///
/// ```not_rust
/// postgresql:///mydb?user=user&host=/var/lib/postgresql
/// ```
#[derive(Clone, PartialEq, Eq)]
pub struct Config {
    pub(crate) user: Option<String>,
    pub(crate) password: Option<Vec<u8>>,
    pub(crate) dbname: Option<String>,
    pub(crate) options: Option<String>,
    pub(crate) application_name: Option<String>,
    pub(crate) ssl_mode: SslMode,
    pub(crate) host: Vec<Host>,
    pub(crate) hostaddr: Vec<IpAddr>,
    pub(crate) port: Vec<u16>,
    pub(crate) connect_timeout: Option<Duration>,
    pub(crate) tcp_user_timeout: Option<Duration>,
    pub(crate) keepalives: bool,
    #[cfg(not(target_arch = "wasm32"))]
    pub(crate) keepalive_config: KeepaliveConfig,
    pub(crate) target_session_attrs: TargetSessionAttrs,
    pub(crate) channel_binding: ChannelBinding,
    pub(crate) load_balance_hosts: LoadBalanceHosts,
}

impl Default for Config {
    fn default() -> Config {
        Config::new()
    }
}

impl Config {
    /// Creates a new configuration.
    pub fn new() -> Config {
        Config {
            user: None,
            password: None,
            dbname: None,
            options: None,
            application_name: None,
            ssl_mode: SslMode::Prefer,
            host: vec![],
            hostaddr: vec![],
            port: vec![],
            connect_timeout: None,
            tcp_user_timeout: None,
            keepalives: true,
            #[cfg(not(target_arch = "wasm32"))]
            keepalive_config: KeepaliveConfig {
                idle: Duration::from_secs(2 * 60 * 60),
                interval: None,
                retries: None,
            },
            target_session_attrs: TargetSessionAttrs::Any,
            channel_binding: ChannelBinding::Prefer,
            load_balance_hosts: LoadBalanceHosts::Disable,
        }
    }

    /// Sets the user to authenticate with.
    ///
    /// Defaults to the user executing this process.
    pub fn user(&mut self, user: impl Into<String>) -> &mut Config {
        self.user = Some(user.into());
        self
    }

    /// Gets the user to authenticate with, if one has been configured with
    /// the `user` method.
    pub fn get_user(&self) -> Option<&str> {
        self.user.as_deref()
    }

    /// Sets the password to authenticate with.
    pub fn password<T>(&mut self, password: T) -> &mut Config
    where
        T: AsRef<[u8]>,
    {
        self.password = Some(password.as_ref().to_vec());
        self
    }

    /// Gets the password to authenticate with, if one has been configured with
    /// the `password` method.
    pub fn get_password(&self) -> Option<&[u8]> {
        self.password.as_deref()
    }

    /// Sets the name of the database to connect to.
    ///
    /// Defaults to the user.
    pub fn dbname(&mut self, dbname: impl Into<String>) -> &mut Config {
        self.dbname = Some(dbname.into());
        self
    }

    /// Gets the name of the database to connect to, if one has been configured
    /// with the `dbname` method.
    pub fn get_dbname(&self) -> Option<&str> {
        self.dbname.as_deref()
    }

    /// Sets command line options used to configure the server.
    pub fn options(&mut self, options: impl Into<String>) -> &mut Config {
        self.options = Some(options.into());
        self
    }

    /// Gets the command line options used to configure the server, if the
    /// options have been set with the `options` method.
    pub fn get_options(&self) -> Option<&str> {
        self.options.as_deref()
    }

    /// Sets the value of the `application_name` runtime parameter.
    pub fn application_name(&mut self, application_name: impl Into<String>) -> &mut Config {
        self.application_name = Some(application_name.into());
        self
    }

    /// Gets the value of the `application_name` runtime parameter, if it has
    /// been set with the `application_name` method.
    pub fn get_application_name(&self) -> Option<&str> {
        self.application_name.as_deref()
    }

    /// Sets the SSL configuration.
    ///
    /// Defaults to `prefer`.
    pub fn ssl_mode(&mut self, ssl_mode: SslMode) -> &mut Config {
        self.ssl_mode = ssl_mode;
        self
    }

    /// Gets the SSL configuration.
    pub fn get_ssl_mode(&self) -> SslMode {
        self.ssl_mode
    }

    /// Adds a host to the configuration.
    ///
    /// Multiple hosts can be specified by calling this method multiple times, and each will be tried in order. On Unix
    /// systems, a host starting with a `/` is interpreted as a path to a directory containing Unix domain sockets.
    /// There must be either no hosts, or the same number of hosts as hostaddrs.
    pub fn host(&mut self, host: impl Into<String>) -> &mut Config {
        let host = host.into();

        #[cfg(unix)]
        {
            if host.starts_with('/') {
                return self.host_path(host);
            }
        }

        self.host.push(Host::Tcp(host));
        self
    }

    /// Gets the hosts that have been added to the configuration with `host`.
    pub fn get_hosts(&self) -> &[Host] {
        &self.host
    }

    /// Gets the hostaddrs that have been added to the configuration with `hostaddr`.
    pub fn get_hostaddrs(&self) -> &[IpAddr] {
        self.hostaddr.deref()
    }

    /// Adds a Unix socket host to the configuration.
    ///
    /// Unlike `host`, this method allows non-UTF8 paths.
    #[cfg(unix)]
    pub fn host_path<T>(&mut self, host: T) -> &mut Config
    where
        T: AsRef<Path>,
    {
        self.host.push(Host::Unix(host.as_ref().to_path_buf()));
        self
    }

    /// Adds a hostaddr to the configuration.
    ///
    /// Multiple hostaddrs can be specified by calling this method multiple times, and each will be tried in order.
    /// There must be either no hostaddrs, or the same number of hostaddrs as hosts.
    pub fn hostaddr(&mut self, hostaddr: IpAddr) -> &mut Config {
        self.hostaddr.push(hostaddr);
        self
    }

    /// Adds a port to the configuration.
    ///
    /// Multiple ports can be specified by calling this method multiple times. There must either be no ports, in which
    /// case the default of 5432 is used, a single port, in which case it is used for all hosts, or the same number of
    /// ports as hosts.
    pub fn port(&mut self, port: u16) -> &mut Config {
        self.port.push(port);
        self
    }

    /// Gets the ports that have been added to the configuration with `port`.
    pub fn get_ports(&self) -> &[u16] {
        &self.port
    }

    /// Sets the timeout applied to socket-level connection attempts.
    ///
    /// Note that hostnames can resolve to multiple IP addresses, and this timeout will apply to each address of each
    /// host separately. Defaults to no limit.
    pub fn connect_timeout(&mut self, connect_timeout: Duration) -> &mut Config {
        self.connect_timeout = Some(connect_timeout);
        self
    }

    /// Gets the connection timeout, if one has been set with the
    /// `connect_timeout` method.
    pub fn get_connect_timeout(&self) -> Option<&Duration> {
        self.connect_timeout.as_ref()
    }

    /// Sets the TCP user timeout.
    ///
    /// This is ignored for Unix domain socket connections. It is only supported on systems where
    /// TCP_USER_TIMEOUT is available and will default to the system default if omitted or set to 0;
    /// on other systems, it has no effect.
    pub fn tcp_user_timeout(&mut self, tcp_user_timeout: Duration) -> &mut Config {
        self.tcp_user_timeout = Some(tcp_user_timeout);
        self
    }

    /// Gets the TCP user timeout, if one has been set with the
    /// `tcp_user_timeout` method.
    pub fn get_tcp_user_timeout(&self) -> Option<&Duration> {
        self.tcp_user_timeout.as_ref()
    }

    /// Controls the use of TCP keepalive.
    ///
    /// This is ignored for Unix domain socket connections. Defaults to `true`.
    pub fn keepalives(&mut self, keepalives: bool) -> &mut Config {
        self.keepalives = keepalives;
        self
    }

    /// Reports whether TCP keepalives will be used.
    pub fn get_keepalives(&self) -> bool {
        self.keepalives
    }

    /// Sets the amount of idle time before a keepalive packet is sent on the connection.
    ///
    /// This is ignored for Unix domain sockets, or if the `keepalives` option is disabled. Defaults to 2 hours.
    #[cfg(not(target_arch = "wasm32"))]
    pub fn keepalives_idle(&mut self, keepalives_idle: Duration) -> &mut Config {
        self.keepalive_config.idle = keepalives_idle;
        self
    }

    /// Gets the configured amount of idle time before a keepalive packet will
    /// be sent on the connection.
    #[cfg(not(target_arch = "wasm32"))]
    pub fn get_keepalives_idle(&self) -> Duration {
        self.keepalive_config.idle
    }

    /// Sets the time interval between TCP keepalive probes.
    /// On Windows, this sets the value of the tcp_keepalive struct’s keepaliveinterval field.
    ///
    /// This is ignored for Unix domain sockets, or if the `keepalives` option is disabled.
    #[cfg(not(target_arch = "wasm32"))]
    pub fn keepalives_interval(&mut self, keepalives_interval: Duration) -> &mut Config {
        self.keepalive_config.interval = Some(keepalives_interval);
        self
    }

    /// Gets the time interval between TCP keepalive probes.
    #[cfg(not(target_arch = "wasm32"))]
    pub fn get_keepalives_interval(&self) -> Option<Duration> {
        self.keepalive_config.interval
    }

    /// Sets the maximum number of TCP keepalive probes that will be sent before dropping a connection.
    ///
    /// This is ignored for Unix domain sockets, or if the `keepalives` option is disabled.
    #[cfg(not(target_arch = "wasm32"))]
    pub fn keepalives_retries(&mut self, keepalives_retries: u32) -> &mut Config {
        self.keepalive_config.retries = Some(keepalives_retries);
        self
    }

    /// Gets the maximum number of TCP keepalive probes that will be sent before dropping a connection.
    #[cfg(not(target_arch = "wasm32"))]
    pub fn get_keepalives_retries(&self) -> Option<u32> {
        self.keepalive_config.retries
    }

    /// Sets the requirements of the session.
    ///
    /// This can be used to connect to the primary server in a clustered database rather than one of the read-only
    /// secondary servers. Defaults to `Any`.
    pub fn target_session_attrs(
        &mut self,
        target_session_attrs: TargetSessionAttrs,
    ) -> &mut Config {
        self.target_session_attrs = target_session_attrs;
        self
    }

    /// Gets the requirements of the session.
    pub fn get_target_session_attrs(&self) -> TargetSessionAttrs {
        self.target_session_attrs
    }

    /// Sets the channel binding behavior.
    ///
    /// Defaults to `prefer`.
    pub fn channel_binding(&mut self, channel_binding: ChannelBinding) -> &mut Config {
        self.channel_binding = channel_binding;
        self
    }

    /// Gets the channel binding behavior.
    pub fn get_channel_binding(&self) -> ChannelBinding {
        self.channel_binding
    }

    /// Sets the host load balancing behavior.
    ///
    /// Defaults to `disable`.
pub fn load_balance_hosts(&mut self, load_balance_hosts: LoadBalanceHosts) -> &mut Config { self.load_balance_hosts = load_balance_hosts; self } /// Gets the host load balancing behavior. pub fn get_load_balance_hosts(&self) -> LoadBalanceHosts { self.load_balance_hosts } fn param(&mut self, key: &str, value: &str) -> Result<(), Error> { match key { "user" => { self.user(value); } "password" => { self.password(value); } "dbname" => { self.dbname(value); } "options" => { self.options(value); } "application_name" => { self.application_name(value); } "sslmode" => { let mode = match value { "disable" => SslMode::Disable, "prefer" => SslMode::Prefer, "require" => SslMode::Require, _ => return Err(Error::config_parse(Box::new(InvalidValue("sslmode")))), }; self.ssl_mode(mode); } "host" => { for host in value.split(',') { self.host(host); } } "hostaddr" => { for hostaddr in value.split(',') { let addr = hostaddr .parse() .map_err(|_| Error::config_parse(Box::new(InvalidValue("hostaddr"))))?; self.hostaddr(addr); } } "port" => { for port in value.split(',') { let port = if port.is_empty() { 5432 } else { port.parse() .map_err(|_| Error::config_parse(Box::new(InvalidValue("port"))))? 
                };
                    self.port(port);
                }
            }
            "connect_timeout" => {
                let timeout = value
                    .parse::<i64>()
                    .map_err(|_| Error::config_parse(Box::new(InvalidValue("connect_timeout"))))?;
                if timeout > 0 {
                    self.connect_timeout(Duration::from_secs(timeout as u64));
                }
            }
            "tcp_user_timeout" => {
                let timeout = value
                    .parse::<i64>()
                    .map_err(|_| Error::config_parse(Box::new(InvalidValue("tcp_user_timeout"))))?;
                if timeout > 0 {
                    self.tcp_user_timeout(Duration::from_secs(timeout as u64));
                }
            }
            #[cfg(not(target_arch = "wasm32"))]
            "keepalives" => {
                let keepalives = value
                    .parse::<u64>()
                    .map_err(|_| Error::config_parse(Box::new(InvalidValue("keepalives"))))?;
                self.keepalives(keepalives != 0);
            }
            #[cfg(not(target_arch = "wasm32"))]
            "keepalives_idle" => {
                let keepalives_idle = value
                    .parse::<i64>()
                    .map_err(|_| Error::config_parse(Box::new(InvalidValue("keepalives_idle"))))?;
                if keepalives_idle > 0 {
                    self.keepalives_idle(Duration::from_secs(keepalives_idle as u64));
                }
            }
            #[cfg(not(target_arch = "wasm32"))]
            "keepalives_interval" => {
                let keepalives_interval = value.parse::<i64>().map_err(|_| {
                    Error::config_parse(Box::new(InvalidValue("keepalives_interval")))
                })?;
                if keepalives_interval > 0 {
                    self.keepalives_interval(Duration::from_secs(keepalives_interval as u64));
                }
            }
            #[cfg(not(target_arch = "wasm32"))]
            "keepalives_retries" => {
                let keepalives_retries = value.parse::<u32>().map_err(|_| {
                    Error::config_parse(Box::new(InvalidValue("keepalives_retries")))
                })?;
                self.keepalives_retries(keepalives_retries);
            }
            "target_session_attrs" => {
                let target_session_attrs = match value {
                    "any" => TargetSessionAttrs::Any,
                    "read-write" => TargetSessionAttrs::ReadWrite,
                    "read-only" => TargetSessionAttrs::ReadOnly,
                    _ => {
                        return Err(Error::config_parse(Box::new(InvalidValue(
                            "target_session_attrs",
                        ))));
                    }
                };
                self.target_session_attrs(target_session_attrs);
            }
            "channel_binding" => {
                let channel_binding = match value {
                    "disable" => ChannelBinding::Disable,
                    "prefer" => ChannelBinding::Prefer,
                    "require" => ChannelBinding::Require,
                    _ => {
                        return Err(Error::config_parse(Box::new(InvalidValue(
                            "channel_binding",
                        ))))
                    }
                };
                self.channel_binding(channel_binding);
            }
            "load_balance_hosts" => {
                let load_balance_hosts = match value {
                    "disable" => LoadBalanceHosts::Disable,
                    "random" => LoadBalanceHosts::Random,
                    _ => {
                        return Err(Error::config_parse(Box::new(InvalidValue(
                            "load_balance_hosts",
                        ))))
                    }
                };
                self.load_balance_hosts(load_balance_hosts);
            }
            key => {
                return Err(Error::config_parse(Box::new(UnknownOption(
                    key.to_string(),
                ))));
            }
        }

        Ok(())
    }

    /// Opens a connection to a PostgreSQL database.
    ///
    /// Requires the `runtime` Cargo feature (enabled by default).
    #[cfg(feature = "runtime")]
    pub async fn connect<T>(&self, tls: T) -> Result<(Client, Connection<Socket, T::Stream>), Error>
    where
        T: MakeTlsConnect<Socket>,
    {
        connect(tls, self).await
    }

    /// Connects to a PostgreSQL database over an arbitrary stream.
    ///
    /// All of the settings other than `user`, `password`, `dbname`, `options`, and `application_name` are ignored.
    pub async fn connect_raw<S, T>(
        &self,
        stream: S,
        tls: T,
    ) -> Result<(Client, Connection<S, T::Stream>), Error>
    where
        S: AsyncRead + AsyncWrite + Unpin,
        T: TlsConnect<S>,
    {
        connect_raw(stream, tls, true, self).await
    }
}

impl FromStr for Config {
    type Err = Error;

    fn from_str(s: &str) -> Result<Config, Error> {
        match UrlParser::parse(s)? {
            Some(config) => Ok(config),
            None => Parser::parse(s),
        }
    }
}

// Omit password from debug output
impl fmt::Debug for Config {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        struct Redaction {}
        impl fmt::Debug for Redaction {
            fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
                write!(f, "_")
            }
        }

        let mut config_dbg = &mut f.debug_struct("Config");
        config_dbg = config_dbg
            .field("user", &self.user)
            .field("password", &self.password.as_ref().map(|_| Redaction {}))
            .field("dbname", &self.dbname)
            .field("options", &self.options)
            .field("application_name", &self.application_name)
            .field("ssl_mode", &self.ssl_mode)
            .field("host", &self.host)
            .field("hostaddr", &self.hostaddr)
            .field("port", &self.port)
            .field("connect_timeout", &self.connect_timeout)
            .field("tcp_user_timeout", &self.tcp_user_timeout)
            .field("keepalives", &self.keepalives);

        #[cfg(not(target_arch = "wasm32"))]
        {
            config_dbg = config_dbg
                .field("keepalives_idle", &self.keepalive_config.idle)
                .field("keepalives_interval", &self.keepalive_config.interval)
                .field("keepalives_retries", &self.keepalive_config.retries);
        }

        config_dbg
            .field("target_session_attrs", &self.target_session_attrs)
            .field("channel_binding", &self.channel_binding)
            .finish()
    }
}

#[derive(Debug)]
struct UnknownOption(String);

impl fmt::Display for UnknownOption {
    fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(fmt, "unknown option `{}`", self.0)
    }
}

impl error::Error for UnknownOption {}

#[derive(Debug)]
struct InvalidValue(&'static str);

impl fmt::Display for InvalidValue {
    fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(fmt, "invalid value for option `{}`", self.0)
    }
}

impl error::Error for InvalidValue {}

struct Parser<'a> {
    s: &'a str,
    it: iter::Peekable<str::CharIndices<'a>>,
}

impl<'a> Parser<'a> {
    fn parse(s: &'a str) -> Result<Config, Error> {
        let mut parser = Parser {
            s,
            it: s.char_indices().peekable(),
        };

        let mut config = Config::new();

        while let Some((key, value)) = parser.parameter()? {
            config.param(key, &value)?;
        }

        Ok(config)
    }

    fn skip_ws(&mut self) {
        self.take_while(char::is_whitespace);
    }

    fn take_while<F>(&mut self, f: F) -> &'a str
    where
        F: Fn(char) -> bool,
    {
        let start = match self.it.peek() {
            Some(&(i, _)) => i,
            None => return "",
        };

        loop {
            match self.it.peek() {
                Some(&(_, c)) if f(c) => {
                    self.it.next();
                }
                Some(&(i, _)) => return &self.s[start..i],
                None => return &self.s[start..],
            }
        }
    }

    fn eat(&mut self, target: char) -> Result<(), Error> {
        match self.it.next() {
            Some((_, c)) if c == target => Ok(()),
            Some((i, c)) => {
                let m = format!(
                    "unexpected character at byte {}: expected `{}` but got `{}`",
                    i, target, c
                );
                Err(Error::config_parse(m.into()))
            }
            None => Err(Error::config_parse("unexpected EOF".into())),
        }
    }

    fn eat_if(&mut self, target: char) -> bool {
        match self.it.peek() {
            Some(&(_, c)) if c == target => {
                self.it.next();
                true
            }
            _ => false,
        }
    }

    fn keyword(&mut self) -> Option<&'a str> {
        let s = self.take_while(|c| match c {
            c if c.is_whitespace() => false,
            '=' => false,
            _ => true,
        });

        if s.is_empty() {
            None
        } else {
            Some(s)
        }
    }

    fn value(&mut self) -> Result<String, Error> {
        let value = if self.eat_if('\'') {
            let value = self.quoted_value()?;
            self.eat('\'')?;
            value
        } else {
            self.simple_value()?
        };

        Ok(value)
    }

    fn simple_value(&mut self) -> Result<String, Error> {
        let mut value = String::new();

        while let Some(&(_, c)) = self.it.peek() {
            if c.is_whitespace() {
                break;
            }

            self.it.next();
            if c == '\\' {
                if let Some((_, c2)) = self.it.next() {
                    value.push(c2);
                }
            } else {
                value.push(c);
            }
        }

        if value.is_empty() {
            return Err(Error::config_parse("unexpected EOF".into()));
        }

        Ok(value)
    }

    fn quoted_value(&mut self) -> Result<String, Error> {
        let mut value = String::new();

        while let Some(&(_, c)) = self.it.peek() {
            if c == '\'' {
                return Ok(value);
            }

            self.it.next();
            if c == '\\' {
                if let Some((_, c2)) = self.it.next() {
                    value.push(c2);
                }
            } else {
                value.push(c);
            }
        }

        Err(Error::config_parse(
            "unterminated quoted connection parameter value".into(),
        ))
    }

    fn parameter(&mut self) -> Result<Option<(&'a str, String)>, Error> {
        self.skip_ws();
        let keyword = match self.keyword() {
            Some(keyword) => keyword,
            None => return Ok(None),
        };
        self.skip_ws();
        self.eat('=')?;
        self.skip_ws();
        let value = self.value()?;

        Ok(Some((keyword, value)))
    }
}

// This is a pretty sloppy "URL" parser, but it matches the behavior of libpq, where things really aren't very strict
struct UrlParser<'a> {
    s: &'a str,
    config: Config,
}

impl<'a> UrlParser<'a> {
    fn parse(s: &'a str) -> Result<Option<Config>, Error> {
        let s = match Self::remove_url_prefix(s) {
            Some(s) => s,
            None => return Ok(None),
        };

        let mut parser = UrlParser {
            s,
            config: Config::new(),
        };

        parser.parse_credentials()?;
        parser.parse_host()?;
        parser.parse_path()?;
        parser.parse_params()?;

        Ok(Some(parser.config))
    }

    fn remove_url_prefix(s: &str) -> Option<&str> {
        for prefix in &["postgres://", "postgresql://"] {
            if let Some(stripped) = s.strip_prefix(prefix) {
                return Some(stripped);
            }
        }

        None
    }

    fn take_until(&mut self, end: &[char]) -> Option<&'a str> {
        match self.s.find(end) {
            Some(pos) => {
                let (head, tail) = self.s.split_at(pos);
                self.s = tail;
                Some(head)
            }
            None => None,
        }
    }

    fn take_all(&mut self) -> &'a str {
        mem::take(&mut self.s)
    }

    fn eat_byte(&mut self) {
        self.s = &self.s[1..];
    }

    fn parse_credentials(&mut self) ->
Result<(), Error> { let creds = match self.take_until(&['@']) { Some(creds) => creds, None => return Ok(()), }; self.eat_byte(); let mut it = creds.splitn(2, ':'); let user = self.decode(it.next().unwrap())?; self.config.user(user); if let Some(password) = it.next() { let password = Cow::from(percent_encoding::percent_decode(password.as_bytes())); self.config.password(password); } Ok(()) } fn parse_host(&mut self) -> Result<(), Error> { let host = match self.take_until(&['/', '?']) { Some(host) => host, None => self.take_all(), }; if host.is_empty() { return Ok(()); } for chunk in host.split(',') { let (host, port) = if chunk.starts_with('[') { let idx = match chunk.find(']') { Some(idx) => idx, None => return Err(Error::config_parse(InvalidValue("host").into())), }; let host = &chunk[1..idx]; let remaining = &chunk[idx + 1..]; let port = if let Some(port) = remaining.strip_prefix(':') { Some(port) } else if remaining.is_empty() { None } else { return Err(Error::config_parse(InvalidValue("host").into())); }; (host, port) } else { let mut it = chunk.splitn(2, ':'); (it.next().unwrap(), it.next()) }; self.host_param(host)?; let port = self.decode(port.unwrap_or("5432"))?; self.config.param("port", &port)?; } Ok(()) } fn parse_path(&mut self) -> Result<(), Error> { if !self.s.starts_with('/') { return Ok(()); } self.eat_byte(); let dbname = match self.take_until(&['?']) { Some(dbname) => dbname, None => self.take_all(), }; if !dbname.is_empty() { self.config.dbname(self.decode(dbname)?); } Ok(()) } fn parse_params(&mut self) -> Result<(), Error> { if !self.s.starts_with('?') { return Ok(()); } self.eat_byte(); while !self.s.is_empty() { let key = match self.take_until(&['=']) { Some(key) => self.decode(key)?, None => return Err(Error::config_parse("unterminated parameter".into())), }; self.eat_byte(); let value = match self.take_until(&['&']) { Some(value) => { self.eat_byte(); value } None => self.take_all(), }; if key == "host" { self.host_param(value)?; } else { 
                let value = self.decode(value)?;
                self.config.param(&key, &value)?;
            }
        }

        Ok(())
    }

    #[cfg(unix)]
    fn host_param(&mut self, s: &str) -> Result<(), Error> {
        let decoded = Cow::from(percent_encoding::percent_decode(s.as_bytes()));
        if decoded.first() == Some(&b'/') {
            self.config.host_path(OsStr::from_bytes(&decoded));
        } else {
            let decoded = str::from_utf8(&decoded).map_err(|e| Error::config_parse(Box::new(e)))?;
            self.config.host(decoded);
        }

        Ok(())
    }

    #[cfg(not(unix))]
    fn host_param(&mut self, s: &str) -> Result<(), Error> {
        let s = self.decode(s)?;
        self.config.param("host", &s)
    }

    fn decode(&self, s: &'a str) -> Result<Cow<'a, str>, Error> {
        percent_encoding::percent_decode(s.as_bytes())
            .decode_utf8()
            .map_err(|e| Error::config_parse(e.into()))
    }
}

#[cfg(test)]
mod tests {
    use std::net::IpAddr;

    use crate::{config::Host, Config};

    #[test]
    fn test_simple_parsing() {
        let s = "user=pass_user dbname=postgres host=host1,host2 hostaddr=127.0.0.1,127.0.0.2 port=26257";
        let config = s.parse::<Config>().unwrap();
        assert_eq!(Some("pass_user"), config.get_user());
        assert_eq!(Some("postgres"), config.get_dbname());
        assert_eq!(
            [
                Host::Tcp("host1".to_string()),
                Host::Tcp("host2".to_string())
            ],
            config.get_hosts(),
        );
        assert_eq!(
            [
                "127.0.0.1".parse::<IpAddr>().unwrap(),
                "127.0.0.2".parse::<IpAddr>().unwrap()
            ],
            config.get_hostaddrs(),
        );
        assert_eq!(1, 1);
    }

    #[test]
    fn test_invalid_hostaddr_parsing() {
        let s = "user=pass_user dbname=postgres host=host1 hostaddr=127.0.0 port=26257";
        s.parse::<Config>().err().unwrap();
    }
}
tokio-postgres-0.7.12/src/connect.rs000064400000000000000000000162241046102023000154770ustar 00000000000000
use crate::client::{Addr, SocketConfig};
use crate::config::{Host, LoadBalanceHosts, TargetSessionAttrs};
use crate::connect_raw::connect_raw;
use crate::connect_socket::connect_socket;
use crate::tls::MakeTlsConnect;
use crate::{Client, Config, Connection, Error, SimpleQueryMessage, Socket};
use futures_util::{future, pin_mut, Future, FutureExt, Stream};
use rand::seq::SliceRandom;
use std::task::Poll;
use
std::{cmp, io};
use tokio::net;

pub async fn connect<T>(
    mut tls: T,
    config: &Config,
) -> Result<(Client, Connection<Socket, T::Stream>), Error>
where
    T: MakeTlsConnect<Socket>,
{
    if config.host.is_empty() && config.hostaddr.is_empty() {
        return Err(Error::config("both host and hostaddr are missing".into()));
    }

    if !config.host.is_empty()
        && !config.hostaddr.is_empty()
        && config.host.len() != config.hostaddr.len()
    {
        let msg = format!(
            "number of hosts ({}) is different from number of hostaddrs ({})",
            config.host.len(),
            config.hostaddr.len(),
        );
        return Err(Error::config(msg.into()));
    }

    // At this point, either one of the following two scenarios could happen:
    // (1) either config.host or config.hostaddr must be empty;
    // (2) if both config.host and config.hostaddr are NOT empty; their lengths must be equal.
    let num_hosts = cmp::max(config.host.len(), config.hostaddr.len());
    if config.port.len() > 1 && config.port.len() != num_hosts {
        return Err(Error::config("invalid number of ports".into()));
    }

    let mut indices = (0..num_hosts).collect::<Vec<_>>();
    if config.load_balance_hosts == LoadBalanceHosts::Random {
        indices.shuffle(&mut rand::thread_rng());
    }

    let mut error = None;
    for i in indices {
        let host = config.host.get(i);
        let hostaddr = config.hostaddr.get(i);
        let port = config
            .port
            .get(i)
            .or_else(|| config.port.first())
            .copied()
            .unwrap_or(5432);

        // The value of host is used as the hostname for TLS validation,
        let hostname = match host {
            Some(Host::Tcp(host)) => Some(host.clone()),
            // postgres doesn't support TLS over unix sockets, so the choice here doesn't matter
            #[cfg(unix)]
            Some(Host::Unix(_)) => None,
            None => None,
        };

        // Try to use the value of hostaddr to establish the TCP connection,
        // fallback to host if hostaddr is not present.
        let addr = match hostaddr {
            Some(ipaddr) => Host::Tcp(ipaddr.to_string()),
            None => host.cloned().unwrap(),
        };

        match connect_host(addr, hostname, port, &mut tls, config).await {
            Ok((client, connection)) => return Ok((client, connection)),
            Err(e) => error = Some(e),
        }
    }

    Err(error.unwrap())
}

async fn connect_host<T>(
    host: Host,
    hostname: Option<String>,
    port: u16,
    tls: &mut T,
    config: &Config,
) -> Result<(Client, Connection<Socket, T::Stream>), Error>
where
    T: MakeTlsConnect<Socket>,
{
    match host {
        Host::Tcp(host) => {
            let mut addrs = net::lookup_host((&*host, port))
                .await
                .map_err(Error::connect)?
                .collect::<Vec<_>>();

            if config.load_balance_hosts == LoadBalanceHosts::Random {
                addrs.shuffle(&mut rand::thread_rng());
            }

            let mut last_err = None;
            for addr in addrs {
                match connect_once(Addr::Tcp(addr.ip()), hostname.as_deref(), port, tls, config)
                    .await
                {
                    Ok(stream) => return Ok(stream),
                    Err(e) => {
                        last_err = Some(e);
                        continue;
                    }
                };
            }

            Err(last_err.unwrap_or_else(|| {
                Error::connect(io::Error::new(
                    io::ErrorKind::InvalidInput,
                    "could not resolve any addresses",
                ))
            }))
        }
        #[cfg(unix)]
        Host::Unix(path) => {
            connect_once(Addr::Unix(path), hostname.as_deref(), port, tls, config).await
        }
    }
}

async fn connect_once<T>(
    addr: Addr,
    hostname: Option<&str>,
    port: u16,
    tls: &mut T,
    config: &Config,
) -> Result<(Client, Connection<Socket, T::Stream>), Error>
where
    T: MakeTlsConnect<Socket>,
{
    let socket = connect_socket(
        &addr,
        port,
        config.connect_timeout,
        config.tcp_user_timeout,
        if config.keepalives {
            Some(&config.keepalive_config)
        } else {
            None
        },
    )
    .await?;

    let tls = tls
        .make_tls_connect(hostname.unwrap_or(""))
        .map_err(|e| Error::tls(e.into()))?;
    let has_hostname = hostname.is_some();
    let (mut client, mut connection) = connect_raw(socket, tls, has_hostname, config).await?;

    if config.target_session_attrs != TargetSessionAttrs::Any {
        let rows = client.simple_query_raw("SHOW transaction_read_only");
        pin_mut!(rows);

        let rows = future::poll_fn(|cx| {
            if connection.poll_unpin(cx)?.is_ready() {
                return Poll::Ready(Err(Error::closed()));
            }
rows.as_mut().poll(cx) }) .await?; pin_mut!(rows); loop { let next = future::poll_fn(|cx| { if connection.poll_unpin(cx)?.is_ready() { return Poll::Ready(Some(Err(Error::closed()))); } rows.as_mut().poll_next(cx) }); match next.await.transpose()? { Some(SimpleQueryMessage::Row(row)) => { let read_only_result = row.try_get(0)?; if read_only_result == Some("on") && config.target_session_attrs == TargetSessionAttrs::ReadWrite { return Err(Error::connect(io::Error::new( io::ErrorKind::PermissionDenied, "database does not allow writes", ))); } else if read_only_result == Some("off") && config.target_session_attrs == TargetSessionAttrs::ReadOnly { return Err(Error::connect(io::Error::new( io::ErrorKind::PermissionDenied, "database is not read only", ))); } else { break; } } Some(_) => {} None => return Err(Error::unexpected_message()), } } } client.set_socket_config(SocketConfig { addr, hostname: hostname.map(|s| s.to_string()), port, connect_timeout: config.connect_timeout, tcp_user_timeout: config.tcp_user_timeout, keepalive: if config.keepalives { Some(config.keepalive_config.clone()) } else { None }, }); Ok((client, connection)) } tokio-postgres-0.7.12/src/connect_raw.rs000064400000000000000000000267221046102023000163540ustar 00000000000000use crate::codec::{BackendMessage, BackendMessages, FrontendMessage, PostgresCodec}; use crate::config::{self, Config}; use crate::connect_tls::connect_tls; use crate::maybe_tls_stream::MaybeTlsStream; use crate::tls::{TlsConnect, TlsStream}; use crate::{Client, Connection, Error}; use bytes::BytesMut; use fallible_iterator::FallibleIterator; use futures_channel::mpsc; use futures_util::{ready, Sink, SinkExt, Stream, TryStreamExt}; use postgres_protocol::authentication; use postgres_protocol::authentication::sasl; use postgres_protocol::authentication::sasl::ScramSha256; use postgres_protocol::message::backend::{AuthenticationSaslBody, Message}; use postgres_protocol::message::frontend; use std::borrow::Cow; use 
std::collections::{HashMap, VecDeque}; use std::io; use std::pin::Pin; use std::task::{Context, Poll}; use tokio::io::{AsyncRead, AsyncWrite}; use tokio_util::codec::Framed; pub struct StartupStream { inner: Framed, PostgresCodec>, buf: BackendMessages, delayed: VecDeque, } impl Sink for StartupStream where S: AsyncRead + AsyncWrite + Unpin, T: AsyncRead + AsyncWrite + Unpin, { type Error = io::Error; fn poll_ready(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { Pin::new(&mut self.inner).poll_ready(cx) } fn start_send(mut self: Pin<&mut Self>, item: FrontendMessage) -> io::Result<()> { Pin::new(&mut self.inner).start_send(item) } fn poll_flush(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { Pin::new(&mut self.inner).poll_flush(cx) } fn poll_close(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { Pin::new(&mut self.inner).poll_close(cx) } } impl Stream for StartupStream where S: AsyncRead + AsyncWrite + Unpin, T: AsyncRead + AsyncWrite + Unpin, { type Item = io::Result; fn poll_next( mut self: Pin<&mut Self>, cx: &mut Context<'_>, ) -> Poll>> { loop { match self.buf.next() { Ok(Some(message)) => return Poll::Ready(Some(Ok(message))), Ok(None) => {} Err(e) => return Poll::Ready(Some(Err(e))), } match ready!(Pin::new(&mut self.inner).poll_next(cx)) { Some(Ok(BackendMessage::Normal { messages, .. 
})) => self.buf = messages, Some(Ok(BackendMessage::Async(message))) => return Poll::Ready(Some(Ok(message))), Some(Err(e)) => return Poll::Ready(Some(Err(e))), None => return Poll::Ready(None), } } } } pub async fn connect_raw( stream: S, tls: T, has_hostname: bool, config: &Config, ) -> Result<(Client, Connection), Error> where S: AsyncRead + AsyncWrite + Unpin, T: TlsConnect, { let stream = connect_tls(stream, config.ssl_mode, tls, has_hostname).await?; let mut stream = StartupStream { inner: Framed::new(stream, PostgresCodec), buf: BackendMessages::empty(), delayed: VecDeque::new(), }; let user = config .user .as_deref() .map_or_else(|| Cow::Owned(whoami::username()), Cow::Borrowed); startup(&mut stream, config, &user).await?; authenticate(&mut stream, config, &user).await?; let (process_id, secret_key, parameters) = read_info(&mut stream).await?; let (sender, receiver) = mpsc::unbounded(); let client = Client::new(sender, config.ssl_mode, process_id, secret_key); let connection = Connection::new(stream.inner, stream.delayed, parameters, receiver); Ok((client, connection)) } async fn startup( stream: &mut StartupStream, config: &Config, user: &str, ) -> Result<(), Error> where S: AsyncRead + AsyncWrite + Unpin, T: AsyncRead + AsyncWrite + Unpin, { let mut params = vec![("client_encoding", "UTF8")]; params.push(("user", user)); if let Some(dbname) = &config.dbname { params.push(("database", &**dbname)); } if let Some(options) = &config.options { params.push(("options", &**options)); } if let Some(application_name) = &config.application_name { params.push(("application_name", &**application_name)); } let mut buf = BytesMut::new(); frontend::startup_message(params, &mut buf).map_err(Error::encode)?; stream .send(FrontendMessage::Raw(buf.freeze())) .await .map_err(Error::io) } async fn authenticate( stream: &mut StartupStream, config: &Config, user: &str, ) -> Result<(), Error> where S: AsyncRead + AsyncWrite + Unpin, T: TlsStream + Unpin, { match 
stream.try_next().await.map_err(Error::io)? { Some(Message::AuthenticationOk) => { can_skip_channel_binding(config)?; return Ok(()); } Some(Message::AuthenticationCleartextPassword) => { can_skip_channel_binding(config)?; let pass = config .password .as_ref() .ok_or_else(|| Error::config("password missing".into()))?; authenticate_password(stream, pass).await?; } Some(Message::AuthenticationMd5Password(body)) => { can_skip_channel_binding(config)?; let pass = config .password .as_ref() .ok_or_else(|| Error::config("password missing".into()))?; let output = authentication::md5_hash(user.as_bytes(), pass, body.salt()); authenticate_password(stream, output.as_bytes()).await?; } Some(Message::AuthenticationSasl(body)) => { authenticate_sasl(stream, body, config).await?; } Some(Message::AuthenticationKerberosV5) | Some(Message::AuthenticationScmCredential) | Some(Message::AuthenticationGss) | Some(Message::AuthenticationSspi) => { return Err(Error::authentication( "unsupported authentication method".into(), )) } Some(Message::ErrorResponse(body)) => return Err(Error::db(body)), Some(_) => return Err(Error::unexpected_message()), None => return Err(Error::closed()), } match stream.try_next().await.map_err(Error::io)? 
{ Some(Message::AuthenticationOk) => Ok(()), Some(Message::ErrorResponse(body)) => Err(Error::db(body)), Some(_) => Err(Error::unexpected_message()), None => Err(Error::closed()), } } fn can_skip_channel_binding(config: &Config) -> Result<(), Error> { match config.channel_binding { config::ChannelBinding::Disable | config::ChannelBinding::Prefer => Ok(()), config::ChannelBinding::Require => Err(Error::authentication( "server did not use channel binding".into(), )), } } async fn authenticate_password( stream: &mut StartupStream, password: &[u8], ) -> Result<(), Error> where S: AsyncRead + AsyncWrite + Unpin, T: AsyncRead + AsyncWrite + Unpin, { let mut buf = BytesMut::new(); frontend::password_message(password, &mut buf).map_err(Error::encode)?; stream .send(FrontendMessage::Raw(buf.freeze())) .await .map_err(Error::io) } async fn authenticate_sasl( stream: &mut StartupStream, body: AuthenticationSaslBody, config: &Config, ) -> Result<(), Error> where S: AsyncRead + AsyncWrite + Unpin, T: TlsStream + Unpin, { let password = config .password .as_ref() .ok_or_else(|| Error::config("password missing".into()))?; let mut has_scram = false; let mut has_scram_plus = false; let mut mechanisms = body.mechanisms(); while let Some(mechanism) = mechanisms.next().map_err(Error::parse)? 
{ match mechanism { sasl::SCRAM_SHA_256 => has_scram = true, sasl::SCRAM_SHA_256_PLUS => has_scram_plus = true, _ => {} } } let channel_binding = stream .inner .get_ref() .channel_binding() .tls_server_end_point .filter(|_| config.channel_binding != config::ChannelBinding::Disable) .map(sasl::ChannelBinding::tls_server_end_point); let (channel_binding, mechanism) = if has_scram_plus { match channel_binding { Some(channel_binding) => (channel_binding, sasl::SCRAM_SHA_256_PLUS), None => (sasl::ChannelBinding::unsupported(), sasl::SCRAM_SHA_256), } } else if has_scram { match channel_binding { Some(_) => (sasl::ChannelBinding::unrequested(), sasl::SCRAM_SHA_256), None => (sasl::ChannelBinding::unsupported(), sasl::SCRAM_SHA_256), } } else { return Err(Error::authentication("unsupported SASL mechanism".into())); }; if mechanism != sasl::SCRAM_SHA_256_PLUS { can_skip_channel_binding(config)?; } let mut scram = ScramSha256::new(password, channel_binding); let mut buf = BytesMut::new(); frontend::sasl_initial_response(mechanism, scram.message(), &mut buf).map_err(Error::encode)?; stream .send(FrontendMessage::Raw(buf.freeze())) .await .map_err(Error::io)?; let body = match stream.try_next().await.map_err(Error::io)? { Some(Message::AuthenticationSaslContinue(body)) => body, Some(Message::ErrorResponse(body)) => return Err(Error::db(body)), Some(_) => return Err(Error::unexpected_message()), None => return Err(Error::closed()), }; scram .update(body.data()) .map_err(|e| Error::authentication(e.into()))?; let mut buf = BytesMut::new(); frontend::sasl_response(scram.message(), &mut buf).map_err(Error::encode)?; stream .send(FrontendMessage::Raw(buf.freeze())) .await .map_err(Error::io)?; let body = match stream.try_next().await.map_err(Error::io)? 
{ Some(Message::AuthenticationSaslFinal(body)) => body, Some(Message::ErrorResponse(body)) => return Err(Error::db(body)), Some(_) => return Err(Error::unexpected_message()), None => return Err(Error::closed()), }; scram .finish(body.data()) .map_err(|e| Error::authentication(e.into()))?; Ok(()) } async fn read_info( stream: &mut StartupStream, ) -> Result<(i32, i32, HashMap), Error> where S: AsyncRead + AsyncWrite + Unpin, T: AsyncRead + AsyncWrite + Unpin, { let mut process_id = 0; let mut secret_key = 0; let mut parameters = HashMap::new(); loop { match stream.try_next().await.map_err(Error::io)? { Some(Message::BackendKeyData(body)) => { process_id = body.process_id(); secret_key = body.secret_key(); } Some(Message::ParameterStatus(body)) => { parameters.insert( body.name().map_err(Error::parse)?.to_string(), body.value().map_err(Error::parse)?.to_string(), ); } Some(msg @ Message::NoticeResponse(_)) => { stream.delayed.push_back(BackendMessage::Async(msg)) } Some(Message::ReadyForQuery(_)) => return Ok((process_id, secret_key, parameters)), Some(Message::ErrorResponse(body)) => return Err(Error::db(body)), Some(_) => return Err(Error::unexpected_message()), None => return Err(Error::closed()), } } } tokio-postgres-0.7.12/src/connect_socket.rs000064400000000000000000000043031046102023000170420ustar 00000000000000use crate::client::Addr; use crate::keepalive::KeepaliveConfig; use crate::{Error, Socket}; use socket2::{SockRef, TcpKeepalive}; use std::future::Future; use std::io; use std::time::Duration; use tokio::net::TcpStream; #[cfg(unix)] use tokio::net::UnixStream; use tokio::time; pub(crate) async fn connect_socket( addr: &Addr, port: u16, connect_timeout: Option, #[cfg_attr(not(target_os = "linux"), allow(unused_variables))] tcp_user_timeout: Option< Duration, >, keepalive_config: Option<&KeepaliveConfig>, ) -> Result { match addr { Addr::Tcp(ip) => { let stream = connect_with_timeout(TcpStream::connect((*ip, port)), connect_timeout).await?; 
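The Unix-socket arm of `connect_socket` derives the socket file from a directory and port using the server's `.s.PGSQL.{port}` naming convention. A minimal sketch of that derivation, with `unix_socket_path` as a hypothetical helper name:

```rust
use std::path::{Path, PathBuf};

// Sketch of how a PostgreSQL Unix-domain socket path is built from a host
// directory and a port, following the `.s.PGSQL.{port}` convention.
fn unix_socket_path(dir: &Path, port: u16) -> PathBuf {
    dir.join(format!(".s.PGSQL.{}", port))
}

fn main() {
    let p = unix_socket_path(Path::new("/var/run/postgresql"), 5432);
    assert_eq!(p, PathBuf::from("/var/run/postgresql/.s.PGSQL.5432"));
}
```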
stream.set_nodelay(true).map_err(Error::connect)?;

            let sock_ref = SockRef::from(&stream);

            #[cfg(target_os = "linux")]
            {
                sock_ref
                    .set_tcp_user_timeout(tcp_user_timeout)
                    .map_err(Error::connect)?;
            }

            if let Some(keepalive_config) = keepalive_config {
                sock_ref
                    .set_tcp_keepalive(&TcpKeepalive::from(keepalive_config))
                    .map_err(Error::connect)?;
            }

            Ok(Socket::new_tcp(stream))
        }
        #[cfg(unix)]
        Addr::Unix(dir) => {
            let path = dir.join(format!(".s.PGSQL.{}", port));
            let socket = connect_with_timeout(UnixStream::connect(path), connect_timeout).await?;
            Ok(Socket::new_unix(socket))
        }
    }
}

async fn connect_with_timeout<F, T>(connect: F, timeout: Option<Duration>) -> Result<T, Error>
where
    F: Future<Output = io::Result<T>>,
{
    match timeout {
        Some(timeout) => match time::timeout(timeout, connect).await {
            Ok(Ok(socket)) => Ok(socket),
            Ok(Err(e)) => Err(Error::connect(e)),
            Err(_) => Err(Error::connect(io::Error::new(
                io::ErrorKind::TimedOut,
                "connection timed out",
            ))),
        },
        None => match connect.await {
            Ok(socket) => Ok(socket),
            Err(e) => Err(Error::connect(e)),
        },
    }
}

tokio-postgres-0.7.12/src/connect_tls.rs

use crate::config::SslMode;
use crate::maybe_tls_stream::MaybeTlsStream;
use crate::tls::private::ForcePrivateApi;
use crate::tls::TlsConnect;
use crate::Error;
use bytes::BytesMut;
use postgres_protocol::message::frontend;
use tokio::io::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt};

pub async fn connect_tls<S, T>(
    mut stream: S,
    mode: SslMode,
    tls: T,
    has_hostname: bool,
) -> Result<MaybeTlsStream<S, T::Stream>, Error>
where
    S: AsyncRead + AsyncWrite + Unpin,
    T: TlsConnect<S>,
{
    match mode {
        SslMode::Disable => return Ok(MaybeTlsStream::Raw(stream)),
        SslMode::Prefer if !tls.can_connect(ForcePrivateApi) => {
            return Ok(MaybeTlsStream::Raw(stream))
        }
        SslMode::Prefer | SslMode::Require => {}
    }

    let mut buf = BytesMut::new();
    frontend::ssl_request(&mut buf);
    stream.write_all(&buf).await.map_err(Error::io)?;

    let mut buf = [0];
    stream.read_exact(&mut buf).await.map_err(Error::io)?;

    if buf[0] !=
b'S' { if SslMode::Require == mode { return Err(Error::tls("server does not support TLS".into())); } else { return Ok(MaybeTlsStream::Raw(stream)); } } if !has_hostname { return Err(Error::tls("no hostname provided for TLS handshake".into())); } let stream = tls .connect(stream) .await .map_err(|e| Error::tls(e.into()))?; Ok(MaybeTlsStream::Tls(stream)) } tokio-postgres-0.7.12/src/connection.rs000064400000000000000000000305041046102023000162020ustar 00000000000000use crate::codec::{BackendMessage, BackendMessages, FrontendMessage, PostgresCodec}; use crate::copy_in::CopyInReceiver; use crate::error::DbError; use crate::maybe_tls_stream::MaybeTlsStream; use crate::{AsyncMessage, Error, Notification}; use bytes::BytesMut; use fallible_iterator::FallibleIterator; use futures_channel::mpsc; use futures_util::{ready, stream::FusedStream, Sink, Stream, StreamExt}; use log::{info, trace}; use postgres_protocol::message::backend::Message; use postgres_protocol::message::frontend; use std::collections::{HashMap, VecDeque}; use std::future::Future; use std::pin::Pin; use std::task::{Context, Poll}; use tokio::io::{AsyncRead, AsyncWrite}; use tokio_util::codec::Framed; pub enum RequestMessages { Single(FrontendMessage), CopyIn(CopyInReceiver), } pub struct Request { pub messages: RequestMessages, pub sender: mpsc::Sender, } pub struct Response { sender: mpsc::Sender, } #[derive(PartialEq, Debug)] enum State { Active, Terminating, Closing, } /// A connection to a PostgreSQL database. /// /// This is one half of what is returned when a new connection is established. It performs the actual IO with the /// server, and should generally be spawned off onto an executor to run in the background. /// /// `Connection` implements `Future`, and only resolves when the connection is closed, either because a fatal error has /// occurred, or because its associated `Client` has dropped and all outstanding work has completed. 
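The lifecycle described in the doc comment above — the connection half performs the I/O in the background and only finishes once its client half is dropped — can be sketched with plain standard-library channels and threads. This is illustrative only; `spawn_connection` is not part of this crate's API:

```rust
use std::sync::mpsc;
use std::thread;

// Minimal sketch of the client/connection split: the "connection" half owns
// the work loop and runs in the background, while the "client" half only
// enqueues requests over a channel.
fn spawn_connection() -> (mpsc::Sender<String>, thread::JoinHandle<Vec<String>>) {
    let (tx, rx) = mpsc::channel::<String>();
    let handle = thread::spawn(move || {
        // The loop resolves only once every client handle has been dropped,
        // mirroring how `Connection`'s future completes.
        rx.into_iter().map(|req| format!("handled: {}", req)).collect()
    });
    (tx, handle)
}

fn main() {
    let (client, connection) = spawn_connection();
    client.send("SELECT 1".to_string()).unwrap();
    drop(client); // dropping the client lets the connection finish
    let log = connection.join().unwrap();
    assert_eq!(log, vec!["handled: SELECT 1".to_string()]);
}
```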
#[must_use = "futures do nothing unless polled"] pub struct Connection { stream: Framed, PostgresCodec>, parameters: HashMap, receiver: mpsc::UnboundedReceiver, pending_request: Option, pending_responses: VecDeque, responses: VecDeque, state: State, } impl Connection where S: AsyncRead + AsyncWrite + Unpin, T: AsyncRead + AsyncWrite + Unpin, { pub(crate) fn new( stream: Framed, PostgresCodec>, pending_responses: VecDeque, parameters: HashMap, receiver: mpsc::UnboundedReceiver, ) -> Connection { Connection { stream, parameters, receiver, pending_request: None, pending_responses, responses: VecDeque::new(), state: State::Active, } } fn poll_response( &mut self, cx: &mut Context<'_>, ) -> Poll>> { if let Some(message) = self.pending_responses.pop_front() { trace!("retrying pending response"); return Poll::Ready(Some(Ok(message))); } Pin::new(&mut self.stream) .poll_next(cx) .map(|o| o.map(|r| r.map_err(Error::io))) } fn poll_read(&mut self, cx: &mut Context<'_>) -> Result, Error> { if self.state != State::Active { trace!("poll_read: done"); return Ok(None); } loop { let message = match self.poll_response(cx)? 
{ Poll::Ready(Some(message)) => message, Poll::Ready(None) => return Err(Error::closed()), Poll::Pending => { trace!("poll_read: waiting on response"); return Ok(None); } }; let (mut messages, request_complete) = match message { BackendMessage::Async(Message::NoticeResponse(body)) => { let error = DbError::parse(&mut body.fields()).map_err(Error::parse)?; return Ok(Some(AsyncMessage::Notice(error))); } BackendMessage::Async(Message::NotificationResponse(body)) => { let notification = Notification { process_id: body.process_id(), channel: body.channel().map_err(Error::parse)?.to_string(), payload: body.message().map_err(Error::parse)?.to_string(), }; return Ok(Some(AsyncMessage::Notification(notification))); } BackendMessage::Async(Message::ParameterStatus(body)) => { self.parameters.insert( body.name().map_err(Error::parse)?.to_string(), body.value().map_err(Error::parse)?.to_string(), ); continue; } BackendMessage::Async(_) => unreachable!(), BackendMessage::Normal { messages, request_complete, } => (messages, request_complete), }; let mut response = match self.responses.pop_front() { Some(response) => response, None => match messages.next().map_err(Error::parse)? 
{ Some(Message::ErrorResponse(error)) => return Err(Error::db(error)), _ => return Err(Error::unexpected_message()), }, }; match response.sender.poll_ready(cx) { Poll::Ready(Ok(())) => { let _ = response.sender.start_send(messages); if !request_complete { self.responses.push_front(response); } } Poll::Ready(Err(_)) => { // we need to keep paging through the rest of the messages even if the receiver's hung up if !request_complete { self.responses.push_front(response); } } Poll::Pending => { self.responses.push_front(response); self.pending_responses.push_back(BackendMessage::Normal { messages, request_complete, }); trace!("poll_read: waiting on sender"); return Ok(None); } } } } fn poll_request(&mut self, cx: &mut Context<'_>) -> Poll> { if let Some(messages) = self.pending_request.take() { trace!("retrying pending request"); return Poll::Ready(Some(messages)); } if self.receiver.is_terminated() { return Poll::Ready(None); } match self.receiver.poll_next_unpin(cx) { Poll::Ready(Some(request)) => { trace!("polled new request"); self.responses.push_back(Response { sender: request.sender, }); Poll::Ready(Some(request.messages)) } Poll::Ready(None) => Poll::Ready(None), Poll::Pending => Poll::Pending, } } fn poll_write(&mut self, cx: &mut Context<'_>) -> Result { loop { if self.state == State::Closing { trace!("poll_write: done"); return Ok(false); } if Pin::new(&mut self.stream) .poll_ready(cx) .map_err(Error::io)? 
.is_pending() { trace!("poll_write: waiting on socket"); return Ok(false); } let request = match self.poll_request(cx) { Poll::Ready(Some(request)) => request, Poll::Ready(None) if self.responses.is_empty() && self.state == State::Active => { trace!("poll_write: at eof, terminating"); self.state = State::Terminating; let mut request = BytesMut::new(); frontend::terminate(&mut request); RequestMessages::Single(FrontendMessage::Raw(request.freeze())) } Poll::Ready(None) => { trace!( "poll_write: at eof, pending responses {}", self.responses.len() ); return Ok(true); } Poll::Pending => { trace!("poll_write: waiting on request"); return Ok(true); } }; match request { RequestMessages::Single(request) => { Pin::new(&mut self.stream) .start_send(request) .map_err(Error::io)?; if self.state == State::Terminating { trace!("poll_write: sent eof, closing"); self.state = State::Closing; } } RequestMessages::CopyIn(mut receiver) => { let message = match receiver.poll_next_unpin(cx) { Poll::Ready(Some(message)) => message, Poll::Ready(None) => { trace!("poll_write: finished copy_in request"); continue; } Poll::Pending => { trace!("poll_write: waiting on copy_in stream"); self.pending_request = Some(RequestMessages::CopyIn(receiver)); return Ok(true); } }; Pin::new(&mut self.stream) .start_send(message) .map_err(Error::io)?; self.pending_request = Some(RequestMessages::CopyIn(receiver)); } } } } fn poll_flush(&mut self, cx: &mut Context<'_>) -> Result<(), Error> { match Pin::new(&mut self.stream) .poll_flush(cx) .map_err(Error::io)? { Poll::Ready(()) => trace!("poll_flush: flushed"), Poll::Pending => trace!("poll_flush: waiting on socket"), } Ok(()) } fn poll_shutdown(&mut self, cx: &mut Context<'_>) -> Poll> { if self.state != State::Closing { return Poll::Pending; } match Pin::new(&mut self.stream) .poll_close(cx) .map_err(Error::io)? 
{ Poll::Ready(()) => { trace!("poll_shutdown: complete"); Poll::Ready(Ok(())) } Poll::Pending => { trace!("poll_shutdown: waiting on socket"); Poll::Pending } } } /// Returns the value of a runtime parameter for this connection. pub fn parameter(&self, name: &str) -> Option<&str> { self.parameters.get(name).map(|s| &**s) } /// Polls for asynchronous messages from the server. /// /// The server can send notices as well as notifications asynchronously to the client. Applications that wish to /// examine those messages should use this method to drive the connection rather than its `Future` implementation. /// /// Return values of `None` or `Some(Err(_))` are "terminal"; callers should not invoke this method again after /// receiving one of those values. pub fn poll_message( &mut self, cx: &mut Context<'_>, ) -> Poll>> { let message = self.poll_read(cx)?; let want_flush = self.poll_write(cx)?; if want_flush { self.poll_flush(cx)?; } match message { Some(message) => Poll::Ready(Some(Ok(message))), None => match self.poll_shutdown(cx) { Poll::Ready(Ok(())) => Poll::Ready(None), Poll::Ready(Err(e)) => Poll::Ready(Some(Err(e))), Poll::Pending => Poll::Pending, }, } } } impl Future for Connection where S: AsyncRead + AsyncWrite + Unpin, T: AsyncRead + AsyncWrite + Unpin, { type Output = Result<(), Error>; fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { while let Some(message) = ready!(self.poll_message(cx)?) 
{ if let AsyncMessage::Notice(notice) = message { info!("{}: {}", notice.severity(), notice.message()); } } Poll::Ready(Ok(())) } } tokio-postgres-0.7.12/src/copy_in.rs000064400000000000000000000161211046102023000155020ustar 00000000000000use crate::client::{InnerClient, Responses}; use crate::codec::FrontendMessage; use crate::connection::RequestMessages; use crate::query::extract_row_affected; use crate::{query, slice_iter, Error, Statement}; use bytes::{Buf, BufMut, BytesMut}; use futures_channel::mpsc; use futures_util::{future, ready, Sink, SinkExt, Stream, StreamExt}; use log::debug; use pin_project_lite::pin_project; use postgres_protocol::message::backend::Message; use postgres_protocol::message::frontend; use postgres_protocol::message::frontend::CopyData; use std::marker::{PhantomData, PhantomPinned}; use std::pin::Pin; use std::task::{Context, Poll}; enum CopyInMessage { Message(FrontendMessage), Done, } pub struct CopyInReceiver { receiver: mpsc::Receiver, done: bool, } impl CopyInReceiver { fn new(receiver: mpsc::Receiver) -> CopyInReceiver { CopyInReceiver { receiver, done: false, } } } impl Stream for CopyInReceiver { type Item = FrontendMessage; fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { if self.done { return Poll::Ready(None); } match ready!(self.receiver.poll_next_unpin(cx)) { Some(CopyInMessage::Message(message)) => Poll::Ready(Some(message)), Some(CopyInMessage::Done) => { self.done = true; let mut buf = BytesMut::new(); frontend::copy_done(&mut buf); frontend::sync(&mut buf); Poll::Ready(Some(FrontendMessage::Raw(buf.freeze()))) } None => { self.done = true; let mut buf = BytesMut::new(); frontend::copy_fail("", &mut buf).unwrap(); frontend::sync(&mut buf); Poll::Ready(Some(FrontendMessage::Raw(buf.freeze()))) } } } } enum SinkState { Active, Closing, Reading, } pin_project! { /// A sink for `COPY ... FROM STDIN` query data. /// /// The copy *must* be explicitly completed via the `Sink::close` or `finish` methods. 
If it is /// not, the copy will be aborted. pub struct CopyInSink { #[pin] sender: mpsc::Sender, responses: Responses, buf: BytesMut, state: SinkState, #[pin] _p: PhantomPinned, _p2: PhantomData, } } impl CopyInSink where T: Buf + 'static + Send, { /// A poll-based version of `finish`. pub fn poll_finish(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { loop { match self.state { SinkState::Active => { ready!(self.as_mut().poll_flush(cx))?; let mut this = self.as_mut().project(); ready!(this.sender.as_mut().poll_ready(cx)).map_err(|_| Error::closed())?; this.sender .start_send(CopyInMessage::Done) .map_err(|_| Error::closed())?; *this.state = SinkState::Closing; } SinkState::Closing => { let this = self.as_mut().project(); ready!(this.sender.poll_close(cx)).map_err(|_| Error::closed())?; *this.state = SinkState::Reading; } SinkState::Reading => { let this = self.as_mut().project(); match ready!(this.responses.poll_next(cx))? { Message::CommandComplete(body) => { let rows = extract_row_affected(&body)?; return Poll::Ready(Ok(rows)); } _ => return Poll::Ready(Err(Error::unexpected_message())), } } } } } /// Completes the copy, returning the number of rows inserted. /// /// The `Sink::close` method is equivalent to `finish`, except that it does not return the /// number of rows. 
pub async fn finish(mut self: Pin<&mut Self>) -> Result { future::poll_fn(|cx| self.as_mut().poll_finish(cx)).await } } impl Sink for CopyInSink where T: Buf + 'static + Send, { type Error = Error; fn poll_ready(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { self.project() .sender .poll_ready(cx) .map_err(|_| Error::closed()) } fn start_send(self: Pin<&mut Self>, item: T) -> Result<(), Error> { let this = self.project(); let data: Box = if item.remaining() > 4096 { if this.buf.is_empty() { Box::new(item) } else { Box::new(this.buf.split().freeze().chain(item)) } } else { this.buf.put(item); if this.buf.len() > 4096 { Box::new(this.buf.split().freeze()) } else { return Ok(()); } }; let data = CopyData::new(data).map_err(Error::encode)?; this.sender .start_send(CopyInMessage::Message(FrontendMessage::CopyData(data))) .map_err(|_| Error::closed()) } fn poll_flush(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { let mut this = self.project(); if !this.buf.is_empty() { ready!(this.sender.as_mut().poll_ready(cx)).map_err(|_| Error::closed())?; let data: Box = Box::new(this.buf.split().freeze()); let data = CopyData::new(data).map_err(Error::encode)?; this.sender .as_mut() .start_send(CopyInMessage::Message(FrontendMessage::CopyData(data))) .map_err(|_| Error::closed())?; } this.sender.poll_flush(cx).map_err(|_| Error::closed()) } fn poll_close(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { self.poll_finish(cx).map_ok(|_| ()) } } pub async fn copy_in(client: &InnerClient, statement: Statement) -> Result, Error> where T: Buf + 'static + Send, { debug!("executing copy in statement {}", statement.name()); let buf = query::encode(client, &statement, slice_iter(&[]))?; let (mut sender, receiver) = mpsc::channel(1); let receiver = CopyInReceiver::new(receiver); let mut responses = client.send(RequestMessages::CopyIn(receiver))?; sender .send(CopyInMessage::Message(FrontendMessage::Raw(buf))) .await .map_err(|_| Error::closed())?; match 
responses.next().await? { Message::BindComplete => {} _ => return Err(Error::unexpected_message()), } match responses.next().await? { Message::CopyInResponse(_) => {} _ => return Err(Error::unexpected_message()), } Ok(CopyInSink { sender, responses, buf: BytesMut::new(), state: SinkState::Active, _p: PhantomPinned, _p2: PhantomData, }) } tokio-postgres-0.7.12/src/copy_out.rs000064400000000000000000000035331046102023000157060ustar 00000000000000use crate::client::{InnerClient, Responses}; use crate::codec::FrontendMessage; use crate::connection::RequestMessages; use crate::{query, slice_iter, Error, Statement}; use bytes::Bytes; use futures_util::{ready, Stream}; use log::debug; use pin_project_lite::pin_project; use postgres_protocol::message::backend::Message; use std::marker::PhantomPinned; use std::pin::Pin; use std::task::{Context, Poll}; pub async fn copy_out(client: &InnerClient, statement: Statement) -> Result { debug!("executing copy out statement {}", statement.name()); let buf = query::encode(client, &statement, slice_iter(&[]))?; let responses = start(client, buf).await?; Ok(CopyOutStream { responses, _p: PhantomPinned, }) } async fn start(client: &InnerClient, buf: Bytes) -> Result { let mut responses = client.send(RequestMessages::Single(FrontendMessage::Raw(buf)))?; match responses.next().await? { Message::BindComplete => {} _ => return Err(Error::unexpected_message()), } match responses.next().await? { Message::CopyOutResponse(_) => {} _ => return Err(Error::unexpected_message()), } Ok(responses) } pin_project! { /// A stream of `COPY ... TO STDOUT` query data. pub struct CopyOutStream { responses: Responses, #[pin] _p: PhantomPinned, } } impl Stream for CopyOutStream { type Item = Result; fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { let this = self.project(); match ready!(this.responses.poll_next(cx)?) 
{ Message::CopyData(body) => Poll::Ready(Some(Ok(body.into_bytes()))), Message::CopyDone => Poll::Ready(None), _ => Poll::Ready(Some(Err(Error::unexpected_message()))), } } } tokio-postgres-0.7.12/src/error/mod.rs000064400000000000000000000410431046102023000157530ustar 00000000000000//! Errors. use fallible_iterator::FallibleIterator; use postgres_protocol::message::backend::{ErrorFields, ErrorResponseBody}; use std::error::{self, Error as _Error}; use std::fmt; use std::io; pub use self::sqlstate::*; #[allow(clippy::unreadable_literal)] mod sqlstate; /// The severity of a Postgres error or notice. #[derive(Debug, Copy, Clone, PartialEq, Eq)] pub enum Severity { /// PANIC Panic, /// FATAL Fatal, /// ERROR Error, /// WARNING Warning, /// NOTICE Notice, /// DEBUG Debug, /// INFO Info, /// LOG Log, } impl fmt::Display for Severity { fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result { let s = match *self { Severity::Panic => "PANIC", Severity::Fatal => "FATAL", Severity::Error => "ERROR", Severity::Warning => "WARNING", Severity::Notice => "NOTICE", Severity::Debug => "DEBUG", Severity::Info => "INFO", Severity::Log => "LOG", }; fmt.write_str(s) } } impl Severity { fn from_str(s: &str) -> Option { match s { "PANIC" => Some(Severity::Panic), "FATAL" => Some(Severity::Fatal), "ERROR" => Some(Severity::Error), "WARNING" => Some(Severity::Warning), "NOTICE" => Some(Severity::Notice), "DEBUG" => Some(Severity::Debug), "INFO" => Some(Severity::Info), "LOG" => Some(Severity::Log), _ => None, } } } /// A Postgres error or notice. 
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct DbError {
    severity: String,
    parsed_severity: Option<Severity>,
    code: SqlState,
    message: String,
    detail: Option<String>,
    hint: Option<String>,
    position: Option<ErrorPosition>,
    where_: Option<String>,
    schema: Option<String>,
    table: Option<String>,
    column: Option<String>,
    datatype: Option<String>,
    constraint: Option<String>,
    file: Option<String>,
    line: Option<u32>,
    routine: Option<String>,
}

impl DbError {
    pub(crate) fn parse(fields: &mut ErrorFields<'_>) -> io::Result<DbError> {
        let mut severity = None;
        let mut parsed_severity = None;
        let mut code = None;
        let mut message = None;
        let mut detail = None;
        let mut hint = None;
        let mut normal_position = None;
        let mut internal_position = None;
        let mut internal_query = None;
        let mut where_ = None;
        let mut schema = None;
        let mut table = None;
        let mut column = None;
        let mut datatype = None;
        let mut constraint = None;
        let mut file = None;
        let mut line = None;
        let mut routine = None;

        while let Some(field) = fields.next()? {
            let value = String::from_utf8_lossy(field.value_bytes());
            match field.type_() {
                b'S' => severity = Some(value.into_owned()),
                b'C' => code = Some(SqlState::from_code(&value)),
                b'M' => message = Some(value.into_owned()),
                b'D' => detail = Some(value.into_owned()),
                b'H' => hint = Some(value.into_owned()),
                b'P' => {
                    normal_position = Some(value.parse::<u32>().map_err(|_| {
                        io::Error::new(
                            io::ErrorKind::InvalidInput,
                            "`P` field did not contain an integer",
                        )
                    })?);
                }
                b'p' => {
                    internal_position = Some(value.parse::<u32>().map_err(|_| {
                        io::Error::new(
                            io::ErrorKind::InvalidInput,
                            "`p` field did not contain an integer",
                        )
                    })?);
                }
                b'q' => internal_query = Some(value.into_owned()),
                b'W' => where_ = Some(value.into_owned()),
                b's' => schema = Some(value.into_owned()),
                b't' => table = Some(value.into_owned()),
                b'c' => column = Some(value.into_owned()),
                b'd' => datatype = Some(value.into_owned()),
                b'n' => constraint = Some(value.into_owned()),
                b'F' => file = Some(value.into_owned()),
                b'L' => {
                    line = Some(value.parse::<u32>().map_err(|_| {
                        io::Error::new(
                            io::ErrorKind::InvalidInput,
                            "`L` field did not contain an integer",
                        )
                    })?);
                }
                b'R' => routine = Some(value.into_owned()),
                b'V' => {
                    parsed_severity = Some(Severity::from_str(&value).ok_or_else(|| {
                        io::Error::new(
                            io::ErrorKind::InvalidInput,
                            "`V` field contained an invalid value",
                        )
                    })?);
                }
                _ => {}
            }
        }

        Ok(DbError {
            severity: severity
                .ok_or_else(|| io::Error::new(io::ErrorKind::InvalidInput, "`S` field missing"))?,
            parsed_severity,
            code: code
                .ok_or_else(|| io::Error::new(io::ErrorKind::InvalidInput, "`C` field missing"))?,
            message: message
                .ok_or_else(|| io::Error::new(io::ErrorKind::InvalidInput, "`M` field missing"))?,
            detail,
            hint,
            position: match normal_position {
                Some(position) => Some(ErrorPosition::Original(position)),
                None => match internal_position {
                    Some(position) => Some(ErrorPosition::Internal {
                        position,
                        query: internal_query.ok_or_else(|| {
                            io::Error::new(
                                io::ErrorKind::InvalidInput,
                                "`q` field missing but `p` field present",
                            )
                        })?,
                    }),
                    None => None,
                },
            },
            where_,
            schema,
            table,
            column,
            datatype,
            constraint,
            file,
            line,
            routine,
        })
    }

    /// The field contents are ERROR, FATAL, or PANIC (in an error message),
    /// or WARNING, NOTICE, DEBUG, INFO, or LOG (in a notice message), or a
    /// localized translation of one of these.
    pub fn severity(&self) -> &str {
        &self.severity
    }

    /// A parsed, nonlocalized version of `severity`. (PostgreSQL 9.6+)
    pub fn parsed_severity(&self) -> Option<Severity> {
        self.parsed_severity
    }

    /// The SQLSTATE code for the error.
    pub fn code(&self) -> &SqlState {
        &self.code
    }

    /// The primary human-readable error message.
    ///
    /// This should be accurate but terse (typically one line).
    pub fn message(&self) -> &str {
        &self.message
    }

    /// An optional secondary error message carrying more detail about the
    /// problem.
    ///
    /// Might run to multiple lines.
    pub fn detail(&self) -> Option<&str> {
        self.detail.as_deref()
    }

    /// An optional suggestion what to do about the problem.
    ///
    /// This is intended to differ from `detail` in that it offers advice
    /// (potentially inappropriate) rather than hard facts. Might run to
    /// multiple lines.
    pub fn hint(&self) -> Option<&str> {
        self.hint.as_deref()
    }

    /// An optional error cursor position into either the original query string
    /// or an internally generated query.
    pub fn position(&self) -> Option<&ErrorPosition> {
        self.position.as_ref()
    }

    /// An indication of the context in which the error occurred.
    ///
    /// Presently this includes a call stack traceback of active procedural
    /// language functions and internally-generated queries. The trace is one
    /// entry per line, most recent first.
    pub fn where_(&self) -> Option<&str> {
        self.where_.as_deref()
    }

    /// If the error was associated with a specific database object, the name
    /// of the schema containing that object, if any. (PostgreSQL 9.3+)
    pub fn schema(&self) -> Option<&str> {
        self.schema.as_deref()
    }

    /// If the error was associated with a specific table, the name of the
    /// table. (Refer to the schema name field for the name of the table's
    /// schema.) (PostgreSQL 9.3+)
    pub fn table(&self) -> Option<&str> {
        self.table.as_deref()
    }

    /// If the error was associated with a specific table column, the name of
    /// the column.
    ///
    /// (Refer to the schema and table name fields to identify the table.)
    /// (PostgreSQL 9.3+)
    pub fn column(&self) -> Option<&str> {
        self.column.as_deref()
    }

    /// If the error was associated with a specific data type, the name of the
    /// data type. (Refer to the schema name field for the name of the data
    /// type's schema.) (PostgreSQL 9.3+)
    pub fn datatype(&self) -> Option<&str> {
        self.datatype.as_deref()
    }

    /// If the error was associated with a specific constraint, the name of the
    /// constraint.
    ///
    /// Refer to fields listed above for the associated table or domain.
    /// (For this purpose, indexes are treated as constraints, even if they
    /// weren't created with constraint syntax.) (PostgreSQL 9.3+)
    pub fn constraint(&self) -> Option<&str> {
        self.constraint.as_deref()
    }

    /// The file name of the source-code location where the error was reported.
    pub fn file(&self) -> Option<&str> {
        self.file.as_deref()
    }

    /// The line number of the source-code location where the error was
    /// reported.
    pub fn line(&self) -> Option<u32> {
        self.line
    }

    /// The name of the source-code routine reporting the error.
    pub fn routine(&self) -> Option<&str> {
        self.routine.as_deref()
    }
}

impl fmt::Display for DbError {
    fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(fmt, "{}: {}", self.severity, self.message)?;
        if let Some(detail) = &self.detail {
            write!(fmt, "\nDETAIL: {}", detail)?;
        }
        if let Some(hint) = &self.hint {
            write!(fmt, "\nHINT: {}", hint)?;
        }
        Ok(())
    }
}

impl error::Error for DbError {}

/// Represents the position of an error in a query.
#[derive(Clone, PartialEq, Eq, Debug)]
pub enum ErrorPosition {
    /// A position in the original query.
    Original(u32),
    /// A position in an internally generated query.
    Internal {
        /// The byte position.
        position: u32,
        /// A query generated by the Postgres server.
        query: String,
    },
}

#[derive(Debug, PartialEq)]
enum Kind {
    Io,
    UnexpectedMessage,
    Tls,
    ToSql(usize),
    FromSql(usize),
    Column(String),
    Parameters(usize, usize),
    Closed,
    Db,
    Parse,
    Encode,
    Authentication,
    ConfigParse,
    Config,
    RowCount,
    #[cfg(feature = "runtime")]
    Connect,
    Timeout,
}

struct ErrorInner {
    kind: Kind,
    cause: Option<Box<dyn error::Error + Sync + Send>>,
}

/// An error communicating with the Postgres server.
pub struct Error(Box<ErrorInner>);

impl fmt::Debug for Error {
    fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt.debug_struct("Error")
            .field("kind", &self.0.kind)
            .field("cause", &self.0.cause)
            .finish()
    }
}

impl fmt::Display for Error {
    fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
        match &self.0.kind {
            Kind::Io => fmt.write_str("error communicating with the server")?,
            Kind::UnexpectedMessage => fmt.write_str("unexpected message from server")?,
            Kind::Tls => fmt.write_str("error performing TLS handshake")?,
            Kind::ToSql(idx) => write!(fmt, "error serializing parameter {}", idx)?,
            Kind::FromSql(idx) => write!(fmt, "error deserializing column {}", idx)?,
            Kind::Column(column) => write!(fmt, "invalid column `{}`", column)?,
            Kind::Parameters(real, expected) => {
                write!(fmt, "expected {expected} parameters but got {real}")?
            }
            Kind::Closed => fmt.write_str("connection closed")?,
            Kind::Db => fmt.write_str("db error")?,
            Kind::Parse => fmt.write_str("error parsing response from server")?,
            Kind::Encode => fmt.write_str("error encoding message to server")?,
            Kind::Authentication => fmt.write_str("authentication error")?,
            Kind::ConfigParse => fmt.write_str("invalid connection string")?,
            Kind::Config => fmt.write_str("invalid configuration")?,
            Kind::RowCount => fmt.write_str("query returned an unexpected number of rows")?,
            #[cfg(feature = "runtime")]
            Kind::Connect => fmt.write_str("error connecting to server")?,
            Kind::Timeout => fmt.write_str("timeout waiting for server")?,
        };
        if let Some(ref cause) = self.0.cause {
            write!(fmt, ": {}", cause)?;
        }
        Ok(())
    }
}

impl error::Error for Error {
    fn source(&self) -> Option<&(dyn error::Error + 'static)> {
        self.0.cause.as_ref().map(|e| &**e as _)
    }
}

impl Error {
    /// Consumes the error, returning its cause.
    pub fn into_source(self) -> Option<Box<dyn error::Error + Sync + Send>> {
        self.0.cause
    }

    /// Returns the source of this error if it was a `DbError`.
    ///
    /// This is a simple convenience method.
    pub fn as_db_error(&self) -> Option<&DbError> {
        self.source().and_then(|e| e.downcast_ref::<DbError>())
    }

    /// Determines if the error was associated with closed connection.
    pub fn is_closed(&self) -> bool {
        self.0.kind == Kind::Closed
    }

    /// Returns the SQLSTATE error code associated with the error.
    ///
    /// This is a convenience method that downcasts the cause to a `DbError` and returns its code.
    pub fn code(&self) -> Option<&SqlState> {
        self.as_db_error().map(DbError::code)
    }

    fn new(kind: Kind, cause: Option<Box<dyn error::Error + Sync + Send>>) -> Error {
        Error(Box::new(ErrorInner { kind, cause }))
    }

    pub(crate) fn closed() -> Error {
        Error::new(Kind::Closed, None)
    }

    pub(crate) fn unexpected_message() -> Error {
        Error::new(Kind::UnexpectedMessage, None)
    }

    #[allow(clippy::needless_pass_by_value)]
    pub(crate) fn db(error: ErrorResponseBody) -> Error {
        match DbError::parse(&mut error.fields()) {
            Ok(e) => Error::new(Kind::Db, Some(Box::new(e))),
            Err(e) => Error::new(Kind::Parse, Some(Box::new(e))),
        }
    }

    pub(crate) fn parse(e: io::Error) -> Error {
        Error::new(Kind::Parse, Some(Box::new(e)))
    }

    pub(crate) fn encode(e: io::Error) -> Error {
        Error::new(Kind::Encode, Some(Box::new(e)))
    }

    #[allow(clippy::wrong_self_convention)]
    pub(crate) fn to_sql(e: Box<dyn error::Error + Sync + Send>, idx: usize) -> Error {
        Error::new(Kind::ToSql(idx), Some(e))
    }

    pub(crate) fn from_sql(e: Box<dyn error::Error + Sync + Send>, idx: usize) -> Error {
        Error::new(Kind::FromSql(idx), Some(e))
    }

    pub(crate) fn column(column: String) -> Error {
        Error::new(Kind::Column(column), None)
    }

    pub(crate) fn parameters(real: usize, expected: usize) -> Error {
        Error::new(Kind::Parameters(real, expected), None)
    }

    pub(crate) fn tls(e: Box<dyn error::Error + Sync + Send>) -> Error {
        Error::new(Kind::Tls, Some(e))
    }

    pub(crate) fn io(e: io::Error) -> Error {
        Error::new(Kind::Io, Some(Box::new(e)))
    }

    pub(crate) fn authentication(e: Box<dyn error::Error + Sync + Send>) -> Error {
        Error::new(Kind::Authentication, Some(e))
    }

    pub(crate) fn config_parse(e: Box<dyn error::Error + Sync + Send>) -> Error {
        Error::new(Kind::ConfigParse, Some(e))
    }

    pub(crate) fn config(e: Box<dyn error::Error + Sync + Send>) -> Error {
        Error::new(Kind::Config, Some(e))
    }
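// NOTE: `Error::as_db_error` and `Error::code` above recover the concrete
// `DbError` by downcasting the type-erased boxed cause. The snippet below is a
// minimal standalone sketch of that pattern using only `std`; `SketchError` and
// `DbCause` are hypothetical stand-ins for the crate's `Error` and `DbError`,
// not part of tokio-postgres.

```rust
use std::error::Error as StdError;
use std::fmt;

// Hypothetical stand-in for DbError: just enough to downcast to.
#[derive(Debug)]
struct DbCause {
    code: String,
}

impl fmt::Display for DbCause {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "db error: {}", self.code)
    }
}

impl StdError for DbCause {}

// Mirrors the shape used above: an optional boxed, type-erased cause.
struct SketchError {
    cause: Option<Box<dyn StdError + Sync + Send>>,
}

impl SketchError {
    // Downcast the erased cause back to the concrete type, as as_db_error does.
    fn as_db_cause(&self) -> Option<&DbCause> {
        self.cause.as_deref().and_then(|e| e.downcast_ref::<DbCause>())
    }

    // Convenience accessor in the spirit of Error::code.
    fn code(&self) -> Option<&str> {
        self.as_db_cause().map(|e| e.code.as_str())
    }
}

fn main() {
    let db = SketchError {
        cause: Some(Box::new(DbCause { code: "23505".to_string() })),
    };
    assert_eq!(db.code(), Some("23505"));

    // A non-DbCause cause downcasts to None, so no code is reported.
    let io = SketchError {
        cause: Some(Box::new(std::io::Error::new(std::io::ErrorKind::Other, "boom"))),
    };
    assert_eq!(io.code(), None);
}
```

// The real `as_db_error` routes through `error::Error::source` rather than
// touching the `cause` field directly, but the downcast step is the same.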
pub(crate) fn row_count() -> Error { Error::new(Kind::RowCount, None) } #[cfg(feature = "runtime")] pub(crate) fn connect(e: io::Error) -> Error { Error::new(Kind::Connect, Some(Box::new(e))) } #[doc(hidden)] pub fn __private_api_timeout() -> Error { Error::new(Kind::Timeout, None) } } tokio-postgres-0.7.12/src/error/sqlstate.rs000064400000000000000000001503411046102023000170360ustar 00000000000000// Autogenerated file - DO NOT EDIT /// A SQLSTATE error code #[derive(PartialEq, Eq, Clone, Debug)] pub struct SqlState(Inner); impl SqlState { /// Creates a `SqlState` from its error code. pub fn from_code(s: &str) -> SqlState { match SQLSTATE_MAP.get(s) { Some(state) => state.clone(), None => SqlState(Inner::Other(s.into())), } } /// Returns the error code corresponding to the `SqlState`. pub fn code(&self) -> &str { match &self.0 { Inner::E00000 => "00000", Inner::E01000 => "01000", Inner::E0100C => "0100C", Inner::E01008 => "01008", Inner::E01003 => "01003", Inner::E01007 => "01007", Inner::E01006 => "01006", Inner::E01004 => "01004", Inner::E01P01 => "01P01", Inner::E02000 => "02000", Inner::E02001 => "02001", Inner::E03000 => "03000", Inner::E08000 => "08000", Inner::E08003 => "08003", Inner::E08006 => "08006", Inner::E08001 => "08001", Inner::E08004 => "08004", Inner::E08007 => "08007", Inner::E08P01 => "08P01", Inner::E09000 => "09000", Inner::E0A000 => "0A000", Inner::E0B000 => "0B000", Inner::E0F000 => "0F000", Inner::E0F001 => "0F001", Inner::E0L000 => "0L000", Inner::E0LP01 => "0LP01", Inner::E0P000 => "0P000", Inner::E0Z000 => "0Z000", Inner::E0Z002 => "0Z002", Inner::E20000 => "20000", Inner::E21000 => "21000", Inner::E22000 => "22000", Inner::E2202E => "2202E", Inner::E22021 => "22021", Inner::E22008 => "22008", Inner::E22012 => "22012", Inner::E22005 => "22005", Inner::E2200B => "2200B", Inner::E22022 => "22022", Inner::E22015 => "22015", Inner::E2201E => "2201E", Inner::E22014 => "22014", Inner::E22016 => "22016", Inner::E2201F => "2201F", Inner::E2201G 
=> "2201G", Inner::E22018 => "22018", Inner::E22007 => "22007", Inner::E22019 => "22019", Inner::E2200D => "2200D", Inner::E22025 => "22025", Inner::E22P06 => "22P06", Inner::E22010 => "22010", Inner::E22023 => "22023", Inner::E22013 => "22013", Inner::E2201B => "2201B", Inner::E2201W => "2201W", Inner::E2201X => "2201X", Inner::E2202H => "2202H", Inner::E2202G => "2202G", Inner::E22009 => "22009", Inner::E2200C => "2200C", Inner::E2200G => "2200G", Inner::E22004 => "22004", Inner::E22002 => "22002", Inner::E22003 => "22003", Inner::E2200H => "2200H", Inner::E22026 => "22026", Inner::E22001 => "22001", Inner::E22011 => "22011", Inner::E22027 => "22027", Inner::E22024 => "22024", Inner::E2200F => "2200F", Inner::E22P01 => "22P01", Inner::E22P02 => "22P02", Inner::E22P03 => "22P03", Inner::E22P04 => "22P04", Inner::E22P05 => "22P05", Inner::E2200L => "2200L", Inner::E2200M => "2200M", Inner::E2200N => "2200N", Inner::E2200S => "2200S", Inner::E2200T => "2200T", Inner::E22030 => "22030", Inner::E22031 => "22031", Inner::E22032 => "22032", Inner::E22033 => "22033", Inner::E22034 => "22034", Inner::E22035 => "22035", Inner::E22036 => "22036", Inner::E22037 => "22037", Inner::E22038 => "22038", Inner::E22039 => "22039", Inner::E2203A => "2203A", Inner::E2203B => "2203B", Inner::E2203C => "2203C", Inner::E2203D => "2203D", Inner::E2203E => "2203E", Inner::E2203F => "2203F", Inner::E2203G => "2203G", Inner::E23000 => "23000", Inner::E23001 => "23001", Inner::E23502 => "23502", Inner::E23503 => "23503", Inner::E23505 => "23505", Inner::E23514 => "23514", Inner::E23P01 => "23P01", Inner::E24000 => "24000", Inner::E25000 => "25000", Inner::E25001 => "25001", Inner::E25002 => "25002", Inner::E25008 => "25008", Inner::E25003 => "25003", Inner::E25004 => "25004", Inner::E25005 => "25005", Inner::E25006 => "25006", Inner::E25007 => "25007", Inner::E25P01 => "25P01", Inner::E25P02 => "25P02", Inner::E25P03 => "25P03", Inner::E26000 => "26000", Inner::E27000 => "27000", 
Inner::E28000 => "28000", Inner::E28P01 => "28P01", Inner::E2B000 => "2B000", Inner::E2BP01 => "2BP01", Inner::E2D000 => "2D000", Inner::E2F000 => "2F000", Inner::E2F005 => "2F005", Inner::E2F002 => "2F002", Inner::E2F003 => "2F003", Inner::E2F004 => "2F004", Inner::E34000 => "34000", Inner::E38000 => "38000", Inner::E38001 => "38001", Inner::E38002 => "38002", Inner::E38003 => "38003", Inner::E38004 => "38004", Inner::E39000 => "39000", Inner::E39001 => "39001", Inner::E39004 => "39004", Inner::E39P01 => "39P01", Inner::E39P02 => "39P02", Inner::E39P03 => "39P03", Inner::E3B000 => "3B000", Inner::E3B001 => "3B001", Inner::E3D000 => "3D000", Inner::E3F000 => "3F000", Inner::E40000 => "40000", Inner::E40002 => "40002", Inner::E40001 => "40001", Inner::E40003 => "40003", Inner::E40P01 => "40P01", Inner::E42000 => "42000", Inner::E42601 => "42601", Inner::E42501 => "42501", Inner::E42846 => "42846", Inner::E42803 => "42803", Inner::E42P20 => "42P20", Inner::E42P19 => "42P19", Inner::E42830 => "42830", Inner::E42602 => "42602", Inner::E42622 => "42622", Inner::E42939 => "42939", Inner::E42804 => "42804", Inner::E42P18 => "42P18", Inner::E42P21 => "42P21", Inner::E42P22 => "42P22", Inner::E42809 => "42809", Inner::E428C9 => "428C9", Inner::E42703 => "42703", Inner::E42883 => "42883", Inner::E42P01 => "42P01", Inner::E42P02 => "42P02", Inner::E42704 => "42704", Inner::E42701 => "42701", Inner::E42P03 => "42P03", Inner::E42P04 => "42P04", Inner::E42723 => "42723", Inner::E42P05 => "42P05", Inner::E42P06 => "42P06", Inner::E42P07 => "42P07", Inner::E42712 => "42712", Inner::E42710 => "42710", Inner::E42702 => "42702", Inner::E42725 => "42725", Inner::E42P08 => "42P08", Inner::E42P09 => "42P09", Inner::E42P10 => "42P10", Inner::E42611 => "42611", Inner::E42P11 => "42P11", Inner::E42P12 => "42P12", Inner::E42P13 => "42P13", Inner::E42P14 => "42P14", Inner::E42P15 => "42P15", Inner::E42P16 => "42P16", Inner::E42P17 => "42P17", Inner::E44000 => "44000", Inner::E53000 => 
"53000", Inner::E53100 => "53100", Inner::E53200 => "53200", Inner::E53300 => "53300", Inner::E53400 => "53400", Inner::E54000 => "54000", Inner::E54001 => "54001", Inner::E54011 => "54011", Inner::E54023 => "54023", Inner::E55000 => "55000", Inner::E55006 => "55006", Inner::E55P02 => "55P02", Inner::E55P03 => "55P03", Inner::E55P04 => "55P04", Inner::E57000 => "57000", Inner::E57014 => "57014", Inner::E57P01 => "57P01", Inner::E57P02 => "57P02", Inner::E57P03 => "57P03", Inner::E57P04 => "57P04", Inner::E57P05 => "57P05", Inner::E58000 => "58000", Inner::E58030 => "58030", Inner::E58P01 => "58P01", Inner::E58P02 => "58P02", Inner::E72000 => "72000", Inner::EF0000 => "F0000", Inner::EF0001 => "F0001", Inner::EHV000 => "HV000", Inner::EHV005 => "HV005", Inner::EHV002 => "HV002", Inner::EHV010 => "HV010", Inner::EHV021 => "HV021", Inner::EHV024 => "HV024", Inner::EHV007 => "HV007", Inner::EHV008 => "HV008", Inner::EHV004 => "HV004", Inner::EHV006 => "HV006", Inner::EHV091 => "HV091", Inner::EHV00B => "HV00B", Inner::EHV00C => "HV00C", Inner::EHV00D => "HV00D", Inner::EHV090 => "HV090", Inner::EHV00A => "HV00A", Inner::EHV009 => "HV009", Inner::EHV014 => "HV014", Inner::EHV001 => "HV001", Inner::EHV00P => "HV00P", Inner::EHV00J => "HV00J", Inner::EHV00K => "HV00K", Inner::EHV00Q => "HV00Q", Inner::EHV00R => "HV00R", Inner::EHV00L => "HV00L", Inner::EHV00M => "HV00M", Inner::EHV00N => "HV00N", Inner::EP0000 => "P0000", Inner::EP0001 => "P0001", Inner::EP0002 => "P0002", Inner::EP0003 => "P0003", Inner::EP0004 => "P0004", Inner::EXX000 => "XX000", Inner::EXX001 => "XX001", Inner::EXX002 => "XX002", Inner::Other(code) => code, } } /// 00000 pub const SUCCESSFUL_COMPLETION: SqlState = SqlState(Inner::E00000); /// 01000 pub const WARNING: SqlState = SqlState(Inner::E01000); /// 0100C pub const WARNING_DYNAMIC_RESULT_SETS_RETURNED: SqlState = SqlState(Inner::E0100C); /// 01008 pub const WARNING_IMPLICIT_ZERO_BIT_PADDING: SqlState = SqlState(Inner::E01008); /// 01003 pub 
const WARNING_NULL_VALUE_ELIMINATED_IN_SET_FUNCTION: SqlState = SqlState(Inner::E01003); /// 01007 pub const WARNING_PRIVILEGE_NOT_GRANTED: SqlState = SqlState(Inner::E01007); /// 01006 pub const WARNING_PRIVILEGE_NOT_REVOKED: SqlState = SqlState(Inner::E01006); /// 01004 pub const WARNING_STRING_DATA_RIGHT_TRUNCATION: SqlState = SqlState(Inner::E01004); /// 01P01 pub const WARNING_DEPRECATED_FEATURE: SqlState = SqlState(Inner::E01P01); /// 02000 pub const NO_DATA: SqlState = SqlState(Inner::E02000); /// 02001 pub const NO_ADDITIONAL_DYNAMIC_RESULT_SETS_RETURNED: SqlState = SqlState(Inner::E02001); /// 03000 pub const SQL_STATEMENT_NOT_YET_COMPLETE: SqlState = SqlState(Inner::E03000); /// 08000 pub const CONNECTION_EXCEPTION: SqlState = SqlState(Inner::E08000); /// 08003 pub const CONNECTION_DOES_NOT_EXIST: SqlState = SqlState(Inner::E08003); /// 08006 pub const CONNECTION_FAILURE: SqlState = SqlState(Inner::E08006); /// 08001 pub const SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION: SqlState = SqlState(Inner::E08001); /// 08004 pub const SQLSERVER_REJECTED_ESTABLISHMENT_OF_SQLCONNECTION: SqlState = SqlState(Inner::E08004); /// 08007 pub const TRANSACTION_RESOLUTION_UNKNOWN: SqlState = SqlState(Inner::E08007); /// 08P01 pub const PROTOCOL_VIOLATION: SqlState = SqlState(Inner::E08P01); /// 09000 pub const TRIGGERED_ACTION_EXCEPTION: SqlState = SqlState(Inner::E09000); /// 0A000 pub const FEATURE_NOT_SUPPORTED: SqlState = SqlState(Inner::E0A000); /// 0B000 pub const INVALID_TRANSACTION_INITIATION: SqlState = SqlState(Inner::E0B000); /// 0F000 pub const LOCATOR_EXCEPTION: SqlState = SqlState(Inner::E0F000); /// 0F001 pub const L_E_INVALID_SPECIFICATION: SqlState = SqlState(Inner::E0F001); /// 0L000 pub const INVALID_GRANTOR: SqlState = SqlState(Inner::E0L000); /// 0LP01 pub const INVALID_GRANT_OPERATION: SqlState = SqlState(Inner::E0LP01); /// 0P000 pub const INVALID_ROLE_SPECIFICATION: SqlState = SqlState(Inner::E0P000); /// 0Z000 pub const DIAGNOSTICS_EXCEPTION: 
SqlState = SqlState(Inner::E0Z000); /// 0Z002 pub const STACKED_DIAGNOSTICS_ACCESSED_WITHOUT_ACTIVE_HANDLER: SqlState = SqlState(Inner::E0Z002); /// 20000 pub const CASE_NOT_FOUND: SqlState = SqlState(Inner::E20000); /// 21000 pub const CARDINALITY_VIOLATION: SqlState = SqlState(Inner::E21000); /// 22000 pub const DATA_EXCEPTION: SqlState = SqlState(Inner::E22000); /// 2202E pub const ARRAY_ELEMENT_ERROR: SqlState = SqlState(Inner::E2202E); /// 2202E pub const ARRAY_SUBSCRIPT_ERROR: SqlState = SqlState(Inner::E2202E); /// 22021 pub const CHARACTER_NOT_IN_REPERTOIRE: SqlState = SqlState(Inner::E22021); /// 22008 pub const DATETIME_FIELD_OVERFLOW: SqlState = SqlState(Inner::E22008); /// 22008 pub const DATETIME_VALUE_OUT_OF_RANGE: SqlState = SqlState(Inner::E22008); /// 22012 pub const DIVISION_BY_ZERO: SqlState = SqlState(Inner::E22012); /// 22005 pub const ERROR_IN_ASSIGNMENT: SqlState = SqlState(Inner::E22005); /// 2200B pub const ESCAPE_CHARACTER_CONFLICT: SqlState = SqlState(Inner::E2200B); /// 22022 pub const INDICATOR_OVERFLOW: SqlState = SqlState(Inner::E22022); /// 22015 pub const INTERVAL_FIELD_OVERFLOW: SqlState = SqlState(Inner::E22015); /// 2201E pub const INVALID_ARGUMENT_FOR_LOG: SqlState = SqlState(Inner::E2201E); /// 22014 pub const INVALID_ARGUMENT_FOR_NTILE: SqlState = SqlState(Inner::E22014); /// 22016 pub const INVALID_ARGUMENT_FOR_NTH_VALUE: SqlState = SqlState(Inner::E22016); /// 2201F pub const INVALID_ARGUMENT_FOR_POWER_FUNCTION: SqlState = SqlState(Inner::E2201F); /// 2201G pub const INVALID_ARGUMENT_FOR_WIDTH_BUCKET_FUNCTION: SqlState = SqlState(Inner::E2201G); /// 22018 pub const INVALID_CHARACTER_VALUE_FOR_CAST: SqlState = SqlState(Inner::E22018); /// 22007 pub const INVALID_DATETIME_FORMAT: SqlState = SqlState(Inner::E22007); /// 22019 pub const INVALID_ESCAPE_CHARACTER: SqlState = SqlState(Inner::E22019); /// 2200D pub const INVALID_ESCAPE_OCTET: SqlState = SqlState(Inner::E2200D); /// 22025 pub const INVALID_ESCAPE_SEQUENCE: SqlState = 
SqlState(Inner::E22025); /// 22P06 pub const NONSTANDARD_USE_OF_ESCAPE_CHARACTER: SqlState = SqlState(Inner::E22P06); /// 22010 pub const INVALID_INDICATOR_PARAMETER_VALUE: SqlState = SqlState(Inner::E22010); /// 22023 pub const INVALID_PARAMETER_VALUE: SqlState = SqlState(Inner::E22023); /// 22013 pub const INVALID_PRECEDING_OR_FOLLOWING_SIZE: SqlState = SqlState(Inner::E22013); /// 2201B pub const INVALID_REGULAR_EXPRESSION: SqlState = SqlState(Inner::E2201B); /// 2201W pub const INVALID_ROW_COUNT_IN_LIMIT_CLAUSE: SqlState = SqlState(Inner::E2201W); /// 2201X pub const INVALID_ROW_COUNT_IN_RESULT_OFFSET_CLAUSE: SqlState = SqlState(Inner::E2201X); /// 2202H pub const INVALID_TABLESAMPLE_ARGUMENT: SqlState = SqlState(Inner::E2202H); /// 2202G pub const INVALID_TABLESAMPLE_REPEAT: SqlState = SqlState(Inner::E2202G); /// 22009 pub const INVALID_TIME_ZONE_DISPLACEMENT_VALUE: SqlState = SqlState(Inner::E22009); /// 2200C pub const INVALID_USE_OF_ESCAPE_CHARACTER: SqlState = SqlState(Inner::E2200C); /// 2200G pub const MOST_SPECIFIC_TYPE_MISMATCH: SqlState = SqlState(Inner::E2200G); /// 22004 pub const NULL_VALUE_NOT_ALLOWED: SqlState = SqlState(Inner::E22004); /// 22002 pub const NULL_VALUE_NO_INDICATOR_PARAMETER: SqlState = SqlState(Inner::E22002); /// 22003 pub const NUMERIC_VALUE_OUT_OF_RANGE: SqlState = SqlState(Inner::E22003); /// 2200H pub const SEQUENCE_GENERATOR_LIMIT_EXCEEDED: SqlState = SqlState(Inner::E2200H); /// 22026 pub const STRING_DATA_LENGTH_MISMATCH: SqlState = SqlState(Inner::E22026); /// 22001 pub const STRING_DATA_RIGHT_TRUNCATION: SqlState = SqlState(Inner::E22001); /// 22011 pub const SUBSTRING_ERROR: SqlState = SqlState(Inner::E22011); /// 22027 pub const TRIM_ERROR: SqlState = SqlState(Inner::E22027); /// 22024 pub const UNTERMINATED_C_STRING: SqlState = SqlState(Inner::E22024); /// 2200F pub const ZERO_LENGTH_CHARACTER_STRING: SqlState = SqlState(Inner::E2200F); /// 22P01 pub const FLOATING_POINT_EXCEPTION: SqlState = SqlState(Inner::E22P01); 
/// 22P02 pub const INVALID_TEXT_REPRESENTATION: SqlState = SqlState(Inner::E22P02); /// 22P03 pub const INVALID_BINARY_REPRESENTATION: SqlState = SqlState(Inner::E22P03); /// 22P04 pub const BAD_COPY_FILE_FORMAT: SqlState = SqlState(Inner::E22P04); /// 22P05 pub const UNTRANSLATABLE_CHARACTER: SqlState = SqlState(Inner::E22P05); /// 2200L pub const NOT_AN_XML_DOCUMENT: SqlState = SqlState(Inner::E2200L); /// 2200M pub const INVALID_XML_DOCUMENT: SqlState = SqlState(Inner::E2200M); /// 2200N pub const INVALID_XML_CONTENT: SqlState = SqlState(Inner::E2200N); /// 2200S pub const INVALID_XML_COMMENT: SqlState = SqlState(Inner::E2200S); /// 2200T pub const INVALID_XML_PROCESSING_INSTRUCTION: SqlState = SqlState(Inner::E2200T); /// 22030 pub const DUPLICATE_JSON_OBJECT_KEY_VALUE: SqlState = SqlState(Inner::E22030); /// 22031 pub const INVALID_ARGUMENT_FOR_SQL_JSON_DATETIME_FUNCTION: SqlState = SqlState(Inner::E22031); /// 22032 pub const INVALID_JSON_TEXT: SqlState = SqlState(Inner::E22032); /// 22033 pub const INVALID_SQL_JSON_SUBSCRIPT: SqlState = SqlState(Inner::E22033); /// 22034 pub const MORE_THAN_ONE_SQL_JSON_ITEM: SqlState = SqlState(Inner::E22034); /// 22035 pub const NO_SQL_JSON_ITEM: SqlState = SqlState(Inner::E22035); /// 22036 pub const NON_NUMERIC_SQL_JSON_ITEM: SqlState = SqlState(Inner::E22036); /// 22037 pub const NON_UNIQUE_KEYS_IN_A_JSON_OBJECT: SqlState = SqlState(Inner::E22037); /// 22038 pub const SINGLETON_SQL_JSON_ITEM_REQUIRED: SqlState = SqlState(Inner::E22038); /// 22039 pub const SQL_JSON_ARRAY_NOT_FOUND: SqlState = SqlState(Inner::E22039); /// 2203A pub const SQL_JSON_MEMBER_NOT_FOUND: SqlState = SqlState(Inner::E2203A); /// 2203B pub const SQL_JSON_NUMBER_NOT_FOUND: SqlState = SqlState(Inner::E2203B); /// 2203C pub const SQL_JSON_OBJECT_NOT_FOUND: SqlState = SqlState(Inner::E2203C); /// 2203D pub const TOO_MANY_JSON_ARRAY_ELEMENTS: SqlState = SqlState(Inner::E2203D); /// 2203E pub const TOO_MANY_JSON_OBJECT_MEMBERS: SqlState = 
SqlState(Inner::E2203E); /// 2203F pub const SQL_JSON_SCALAR_REQUIRED: SqlState = SqlState(Inner::E2203F); /// 2203G pub const SQL_JSON_ITEM_CANNOT_BE_CAST_TO_TARGET_TYPE: SqlState = SqlState(Inner::E2203G); /// 23000 pub const INTEGRITY_CONSTRAINT_VIOLATION: SqlState = SqlState(Inner::E23000); /// 23001 pub const RESTRICT_VIOLATION: SqlState = SqlState(Inner::E23001); /// 23502 pub const NOT_NULL_VIOLATION: SqlState = SqlState(Inner::E23502); /// 23503 pub const FOREIGN_KEY_VIOLATION: SqlState = SqlState(Inner::E23503); /// 23505 pub const UNIQUE_VIOLATION: SqlState = SqlState(Inner::E23505); /// 23514 pub const CHECK_VIOLATION: SqlState = SqlState(Inner::E23514); /// 23P01 pub const EXCLUSION_VIOLATION: SqlState = SqlState(Inner::E23P01); /// 24000 pub const INVALID_CURSOR_STATE: SqlState = SqlState(Inner::E24000); /// 25000 pub const INVALID_TRANSACTION_STATE: SqlState = SqlState(Inner::E25000); /// 25001 pub const ACTIVE_SQL_TRANSACTION: SqlState = SqlState(Inner::E25001); /// 25002 pub const BRANCH_TRANSACTION_ALREADY_ACTIVE: SqlState = SqlState(Inner::E25002); /// 25008 pub const HELD_CURSOR_REQUIRES_SAME_ISOLATION_LEVEL: SqlState = SqlState(Inner::E25008); /// 25003 pub const INAPPROPRIATE_ACCESS_MODE_FOR_BRANCH_TRANSACTION: SqlState = SqlState(Inner::E25003); /// 25004 pub const INAPPROPRIATE_ISOLATION_LEVEL_FOR_BRANCH_TRANSACTION: SqlState = SqlState(Inner::E25004); /// 25005 pub const NO_ACTIVE_SQL_TRANSACTION_FOR_BRANCH_TRANSACTION: SqlState = SqlState(Inner::E25005); /// 25006 pub const READ_ONLY_SQL_TRANSACTION: SqlState = SqlState(Inner::E25006); /// 25007 pub const SCHEMA_AND_DATA_STATEMENT_MIXING_NOT_SUPPORTED: SqlState = SqlState(Inner::E25007); /// 25P01 pub const NO_ACTIVE_SQL_TRANSACTION: SqlState = SqlState(Inner::E25P01); /// 25P02 pub const IN_FAILED_SQL_TRANSACTION: SqlState = SqlState(Inner::E25P02); /// 25P03 pub const IDLE_IN_TRANSACTION_SESSION_TIMEOUT: SqlState = SqlState(Inner::E25P03); /// 26000 pub const INVALID_SQL_STATEMENT_NAME: 
SqlState = SqlState(Inner::E26000); /// 26000 pub const UNDEFINED_PSTATEMENT: SqlState = SqlState(Inner::E26000); /// 27000 pub const TRIGGERED_DATA_CHANGE_VIOLATION: SqlState = SqlState(Inner::E27000); /// 28000 pub const INVALID_AUTHORIZATION_SPECIFICATION: SqlState = SqlState(Inner::E28000); /// 28P01 pub const INVALID_PASSWORD: SqlState = SqlState(Inner::E28P01); /// 2B000 pub const DEPENDENT_PRIVILEGE_DESCRIPTORS_STILL_EXIST: SqlState = SqlState(Inner::E2B000); /// 2BP01 pub const DEPENDENT_OBJECTS_STILL_EXIST: SqlState = SqlState(Inner::E2BP01); /// 2D000 pub const INVALID_TRANSACTION_TERMINATION: SqlState = SqlState(Inner::E2D000); /// 2F000 pub const SQL_ROUTINE_EXCEPTION: SqlState = SqlState(Inner::E2F000); /// 2F005 pub const S_R_E_FUNCTION_EXECUTED_NO_RETURN_STATEMENT: SqlState = SqlState(Inner::E2F005); /// 2F002 pub const S_R_E_MODIFYING_SQL_DATA_NOT_PERMITTED: SqlState = SqlState(Inner::E2F002); /// 2F003 pub const S_R_E_PROHIBITED_SQL_STATEMENT_ATTEMPTED: SqlState = SqlState(Inner::E2F003); /// 2F004 pub const S_R_E_READING_SQL_DATA_NOT_PERMITTED: SqlState = SqlState(Inner::E2F004); /// 34000 pub const INVALID_CURSOR_NAME: SqlState = SqlState(Inner::E34000); /// 34000 pub const UNDEFINED_CURSOR: SqlState = SqlState(Inner::E34000); /// 38000 pub const EXTERNAL_ROUTINE_EXCEPTION: SqlState = SqlState(Inner::E38000); /// 38001 pub const E_R_E_CONTAINING_SQL_NOT_PERMITTED: SqlState = SqlState(Inner::E38001); /// 38002 pub const E_R_E_MODIFYING_SQL_DATA_NOT_PERMITTED: SqlState = SqlState(Inner::E38002); /// 38003 pub const E_R_E_PROHIBITED_SQL_STATEMENT_ATTEMPTED: SqlState = SqlState(Inner::E38003); /// 38004 pub const E_R_E_READING_SQL_DATA_NOT_PERMITTED: SqlState = SqlState(Inner::E38004); /// 39000 pub const EXTERNAL_ROUTINE_INVOCATION_EXCEPTION: SqlState = SqlState(Inner::E39000); /// 39001 pub const E_R_I_E_INVALID_SQLSTATE_RETURNED: SqlState = SqlState(Inner::E39001); /// 39004 pub const E_R_I_E_NULL_VALUE_NOT_ALLOWED: SqlState = 
SqlState(Inner::E39004); /// 39P01 pub const E_R_I_E_TRIGGER_PROTOCOL_VIOLATED: SqlState = SqlState(Inner::E39P01); /// 39P02 pub const E_R_I_E_SRF_PROTOCOL_VIOLATED: SqlState = SqlState(Inner::E39P02); /// 39P03 pub const E_R_I_E_EVENT_TRIGGER_PROTOCOL_VIOLATED: SqlState = SqlState(Inner::E39P03); /// 3B000 pub const SAVEPOINT_EXCEPTION: SqlState = SqlState(Inner::E3B000); /// 3B001 pub const S_E_INVALID_SPECIFICATION: SqlState = SqlState(Inner::E3B001); /// 3D000 pub const INVALID_CATALOG_NAME: SqlState = SqlState(Inner::E3D000); /// 3D000 pub const UNDEFINED_DATABASE: SqlState = SqlState(Inner::E3D000); /// 3F000 pub const INVALID_SCHEMA_NAME: SqlState = SqlState(Inner::E3F000); /// 3F000 pub const UNDEFINED_SCHEMA: SqlState = SqlState(Inner::E3F000); /// 40000 pub const TRANSACTION_ROLLBACK: SqlState = SqlState(Inner::E40000); /// 40002 pub const T_R_INTEGRITY_CONSTRAINT_VIOLATION: SqlState = SqlState(Inner::E40002); /// 40001 pub const T_R_SERIALIZATION_FAILURE: SqlState = SqlState(Inner::E40001); /// 40003 pub const T_R_STATEMENT_COMPLETION_UNKNOWN: SqlState = SqlState(Inner::E40003); /// 40P01 pub const T_R_DEADLOCK_DETECTED: SqlState = SqlState(Inner::E40P01); /// 42000 pub const SYNTAX_ERROR_OR_ACCESS_RULE_VIOLATION: SqlState = SqlState(Inner::E42000); /// 42601 pub const SYNTAX_ERROR: SqlState = SqlState(Inner::E42601); /// 42501 pub const INSUFFICIENT_PRIVILEGE: SqlState = SqlState(Inner::E42501); /// 42846 pub const CANNOT_COERCE: SqlState = SqlState(Inner::E42846); /// 42803 pub const GROUPING_ERROR: SqlState = SqlState(Inner::E42803); /// 42P20 pub const WINDOWING_ERROR: SqlState = SqlState(Inner::E42P20); /// 42P19 pub const INVALID_RECURSION: SqlState = SqlState(Inner::E42P19); /// 42830 pub const INVALID_FOREIGN_KEY: SqlState = SqlState(Inner::E42830); /// 42602 pub const INVALID_NAME: SqlState = SqlState(Inner::E42602); /// 42622 pub const NAME_TOO_LONG: SqlState = SqlState(Inner::E42622); /// 42939 pub const RESERVED_NAME: SqlState = 
SqlState(Inner::E42939); /// 42804 pub const DATATYPE_MISMATCH: SqlState = SqlState(Inner::E42804); /// 42P18 pub const INDETERMINATE_DATATYPE: SqlState = SqlState(Inner::E42P18); /// 42P21 pub const COLLATION_MISMATCH: SqlState = SqlState(Inner::E42P21); /// 42P22 pub const INDETERMINATE_COLLATION: SqlState = SqlState(Inner::E42P22); /// 42809 pub const WRONG_OBJECT_TYPE: SqlState = SqlState(Inner::E42809); /// 428C9 pub const GENERATED_ALWAYS: SqlState = SqlState(Inner::E428C9); /// 42703 pub const UNDEFINED_COLUMN: SqlState = SqlState(Inner::E42703); /// 42883 pub const UNDEFINED_FUNCTION: SqlState = SqlState(Inner::E42883); /// 42P01 pub const UNDEFINED_TABLE: SqlState = SqlState(Inner::E42P01); /// 42P02 pub const UNDEFINED_PARAMETER: SqlState = SqlState(Inner::E42P02); /// 42704 pub const UNDEFINED_OBJECT: SqlState = SqlState(Inner::E42704); /// 42701 pub const DUPLICATE_COLUMN: SqlState = SqlState(Inner::E42701); /// 42P03 pub const DUPLICATE_CURSOR: SqlState = SqlState(Inner::E42P03); /// 42P04 pub const DUPLICATE_DATABASE: SqlState = SqlState(Inner::E42P04); /// 42723 pub const DUPLICATE_FUNCTION: SqlState = SqlState(Inner::E42723); /// 42P05 pub const DUPLICATE_PSTATEMENT: SqlState = SqlState(Inner::E42P05); /// 42P06 pub const DUPLICATE_SCHEMA: SqlState = SqlState(Inner::E42P06); /// 42P07 pub const DUPLICATE_TABLE: SqlState = SqlState(Inner::E42P07); /// 42712 pub const DUPLICATE_ALIAS: SqlState = SqlState(Inner::E42712); /// 42710 pub const DUPLICATE_OBJECT: SqlState = SqlState(Inner::E42710); /// 42702 pub const AMBIGUOUS_COLUMN: SqlState = SqlState(Inner::E42702); /// 42725 pub const AMBIGUOUS_FUNCTION: SqlState = SqlState(Inner::E42725); /// 42P08 pub const AMBIGUOUS_PARAMETER: SqlState = SqlState(Inner::E42P08); /// 42P09 pub const AMBIGUOUS_ALIAS: SqlState = SqlState(Inner::E42P09); /// 42P10 pub const INVALID_COLUMN_REFERENCE: SqlState = SqlState(Inner::E42P10); /// 42611 pub const INVALID_COLUMN_DEFINITION: SqlState = SqlState(Inner::E42611); 
/// 42P11 pub const INVALID_CURSOR_DEFINITION: SqlState = SqlState(Inner::E42P11); /// 42P12 pub const INVALID_DATABASE_DEFINITION: SqlState = SqlState(Inner::E42P12); /// 42P13 pub const INVALID_FUNCTION_DEFINITION: SqlState = SqlState(Inner::E42P13); /// 42P14 pub const INVALID_PSTATEMENT_DEFINITION: SqlState = SqlState(Inner::E42P14); /// 42P15 pub const INVALID_SCHEMA_DEFINITION: SqlState = SqlState(Inner::E42P15); /// 42P16 pub const INVALID_TABLE_DEFINITION: SqlState = SqlState(Inner::E42P16); /// 42P17 pub const INVALID_OBJECT_DEFINITION: SqlState = SqlState(Inner::E42P17); /// 44000 pub const WITH_CHECK_OPTION_VIOLATION: SqlState = SqlState(Inner::E44000); /// 53000 pub const INSUFFICIENT_RESOURCES: SqlState = SqlState(Inner::E53000); /// 53100 pub const DISK_FULL: SqlState = SqlState(Inner::E53100); /// 53200 pub const OUT_OF_MEMORY: SqlState = SqlState(Inner::E53200); /// 53300 pub const TOO_MANY_CONNECTIONS: SqlState = SqlState(Inner::E53300); /// 53400 pub const CONFIGURATION_LIMIT_EXCEEDED: SqlState = SqlState(Inner::E53400); /// 54000 pub const PROGRAM_LIMIT_EXCEEDED: SqlState = SqlState(Inner::E54000); /// 54001 pub const STATEMENT_TOO_COMPLEX: SqlState = SqlState(Inner::E54001); /// 54011 pub const TOO_MANY_COLUMNS: SqlState = SqlState(Inner::E54011); /// 54023 pub const TOO_MANY_ARGUMENTS: SqlState = SqlState(Inner::E54023); /// 55000 pub const OBJECT_NOT_IN_PREREQUISITE_STATE: SqlState = SqlState(Inner::E55000); /// 55006 pub const OBJECT_IN_USE: SqlState = SqlState(Inner::E55006); /// 55P02 pub const CANT_CHANGE_RUNTIME_PARAM: SqlState = SqlState(Inner::E55P02); /// 55P03 pub const LOCK_NOT_AVAILABLE: SqlState = SqlState(Inner::E55P03); /// 55P04 pub const UNSAFE_NEW_ENUM_VALUE_USAGE: SqlState = SqlState(Inner::E55P04); /// 57000 pub const OPERATOR_INTERVENTION: SqlState = SqlState(Inner::E57000); /// 57014 pub const QUERY_CANCELED: SqlState = SqlState(Inner::E57014); /// 57P01 pub const ADMIN_SHUTDOWN: SqlState = SqlState(Inner::E57P01); /// 
57P02 pub const CRASH_SHUTDOWN: SqlState = SqlState(Inner::E57P02); /// 57P03 pub const CANNOT_CONNECT_NOW: SqlState = SqlState(Inner::E57P03); /// 57P04 pub const DATABASE_DROPPED: SqlState = SqlState(Inner::E57P04); /// 57P05 pub const IDLE_SESSION_TIMEOUT: SqlState = SqlState(Inner::E57P05); /// 58000 pub const SYSTEM_ERROR: SqlState = SqlState(Inner::E58000); /// 58030 pub const IO_ERROR: SqlState = SqlState(Inner::E58030); /// 58P01 pub const UNDEFINED_FILE: SqlState = SqlState(Inner::E58P01); /// 58P02 pub const DUPLICATE_FILE: SqlState = SqlState(Inner::E58P02); /// 72000 pub const SNAPSHOT_TOO_OLD: SqlState = SqlState(Inner::E72000); /// F0000 pub const CONFIG_FILE_ERROR: SqlState = SqlState(Inner::EF0000); /// F0001 pub const LOCK_FILE_EXISTS: SqlState = SqlState(Inner::EF0001); /// HV000 pub const FDW_ERROR: SqlState = SqlState(Inner::EHV000); /// HV005 pub const FDW_COLUMN_NAME_NOT_FOUND: SqlState = SqlState(Inner::EHV005); /// HV002 pub const FDW_DYNAMIC_PARAMETER_VALUE_NEEDED: SqlState = SqlState(Inner::EHV002); /// HV010 pub const FDW_FUNCTION_SEQUENCE_ERROR: SqlState = SqlState(Inner::EHV010); /// HV021 pub const FDW_INCONSISTENT_DESCRIPTOR_INFORMATION: SqlState = SqlState(Inner::EHV021); /// HV024 pub const FDW_INVALID_ATTRIBUTE_VALUE: SqlState = SqlState(Inner::EHV024); /// HV007 pub const FDW_INVALID_COLUMN_NAME: SqlState = SqlState(Inner::EHV007); /// HV008 pub const FDW_INVALID_COLUMN_NUMBER: SqlState = SqlState(Inner::EHV008); /// HV004 pub const FDW_INVALID_DATA_TYPE: SqlState = SqlState(Inner::EHV004); /// HV006 pub const FDW_INVALID_DATA_TYPE_DESCRIPTORS: SqlState = SqlState(Inner::EHV006); /// HV091 pub const FDW_INVALID_DESCRIPTOR_FIELD_IDENTIFIER: SqlState = SqlState(Inner::EHV091); /// HV00B pub const FDW_INVALID_HANDLE: SqlState = SqlState(Inner::EHV00B); /// HV00C pub const FDW_INVALID_OPTION_INDEX: SqlState = SqlState(Inner::EHV00C); /// HV00D pub const FDW_INVALID_OPTION_NAME: SqlState = SqlState(Inner::EHV00D); /// HV090 pub const 
FDW_INVALID_STRING_LENGTH_OR_BUFFER_LENGTH: SqlState = SqlState(Inner::EHV090); /// HV00A pub const FDW_INVALID_STRING_FORMAT: SqlState = SqlState(Inner::EHV00A); /// HV009 pub const FDW_INVALID_USE_OF_NULL_POINTER: SqlState = SqlState(Inner::EHV009); /// HV014 pub const FDW_TOO_MANY_HANDLES: SqlState = SqlState(Inner::EHV014); /// HV001 pub const FDW_OUT_OF_MEMORY: SqlState = SqlState(Inner::EHV001); /// HV00P pub const FDW_NO_SCHEMAS: SqlState = SqlState(Inner::EHV00P); /// HV00J pub const FDW_OPTION_NAME_NOT_FOUND: SqlState = SqlState(Inner::EHV00J); /// HV00K pub const FDW_REPLY_HANDLE: SqlState = SqlState(Inner::EHV00K); /// HV00Q pub const FDW_SCHEMA_NOT_FOUND: SqlState = SqlState(Inner::EHV00Q); /// HV00R pub const FDW_TABLE_NOT_FOUND: SqlState = SqlState(Inner::EHV00R); /// HV00L pub const FDW_UNABLE_TO_CREATE_EXECUTION: SqlState = SqlState(Inner::EHV00L); /// HV00M pub const FDW_UNABLE_TO_CREATE_REPLY: SqlState = SqlState(Inner::EHV00M); /// HV00N pub const FDW_UNABLE_TO_ESTABLISH_CONNECTION: SqlState = SqlState(Inner::EHV00N); /// P0000 pub const PLPGSQL_ERROR: SqlState = SqlState(Inner::EP0000); /// P0001 pub const RAISE_EXCEPTION: SqlState = SqlState(Inner::EP0001); /// P0002 pub const NO_DATA_FOUND: SqlState = SqlState(Inner::EP0002); /// P0003 pub const TOO_MANY_ROWS: SqlState = SqlState(Inner::EP0003); /// P0004 pub const ASSERT_FAILURE: SqlState = SqlState(Inner::EP0004); /// XX000 pub const INTERNAL_ERROR: SqlState = SqlState(Inner::EXX000); /// XX001 pub const DATA_CORRUPTED: SqlState = SqlState(Inner::EXX001); /// XX002 pub const INDEX_CORRUPTED: SqlState = SqlState(Inner::EXX002); } #[derive(PartialEq, Eq, Clone, Debug)] #[allow(clippy::upper_case_acronyms)] enum Inner { E00000, E01000, E0100C, E01008, E01003, E01007, E01006, E01004, E01P01, E02000, E02001, E03000, E08000, E08003, E08006, E08001, E08004, E08007, E08P01, E09000, E0A000, E0B000, E0F000, E0F001, E0L000, E0LP01, E0P000, E0Z000, E0Z002, E20000, E21000, E22000, E2202E, E22021, E22008, 
E22012, E22005, E2200B, E22022, E22015, E2201E, E22014, E22016, E2201F, E2201G, E22018, E22007, E22019, E2200D, E22025, E22P06, E22010, E22023, E22013, E2201B, E2201W, E2201X, E2202H, E2202G, E22009, E2200C, E2200G, E22004, E22002, E22003, E2200H, E22026, E22001, E22011, E22027, E22024, E2200F, E22P01, E22P02, E22P03, E22P04, E22P05, E2200L, E2200M, E2200N, E2200S, E2200T, E22030, E22031, E22032, E22033, E22034, E22035, E22036, E22037, E22038, E22039, E2203A, E2203B, E2203C, E2203D, E2203E, E2203F, E2203G, E23000, E23001, E23502, E23503, E23505, E23514, E23P01, E24000, E25000, E25001, E25002, E25008, E25003, E25004, E25005, E25006, E25007, E25P01, E25P02, E25P03, E26000, E27000, E28000, E28P01, E2B000, E2BP01, E2D000, E2F000, E2F005, E2F002, E2F003, E2F004, E34000, E38000, E38001, E38002, E38003, E38004, E39000, E39001, E39004, E39P01, E39P02, E39P03, E3B000, E3B001, E3D000, E3F000, E40000, E40002, E40001, E40003, E40P01, E42000, E42601, E42501, E42846, E42803, E42P20, E42P19, E42830, E42602, E42622, E42939, E42804, E42P18, E42P21, E42P22, E42809, E428C9, E42703, E42883, E42P01, E42P02, E42704, E42701, E42P03, E42P04, E42723, E42P05, E42P06, E42P07, E42712, E42710, E42702, E42725, E42P08, E42P09, E42P10, E42611, E42P11, E42P12, E42P13, E42P14, E42P15, E42P16, E42P17, E44000, E53000, E53100, E53200, E53300, E53400, E54000, E54001, E54011, E54023, E55000, E55006, E55P02, E55P03, E55P04, E57000, E57014, E57P01, E57P02, E57P03, E57P04, E57P05, E58000, E58030, E58P01, E58P02, E72000, EF0000, EF0001, EHV000, EHV005, EHV002, EHV010, EHV021, EHV024, EHV007, EHV008, EHV004, EHV006, EHV091, EHV00B, EHV00C, EHV00D, EHV090, EHV00A, EHV009, EHV014, EHV001, EHV00P, EHV00J, EHV00K, EHV00Q, EHV00R, EHV00L, EHV00M, EHV00N, EP0000, EP0001, EP0002, EP0003, EP0004, EXX000, EXX001, EXX002, Other(Box<str>), } #[rustfmt::skip] static SQLSTATE_MAP: phf::Map<&'static str, SqlState> = ::phf::Map { key: 12913932095322966823, disps: &[ (0, 24), (0, 12), (0, 74), (0, 109), (0, 11), (0, 9), (0, 0),
(4, 38), (3, 155), (0, 6), (1, 242), (0, 66), (0, 53), (5, 180), (3, 221), (7, 230), (0, 125), (1, 46), (0, 11), (1, 2), (0, 5), (0, 13), (0, 171), (0, 15), (0, 4), (0, 22), (1, 85), (0, 75), (2, 0), (1, 25), (7, 47), (0, 45), (0, 35), (0, 7), (7, 124), (0, 0), (14, 104), (1, 183), (61, 50), (3, 76), (0, 12), (0, 7), (4, 189), (0, 1), (64, 102), (0, 0), (16, 192), (24, 19), (0, 5), (0, 87), (0, 89), (0, 14), ], entries: &[ ("2F000", SqlState::SQL_ROUTINE_EXCEPTION), ("01008", SqlState::WARNING_IMPLICIT_ZERO_BIT_PADDING), ("42501", SqlState::INSUFFICIENT_PRIVILEGE), ("22000", SqlState::DATA_EXCEPTION), ("0100C", SqlState::WARNING_DYNAMIC_RESULT_SETS_RETURNED), ("2200N", SqlState::INVALID_XML_CONTENT), ("40001", SqlState::T_R_SERIALIZATION_FAILURE), ("28P01", SqlState::INVALID_PASSWORD), ("38000", SqlState::EXTERNAL_ROUTINE_EXCEPTION), ("25006", SqlState::READ_ONLY_SQL_TRANSACTION), ("2203D", SqlState::TOO_MANY_JSON_ARRAY_ELEMENTS), ("42P09", SqlState::AMBIGUOUS_ALIAS), ("F0000", SqlState::CONFIG_FILE_ERROR), ("42P18", SqlState::INDETERMINATE_DATATYPE), ("40002", SqlState::T_R_INTEGRITY_CONSTRAINT_VIOLATION), ("22009", SqlState::INVALID_TIME_ZONE_DISPLACEMENT_VALUE), ("42P08", SqlState::AMBIGUOUS_PARAMETER), ("08000", SqlState::CONNECTION_EXCEPTION), ("25P01", SqlState::NO_ACTIVE_SQL_TRANSACTION), ("22024", SqlState::UNTERMINATED_C_STRING), ("55000", SqlState::OBJECT_NOT_IN_PREREQUISITE_STATE), ("25001", SqlState::ACTIVE_SQL_TRANSACTION), ("03000", SqlState::SQL_STATEMENT_NOT_YET_COMPLETE), ("42710", SqlState::DUPLICATE_OBJECT), ("2D000", SqlState::INVALID_TRANSACTION_TERMINATION), ("2200G", SqlState::MOST_SPECIFIC_TYPE_MISMATCH), ("22022", SqlState::INDICATOR_OVERFLOW), ("55006", SqlState::OBJECT_IN_USE), ("53200", SqlState::OUT_OF_MEMORY), ("22012", SqlState::DIVISION_BY_ZERO), ("P0002", SqlState::NO_DATA_FOUND), ("XX001", SqlState::DATA_CORRUPTED), ("22P05", SqlState::UNTRANSLATABLE_CHARACTER), ("40003", SqlState::T_R_STATEMENT_COMPLETION_UNKNOWN), ("22021", 
SqlState::CHARACTER_NOT_IN_REPERTOIRE), ("25000", SqlState::INVALID_TRANSACTION_STATE), ("42P15", SqlState::INVALID_SCHEMA_DEFINITION), ("0B000", SqlState::INVALID_TRANSACTION_INITIATION), ("22004", SqlState::NULL_VALUE_NOT_ALLOWED), ("42804", SqlState::DATATYPE_MISMATCH), ("42803", SqlState::GROUPING_ERROR), ("02001", SqlState::NO_ADDITIONAL_DYNAMIC_RESULT_SETS_RETURNED), ("25002", SqlState::BRANCH_TRANSACTION_ALREADY_ACTIVE), ("28000", SqlState::INVALID_AUTHORIZATION_SPECIFICATION), ("HV009", SqlState::FDW_INVALID_USE_OF_NULL_POINTER), ("22P01", SqlState::FLOATING_POINT_EXCEPTION), ("2B000", SqlState::DEPENDENT_PRIVILEGE_DESCRIPTORS_STILL_EXIST), ("42723", SqlState::DUPLICATE_FUNCTION), ("21000", SqlState::CARDINALITY_VIOLATION), ("0Z002", SqlState::STACKED_DIAGNOSTICS_ACCESSED_WITHOUT_ACTIVE_HANDLER), ("23505", SqlState::UNIQUE_VIOLATION), ("HV00J", SqlState::FDW_OPTION_NAME_NOT_FOUND), ("23P01", SqlState::EXCLUSION_VIOLATION), ("39P03", SqlState::E_R_I_E_EVENT_TRIGGER_PROTOCOL_VIOLATED), ("42P10", SqlState::INVALID_COLUMN_REFERENCE), ("2202H", SqlState::INVALID_TABLESAMPLE_ARGUMENT), ("55P04", SqlState::UNSAFE_NEW_ENUM_VALUE_USAGE), ("P0000", SqlState::PLPGSQL_ERROR), ("2F005", SqlState::S_R_E_FUNCTION_EXECUTED_NO_RETURN_STATEMENT), ("HV00M", SqlState::FDW_UNABLE_TO_CREATE_REPLY), ("0A000", SqlState::FEATURE_NOT_SUPPORTED), ("24000", SqlState::INVALID_CURSOR_STATE), ("25008", SqlState::HELD_CURSOR_REQUIRES_SAME_ISOLATION_LEVEL), ("01003", SqlState::WARNING_NULL_VALUE_ELIMINATED_IN_SET_FUNCTION), ("42712", SqlState::DUPLICATE_ALIAS), ("HV014", SqlState::FDW_TOO_MANY_HANDLES), ("58030", SqlState::IO_ERROR), ("2201W", SqlState::INVALID_ROW_COUNT_IN_LIMIT_CLAUSE), ("22033", SqlState::INVALID_SQL_JSON_SUBSCRIPT), ("2BP01", SqlState::DEPENDENT_OBJECTS_STILL_EXIST), ("HV005", SqlState::FDW_COLUMN_NAME_NOT_FOUND), ("25004", SqlState::INAPPROPRIATE_ISOLATION_LEVEL_FOR_BRANCH_TRANSACTION), ("54000", SqlState::PROGRAM_LIMIT_EXCEEDED), ("20000", SqlState::CASE_NOT_FOUND), 
("2203G", SqlState::SQL_JSON_ITEM_CANNOT_BE_CAST_TO_TARGET_TYPE), ("22038", SqlState::SINGLETON_SQL_JSON_ITEM_REQUIRED), ("22007", SqlState::INVALID_DATETIME_FORMAT), ("08004", SqlState::SQLSERVER_REJECTED_ESTABLISHMENT_OF_SQLCONNECTION), ("2200H", SqlState::SEQUENCE_GENERATOR_LIMIT_EXCEEDED), ("HV00D", SqlState::FDW_INVALID_OPTION_NAME), ("P0004", SqlState::ASSERT_FAILURE), ("22018", SqlState::INVALID_CHARACTER_VALUE_FOR_CAST), ("0L000", SqlState::INVALID_GRANTOR), ("22P04", SqlState::BAD_COPY_FILE_FORMAT), ("22031", SqlState::INVALID_ARGUMENT_FOR_SQL_JSON_DATETIME_FUNCTION), ("01P01", SqlState::WARNING_DEPRECATED_FEATURE), ("0LP01", SqlState::INVALID_GRANT_OPERATION), ("58P02", SqlState::DUPLICATE_FILE), ("26000", SqlState::INVALID_SQL_STATEMENT_NAME), ("54001", SqlState::STATEMENT_TOO_COMPLEX), ("22010", SqlState::INVALID_INDICATOR_PARAMETER_VALUE), ("HV00C", SqlState::FDW_INVALID_OPTION_INDEX), ("22008", SqlState::DATETIME_FIELD_OVERFLOW), ("42P06", SqlState::DUPLICATE_SCHEMA), ("25007", SqlState::SCHEMA_AND_DATA_STATEMENT_MIXING_NOT_SUPPORTED), ("42P20", SqlState::WINDOWING_ERROR), ("HV091", SqlState::FDW_INVALID_DESCRIPTOR_FIELD_IDENTIFIER), ("HV021", SqlState::FDW_INCONSISTENT_DESCRIPTOR_INFORMATION), ("42702", SqlState::AMBIGUOUS_COLUMN), ("02000", SqlState::NO_DATA), ("54011", SqlState::TOO_MANY_COLUMNS), ("HV004", SqlState::FDW_INVALID_DATA_TYPE), ("01006", SqlState::WARNING_PRIVILEGE_NOT_REVOKED), ("42701", SqlState::DUPLICATE_COLUMN), ("08P01", SqlState::PROTOCOL_VIOLATION), ("42622", SqlState::NAME_TOO_LONG), ("P0003", SqlState::TOO_MANY_ROWS), ("22003", SqlState::NUMERIC_VALUE_OUT_OF_RANGE), ("42P03", SqlState::DUPLICATE_CURSOR), ("23001", SqlState::RESTRICT_VIOLATION), ("57000", SqlState::OPERATOR_INTERVENTION), ("22027", SqlState::TRIM_ERROR), ("42P12", SqlState::INVALID_DATABASE_DEFINITION), ("3B000", SqlState::SAVEPOINT_EXCEPTION), ("2201B", SqlState::INVALID_REGULAR_EXPRESSION), ("22030", SqlState::DUPLICATE_JSON_OBJECT_KEY_VALUE), ("2F004", 
SqlState::S_R_E_READING_SQL_DATA_NOT_PERMITTED), ("428C9", SqlState::GENERATED_ALWAYS), ("2200S", SqlState::INVALID_XML_COMMENT), ("22039", SqlState::SQL_JSON_ARRAY_NOT_FOUND), ("42809", SqlState::WRONG_OBJECT_TYPE), ("2201X", SqlState::INVALID_ROW_COUNT_IN_RESULT_OFFSET_CLAUSE), ("39001", SqlState::E_R_I_E_INVALID_SQLSTATE_RETURNED), ("25P02", SqlState::IN_FAILED_SQL_TRANSACTION), ("0P000", SqlState::INVALID_ROLE_SPECIFICATION), ("HV00N", SqlState::FDW_UNABLE_TO_ESTABLISH_CONNECTION), ("53100", SqlState::DISK_FULL), ("42601", SqlState::SYNTAX_ERROR), ("23000", SqlState::INTEGRITY_CONSTRAINT_VIOLATION), ("HV006", SqlState::FDW_INVALID_DATA_TYPE_DESCRIPTORS), ("HV00B", SqlState::FDW_INVALID_HANDLE), ("HV00Q", SqlState::FDW_SCHEMA_NOT_FOUND), ("01000", SqlState::WARNING), ("42883", SqlState::UNDEFINED_FUNCTION), ("57P01", SqlState::ADMIN_SHUTDOWN), ("22037", SqlState::NON_UNIQUE_KEYS_IN_A_JSON_OBJECT), ("00000", SqlState::SUCCESSFUL_COMPLETION), ("55P03", SqlState::LOCK_NOT_AVAILABLE), ("42P01", SqlState::UNDEFINED_TABLE), ("42830", SqlState::INVALID_FOREIGN_KEY), ("22005", SqlState::ERROR_IN_ASSIGNMENT), ("22025", SqlState::INVALID_ESCAPE_SEQUENCE), ("XX002", SqlState::INDEX_CORRUPTED), ("42P16", SqlState::INVALID_TABLE_DEFINITION), ("55P02", SqlState::CANT_CHANGE_RUNTIME_PARAM), ("22019", SqlState::INVALID_ESCAPE_CHARACTER), ("P0001", SqlState::RAISE_EXCEPTION), ("72000", SqlState::SNAPSHOT_TOO_OLD), ("42P11", SqlState::INVALID_CURSOR_DEFINITION), ("40P01", SqlState::T_R_DEADLOCK_DETECTED), ("57P02", SqlState::CRASH_SHUTDOWN), ("HV00A", SqlState::FDW_INVALID_STRING_FORMAT), ("2F002", SqlState::S_R_E_MODIFYING_SQL_DATA_NOT_PERMITTED), ("23503", SqlState::FOREIGN_KEY_VIOLATION), ("40000", SqlState::TRANSACTION_ROLLBACK), ("22032", SqlState::INVALID_JSON_TEXT), ("2202E", SqlState::ARRAY_ELEMENT_ERROR), ("42P19", SqlState::INVALID_RECURSION), ("42611", SqlState::INVALID_COLUMN_DEFINITION), ("42P13", SqlState::INVALID_FUNCTION_DEFINITION), ("25003", 
SqlState::INAPPROPRIATE_ACCESS_MODE_FOR_BRANCH_TRANSACTION), ("39P02", SqlState::E_R_I_E_SRF_PROTOCOL_VIOLATED), ("XX000", SqlState::INTERNAL_ERROR), ("08006", SqlState::CONNECTION_FAILURE), ("57P04", SqlState::DATABASE_DROPPED), ("42P07", SqlState::DUPLICATE_TABLE), ("22P03", SqlState::INVALID_BINARY_REPRESENTATION), ("22035", SqlState::NO_SQL_JSON_ITEM), ("42P14", SqlState::INVALID_PSTATEMENT_DEFINITION), ("01007", SqlState::WARNING_PRIVILEGE_NOT_GRANTED), ("38004", SqlState::E_R_E_READING_SQL_DATA_NOT_PERMITTED), ("42P21", SqlState::COLLATION_MISMATCH), ("0Z000", SqlState::DIAGNOSTICS_EXCEPTION), ("HV001", SqlState::FDW_OUT_OF_MEMORY), ("0F000", SqlState::LOCATOR_EXCEPTION), ("22013", SqlState::INVALID_PRECEDING_OR_FOLLOWING_SIZE), ("2201E", SqlState::INVALID_ARGUMENT_FOR_LOG), ("22011", SqlState::SUBSTRING_ERROR), ("42602", SqlState::INVALID_NAME), ("01004", SqlState::WARNING_STRING_DATA_RIGHT_TRUNCATION), ("42P02", SqlState::UNDEFINED_PARAMETER), ("2203C", SqlState::SQL_JSON_OBJECT_NOT_FOUND), ("HV002", SqlState::FDW_DYNAMIC_PARAMETER_VALUE_NEEDED), ("0F001", SqlState::L_E_INVALID_SPECIFICATION), ("58P01", SqlState::UNDEFINED_FILE), ("38001", SqlState::E_R_E_CONTAINING_SQL_NOT_PERMITTED), ("42703", SqlState::UNDEFINED_COLUMN), ("57P05", SqlState::IDLE_SESSION_TIMEOUT), ("57P03", SqlState::CANNOT_CONNECT_NOW), ("HV007", SqlState::FDW_INVALID_COLUMN_NAME), ("22014", SqlState::INVALID_ARGUMENT_FOR_NTILE), ("22P06", SqlState::NONSTANDARD_USE_OF_ESCAPE_CHARACTER), ("2203F", SqlState::SQL_JSON_SCALAR_REQUIRED), ("2200F", SqlState::ZERO_LENGTH_CHARACTER_STRING), ("09000", SqlState::TRIGGERED_ACTION_EXCEPTION), ("2201F", SqlState::INVALID_ARGUMENT_FOR_POWER_FUNCTION), ("08003", SqlState::CONNECTION_DOES_NOT_EXIST), ("38002", SqlState::E_R_E_MODIFYING_SQL_DATA_NOT_PERMITTED), ("F0001", SqlState::LOCK_FILE_EXISTS), ("42P22", SqlState::INDETERMINATE_COLLATION), ("2200C", SqlState::INVALID_USE_OF_ESCAPE_CHARACTER), ("2203E", SqlState::TOO_MANY_JSON_OBJECT_MEMBERS), 
("23514", SqlState::CHECK_VIOLATION), ("22P02", SqlState::INVALID_TEXT_REPRESENTATION), ("54023", SqlState::TOO_MANY_ARGUMENTS), ("2200T", SqlState::INVALID_XML_PROCESSING_INSTRUCTION), ("22016", SqlState::INVALID_ARGUMENT_FOR_NTH_VALUE), ("25P03", SqlState::IDLE_IN_TRANSACTION_SESSION_TIMEOUT), ("3B001", SqlState::S_E_INVALID_SPECIFICATION), ("08001", SqlState::SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION), ("22036", SqlState::NON_NUMERIC_SQL_JSON_ITEM), ("3F000", SqlState::INVALID_SCHEMA_NAME), ("39P01", SqlState::E_R_I_E_TRIGGER_PROTOCOL_VIOLATED), ("22026", SqlState::STRING_DATA_LENGTH_MISMATCH), ("42P17", SqlState::INVALID_OBJECT_DEFINITION), ("22034", SqlState::MORE_THAN_ONE_SQL_JSON_ITEM), ("HV000", SqlState::FDW_ERROR), ("2200B", SqlState::ESCAPE_CHARACTER_CONFLICT), ("HV008", SqlState::FDW_INVALID_COLUMN_NUMBER), ("34000", SqlState::INVALID_CURSOR_NAME), ("2201G", SqlState::INVALID_ARGUMENT_FOR_WIDTH_BUCKET_FUNCTION), ("44000", SqlState::WITH_CHECK_OPTION_VIOLATION), ("HV010", SqlState::FDW_FUNCTION_SEQUENCE_ERROR), ("39004", SqlState::E_R_I_E_NULL_VALUE_NOT_ALLOWED), ("22001", SqlState::STRING_DATA_RIGHT_TRUNCATION), ("3D000", SqlState::INVALID_CATALOG_NAME), ("25005", SqlState::NO_ACTIVE_SQL_TRANSACTION_FOR_BRANCH_TRANSACTION), ("2200L", SqlState::NOT_AN_XML_DOCUMENT), ("27000", SqlState::TRIGGERED_DATA_CHANGE_VIOLATION), ("HV090", SqlState::FDW_INVALID_STRING_LENGTH_OR_BUFFER_LENGTH), ("42939", SqlState::RESERVED_NAME), ("58000", SqlState::SYSTEM_ERROR), ("2200M", SqlState::INVALID_XML_DOCUMENT), ("HV00L", SqlState::FDW_UNABLE_TO_CREATE_EXECUTION), ("57014", SqlState::QUERY_CANCELED), ("23502", SqlState::NOT_NULL_VIOLATION), ("22002", SqlState::NULL_VALUE_NO_INDICATOR_PARAMETER), ("HV00R", SqlState::FDW_TABLE_NOT_FOUND), ("HV00P", SqlState::FDW_NO_SCHEMAS), ("38003", SqlState::E_R_E_PROHIBITED_SQL_STATEMENT_ATTEMPTED), ("39000", SqlState::EXTERNAL_ROUTINE_INVOCATION_EXCEPTION), ("22015", SqlState::INTERVAL_FIELD_OVERFLOW), ("HV00K", 
SqlState::FDW_REPLY_HANDLE), ("HV024", SqlState::FDW_INVALID_ATTRIBUTE_VALUE), ("2200D", SqlState::INVALID_ESCAPE_OCTET), ("08007", SqlState::TRANSACTION_RESOLUTION_UNKNOWN), ("2F003", SqlState::S_R_E_PROHIBITED_SQL_STATEMENT_ATTEMPTED), ("42725", SqlState::AMBIGUOUS_FUNCTION), ("2203A", SqlState::SQL_JSON_MEMBER_NOT_FOUND), ("42846", SqlState::CANNOT_COERCE), ("42P04", SqlState::DUPLICATE_DATABASE), ("42000", SqlState::SYNTAX_ERROR_OR_ACCESS_RULE_VIOLATION), ("2203B", SqlState::SQL_JSON_NUMBER_NOT_FOUND), ("42P05", SqlState::DUPLICATE_PSTATEMENT), ("53300", SqlState::TOO_MANY_CONNECTIONS), ("53400", SqlState::CONFIGURATION_LIMIT_EXCEEDED), ("42704", SqlState::UNDEFINED_OBJECT), ("2202G", SqlState::INVALID_TABLESAMPLE_REPEAT), ("22023", SqlState::INVALID_PARAMETER_VALUE), ("53000", SqlState::INSUFFICIENT_RESOURCES),
    ],
};

tokio-postgres-0.7.12/src/generic_client.rs

use crate::query::RowStream;
use crate::types::{BorrowToSql, ToSql, Type};
use crate::{Client, Error, Row, SimpleQueryMessage, Statement, ToStatement, Transaction};
use async_trait::async_trait;

mod private {
    pub trait Sealed {}
}

/// A trait allowing abstraction over connections and transactions.
///
/// This trait is "sealed", and cannot be implemented outside of this crate.
#[async_trait]
pub trait GenericClient: private::Sealed {
    /// Like [`Client::execute`].
    async fn execute<T>(&self, query: &T, params: &[&(dyn ToSql + Sync)]) -> Result<u64, Error>
    where
        T: ?Sized + ToStatement + Sync + Send;

    /// Like [`Client::execute_raw`].
    async fn execute_raw<T, P, I>(&self, statement: &T, params: I) -> Result<u64, Error>
    where
        T: ?Sized + ToStatement + Sync + Send,
        P: BorrowToSql,
        I: IntoIterator<Item = P> + Sync + Send,
        I::IntoIter: ExactSizeIterator;

    /// Like [`Client::query`].
    async fn query<T>(&self, query: &T, params: &[&(dyn ToSql + Sync)]) -> Result<Vec<Row>, Error>
    where
        T: ?Sized + ToStatement + Sync + Send;

    /// Like [`Client::query_one`].
    async fn query_one<T>(
        &self,
        statement: &T,
        params: &[&(dyn ToSql + Sync)],
    ) -> Result<Row, Error>
    where
        T: ?Sized + ToStatement + Sync + Send;

    /// Like [`Client::query_opt`].
    async fn query_opt<T>(
        &self,
        statement: &T,
        params: &[&(dyn ToSql + Sync)],
    ) -> Result<Option<Row>, Error>
    where
        T: ?Sized + ToStatement + Sync + Send;

    /// Like [`Client::query_raw`].
    async fn query_raw<T, P, I>(&self, statement: &T, params: I) -> Result<RowStream, Error>
    where
        T: ?Sized + ToStatement + Sync + Send,
        P: BorrowToSql,
        I: IntoIterator<Item = P> + Sync + Send,
        I::IntoIter: ExactSizeIterator;

    /// Like [`Client::query_typed`]
    async fn query_typed(
        &self,
        statement: &str,
        params: &[(&(dyn ToSql + Sync), Type)],
    ) -> Result<Vec<Row>, Error>;

    /// Like [`Client::query_typed_raw`]
    async fn query_typed_raw<P, I>(&self, statement: &str, params: I) -> Result<RowStream, Error>
    where
        P: BorrowToSql,
        I: IntoIterator<Item = (P, Type)> + Sync + Send;

    /// Like [`Client::prepare`].
    async fn prepare(&self, query: &str) -> Result<Statement, Error>;

    /// Like [`Client::prepare_typed`].
    async fn prepare_typed(
        &self,
        query: &str,
        parameter_types: &[Type],
    ) -> Result<Statement, Error>;

    /// Like [`Client::transaction`].
    async fn transaction(&mut self) -> Result<Transaction<'_>, Error>;

    /// Like [`Client::batch_execute`].
    async fn batch_execute(&self, query: &str) -> Result<(), Error>;

    /// Like [`Client::simple_query`].
    async fn simple_query(&self, query: &str) -> Result<Vec<SimpleQueryMessage>, Error>;

    /// Returns a reference to the underlying [`Client`].
    fn client(&self) -> &Client;
}

impl private::Sealed for Client {}

#[async_trait]
impl GenericClient for Client {
    async fn execute<T>(&self, query: &T, params: &[&(dyn ToSql + Sync)]) -> Result<u64, Error>
    where
        T: ?Sized + ToStatement + Sync + Send,
    {
        self.execute(query, params).await
    }

    async fn execute_raw<T, P, I>(&self, statement: &T, params: I) -> Result<u64, Error>
    where
        T: ?Sized + ToStatement + Sync + Send,
        P: BorrowToSql,
        I: IntoIterator<Item = P> + Sync + Send,
        I::IntoIter: ExactSizeIterator,
    {
        self.execute_raw(statement, params).await
    }

    async fn query<T>(&self, query: &T, params: &[&(dyn ToSql + Sync)]) -> Result<Vec<Row>, Error>
    where
        T: ?Sized + ToStatement + Sync + Send,
    {
        self.query(query, params).await
    }

    async fn query_one<T>(
        &self,
        statement: &T,
        params: &[&(dyn ToSql + Sync)],
    ) -> Result<Row, Error>
    where
        T: ?Sized + ToStatement + Sync + Send,
    {
        self.query_one(statement, params).await
    }

    async fn query_opt<T>(
        &self,
        statement: &T,
        params: &[&(dyn ToSql + Sync)],
    ) -> Result<Option<Row>, Error>
    where
        T: ?Sized + ToStatement + Sync + Send,
    {
        self.query_opt(statement, params).await
    }

    async fn query_raw<T, P, I>(&self, statement: &T, params: I) -> Result<RowStream, Error>
    where
        T: ?Sized + ToStatement + Sync + Send,
        P: BorrowToSql,
        I: IntoIterator<Item = P> + Sync + Send,
        I::IntoIter: ExactSizeIterator,
    {
        self.query_raw(statement, params).await
    }

    async fn query_typed(
        &self,
        statement: &str,
        params: &[(&(dyn ToSql + Sync), Type)],
    ) -> Result<Vec<Row>, Error> {
        self.query_typed(statement, params).await
    }

    async fn query_typed_raw<P, I>(&self, statement: &str, params: I) -> Result<RowStream, Error>
    where
        P: BorrowToSql,
        I: IntoIterator<Item = (P, Type)> + Sync + Send,
    {
        self.query_typed_raw(statement, params).await
    }

    async fn prepare(&self, query: &str) -> Result<Statement, Error> {
        self.prepare(query).await
    }

    async fn prepare_typed(
        &self,
        query: &str,
        parameter_types: &[Type],
    ) -> Result<Statement, Error> {
        self.prepare_typed(query, parameter_types).await
    }

    async fn transaction(&mut self) -> Result<Transaction<'_>, Error> {
        self.transaction().await
    }

    async fn batch_execute(&self, query: &str) -> Result<(), Error> {
        self.batch_execute(query).await
    }

    async fn
simple_query(&self, query: &str) -> Result<Vec<SimpleQueryMessage>, Error> {
        self.simple_query(query).await
    }

    fn client(&self) -> &Client {
        self
    }
}

impl private::Sealed for Transaction<'_> {}

#[async_trait]
#[allow(clippy::needless_lifetimes)]
impl GenericClient for Transaction<'_> {
    async fn execute<T>(&self, query: &T, params: &[&(dyn ToSql + Sync)]) -> Result<u64, Error>
    where
        T: ?Sized + ToStatement + Sync + Send,
    {
        self.execute(query, params).await
    }

    async fn execute_raw<T, P, I>(&self, statement: &T, params: I) -> Result<u64, Error>
    where
        T: ?Sized + ToStatement + Sync + Send,
        P: BorrowToSql,
        I: IntoIterator<Item = P> + Sync + Send,
        I::IntoIter: ExactSizeIterator,
    {
        self.execute_raw(statement, params).await
    }

    async fn query<T>(&self, query: &T, params: &[&(dyn ToSql + Sync)]) -> Result<Vec<Row>, Error>
    where
        T: ?Sized + ToStatement + Sync + Send,
    {
        self.query(query, params).await
    }

    async fn query_one<T>(
        &self,
        statement: &T,
        params: &[&(dyn ToSql + Sync)],
    ) -> Result<Row, Error>
    where
        T: ?Sized + ToStatement + Sync + Send,
    {
        self.query_one(statement, params).await
    }

    async fn query_opt<T>(
        &self,
        statement: &T,
        params: &[&(dyn ToSql + Sync)],
    ) -> Result<Option<Row>, Error>
    where
        T: ?Sized + ToStatement + Sync + Send,
    {
        self.query_opt(statement, params).await
    }

    async fn query_raw<T, P, I>(&self, statement: &T, params: I) -> Result<RowStream, Error>
    where
        T: ?Sized + ToStatement + Sync + Send,
        P: BorrowToSql,
        I: IntoIterator<Item = P> + Sync + Send,
        I::IntoIter: ExactSizeIterator,
    {
        self.query_raw(statement, params).await
    }

    async fn query_typed(
        &self,
        statement: &str,
        params: &[(&(dyn ToSql + Sync), Type)],
    ) -> Result<Vec<Row>, Error> {
        self.query_typed(statement, params).await
    }

    async fn query_typed_raw<P, I>(&self, statement: &str, params: I) -> Result<RowStream, Error>
    where
        P: BorrowToSql,
        I: IntoIterator<Item = (P, Type)> + Sync + Send,
    {
        self.query_typed_raw(statement, params).await
    }

    async fn prepare(&self, query: &str) -> Result<Statement, Error> {
        self.prepare(query).await
    }

    async fn prepare_typed(
        &self,
        query: &str,
        parameter_types: &[Type],
    ) -> Result<Statement, Error> {
        self.prepare_typed(query, parameter_types).await
    }

    #[allow(clippy::needless_lifetimes)]
    async fn
transaction<'a>(&'a mut self) -> Result<Transaction<'a>, Error> {
        self.transaction().await
    }

    async fn batch_execute(&self, query: &str) -> Result<(), Error> {
        self.batch_execute(query).await
    }

    async fn simple_query(&self, query: &str) -> Result<Vec<SimpleQueryMessage>, Error> {
        self.simple_query(query).await
    }

    fn client(&self) -> &Client {
        self.client()
    }
}

tokio-postgres-0.7.12/src/keepalive.rs

use socket2::TcpKeepalive;
use std::time::Duration;

#[derive(Clone, PartialEq, Eq)]
pub(crate) struct KeepaliveConfig {
    pub idle: Duration,
    pub interval: Option<Duration>,
    pub retries: Option<u32>,
}

impl From<&KeepaliveConfig> for TcpKeepalive {
    fn from(keepalive_config: &KeepaliveConfig) -> Self {
        let mut tcp_keepalive = Self::new().with_time(keepalive_config.idle);

        #[cfg(not(any(
            target_os = "aix",
            target_os = "redox",
            target_os = "solaris",
            target_os = "openbsd"
        )))]
        if let Some(interval) = keepalive_config.interval {
            tcp_keepalive = tcp_keepalive.with_interval(interval);
        }

        #[cfg(not(any(
            target_os = "aix",
            target_os = "redox",
            target_os = "solaris",
            target_os = "windows",
            target_os = "openbsd"
        )))]
        if let Some(retries) = keepalive_config.retries {
            tcp_keepalive = tcp_keepalive.with_retries(retries);
        }

        tcp_keepalive
    }
}

tokio-postgres-0.7.12/src/lib.rs

//! An asynchronous, pipelined, PostgreSQL client.
//!
//! # Example
//!
//! ```no_run
//! use tokio_postgres::{NoTls, Error};
//!
//! # #[cfg(not(feature = "runtime"))] fn main() {}
//! # #[cfg(feature = "runtime")]
//! #[tokio::main] // By default, tokio_postgres uses the tokio crate as its runtime.
//! async fn main() -> Result<(), Error> {
//!     // Connect to the database.
//!     let (client, connection) =
//!         tokio_postgres::connect("host=localhost user=postgres", NoTls).await?;
//!
//!     // The connection object performs the actual communication with the database,
//!     // so spawn it off to run on its own.
//!     tokio::spawn(async move {
//!         if let Err(e) = connection.await {
//!             eprintln!("connection error: {}", e);
//!         }
//!     });
//!
//!     // Now we can execute a simple statement that just returns its parameter.
//!     let rows = client
//!         .query("SELECT $1::TEXT", &[&"hello world"])
//!         .await?;
//!
//!     // And then check that we got back the same string we sent over.
//!     let value: &str = rows[0].get(0);
//!     assert_eq!(value, "hello world");
//!
//!     Ok(())
//! }
//! ```
//!
//! # Behavior
//!
//! Calling a method like `Client::query` on its own does nothing. The associated request is not sent to the database
//! until the future returned by the method is first polled. Requests are executed in the order that they are first
//! polled, not in the order that their futures are created.
//!
//! # Pipelining
//!
//! The client supports *pipelined* requests. Pipelining can improve performance in use cases in which multiple,
//! independent queries need to be executed. In a traditional workflow, each query is sent to the server after the
//! previous query completes. In contrast, pipelining allows the client to send all of the queries to the server up
//! front, minimizing time spent by one side waiting for the other to finish sending data:
//!
//! ```not_rust
//!             Sequential                              Pipelined
//! | Client         | Server          |    | Client         | Server          |
//! |----------------|-----------------|    |----------------|-----------------|
//! | send query 1   |                 |    | send query 1   |                 |
//! |                | process query 1 |    | send query 2   | process query 1 |
//! | receive rows 1 |                 |    | send query 3   | process query 2 |
//! | send query 2   |                 |    | receive rows 1 | process query 3 |
//! |                | process query 2 |    | receive rows 2 |                 |
//! | receive rows 2 |                 |    | receive rows 3 |                 |
//! | send query 3   |                 |
//! |                | process query 3 |
//! | receive rows 3 |                 |
//! ```
//!
//! In both cases, the PostgreSQL server is executing the queries sequentially - pipelining just allows both sides of
//! the connection to work concurrently when possible.
//!
Pipelining happens automatically when futures are polled concurrently (for example, by using the futures `join` //! combinator): //! //! ```rust //! use futures_util::future; //! use std::future::Future; //! use tokio_postgres::{Client, Error, Statement}; //! //! async fn pipelined_prepare( //! client: &Client, //! ) -> Result<(Statement, Statement), Error> //! { //! future::try_join( //! client.prepare("SELECT * FROM foo"), //! client.prepare("INSERT INTO bar (id, name) VALUES ($1, $2)") //! ).await //! } //! ``` //! //! # Runtime //! //! The client works with arbitrary `AsyncRead + AsyncWrite` streams. Convenience APIs are provided to handle the //! connection process, but these are gated by the `runtime` Cargo feature, which is enabled by default. If disabled, //! all dependence on the tokio runtime is removed. //! //! # SSL/TLS support //! //! TLS support is implemented via external libraries. `Client::connect` and `Config::connect` take a TLS implementation //! as an argument. The `NoTls` type in this crate can be used when TLS is not required. Otherwise, the //! `postgres-openssl` and `postgres-native-tls` crates provide implementations backed by the `openssl` and `native-tls` //! crates, respectively. //! //! # Features //! //! The following features can be enabled from `Cargo.toml`: //! //! | Feature | Description | Extra dependencies | Default | //! | ------- | ----------- | ------------------ | ------- | //! | `runtime` | Enable convenience API for the connection process based on the `tokio` crate. | [tokio](https://crates.io/crates/tokio) 1.0 with the features `net` and `time` | yes | //! | `array-impls` | Enables `ToSql` and `FromSql` trait impls for arrays | - | no | //! | `with-bit-vec-0_6` | Enable support for the `bit-vec` crate. | [bit-vec](https://crates.io/crates/bit-vec) 0.6 | no | //! | `with-chrono-0_4` | Enable support for the `chrono` crate. | [chrono](https://crates.io/crates/chrono) 0.4 | no | //! 
| `with-eui48-0_4` | Enable support for the 0.4 version of the `eui48` crate. This is deprecated and will be removed. | [eui48](https://crates.io/crates/eui48) 0.4 | no | //! | `with-eui48-1` | Enable support for the 1.0 version of the `eui48` crate. | [eui48](https://crates.io/crates/eui48) 1.0 | no | //! | `with-geo-types-0_6` | Enable support for the 0.6 version of the `geo-types` crate. | [geo-types](https://crates.io/crates/geo-types/0.6.0) 0.6 | no | //! | `with-geo-types-0_7` | Enable support for the 0.7 version of the `geo-types` crate. | [geo-types](https://crates.io/crates/geo-types/0.7.0) 0.7 | no | //! | `with-jiff-0_1` | Enable support for the 0.1 version of the `jiff` crate. | [jiff](https://crates.io/crates/jiff/0.1.0) 0.1 | no | //! | `with-serde_json-1` | Enable support for the `serde_json` crate. | [serde_json](https://crates.io/crates/serde_json) 1.0 | no | //! | `with-uuid-0_8` | Enable support for the `uuid` crate. | [uuid](https://crates.io/crates/uuid) 0.8 | no | //! | `with-uuid-1` | Enable support for the `uuid` crate. | [uuid](https://crates.io/crates/uuid) 1.0 | no | //! | `with-time-0_2` | Enable support for the 0.2 version of the `time` crate. | [time](https://crates.io/crates/time/0.2.0) 0.2 | no | //! | `with-time-0_3` | Enable support for the 0.3 version of the `time` crate. 
| [time](https://crates.io/crates/time/0.3.0) 0.3 | no | #![warn(rust_2018_idioms, clippy::all, missing_docs)] pub use crate::cancel_token::CancelToken; pub use crate::client::Client; pub use crate::config::Config; pub use crate::connection::Connection; pub use crate::copy_in::CopyInSink; pub use crate::copy_out::CopyOutStream; use crate::error::DbError; pub use crate::error::Error; pub use crate::generic_client::GenericClient; pub use crate::portal::Portal; pub use crate::query::RowStream; pub use crate::row::{Row, SimpleQueryRow}; pub use crate::simple_query::{SimpleColumn, SimpleQueryStream}; #[cfg(feature = "runtime")] pub use crate::socket::Socket; pub use crate::statement::{Column, Statement}; #[cfg(feature = "runtime")] use crate::tls::MakeTlsConnect; pub use crate::tls::NoTls; pub use crate::to_statement::ToStatement; pub use crate::transaction::Transaction; pub use crate::transaction_builder::{IsolationLevel, TransactionBuilder}; use crate::types::ToSql; use std::sync::Arc; pub mod binary_copy; mod bind; #[cfg(feature = "runtime")] mod cancel_query; mod cancel_query_raw; mod cancel_token; mod client; mod codec; pub mod config; #[cfg(feature = "runtime")] mod connect; mod connect_raw; #[cfg(feature = "runtime")] mod connect_socket; mod connect_tls; mod connection; mod copy_in; mod copy_out; pub mod error; mod generic_client; #[cfg(not(target_arch = "wasm32"))] mod keepalive; mod maybe_tls_stream; mod portal; mod prepare; mod query; pub mod row; mod simple_query; #[cfg(feature = "runtime")] mod socket; mod statement; pub mod tls; mod to_statement; mod transaction; mod transaction_builder; pub mod types; /// A convenience function which parses a connection string and connects to the database. /// /// See the documentation for [`Config`] for details on the connection string format. /// /// Requires the `runtime` Cargo feature (enabled by default). 
///
/// [`Config`]: config/struct.Config.html
#[cfg(feature = "runtime")]
pub async fn connect<T>(
    config: &str,
    tls: T,
) -> Result<(Client, Connection<Socket, T::Stream>), Error>
where
    T: MakeTlsConnect<Socket>,
{
    let config = config.parse::<Config>()?;
    config.connect(tls).await
}

/// An asynchronous notification.
#[derive(Clone, Debug)]
pub struct Notification {
    process_id: i32,
    channel: String,
    payload: String,
}

impl Notification {
    /// The process ID of the notifying backend process.
    pub fn process_id(&self) -> i32 {
        self.process_id
    }

    /// The name of the channel that the notification was raised on.
    pub fn channel(&self) -> &str {
        &self.channel
    }

    /// The "payload" string passed from the notifying process.
    pub fn payload(&self) -> &str {
        &self.payload
    }
}

/// An asynchronous message from the server.
#[allow(clippy::large_enum_variant)]
#[derive(Debug, Clone)]
#[non_exhaustive]
pub enum AsyncMessage {
    /// A notice.
    ///
    /// Notices use the same format as errors, but aren't "errors" per se.
    Notice(DbError),
    /// A notification.
    ///
    /// Connections can subscribe to notifications with the `LISTEN` command.
    Notification(Notification),
}

/// Message returned by the `SimpleQuery` stream.
#[derive(Debug)]
#[non_exhaustive]
pub enum SimpleQueryMessage {
    /// A row of data.
    Row(SimpleQueryRow),
    /// A statement in the query has completed.
    ///
    /// The number of rows modified or selected is returned.
    CommandComplete(u64),
    /// Column metadata for the rows that follow.
    RowDescription(Arc<[SimpleColumn]>),
}

fn slice_iter<'a>(
    s: &'a [&'a (dyn ToSql + Sync)],
) -> impl ExactSizeIterator<Item = &'a dyn ToSql> + 'a {
    s.iter().map(|s| *s as _)
}
tokio-postgres-0.7.12/src/maybe_tls_stream.rs

use crate::tls::{ChannelBinding, TlsStream};
use std::io;
use std::pin::Pin;
use std::task::{Context, Poll};
use tokio::io::{AsyncRead, AsyncWrite, ReadBuf};

pub enum MaybeTlsStream<S, T> {
    Raw(S),
    Tls(T),
}

impl<S, T> AsyncRead for MaybeTlsStream<S, T>
where
    S: AsyncRead + Unpin,
    T: AsyncRead + Unpin,
{
    fn poll_read(
        mut self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: &mut ReadBuf<'_>,
    ) -> Poll<io::Result<()>> {
        match &mut *self {
            MaybeTlsStream::Raw(s) => Pin::new(s).poll_read(cx, buf),
            MaybeTlsStream::Tls(s) => Pin::new(s).poll_read(cx, buf),
        }
    }
}

impl<S, T> AsyncWrite for MaybeTlsStream<S, T>
where
    S: AsyncWrite + Unpin,
    T: AsyncWrite + Unpin,
{
    fn poll_write(
        mut self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: &[u8],
    ) -> Poll<io::Result<usize>> {
        match &mut *self {
            MaybeTlsStream::Raw(s) => Pin::new(s).poll_write(cx, buf),
            MaybeTlsStream::Tls(s) => Pin::new(s).poll_write(cx, buf),
        }
    }

    fn poll_flush(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<io::Result<()>> {
        match &mut *self {
            MaybeTlsStream::Raw(s) => Pin::new(s).poll_flush(cx),
            MaybeTlsStream::Tls(s) => Pin::new(s).poll_flush(cx),
        }
    }

    fn poll_shutdown(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<io::Result<()>> {
        match &mut *self {
            MaybeTlsStream::Raw(s) => Pin::new(s).poll_shutdown(cx),
            MaybeTlsStream::Tls(s) => Pin::new(s).poll_shutdown(cx),
        }
    }
}

impl<S, T> TlsStream for MaybeTlsStream<S, T>
where
    S: AsyncRead + AsyncWrite + Unpin,
    T: TlsStream + Unpin,
{
    fn channel_binding(&self) -> ChannelBinding {
        match self {
            MaybeTlsStream::Raw(_) => ChannelBinding::none(),
            MaybeTlsStream::Tls(s) => s.channel_binding(),
        }
    }
}
tokio-postgres-0.7.12/src/portal.rs

use 
crate::client::InnerClient; use crate::codec::FrontendMessage; use crate::connection::RequestMessages; use crate::Statement; use postgres_protocol::message::frontend; use std::sync::{Arc, Weak}; struct Inner { client: Weak, name: String, statement: Statement, } impl Drop for Inner { fn drop(&mut self) { if let Some(client) = self.client.upgrade() { let buf = client.with_buf(|buf| { frontend::close(b'P', &self.name, buf).unwrap(); frontend::sync(buf); buf.split().freeze() }); let _ = client.send(RequestMessages::Single(FrontendMessage::Raw(buf))); } } } /// A portal. /// /// Portals can only be used with the connection that created them, and only exist for the duration of the transaction /// in which they were created. #[derive(Clone)] pub struct Portal(Arc); impl Portal { pub(crate) fn new(client: &Arc, name: String, statement: Statement) -> Portal { Portal(Arc::new(Inner { client: Arc::downgrade(client), name, statement, })) } pub(crate) fn name(&self) -> &str { &self.0.name } pub(crate) fn statement(&self) -> &Statement { &self.0.statement } } tokio-postgres-0.7.12/src/prepare.rs000064400000000000000000000200211046102023000154720ustar 00000000000000use crate::client::InnerClient; use crate::codec::FrontendMessage; use crate::connection::RequestMessages; use crate::error::SqlState; use crate::types::{Field, Kind, Oid, Type}; use crate::{query, slice_iter}; use crate::{Column, Error, Statement}; use bytes::Bytes; use fallible_iterator::FallibleIterator; use futures_util::{pin_mut, TryStreamExt}; use log::debug; use postgres_protocol::message::backend::Message; use postgres_protocol::message::frontend; use std::future::Future; use std::pin::Pin; use std::sync::atomic::{AtomicUsize, Ordering}; use std::sync::Arc; const TYPEINFO_QUERY: &str = "\ SELECT t.typname, t.typtype, t.typelem, r.rngsubtype, t.typbasetype, n.nspname, t.typrelid FROM pg_catalog.pg_type t LEFT OUTER JOIN pg_catalog.pg_range r ON r.rngtypid = t.oid INNER JOIN pg_catalog.pg_namespace n ON 
t.typnamespace = n.oid WHERE t.oid = $1 "; // Range types weren't added until Postgres 9.2, so pg_range may not exist const TYPEINFO_FALLBACK_QUERY: &str = "\ SELECT t.typname, t.typtype, t.typelem, NULL::OID, t.typbasetype, n.nspname, t.typrelid FROM pg_catalog.pg_type t INNER JOIN pg_catalog.pg_namespace n ON t.typnamespace = n.oid WHERE t.oid = $1 "; const TYPEINFO_ENUM_QUERY: &str = "\ SELECT enumlabel FROM pg_catalog.pg_enum WHERE enumtypid = $1 ORDER BY enumsortorder "; // Postgres 9.0 didn't have enumsortorder const TYPEINFO_ENUM_FALLBACK_QUERY: &str = "\ SELECT enumlabel FROM pg_catalog.pg_enum WHERE enumtypid = $1 ORDER BY oid "; const TYPEINFO_COMPOSITE_QUERY: &str = "\ SELECT attname, atttypid FROM pg_catalog.pg_attribute WHERE attrelid = $1 AND NOT attisdropped AND attnum > 0 ORDER BY attnum "; static NEXT_ID: AtomicUsize = AtomicUsize::new(0); pub async fn prepare( client: &Arc, query: &str, types: &[Type], ) -> Result { let name = format!("s{}", NEXT_ID.fetch_add(1, Ordering::SeqCst)); let buf = encode(client, &name, query, types)?; let mut responses = client.send(RequestMessages::Single(FrontendMessage::Raw(buf)))?; match responses.next().await? { Message::ParseComplete => {} _ => return Err(Error::unexpected_message()), } let parameter_description = match responses.next().await? { Message::ParameterDescription(body) => body, _ => return Err(Error::unexpected_message()), }; let row_description = match responses.next().await? { Message::RowDescription(body) => Some(body), Message::NoData => None, _ => return Err(Error::unexpected_message()), }; let mut parameters = vec![]; let mut it = parameter_description.parameters(); while let Some(oid) = it.next().map_err(Error::parse)? { let type_ = get_type(client, oid).await?; parameters.push(type_); } let mut columns = vec![]; if let Some(row_description) = row_description { let mut it = row_description.fields(); while let Some(field) = it.next().map_err(Error::parse)? 
{ let type_ = get_type(client, field.type_oid()).await?; let column = Column { name: field.name().to_string(), table_oid: Some(field.table_oid()).filter(|n| *n != 0), column_id: Some(field.column_id()).filter(|n| *n != 0), r#type: type_, }; columns.push(column); } } Ok(Statement::new(client, name, parameters, columns)) } fn prepare_rec<'a>( client: &'a Arc, query: &'a str, types: &'a [Type], ) -> Pin> + 'a + Send>> { Box::pin(prepare(client, query, types)) } fn encode(client: &InnerClient, name: &str, query: &str, types: &[Type]) -> Result { if types.is_empty() { debug!("preparing query {}: {}", name, query); } else { debug!("preparing query {} with types {:?}: {}", name, types, query); } client.with_buf(|buf| { frontend::parse(name, query, types.iter().map(Type::oid), buf).map_err(Error::encode)?; frontend::describe(b'S', name, buf).map_err(Error::encode)?; frontend::sync(buf); Ok(buf.split().freeze()) }) } pub(crate) async fn get_type(client: &Arc, oid: Oid) -> Result { if let Some(type_) = Type::from_oid(oid) { return Ok(type_); } if let Some(type_) = client.type_(oid) { return Ok(type_); } let stmt = typeinfo_statement(client).await?; let rows = query::query(client, stmt, slice_iter(&[&oid])).await?; pin_mut!(rows); let row = match rows.try_next().await? 
{ Some(row) => row, None => return Err(Error::unexpected_message()), }; let name: String = row.try_get(0)?; let type_: i8 = row.try_get(1)?; let elem_oid: Oid = row.try_get(2)?; let rngsubtype: Option = row.try_get(3)?; let basetype: Oid = row.try_get(4)?; let schema: String = row.try_get(5)?; let relid: Oid = row.try_get(6)?; let kind = if type_ == b'e' as i8 { let variants = get_enum_variants(client, oid).await?; Kind::Enum(variants) } else if type_ == b'p' as i8 { Kind::Pseudo } else if basetype != 0 { let type_ = get_type_rec(client, basetype).await?; Kind::Domain(type_) } else if elem_oid != 0 { let type_ = get_type_rec(client, elem_oid).await?; Kind::Array(type_) } else if relid != 0 { let fields = get_composite_fields(client, relid).await?; Kind::Composite(fields) } else if let Some(rngsubtype) = rngsubtype { let type_ = get_type_rec(client, rngsubtype).await?; Kind::Range(type_) } else { Kind::Simple }; let type_ = Type::new(name, oid, kind, schema); client.set_type(oid, &type_); Ok(type_) } fn get_type_rec<'a>( client: &'a Arc, oid: Oid, ) -> Pin> + Send + 'a>> { Box::pin(get_type(client, oid)) } async fn typeinfo_statement(client: &Arc) -> Result { if let Some(stmt) = client.typeinfo() { return Ok(stmt); } let stmt = match prepare_rec(client, TYPEINFO_QUERY, &[]).await { Ok(stmt) => stmt, Err(ref e) if e.code() == Some(&SqlState::UNDEFINED_TABLE) => { prepare_rec(client, TYPEINFO_FALLBACK_QUERY, &[]).await? } Err(e) => return Err(e), }; client.set_typeinfo(&stmt); Ok(stmt) } async fn get_enum_variants(client: &Arc, oid: Oid) -> Result, Error> { let stmt = typeinfo_enum_statement(client).await?; query::query(client, stmt, slice_iter(&[&oid])) .await? 
.and_then(|row| async move { row.try_get(0) }) .try_collect() .await } async fn typeinfo_enum_statement(client: &Arc) -> Result { if let Some(stmt) = client.typeinfo_enum() { return Ok(stmt); } let stmt = match prepare_rec(client, TYPEINFO_ENUM_QUERY, &[]).await { Ok(stmt) => stmt, Err(ref e) if e.code() == Some(&SqlState::UNDEFINED_COLUMN) => { prepare_rec(client, TYPEINFO_ENUM_FALLBACK_QUERY, &[]).await? } Err(e) => return Err(e), }; client.set_typeinfo_enum(&stmt); Ok(stmt) } async fn get_composite_fields(client: &Arc, oid: Oid) -> Result, Error> { let stmt = typeinfo_composite_statement(client).await?; let rows = query::query(client, stmt, slice_iter(&[&oid])) .await? .try_collect::>() .await?; let mut fields = vec![]; for row in rows { let name = row.try_get(0)?; let oid = row.try_get(1)?; let type_ = get_type_rec(client, oid).await?; fields.push(Field::new(name, type_)); } Ok(fields) } async fn typeinfo_composite_statement(client: &Arc) -> Result { if let Some(stmt) = client.typeinfo_composite() { return Ok(stmt); } let stmt = prepare_rec(client, TYPEINFO_COMPOSITE_QUERY, &[]).await?; client.set_typeinfo_composite(&stmt); Ok(stmt) } tokio-postgres-0.7.12/src/query.rs000064400000000000000000000230141046102023000152060ustar 00000000000000use crate::client::{InnerClient, Responses}; use crate::codec::FrontendMessage; use crate::connection::RequestMessages; use crate::prepare::get_type; use crate::types::{BorrowToSql, IsNull}; use crate::{Column, Error, Portal, Row, Statement}; use bytes::{Bytes, BytesMut}; use fallible_iterator::FallibleIterator; use futures_util::{ready, Stream}; use log::{debug, log_enabled, Level}; use pin_project_lite::pin_project; use postgres_protocol::message::backend::{CommandCompleteBody, Message}; use postgres_protocol::message::frontend; use postgres_types::Type; use std::fmt; use std::marker::PhantomPinned; use std::pin::Pin; use std::sync::Arc; use std::task::{Context, Poll}; struct BorrowToSqlParamsDebug<'a, T>(&'a [T]); impl<'a, 
T> fmt::Debug for BorrowToSqlParamsDebug<'a, T> where T: BorrowToSql, { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { f.debug_list() .entries(self.0.iter().map(|x| x.borrow_to_sql())) .finish() } } pub async fn query( client: &InnerClient, statement: Statement, params: I, ) -> Result where P: BorrowToSql, I: IntoIterator, I::IntoIter: ExactSizeIterator, { let buf = if log_enabled!(Level::Debug) { let params = params.into_iter().collect::>(); debug!( "executing statement {} with parameters: {:?}", statement.name(), BorrowToSqlParamsDebug(params.as_slice()), ); encode(client, &statement, params)? } else { encode(client, &statement, params)? }; let responses = start(client, buf).await?; Ok(RowStream { statement, responses, rows_affected: None, _p: PhantomPinned, }) } pub async fn query_typed<'a, P, I>( client: &Arc, query: &str, params: I, ) -> Result where P: BorrowToSql, I: IntoIterator, { let buf = { let params = params.into_iter().collect::>(); let param_oids = params.iter().map(|(_, t)| t.oid()).collect::>(); client.with_buf(|buf| { frontend::parse("", query, param_oids.into_iter(), buf).map_err(Error::parse)?; encode_bind_raw("", params, "", buf)?; frontend::describe(b'S', "", buf).map_err(Error::encode)?; frontend::execute("", 0, buf).map_err(Error::encode)?; frontend::sync(buf); Ok(buf.split().freeze()) })? }; let mut responses = client.send(RequestMessages::Single(FrontendMessage::Raw(buf)))?; loop { match responses.next().await? { Message::ParseComplete | Message::BindComplete | Message::ParameterDescription(_) => {} Message::NoData => { return Ok(RowStream { statement: Statement::unnamed(vec![], vec![]), responses, rows_affected: None, _p: PhantomPinned, }); } Message::RowDescription(row_description) => { let mut columns: Vec = vec![]; let mut it = row_description.fields(); while let Some(field) = it.next().map_err(Error::parse)? 
{ let type_ = get_type(client, field.type_oid()).await?; let column = Column { name: field.name().to_string(), table_oid: Some(field.table_oid()).filter(|n| *n != 0), column_id: Some(field.column_id()).filter(|n| *n != 0), r#type: type_, }; columns.push(column); } return Ok(RowStream { statement: Statement::unnamed(vec![], columns), responses, rows_affected: None, _p: PhantomPinned, }); } _ => return Err(Error::unexpected_message()), } } } pub async fn query_portal( client: &InnerClient, portal: &Portal, max_rows: i32, ) -> Result { let buf = client.with_buf(|buf| { frontend::execute(portal.name(), max_rows, buf).map_err(Error::encode)?; frontend::sync(buf); Ok(buf.split().freeze()) })?; let responses = client.send(RequestMessages::Single(FrontendMessage::Raw(buf)))?; Ok(RowStream { statement: portal.statement().clone(), responses, rows_affected: None, _p: PhantomPinned, }) } /// Extract the number of rows affected from [`CommandCompleteBody`]. pub fn extract_row_affected(body: &CommandCompleteBody) -> Result { let rows = body .tag() .map_err(Error::parse)? .rsplit(' ') .next() .unwrap() .parse() .unwrap_or(0); Ok(rows) } pub async fn execute( client: &InnerClient, statement: Statement, params: I, ) -> Result where P: BorrowToSql, I: IntoIterator, I::IntoIter: ExactSizeIterator, { let buf = if log_enabled!(Level::Debug) { let params = params.into_iter().collect::>(); debug!( "executing statement {} with parameters: {:?}", statement.name(), BorrowToSqlParamsDebug(params.as_slice()), ); encode(client, &statement, params)? } else { encode(client, &statement, params)? }; let mut responses = start(client, buf).await?; let mut rows = 0; loop { match responses.next().await? 
{ Message::DataRow(_) => {} Message::CommandComplete(body) => { rows = extract_row_affected(&body)?; } Message::EmptyQueryResponse => rows = 0, Message::ReadyForQuery(_) => return Ok(rows), _ => return Err(Error::unexpected_message()), } } } async fn start(client: &InnerClient, buf: Bytes) -> Result { let mut responses = client.send(RequestMessages::Single(FrontendMessage::Raw(buf)))?; match responses.next().await? { Message::BindComplete => {} _ => return Err(Error::unexpected_message()), } Ok(responses) } pub fn encode(client: &InnerClient, statement: &Statement, params: I) -> Result where P: BorrowToSql, I: IntoIterator, I::IntoIter: ExactSizeIterator, { client.with_buf(|buf| { encode_bind(statement, params, "", buf)?; frontend::execute("", 0, buf).map_err(Error::encode)?; frontend::sync(buf); Ok(buf.split().freeze()) }) } pub fn encode_bind( statement: &Statement, params: I, portal: &str, buf: &mut BytesMut, ) -> Result<(), Error> where P: BorrowToSql, I: IntoIterator, I::IntoIter: ExactSizeIterator, { let params = params.into_iter(); if params.len() != statement.params().len() { return Err(Error::parameters(params.len(), statement.params().len())); } encode_bind_raw( statement.name(), params.zip(statement.params().iter().cloned()), portal, buf, ) } fn encode_bind_raw( statement_name: &str, params: I, portal: &str, buf: &mut BytesMut, ) -> Result<(), Error> where P: BorrowToSql, I: IntoIterator, I::IntoIter: ExactSizeIterator, { let (param_formats, params): (Vec<_>, Vec<_>) = params .into_iter() .map(|(p, ty)| (p.borrow_to_sql().encode_format(&ty) as i16, (p, ty))) .unzip(); let mut error_idx = 0; let r = frontend::bind( portal, statement_name, param_formats, params.into_iter().enumerate(), |(idx, (param, ty)), buf| match param.borrow_to_sql().to_sql_checked(&ty, buf) { Ok(IsNull::No) => Ok(postgres_protocol::IsNull::No), Ok(IsNull::Yes) => Ok(postgres_protocol::IsNull::Yes), Err(e) => { error_idx = idx; Err(e) } }, Some(1), buf, ); match r { Ok(()) => Ok(()), 
Err(frontend::BindError::Conversion(e)) => Err(Error::to_sql(e, error_idx)), Err(frontend::BindError::Serialization(e)) => Err(Error::encode(e)), } } pin_project! { /// A stream of table rows. pub struct RowStream { statement: Statement, responses: Responses, rows_affected: Option, #[pin] _p: PhantomPinned, } } impl Stream for RowStream { type Item = Result; fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { let this = self.project(); loop { match ready!(this.responses.poll_next(cx)?) { Message::DataRow(body) => { return Poll::Ready(Some(Ok(Row::new(this.statement.clone(), body)?))) } Message::CommandComplete(body) => { *this.rows_affected = Some(extract_row_affected(&body)?); } Message::EmptyQueryResponse | Message::PortalSuspended => {} Message::ReadyForQuery(_) => return Poll::Ready(None), _ => return Poll::Ready(Some(Err(Error::unexpected_message()))), } } } } impl RowStream { /// Returns the number of rows affected by the query. /// /// This function will return `None` until the stream has been exhausted. pub fn rows_affected(&self) -> Option { self.rows_affected } } tokio-postgres-0.7.12/src/row.rs000064400000000000000000000155251046102023000146600ustar 00000000000000//! Rows. use crate::row::sealed::{AsName, Sealed}; use crate::simple_query::SimpleColumn; use crate::statement::Column; use crate::types::{FromSql, Type, WrongType}; use crate::{Error, Statement}; use fallible_iterator::FallibleIterator; use postgres_protocol::message::backend::DataRowBody; use std::fmt; use std::ops::Range; use std::str; use std::sync::Arc; mod sealed { pub trait Sealed {} pub trait AsName { fn as_name(&self) -> &str; } } impl AsName for Column { fn as_name(&self) -> &str { self.name() } } impl AsName for String { fn as_name(&self) -> &str { self } } /// A trait implemented by types that can index into columns of a row. /// /// This cannot be implemented outside of this crate. 
pub trait RowIndex: Sealed {
    #[doc(hidden)]
    fn __idx<T>(&self, columns: &[T]) -> Option<usize>
    where
        T: AsName;
}

impl Sealed for usize {}

impl RowIndex for usize {
    #[inline]
    fn __idx<T>(&self, columns: &[T]) -> Option<usize>
    where
        T: AsName,
    {
        if *self >= columns.len() {
            None
        } else {
            Some(*self)
        }
    }
}

impl Sealed for str {}

impl RowIndex for str {
    #[inline]
    fn __idx<T>(&self, columns: &[T]) -> Option<usize>
    where
        T: AsName,
    {
        if let Some(idx) = columns.iter().position(|d| d.as_name() == self) {
            return Some(idx);
        };

        // FIXME ASCII-only case insensitivity isn't really the right thing to
        // do. Postgres itself uses a dubious wrapper around tolower and JDBC
        // uses the US locale.
        columns
            .iter()
            .position(|d| d.as_name().eq_ignore_ascii_case(self))
    }
}

impl<'a, T> Sealed for &'a T where T: ?Sized + Sealed {}

impl<'a, T> RowIndex for &'a T
where
    T: ?Sized + RowIndex,
{
    #[inline]
    fn __idx<U>(&self, columns: &[U]) -> Option<usize>
    where
        U: AsName,
    {
        T::__idx(*self, columns)
    }
}

/// A row of data returned from the database by a query.
#[derive(Clone)]
pub struct Row {
    statement: Statement,
    body: DataRowBody,
    ranges: Vec<Option<Range<usize>>>,
}

impl fmt::Debug for Row {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("Row")
            .field("columns", &self.columns())
            .finish()
    }
}

impl Row {
    pub(crate) fn new(statement: Statement, body: DataRowBody) -> Result<Row, Error> {
        let ranges = body.ranges().collect().map_err(Error::parse)?;
        Ok(Row {
            statement,
            body,
            ranges,
        })
    }

    /// Returns information about the columns of data in the row.
    pub fn columns(&self) -> &[Column] {
        self.statement.columns()
    }

    /// Determines if the row contains no values.
    pub fn is_empty(&self) -> bool {
        self.len() == 0
    }

    /// Returns the number of values in the row.
    pub fn len(&self) -> usize {
        self.columns().len()
    }

    /// Deserializes a value from the row.
    ///
    /// The value can be specified either by its numeric index in the row, or by its column name.
/// /// # Panics /// /// Panics if the index is out of bounds or if the value cannot be converted to the specified type. #[track_caller] pub fn get<'a, I, T>(&'a self, idx: I) -> T where I: RowIndex + fmt::Display, T: FromSql<'a>, { match self.get_inner(&idx) { Ok(ok) => ok, Err(err) => panic!("error retrieving column {}: {}", idx, err), } } /// Like `Row::get`, but returns a `Result` rather than panicking. pub fn try_get<'a, I, T>(&'a self, idx: I) -> Result where I: RowIndex + fmt::Display, T: FromSql<'a>, { self.get_inner(&idx) } fn get_inner<'a, I, T>(&'a self, idx: &I) -> Result where I: RowIndex + fmt::Display, T: FromSql<'a>, { let idx = match idx.__idx(self.columns()) { Some(idx) => idx, None => return Err(Error::column(idx.to_string())), }; let ty = self.columns()[idx].type_(); if !T::accepts(ty) { return Err(Error::from_sql( Box::new(WrongType::new::(ty.clone())), idx, )); } FromSql::from_sql_nullable(ty, self.col_buffer(idx)).map_err(|e| Error::from_sql(e, idx)) } /// Get the raw bytes for the column at the given index. fn col_buffer(&self, idx: usize) -> Option<&[u8]> { let range = self.ranges[idx].to_owned()?; Some(&self.body.buffer()[range]) } } impl AsName for SimpleColumn { fn as_name(&self) -> &str { self.name() } } /// A row of data returned from the database by a simple query. #[derive(Debug)] pub struct SimpleQueryRow { columns: Arc<[SimpleColumn]>, body: DataRowBody, ranges: Vec>>, } impl SimpleQueryRow { #[allow(clippy::new_ret_no_self)] pub(crate) fn new( columns: Arc<[SimpleColumn]>, body: DataRowBody, ) -> Result { let ranges = body.ranges().collect().map_err(Error::parse)?; Ok(SimpleQueryRow { columns, body, ranges, }) } /// Returns information about the columns of data in the row. pub fn columns(&self) -> &[SimpleColumn] { &self.columns } /// Determines if the row contains no values. pub fn is_empty(&self) -> bool { self.len() == 0 } /// Returns the number of values in the row. 
pub fn len(&self) -> usize { self.columns.len() } /// Returns a value from the row. /// /// The value can be specified either by its numeric index in the row, or by its column name. /// /// # Panics /// /// Panics if the index is out of bounds or if the value cannot be converted to the specified type. #[track_caller] pub fn get(&self, idx: I) -> Option<&str> where I: RowIndex + fmt::Display, { match self.get_inner(&idx) { Ok(ok) => ok, Err(err) => panic!("error retrieving column {}: {}", idx, err), } } /// Like `SimpleQueryRow::get`, but returns a `Result` rather than panicking. pub fn try_get(&self, idx: I) -> Result, Error> where I: RowIndex + fmt::Display, { self.get_inner(&idx) } fn get_inner(&self, idx: &I) -> Result, Error> where I: RowIndex + fmt::Display, { let idx = match idx.__idx(&self.columns) { Some(idx) => idx, None => return Err(Error::column(idx.to_string())), }; let buf = self.ranges[idx].clone().map(|r| &self.body.buffer()[r]); FromSql::from_sql_nullable(&Type::TEXT, buf).map_err(|e| Error::from_sql(e, idx)) } } tokio-postgres-0.7.12/src/simple_query.rs000064400000000000000000000074611046102023000165670ustar 00000000000000use crate::client::{InnerClient, Responses}; use crate::codec::FrontendMessage; use crate::connection::RequestMessages; use crate::query::extract_row_affected; use crate::{Error, SimpleQueryMessage, SimpleQueryRow}; use bytes::Bytes; use fallible_iterator::FallibleIterator; use futures_util::{ready, Stream}; use log::debug; use pin_project_lite::pin_project; use postgres_protocol::message::backend::Message; use postgres_protocol::message::frontend; use std::marker::PhantomPinned; use std::pin::Pin; use std::sync::Arc; use std::task::{Context, Poll}; /// Information about a column of a single query row. #[derive(Debug)] pub struct SimpleColumn { name: String, } impl SimpleColumn { pub(crate) fn new(name: String) -> SimpleColumn { SimpleColumn { name } } /// Returns the name of the column. 
pub fn name(&self) -> &str { &self.name } } pub async fn simple_query(client: &InnerClient, query: &str) -> Result { debug!("executing simple query: {}", query); let buf = encode(client, query)?; let responses = client.send(RequestMessages::Single(FrontendMessage::Raw(buf)))?; Ok(SimpleQueryStream { responses, columns: None, _p: PhantomPinned, }) } pub async fn batch_execute(client: &InnerClient, query: &str) -> Result<(), Error> { debug!("executing statement batch: {}", query); let buf = encode(client, query)?; let mut responses = client.send(RequestMessages::Single(FrontendMessage::Raw(buf)))?; loop { match responses.next().await? { Message::ReadyForQuery(_) => return Ok(()), Message::CommandComplete(_) | Message::EmptyQueryResponse | Message::RowDescription(_) | Message::DataRow(_) => {} _ => return Err(Error::unexpected_message()), } } } fn encode(client: &InnerClient, query: &str) -> Result { client.with_buf(|buf| { frontend::query(query, buf).map_err(Error::encode)?; Ok(buf.split().freeze()) }) } pin_project! { /// A stream of simple query results. pub struct SimpleQueryStream { responses: Responses, columns: Option>, #[pin] _p: PhantomPinned, } } impl Stream for SimpleQueryStream { type Item = Result; fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { let this = self.project(); match ready!(this.responses.poll_next(cx)?) { Message::CommandComplete(body) => { let rows = extract_row_affected(&body)?; Poll::Ready(Some(Ok(SimpleQueryMessage::CommandComplete(rows)))) } Message::EmptyQueryResponse => { Poll::Ready(Some(Ok(SimpleQueryMessage::CommandComplete(0)))) } Message::RowDescription(body) => { let columns: Arc<[SimpleColumn]> = body .fields() .map(|f| Ok(SimpleColumn::new(f.name().to_string()))) .collect::>() .map_err(Error::parse)? 
.into(); *this.columns = Some(columns.clone()); Poll::Ready(Some(Ok(SimpleQueryMessage::RowDescription(columns)))) } Message::DataRow(body) => { let row = match &this.columns { Some(columns) => SimpleQueryRow::new(columns.clone(), body)?, None => return Poll::Ready(Some(Err(Error::unexpected_message()))), }; Poll::Ready(Some(Ok(SimpleQueryMessage::Row(row)))) } Message::ReadyForQuery(_) => Poll::Ready(None), _ => Poll::Ready(Some(Err(Error::unexpected_message()))), } } } tokio-postgres-0.7.12/src/socket.rs000064400000000000000000000036511046102023000153360ustar 00000000000000use std::io; use std::pin::Pin; use std::task::{Context, Poll}; use tokio::io::{AsyncRead, AsyncWrite, ReadBuf}; use tokio::net::TcpStream; #[cfg(unix)] use tokio::net::UnixStream; #[derive(Debug)] enum Inner { Tcp(TcpStream), #[cfg(unix)] Unix(UnixStream), } /// The standard stream type used by the crate. /// /// Requires the `runtime` Cargo feature (enabled by default). #[derive(Debug)] pub struct Socket(Inner); impl Socket { pub(crate) fn new_tcp(stream: TcpStream) -> Socket { Socket(Inner::Tcp(stream)) } #[cfg(unix)] pub(crate) fn new_unix(stream: UnixStream) -> Socket { Socket(Inner::Unix(stream)) } } impl AsyncRead for Socket { fn poll_read( mut self: Pin<&mut Self>, cx: &mut Context<'_>, buf: &mut ReadBuf<'_>, ) -> Poll> { match &mut self.0 { Inner::Tcp(s) => Pin::new(s).poll_read(cx, buf), #[cfg(unix)] Inner::Unix(s) => Pin::new(s).poll_read(cx, buf), } } } impl AsyncWrite for Socket { fn poll_write( mut self: Pin<&mut Self>, cx: &mut Context<'_>, buf: &[u8], ) -> Poll> { match &mut self.0 { Inner::Tcp(s) => Pin::new(s).poll_write(cx, buf), #[cfg(unix)] Inner::Unix(s) => Pin::new(s).poll_write(cx, buf), } } fn poll_flush(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { match &mut self.0 { Inner::Tcp(s) => Pin::new(s).poll_flush(cx), #[cfg(unix)] Inner::Unix(s) => Pin::new(s).poll_flush(cx), } } fn poll_shutdown(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { match 
&mut self.0 { Inner::Tcp(s) => Pin::new(s).poll_shutdown(cx), #[cfg(unix)] Inner::Unix(s) => Pin::new(s).poll_shutdown(cx), } } } tokio-postgres-0.7.12/src/statement.rs000064400000000000000000000057331046102023000160550ustar 00000000000000use crate::client::InnerClient; use crate::codec::FrontendMessage; use crate::connection::RequestMessages; use crate::types::Type; use postgres_protocol::message::frontend; use std::sync::{Arc, Weak}; struct StatementInner { client: Weak, name: String, params: Vec, columns: Vec, } impl Drop for StatementInner { fn drop(&mut self) { if self.name.is_empty() { // Unnamed statements don't need to be closed return; } if let Some(client) = self.client.upgrade() { let buf = client.with_buf(|buf| { frontend::close(b'S', &self.name, buf).unwrap(); frontend::sync(buf); buf.split().freeze() }); let _ = client.send(RequestMessages::Single(FrontendMessage::Raw(buf))); } } } /// A prepared statement. /// /// Prepared statements can only be used with the connection that created them. #[derive(Clone)] pub struct Statement(Arc); impl Statement { pub(crate) fn new( inner: &Arc, name: String, params: Vec, columns: Vec, ) -> Statement { Statement(Arc::new(StatementInner { client: Arc::downgrade(inner), name, params, columns, })) } pub(crate) fn unnamed(params: Vec, columns: Vec) -> Statement { Statement(Arc::new(StatementInner { client: Weak::new(), name: String::new(), params, columns, })) } pub(crate) fn name(&self) -> &str { &self.0.name } /// Returns the expected types of the statement's parameters. pub fn params(&self) -> &[Type] { &self.0.params } /// Returns information about the columns returned when the statement is queried. 
pub fn columns(&self) -> &[Column] { &self.0.columns } } impl std::fmt::Debug for Statement { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> Result<(), std::fmt::Error> { f.debug_struct("Statement") .field("name", &self.0.name) .field("params", &self.0.params) .field("columns", &self.0.columns) .finish_non_exhaustive() } } /// Information about a column of a query. #[derive(Debug)] pub struct Column { pub(crate) name: String, pub(crate) table_oid: Option, pub(crate) column_id: Option, pub(crate) r#type: Type, } impl Column { /// Returns the name of the column. pub fn name(&self) -> &str { &self.name } /// Returns the OID of the underlying database table. pub fn table_oid(&self) -> Option { self.table_oid } /// Return the column ID within the underlying database table. pub fn column_id(&self) -> Option { self.column_id } /// Returns the type of the column. pub fn type_(&self) -> &Type { &self.r#type } } tokio-postgres-0.7.12/src/tls.rs000064400000000000000000000107711046102023000146510ustar 00000000000000//! TLS support. use std::error::Error; use std::future::Future; use std::pin::Pin; use std::task::{Context, Poll}; use std::{fmt, io}; use tokio::io::{AsyncRead, AsyncWrite, ReadBuf}; pub(crate) mod private { pub struct ForcePrivateApi; } /// Channel binding information returned from a TLS handshake. pub struct ChannelBinding { pub(crate) tls_server_end_point: Option>, } impl ChannelBinding { /// Creates a `ChannelBinding` containing no information. pub fn none() -> ChannelBinding { ChannelBinding { tls_server_end_point: None, } } /// Creates a `ChannelBinding` containing `tls-server-end-point` channel binding information. pub fn tls_server_end_point(tls_server_end_point: Vec) -> ChannelBinding { ChannelBinding { tls_server_end_point: Some(tls_server_end_point), } } } /// A constructor of `TlsConnect`ors. /// /// Requires the `runtime` Cargo feature (enabled by default). 
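`StatementInner`'s `Drop` impl above only issues a `Close` message when it can still upgrade its `Weak` reference to the client; a statement that outlives its connection is dropped silently. A dependency-free sketch of that weak-handle cleanup pattern (the `FakeClient`/`Resource` names are illustrative, not part of the crate):

```rust
use std::sync::{Arc, Mutex, Weak};

type CloseLog = Arc<Mutex<Vec<String>>>;

// Stand-in for `InnerClient`: records the close requests it receives.
struct FakeClient {
    log: CloseLog,
}

// Stand-in for `StatementInner`: holds only a `Weak` handle to its client.
struct Resource {
    client: Weak<FakeClient>,
    name: String,
}

impl Drop for Resource {
    fn drop(&mut self) {
        // Mirrors `StatementInner::drop`: clean up only if the client is still
        // alive; if the connection is already gone there is nothing to close.
        if let Some(client) = self.client.upgrade() {
            client.log.lock().unwrap().push(self.name.clone());
        }
    }
}

fn demo(drop_client_first: bool) -> Vec<String> {
    let log: CloseLog = Arc::new(Mutex::new(Vec::new()));
    let client = Arc::new(FakeClient { log: log.clone() });
    let res = Resource {
        client: Arc::downgrade(&client),
        name: "s0".to_string(),
    };
    if drop_client_first {
        drop(client); // the weak upgrade will now fail; dropping `res` is a no-op
    }
    drop(res);
    let out = log.lock().unwrap().clone();
    out
}
```

Because the handle is weak, statement cleanup can never keep a dead connection alive, and dropping statements after the connection has closed is harmless.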
#[cfg(feature = "runtime")] pub trait MakeTlsConnect { /// The stream type created by the `TlsConnect` implementation. type Stream: TlsStream + Unpin; /// The `TlsConnect` implementation created by this type. type TlsConnect: TlsConnect; /// The error type returned by the `TlsConnect` implementation. type Error: Into>; /// Creates a new `TlsConnect`or. /// /// The domain name is provided for certificate verification and SNI. fn make_tls_connect(&mut self, domain: &str) -> Result; } /// An asynchronous function wrapping a stream in a TLS session. pub trait TlsConnect { /// The stream returned by the future. type Stream: TlsStream + Unpin; /// The error returned by the future. type Error: Into>; /// The future returned by the connector. type Future: Future>; /// Returns a future performing a TLS handshake over the stream. fn connect(self, stream: S) -> Self::Future; #[doc(hidden)] fn can_connect(&self, _: private::ForcePrivateApi) -> bool { true } } /// A TLS-wrapped connection to a PostgreSQL database. pub trait TlsStream: AsyncRead + AsyncWrite { /// Returns channel binding information for the session. fn channel_binding(&self) -> ChannelBinding; } /// A `MakeTlsConnect` and `TlsConnect` implementation which simply returns an error. /// /// This can be used when `sslmode` is `none` or `prefer`. #[derive(Debug, Copy, Clone)] pub struct NoTls; #[cfg(feature = "runtime")] impl MakeTlsConnect for NoTls { type Stream = NoTlsStream; type TlsConnect = NoTls; type Error = NoTlsError; fn make_tls_connect(&mut self, _: &str) -> Result { Ok(NoTls) } } impl TlsConnect for NoTls { type Stream = NoTlsStream; type Error = NoTlsError; type Future = NoTlsFuture; fn connect(self, _: S) -> NoTlsFuture { NoTlsFuture(()) } fn can_connect(&self, _: private::ForcePrivateApi) -> bool { false } } /// The future returned by `NoTls`. 
pub struct NoTlsFuture(()); impl Future for NoTlsFuture { type Output = Result; fn poll(self: Pin<&mut Self>, _: &mut Context<'_>) -> Poll { Poll::Ready(Err(NoTlsError(()))) } } /// The TLS "stream" type produced by the `NoTls` connector. /// /// Since `NoTls` doesn't support TLS, this type is uninhabited. pub enum NoTlsStream {} impl AsyncRead for NoTlsStream { fn poll_read( self: Pin<&mut Self>, _: &mut Context<'_>, _: &mut ReadBuf<'_>, ) -> Poll> { match *self {} } } impl AsyncWrite for NoTlsStream { fn poll_write(self: Pin<&mut Self>, _: &mut Context<'_>, _: &[u8]) -> Poll> { match *self {} } fn poll_flush(self: Pin<&mut Self>, _: &mut Context<'_>) -> Poll> { match *self {} } fn poll_shutdown(self: Pin<&mut Self>, _: &mut Context<'_>) -> Poll> { match *self {} } } impl TlsStream for NoTlsStream { fn channel_binding(&self) -> ChannelBinding { match *self {} } } /// The error returned by `NoTls`. #[derive(Debug)] pub struct NoTlsError(()); impl fmt::Display for NoTlsError { fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result { fmt.write_str("no TLS implementation configured") } } impl Error for NoTlsError {} tokio-postgres-0.7.12/src/to_statement.rs000064400000000000000000000026531046102023000165550ustar 00000000000000use crate::to_statement::private::{Sealed, ToStatementType}; use crate::Statement; mod private { use crate::{Client, Error, Statement}; pub trait Sealed {} pub enum ToStatementType<'a> { Statement(&'a Statement), Query(&'a str), } impl<'a> ToStatementType<'a> { pub async fn into_statement(self, client: &Client) -> Result { match self { ToStatementType::Statement(s) => Ok(s.clone()), ToStatementType::Query(s) => client.prepare(s).await, } } } } /// A trait abstracting over prepared and unprepared statements. /// /// Many methods are generic over this bound, so that they support both a raw query string as well as a statement which /// was prepared previously. 
/// /// This trait is "sealed" and cannot be implemented by anything outside this crate. pub trait ToStatement: Sealed { #[doc(hidden)] fn __convert(&self) -> ToStatementType<'_>; } impl ToStatement for Statement { fn __convert(&self) -> ToStatementType<'_> { ToStatementType::Statement(self) } } impl Sealed for Statement {} impl ToStatement for str { fn __convert(&self) -> ToStatementType<'_> { ToStatementType::Query(self) } } impl Sealed for str {} impl ToStatement for String { fn __convert(&self) -> ToStatementType<'_> { ToStatementType::Query(self) } } impl Sealed for String {} tokio-postgres-0.7.12/src/transaction.rs000064400000000000000000000240701046102023000163710ustar 00000000000000use crate::codec::FrontendMessage; use crate::connection::RequestMessages; use crate::copy_out::CopyOutStream; use crate::query::RowStream; #[cfg(feature = "runtime")] use crate::tls::MakeTlsConnect; use crate::tls::TlsConnect; use crate::types::{BorrowToSql, ToSql, Type}; #[cfg(feature = "runtime")] use crate::Socket; use crate::{ bind, query, slice_iter, CancelToken, Client, CopyInSink, Error, Portal, Row, SimpleQueryMessage, Statement, ToStatement, }; use bytes::Buf; use futures_util::TryStreamExt; use postgres_protocol::message::frontend; use tokio::io::{AsyncRead, AsyncWrite}; /// A representation of a PostgreSQL database transaction. /// /// Transactions will implicitly roll back when dropped. Use the `commit` method to commit the changes made in the /// transaction. Transactions can be nested, with inner transactions implemented via safepoints. pub struct Transaction<'a> { client: &'a mut Client, savepoint: Option, done: bool, } /// A representation of a PostgreSQL database savepoint. 
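`ToStatement` above uses the sealed-trait pattern: the public trait inherits from a private `Sealed` trait that downstream crates cannot name, so only `Statement`, `str`, and `String` can ever implement it. A self-contained sketch of the pattern (the `AsQuery`/`text_of` names are illustrative):

```rust
mod private {
    // Not reachable from outside the parent module tree, so foreign types
    // cannot implement `Sealed` (and therefore cannot implement `AsQuery`).
    pub trait Sealed {}
}

/// A trait that only the defining crate can implement, mirroring `ToStatement`.
pub trait AsQuery: private::Sealed {
    fn as_query(&self) -> &str;
}

impl private::Sealed for str {}
impl AsQuery for str {
    fn as_query(&self) -> &str {
        self
    }
}

impl private::Sealed for String {}
impl AsQuery for String {
    fn as_query(&self) -> &str {
        self
    }
}

// Generic code can now bound on the sealed trait, exactly like the query
// methods that accept both raw SQL text and prepared statements.
fn text_of<T: AsQuery + ?Sized>(t: &T) -> String {
    t.as_query().to_string()
}
```

Sealing keeps `__convert` a private implementation detail: the crate can change `ToStatementType` freely without breaking downstream implementors, because there cannot be any.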
struct Savepoint { name: String, depth: u32, } impl<'a> Drop for Transaction<'a> { fn drop(&mut self) { if self.done { return; } let query = if let Some(sp) = self.savepoint.as_ref() { format!("ROLLBACK TO {}", sp.name) } else { "ROLLBACK".to_string() }; let buf = self.client.inner().with_buf(|buf| { frontend::query(&query, buf).unwrap(); buf.split().freeze() }); let _ = self .client .inner() .send(RequestMessages::Single(FrontendMessage::Raw(buf))); } } impl<'a> Transaction<'a> { pub(crate) fn new(client: &'a mut Client) -> Transaction<'a> { Transaction { client, savepoint: None, done: false, } } /// Consumes the transaction, committing all changes made within it. pub async fn commit(mut self) -> Result<(), Error> { self.done = true; let query = if let Some(sp) = self.savepoint.as_ref() { format!("RELEASE {}", sp.name) } else { "COMMIT".to_string() }; self.client.batch_execute(&query).await } /// Rolls the transaction back, discarding all changes made within it. /// /// This is equivalent to `Transaction`'s `Drop` implementation, but provides any error encountered to the caller. pub async fn rollback(mut self) -> Result<(), Error> { self.done = true; let query = if let Some(sp) = self.savepoint.as_ref() { format!("ROLLBACK TO {}", sp.name) } else { "ROLLBACK".to_string() }; self.client.batch_execute(&query).await } /// Like `Client::prepare`. pub async fn prepare(&self, query: &str) -> Result { self.client.prepare(query).await } /// Like `Client::prepare_typed`. pub async fn prepare_typed( &self, query: &str, parameter_types: &[Type], ) -> Result { self.client.prepare_typed(query, parameter_types).await } /// Like `Client::query`. pub async fn query( &self, statement: &T, params: &[&(dyn ToSql + Sync)], ) -> Result, Error> where T: ?Sized + ToStatement, { self.client.query(statement, params).await } /// Like `Client::query_one`. 
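`Transaction`'s `Drop`, `commit`, and `rollback` above all choose between a top-level command and a savepoint-scoped one depending on whether `self.savepoint` is set. That choice is a pure function of the optional savepoint name and can be sketched (and checked) in isolation:

```rust
/// Mirrors the query choice in `Transaction::drop` and `Transaction::rollback`:
/// a nested transaction rolls back to its savepoint, an outer one rolls back fully.
fn rollback_query(savepoint: Option<&str>) -> String {
    match savepoint {
        Some(name) => format!("ROLLBACK TO {}", name),
        None => "ROLLBACK".to_string(),
    }
}

/// Mirrors the query choice in `Transaction::commit`: a nested transaction
/// releases its savepoint, an outer one commits.
fn commit_query(savepoint: Option<&str>) -> String {
    match savepoint {
        Some(name) => format!("RELEASE {}", name),
        None => "COMMIT".to_string(),
    }
}
```

This is why nested `Transaction` values are safe to drop mid-scope: the inner rollback only discards work since the savepoint, leaving the outer transaction usable.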
pub async fn query_one( &self, statement: &T, params: &[&(dyn ToSql + Sync)], ) -> Result where T: ?Sized + ToStatement, { self.client.query_one(statement, params).await } /// Like `Client::query_opt`. pub async fn query_opt( &self, statement: &T, params: &[&(dyn ToSql + Sync)], ) -> Result, Error> where T: ?Sized + ToStatement, { self.client.query_opt(statement, params).await } /// Like `Client::query_raw`. pub async fn query_raw(&self, statement: &T, params: I) -> Result where T: ?Sized + ToStatement, P: BorrowToSql, I: IntoIterator, I::IntoIter: ExactSizeIterator, { self.client.query_raw(statement, params).await } /// Like `Client::query_typed`. pub async fn query_typed( &self, statement: &str, params: &[(&(dyn ToSql + Sync), Type)], ) -> Result, Error> { self.client.query_typed(statement, params).await } /// Like `Client::query_typed_raw`. pub async fn query_typed_raw(&self, query: &str, params: I) -> Result where P: BorrowToSql, I: IntoIterator, { self.client.query_typed_raw(query, params).await } /// Like `Client::execute`. pub async fn execute( &self, statement: &T, params: &[&(dyn ToSql + Sync)], ) -> Result where T: ?Sized + ToStatement, { self.client.execute(statement, params).await } /// Like `Client::execute_iter`. pub async fn execute_raw(&self, statement: &T, params: I) -> Result where T: ?Sized + ToStatement, P: BorrowToSql, I: IntoIterator, I::IntoIter: ExactSizeIterator, { self.client.execute_raw(statement, params).await } /// Binds a statement to a set of parameters, creating a `Portal` which can be incrementally queried. /// /// Portals only last for the duration of the transaction in which they are created, and can only be used on the /// connection that created them. /// /// # Panics /// /// Panics if the number of parameters provided does not match the number expected. 
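The `query_one` and `query_opt` delegations above differ only in how they treat the zero-row and many-row cases. A sketch of that contract over a plain `Vec` of rows (the `exactly_one`/`at_most_one` names are illustrative, not crate APIs):

```rust
/// `query_one` semantics: exactly one row, anything else is an error.
fn exactly_one<T>(mut rows: Vec<T>) -> Result<T, String> {
    match rows.len() {
        1 => Ok(rows.pop().unwrap()),
        n => Err(format!("expected 1 row, got {}", n)),
    }
}

/// `query_opt` semantics: zero or one row; more than one is an error.
fn at_most_one<T>(mut rows: Vec<T>) -> Result<Option<T>, String> {
    match rows.len() {
        0 => Ok(None),
        1 => Ok(Some(rows.pop().unwrap())),
        n => Err(format!("expected at most 1 row, got {}", n)),
    }
}
```

Choosing between them is an assertion about the query itself: use the `query_one` shape when a missing row is a bug, and the `query_opt` shape when absence is a legitimate outcome.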
pub async fn bind( &self, statement: &T, params: &[&(dyn ToSql + Sync)], ) -> Result where T: ?Sized + ToStatement, { self.bind_raw(statement, slice_iter(params)).await } /// A maximally flexible version of [`bind`]. /// /// [`bind`]: #method.bind pub async fn bind_raw(&self, statement: &T, params: I) -> Result where T: ?Sized + ToStatement, P: BorrowToSql, I: IntoIterator, I::IntoIter: ExactSizeIterator, { let statement = statement.__convert().into_statement(self.client).await?; bind::bind(self.client.inner(), statement, params).await } /// Continues execution of a portal, returning a stream of the resulting rows. /// /// Unlike `query`, portals can be incrementally evaluated by limiting the number of rows returned in each call to /// `query_portal`. If the requested number is negative or 0, all rows will be returned. pub async fn query_portal(&self, portal: &Portal, max_rows: i32) -> Result, Error> { self.query_portal_raw(portal, max_rows) .await? .try_collect() .await } /// The maximally flexible version of [`query_portal`]. /// /// [`query_portal`]: #method.query_portal pub async fn query_portal_raw( &self, portal: &Portal, max_rows: i32, ) -> Result { query::query_portal(self.client.inner(), portal, max_rows).await } /// Like `Client::copy_in`. pub async fn copy_in(&self, statement: &T) -> Result, Error> where T: ?Sized + ToStatement, U: Buf + 'static + Send, { self.client.copy_in(statement).await } /// Like `Client::copy_out`. pub async fn copy_out(&self, statement: &T) -> Result where T: ?Sized + ToStatement, { self.client.copy_out(statement).await } /// Like `Client::simple_query`. pub async fn simple_query(&self, query: &str) -> Result, Error> { self.client.simple_query(query).await } /// Like `Client::batch_execute`. pub async fn batch_execute(&self, query: &str) -> Result<(), Error> { self.client.batch_execute(query).await } /// Like `Client::cancel_token`. 
pub fn cancel_token(&self) -> CancelToken { self.client.cancel_token() } /// Like `Client::cancel_query`. #[cfg(feature = "runtime")] #[deprecated(since = "0.6.0", note = "use Transaction::cancel_token() instead")] pub async fn cancel_query(&self, tls: T) -> Result<(), Error> where T: MakeTlsConnect, { #[allow(deprecated)] self.client.cancel_query(tls).await } /// Like `Client::cancel_query_raw`. #[deprecated(since = "0.6.0", note = "use Transaction::cancel_token() instead")] pub async fn cancel_query_raw(&self, stream: S, tls: T) -> Result<(), Error> where S: AsyncRead + AsyncWrite + Unpin, T: TlsConnect, { #[allow(deprecated)] self.client.cancel_query_raw(stream, tls).await } /// Like `Client::transaction`, but creates a nested transaction via a savepoint. pub async fn transaction(&mut self) -> Result, Error> { self._savepoint(None).await } /// Like `Client::transaction`, but creates a nested transaction via a savepoint with the specified name. pub async fn savepoint(&mut self, name: I) -> Result, Error> where I: Into, { self._savepoint(Some(name.into())).await } async fn _savepoint(&mut self, name: Option) -> Result, Error> { let depth = self.savepoint.as_ref().map_or(0, |sp| sp.depth) + 1; let name = name.unwrap_or_else(|| format!("sp_{}", depth)); let query = format!("SAVEPOINT {}", name); self.batch_execute(&query).await?; Ok(Transaction { client: self.client, savepoint: Some(Savepoint { name, depth }), done: false, }) } /// Returns a reference to the underlying `Client`. pub fn client(&self) -> &Client { self.client } } tokio-postgres-0.7.12/src/transaction_builder.rs000064400000000000000000000113121046102023000200720ustar 00000000000000use postgres_protocol::message::frontend; use crate::{codec::FrontendMessage, connection::RequestMessages, Client, Error, Transaction}; /// The isolation level of a database transaction. #[derive(Debug, Copy, Clone)] #[non_exhaustive] pub enum IsolationLevel { /// Equivalent to `ReadCommitted`. 
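`_savepoint` above derives a default name from the nesting depth (`sp_1`, `sp_2`, ...) when the caller doesn't supply one. The naming rule in isolation (hedged sketch; `savepoint_name` is an illustrative helper, not a crate API):

```rust
/// Mirrors the name selection in `Transaction::_savepoint`: the new depth is one
/// more than the parent savepoint's depth (0 for the outermost transaction), and
/// unnamed savepoints fall back to `sp_{depth}`.
fn savepoint_name(parent_depth: Option<u32>, explicit: Option<&str>) -> (String, u32) {
    let depth = parent_depth.unwrap_or(0) + 1;
    let name = match explicit {
        Some(n) => n.to_string(),
        None => format!("sp_{}", depth),
    };
    (name, depth)
}
```

The depth counter guarantees generated names are unique within one nesting chain, so `ROLLBACK TO` and `RELEASE` always target the intended savepoint.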
ReadUncommitted,

    /// An individual statement in the transaction will see rows committed before it began.
    ReadCommitted,

    /// All statements in the transaction will see the same view of rows committed before the first query in the
    /// transaction.
    RepeatableRead,

    /// The reads and writes in this transaction must be able to be committed as an atomic "unit" with respect to reads
    /// and writes of all other concurrent serializable transactions without interleaving.
    Serializable,
}

/// A builder for database transactions.
pub struct TransactionBuilder<'a> {
    client: &'a mut Client,
    isolation_level: Option<IsolationLevel>,
    read_only: Option<bool>,
    deferrable: Option<bool>,
}

impl<'a> TransactionBuilder<'a> {
    pub(crate) fn new(client: &'a mut Client) -> TransactionBuilder<'a> {
        TransactionBuilder {
            client,
            isolation_level: None,
            read_only: None,
            deferrable: None,
        }
    }

    /// Sets the isolation level of the transaction.
    pub fn isolation_level(mut self, isolation_level: IsolationLevel) -> Self {
        self.isolation_level = Some(isolation_level);
        self
    }

    /// Sets the access mode of the transaction.
    pub fn read_only(mut self, read_only: bool) -> Self {
        self.read_only = Some(read_only);
        self
    }

    /// Sets the deferrability of the transaction.
    ///
    /// If the transaction is also serializable and read only, creation of the transaction may block, but when it
    /// completes the transaction is able to run with less overhead and a guarantee that it will not be aborted due to
    /// serialization failure.
    pub fn deferrable(mut self, deferrable: bool) -> Self {
        self.deferrable = Some(deferrable);
        self
    }

    /// Begins the transaction.
    ///
    /// The transaction will roll back by default - use the `commit` method to commit it.
pub async fn start(self) -> Result, Error> { let mut query = "START TRANSACTION".to_string(); let mut first = true; if let Some(level) = self.isolation_level { first = false; query.push_str(" ISOLATION LEVEL "); let level = match level { IsolationLevel::ReadUncommitted => "READ UNCOMMITTED", IsolationLevel::ReadCommitted => "READ COMMITTED", IsolationLevel::RepeatableRead => "REPEATABLE READ", IsolationLevel::Serializable => "SERIALIZABLE", }; query.push_str(level); } if let Some(read_only) = self.read_only { if !first { query.push(','); } first = false; let s = if read_only { " READ ONLY" } else { " READ WRITE" }; query.push_str(s); } if let Some(deferrable) = self.deferrable { if !first { query.push(','); } let s = if deferrable { " DEFERRABLE" } else { " NOT DEFERRABLE" }; query.push_str(s); } struct RollbackIfNotDone<'me> { client: &'me Client, done: bool, } impl<'a> Drop for RollbackIfNotDone<'a> { fn drop(&mut self) { if self.done { return; } let buf = self.client.inner().with_buf(|buf| { frontend::query("ROLLBACK", buf).unwrap(); buf.split().freeze() }); let _ = self .client .inner() .send(RequestMessages::Single(FrontendMessage::Raw(buf))); } } // This is done as `Future` created by this method can be dropped after // `RequestMessages` is synchronously send to the `Connection` by // `batch_execute()`, but before `Responses` is asynchronously polled to // completion. In that case `Transaction` won't be created and thus // won't be rolled back. { let mut cleaner = RollbackIfNotDone { client: self.client, done: false, }; self.client.batch_execute(&query).await?; cleaner.done = true; } Ok(Transaction::new(self.client)) } } tokio-postgres-0.7.12/src/types.rs000064400000000000000000000001671046102023000152110ustar 00000000000000//! Types. //! //! This module is a reexport of the `postgres_types` crate. 
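`TransactionBuilder::start` above assembles the `START TRANSACTION` statement by appending comma-separated clauses for whichever options were set. The string assembly is easy to extract and check as a pure function (a sketch only; the real method also sends the query and installs the `RollbackIfNotDone` guard):

```rust
// Local mirror of `IsolationLevel`, redeclared here so the sketch is self-contained.
#[derive(Clone, Copy)]
enum Isolation {
    ReadUncommitted,
    ReadCommitted,
    RepeatableRead,
    Serializable,
}

/// Rebuilds the SQL text the same way `TransactionBuilder::start` does: the
/// first clause gets no comma, every later clause is comma-separated.
fn start_query(
    isolation: Option<Isolation>,
    read_only: Option<bool>,
    deferrable: Option<bool>,
) -> String {
    let mut query = "START TRANSACTION".to_string();
    let mut first = true;

    if let Some(level) = isolation {
        first = false;
        query.push_str(" ISOLATION LEVEL ");
        query.push_str(match level {
            Isolation::ReadUncommitted => "READ UNCOMMITTED",
            Isolation::ReadCommitted => "READ COMMITTED",
            Isolation::RepeatableRead => "REPEATABLE READ",
            Isolation::Serializable => "SERIALIZABLE",
        });
    }

    if let Some(read_only) = read_only {
        if !first {
            query.push(',');
        }
        first = false;
        query.push_str(if read_only { " READ ONLY" } else { " READ WRITE" });
    }

    if let Some(deferrable) = deferrable {
        if !first {
            query.push(',');
        }
        query.push_str(if deferrable { " DEFERRABLE" } else { " NOT DEFERRABLE" });
    }

    query
}
```

Note that every clause string carries its own leading space, which is why the comma is pushed bare; omitting a clause simply skips it rather than emitting a server default.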
#[doc(inline)] pub use postgres_types::*; tokio-postgres-0.7.12/tests/test/binary_copy.rs000064400000000000000000000126641046102023000177220ustar 00000000000000use crate::connect; use futures_util::{pin_mut, TryStreamExt}; use tokio_postgres::binary_copy::{BinaryCopyInWriter, BinaryCopyOutStream}; use tokio_postgres::types::Type; #[tokio::test] async fn write_basic() { let client = connect("user=postgres").await; client .batch_execute("CREATE TEMPORARY TABLE foo (id INT, bar TEXT)") .await .unwrap(); let sink = client .copy_in("COPY foo (id, bar) FROM STDIN BINARY") .await .unwrap(); let writer = BinaryCopyInWriter::new(sink, &[Type::INT4, Type::TEXT]); pin_mut!(writer); writer.as_mut().write(&[&1i32, &"foobar"]).await.unwrap(); writer .as_mut() .write(&[&2i32, &None::<&str>]) .await .unwrap(); writer.finish().await.unwrap(); let rows = client .query("SELECT id, bar FROM foo ORDER BY id", &[]) .await .unwrap(); assert_eq!(rows.len(), 2); assert_eq!(rows[0].get::<_, i32>(0), 1); assert_eq!(rows[0].get::<_, Option<&str>>(1), Some("foobar")); assert_eq!(rows[1].get::<_, i32>(0), 2); assert_eq!(rows[1].get::<_, Option<&str>>(1), None); } #[tokio::test] async fn write_many_rows() { let client = connect("user=postgres").await; client .batch_execute("CREATE TEMPORARY TABLE foo (id INT, bar TEXT)") .await .unwrap(); let sink = client .copy_in("COPY foo (id, bar) FROM STDIN BINARY") .await .unwrap(); let writer = BinaryCopyInWriter::new(sink, &[Type::INT4, Type::TEXT]); pin_mut!(writer); for i in 0..10_000i32 { writer .as_mut() .write(&[&i, &format!("the value for {}", i)]) .await .unwrap(); } writer.finish().await.unwrap(); let rows = client .query("SELECT id, bar FROM foo ORDER BY id", &[]) .await .unwrap(); for (i, row) in rows.iter().enumerate() { assert_eq!(row.get::<_, i32>(0), i as i32); assert_eq!(row.get::<_, &str>(1), format!("the value for {}", i)); } } #[tokio::test] async fn write_big_rows() { let client = connect("user=postgres").await; client 
.batch_execute("CREATE TEMPORARY TABLE foo (id INT, bar BYTEA)") .await .unwrap(); let sink = client .copy_in("COPY foo (id, bar) FROM STDIN BINARY") .await .unwrap(); let writer = BinaryCopyInWriter::new(sink, &[Type::INT4, Type::BYTEA]); pin_mut!(writer); for i in 0..2i32 { writer .as_mut() .write(&[&i, &vec![i as u8; 128 * 1024]]) .await .unwrap(); } writer.finish().await.unwrap(); let rows = client .query("SELECT id, bar FROM foo ORDER BY id", &[]) .await .unwrap(); for (i, row) in rows.iter().enumerate() { assert_eq!(row.get::<_, i32>(0), i as i32); assert_eq!(row.get::<_, &[u8]>(1), &*vec![i as u8; 128 * 1024]); } } #[tokio::test] async fn read_basic() { let client = connect("user=postgres").await; client .batch_execute( " CREATE TEMPORARY TABLE foo (id INT, bar TEXT); INSERT INTO foo (id, bar) VALUES (1, 'foobar'), (2, NULL); ", ) .await .unwrap(); let stream = client .copy_out("COPY foo (id, bar) TO STDIN BINARY") .await .unwrap(); let rows = BinaryCopyOutStream::new(stream, &[Type::INT4, Type::TEXT]) .try_collect::>() .await .unwrap(); assert_eq!(rows.len(), 2); assert_eq!(rows[0].get::(0), 1); assert_eq!(rows[0].get::>(1), Some("foobar")); assert_eq!(rows[1].get::(0), 2); assert_eq!(rows[1].get::>(1), None); } #[tokio::test] async fn read_many_rows() { let client = connect("user=postgres").await; client .batch_execute( " CREATE TEMPORARY TABLE foo (id INT, bar TEXT); INSERT INTO foo (id, bar) SELECT i, 'the value for ' || i FROM generate_series(0, 9999) i;" ) .await .unwrap(); let stream = client .copy_out("COPY foo (id, bar) TO STDIN BINARY") .await .unwrap(); let rows = BinaryCopyOutStream::new(stream, &[Type::INT4, Type::TEXT]) .try_collect::>() .await .unwrap(); assert_eq!(rows.len(), 10_000); for (i, row) in rows.iter().enumerate() { assert_eq!(row.get::(0), i as i32); assert_eq!(row.get::<&str>(1), format!("the value for {}", i)); } } #[tokio::test] async fn read_big_rows() { let client = connect("user=postgres").await; client .batch_execute("CREATE 
TEMPORARY TABLE foo (id INT, bar BYTEA)") .await .unwrap(); for i in 0..2i32 { client .execute( "INSERT INTO foo (id, bar) VALUES ($1, $2)", &[&i, &vec![i as u8; 128 * 1024]], ) .await .unwrap(); } let stream = client .copy_out("COPY foo (id, bar) TO STDIN BINARY") .await .unwrap(); let rows = BinaryCopyOutStream::new(stream, &[Type::INT4, Type::BYTEA]) .try_collect::>() .await .unwrap(); assert_eq!(rows.len(), 2); for (i, row) in rows.iter().enumerate() { assert_eq!(row.get::(0), i as i32); assert_eq!(row.get::<&[u8]>(1), &vec![i as u8; 128 * 1024][..]); } } tokio-postgres-0.7.12/tests/test/main.rs000064400000000000000000000702711046102023000163260ustar 00000000000000#![warn(rust_2018_idioms)] use bytes::{Bytes, BytesMut}; use futures_channel::mpsc; use futures_util::{ future, join, pin_mut, stream, try_join, Future, FutureExt, SinkExt, StreamExt, TryStreamExt, }; use pin_project_lite::pin_project; use std::fmt::Write; use std::pin::Pin; use std::task::{Context, Poll}; use std::time::Duration; use tokio::net::TcpStream; use tokio::time; use tokio_postgres::error::SqlState; use tokio_postgres::tls::{NoTls, NoTlsStream}; use tokio_postgres::types::{Kind, Type}; use tokio_postgres::{ AsyncMessage, Client, Config, Connection, Error, IsolationLevel, SimpleQueryMessage, }; mod binary_copy; mod parse; #[cfg(feature = "runtime")] mod runtime; mod types; pin_project! { /// Polls `F` at most `polls_left` times returning `Some(F::Output)` if /// [`Future`] returned [`Poll::Ready`] or [`None`] otherwise. 
struct Cancellable { #[pin] fut: F, polls_left: usize, } } impl Future for Cancellable { type Output = Option; fn poll(self: Pin<&mut Self>, ctx: &mut Context<'_>) -> Poll { let this = self.project(); match this.fut.poll(ctx) { Poll::Ready(r) => Poll::Ready(Some(r)), Poll::Pending => { *this.polls_left = this.polls_left.saturating_sub(1); if *this.polls_left == 0 { Poll::Ready(None) } else { Poll::Pending } } } } } async fn connect_raw(s: &str) -> Result<(Client, Connection), Error> { let socket = TcpStream::connect("127.0.0.1:5433").await.unwrap(); let config = s.parse::().unwrap(); config.connect_raw(socket, NoTls).await } async fn connect(s: &str) -> Client { let (client, connection) = connect_raw(s).await.unwrap(); let connection = connection.map(|r| r.unwrap()); tokio::spawn(connection); client } async fn current_transaction_id(client: &Client) -> i64 { client .query("SELECT txid_current()", &[]) .await .unwrap() .pop() .unwrap() .get::<_, i64>("txid_current") } async fn in_transaction(client: &Client) -> bool { current_transaction_id(client).await == current_transaction_id(client).await } #[tokio::test] async fn plain_password_missing() { connect_raw("user=pass_user dbname=postgres") .await .err() .unwrap(); } #[tokio::test] async fn plain_password_wrong() { match connect_raw("user=pass_user password=foo dbname=postgres").await { Ok(_) => panic!("unexpected success"), Err(ref e) if e.code() == Some(&SqlState::INVALID_PASSWORD) => {} Err(e) => panic!("{}", e), } } #[tokio::test] async fn plain_password_ok() { connect("user=pass_user password=password dbname=postgres").await; } #[tokio::test] async fn md5_password_missing() { connect_raw("user=md5_user dbname=postgres") .await .err() .unwrap(); } #[tokio::test] async fn md5_password_wrong() { match connect_raw("user=md5_user password=foo dbname=postgres").await { Ok(_) => panic!("unexpected success"), Err(ref e) if e.code() == Some(&SqlState::INVALID_PASSWORD) => {} Err(e) => panic!("{}", e), } } #[tokio::test] 
async fn md5_password_ok() { connect("user=md5_user password=password dbname=postgres").await; } #[tokio::test] async fn scram_password_missing() { connect_raw("user=scram_user dbname=postgres") .await .err() .unwrap(); } #[tokio::test] async fn scram_password_wrong() { match connect_raw("user=scram_user password=foo dbname=postgres").await { Ok(_) => panic!("unexpected success"), Err(ref e) if e.code() == Some(&SqlState::INVALID_PASSWORD) => {} Err(e) => panic!("{}", e), } } #[tokio::test] async fn scram_password_ok() { connect("user=scram_user password=password dbname=postgres").await; } #[tokio::test] async fn pipelined_prepare() { let client = connect("user=postgres").await; let prepare1 = client.prepare("SELECT $1::HSTORE[]"); let prepare2 = client.prepare("SELECT $1::BIGINT"); let (statement1, statement2) = try_join!(prepare1, prepare2).unwrap(); assert_eq!(statement1.params()[0].name(), "_hstore"); assert_eq!(statement1.columns()[0].type_().name(), "_hstore"); assert_eq!(statement2.params()[0], Type::INT8); assert_eq!(statement2.columns()[0].type_(), &Type::INT8); } #[tokio::test] async fn insert_select() { let client = connect("user=postgres").await; client .batch_execute("CREATE TEMPORARY TABLE foo (id SERIAL, name TEXT)") .await .unwrap(); let insert = client.prepare("INSERT INTO foo (name) VALUES ($1), ($2)"); let select = client.prepare("SELECT id, name FROM foo ORDER BY id"); let (insert, select) = try_join!(insert, select).unwrap(); let insert = client.execute(&insert, &[&"alice", &"bob"]); let select = client.query(&select, &[]); let (_, rows) = try_join!(insert, select).unwrap(); assert_eq!(rows.len(), 2); assert_eq!(rows[0].get::<_, i32>(0), 1); assert_eq!(rows[0].get::<_, &str>(1), "alice"); assert_eq!(rows[1].get::<_, i32>(0), 2); assert_eq!(rows[1].get::<_, &str>(1), "bob"); } #[tokio::test] async fn custom_enum() { let client = connect("user=postgres").await; client .batch_execute( "CREATE TYPE pg_temp.mood AS ENUM ( 'sad', 'ok', 'happy' )", ) 
.await .unwrap(); let select = client.prepare("SELECT $1::mood").await.unwrap(); let ty = &select.params()[0]; assert_eq!("mood", ty.name()); assert_eq!( &Kind::Enum(vec![ "sad".to_string(), "ok".to_string(), "happy".to_string(), ]), ty.kind(), ); } #[tokio::test] async fn custom_domain() { let client = connect("user=postgres").await; client .batch_execute("CREATE DOMAIN pg_temp.session_id AS bytea CHECK(octet_length(VALUE) = 16)") .await .unwrap(); let select = client.prepare("SELECT $1::session_id").await.unwrap(); let ty = &select.params()[0]; assert_eq!("session_id", ty.name()); assert_eq!(&Kind::Domain(Type::BYTEA), ty.kind()); } #[tokio::test] async fn custom_array() { let client = connect("user=postgres").await; let select = client.prepare("SELECT $1::HSTORE[]").await.unwrap(); let ty = &select.params()[0]; assert_eq!("_hstore", ty.name()); match ty.kind() { Kind::Array(ty) => { assert_eq!("hstore", ty.name()); assert_eq!(&Kind::Simple, ty.kind()); } _ => panic!("unexpected kind"), } } #[tokio::test] async fn custom_composite() { let client = connect("user=postgres").await; client .batch_execute( "CREATE TYPE pg_temp.inventory_item AS ( name TEXT, supplier INTEGER, price NUMERIC )", ) .await .unwrap(); let select = client.prepare("SELECT $1::inventory_item").await.unwrap(); let ty = &select.params()[0]; assert_eq!(ty.name(), "inventory_item"); match ty.kind() { Kind::Composite(fields) => { assert_eq!(fields[0].name(), "name"); assert_eq!(fields[0].type_(), &Type::TEXT); assert_eq!(fields[1].name(), "supplier"); assert_eq!(fields[1].type_(), &Type::INT4); assert_eq!(fields[2].name(), "price"); assert_eq!(fields[2].type_(), &Type::NUMERIC); } _ => panic!("unexpected kind"), } } #[tokio::test] async fn custom_range() { let client = connect("user=postgres").await; client .batch_execute( "CREATE TYPE pg_temp.floatrange AS RANGE ( subtype = float8, subtype_diff = float8mi )", ) .await .unwrap(); let select = client.prepare("SELECT $1::floatrange").await.unwrap(); 
    let ty = &select.params()[0];
    assert_eq!("floatrange", ty.name());
    assert_eq!(&Kind::Range(Type::FLOAT8), ty.kind());
}

#[tokio::test]
#[allow(clippy::get_first)]
async fn simple_query() {
    let client = connect("user=postgres").await;

    let messages = client
        .simple_query(
            "CREATE TEMPORARY TABLE foo (
                id SERIAL,
                name TEXT
            );
            INSERT INTO foo (name) VALUES ('steven'), ('joe');
            SELECT * FROM foo ORDER BY id;",
        )
        .await
        .unwrap();

    match messages[0] {
        SimpleQueryMessage::CommandComplete(0) => {}
        _ => panic!("unexpected message"),
    }
    match messages[1] {
        SimpleQueryMessage::CommandComplete(2) => {}
        _ => panic!("unexpected message"),
    }
    match &messages[2] {
        SimpleQueryMessage::RowDescription(columns) => {
            assert_eq!(columns.get(0).map(|c| c.name()), Some("id"));
            assert_eq!(columns.get(1).map(|c| c.name()), Some("name"));
        }
        _ => panic!("unexpected message"),
    }
    match &messages[3] {
        SimpleQueryMessage::Row(row) => {
            assert_eq!(row.columns().get(0).map(|c| c.name()), Some("id"));
            assert_eq!(row.columns().get(1).map(|c| c.name()), Some("name"));
            assert_eq!(row.get(0), Some("1"));
            assert_eq!(row.get(1), Some("steven"));
        }
        _ => panic!("unexpected message"),
    }
    match &messages[4] {
        SimpleQueryMessage::Row(row) => {
            assert_eq!(row.columns().get(0).map(|c| c.name()), Some("id"));
            assert_eq!(row.columns().get(1).map(|c| c.name()), Some("name"));
            assert_eq!(row.get(0), Some("2"));
            assert_eq!(row.get(1), Some("joe"));
        }
        _ => panic!("unexpected message"),
    }
    match messages[5] {
        SimpleQueryMessage::CommandComplete(2) => {}
        _ => panic!("unexpected message"),
    }
    assert_eq!(messages.len(), 6);
}

#[tokio::test]
async fn cancel_query_raw() {
    let client = connect("user=postgres").await;

    let socket = TcpStream::connect("127.0.0.1:5433").await.unwrap();
    let cancel_token = client.cancel_token();
    let cancel = cancel_token.cancel_query_raw(socket, NoTls);
    let cancel = time::sleep(Duration::from_millis(100)).then(|()| cancel);

    let sleep = client.batch_execute("SELECT pg_sleep(100)");

    match join!(sleep,
 cancel) {
        (Err(ref e), Ok(())) if e.code() == Some(&SqlState::QUERY_CANCELED) => {}
        t => panic!("unexpected return: {:?}", t),
    }
}

#[tokio::test]
async fn transaction_commit() {
    let mut client = connect("user=postgres").await;

    client
        .batch_execute(
            "CREATE TEMPORARY TABLE foo(
                id SERIAL,
                name TEXT
            )",
        )
        .await
        .unwrap();

    let transaction = client.transaction().await.unwrap();
    transaction
        .batch_execute("INSERT INTO foo (name) VALUES ('steven')")
        .await
        .unwrap();
    transaction.commit().await.unwrap();

    let stmt = client.prepare("SELECT name FROM foo").await.unwrap();
    let rows = client.query(&stmt, &[]).await.unwrap();

    assert_eq!(rows.len(), 1);
    assert_eq!(rows[0].get::<_, &str>(0), "steven");
}

#[tokio::test]
async fn transaction_rollback() {
    let mut client = connect("user=postgres").await;

    client
        .batch_execute(
            "CREATE TEMPORARY TABLE foo(
                id SERIAL,
                name TEXT
            )",
        )
        .await
        .unwrap();

    let transaction = client.transaction().await.unwrap();
    transaction
        .batch_execute("INSERT INTO foo (name) VALUES ('steven')")
        .await
        .unwrap();
    transaction.rollback().await.unwrap();

    let stmt = client.prepare("SELECT name FROM foo").await.unwrap();
    let rows = client.query(&stmt, &[]).await.unwrap();

    assert_eq!(rows.len(), 0);
}

#[tokio::test]
async fn transaction_future_cancellation() {
    let mut client = connect("user=postgres").await;

    for i in 0.. {
        let done = {
            let txn = client.transaction();
            let fut = Cancellable {
                fut: txn,
                polls_left: i,
            };
            fut.await
                .map(|res| res.expect("transaction failed"))
                .is_some()
        };

        assert!(!in_transaction(&client).await);

        if done {
            break;
        }
    }
}

#[tokio::test]
async fn transaction_commit_future_cancellation() {
    let mut client = connect("user=postgres").await;

    for i in 0..
 {
        let done = {
            let txn = client.transaction().await.unwrap();
            let commit = txn.commit();
            let fut = Cancellable {
                fut: commit,
                polls_left: i,
            };
            fut.await
                .map(|res| res.expect("transaction failed"))
                .is_some()
        };

        assert!(!in_transaction(&client).await);

        if done {
            break;
        }
    }
}

#[tokio::test]
async fn transaction_rollback_future_cancellation() {
    let mut client = connect("user=postgres").await;

    for i in 0.. {
        let done = {
            let txn = client.transaction().await.unwrap();
            let rollback = txn.rollback();
            let fut = Cancellable {
                fut: rollback,
                polls_left: i,
            };
            fut.await
                .map(|res| res.expect("transaction failed"))
                .is_some()
        };

        assert!(!in_transaction(&client).await);

        if done {
            break;
        }
    }
}

#[tokio::test]
async fn transaction_rollback_drop() {
    let mut client = connect("user=postgres").await;

    client
        .batch_execute(
            "CREATE TEMPORARY TABLE foo(
                id SERIAL,
                name TEXT
            )",
        )
        .await
        .unwrap();

    let transaction = client.transaction().await.unwrap();
    transaction
        .batch_execute("INSERT INTO foo (name) VALUES ('steven')")
        .await
        .unwrap();
    drop(transaction);

    let stmt = client.prepare("SELECT name FROM foo").await.unwrap();
    let rows = client.query(&stmt, &[]).await.unwrap();

    assert_eq!(rows.len(), 0);
}

#[tokio::test]
async fn transaction_builder() {
    let mut client = connect("user=postgres").await;

    client
        .batch_execute(
            "CREATE TEMPORARY TABLE foo(
                id SERIAL,
                name TEXT
            )",
        )
        .await
        .unwrap();

    let transaction = client
        .build_transaction()
        .isolation_level(IsolationLevel::Serializable)
        .read_only(true)
        .deferrable(true)
        .start()
        .await
        .unwrap();
    transaction
        .batch_execute("INSERT INTO foo (name) VALUES ('steven')")
        .await
        .unwrap();
    transaction.commit().await.unwrap();

    let stmt = client.prepare("SELECT name FROM foo").await.unwrap();
    let rows = client.query(&stmt, &[]).await.unwrap();

    assert_eq!(rows.len(), 1);
    assert_eq!(rows[0].get::<_, &str>(0), "steven");
}

#[tokio::test]
async fn copy_in() {
    let client = connect("user=postgres").await;

    client
        .batch_execute(
            "CREATE TEMPORARY TABLE
 foo (
                id INTEGER,
                name TEXT
            )",
        )
        .await
        .unwrap();

    let mut stream = stream::iter(
        vec![
            Bytes::from_static(b"1\tjim\n"),
            Bytes::from_static(b"2\tjoe\n"),
        ]
        .into_iter()
        .map(Ok::<_, Error>),
    );
    let sink = client.copy_in("COPY foo FROM STDIN").await.unwrap();
    pin_mut!(sink);
    sink.send_all(&mut stream).await.unwrap();
    let rows = sink.finish().await.unwrap();
    assert_eq!(rows, 2);

    let rows = client
        .query("SELECT id, name FROM foo ORDER BY id", &[])
        .await
        .unwrap();

    assert_eq!(rows.len(), 2);
    assert_eq!(rows[0].get::<_, i32>(0), 1);
    assert_eq!(rows[0].get::<_, &str>(1), "jim");
    assert_eq!(rows[1].get::<_, i32>(0), 2);
    assert_eq!(rows[1].get::<_, &str>(1), "joe");
}

#[tokio::test]
async fn copy_in_large() {
    let client = connect("user=postgres").await;

    client
        .batch_execute(
            "CREATE TEMPORARY TABLE foo (
                id INTEGER,
                name TEXT
            )",
        )
        .await
        .unwrap();

    let a = Bytes::from_static(b"0\tname0\n");
    let mut b = BytesMut::new();
    for i in 1..5_000 {
        writeln!(b, "{0}\tname{0}", i).unwrap();
    }
    let mut c = BytesMut::new();
    for i in 5_000..10_000 {
        writeln!(c, "{0}\tname{0}", i).unwrap();
    }
    let mut stream = stream::iter(
        vec![a, b.freeze(), c.freeze()]
            .into_iter()
            .map(Ok::<_, Error>),
    );

    let sink = client.copy_in("COPY foo FROM STDIN").await.unwrap();
    pin_mut!(sink);
    sink.send_all(&mut stream).await.unwrap();
    let rows = sink.finish().await.unwrap();
    assert_eq!(rows, 10_000);
}

#[tokio::test]
async fn copy_in_error() {
    let client = connect("user=postgres").await;

    client
        .batch_execute(
            "CREATE TEMPORARY TABLE foo (
                id INTEGER,
                name TEXT
            )",
        )
        .await
        .unwrap();

    {
        let sink = client.copy_in("COPY foo FROM STDIN").await.unwrap();
        pin_mut!(sink);
        sink.send(Bytes::from_static(b"1\tsteven")).await.unwrap();
    }

    let rows = client
        .query("SELECT id, name FROM foo ORDER BY id", &[])
        .await
        .unwrap();
    assert_eq!(rows.len(), 0);
}

#[tokio::test]
async fn copy_out() {
    let client = connect("user=postgres").await;

    client
        .batch_execute(
            "CREATE TEMPORARY TABLE foo (
                id SERIAL,
                name TEXT
            );
            INSERT
 INTO foo (name) VALUES ('jim'), ('joe');",
        )
        .await
        .unwrap();

    let stmt = client.prepare("COPY foo TO STDOUT").await.unwrap();
    let data = client
        .copy_out(&stmt)
        .await
        .unwrap()
        .try_fold(BytesMut::new(), |mut buf, chunk| async move {
            buf.extend_from_slice(&chunk);
            Ok(buf)
        })
        .await
        .unwrap();
    assert_eq!(&data[..], b"1\tjim\n2\tjoe\n");
}

#[tokio::test]
async fn notices() {
    let long_name = "x".repeat(65);
    let (client, mut connection) =
        connect_raw(&format!("user=postgres application_name={}", long_name,))
            .await
            .unwrap();

    let (tx, rx) = mpsc::unbounded();
    let stream =
        stream::poll_fn(move |cx| connection.poll_message(cx)).map_err(|e| panic!("{}", e));
    let connection = stream.forward(tx).map(|r| r.unwrap());
    tokio::spawn(connection);

    client
        .batch_execute("DROP DATABASE IF EXISTS noexistdb")
        .await
        .unwrap();

    drop(client);

    let notices = rx
        .filter_map(|m| match m {
            AsyncMessage::Notice(n) => future::ready(Some(n)),
            _ => future::ready(None),
        })
        .collect::<Vec<_>>()
        .await;

    assert_eq!(notices.len(), 2);
    assert_eq!(
        notices[0].message(),
        "identifier \"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\" \
         will be truncated to \"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\""
    );
    assert_eq!(
        notices[1].message(),
        "database \"noexistdb\" does not exist, skipping"
    );
}

#[tokio::test]
async fn notifications() {
    let (client, mut connection) = connect_raw("user=postgres").await.unwrap();

    let (tx, rx) = mpsc::unbounded();
    let stream =
        stream::poll_fn(move |cx| connection.poll_message(cx)).map_err(|e| panic!("{}", e));
    let connection = stream.forward(tx).map(|r| r.unwrap());
    tokio::spawn(connection);

    client
        .batch_execute(
            "LISTEN test_notifications;
             NOTIFY test_notifications, 'hello';
             NOTIFY test_notifications, 'world';",
        )
        .await
        .unwrap();

    drop(client);

    let notifications = rx
        .filter_map(|m| match m {
            AsyncMessage::Notification(n) => future::ready(Some(n)),
            _ => future::ready(None),
        })
        .collect::<Vec<_>>()
        .await;

    assert_eq!(notifications.len(), 2);
    assert_eq!(notifications[0].channel(), "test_notifications");
    assert_eq!(notifications[0].payload(), "hello");
    assert_eq!(notifications[1].channel(), "test_notifications");
    assert_eq!(notifications[1].payload(), "world");
}

#[tokio::test]
async fn query_portal() {
    let mut client = connect("user=postgres").await;

    client
        .batch_execute(
            "CREATE TEMPORARY TABLE foo (
                id SERIAL,
                name TEXT
            );
            INSERT INTO foo (name) VALUES ('alice'), ('bob'), ('charlie');",
        )
        .await
        .unwrap();

    let stmt = client
        .prepare("SELECT id, name FROM foo ORDER BY id")
        .await
        .unwrap();

    let transaction = client.transaction().await.unwrap();

    let portal = transaction.bind(&stmt, &[]).await.unwrap();
    let f1 = transaction.query_portal(&portal, 2);
    let f2 = transaction.query_portal(&portal, 2);
    let f3 = transaction.query_portal(&portal, 2);

    let (r1, r2, r3) = try_join!(f1, f2, f3).unwrap();

    assert_eq!(r1.len(), 2);
    assert_eq!(r1[0].get::<_, i32>(0), 1);
    assert_eq!(r1[0].get::<_, &str>(1), "alice");
    assert_eq!(r1[1].get::<_, i32>(0), 2);
    assert_eq!(r1[1].get::<_, &str>(1), "bob");

    assert_eq!(r2.len(), 1);
    assert_eq!(r2[0].get::<_, i32>(0), 3);
    assert_eq!(r2[0].get::<_, &str>(1), "charlie");

    assert_eq!(r3.len(), 0);
}

#[tokio::test]
async fn require_channel_binding() {
    connect_raw("user=postgres channel_binding=require")
        .await
        .err()
        .unwrap();
}

#[tokio::test]
async fn prefer_channel_binding() {
    connect("user=postgres channel_binding=prefer").await;
}

#[tokio::test]
async fn disable_channel_binding() {
    connect("user=postgres channel_binding=disable").await;
}

#[tokio::test]
async fn check_send() {
    fn is_send<T: Send>(_: &T) {}

    let f = connect("user=postgres");
    is_send(&f);
    let mut client = f.await;

    let f = client.prepare("SELECT $1::TEXT");
    is_send(&f);
    let stmt = f.await.unwrap();

    let f = client.query(&stmt, &[&"hello"]);
    is_send(&f);
    drop(f);

    let f = client.execute(&stmt, &[&"hello"]);
    is_send(&f);
    drop(f);

    let f = client.transaction();
    is_send(&f);
    let trans = f.await.unwrap();

    let f = trans.query(&stmt,
 &[&"hello"]);
    is_send(&f);
    drop(f);

    let f = trans.execute(&stmt, &[&"hello"]);
    is_send(&f);
    drop(f);
}

#[tokio::test]
async fn query_one() {
    let client = connect("user=postgres").await;

    client
        .batch_execute(
            "
            CREATE TEMPORARY TABLE foo (
                name TEXT
            );
            INSERT INTO foo (name) VALUES ('alice'), ('bob'), ('carol');
            ",
        )
        .await
        .unwrap();

    client
        .query_one("SELECT * FROM foo WHERE name = 'dave'", &[])
        .await
        .err()
        .unwrap();
    client
        .query_one("SELECT * FROM foo WHERE name = 'alice'", &[])
        .await
        .unwrap();
    client
        .query_one("SELECT * FROM foo", &[])
        .await
        .err()
        .unwrap();
}

#[tokio::test]
async fn query_opt() {
    let client = connect("user=postgres").await;

    client
        .batch_execute(
            "
            CREATE TEMPORARY TABLE foo (
                name TEXT
            );
            INSERT INTO foo (name) VALUES ('alice'), ('bob'), ('carol');
            ",
        )
        .await
        .unwrap();

    assert!(client
        .query_opt("SELECT * FROM foo WHERE name = 'dave'", &[])
        .await
        .unwrap()
        .is_none());
    client
        .query_opt("SELECT * FROM foo WHERE name = 'alice'", &[])
        .await
        .unwrap()
        .unwrap();
    client
        .query_one("SELECT * FROM foo", &[])
        .await
        .err()
        .unwrap();
}

#[tokio::test]
async fn deferred_constraint() {
    let client = connect("user=postgres").await;

    client
        .batch_execute(
            "
            CREATE TEMPORARY TABLE t (
                i INT,
                UNIQUE (i) DEFERRABLE INITIALLY DEFERRED
            );
            ",
        )
        .await
        .unwrap();

    client
        .execute("INSERT INTO t (i) VALUES (1)", &[])
        .await
        .unwrap();
    client
        .execute("INSERT INTO t (i) VALUES (1)", &[])
        .await
        .unwrap_err();
}

#[tokio::test]
async fn query_typed_no_transaction() {
    let client = connect("user=postgres").await;

    client
        .batch_execute(
            "
            CREATE TEMPORARY TABLE foo (
                name TEXT,
                age INT
            );
            INSERT INTO foo (name, age) VALUES ('alice', 20), ('bob', 30), ('carol', 40);
            ",
        )
        .await
        .unwrap();

    let rows: Vec<Row> = client
        .query_typed(
            "SELECT name, age, 'literal', 5 FROM foo WHERE name <> $1 AND age < $2 ORDER BY age",
            &[(&"alice", Type::TEXT), (&50i32, Type::INT4)],
        )
        .await
        .unwrap();

    assert_eq!(rows.len(), 2);

    let first_row = &rows[0];
    assert_eq!(first_row.get::<_,
 &str>(0), "bob");
    assert_eq!(first_row.get::<_, i32>(1), 30);
    assert_eq!(first_row.get::<_, &str>(2), "literal");
    assert_eq!(first_row.get::<_, i32>(3), 5);

    let second_row = &rows[1];
    assert_eq!(second_row.get::<_, &str>(0), "carol");
    assert_eq!(second_row.get::<_, i32>(1), 40);
    assert_eq!(second_row.get::<_, &str>(2), "literal");
    assert_eq!(second_row.get::<_, i32>(3), 5);

    // Test for UPDATE that returns no data
    let updated_rows = client
        .query_typed("UPDATE foo set age = 33", &[])
        .await
        .unwrap();
    assert_eq!(updated_rows.len(), 0);
}

#[tokio::test]
async fn query_typed_with_transaction() {
    let mut client = connect("user=postgres").await;

    client
        .batch_execute(
            "
            CREATE TEMPORARY TABLE foo (
                name TEXT,
                age INT
            );
            ",
        )
        .await
        .unwrap();

    let transaction = client.transaction().await.unwrap();

    let rows: Vec<Row> = transaction
        .query_typed(
            "INSERT INTO foo (name, age) VALUES ($1, $2), ($3, $4), ($5, $6) returning name, age",
            &[
                (&"alice", Type::TEXT),
                (&20i32, Type::INT4),
                (&"bob", Type::TEXT),
                (&30i32, Type::INT4),
                (&"carol", Type::TEXT),
                (&40i32, Type::INT4),
            ],
        )
        .await
        .unwrap();
    let inserted_values: Vec<(String, i32)> = rows
        .iter()
        .map(|row| (row.get::<_, String>(0), row.get::<_, i32>(1)))
        .collect();
    assert_eq!(
        inserted_values,
        [
            ("alice".to_string(), 20),
            ("bob".to_string(), 30),
            ("carol".to_string(), 40)
        ]
    );

    let rows: Vec<Row> = transaction
        .query_typed(
            "SELECT name, age, 'literal', 5 FROM foo WHERE name <> $1 AND age < $2 ORDER BY age",
            &[(&"alice", Type::TEXT), (&50i32, Type::INT4)],
        )
        .await
        .unwrap();

    assert_eq!(rows.len(), 2);

    let first_row = &rows[0];
    assert_eq!(first_row.get::<_, &str>(0), "bob");
    assert_eq!(first_row.get::<_, i32>(1), 30);
    assert_eq!(first_row.get::<_, &str>(2), "literal");
    assert_eq!(first_row.get::<_, i32>(3), 5);

    let second_row = &rows[1];
    assert_eq!(second_row.get::<_, &str>(0), "carol");
    assert_eq!(second_row.get::<_, i32>(1), 40);
    assert_eq!(second_row.get::<_, &str>(2), "literal");
    assert_eq!(second_row.get::<_, i32>(3), 5);

    // Test for UPDATE that returns no data
    let updated_rows = transaction
        .query_typed("UPDATE foo set age = 33", &[])
        .await
        .unwrap();
    assert_eq!(updated_rows.len(), 0);
}

tokio-postgres-0.7.12/tests/test/parse.rs

use std::time::Duration;
use tokio_postgres::config::{Config, TargetSessionAttrs};

fn check(s: &str, config: &Config) {
    assert_eq!(s.parse::<Config>().expect(s), *config, "`{}`", s);
}

#[test]
fn pairs_ok() {
    check(
        r"user=foo password=' fizz \'buzz\\ ' application_name = ''",
        Config::new()
            .user("foo")
            .password(r" fizz 'buzz\ ")
            .application_name(""),
    );
}

#[test]
fn pairs_ws() {
    check(
        " user\t=\r\n\x0bfoo \t password = hunter2 ",
        Config::new().user("foo").password("hunter2"),
    );
}

#[test]
fn settings() {
    check(
        "connect_timeout=3 keepalives=0 keepalives_idle=30 target_session_attrs=read-write",
        Config::new()
            .connect_timeout(Duration::from_secs(3))
            .keepalives(false)
            .keepalives_idle(Duration::from_secs(30))
            .target_session_attrs(TargetSessionAttrs::ReadWrite),
    );
    check(
        "connect_timeout=3 keepalives=0 keepalives_idle=30 target_session_attrs=read-only",
        Config::new()
            .connect_timeout(Duration::from_secs(3))
            .keepalives(false)
            .keepalives_idle(Duration::from_secs(30))
            .target_session_attrs(TargetSessionAttrs::ReadOnly),
    );
}

#[test]
fn keepalive_settings() {
    check(
        "keepalives=1 keepalives_idle=15 keepalives_interval=5 keepalives_retries=9",
        Config::new()
            .keepalives(true)
            .keepalives_idle(Duration::from_secs(15))
            .keepalives_interval(Duration::from_secs(5))
            .keepalives_retries(9),
    );
}

#[test]
fn url() {
    check("postgresql://", &Config::new());
    check(
        "postgresql://localhost",
        Config::new().host("localhost").port(5432),
    );
    check(
        "postgresql://localhost:5433",
        Config::new().host("localhost").port(5433),
    );
    check(
        "postgresql://localhost/mydb",
        Config::new().host("localhost").port(5432).dbname("mydb"),
    );
    check(
        "postgresql://user@localhost",
        Config::new().user("user").host("localhost").port(5432),
    );
    check(
        "postgresql://user:secret@localhost",
        Config::new()
            .user("user")
            .password("secret")
            .host("localhost")
            .port(5432),
    );
    check(
        "postgresql://other@localhost/otherdb?connect_timeout=10&application_name=myapp",
        Config::new()
            .user("other")
            .host("localhost")
            .port(5432)
            .dbname("otherdb")
            .connect_timeout(Duration::from_secs(10))
            .application_name("myapp"),
    );
    check(
        "postgresql://host1:123,host2:456/somedb?target_session_attrs=any&application_name=myapp",
        Config::new()
            .host("host1")
            .port(123)
            .host("host2")
            .port(456)
            .dbname("somedb")
            .target_session_attrs(TargetSessionAttrs::Any)
            .application_name("myapp"),
    );
    check(
        "postgresql:///mydb?host=localhost&port=5433",
        Config::new().dbname("mydb").host("localhost").port(5433),
    );
    check(
        "postgresql://[2001:db8::1234]/database",
        Config::new()
            .host("2001:db8::1234")
            .port(5432)
            .dbname("database"),
    );
    check(
        "postgresql://[2001:db8::1234]:5433/database",
        Config::new()
            .host("2001:db8::1234")
            .port(5433)
            .dbname("database"),
    );
    #[cfg(unix)]
    check(
        "postgresql:///dbname?host=/var/lib/postgresql",
        Config::new()
            .dbname("dbname")
            .host_path("/var/lib/postgresql"),
    );
    #[cfg(unix)]
    check(
        "postgresql://%2Fvar%2Flib%2Fpostgresql/dbname",
        Config::new()
            .host_path("/var/lib/postgresql")
            .port(5432)
            .dbname("dbname"),
    )
}

tokio-postgres-0.7.12/tests/test/runtime.rs

use futures_util::{join, FutureExt};
use std::time::Duration;
use tokio::time;
use tokio_postgres::error::SqlState;
use tokio_postgres::{Client, NoTls};

async fn connect(s: &str) -> Client {
    let (client, connection) = tokio_postgres::connect(s, NoTls).await.unwrap();
    let connection = connection.map(|e| e.unwrap());
    tokio::spawn(connection);
    client
}

async fn smoke_test(s: &str) {
    let client = connect(s).await;

    let stmt = client.prepare("SELECT $1::INT").await.unwrap();
    let rows = client.query(&stmt, &[&1i32]).await.unwrap();
    assert_eq!(rows[0].get::<_, i32>(0), 1i32);
}

#[tokio::test]
#[ignore] // FIXME doesn't work with our docker-based tests :(
async fn unix_socket() {
    smoke_test("host=/var/run/postgresql port=5433 user=postgres").await;
}

#[tokio::test]
async fn tcp() {
    smoke_test("host=localhost port=5433 user=postgres").await;
}

#[tokio::test]
async fn multiple_hosts_one_port() {
    smoke_test("host=foobar.invalid,localhost port=5433 user=postgres").await;
}

#[tokio::test]
async fn multiple_hosts_multiple_ports() {
    smoke_test("host=foobar.invalid,localhost port=5432,5433 user=postgres").await;
}

#[tokio::test]
async fn wrong_port_count() {
    tokio_postgres::connect("host=localhost port=5433,5433 user=postgres", NoTls)
        .await
        .err()
        .unwrap();
}

#[tokio::test]
async fn target_session_attrs_ok() {
    smoke_test("host=localhost port=5433 user=postgres target_session_attrs=read-write").await;
}

#[tokio::test]
async fn target_session_attrs_err() {
    tokio_postgres::connect(
        "host=localhost port=5433 user=postgres target_session_attrs=read-write options='-c default_transaction_read_only=on'",
        NoTls,
    )
    .await
    .err()
    .unwrap();
}

#[tokio::test]
async fn host_only_ok() {
    let _ = tokio_postgres::connect(
        "host=localhost port=5433 user=pass_user dbname=postgres password=password",
        NoTls,
    )
    .await
    .unwrap();
}

#[tokio::test]
async fn hostaddr_only_ok() {
    let _ = tokio_postgres::connect(
        "hostaddr=127.0.0.1 port=5433 user=pass_user dbname=postgres password=password",
        NoTls,
    )
    .await
    .unwrap();
}

#[tokio::test]
async fn hostaddr_and_host_ok() {
    let _ = tokio_postgres::connect(
        "hostaddr=127.0.0.1 host=localhost port=5433 user=pass_user dbname=postgres password=password",
        NoTls,
    )
    .await
    .unwrap();
}

#[tokio::test]
async fn hostaddr_host_mismatch() {
    let _ = tokio_postgres::connect(
        "hostaddr=127.0.0.1,127.0.0.2 host=localhost port=5433 user=pass_user dbname=postgres password=password",
        NoTls,
    )
    .await
    .err()
    .unwrap();
}

#[tokio::test]
async fn hostaddr_host_both_missing() {
    let _ = tokio_postgres::connect(
        "port=5433 user=pass_user dbname=postgres
 password=password",
        NoTls,
    )
    .await
    .err()
    .unwrap();
}

#[tokio::test]
async fn cancel_query() {
    let client = connect("host=localhost port=5433 user=postgres").await;

    let cancel_token = client.cancel_token();
    let cancel = cancel_token.cancel_query(NoTls);
    let cancel = time::sleep(Duration::from_millis(100)).then(|()| cancel);

    let sleep = client.batch_execute("SELECT pg_sleep(100)");

    match join!(sleep, cancel) {
        (Err(ref e), Ok(())) if e.code() == Some(&SqlState::QUERY_CANCELED) => {}
        t => panic!("unexpected return: {:?}", t),
    }
}

tokio-postgres-0.7.12/tests/test/types/bit_vec_06.rs

use bit_vec_06::BitVec;

use crate::types::test_type;

#[tokio::test]
async fn test_bit_params() {
    let mut bv = BitVec::from_bytes(&[0b0110_1001, 0b0000_0111]);
    bv.pop();
    bv.pop();
    test_type(
        "BIT(14)",
        &[(Some(bv), "B'01101001000001'"), (None, "NULL")],
    )
    .await
}

#[tokio::test]
async fn test_varbit_params() {
    let mut bv = BitVec::from_bytes(&[0b0110_1001, 0b0000_0111]);
    bv.pop();
    bv.pop();
    test_type(
        "VARBIT",
        &[
            (Some(bv), "B'01101001000001'"),
            (Some(BitVec::from_bytes(&[])), "B''"),
            (None, "NULL"),
        ],
    )
    .await
}

tokio-postgres-0.7.12/tests/test/types/chrono_04.rs

use chrono_04::{DateTime, NaiveDate, NaiveDateTime, NaiveTime, Utc};
use std::fmt;
use tokio_postgres::types::{Date, FromSqlOwned, Timestamp};
use tokio_postgres::Client;

use crate::connect;
use crate::types::test_type;

#[tokio::test]
async fn test_naive_date_time_params() {
    fn make_check(time: &str) -> (Option<NaiveDateTime>, &str) {
        (
            Some(NaiveDateTime::parse_from_str(time, "'%Y-%m-%d %H:%M:%S.%f'").unwrap()),
            time,
        )
    }

    test_type(
        "TIMESTAMP",
        &[
            make_check("'1970-01-01 00:00:00.010000000'"),
            make_check("'1965-09-25 11:19:33.100314000'"),
            make_check("'2010-02-09 23:11:45.120200000'"),
            (None, "NULL"),
        ],
    )
    .await;
}

#[tokio::test]
async fn test_with_special_naive_date_time_params() {
    fn
 make_check(time: &str) -> (Timestamp<NaiveDateTime>, &str) {
        (
            Timestamp::Value(
                NaiveDateTime::parse_from_str(time, "'%Y-%m-%d %H:%M:%S.%f'").unwrap(),
            ),
            time,
        )
    }

    test_type(
        "TIMESTAMP",
        &[
            make_check("'1970-01-01 00:00:00.010000000'"),
            make_check("'1965-09-25 11:19:33.100314000'"),
            make_check("'2010-02-09 23:11:45.120200000'"),
            (Timestamp::PosInfinity, "'infinity'"),
            (Timestamp::NegInfinity, "'-infinity'"),
        ],
    )
    .await;
}

#[tokio::test]
async fn test_date_time_params() {
    fn make_check(time: &str) -> (Option<DateTime<Utc>>, &str) {
        (
            Some(
                DateTime::parse_from_str(time, "'%Y-%m-%d %H:%M:%S.%f%#z'")
                    .unwrap()
                    .to_utc(),
            ),
            time,
        )
    }

    test_type(
        "TIMESTAMP WITH TIME ZONE",
        &[
            make_check("'1970-01-01 00:00:00.010000000Z'"),
            make_check("'1965-09-25 11:19:33.100314000Z'"),
            make_check("'2010-02-09 23:11:45.120200000Z'"),
            (None, "NULL"),
        ],
    )
    .await;
}

#[tokio::test]
async fn test_with_special_date_time_params() {
    fn make_check(time: &str) -> (Timestamp<DateTime<Utc>>, &str) {
        (
            Timestamp::Value(
                DateTime::parse_from_str(time, "'%Y-%m-%d %H:%M:%S.%f%#z'")
                    .unwrap()
                    .to_utc(),
            ),
            time,
        )
    }

    test_type(
        "TIMESTAMP WITH TIME ZONE",
        &[
            make_check("'1970-01-01 00:00:00.010000000Z'"),
            make_check("'1965-09-25 11:19:33.100314000Z'"),
            make_check("'2010-02-09 23:11:45.120200000Z'"),
            (Timestamp::PosInfinity, "'infinity'"),
            (Timestamp::NegInfinity, "'-infinity'"),
        ],
    )
    .await;
}

#[tokio::test]
async fn test_date_params() {
    fn make_check(time: &str) -> (Option<NaiveDate>, &str) {
        (
            Some(NaiveDate::parse_from_str(time, "'%Y-%m-%d'").unwrap()),
            time,
        )
    }

    test_type(
        "DATE",
        &[
            make_check("'1970-01-01'"),
            make_check("'1965-09-25'"),
            make_check("'2010-02-09'"),
            (None, "NULL"),
        ],
    )
    .await;
}

#[tokio::test]
async fn test_with_special_date_params() {
    fn make_check(date: &str) -> (Date<NaiveDate>, &str) {
        (
            Date::Value(NaiveDate::parse_from_str(date, "'%Y-%m-%d'").unwrap()),
            date,
        )
    }

    test_type(
        "DATE",
        &[
            make_check("'1970-01-01'"),
            make_check("'1965-09-25'"),
            make_check("'2010-02-09'"),
            (Date::PosInfinity, "'infinity'"),
            (Date::NegInfinity, "'-infinity'"),
        ],
    )
    .await;
}

#[tokio::test]
async fn test_time_params() {
    fn make_check(time: &str) -> (Option<NaiveTime>, &str) {
        (
            Some(NaiveTime::parse_from_str(time, "'%H:%M:%S.%f'").unwrap()),
            time,
        )
    }

    test_type(
        "TIME",
        &[
            make_check("'00:00:00.010000000'"),
            make_check("'11:19:33.100314000'"),
            make_check("'23:11:45.120200000'"),
            (None, "NULL"),
        ],
    )
    .await;
}

#[tokio::test]
async fn test_special_params_without_wrapper() {
    async fn assert_overflows<T>(client: &mut Client, val: &str, sql_type: &str)
    where
        T: FromSqlOwned + fmt::Debug,
    {
        let err = client
            .query_one(&*format!("SELECT {}::{}", val, sql_type), &[])
            .await
            .unwrap()
            .try_get::<_, T>(0)
            .unwrap_err();

        assert_eq!(
            err.to_string(),
            "error deserializing column 0: value too large to decode"
        );
    }

    let mut client = connect("user=postgres").await;

    assert_overflows::<DateTime<Utc>>(&mut client, "'-infinity'", "timestamptz").await;
    assert_overflows::<DateTime<Utc>>(&mut client, "'infinity'", "timestamptz").await;

    assert_overflows::<NaiveDateTime>(&mut client, "'-infinity'", "timestamp").await;
    assert_overflows::<NaiveDateTime>(&mut client, "'infinity'", "timestamp").await;

    assert_overflows::<NaiveDate>(&mut client, "'-infinity'", "date").await;
    assert_overflows::<NaiveDate>(&mut client, "'infinity'", "date").await;
}

tokio-postgres-0.7.12/tests/test/types/eui48_1.rs

use eui48_1::MacAddress;

use crate::types::test_type;

#[tokio::test]
async fn test_eui48_params() {
    test_type(
        "MACADDR",
        &[
            (
                Some(MacAddress::parse_str("12-34-56-AB-CD-EF").unwrap()),
                "'12-34-56-ab-cd-ef'",
            ),
            (None, "NULL"),
        ],
    )
    .await
}

tokio-postgres-0.7.12/tests/test/types/geo_types_06.rs

use geo_types_06::{Coordinate, LineString, Point, Rect};

use crate::types::test_type;

#[tokio::test]
async fn test_point_params() {
    test_type(
        "POINT",
        &[
            (Some(Point::new(0.0, 0.0)), "POINT(0, 0)"),
            (Some(Point::new(-3.2, 1.618)), "POINT(-3.2, 1.618)"),
            (None, "NULL"),
        ],
    )
    .await;
}

#[tokio::test]
async fn
 test_box_params() {
    test_type(
        "BOX",
        &[
            (
                Some(Rect::new(
                    Coordinate { x: -3.2, y: 1.618 },
                    Coordinate {
                        x: 160.0,
                        y: 69701.5615,
                    },
                )),
                "BOX(POINT(160.0, 69701.5615), POINT(-3.2, 1.618))",
            ),
            (None, "NULL"),
        ],
    )
    .await;
}

#[tokio::test]
async fn test_path_params() {
    let points = vec![
        Coordinate { x: 0., y: 0. },
        Coordinate { x: -3.2, y: 1.618 },
        Coordinate {
            x: 160.0,
            y: 69701.5615,
        },
    ];
    test_type(
        "PATH",
        &[
            (
                Some(LineString(points)),
                "path '((0, 0), (-3.2, 1.618), (160.0, 69701.5615))'",
            ),
            (None, "NULL"),
        ],
    )
    .await;
}

tokio-postgres-0.7.12/tests/test/types/geo_types_07.rs

#[cfg(feature = "with-geo-types-0_7")]
use geo_types_07::{Coord, LineString, Point, Rect};

use crate::types::test_type;

#[tokio::test]
async fn test_point_params() {
    test_type(
        "POINT",
        &[
            (Some(Point::new(0.0, 0.0)), "POINT(0, 0)"),
            (Some(Point::new(-3.2, 1.618)), "POINT(-3.2, 1.618)"),
            (None, "NULL"),
        ],
    )
    .await;
}

#[tokio::test]
async fn test_box_params() {
    test_type(
        "BOX",
        &[
            (
                Some(Rect::new(
                    Coord { x: -3.2, y: 1.618 },
                    Coord {
                        x: 160.0,
                        y: 69701.5615,
                    },
                )),
                "BOX(POINT(160.0, 69701.5615), POINT(-3.2, 1.618))",
            ),
            (None, "NULL"),
        ],
    )
    .await;
}

#[tokio::test]
async fn test_path_params() {
    let points = vec![
        Coord { x: 0., y: 0.
 },
        Coord { x: -3.2, y: 1.618 },
        Coord {
            x: 160.0,
            y: 69701.5615,
        },
    ];
    test_type(
        "PATH",
        &[
            (
                Some(LineString(points)),
                "path '((0, 0), (-3.2, 1.618), (160.0, 69701.5615))'",
            ),
            (None, "NULL"),
        ],
    )
    .await;
}

tokio-postgres-0.7.12/tests/test/types/jiff_01.rs

use jiff_01::{
    civil::{Date as JiffDate, DateTime, Time},
    Timestamp as JiffTimestamp,
};
use std::fmt;
use tokio_postgres::{
    types::{Date, FromSqlOwned, Timestamp},
    Client,
};

use crate::connect;
use crate::types::test_type;

#[tokio::test]
async fn test_datetime_params() {
    fn make_check(s: &str) -> (Option<DateTime>, &str) {
        (Some(s.trim_matches('\'').parse().unwrap()), s)
    }

    test_type(
        "TIMESTAMP",
        &[
            make_check("'1970-01-01 00:00:00.010000000'"),
            make_check("'1965-09-25 11:19:33.100314000'"),
            make_check("'2010-02-09 23:11:45.120200000'"),
            (None, "NULL"),
        ],
    )
    .await;
}

#[tokio::test]
async fn test_with_special_datetime_params() {
    fn make_check(s: &str) -> (Timestamp<DateTime>, &str) {
        (Timestamp::Value(s.trim_matches('\'').parse().unwrap()), s)
    }

    test_type(
        "TIMESTAMP",
        &[
            make_check("'1970-01-01 00:00:00.010000000'"),
            make_check("'1965-09-25 11:19:33.100314000'"),
            make_check("'2010-02-09 23:11:45.120200000'"),
            (Timestamp::PosInfinity, "'infinity'"),
            (Timestamp::NegInfinity, "'-infinity'"),
        ],
    )
    .await;
}

#[tokio::test]
async fn test_timestamp_params() {
    fn make_check(s: &str) -> (Option<JiffTimestamp>, &str) {
        (Some(s.trim_matches('\'').parse().unwrap()), s)
    }

    test_type(
        "TIMESTAMP WITH TIME ZONE",
        &[
            make_check("'1970-01-01 00:00:00.010000000Z'"),
            make_check("'1965-09-25 11:19:33.100314000Z'"),
            make_check("'2010-02-09 23:11:45.120200000Z'"),
            (None, "NULL"),
        ],
    )
    .await;
}

#[tokio::test]
async fn test_with_special_timestamp_params() {
    fn make_check(s: &str) -> (Timestamp<JiffTimestamp>, &str) {
        (Timestamp::Value(s.trim_matches('\'').parse().unwrap()), s)
    }

    test_type(
        "TIMESTAMP WITH TIME ZONE",
        &[
            make_check("'1970-01-01 00:00:00.010000000Z'"),
            make_check("'1965-09-25
 11:19:33.100314000Z'"),
            make_check("'2010-02-09 23:11:45.120200000Z'"),
            (Timestamp::PosInfinity, "'infinity'"),
            (Timestamp::NegInfinity, "'-infinity'"),
        ],
    )
    .await;
}

#[tokio::test]
async fn test_date_params() {
    fn make_check(s: &str) -> (Option<JiffDate>, &str) {
        (Some(s.trim_matches('\'').parse().unwrap()), s)
    }

    test_type(
        "DATE",
        &[
            make_check("'1970-01-01'"),
            make_check("'1965-09-25'"),
            make_check("'2010-02-09'"),
            (None, "NULL"),
        ],
    )
    .await;
}

#[tokio::test]
async fn test_with_special_date_params() {
    fn make_check(s: &str) -> (Date<JiffDate>, &str) {
        (Date::Value(s.trim_matches('\'').parse().unwrap()), s)
    }

    test_type(
        "DATE",
        &[
            make_check("'1970-01-01'"),
            make_check("'1965-09-25'"),
            make_check("'2010-02-09'"),
            (Date::PosInfinity, "'infinity'"),
            (Date::NegInfinity, "'-infinity'"),
        ],
    )
    .await;
}

#[tokio::test]
async fn test_time_params() {
    fn make_check(s: &str) -> (Option