postgres-0.19.9/.cargo_vcs_info.json0000644000000001460000000000100130030ustar { "git": { "sha1": "6ae17e0f2d174fc4d5592ac7869909d7bdd346d9" }, "path_in_vcs": "postgres" }postgres-0.19.9/CHANGELOG.md000064400000000000000000000212171046102023000134060ustar 00000000000000# Change Log ## Unreleased ## v0.19.9 - 2024-09-15 ### Added * Added support for `jiff` 0.1 via the `with-jiff-01` feature. ## v0.19.8 - 2024-07-21 ### Added * Added `{Client, Transaction, GenericClient}::query_typed`. ## v0.19.7 - 2023-08-25 ## Fixed * Defered default username lookup to avoid regressing `Config` behavior. ## v0.19.6 - 2023-08-19 ### Added * Added support for the `hostaddr` config option to bypass DNS lookups. * Added support for the `load_balance_hosts` config option to randomize connection ordering. * The `user` config option now defaults to the executing process's user. ## v0.19.5 - 2023-03-27 ### Added * Added `keepalives_interval` and `keepalives_retries` config options. * Added the `tcp_user_timeout` config option. * Added `RowIter::rows_affected`. ### Changed * Passing an incorrect number of parameters to a query method now returns an error instead of panicking. ## v0.19.4 - 2022-08-21 ### Added * Added `ToSql` and `FromSql` implementations for `[u8; N]` via the `array-impls` feature. * Added support for `smol_str` 0.1 via the `with-smol_str-01` feature. * Added `ToSql::encode_format` to support text encodings of parameters. ## v0.19.3 - 2022-04-30 ### Added * Added support for `uuid` 1.0 via the `with-uuid-1` feature. ## v0.19.2 - 2021-09-29 ### Added * Added `SimpleQueryRow::columns`. * Added support for `eui48` 1.0 via the `with-eui48-1` feature. * Added `FromSql` and `ToSql` implementations for arrays via the `array-impls` feature. * Added support for `time` 0.3 via the `with-time-0_3` feature. ## v0.19.1 - 2021-04-03 ### Added * Added support for `geo-types` 0.7 via `with-geo-types-0_7` feature. * Added `Client::clear_type_cache`. ## v0.19.0 - 2020-12-25 ### Changed * Upgraded to `tokio-postgres` 0.7. * Methods taking iterators of `ToSql` values can now take both `&dyn ToSql` and `T: ToSql` values. ### Added * Added `Client::is_valid` which can be used to check that the connection is still alive with a timeout. ## v0.18.1 - 2020-10-19 ### Fixed * Restored the `Send` implementation for `Client`. ## v0.18.0 - 2020-10-17 ### Changed * Upgraded to `tokio-postgres` 0.6. ### Added * Added `Config::notice_callback`, which can be used to provide a custom callback for notices. ### Fixed * Fixed client shutdown to explicitly terminate the database session. ## v0.17.5 - 2020-07-19 ### Fixed * Fixed transactions to roll back immediately on drop. ## v0.17.4 - 2020-07-03 ### Added * Added support for `geo-types` 0.6. ## v0.17.3 - 2020-05-01 ### Fixed * Errors sent by the server will now be returned from `Client` methods rather than just being logged. ### Added * Added `Transaction::savepoint`, which can be used to create a savepoint with a custom name. * Added `Client::notifications`, which returns an interface to the notifications sent by the server. ## v0.17.2 - 2020-03-05 ### Added * Added `Debug` implementations for `Client`, `Row`, and `Column`. * Added `time` 0.2 support. ## v0.17.1 - 2020-01-31 ### Added * Added `Client::build_transaction` to allow configuration of various transaction options. * Added `Client::cancel_token`, which returns a separate owned object that can be used to cancel queries. * Added accessors for `Config` fields. 
* Added a `GenericClient` trait implemented for `Client` and `Transaction` and covering shared functionality. ## v0.17.0 - 2019-12-23 ### Changed * Each `Client` now has its own non-threaded tokio `Runtime` rather than sharing a global threaded `Runtime`. This significantly improves performance by minimizing context switches and cross-thread synchronization. * `Client::copy_in` now returns a writer rather than taking in a reader. * `Client::query_raw` now returns a named type. * `Client::copy_in` and `Client::copy_out` no longer take query parameters as PostgreSQL doesn't support them in COPY queries. ### Removed * Removed support for `uuid` 0.7. ### Added * Added `Client::query_opt` for queries that are expected to return zero or one rows. * Added binary copy support in the `binary_copy` module. * The `fallible-iterator` crate is now publicly reexported. ## v0.17.0-alpha.2 - 2019-11-27 ### Changed * Changed `Config::executor` to `Config::spawner`. ### Added * Added support for `uuid` 0.8. * Added `Transaction::query_one`. ## v0.17.0-alpha.1 - 2019-10-14 ### Changed * Updated `tokio-postgres` to 0.5.0-alpha.1. ## v0.16.0-rc.2 - 2019-06-29 ### Fixed * Documentation fixes ## v0.16.0-rc.1 - 2019-04-06 ### Changed * `Connection` has been renamed to `Client`. * The `Client` type is now a thin wrapper around the tokio-postgres nonblocking client. By default, this is handled transparently by spawning connections onto an internal tokio `Runtime`, but this can also be controlled explicitly. * The `ConnectParams` type and `IntoConnectParams` trait have been replaced by a builder-style `Config` type. Before: ```rust let params = ConnectParams::builder() .user("postgres", None) .build(Host::Tcp("localhost".to_string())) .build(); let conn = Connection::connect(params, &TlsMode::None)?; ``` After: ```rust let client = Client::configure() .user("postgres") .host("localhost") .connect(NoTls)?; ``` * The TLS connection mode (e.g. `prefer`) is now part of the connection configuration instead of being passed in separately. Before: ```rust let conn = Connection::connect("postgres://postgres@localhost", &TlsMode::Prefer(connector))?; ``` After: ```rust let client = Client::connect("postgres://postgres@localhost?sslmode=prefer", connector)?; ``` * `Client` and `Transaction` methods take `&mut self` rather than `&self`, and correct use of the active transaction is verified at compile time rather than runtime. * `Row` no longer borrows any data. * `Statement` is now a "token" which is passed into methods on `Client` and `Transaction` and does not borrow the client: Before: ```rust let statement = conn.prepare("SELECT * FROM foo WHERE bar = $1")?; let rows = statement.query(&[&1i32])?; ``` After: ```rust let statement = client.prepare("SELECT * FROM foo WHERE bar = $1")?; let rows = client.query(&statement, &[1i32])?; ``` * `Statement::lazy_query` has been replaced with `Transaction::bind`, which returns a `Portal` type that can be used with `Transaction::query_portal`. * `Statement::copy_in` and `Statement::copy_out` have been moved to `Client` and `Transaction`. * `Client::copy_out` and `Transaction::copy_out` now return a `Read`er rather than consuming in a `Write`r. * `Connection::batch_execute` and `Transaction::batch_execute` have been replaced with `Client::simple_query` and `Transaction::simple_query`. * The Cargo features enabling `ToSql` and `FromSql` implementations for external crates are now versioned. For example, `with-uuid` is now `with-uuid-0_7`. 
This enables us to add support for new major versions of the crates in parallel without breaking backwards compatibility. ### Added * Connection string configuration now more fully mirrors libpq's syntax, and supports both URL-style and key-value style strings. * `FromSql` implementations can now borrow from the data buffer. In particular, this means that you can deserialize values as `&str`. The `FromSqlOwned` trait can be used as a bound to restrict code to deserializing owned values. * Added support for channel binding with SCRAM authentication. * Added multi-host support in connection configuration. * Added support for simple query requests returning row data. * Added variants of query methods which return fallible iterators of values and avoid fully buffering the response in memory. ### Removed * The `with-openssl` and `with-native-tls` Cargo features have been removed. Use the `tokio-postgres-openssl` and `tokio-postgres-native-tls` crates instead. * The `with-rustc_serialize` and `with-time` Cargo features have been removed. Use `serde` and `SystemTime` or `chrono` instead. * The `Transaction::set_commit` and `Transaction::set_rollback` methods have been removed. The only way to commit a transaction is to explicitly consume it via `Transaction::commit`. * The `Rows` type has been removed; methods now return `Vec` instead. * `Connection::prepare_cache` has been removed, as `Statement` is now `'static` and can be more easily cached externally. * Some other slightly more obscure features have been removed in the initial release. If you depended on them, please file an issue and we can find the right design to add them back! ## Older Look at the [release tags] for information about older releases. [release tags]: https://github.com/sfackler/rust-postgres/releases postgres-0.19.9/Cargo.toml0000644000000040730000000000100110040ustar # THIS FILE IS AUTOMATICALLY GENERATED BY CARGO # # When uploading crates to the registry Cargo will automatically # "normalize" Cargo.toml files for maximal compatibility # with all versions of Cargo and also rewrite `path` dependencies # to registry (e.g., crates.io) dependencies. # # If you are reading this file be aware that the original Cargo.toml # will likely look very different (and much more reasonable). # See Cargo.toml.orig for the original contents. 
[package] edition = "2018" name = "postgres" version = "0.19.9" authors = ["Steven Fackler "] description = "A native, synchronous PostgreSQL client" readme = "README.md" keywords = [ "database", "postgres", "postgresql", "sql", ] categories = ["database"] license = "MIT OR Apache-2.0" repository = "https://github.com/sfackler/rust-postgres" resolver = "2" [package.metadata.docs.rs] all-features = true [[bench]] name = "bench" harness = false [dependencies.bytes] version = "1.0" [dependencies.fallible-iterator] version = "0.2" [dependencies.futures-util] version = "0.3.14" features = ["sink"] [dependencies.log] version = "0.4" [dependencies.tokio] version = "1.0" features = [ "rt", "time", ] [dependencies.tokio-postgres] version = "0.7.12" [dev-dependencies.criterion] version = "0.5" [features] array-impls = ["tokio-postgres/array-impls"] with-bit-vec-0_6 = ["tokio-postgres/with-bit-vec-0_6"] with-chrono-0_4 = ["tokio-postgres/with-chrono-0_4"] with-eui48-0_4 = ["tokio-postgres/with-eui48-0_4"] with-eui48-1 = ["tokio-postgres/with-eui48-1"] with-geo-types-0_6 = ["tokio-postgres/with-geo-types-0_6"] with-geo-types-0_7 = ["tokio-postgres/with-geo-types-0_7"] with-jiff-0_1 = ["tokio-postgres/with-jiff-0_1"] with-serde_json-1 = ["tokio-postgres/with-serde_json-1"] with-smol_str-01 = ["tokio-postgres/with-smol_str-01"] with-time-0_2 = ["tokio-postgres/with-time-0_2"] with-time-0_3 = ["tokio-postgres/with-time-0_3"] with-uuid-0_8 = ["tokio-postgres/with-uuid-0_8"] with-uuid-1 = ["tokio-postgres/with-uuid-1"] [badges.circle-ci] repository = "sfackler/rust-postgres" postgres-0.19.9/Cargo.toml.orig0000644000000030040000000000100117340ustar [package] name = "postgres" version = "0.19.9" authors = ["Steven Fackler "] edition = "2018" license = "MIT OR Apache-2.0" description = "A native, synchronous PostgreSQL client" repository = "https://github.com/sfackler/rust-postgres" readme = "../README.md" keywords = ["database", "postgres", "postgresql", "sql"] categories = ["database"] [[bench]] name = "bench" harness = false [package.metadata.docs.rs] all-features = true [badges] circle-ci = { repository = "sfackler/rust-postgres" } [features] array-impls = ["tokio-postgres/array-impls"] with-bit-vec-0_6 = ["tokio-postgres/with-bit-vec-0_6"] with-chrono-0_4 = ["tokio-postgres/with-chrono-0_4"] with-eui48-0_4 = ["tokio-postgres/with-eui48-0_4"] with-eui48-1 = ["tokio-postgres/with-eui48-1"] with-geo-types-0_6 = ["tokio-postgres/with-geo-types-0_6"] with-geo-types-0_7 = ["tokio-postgres/with-geo-types-0_7"] with-jiff-0_1 = ["tokio-postgres/with-jiff-0_1"] with-serde_json-1 = ["tokio-postgres/with-serde_json-1"] with-smol_str-01 = ["tokio-postgres/with-smol_str-01"] with-uuid-0_8 = ["tokio-postgres/with-uuid-0_8"] with-uuid-1 = ["tokio-postgres/with-uuid-1"] with-time-0_2 = ["tokio-postgres/with-time-0_2"] with-time-0_3 = ["tokio-postgres/with-time-0_3"] [dependencies] bytes = "1.0" fallible-iterator = "0.2" futures-util = { version = "0.3.14", features = ["sink"] } log = "0.4" tokio-postgres = { version = "0.7.12", path = "../tokio-postgres" } tokio = { version = "1.0", features = ["rt", "time"] } [dev-dependencies] criterion = "0.5" postgres-0.19.9/Cargo.toml.orig000064400000000000000000000030041046102023000144560ustar 00000000000000[package] name = "postgres" version = "0.19.9" authors = ["Steven Fackler "] edition = "2018" license = "MIT OR Apache-2.0" description = "A native, synchronous PostgreSQL client" repository = "https://github.com/sfackler/rust-postgres" readme = "../README.md" keywords = 
["database", "postgres", "postgresql", "sql"] categories = ["database"] [[bench]] name = "bench" harness = false [package.metadata.docs.rs] all-features = true [badges] circle-ci = { repository = "sfackler/rust-postgres" } [features] array-impls = ["tokio-postgres/array-impls"] with-bit-vec-0_6 = ["tokio-postgres/with-bit-vec-0_6"] with-chrono-0_4 = ["tokio-postgres/with-chrono-0_4"] with-eui48-0_4 = ["tokio-postgres/with-eui48-0_4"] with-eui48-1 = ["tokio-postgres/with-eui48-1"] with-geo-types-0_6 = ["tokio-postgres/with-geo-types-0_6"] with-geo-types-0_7 = ["tokio-postgres/with-geo-types-0_7"] with-jiff-0_1 = ["tokio-postgres/with-jiff-0_1"] with-serde_json-1 = ["tokio-postgres/with-serde_json-1"] with-smol_str-01 = ["tokio-postgres/with-smol_str-01"] with-uuid-0_8 = ["tokio-postgres/with-uuid-0_8"] with-uuid-1 = ["tokio-postgres/with-uuid-1"] with-time-0_2 = ["tokio-postgres/with-time-0_2"] with-time-0_3 = ["tokio-postgres/with-time-0_3"] [dependencies] bytes = "1.0" fallible-iterator = "0.2" futures-util = { version = "0.3.14", features = ["sink"] } log = "0.4" tokio-postgres = { version = "0.7.12", path = "../tokio-postgres" } tokio = { version = "1.0", features = ["rt", "time"] } [dev-dependencies] criterion = "0.5" postgres-0.19.9/LICENSE-APACHE000064400000000000000000000251371046102023000135260ustar 00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. 
"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. postgres-0.19.9/LICENSE-MIT000064400000000000000000000020721046102023000132270ustar 00000000000000The MIT License (MIT) Copyright (c) 2016 Steven Fackler Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. postgres-0.19.9/README.md000064400000000000000000000032371046102023000130560ustar 00000000000000# Rust-Postgres PostgreSQL support for Rust. ## postgres [![Latest Version](https://img.shields.io/crates/v/postgres.svg)](https://crates.io/crates/postgres) [Documentation](https://docs.rs/postgres) A native, synchronous PostgreSQL client. ## tokio-postgres [![Latest Version](https://img.shields.io/crates/v/tokio-postgres.svg)](https://crates.io/crates/tokio-postgres) [Documentation](https://docs.rs/tokio-postgres) A native, asynchronous PostgreSQL client. ## postgres-types [![Latest Version](https://img.shields.io/crates/v/postgres-types.svg)](https://crates.io/crates/postgres-types) [Documentation](https://docs.rs/postgres-types) Conversions between Rust and Postgres types. ## postgres-native-tls [![Latest Version](https://img.shields.io/crates/v/postgres-native-tls.svg)](https://crates.io/crates/postgres-native-tls) [Documentation](https://docs.rs/postgres-native-tls) TLS support for postgres and tokio-postgres via native-tls. ## postgres-openssl [![Latest Version](https://img.shields.io/crates/v/postgres-openssl.svg)](https://crates.io/crates/postgres-openssl) [Documentation](https://docs.rs/postgres-openssl) TLS support for postgres and tokio-postgres via openssl. # Running test suite The test suite requires postgres to be running in the correct configuration. The easiest way to do this is with docker: 1. Install `docker` and `docker-compose`. 1. On ubuntu: `sudo apt install docker.io docker-compose`. 1. Make sure your user has permissions for docker. 1. On ubuntu: ``sudo usermod -aG docker $USER`` 1. Change to top-level directory of `rust-postgres` repo. 1. Run `docker-compose up -d`. 1. Run `cargo test`. 1. Run `docker-compose stop`. postgres-0.19.9/src/binary_copy.rs000064400000000000000000000063461046102023000152560ustar 00000000000000//! Utilities for working with the PostgreSQL binary copy format. use crate::connection::ConnectionRef; use crate::types::{BorrowToSql, ToSql, Type}; use crate::{CopyInWriter, CopyOutReader, Error}; use fallible_iterator::FallibleIterator; use futures_util::StreamExt; use std::pin::Pin; #[doc(inline)] pub use tokio_postgres::binary_copy::BinaryCopyOutRow; use tokio_postgres::binary_copy::{self, BinaryCopyOutStream}; /// A type which serializes rows into the PostgreSQL binary copy format. /// /// The copy *must* be explicitly completed via the `finish` method. If it is not, the copy will be aborted. pub struct BinaryCopyInWriter<'a> { connection: ConnectionRef<'a>, sink: Pin>, } impl<'a> BinaryCopyInWriter<'a> { /// Creates a new writer which will write rows of the provided types. pub fn new(writer: CopyInWriter<'a>, types: &[Type]) -> BinaryCopyInWriter<'a> { let stream = writer .sink .into_unpinned() .expect("writer has already been written to"); BinaryCopyInWriter { connection: writer.connection, sink: Box::pin(binary_copy::BinaryCopyInWriter::new(stream, types)), } } /// Writes a single row. /// /// # Panics /// /// Panics if the number of values provided does not match the number expected. pub fn write(&mut self, values: &[&(dyn ToSql + Sync)]) -> Result<(), Error> { self.connection.block_on(self.sink.as_mut().write(values)) } /// A maximally-flexible version of `write`. 
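    ///
    /// An illustrative sketch of how this might be used (the `people` table, its column types, and
    /// the sample values here are placeholders, not part of this crate):
    ///
    /// ```no_run
    /// use postgres::binary_copy::BinaryCopyInWriter;
    /// use postgres::types::{ToSql, Type};
    /// use postgres::{Client, NoTls};
    ///
    /// # fn main() -> Result<(), postgres::Error> {
    /// let mut client = Client::connect("host=localhost user=postgres", NoTls)?;
    ///
    /// // Start a binary COPY and wrap the writer with the expected column types.
    /// let writer = client.copy_in("COPY people (id, name) FROM STDIN BINARY")?;
    /// let mut writer = BinaryCopyInWriter::new(writer, &[Type::INT4, Type::TEXT]);
    ///
    /// // `write_raw` accepts any exact-size iterator of values implementing `BorrowToSql`.
    /// writer.write_raw(vec![&1i32 as &dyn ToSql, &"john"])?;
    /// writer.finish()?;
    /// # Ok(())
    /// # }
    /// ```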
    ///
    /// # Panics
    ///
    /// Panics if the number of values provided does not match the number expected.
    pub fn write_raw<P, I>(&mut self, values: I) -> Result<(), Error>
    where
        P: BorrowToSql,
        I: IntoIterator<Item = P>,
        I::IntoIter: ExactSizeIterator,
    {
        self.connection
            .block_on(self.sink.as_mut().write_raw(values))
    }

    /// Completes the copy, returning the number of rows added.
    ///
    /// This method *must* be used to complete the copy process. If it is not, the copy will be aborted.
    pub fn finish(mut self) -> Result<u64, Error> {
        self.connection.block_on(self.sink.as_mut().finish())
    }
}

/// An iterator of rows deserialized from the PostgreSQL binary copy format.
pub struct BinaryCopyOutIter<'a> {
    connection: ConnectionRef<'a>,
    stream: Pin<Box<BinaryCopyOutStream>>,
}

impl<'a> BinaryCopyOutIter<'a> {
    /// Creates a new iterator from a raw copy out reader and the types of the columns being returned.
    pub fn new(reader: CopyOutReader<'a>, types: &[Type]) -> BinaryCopyOutIter<'a> {
        let stream = reader
            .stream
            .into_unpinned()
            .expect("reader has already been read from");
        BinaryCopyOutIter {
            connection: reader.connection,
            stream: Box::pin(BinaryCopyOutStream::new(stream, types)),
        }
    }
}

impl FallibleIterator for BinaryCopyOutIter<'_> {
    type Item = BinaryCopyOutRow;
    type Error = Error;

    fn next(&mut self) -> Result<Option<BinaryCopyOutRow>, Error> {
        let stream = &mut self.stream;
        self.connection
            .block_on(async { stream.next().await.transpose() })
    }
}
postgres-0.19.9/src/cancel_token.rs000064400000000000000000000023531046102023000153570ustar 00000000000000
use tokio::runtime;
use tokio_postgres::tls::MakeTlsConnect;
use tokio_postgres::{Error, Socket};

/// The capability to request cancellation of in-progress queries on a
/// connection.
#[derive(Clone)]
pub struct CancelToken(tokio_postgres::CancelToken);

impl CancelToken {
    pub(crate) fn new(inner: tokio_postgres::CancelToken) -> CancelToken {
        CancelToken(inner)
    }

    /// Attempts to cancel the in-progress query on the connection associated
    /// with this `CancelToken`.
    ///
    /// The server provides no information about whether a cancellation attempt was successful or not. An error will
    /// only be returned if the client was unable to connect to the database.
    ///
    /// Cancellation is inherently racy. There is no guarantee that the
    /// cancellation request will reach the server before the query terminates
    /// normally, or that the connection associated with this token is still
    /// active.
    pub fn cancel_query<T>(&self, tls: T) -> Result<(), Error>
    where
        T: MakeTlsConnect<Socket>,
    {
        runtime::Builder::new_current_thread()
            .enable_all()
            .build()
            .unwrap() // FIXME don't unwrap
            .block_on(self.0.cancel_query(tls))
    }
}
postgres-0.19.9/src/client.rs000064400000000000000000000550171046102023000142110ustar 00000000000000
use crate::connection::Connection;
use crate::{
    CancelToken, Config, CopyInWriter, CopyOutReader, Notifications, RowIter, Statement,
    ToStatement, Transaction, TransactionBuilder,
};
use std::task::Poll;
use std::time::Duration;
use tokio_postgres::tls::{MakeTlsConnect, TlsConnect};
use tokio_postgres::types::{BorrowToSql, ToSql, Type};
use tokio_postgres::{Error, Row, SimpleQueryMessage, Socket};

/// A synchronous PostgreSQL client.
pub struct Client {
    connection: Connection,
    client: tokio_postgres::Client,
}

impl Drop for Client {
    fn drop(&mut self) {
        let _ = self.close_inner();
    }
}

impl Client {
    pub(crate) fn new(connection: Connection, client: tokio_postgres::Client) -> Client {
        Client { connection, client }
    }

    /// A convenience function which parses a configuration string into a `Config` and then connects to the database.
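    ///
    /// A minimal sketch; the host and user below are placeholders:
    ///
    /// ```no_run
    /// use postgres::{Client, NoTls};
    ///
    /// # fn main() -> Result<(), postgres::Error> {
    /// let mut client = Client::connect("host=localhost user=postgres", NoTls)?;
    /// client.batch_execute("CREATE TABLE IF NOT EXISTS person (name TEXT)")?;
    /// # Ok(())
    /// # }
    /// ```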
    ///
    /// See the documentation for [`Config`] for information about the connection syntax.
    ///
    /// [`Config`]: config/struct.Config.html
    pub fn connect<T>(params: &str, tls_mode: T) -> Result<Client, Error>
    where
        T: MakeTlsConnect<Socket> + 'static + Send,
        T::TlsConnect: Send,
        T::Stream: Send,
        <T::TlsConnect as TlsConnect<Socket>>::Future: Send,
    {
        params.parse::<Config>()?.connect(tls_mode)
    }

    /// Returns a new `Config` object which can be used to configure and connect to a database.
    pub fn configure() -> Config {
        Config::new()
    }

    /// Executes a statement, returning the number of rows modified.
    ///
    /// A statement may contain parameters, specified by `$n`, where `n` is the index of the parameter of the list
    /// provided, 1-indexed.
    ///
    /// If the statement does not modify any rows (e.g. `SELECT`), 0 is returned.
    ///
    /// The `query` argument can either be a `Statement`, or a raw query string. If the same statement will be
    /// repeatedly executed (perhaps with different query parameters), consider preparing the statement up front
    /// with the `prepare` method.
    ///
    /// # Example
    ///
    /// ```no_run
    /// use postgres::{Client, NoTls};
    ///
    /// # fn main() -> Result<(), postgres::Error> {
    /// let mut client = Client::connect("host=localhost user=postgres", NoTls)?;
    ///
    /// let bar = 1i32;
    /// let baz = true;
    /// let rows_updated = client.execute(
    ///     "UPDATE foo SET bar = $1 WHERE baz = $2",
    ///     &[&bar, &baz],
    /// )?;
    ///
    /// println!("{} rows updated", rows_updated);
    /// # Ok(())
    /// # }
    /// ```
    pub fn execute<T>(&mut self, query: &T, params: &[&(dyn ToSql + Sync)]) -> Result<u64, Error>
    where
        T: ?Sized + ToStatement,
    {
        self.connection.block_on(self.client.execute(query, params))
    }

    /// Executes a statement, returning the resulting rows.
    ///
    /// A statement may contain parameters, specified by `$n`, where `n` is the index of the parameter of the list
    /// provided, 1-indexed.
    ///
    /// The `query` argument can either be a `Statement`, or a raw query string. If the same statement will be
    /// repeatedly executed (perhaps with different query parameters), consider preparing the statement up front
    /// with the `prepare` method.
    ///
    /// # Examples
    ///
    /// ```no_run
    /// use postgres::{Client, NoTls};
    ///
    /// # fn main() -> Result<(), postgres::Error> {
    /// let mut client = Client::connect("host=localhost user=postgres", NoTls)?;
    ///
    /// let baz = true;
    /// for row in client.query("SELECT foo FROM bar WHERE baz = $1", &[&baz])? {
    ///     let foo: i32 = row.get("foo");
    ///     println!("foo: {}", foo);
    /// }
    /// # Ok(())
    /// # }
    /// ```
    pub fn query<T>(&mut self, query: &T, params: &[&(dyn ToSql + Sync)]) -> Result<Vec<Row>, Error>
    where
        T: ?Sized + ToStatement,
    {
        self.connection.block_on(self.client.query(query, params))
    }

    /// Executes a statement which returns a single row, returning it.
    ///
    /// Returns an error if the query does not return exactly one row.
    ///
    /// A statement may contain parameters, specified by `$n`, where `n` is the index of the parameter of the list
    /// provided, 1-indexed.
    ///
    /// The `query` argument can either be a `Statement`, or a raw query string. If the same statement will be
    /// repeatedly executed (perhaps with different query parameters), consider preparing the statement up front
    /// with the `prepare` method.
    ///
    /// # Examples
    ///
    /// ```no_run
    /// use postgres::{Client, NoTls};
    ///
    /// # fn main() -> Result<(), postgres::Error> {
    /// let mut client = Client::connect("host=localhost user=postgres", NoTls)?;
    ///
    /// let baz = true;
    /// let row = client.query_one("SELECT foo FROM bar WHERE baz = $1", &[&baz])?;
    /// let foo: i32 = row.get("foo");
    /// println!("foo: {}", foo);
    /// # Ok(())
    /// # }
    /// ```
    pub fn query_one<T>(&mut self, query: &T, params: &[&(dyn ToSql + Sync)]) -> Result<Row, Error>
    where
        T: ?Sized + ToStatement,
    {
        self.connection
            .block_on(self.client.query_one(query, params))
    }

    /// Executes a statement which returns zero or one rows, returning it.
    ///
    /// Returns an error if the query returns more than one row.
    ///
    /// A statement may contain parameters, specified by `$n`, where `n` is the index of the parameter of the list
    /// provided, 1-indexed.
    ///
    /// The `query` argument can either be a `Statement`, or a raw query string. If the same statement will be
    /// repeatedly executed (perhaps with different query parameters), consider preparing the statement up front
    /// with the `prepare` method.
    ///
    /// # Examples
    ///
    /// ```no_run
    /// use postgres::{Client, NoTls};
    ///
    /// # fn main() -> Result<(), postgres::Error> {
    /// let mut client = Client::connect("host=localhost user=postgres", NoTls)?;
    ///
    /// let baz = true;
    /// let row = client.query_opt("SELECT foo FROM bar WHERE baz = $1", &[&baz])?;
    /// match row {
    ///     Some(row) => {
    ///         let foo: i32 = row.get("foo");
    ///         println!("foo: {}", foo);
    ///     }
    ///     None => println!("no matching foo"),
    /// }
    /// # Ok(())
    /// # }
    /// ```
    pub fn query_opt<T>(
        &mut self,
        query: &T,
        params: &[&(dyn ToSql + Sync)],
    ) -> Result<Option<Row>, Error>
    where
        T: ?Sized + ToStatement,
    {
        self.connection
            .block_on(self.client.query_opt(query, params))
    }

    /// A maximally-flexible version of `query`.
    ///
    /// It takes an iterator of parameters rather than a slice, and returns an iterator of rows rather than collecting
    /// them into an array.
    ///
    /// # Examples
    ///
    /// ```no_run
    /// use postgres::{Client, NoTls};
    /// use fallible_iterator::FallibleIterator;
    /// use std::iter;
    ///
    /// # fn main() -> Result<(), postgres::Error> {
    /// let mut client = Client::connect("host=localhost user=postgres", NoTls)?;
    ///
    /// let baz = true;
    /// let mut it = client.query_raw("SELECT foo FROM bar WHERE baz = $1", iter::once(baz))?;
    ///
    /// while let Some(row) = it.next()? {
    ///     let foo: i32 = row.get("foo");
    ///     println!("foo: {}", foo);
    /// }
    /// # Ok(())
    /// # }
    /// ```
    ///
    /// If you have a type like `Vec<T>` where `T: ToSql` Rust will not know how to use it as params. To get around
    /// this the type must explicitly be converted to `&dyn ToSql`.
    ///
    /// ```no_run
    /// # use postgres::{Client, NoTls};
    /// use postgres::types::ToSql;
    /// use fallible_iterator::FallibleIterator;
    /// # fn main() -> Result<(), postgres::Error> {
    /// # let mut client = Client::connect("host=localhost user=postgres", NoTls)?;
    ///
    /// let params: Vec<String> = vec![
    ///     "first param".into(),
    ///     "second param".into(),
    /// ];
    /// let mut it = client.query_raw(
    ///     "SELECT foo FROM bar WHERE biz = $1 AND baz = $2",
    ///     params,
    /// )?;
    ///
    /// while let Some(row) = it.next()? {
    ///     let foo: i32 = row.get("foo");
    ///     println!("foo: {}", foo);
    /// }
    /// # Ok(())
    /// # }
    /// ```
    pub fn query_raw<T, P, I>(&mut self, query: &T, params: I) -> Result<RowIter<'_>, Error>
    where
        T: ?Sized + ToStatement,
        P: BorrowToSql,
        I: IntoIterator<Item = P>,
        I::IntoIter: ExactSizeIterator,
    {
        let stream = self
            .connection
            .block_on(self.client.query_raw(query, params))?;
        Ok(RowIter::new(self.connection.as_ref(), stream))
    }

    /// Like `query`, but requires the types of query parameters to be explicitly specified.
    ///
    /// Compared to `query`, this method allows performing queries without three round trips (for
    /// prepare, execute, and close) by requiring the caller to specify parameter values along with
    /// their Postgres type. Thus, this is suitable in environments where prepared statements aren't
    /// supported (such as Cloudflare Workers with Hyperdrive).
    ///
    /// A statement may contain parameters, specified by `$n`, where `n` is the index of the
    /// parameter of the list provided, 1-indexed.
    pub fn query_typed(
        &mut self,
        query: &str,
        params: &[(&(dyn ToSql + Sync), Type)],
    ) -> Result<Vec<Row>, Error> {
        self.connection
            .block_on(self.client.query_typed(query, params))
    }

    /// The maximally flexible version of [`query_typed`].
    ///
    /// Compared to `query`, this method allows performing queries without three round trips (for
    /// prepare, execute, and close) by requiring the caller to specify parameter values along with
    /// their Postgres type. Thus, this is suitable in environments where prepared statements aren't
    /// supported (such as Cloudflare Workers with Hyperdrive).
    ///
    /// A statement may contain parameters, specified by `$n`, where `n` is the index of the
    /// parameter of the list provided, 1-indexed.
    ///
    /// [`query_typed`]: #method.query_typed
    ///
    /// # Examples
    /// ```no_run
    /// # use postgres::{Client, NoTls};
    /// use postgres::types::{ToSql, Type};
    /// use fallible_iterator::FallibleIterator;
    /// # fn main() -> Result<(), postgres::Error> {
    /// # let mut client = Client::connect("host=localhost user=postgres", NoTls)?;
    ///
    /// let params: Vec<(String, Type)> = vec![
    ///     ("first param".into(), Type::TEXT),
    ///     ("second param".into(), Type::TEXT),
    /// ];
    /// let mut it = client.query_typed_raw(
    ///     "SELECT foo FROM bar WHERE biz = $1 AND baz = $2",
    ///     params,
    /// )?;
    ///
    /// while let Some(row) = it.next()? {
    ///     let foo: i32 = row.get("foo");
    ///     println!("foo: {}", foo);
    /// }
    /// # Ok(())
    /// # }
    /// ```
    pub fn query_typed_raw<P, I>(&mut self, query: &str, params: I) -> Result<RowIter<'_>, Error>
    where
        P: BorrowToSql,
        I: IntoIterator<Item = (P, Type)>,
    {
        let stream = self
            .connection
            .block_on(self.client.query_typed_raw(query, params))?;
        Ok(RowIter::new(self.connection.as_ref(), stream))
    }

    /// Creates a new prepared statement.
    ///
    /// Prepared statements can be executed repeatedly, and may contain query parameters (indicated by `$1`, `$2`, etc),
    /// which are set when executed. Prepared statements can only be used with the connection that created them.
    ///
    /// # Examples
    ///
    /// ```no_run
    /// use postgres::{Client, NoTls};
    ///
    /// # fn main() -> Result<(), postgres::Error> {
    /// let mut client = Client::connect("host=localhost user=postgres", NoTls)?;
    ///
    /// let statement = client.prepare("SELECT name FROM people WHERE id = $1")?;
    ///
    /// for id in 0..10 {
    ///     let rows = client.query(&statement, &[&id])?;
    ///     let name: &str = rows[0].get(0);
    ///     println!("name: {}", name);
    /// }
    /// # Ok(())
    /// # }
    /// ```
    pub fn prepare(&mut self, query: &str) -> Result<Statement, Error> {
        self.connection.block_on(self.client.prepare(query))
    }

    /// Like `prepare`, but allows the types of query parameters to be explicitly specified.
    ///
    /// The list of types may be smaller than the number of parameters - the types of the remaining parameters will be
    /// inferred. For example, `client.prepare_typed(query, &[])` is equivalent to `client.prepare(query)`.
    ///
    /// # Examples
    ///
    /// ```no_run
    /// use postgres::{Client, NoTls};
    /// use postgres::types::Type;
    ///
    /// # fn main() -> Result<(), postgres::Error> {
    /// let mut client = Client::connect("host=localhost user=postgres", NoTls)?;
    ///
    /// let statement = client.prepare_typed(
    ///     "SELECT name FROM people WHERE id = $1",
    ///     &[Type::INT8],
    /// )?;
    ///
    /// for id in 0..10 {
    ///     let rows = client.query(&statement, &[&id])?;
    ///     let name: &str = rows[0].get(0);
    ///     println!("name: {}", name);
    /// }
    /// # Ok(())
    /// # }
    /// ```
    pub fn prepare_typed(&mut self, query: &str, types: &[Type]) -> Result<Statement, Error> {
        self.connection
            .block_on(self.client.prepare_typed(query, types))
    }

    /// Executes a `COPY FROM STDIN` statement, returning the number of rows created.
    ///
    /// The `query` argument can either be a `Statement`, or a raw query string. The data in the provided reader is
    /// passed along to the server verbatim; it is the caller's responsibility to ensure it uses the proper format.
    /// PostgreSQL does not support parameters in `COPY` statements, so this method does not take any.
    ///
    /// The copy *must* be explicitly completed via the `finish` method. If it is not, the copy will be aborted.
    ///
    /// # Examples
    ///
    /// ```no_run
    /// use postgres::{Client, NoTls};
    /// use std::io::Write;
    ///
    /// # fn main() -> Result<(), Box<dyn std::error::Error>> {
    /// let mut client = Client::connect("host=localhost user=postgres", NoTls)?;
    ///
    /// let mut writer = client.copy_in("COPY people FROM stdin")?;
    /// writer.write_all(b"1\tjohn\n2\tjane\n")?;
    /// writer.finish()?;
    /// # Ok(())
    /// # }
    /// ```
    pub fn copy_in<T>(&mut self, query: &T) -> Result<CopyInWriter<'_>, Error>
    where
        T: ?Sized + ToStatement,
    {
        let sink = self.connection.block_on(self.client.copy_in(query))?;
        Ok(CopyInWriter::new(self.connection.as_ref(), sink))
    }

    /// Executes a `COPY TO STDOUT` statement, returning a reader of the resulting data.
    ///
    /// The `query` argument can either be a `Statement`, or a raw query string. PostgreSQL does not support parameters
    /// in `COPY` statements, so this method does not take any.
    ///
    /// # Examples
    ///
    /// ```no_run
    /// use postgres::{Client, NoTls};
    /// use std::io::Read;
    ///
    /// # fn main() -> Result<(), Box<dyn std::error::Error>> {
    /// let mut client = Client::connect("host=localhost user=postgres", NoTls)?;
    ///
    /// let mut reader = client.copy_out("COPY people TO stdout")?;
    /// let mut buf = vec![];
    /// reader.read_to_end(&mut buf)?;
    /// # Ok(())
    /// # }
    /// ```
    pub fn copy_out<T>(&mut self, query: &T) -> Result<CopyOutReader<'_>, Error>
    where
        T: ?Sized + ToStatement,
    {
        let stream = self.connection.block_on(self.client.copy_out(query))?;
        Ok(CopyOutReader::new(self.connection.as_ref(), stream))
    }

    /// Executes a sequence of SQL statements using the simple query protocol.
    ///
    /// Statements should be separated by semicolons. If an error occurs, execution of the sequence will stop at that
    /// point. The simple query protocol returns the values in rows as strings rather than in their binary encodings,
    /// so the associated row type doesn't work with the `FromSql` trait. Rather than simply returning the rows, this
    /// method returns a sequence of an enum which indicates either the completion of one of the commands, or a row of
    /// data. This preserves the framing between the separate statements in the request.
    ///
    /// This is a simple convenience method over `simple_query_iter`.
    ///
    /// # Warning
    ///
    /// Prepared statements should be used for any query which contains user-specified data, as they provide the
    /// functionality to safely embed that data in the request. Do not form statements via string concatenation and pass
    /// them to this method!
    pub fn simple_query(&mut self, query: &str) -> Result<Vec<SimpleQueryMessage>, Error> {
        self.connection.block_on(self.client.simple_query(query))
    }

    /// Validates the connection by performing a simple no-op query.
    ///
    /// If the specified timeout is reached before the backend responds, an error will be returned.
    pub fn is_valid(&mut self, timeout: Duration) -> Result<(), Error> {
        let inner_client = &self.client;
        self.connection.block_on(async {
            let trivial_query = inner_client.simple_query("");
            tokio::time::timeout(timeout, trivial_query)
                .await
                .map_err(|_| Error::__private_api_timeout())?
                .map(|_| ())
        })
    }

    /// Executes a sequence of SQL statements using the simple query protocol.
    ///
    /// Statements should be separated by semicolons. If an error occurs, execution of the sequence will stop at that
    /// point. This is intended for use when, for example, initializing a database schema.
    ///
    /// # Warning
    ///
    /// Prepared statements should be used for any query which contains user-specified data, as they provide the
    /// functionality to safely embed that data in the request. Do not form statements via string concatenation and pass
    /// them to this method!
    pub fn batch_execute(&mut self, query: &str) -> Result<(), Error> {
        self.connection.block_on(self.client.batch_execute(query))
    }

    /// Begins a new database transaction.
    ///
    /// The transaction will roll back by default - use the `commit` method to commit it.
    ///
    /// # Examples
    ///
    /// ```no_run
    /// use postgres::{Client, NoTls};
    ///
    /// # fn main() -> Result<(), postgres::Error> {
    /// let mut client = Client::connect("host=localhost user=postgres", NoTls)?;
    ///
    /// let mut transaction = client.transaction()?;
    /// transaction.execute("UPDATE foo SET bar = 10", &[])?;
    /// // ...
/// /// transaction.commit()?; /// # Ok(()) /// # } /// ``` pub fn transaction(&mut self) -> Result, Error> { let transaction = self.connection.block_on(self.client.transaction())?; Ok(Transaction::new(self.connection.as_ref(), transaction)) } /// Returns a builder for a transaction with custom settings. /// /// Unlike the `transaction` method, the builder can be used to control the transaction's isolation level and other /// attributes. /// /// # Examples /// /// ```no_run /// use postgres::{Client, IsolationLevel, NoTls}; /// /// # fn main() -> Result<(), postgres::Error> { /// let mut client = Client::connect("host=localhost user=postgres", NoTls)?; /// /// let mut transaction = client.build_transaction() /// .isolation_level(IsolationLevel::RepeatableRead) /// .start()?; /// transaction.execute("UPDATE foo SET bar = 10", &[])?; /// // ... /// /// transaction.commit()?; /// # Ok(()) /// # } /// ``` pub fn build_transaction(&mut self) -> TransactionBuilder<'_> { TransactionBuilder::new(self.connection.as_ref(), self.client.build_transaction()) } /// Returns a structure providing access to asynchronous notifications. /// /// Use the `LISTEN` command to register this connection for notifications. pub fn notifications(&mut self) -> Notifications<'_> { Notifications::new(self.connection.as_ref()) } /// Constructs a cancellation token that can later be used to request cancellation of a query running on this /// connection. /// /// # Examples /// /// ```no_run /// use postgres::{Client, NoTls}; /// use postgres::error::SqlState; /// use std::thread; /// use std::time::Duration; /// /// # fn main() -> Result<(), Box> { /// let mut client = Client::connect("host=localhost user=postgres", NoTls)?; /// /// let cancel_token = client.cancel_token(); /// /// thread::spawn(move || { /// // Abort the query after 5s. /// thread::sleep(Duration::from_secs(5)); /// let _ = cancel_token.cancel_query(NoTls); /// }); /// /// match client.simple_query("SELECT long_running_query()") { /// Err(e) if e.code() == Some(&SqlState::QUERY_CANCELED) => { /// // Handle canceled query. /// } /// Err(err) => return Err(err.into()), /// Ok(rows) => { /// // ... /// } /// } /// // ... /// /// # Ok(()) /// # } /// ``` pub fn cancel_token(&self) -> CancelToken { CancelToken::new(self.client.cancel_token()) } /// Clears the client's type information cache. /// /// When user-defined types are used in a query, the client loads their definitions from the database and caches /// them for the lifetime of the client. If those definitions are changed in the database, this method can be used /// to flush the local cache and allow the new, updated definitions to be loaded. pub fn clear_type_cache(&self) { self.client.clear_type_cache(); } /// Determines if the client's connection has already closed. /// /// If this returns `true`, the client is no longer usable. pub fn is_closed(&self) -> bool { self.client.is_closed() } /// Closes the client's connection to the server. /// /// This is equivalent to `Client`'s `Drop` implementation, except that it returns any error encountered to the /// caller. pub fn close(mut self) -> Result<(), Error> { self.close_inner() } fn close_inner(&mut self) -> Result<(), Error> { self.client.__private_api_close(); self.connection.poll_block_on(|_, _, done| { if done { Poll::Ready(Ok(())) } else { Poll::Pending } }) } } postgres-0.19.9/src/config.rs000064400000000000000000000452451046102023000142060ustar 00000000000000//! Connection configuration. 
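//!
//! A brief sketch of typical use (the connection parameters below are placeholders): a `Config`
//! can be parsed from a libpq-style connection string or built up with its setter methods, and
//! then used to open a `Client`.
//!
//! ```no_run
//! use postgres::{Config, NoTls};
//!
//! # fn main() -> Result<(), postgres::Error> {
//! // Parse a key-value connection string into a `Config` and connect.
//! let mut client = "host=localhost user=postgres connect_timeout=10"
//!     .parse::<Config>()?
//!     .connect(NoTls)?;
//! client.batch_execute("CREATE TABLE IF NOT EXISTS person (name TEXT)")?;
//!
//! // The same configuration, built with the setter methods instead.
//! let _client = Config::new()
//!     .host("localhost")
//!     .user("postgres")
//!     .connect_timeout(std::time::Duration::from_secs(10))
//!     .connect(NoTls)?;
//! # Ok(())
//! # }
//! ```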
use crate::connection::Connection; use crate::Client; use log::info; use std::fmt; use std::net::IpAddr; use std::path::Path; use std::str::FromStr; use std::sync::Arc; use std::time::Duration; use tokio::runtime; #[doc(inline)] pub use tokio_postgres::config::{ ChannelBinding, Host, LoadBalanceHosts, SslMode, TargetSessionAttrs, }; use tokio_postgres::error::DbError; use tokio_postgres::tls::{MakeTlsConnect, TlsConnect}; use tokio_postgres::{Error, Socket}; /// Connection configuration. /// /// Configuration can be parsed from libpq-style connection strings. These strings come in two formats: /// /// # Key-Value /// /// This format consists of space-separated key-value pairs. Values which are either the empty string or contain /// whitespace should be wrapped in `'`. `'` and `\` characters should be backslash-escaped. /// /// ## Keys /// /// * `user` - The username to authenticate with. Defaults to the user executing this process. /// * `password` - The password to authenticate with. /// * `dbname` - The name of the database to connect to. Defaults to the username. /// * `options` - Command line options used to configure the server. /// * `application_name` - Sets the `application_name` parameter on the server. /// * `sslmode` - Controls usage of TLS. If set to `disable`, TLS will not be used. If set to `prefer`, TLS will be used /// if available, but not used otherwise. If set to `require`, TLS will be forced to be used. Defaults to `prefer`. /// * `host` - The host to connect to. On Unix platforms, if the host starts with a `/` character it is treated as the /// path to the directory containing Unix domain sockets. Otherwise, it is treated as a hostname. Multiple hosts /// can be specified, separated by commas. Each host will be tried in turn when connecting. Required if connecting /// with the `connect` method. /// * `hostaddr` - Numeric IP address of host to connect to. This should be in the standard IPv4 address format, /// e.g., 172.28.40.9. If your machine supports IPv6, you can also use those addresses. /// If this parameter is not specified, the value of `host` will be looked up to find the corresponding IP address, /// or if host specifies an IP address, that value will be used directly. /// Using `hostaddr` allows the application to avoid a host name look-up, which might be important in applications /// with time constraints. However, a host name is required for TLS certificate verification. /// Specifically: /// * If `hostaddr` is specified without `host`, the value for `hostaddr` gives the server network address. /// The connection attempt will fail if the authentication method requires a host name; /// * If `host` is specified without `hostaddr`, a host name lookup occurs; /// * If both `host` and `hostaddr` are specified, the value for `hostaddr` gives the server network address. /// The value for `host` is ignored unless the authentication method requires it, /// in which case it will be used as the host name. /// * `port` - The port to connect to. Multiple ports can be specified, separated by commas. The number of ports must be /// either 1, in which case it will be used for all hosts, or the same as the number of hosts. Defaults to 5432 if /// omitted or the empty string. /// * `connect_timeout` - The time limit in seconds applied to each socket-level connection attempt. Note that hostnames /// can resolve to multiple IP addresses, and this limit is applied to each address. Defaults to no timeout. 
/// * `tcp_user_timeout` - The time limit that transmitted data may remain unacknowledged before a connection is forcibly closed. /// This is ignored for Unix domain socket connections. It is only supported on systems where TCP_USER_TIMEOUT is available /// and will default to the system default if omitted or set to 0; on other systems, it has no effect. /// * `keepalives` - Controls the use of TCP keepalive. A value of 0 disables keepalive and nonzero integers enable it. /// This option is ignored when connecting with Unix sockets. Defaults to on. /// * `keepalives_idle` - The number of seconds of inactivity after which a keepalive message is sent to the server. /// This option is ignored when connecting with Unix sockets. Defaults to 2 hours. /// * `keepalives_interval` - The time interval between TCP keepalive probes. /// This option is ignored when connecting with Unix sockets. /// * `keepalives_retries` - The maximum number of TCP keepalive probes that will be sent before dropping a connection. /// This option is ignored when connecting with Unix sockets. /// * `target_session_attrs` - Specifies requirements of the session. If set to `read-write`, the client will check that /// the `transaction_read_write` session parameter is set to `on`. This can be used to connect to the primary server /// in a database cluster as opposed to the secondary read-only mirrors. Defaults to `all`. /// * `channel_binding` - Controls usage of channel binding in the authentication process. If set to `disable`, channel /// binding will not be used. If set to `prefer`, channel binding will be used if available, but not used otherwise. /// If set to `require`, the authentication process will fail if channel binding is not used. Defaults to `prefer`. /// * `load_balance_hosts` - Controls the order in which the client tries to connect to the available hosts and /// addresses. Once a connection attempt is successful no other hosts and addresses will be tried. This parameter /// is typically used in combination with multiple host names or a DNS record that returns multiple IPs. If set to /// `disable`, hosts and addresses will be tried in the order provided. If set to `random`, hosts will be tried /// in a random order, and the IP addresses resolved from a hostname will also be tried in a random order. Defaults /// to `disable`. /// /// ## Examples /// /// ```not_rust /// host=localhost user=postgres connect_timeout=10 keepalives=0 /// ``` /// /// ```not_rust /// host=/var/lib/postgresql,localhost port=1234 user=postgres password='password with spaces' /// ``` /// /// ```not_rust /// host=host1,host2,host3 port=1234,,5678 hostaddr=127.0.0.1,127.0.0.2,127.0.0.3 user=postgres target_session_attrs=read-write /// ``` /// /// ```not_rust /// host=host1,host2,host3 port=1234,,5678 user=postgres target_session_attrs=read-write /// ``` /// /// # Url /// /// This format resembles a URL with a scheme of either `postgres://` or `postgresql://`. All components are optional, /// and the format accepts query parameters for all of the key-value pairs described in the section above. Multiple /// host/port pairs can be comma-separated. Unix socket paths in the host section of the URL should be percent-encoded, /// as the path component of the URL specifies the database name. 
/// /// ## Examples /// /// ```not_rust /// postgresql://user@localhost /// ``` /// /// ```not_rust /// postgresql://user:password@%2Fvar%2Flib%2Fpostgresql/mydb?connect_timeout=10 /// ``` /// /// ```not_rust /// postgresql://user@host1:1234,host2,host3:5678?target_session_attrs=read-write /// ``` /// /// ```not_rust /// postgresql:///mydb?user=user&host=/var/lib/postgresql /// ``` #[derive(Clone)] pub struct Config { config: tokio_postgres::Config, notice_callback: Arc, } impl fmt::Debug for Config { fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result { fmt.debug_struct("Config") .field("config", &self.config) .finish() } } impl Default for Config { fn default() -> Config { Config::new() } } impl Config { /// Creates a new configuration. pub fn new() -> Config { tokio_postgres::Config::new().into() } /// Sets the user to authenticate with. /// /// If the user is not set, then this defaults to the user executing this process. pub fn user(&mut self, user: &str) -> &mut Config { self.config.user(user); self } /// Gets the user to authenticate with, if one has been configured with /// the `user` method. pub fn get_user(&self) -> Option<&str> { self.config.get_user() } /// Sets the password to authenticate with. pub fn password(&mut self, password: T) -> &mut Config where T: AsRef<[u8]>, { self.config.password(password); self } /// Gets the password to authenticate with, if one has been configured with /// the `password` method. pub fn get_password(&self) -> Option<&[u8]> { self.config.get_password() } /// Sets the name of the database to connect to. /// /// Defaults to the user. pub fn dbname(&mut self, dbname: &str) -> &mut Config { self.config.dbname(dbname); self } /// Gets the name of the database to connect to, if one has been configured /// with the `dbname` method. pub fn get_dbname(&self) -> Option<&str> { self.config.get_dbname() } /// Sets command line options used to configure the server. pub fn options(&mut self, options: &str) -> &mut Config { self.config.options(options); self } /// Gets the command line options used to configure the server, if the /// options have been set with the `options` method. pub fn get_options(&self) -> Option<&str> { self.config.get_options() } /// Sets the value of the `application_name` runtime parameter. pub fn application_name(&mut self, application_name: &str) -> &mut Config { self.config.application_name(application_name); self } /// Gets the value of the `application_name` runtime parameter, if it has /// been set with the `application_name` method. pub fn get_application_name(&self) -> Option<&str> { self.config.get_application_name() } /// Sets the SSL configuration. /// /// Defaults to `prefer`. pub fn ssl_mode(&mut self, ssl_mode: SslMode) -> &mut Config { self.config.ssl_mode(ssl_mode); self } /// Gets the SSL configuration. pub fn get_ssl_mode(&self) -> SslMode { self.config.get_ssl_mode() } /// Adds a host to the configuration. /// /// Multiple hosts can be specified by calling this method multiple times, and each will be tried in order. On Unix /// systems, a host starting with a `/` is interpreted as a path to a directory containing Unix domain sockets. /// There must be either no hosts, or the same number of hosts as hostaddrs. pub fn host(&mut self, host: &str) -> &mut Config { self.config.host(host); self } /// Gets the hosts that have been added to the configuration with `host`. pub fn get_hosts(&self) -> &[Host] { self.config.get_hosts() } /// Gets the hostaddrs that have been added to the configuration with `hostaddr`. 
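///
/// A sketch of pairing `host` (used for TLS certificate verification) with `hostaddr`
/// (used to skip the DNS lookup); the host name and address below are illustrative:
///
/// ```
/// use postgres::Config;
///
/// let mut config = Config::new();
/// config
///     .host("db.example.com")
///     .hostaddr("192.0.2.10".parse().unwrap())
///     .user("postgres");
/// ```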
    pub fn get_hostaddrs(&self) -> &[IpAddr] {
        self.config.get_hostaddrs()
    }

    /// Adds a Unix socket host to the configuration.
    ///
    /// Unlike `host`, this method allows non-UTF8 paths.
    #[cfg(unix)]
    pub fn host_path<T>(&mut self, host: T) -> &mut Config
    where
        T: AsRef<Path>,
    {
        self.config.host_path(host);
        self
    }

    /// Adds a hostaddr to the configuration.
    ///
    /// Multiple hostaddrs can be specified by calling this method multiple times, and each will be tried in order.
    /// There must be either no hostaddrs, or the same number of hostaddrs as hosts.
    pub fn hostaddr(&mut self, hostaddr: IpAddr) -> &mut Config {
        self.config.hostaddr(hostaddr);
        self
    }

    /// Adds a port to the configuration.
    ///
    /// Multiple ports can be specified by calling this method multiple times. There must either be no ports, in which
    /// case the default of 5432 is used, a single port, in which case it is used for all hosts, or the same number of
    /// ports as hosts.
    pub fn port(&mut self, port: u16) -> &mut Config {
        self.config.port(port);
        self
    }

    /// Gets the ports that have been added to the configuration with `port`.
    pub fn get_ports(&self) -> &[u16] {
        self.config.get_ports()
    }

    /// Sets the timeout applied to socket-level connection attempts.
    ///
    /// Note that hostnames can resolve to multiple IP addresses, and this timeout will apply to each address of each
    /// host separately. Defaults to no limit.
    pub fn connect_timeout(&mut self, connect_timeout: Duration) -> &mut Config {
        self.config.connect_timeout(connect_timeout);
        self
    }

    /// Gets the connection timeout, if one has been set with the
    /// `connect_timeout` method.
    pub fn get_connect_timeout(&self) -> Option<&Duration> {
        self.config.get_connect_timeout()
    }

    /// Sets the TCP user timeout.
    ///
    /// This is ignored for Unix domain socket connections. It is only supported on systems where
    /// TCP_USER_TIMEOUT is available and will default to the system default if omitted or set to 0;
    /// on other systems, it has no effect.
    pub fn tcp_user_timeout(&mut self, tcp_user_timeout: Duration) -> &mut Config {
        self.config.tcp_user_timeout(tcp_user_timeout);
        self
    }

    /// Gets the TCP user timeout, if one has been set with the
    /// `tcp_user_timeout` method.
    pub fn get_tcp_user_timeout(&self) -> Option<&Duration> {
        self.config.get_tcp_user_timeout()
    }

    /// Controls the use of TCP keepalive.
    ///
    /// This is ignored for Unix domain socket connections. Defaults to `true`.
    pub fn keepalives(&mut self, keepalives: bool) -> &mut Config {
        self.config.keepalives(keepalives);
        self
    }

    /// Reports whether TCP keepalives will be used.
    pub fn get_keepalives(&self) -> bool {
        self.config.get_keepalives()
    }

    /// Sets the amount of idle time before a keepalive packet is sent on the connection.
    ///
    /// This is ignored for Unix domain sockets, or if the `keepalives` option is disabled. Defaults to 2 hours.
    pub fn keepalives_idle(&mut self, keepalives_idle: Duration) -> &mut Config {
        self.config.keepalives_idle(keepalives_idle);
        self
    }

    /// Gets the configured amount of idle time before a keepalive packet will
    /// be sent on the connection.
    pub fn get_keepalives_idle(&self) -> Duration {
        self.config.get_keepalives_idle()
    }

    /// Sets the time interval between TCP keepalive probes.
    /// On Windows, this sets the value of the tcp_keepalive struct’s keepaliveinterval field.
    ///
    /// This is ignored for Unix domain sockets, or if the `keepalives` option is disabled.
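///
/// A sketch of tuning keepalive probing through the builder (the idle time, interval, and
/// retry count below are illustrative, not recommendations):
///
/// ```
/// use std::time::Duration;
/// use postgres::Config;
///
/// let mut config = Config::new();
/// config
///     .keepalives(true)
///     .keepalives_idle(Duration::from_secs(60))
///     .keepalives_interval(Duration::from_secs(10))
///     .keepalives_retries(3);
/// ```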
pub fn keepalives_interval(&mut self, keepalives_interval: Duration) -> &mut Config { self.config.keepalives_interval(keepalives_interval); self } /// Gets the time interval between TCP keepalive probes. pub fn get_keepalives_interval(&self) -> Option { self.config.get_keepalives_interval() } /// Sets the maximum number of TCP keepalive probes that will be sent before dropping a connection. /// /// This is ignored for Unix domain sockets, or if the `keepalives` option is disabled. pub fn keepalives_retries(&mut self, keepalives_retries: u32) -> &mut Config { self.config.keepalives_retries(keepalives_retries); self } /// Gets the maximum number of TCP keepalive probes that will be sent before dropping a connection. pub fn get_keepalives_retries(&self) -> Option { self.config.get_keepalives_retries() } /// Sets the requirements of the session. /// /// This can be used to connect to the primary server in a clustered database rather than one of the read-only /// secondary servers. Defaults to `Any`. pub fn target_session_attrs( &mut self, target_session_attrs: TargetSessionAttrs, ) -> &mut Config { self.config.target_session_attrs(target_session_attrs); self } /// Gets the requirements of the session. pub fn get_target_session_attrs(&self) -> TargetSessionAttrs { self.config.get_target_session_attrs() } /// Sets the channel binding behavior. /// /// Defaults to `prefer`. pub fn channel_binding(&mut self, channel_binding: ChannelBinding) -> &mut Config { self.config.channel_binding(channel_binding); self } /// Gets the channel binding behavior. pub fn get_channel_binding(&self) -> ChannelBinding { self.config.get_channel_binding() } /// Sets the host load balancing behavior. /// /// Defaults to `disable`. pub fn load_balance_hosts(&mut self, load_balance_hosts: LoadBalanceHosts) -> &mut Config { self.config.load_balance_hosts(load_balance_hosts); self } /// Gets the host load balancing behavior. pub fn get_load_balance_hosts(&self) -> LoadBalanceHosts { self.config.get_load_balance_hosts() } /// Sets the notice callback. /// /// This callback will be invoked with the contents of every /// [`AsyncMessage::Notice`] that is received by the connection. Notices use /// the same structure as errors, but they are not "errors" per-se. /// /// Notices are distinct from notifications, which are instead accessible /// via the [`Notifications`] API. /// /// [`AsyncMessage::Notice`]: tokio_postgres::AsyncMessage::Notice /// [`Notifications`]: crate::Notifications pub fn notice_callback(&mut self, f: F) -> &mut Config where F: Fn(DbError) + Send + Sync + 'static, { self.notice_callback = Arc::new(f); self } /// Opens a connection to a PostgreSQL database. 
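///
/// # Examples
///
/// A minimal sketch (host and user are placeholders):
///
/// ```no_run
/// use postgres::{Config, NoTls};
///
/// # fn main() -> Result<(), postgres::Error> {
/// let client = Config::new()
///     .host("localhost")
///     .user("postgres")
///     .connect(NoTls)?;
/// # Ok(())
/// # }
/// ```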
pub fn connect(&self, tls: T) -> Result where T: MakeTlsConnect + 'static + Send, T::TlsConnect: Send, T::Stream: Send, >::Future: Send, { let runtime = runtime::Builder::new_current_thread() .enable_all() .build() .unwrap(); // FIXME don't unwrap let (client, connection) = runtime.block_on(self.config.connect(tls))?; let connection = Connection::new(runtime, connection, self.notice_callback.clone()); Ok(Client::new(connection, client)) } } impl FromStr for Config { type Err = Error; fn from_str(s: &str) -> Result { s.parse::().map(Config::from) } } impl From for Config { fn from(config: tokio_postgres::Config) -> Config { Config { config, notice_callback: Arc::new(|notice| { info!("{}: {}", notice.severity(), notice.message()) }), } } } postgres-0.19.9/src/connection.rs000064400000000000000000000075301046102023000150730ustar 00000000000000use crate::{Error, Notification}; use futures_util::{future, pin_mut, Stream}; use std::collections::VecDeque; use std::future::Future; use std::ops::{Deref, DerefMut}; use std::pin::Pin; use std::sync::Arc; use std::task::{Context, Poll}; use tokio::io::{AsyncRead, AsyncWrite}; use tokio::runtime::Runtime; use tokio_postgres::error::DbError; use tokio_postgres::AsyncMessage; pub struct Connection { runtime: Runtime, connection: Pin> + Send>>, notifications: VecDeque, notice_callback: Arc, } impl Connection { pub fn new( runtime: Runtime, connection: tokio_postgres::Connection, notice_callback: Arc, ) -> Connection where S: AsyncRead + AsyncWrite + Unpin + 'static + Send, T: AsyncRead + AsyncWrite + Unpin + 'static + Send, { Connection { runtime, connection: Box::pin(ConnectionStream { connection }), notifications: VecDeque::new(), notice_callback, } } pub fn as_ref(&mut self) -> ConnectionRef<'_> { ConnectionRef { connection: self } } pub fn enter(&self, f: F) -> T where F: FnOnce() -> T, { let _guard = self.runtime.enter(); f() } pub fn block_on(&mut self, future: F) -> Result where F: Future>, { pin_mut!(future); self.poll_block_on(|cx, _, _| future.as_mut().poll(cx)) } pub fn poll_block_on(&mut self, mut f: F) -> Result where F: FnMut(&mut Context<'_>, &mut VecDeque, bool) -> Poll>, { let connection = &mut self.connection; let notifications = &mut self.notifications; let notice_callback = &mut self.notice_callback; self.runtime.block_on({ future::poll_fn(|cx| { let done = loop { match connection.as_mut().poll_next(cx) { Poll::Ready(Some(Ok(AsyncMessage::Notification(notification)))) => { notifications.push_back(notification); } Poll::Ready(Some(Ok(AsyncMessage::Notice(notice)))) => { notice_callback(notice) } Poll::Ready(Some(Ok(_))) => {} Poll::Ready(Some(Err(e))) => return Poll::Ready(Err(e)), Poll::Ready(None) => break true, Poll::Pending => break false, } }; f(cx, notifications, done) }) }) } pub fn notifications(&self) -> &VecDeque { &self.notifications } pub fn notifications_mut(&mut self) -> &mut VecDeque { &mut self.notifications } } pub struct ConnectionRef<'a> { connection: &'a mut Connection, } // no-op impl to extend the borrow until drop impl Drop for ConnectionRef<'_> { #[inline] fn drop(&mut self) {} } impl Deref for ConnectionRef<'_> { type Target = Connection; #[inline] fn deref(&self) -> &Connection { self.connection } } impl DerefMut for ConnectionRef<'_> { #[inline] fn deref_mut(&mut self) -> &mut Connection { self.connection } } struct ConnectionStream { connection: tokio_postgres::Connection, } impl Stream for ConnectionStream where S: AsyncRead + AsyncWrite + Unpin, T: AsyncRead + AsyncWrite + Unpin, { type Item = Result; fn 
poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { self.connection.poll_message(cx) } } postgres-0.19.9/src/copy_in_writer.rs000064400000000000000000000032021046102023000157600ustar 00000000000000use crate::connection::ConnectionRef; use crate::lazy_pin::LazyPin; use bytes::{Bytes, BytesMut}; use futures_util::SinkExt; use std::io; use std::io::Write; use tokio_postgres::{CopyInSink, Error}; /// The writer returned by the `copy_in` method. /// /// The copy *must* be explicitly completed via the `finish` method. If it is not, the copy will be aborted. pub struct CopyInWriter<'a> { pub(crate) connection: ConnectionRef<'a>, pub(crate) sink: LazyPin>, buf: BytesMut, } impl<'a> CopyInWriter<'a> { pub(crate) fn new(connection: ConnectionRef<'a>, sink: CopyInSink) -> CopyInWriter<'a> { CopyInWriter { connection, sink: LazyPin::new(sink), buf: BytesMut::new(), } } /// Completes the copy, returning the number of rows written. /// /// If this is not called, the copy will be aborted. pub fn finish(mut self) -> Result { self.flush_inner()?; self.connection.block_on(self.sink.pinned().finish()) } fn flush_inner(&mut self) -> Result<(), Error> { if self.buf.is_empty() { return Ok(()); } self.connection .block_on(self.sink.pinned().send(self.buf.split().freeze())) } } impl Write for CopyInWriter<'_> { fn write(&mut self, buf: &[u8]) -> io::Result { if self.buf.len() > 4096 { self.flush()?; } self.buf.extend_from_slice(buf); Ok(buf.len()) } fn flush(&mut self) -> io::Result<()> { self.flush_inner() .map_err(|e| io::Error::new(io::ErrorKind::Other, e)) } } postgres-0.19.9/src/copy_out_reader.rs000064400000000000000000000030111046102023000161050ustar 00000000000000use crate::connection::ConnectionRef; use crate::lazy_pin::LazyPin; use bytes::{Buf, Bytes}; use futures_util::StreamExt; use std::io::{self, BufRead, Read}; use tokio_postgres::CopyOutStream; /// The reader returned by the `copy_out` method. pub struct CopyOutReader<'a> { pub(crate) connection: ConnectionRef<'a>, pub(crate) stream: LazyPin, cur: Bytes, } impl<'a> CopyOutReader<'a> { pub(crate) fn new(connection: ConnectionRef<'a>, stream: CopyOutStream) -> CopyOutReader<'a> { CopyOutReader { connection, stream: LazyPin::new(stream), cur: Bytes::new(), } } } impl Read for CopyOutReader<'_> { fn read(&mut self, buf: &mut [u8]) -> io::Result { let b = self.fill_buf()?; let len = usize::min(buf.len(), b.len()); buf[..len].copy_from_slice(&b[..len]); self.consume(len); Ok(len) } } impl BufRead for CopyOutReader<'_> { fn fill_buf(&mut self) -> io::Result<&[u8]> { while !self.cur.has_remaining() { let mut stream = self.stream.pinned(); match self .connection .block_on(async { stream.next().await.transpose() }) { Ok(Some(cur)) => self.cur = cur, Err(e) => return Err(io::Error::new(io::ErrorKind::Other, e)), Ok(None) => break, }; } Ok(&self.cur) } fn consume(&mut self, amt: usize) { self.cur.advance(amt); } } postgres-0.19.9/src/generic_client.rs000064400000000000000000000170251046102023000157060ustar 00000000000000use crate::types::{BorrowToSql, ToSql, Type}; use crate::{ Client, CopyInWriter, CopyOutReader, Error, Row, RowIter, SimpleQueryMessage, Statement, ToStatement, Transaction, }; mod private { pub trait Sealed {} } /// A trait allowing abstraction over connections and transactions. /// /// This trait is "sealed", and cannot be implemented outside of this crate. pub trait GenericClient: private::Sealed { /// Like `Client::execute`. 
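///
/// `GenericClient` makes it possible to write helpers that accept either a `Client` or an
/// open `Transaction`. A sketch (the `person` table and `row_count` helper are illustrative):
///
/// ```no_run
/// use postgres::{Client, GenericClient, NoTls};
///
/// fn row_count<C: GenericClient>(client: &mut C) -> Result<i64, postgres::Error> {
///     let row = client.query_one("SELECT count(*) FROM person", &[])?;
///     Ok(row.get(0))
/// }
///
/// # fn main() -> Result<(), postgres::Error> {
/// let mut client = Client::connect("host=localhost user=postgres", NoTls)?;
/// println!("outside a transaction: {}", row_count(&mut client)?);
///
/// let mut transaction = client.transaction()?;
/// println!("inside a transaction: {}", row_count(&mut transaction)?);
/// transaction.commit()?;
/// # Ok(())
/// # }
/// ```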
fn execute(&mut self, query: &T, params: &[&(dyn ToSql + Sync)]) -> Result where T: ?Sized + ToStatement; /// Like `Client::query`. fn query(&mut self, query: &T, params: &[&(dyn ToSql + Sync)]) -> Result, Error> where T: ?Sized + ToStatement; /// Like `Client::query_one`. fn query_one(&mut self, query: &T, params: &[&(dyn ToSql + Sync)]) -> Result where T: ?Sized + ToStatement; /// Like `Client::query_opt`. fn query_opt( &mut self, query: &T, params: &[&(dyn ToSql + Sync)], ) -> Result, Error> where T: ?Sized + ToStatement; /// Like `Client::query_raw`. fn query_raw(&mut self, query: &T, params: I) -> Result, Error> where T: ?Sized + ToStatement, P: BorrowToSql, I: IntoIterator, I::IntoIter: ExactSizeIterator; /// Like [`Client::query_typed`] fn query_typed( &mut self, statement: &str, params: &[(&(dyn ToSql + Sync), Type)], ) -> Result, Error>; /// Like [`Client::query_typed_raw`] fn query_typed_raw(&mut self, statement: &str, params: I) -> Result, Error> where P: BorrowToSql, I: IntoIterator + Sync + Send; /// Like `Client::prepare`. fn prepare(&mut self, query: &str) -> Result; /// Like `Client::prepare_typed`. fn prepare_typed(&mut self, query: &str, types: &[Type]) -> Result; /// Like `Client::copy_in`. fn copy_in(&mut self, query: &T) -> Result, Error> where T: ?Sized + ToStatement; /// Like `Client::copy_out`. fn copy_out(&mut self, query: &T) -> Result, Error> where T: ?Sized + ToStatement; /// Like `Client::simple_query`. fn simple_query(&mut self, query: &str) -> Result, Error>; /// Like `Client::batch_execute`. fn batch_execute(&mut self, query: &str) -> Result<(), Error>; /// Like `Client::transaction`. fn transaction(&mut self) -> Result, Error>; } impl private::Sealed for Client {} impl GenericClient for Client { fn execute(&mut self, query: &T, params: &[&(dyn ToSql + Sync)]) -> Result where T: ?Sized + ToStatement, { self.execute(query, params) } fn query(&mut self, query: &T, params: &[&(dyn ToSql + Sync)]) -> Result, Error> where T: ?Sized + ToStatement, { self.query(query, params) } fn query_one(&mut self, query: &T, params: &[&(dyn ToSql + Sync)]) -> Result where T: ?Sized + ToStatement, { self.query_one(query, params) } fn query_opt( &mut self, query: &T, params: &[&(dyn ToSql + Sync)], ) -> Result, Error> where T: ?Sized + ToStatement, { self.query_opt(query, params) } fn query_raw(&mut self, query: &T, params: I) -> Result, Error> where T: ?Sized + ToStatement, P: BorrowToSql, I: IntoIterator, I::IntoIter: ExactSizeIterator, { self.query_raw(query, params) } fn query_typed( &mut self, statement: &str, params: &[(&(dyn ToSql + Sync), Type)], ) -> Result, Error> { self.query_typed(statement, params) } fn query_typed_raw(&mut self, statement: &str, params: I) -> Result, Error> where P: BorrowToSql, I: IntoIterator + Sync + Send, { self.query_typed_raw(statement, params) } fn prepare(&mut self, query: &str) -> Result { self.prepare(query) } fn prepare_typed(&mut self, query: &str, types: &[Type]) -> Result { self.prepare_typed(query, types) } fn copy_in(&mut self, query: &T) -> Result, Error> where T: ?Sized + ToStatement, { self.copy_in(query) } fn copy_out(&mut self, query: &T) -> Result, Error> where T: ?Sized + ToStatement, { self.copy_out(query) } fn simple_query(&mut self, query: &str) -> Result, Error> { self.simple_query(query) } fn batch_execute(&mut self, query: &str) -> Result<(), Error> { self.batch_execute(query) } fn transaction(&mut self) -> Result, Error> { self.transaction() } } impl private::Sealed for Transaction<'_> {} impl GenericClient for 
Transaction<'_> { fn execute(&mut self, query: &T, params: &[&(dyn ToSql + Sync)]) -> Result where T: ?Sized + ToStatement, { self.execute(query, params) } fn query(&mut self, query: &T, params: &[&(dyn ToSql + Sync)]) -> Result, Error> where T: ?Sized + ToStatement, { self.query(query, params) } fn query_one(&mut self, query: &T, params: &[&(dyn ToSql + Sync)]) -> Result where T: ?Sized + ToStatement, { self.query_one(query, params) } fn query_opt( &mut self, query: &T, params: &[&(dyn ToSql + Sync)], ) -> Result, Error> where T: ?Sized + ToStatement, { self.query_opt(query, params) } fn query_raw(&mut self, query: &T, params: I) -> Result, Error> where T: ?Sized + ToStatement, P: BorrowToSql, I: IntoIterator, I::IntoIter: ExactSizeIterator, { self.query_raw(query, params) } fn query_typed( &mut self, statement: &str, params: &[(&(dyn ToSql + Sync), Type)], ) -> Result, Error> { self.query_typed(statement, params) } fn query_typed_raw(&mut self, statement: &str, params: I) -> Result, Error> where P: BorrowToSql, I: IntoIterator + Sync + Send, { self.query_typed_raw(statement, params) } fn prepare(&mut self, query: &str) -> Result { self.prepare(query) } fn prepare_typed(&mut self, query: &str, types: &[Type]) -> Result { self.prepare_typed(query, types) } fn copy_in(&mut self, query: &T) -> Result, Error> where T: ?Sized + ToStatement, { self.copy_in(query) } fn copy_out(&mut self, query: &T) -> Result, Error> where T: ?Sized + ToStatement, { self.copy_out(query) } fn simple_query(&mut self, query: &str) -> Result, Error> { self.simple_query(query) } fn batch_execute(&mut self, query: &str) -> Result<(), Error> { self.batch_execute(query) } fn transaction(&mut self) -> Result, Error> { self.transaction() } } postgres-0.19.9/src/lazy_pin.rs000064400000000000000000000010351046102023000145530ustar 00000000000000use std::pin::Pin; pub(crate) struct LazyPin { value: Box, pinned: bool, } impl LazyPin { pub fn new(value: T) -> LazyPin { LazyPin { value: Box::new(value), pinned: false, } } pub fn pinned(&mut self) -> Pin<&mut T> { self.pinned = true; unsafe { Pin::new_unchecked(&mut *self.value) } } pub fn into_unpinned(self) -> Option { if self.pinned { None } else { Some(*self.value) } } } postgres-0.19.9/src/lib.rs000064400000000000000000000104111046102023000134720ustar 00000000000000//! A synchronous client for the PostgreSQL database. //! //! # Example //! //! ```no_run //! use postgres::{Client, NoTls}; //! //! # fn main() -> Result<(), postgres::Error> { //! let mut client = Client::connect("host=localhost user=postgres", NoTls)?; //! //! client.batch_execute(" //! CREATE TABLE person ( //! id SERIAL PRIMARY KEY, //! name TEXT NOT NULL, //! data BYTEA //! ) //! ")?; //! //! let name = "Ferris"; //! let data = None::<&[u8]>; //! client.execute( //! "INSERT INTO person (name, data) VALUES ($1, $2)", //! &[&name, &data], //! )?; //! //! for row in client.query("SELECT id, name, data FROM person", &[])? { //! let id: i32 = row.get(0); //! let name: &str = row.get(1); //! let data: Option<&[u8]> = row.get(2); //! //! println!("found person: {} {} {:?}", id, name, data); //! } //! # Ok(()) //! # } //! ``` //! //! # Implementation //! //! This crate is a lightweight wrapper over tokio-postgres. The `postgres::Client` is simply a wrapper around a //! `tokio_postgres::Client` along side a tokio `Runtime`. The client simply blocks on the futures provided by the async //! client. //! //! # SSL/TLS support //! //! TLS support is implemented via external libraries. 
`Client::connect` and `Config::connect` take a TLS implementation //! as an argument. The `NoTls` type in this crate can be used when TLS is not required. Otherwise, the //! `postgres-openssl` and `postgres-native-tls` crates provide implementations backed by the `openssl` and `native-tls` //! crates, respectively. //! //! # Features //! //! The following features can be enabled from `Cargo.toml`: //! //! | Feature | Description | Extra dependencies | Default | //! | ------- | ----------- | ------------------ | ------- | //! | `with-bit-vec-0_6` | Enable support for the `bit-vec` crate. | [bit-vec](https://crates.io/crates/bit-vec) 0.6 | no | //! | `with-chrono-0_4` | Enable support for the `chrono` crate. | [chrono](https://crates.io/crates/chrono) 0.4 | no | //! | `with-eui48-0_4` | Enable support for the 0.4 version of the `eui48` crate. This is deprecated and will be removed. | [eui48](https://crates.io/crates/eui48) 0.4 | no | //! | `with-eui48-1` | Enable support for the 1.0 version of the `eui48` crate. | [eui48](https://crates.io/crates/eui48) 1.0 | no | //! | `with-geo-types-0_6` | Enable support for the 0.6 version of the `geo-types` crate. | [geo-types](https://crates.io/crates/geo-types/0.6.0) 0.6 | no | //! | `with-geo-types-0_7` | Enable support for the 0.7 version of the `geo-types` crate. | [geo-types](https://crates.io/crates/geo-types/0.7.0) 0.7 | no | //! | `with-serde_json-1` | Enable support for the `serde_json` crate. | [serde_json](https://crates.io/crates/serde_json) 1.0 | no | //! | `with-uuid-0_8` | Enable support for the `uuid` crate. | [uuid](https://crates.io/crates/uuid) 0.8 | no | //! | `with-uuid-1` | Enable support for the `uuid` crate. | [uuid](https://crates.io/crates/uuid) 1.0 | no | //! | `with-time-0_2` | Enable support for the 0.2 version of the `time` crate. | [time](https://crates.io/crates/time/0.2.0) 0.2 | no | //! | `with-time-0_3` | Enable support for the 0.3 version of the `time` crate. | [time](https://crates.io/crates/time/0.3.0) 0.3 | no | #![warn(clippy::all, rust_2018_idioms, missing_docs)] pub use fallible_iterator; pub use tokio_postgres::{ error, row, tls, types, Column, IsolationLevel, Notification, Portal, SimpleQueryMessage, Socket, Statement, ToStatement, }; pub use crate::cancel_token::CancelToken; pub use crate::client::*; pub use crate::config::Config; pub use crate::copy_in_writer::CopyInWriter; pub use crate::copy_out_reader::CopyOutReader; #[doc(no_inline)] pub use crate::error::Error; pub use crate::generic_client::GenericClient; #[doc(inline)] pub use crate::notifications::Notifications; #[doc(no_inline)] pub use crate::row::{Row, SimpleQueryRow}; pub use crate::row_iter::RowIter; #[doc(no_inline)] pub use crate::tls::NoTls; pub use crate::transaction::*; pub use crate::transaction_builder::TransactionBuilder; pub mod binary_copy; mod cancel_token; mod client; pub mod config; mod connection; mod copy_in_writer; mod copy_out_reader; mod generic_client; mod lazy_pin; pub mod notifications; mod row_iter; mod transaction; mod transaction_builder; #[cfg(test)] mod test; postgres-0.19.9/src/notifications.rs000064400000000000000000000126261046102023000156070ustar 00000000000000//! Asynchronous notifications. use crate::connection::ConnectionRef; use crate::{Error, Notification}; use fallible_iterator::FallibleIterator; use futures_util::{ready, FutureExt}; use std::pin::Pin; use std::task::Poll; use std::time::Duration; use tokio::time::{self, Instant, Sleep}; /// Notifications from a PostgreSQL backend. 
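///
/// A sketch of draining notifications that have already been received (the channel name and
/// connection parameters are placeholders):
///
/// ```no_run
/// use postgres::fallible_iterator::FallibleIterator;
/// use postgres::{Client, NoTls};
///
/// # fn main() -> Result<(), postgres::Error> {
/// let mut client = Client::connect("host=localhost user=postgres", NoTls)?;
/// client.batch_execute("LISTEN my_channel")?;
///
/// // `iter` polls the connection but never blocks waiting for new notifications.
/// let pending = client.notifications().iter().collect::<Vec<_>>()?;
/// for notification in pending {
///     println!("{}: {}", notification.channel(), notification.payload());
/// }
/// # Ok(())
/// # }
/// ```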
pub struct Notifications<'a> { connection: ConnectionRef<'a>, } impl<'a> Notifications<'a> { pub(crate) fn new(connection: ConnectionRef<'a>) -> Notifications<'a> { Notifications { connection } } /// Returns the number of already buffered pending notifications. pub fn len(&self) -> usize { self.connection.notifications().len() } /// Determines if there are any already buffered pending notifications. pub fn is_empty(&self) -> bool { self.connection.notifications().is_empty() } /// Returns a nonblocking iterator over notifications. /// /// If there are no already buffered pending notifications, this iterator will poll the connection but will not /// block waiting on notifications over the network. A return value of `None` either indicates that there are no /// pending notifications or that the server has disconnected. /// /// # Note /// /// This iterator may start returning `Some` after previously returning `None` if more notifications are received. pub fn iter(&mut self) -> Iter<'_> { Iter { connection: self.connection.as_ref(), } } /// Returns a blocking iterator over notifications. /// /// If there are no already buffered pending notifications, this iterator will block indefinitely waiting on the /// PostgreSQL backend server to send one. It will only return `None` if the server has disconnected. pub fn blocking_iter(&mut self) -> BlockingIter<'_> { BlockingIter { connection: self.connection.as_ref(), } } /// Returns an iterator over notifications which blocks a limited amount of time. /// /// If there are no already buffered pending notifications, this iterator will block waiting on the PostgreSQL /// backend server to send one up to the provided timeout. A return value of `None` either indicates that there are /// no pending notifications or that the server has disconnected. /// /// # Note /// /// This iterator may start returning `Some` after previously returning `None` if more notifications are received. pub fn timeout_iter(&mut self, timeout: Duration) -> TimeoutIter<'_> { TimeoutIter { delay: Box::pin(self.connection.enter(|| time::sleep(timeout))), timeout, connection: self.connection.as_ref(), } } } /// A nonblocking iterator over pending notifications. pub struct Iter<'a> { connection: ConnectionRef<'a>, } impl<'a> FallibleIterator for Iter<'a> { type Item = Notification; type Error = Error; fn next(&mut self) -> Result, Self::Error> { if let Some(notification) = self.connection.notifications_mut().pop_front() { return Ok(Some(notification)); } self.connection .poll_block_on(|_, notifications, _| Poll::Ready(Ok(notifications.pop_front()))) } fn size_hint(&self) -> (usize, Option) { (self.connection.notifications().len(), None) } } /// A blocking iterator over pending notifications. pub struct BlockingIter<'a> { connection: ConnectionRef<'a>, } impl<'a> FallibleIterator for BlockingIter<'a> { type Item = Notification; type Error = Error; fn next(&mut self) -> Result, Self::Error> { if let Some(notification) = self.connection.notifications_mut().pop_front() { return Ok(Some(notification)); } self.connection .poll_block_on(|_, notifications, done| match notifications.pop_front() { Some(notification) => Poll::Ready(Ok(Some(notification))), None if done => Poll::Ready(Ok(None)), None => Poll::Pending, }) } fn size_hint(&self) -> (usize, Option) { (self.connection.notifications().len(), None) } } /// A time-limited blocking iterator over pending notifications. 
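///
/// A sketch of waiting a bounded amount of time for notifications (the timeout and channel
/// are illustrative):
///
/// ```no_run
/// use std::time::Duration;
/// use postgres::fallible_iterator::FallibleIterator;
/// use postgres::{Client, NoTls};
///
/// # fn main() -> Result<(), postgres::Error> {
/// let mut client = Client::connect("host=localhost user=postgres", NoTls)?;
/// client.batch_execute("LISTEN my_channel")?;
///
/// let mut notifications = client.notifications();
/// let mut it = notifications.timeout_iter(Duration::from_secs(2));
/// while let Some(notification) = it.next()? {
///     println!("{}", notification.payload());
/// }
/// # Ok(())
/// # }
/// ```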
pub struct TimeoutIter<'a> { connection: ConnectionRef<'a>, delay: Pin>, timeout: Duration, } impl<'a> FallibleIterator for TimeoutIter<'a> { type Item = Notification; type Error = Error; fn next(&mut self) -> Result, Self::Error> { if let Some(notification) = self.connection.notifications_mut().pop_front() { self.delay.as_mut().reset(Instant::now() + self.timeout); return Ok(Some(notification)); } let delay = &mut self.delay; let timeout = self.timeout; self.connection.poll_block_on(|cx, notifications, done| { match notifications.pop_front() { Some(notification) => { delay.as_mut().reset(Instant::now() + timeout); return Poll::Ready(Ok(Some(notification))); } None if done => return Poll::Ready(Ok(None)), None => {} } ready!(delay.poll_unpin(cx)); Poll::Ready(Ok(None)) }) } fn size_hint(&self) -> (usize, Option) { (self.connection.notifications().len(), None) } } postgres-0.19.9/src/row_iter.rs000064400000000000000000000017651046102023000145720ustar 00000000000000use crate::connection::ConnectionRef; use fallible_iterator::FallibleIterator; use futures_util::StreamExt; use std::pin::Pin; use tokio_postgres::{Error, Row, RowStream}; /// The iterator returned by `query_raw`. pub struct RowIter<'a> { connection: ConnectionRef<'a>, it: Pin>, } impl<'a> RowIter<'a> { pub(crate) fn new(connection: ConnectionRef<'a>, stream: RowStream) -> RowIter<'a> { RowIter { connection, it: Box::pin(stream), } } /// Returns the number of rows affected by the query. /// /// This function will return `None` until the iterator has been exhausted. pub fn rows_affected(&self) -> Option { self.it.rows_affected() } } impl FallibleIterator for RowIter<'_> { type Item = Row; type Error = Error; fn next(&mut self) -> Result, Error> { let it = &mut self.it; self.connection .block_on(async { it.next().await.transpose() }) } } postgres-0.19.9/src/test.rs000064400000000000000000000345051046102023000137150ustar 00000000000000use std::io::{Read, Write}; use std::str::FromStr; use std::sync::mpsc; use std::thread; use std::time::Duration; use tokio_postgres::error::SqlState; use tokio_postgres::types::Type; use tokio_postgres::NoTls; use super::*; use crate::binary_copy::{BinaryCopyInWriter, BinaryCopyOutIter}; use fallible_iterator::FallibleIterator; #[test] fn prepare() { let mut client = Client::connect("host=localhost port=5433 user=postgres", NoTls).unwrap(); let stmt = client.prepare("SELECT 1::INT, $1::TEXT").unwrap(); assert_eq!(stmt.params(), &[Type::TEXT]); assert_eq!(stmt.columns().len(), 2); assert_eq!(stmt.columns()[0].type_(), &Type::INT4); assert_eq!(stmt.columns()[1].type_(), &Type::TEXT); } #[test] fn query_prepared() { let mut client = Client::connect("host=localhost port=5433 user=postgres", NoTls).unwrap(); let stmt = client.prepare("SELECT $1::TEXT").unwrap(); let rows = client.query(&stmt, &[&"hello"]).unwrap(); assert_eq!(rows.len(), 1); assert_eq!(rows[0].get::<_, &str>(0), "hello"); } #[test] fn query_unprepared() { let mut client = Client::connect("host=localhost port=5433 user=postgres", NoTls).unwrap(); let rows = client.query("SELECT $1::TEXT", &[&"hello"]).unwrap(); assert_eq!(rows.len(), 1); assert_eq!(rows[0].get::<_, &str>(0), "hello"); } #[test] fn transaction_commit() { let mut client = Client::connect("host=localhost port=5433 user=postgres", NoTls).unwrap(); client .simple_query("CREATE TEMPORARY TABLE foo (id SERIAL PRIMARY KEY)") .unwrap(); let mut transaction = client.transaction().unwrap(); transaction .execute("INSERT INTO foo DEFAULT VALUES", &[]) .unwrap(); 
transaction.commit().unwrap(); let rows = client.query("SELECT * FROM foo", &[]).unwrap(); assert_eq!(rows.len(), 1); assert_eq!(rows[0].get::<_, i32>(0), 1); } #[test] fn transaction_rollback() { let mut client = Client::connect("host=localhost port=5433 user=postgres", NoTls).unwrap(); client .simple_query("CREATE TEMPORARY TABLE foo (id SERIAL PRIMARY KEY)") .unwrap(); let mut transaction = client.transaction().unwrap(); transaction .execute("INSERT INTO foo DEFAULT VALUES", &[]) .unwrap(); transaction.rollback().unwrap(); let rows = client.query("SELECT * FROM foo", &[]).unwrap(); assert_eq!(rows.len(), 0); } #[test] fn transaction_drop() { let mut client = Client::connect("host=localhost port=5433 user=postgres", NoTls).unwrap(); client .simple_query("CREATE TEMPORARY TABLE foo (id SERIAL PRIMARY KEY)") .unwrap(); let mut transaction = client.transaction().unwrap(); transaction .execute("INSERT INTO foo DEFAULT VALUES", &[]) .unwrap(); drop(transaction); let rows = client.query("SELECT * FROM foo", &[]).unwrap(); assert_eq!(rows.len(), 0); } #[test] fn transaction_drop_immediate_rollback() { let mut client = Client::connect("host=localhost port=5433 user=postgres", NoTls).unwrap(); let mut client2 = Client::connect("host=localhost port=5433 user=postgres", NoTls).unwrap(); client .simple_query("CREATE TABLE IF NOT EXISTS foo (id SERIAL PRIMARY KEY)") .unwrap(); client .execute("INSERT INTO foo VALUES (1) ON CONFLICT DO NOTHING", &[]) .unwrap(); let mut transaction = client.transaction().unwrap(); transaction .execute("SELECT * FROM foo FOR UPDATE", &[]) .unwrap(); drop(transaction); let rows = client2.query("SELECT * FROM foo FOR UPDATE", &[]).unwrap(); assert_eq!(rows.len(), 1); } #[test] fn nested_transactions() { let mut client = Client::connect("host=localhost port=5433 user=postgres", NoTls).unwrap(); client .batch_execute("CREATE TEMPORARY TABLE foo (id INT PRIMARY KEY)") .unwrap(); let mut transaction = client.transaction().unwrap(); transaction .execute("INSERT INTO foo (id) VALUES (1)", &[]) .unwrap(); let mut transaction2 = transaction.transaction().unwrap(); transaction2 .execute("INSERT INTO foo (id) VALUES (2)", &[]) .unwrap(); transaction2.rollback().unwrap(); let rows = transaction .query("SELECT id FROM foo ORDER BY id", &[]) .unwrap(); assert_eq!(rows.len(), 1); assert_eq!(rows[0].get::<_, i32>(0), 1); let mut transaction3 = transaction.transaction().unwrap(); transaction3 .execute("INSERT INTO foo (id) VALUES(3)", &[]) .unwrap(); let mut transaction4 = transaction3.transaction().unwrap(); transaction4 .execute("INSERT INTO foo (id) VALUES(4)", &[]) .unwrap(); transaction4.commit().unwrap(); transaction3.commit().unwrap(); transaction.commit().unwrap(); let rows = client.query("SELECT id FROM foo ORDER BY id", &[]).unwrap(); assert_eq!(rows.len(), 3); assert_eq!(rows[0].get::<_, i32>(0), 1); assert_eq!(rows[1].get::<_, i32>(0), 3); assert_eq!(rows[2].get::<_, i32>(0), 4); } #[test] fn savepoints() { let mut client = Client::connect("host=localhost port=5433 user=postgres", NoTls).unwrap(); client .batch_execute("CREATE TEMPORARY TABLE foo (id INT PRIMARY KEY)") .unwrap(); let mut transaction = client.transaction().unwrap(); transaction .execute("INSERT INTO foo (id) VALUES (1)", &[]) .unwrap(); let mut savepoint1 = transaction.savepoint("savepoint1").unwrap(); savepoint1 .execute("INSERT INTO foo (id) VALUES (2)", &[]) .unwrap(); savepoint1.rollback().unwrap(); let rows = transaction .query("SELECT id FROM foo ORDER BY id", &[]) .unwrap(); assert_eq!(rows.len(), 1); 
assert_eq!(rows[0].get::<_, i32>(0), 1); let mut savepoint2 = transaction.savepoint("savepoint2").unwrap(); savepoint2 .execute("INSERT INTO foo (id) VALUES(3)", &[]) .unwrap(); let mut savepoint3 = savepoint2.savepoint("savepoint3").unwrap(); savepoint3 .execute("INSERT INTO foo (id) VALUES(4)", &[]) .unwrap(); savepoint3.commit().unwrap(); savepoint2.commit().unwrap(); transaction.commit().unwrap(); let rows = client.query("SELECT id FROM foo ORDER BY id", &[]).unwrap(); assert_eq!(rows.len(), 3); assert_eq!(rows[0].get::<_, i32>(0), 1); assert_eq!(rows[1].get::<_, i32>(0), 3); assert_eq!(rows[2].get::<_, i32>(0), 4); } #[test] fn copy_in() { let mut client = Client::connect("host=localhost port=5433 user=postgres", NoTls).unwrap(); client .simple_query("CREATE TEMPORARY TABLE foo (id INT, name TEXT)") .unwrap(); let mut writer = client.copy_in("COPY foo FROM stdin").unwrap(); writer.write_all(b"1\tsteven\n2\ttimothy").unwrap(); writer.finish().unwrap(); let rows = client .query("SELECT id, name FROM foo ORDER BY id", &[]) .unwrap(); assert_eq!(rows.len(), 2); assert_eq!(rows[0].get::<_, i32>(0), 1); assert_eq!(rows[0].get::<_, &str>(1), "steven"); assert_eq!(rows[1].get::<_, i32>(0), 2); assert_eq!(rows[1].get::<_, &str>(1), "timothy"); } #[test] fn copy_in_abort() { let mut client = Client::connect("host=localhost port=5433 user=postgres", NoTls).unwrap(); client .simple_query("CREATE TEMPORARY TABLE foo (id INT, name TEXT)") .unwrap(); let mut writer = client.copy_in("COPY foo FROM stdin").unwrap(); writer.write_all(b"1\tsteven\n2\ttimothy").unwrap(); drop(writer); let rows = client .query("SELECT id, name FROM foo ORDER BY id", &[]) .unwrap(); assert_eq!(rows.len(), 0); } #[test] fn binary_copy_in() { let mut client = Client::connect("host=localhost port=5433 user=postgres", NoTls).unwrap(); client .simple_query("CREATE TEMPORARY TABLE foo (id INT, name TEXT)") .unwrap(); let writer = client.copy_in("COPY foo FROM stdin BINARY").unwrap(); let mut writer = BinaryCopyInWriter::new(writer, &[Type::INT4, Type::TEXT]); writer.write(&[&1i32, &"steven"]).unwrap(); writer.write(&[&2i32, &"timothy"]).unwrap(); writer.finish().unwrap(); let rows = client .query("SELECT id, name FROM foo ORDER BY id", &[]) .unwrap(); assert_eq!(rows.len(), 2); assert_eq!(rows[0].get::<_, i32>(0), 1); assert_eq!(rows[0].get::<_, &str>(1), "steven"); assert_eq!(rows[1].get::<_, i32>(0), 2); assert_eq!(rows[1].get::<_, &str>(1), "timothy"); } #[test] fn copy_out() { let mut client = Client::connect("host=localhost port=5433 user=postgres", NoTls).unwrap(); client .simple_query( "CREATE TEMPORARY TABLE foo (id INT, name TEXT); INSERT INTO foo (id, name) VALUES (1, 'steven'), (2, 'timothy');", ) .unwrap(); let mut reader = client.copy_out("COPY foo (id, name) TO STDOUT").unwrap(); let mut s = String::new(); reader.read_to_string(&mut s).unwrap(); drop(reader); assert_eq!(s, "1\tsteven\n2\ttimothy\n"); client.simple_query("SELECT 1").unwrap(); } #[test] fn binary_copy_out() { let mut client = Client::connect("host=localhost port=5433 user=postgres", NoTls).unwrap(); client .simple_query( "CREATE TEMPORARY TABLE foo (id INT, name TEXT); INSERT INTO foo (id, name) VALUES (1, 'steven'), (2, 'timothy');", ) .unwrap(); let reader = client .copy_out("COPY foo (id, name) TO STDOUT BINARY") .unwrap(); let rows = BinaryCopyOutIter::new(reader, &[Type::INT4, Type::TEXT]) .collect::>() .unwrap(); assert_eq!(rows.len(), 2); assert_eq!(rows[0].get::(0), 1); assert_eq!(rows[0].get::<&str>(1), "steven"); 
assert_eq!(rows[1].get::(0), 2); assert_eq!(rows[1].get::<&str>(1), "timothy"); client.simple_query("SELECT 1").unwrap(); } #[test] fn portal() { let mut client = Client::connect("host=localhost port=5433 user=postgres", NoTls).unwrap(); client .simple_query( "CREATE TEMPORARY TABLE foo (id INT); INSERT INTO foo (id) VALUES (1), (2), (3);", ) .unwrap(); let mut transaction = client.transaction().unwrap(); let portal = transaction .bind("SELECT * FROM foo ORDER BY id", &[]) .unwrap(); let rows = transaction.query_portal(&portal, 2).unwrap(); assert_eq!(rows.len(), 2); assert_eq!(rows[0].get::<_, i32>(0), 1); assert_eq!(rows[1].get::<_, i32>(0), 2); let rows = transaction.query_portal(&portal, 2).unwrap(); assert_eq!(rows.len(), 1); assert_eq!(rows[0].get::<_, i32>(0), 3); } #[test] fn cancel_query() { let mut client = Client::connect("host=localhost port=5433 user=postgres", NoTls).unwrap(); let cancel_token = client.cancel_token(); let cancel_thread = thread::spawn(move || { thread::sleep(Duration::from_millis(100)); cancel_token.cancel_query(NoTls).unwrap(); }); match client.batch_execute("SELECT pg_sleep(100)") { Err(e) if e.code() == Some(&SqlState::QUERY_CANCELED) => {} t => panic!("unexpected return: {:?}", t), } cancel_thread.join().unwrap(); } #[test] fn notifications_iter() { let mut client = Client::connect("host=localhost port=5433 user=postgres", NoTls).unwrap(); client .batch_execute( "\ LISTEN notifications_iter; NOTIFY notifications_iter, 'hello'; NOTIFY notifications_iter, 'world'; ", ) .unwrap(); let notifications = client.notifications().iter().collect::>().unwrap(); assert_eq!(notifications.len(), 2); assert_eq!(notifications[0].payload(), "hello"); assert_eq!(notifications[1].payload(), "world"); } #[test] fn notifications_blocking_iter() { let mut client = Client::connect("host=localhost port=5433 user=postgres", NoTls).unwrap(); client .batch_execute( "\ LISTEN notifications_blocking_iter; NOTIFY notifications_blocking_iter, 'hello'; ", ) .unwrap(); thread::spawn(|| { let mut client = Client::connect("host=localhost port=5433 user=postgres", NoTls).unwrap(); thread::sleep(Duration::from_secs(1)); client .batch_execute("NOTIFY notifications_blocking_iter, 'world'") .unwrap(); }); let notifications = client .notifications() .blocking_iter() .take(2) .collect::>() .unwrap(); assert_eq!(notifications.len(), 2); assert_eq!(notifications[0].payload(), "hello"); assert_eq!(notifications[1].payload(), "world"); } #[test] fn notifications_timeout_iter() { let mut client = Client::connect("host=localhost port=5433 user=postgres", NoTls).unwrap(); client .batch_execute( "\ LISTEN notifications_timeout_iter; NOTIFY notifications_timeout_iter, 'hello'; ", ) .unwrap(); thread::spawn(|| { let mut client = Client::connect("host=localhost port=5433 user=postgres", NoTls).unwrap(); thread::sleep(Duration::from_secs(1)); client .batch_execute("NOTIFY notifications_timeout_iter, 'world'") .unwrap(); thread::sleep(Duration::from_secs(10)); client .batch_execute("NOTIFY notifications_timeout_iter, '!'") .unwrap(); }); let notifications = client .notifications() .timeout_iter(Duration::from_secs(2)) .collect::>() .unwrap(); assert_eq!(notifications.len(), 2); assert_eq!(notifications[0].payload(), "hello"); assert_eq!(notifications[1].payload(), "world"); } #[test] fn notice_callback() { let (notice_tx, notice_rx) = mpsc::sync_channel(64); let mut client = Config::from_str("host=localhost port=5433 user=postgres") .unwrap() .notice_callback(move |n| notice_tx.send(n).unwrap()) 
.connect(NoTls) .unwrap(); client .batch_execute("DO $$BEGIN RAISE NOTICE 'custom'; END$$") .unwrap(); assert_eq!(notice_rx.recv().unwrap().message(), "custom"); } #[test] fn explicit_close() { let client = Client::connect("host=localhost port=5433 user=postgres", NoTls).unwrap(); client.close().unwrap(); } #[test] fn check_send() { fn is_send() {} is_send::(); is_send::(); is_send::>(); } postgres-0.19.9/src/transaction.rs000064400000000000000000000207051046102023000152600ustar 00000000000000use crate::connection::ConnectionRef; use crate::{CancelToken, CopyInWriter, CopyOutReader, Portal, RowIter, Statement, ToStatement}; use tokio_postgres::types::{BorrowToSql, ToSql, Type}; use tokio_postgres::{Error, Row, SimpleQueryMessage}; /// A representation of a PostgreSQL database transaction. /// /// Transactions will implicitly roll back by default when dropped. Use the `commit` method to commit the changes made /// in the transaction. Transactions can be nested, with inner transactions implemented via savepoints. pub struct Transaction<'a> { connection: ConnectionRef<'a>, transaction: Option>, } impl<'a> Drop for Transaction<'a> { fn drop(&mut self) { if let Some(transaction) = self.transaction.take() { let _ = self.connection.block_on(transaction.rollback()); } } } impl<'a> Transaction<'a> { pub(crate) fn new( connection: ConnectionRef<'a>, transaction: tokio_postgres::Transaction<'a>, ) -> Transaction<'a> { Transaction { connection, transaction: Some(transaction), } } /// Consumes the transaction, committing all changes made within it. pub fn commit(mut self) -> Result<(), Error> { self.connection .block_on(self.transaction.take().unwrap().commit()) } /// Rolls the transaction back, discarding all changes made within it. /// /// This is equivalent to `Transaction`'s `Drop` implementation, but provides any error encountered to the caller. pub fn rollback(mut self) -> Result<(), Error> { self.connection .block_on(self.transaction.take().unwrap().rollback()) } /// Like `Client::prepare`. pub fn prepare(&mut self, query: &str) -> Result { self.connection .block_on(self.transaction.as_ref().unwrap().prepare(query)) } /// Like `Client::prepare_typed`. pub fn prepare_typed(&mut self, query: &str, types: &[Type]) -> Result { self.connection.block_on( self.transaction .as_ref() .unwrap() .prepare_typed(query, types), ) } /// Like `Client::execute`. pub fn execute(&mut self, query: &T, params: &[&(dyn ToSql + Sync)]) -> Result where T: ?Sized + ToStatement, { self.connection .block_on(self.transaction.as_ref().unwrap().execute(query, params)) } /// Like `Client::query`. pub fn query(&mut self, query: &T, params: &[&(dyn ToSql + Sync)]) -> Result, Error> where T: ?Sized + ToStatement, { self.connection .block_on(self.transaction.as_ref().unwrap().query(query, params)) } /// Like `Client::query_one`. pub fn query_one(&mut self, query: &T, params: &[&(dyn ToSql + Sync)]) -> Result where T: ?Sized + ToStatement, { self.connection .block_on(self.transaction.as_ref().unwrap().query_one(query, params)) } /// Like `Client::query_opt`. pub fn query_opt( &mut self, query: &T, params: &[&(dyn ToSql + Sync)], ) -> Result, Error> where T: ?Sized + ToStatement, { self.connection .block_on(self.transaction.as_ref().unwrap().query_opt(query, params)) } /// Like `Client::query_raw`. 
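///
/// A sketch of streaming rows without collecting them all up front (the query and parameter
/// are illustrative):
///
/// ```no_run
/// use postgres::fallible_iterator::FallibleIterator;
/// use postgres::{Client, NoTls};
///
/// # fn main() -> Result<(), postgres::Error> {
/// let mut client = Client::connect("host=localhost user=postgres", NoTls)?;
/// let mut transaction = client.transaction()?;
///
/// let params: Vec<i32> = vec![10];
/// let mut rows = transaction.query_raw("SELECT id FROM person WHERE id > $1", params)?;
/// while let Some(row) = rows.next()? {
///     let id: i32 = row.get(0);
///     println!("{}", id);
/// }
/// # Ok(())
/// # }
/// ```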
pub fn query_raw(&mut self, query: &T, params: I) -> Result, Error> where T: ?Sized + ToStatement, P: BorrowToSql, I: IntoIterator, I::IntoIter: ExactSizeIterator, { let stream = self .connection .block_on(self.transaction.as_ref().unwrap().query_raw(query, params))?; Ok(RowIter::new(self.connection.as_ref(), stream)) } /// Like `Client::query_typed`. pub fn query_typed( &mut self, statement: &str, params: &[(&(dyn ToSql + Sync), Type)], ) -> Result, Error> { self.connection.block_on( self.transaction .as_ref() .unwrap() .query_typed(statement, params), ) } /// Like `Client::query_typed_raw`. pub fn query_typed_raw(&mut self, query: &str, params: I) -> Result, Error> where P: BorrowToSql, I: IntoIterator, { let stream = self.connection.block_on( self.transaction .as_ref() .unwrap() .query_typed_raw(query, params), )?; Ok(RowIter::new(self.connection.as_ref(), stream)) } /// Binds parameters to a statement, creating a "portal". /// /// Portals can be used with the `query_portal` method to page through the results of a query without being forced /// to consume them all immediately. /// /// Portals are automatically closed when the transaction they were created in is closed. /// /// # Panics /// /// Panics if the number of parameters provided does not match the number expected. pub fn bind(&mut self, query: &T, params: &[&(dyn ToSql + Sync)]) -> Result where T: ?Sized + ToStatement, { self.connection .block_on(self.transaction.as_ref().unwrap().bind(query, params)) } /// Continues execution of a portal, returning the next set of rows. /// /// Unlike `query`, portals can be incrementally evaluated by limiting the number of rows returned in each call to /// `query_portal`. If the requested number is negative or 0, all remaining rows will be returned. pub fn query_portal(&mut self, portal: &Portal, max_rows: i32) -> Result, Error> { self.connection.block_on( self.transaction .as_ref() .unwrap() .query_portal(portal, max_rows), ) } /// The maximally flexible version of `query_portal`. pub fn query_portal_raw( &mut self, portal: &Portal, max_rows: i32, ) -> Result, Error> { let stream = self.connection.block_on( self.transaction .as_ref() .unwrap() .query_portal_raw(portal, max_rows), )?; Ok(RowIter::new(self.connection.as_ref(), stream)) } /// Like `Client::copy_in`. pub fn copy_in(&mut self, query: &T) -> Result, Error> where T: ?Sized + ToStatement, { let sink = self .connection .block_on(self.transaction.as_ref().unwrap().copy_in(query))?; Ok(CopyInWriter::new(self.connection.as_ref(), sink)) } /// Like `Client::copy_out`. pub fn copy_out(&mut self, query: &T) -> Result, Error> where T: ?Sized + ToStatement, { let stream = self .connection .block_on(self.transaction.as_ref().unwrap().copy_out(query))?; Ok(CopyOutReader::new(self.connection.as_ref(), stream)) } /// Like `Client::simple_query`. pub fn simple_query(&mut self, query: &str) -> Result, Error> { self.connection .block_on(self.transaction.as_ref().unwrap().simple_query(query)) } /// Like `Client::batch_execute`. pub fn batch_execute(&mut self, query: &str) -> Result<(), Error> { self.connection .block_on(self.transaction.as_ref().unwrap().batch_execute(query)) } /// Like `Client::cancel_token`. pub fn cancel_token(&self) -> CancelToken { CancelToken::new(self.transaction.as_ref().unwrap().cancel_token()) } /// Like `Client::transaction`, but creates a nested transaction via a savepoint. 
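///
/// A sketch of rolling back an inner transaction while keeping the outer one (the `foo`
/// table and values are illustrative):
///
/// ```no_run
/// use postgres::{Client, NoTls};
///
/// # fn main() -> Result<(), postgres::Error> {
/// let mut client = Client::connect("host=localhost user=postgres", NoTls)?;
/// let mut transaction = client.transaction()?;
/// transaction.execute("INSERT INTO foo (id) VALUES (1)", &[])?;
///
/// let mut inner = transaction.transaction()?;
/// inner.execute("INSERT INTO foo (id) VALUES (2)", &[])?;
/// inner.rollback()?; // discards only the inner transaction's work
///
/// transaction.commit()?;
/// # Ok(())
/// # }
/// ```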
pub fn transaction(&mut self) -> Result, Error> { let transaction = self .connection .block_on(self.transaction.as_mut().unwrap().transaction())?; Ok(Transaction::new(self.connection.as_ref(), transaction)) } /// Like `Client::transaction`, but creates a nested transaction via a savepoint with the specified name. pub fn savepoint(&mut self, name: I) -> Result, Error> where I: Into, { let transaction = self .connection .block_on(self.transaction.as_mut().unwrap().savepoint(name))?; Ok(Transaction::new(self.connection.as_ref(), transaction)) } } postgres-0.19.9/src/transaction_builder.rs000064400000000000000000000033141046102023000167630ustar 00000000000000use crate::connection::ConnectionRef; use crate::{Error, IsolationLevel, Transaction}; /// A builder for database transactions. pub struct TransactionBuilder<'a> { connection: ConnectionRef<'a>, builder: tokio_postgres::TransactionBuilder<'a>, } impl<'a> TransactionBuilder<'a> { pub(crate) fn new( connection: ConnectionRef<'a>, builder: tokio_postgres::TransactionBuilder<'a>, ) -> TransactionBuilder<'a> { TransactionBuilder { connection, builder, } } /// Sets the isolation level of the transaction. pub fn isolation_level(mut self, isolation_level: IsolationLevel) -> Self { self.builder = self.builder.isolation_level(isolation_level); self } /// Sets the access mode of the transaction. pub fn read_only(mut self, read_only: bool) -> Self { self.builder = self.builder.read_only(read_only); self } /// Sets the deferrability of the transaction. /// /// If the transaction is also serializable and read only, creation of the transaction may block, but when it /// completes the transaction is able to run with less overhead and a guarantee that it will not be aborted due to /// serialization failure. pub fn deferrable(mut self, deferrable: bool) -> Self { self.builder = self.builder.deferrable(deferrable); self } /// Begins the transaction. /// /// The transaction will roll back by default - use the `commit` method to commit it. pub fn start(mut self) -> Result, Error> { let transaction = self.connection.block_on(self.builder.start())?; Ok(Transaction::new(self.connection, transaction)) } }