sequel_pg-1.14.0 (git commit 1ebb4de6a08a6723d6de29e7f5278ea4742faa07)

==> sequel_pg-1.14.0/.gitignore <==

/ext/sequel_pg/Makefile
/ext/sequel_pg/mkmf.log
/ext/sequel_pg/sequel_pg.*o
/ext/sequel_pg/1.*
/pkg
/tmp
/lib/*.so
*.gem
*.rbc

==> sequel_pg-1.14.0/CHANGELOG <==

=== 1.14.0 (2020-09-22)

* Reduce stack memory usage for result sets with 64 or fewer columns (jeremyevans)
* Support result sets with more than 256 columns by default (jeremyevans) (#39)

=== 1.13.0 (2020-04-13)

* Allow overriding of inet/cidr type conversion using conversion procs (beanieboi, jeremyevans) (#36, #37)

=== 1.12.5 (2020-03-23)

* Fix offset calculation for timestamptz types when datetime_class is DateTime and using local application timezone (jeremyevans)
* Fix wrong method call when parsing timestamptz types when datetime_class is Time and using utc database timezone and local application timezone (jeremyevans)

=== 1.12.4 (2020-01-02)

* Work with pg 1.2.1+ (jeremyevans)

=== 1.12.3 (2020-01-02)

* Warn and do not load sequel_pg if pg >1.2 is used (jeremyevans)
* Avoid verbose warnings on Ruby 2.7 due to tainting (jeremyevans)

=== 1.12.2 (2019-06-06)

* Avoid use of pkg_config as it breaks compilation in some environments (jeremyevans) (#33)

=== 1.12.1 (2019-05-31)

* Avoid using Proc.new without block, fixing deprecation warning on ruby 2.7+ (jeremyevans)
* Use rb_gc_register_mark_object instead of rb_global_variable (jeremyevans)
* Use pkg_config instead of pg_config for configuration (jeremyevans) (#31)

=== 1.12.0 (2019-03-01)

* Allow Dataset#paged_each to be called without a block when using streaming (jeremyevans)
* Freeze ruby strings used as temporary buffers for parsing
arrays (jeremyevans)

=== 1.11.0 (2018-07-09)

* Set encoding correctly for hash symbol keys (jeremyevans)
* Slight performance improvement to float parsing (jeremyevans)

=== 1.10.0 (2018-06-25)

* Add native inet/cidr parsers (jeremyevans)
* Don't leak memory if unable to create a Sequel::SQL::Blob string when parsing bytea (jeremyevans)
* Improve performance of bytea parsing (jeremyevans)
* Drop Sequel <4.38.0 support (jeremyevans)
* Respect Sequel.application_timezone setting when parsing values for time and timetz columns (jeremyevans)
* Respect Sequel::SQLTime.date setting when parsing values for time and timetz columns (jeremyevans)
* Improve performance of time parsing (jeremyevans)
* Improve performance of date parsing (jeremyevans)
* Improve performance of timestamp parsing by borrowing and optimizing ruby-pg's parser (jeremyevans)

=== 1.9.0 (2018-06-06)

* Return arrays of common data types as PGArray instances automatically with much improved performance (jeremyevans)
* Borrow pg_text_dec_integer function from ruby-pg and use it to improve performance (jeremyevans)
* Borrow PG_ENCODING_SET_NOCHECK macro from ruby-pg and use it to improve performance (jeremyevans)

=== 1.8.2 (2018-05-25)

* Use Kernel.BigDecimal instead of BigDecimal.new to avoid verbose mode deprecation warnings (jeremyevans)

=== 1.8.1 (2017-12-13)

* Fix issue when using Dataset#as_hash and Dataset#to_hash_groups with the null_dataset extension (jeremyevans)

=== 1.8.0 (2017-10-18)

* Handle timezone offsets with seconds when parsing timestamps (jeremyevans)
* Parse BC dates and timestamps correctly (jeremyevans)
* Drop Sequel <4.34.0 support (jeremyevans)
* Drop pg <0.18.0 support (jeremyevans)
* Drop ruby <1.9.3 support (jeremyevans)

=== 1.7.1 (2017-08-25)

* Handle case where PGconn#get_result returns nil in single row mode (jeremyevans)
* Fix RB_GC_GUARD usage to handle additional segfault (Eric Wong)

=== 1.7.0 (2017-06-30)

* Add Dataset#with_optimize_model_load to change optimized model
loading for specific datasets (jeremyevans)
* Deprecate optimize_model_load Database and Dataset accessors (jeremyevans)
* Turn optimized model loading on by default, disable automatically when Model.call overridden (jeremyevans)
* Override Dataset#as_hash instead of #to_hash if #as_hash is defined (jeremyevans)

=== 1.6.19 (2017-06-13)

* Use PG::Error instead of PGError if available, avoiding deprecation warning on pg 0.21.0+ (jeremyevans)

=== 1.6.18 (2017-04-27)

* Support logging of connection information in single row mode (jeremyevans)
* Check Sequel compatibility before overwriting methods, supported in Sequel 4.44.0+ (jeremyevans)
* Remove verbose mode warnings (jeremyevans)

=== 1.6.17 (2016-04-29)

* Work with upcoming 4.34.0 release, supporting the duplicate_column_handler extension (jeremyevans)

=== 1.6.16 (2016-04-11)

* Work with upcoming Sequel 4.34.0 release, and Sequel 4.0+ (jeremyevans) (#22)

=== 1.6.15 (2016-04-11)

* Work with upcoming Sequel 4.34.0 release, supporting to_hash taking the hash to insert into (jeremyevans)

=== 1.6.14 (2016-01-19)

* Make array parser ignore explicit bounds (jeremyevans)

=== 1.6.13 (2015-06-29)

* Fix Dataset#paged_each when called with no arguments (jeremyevans)
* Remove handling of int2vector type (jeremyevans)

=== 1.6.12 (2015-03-23)

* Fix segfault when Dataset#yield_hash_rows is passed a nil value when using ruby pg 0.18+ (jeremyevans) (#19)

=== 1.6.11 (2014-11-04)

* Work with ruby pg 0.18+ (currently a prerelease) (jeremyevans)

=== 1.6.10 (2014-07-11)

* Work correctly when the database timezone is not a named timezone but the application timezone is (jeremyevans)

=== 1.6.9 (2014-03-05)

* When using the streaming extension, automatically use streaming to implement paging in Dataset#paged_each (jeremyevans)

=== 1.6.8 (2013-08-05)

* Allow overriding maximum allowed columns in a result set via -- --with-cflags=\"-DSPG_MAX_FIELDS=1600\" (jeremyevans) (#12)

=== 1.6.7 (2013-06-06)

* Correctly handle fractional seconds in the
time type (jeremyevans)

=== 1.6.6 (2013-05-31)

* Work correctly when using the named_timezones extension (jeremyevans)
* Work around format-security false positive (jeremyevans) (#9)

=== 1.6.5 (2013-03-06)

* Handle infinite dates using Database#convert_infinite_timestamps (jeremyevans)

=== 1.6.4 (2013-01-14)

* Remove type conversion of int2vector and money types on PostgreSQL, since previous conversions were wrong (jeremyevans) (#8)

=== 1.6.3 (2012-11-30)

* Make streaming support not swallow errors when rows are not retrieved (jeremyevans)

=== 1.6.2 (2012-11-16)

* Make sequel_pg runnable on rubinius by fixing bad rb_global_variable call (dbussink) (#7)

=== 1.6.1 (2012-10-25)

* Make PostgreSQL array parser handle string encodings correctly on ruby 1.9 (jeremyevans)

=== 1.6.0 (2012-09-04)

* Replace PQsetRowProcessor streaming with PQsetSingleRowMode streaming introduced in PostgreSQL 9.2beta3 (jeremyevans)

=== 1.5.1 (2012-08-02)

* Sprinkle some RB_GC_GUARD to work around segfaults in the PostgreSQL array parser (jeremyevans)

=== 1.5.0 (2012-07-02)

* Add C-based PostgreSQL array parser, for major speedup in parsing arrays (Dan McClain, jeremyevans)

=== 1.4.0 (2012-06-01)

* Add support for streaming on PostgreSQL 9.2 using PQsetRowProcessor (jeremyevans)
* Respect DEBUG environment variable when building (jeremyevans)

=== 1.3.0 (2012-04-02)

* Build Windows version against PostgreSQL 9.1.1, ruby 1.8.7, and ruby 1.9.2 (previously 9.0.1, 1.8.6, and 1.9.1) (jeremyevans)
* Add major speedup for new Sequel 3.34.0 methods Dataset#to_hash_groups and #select_hash_groups (jeremyevans)
* Handle infinite timestamp values using Database#convert_infinite_timestamps in Sequel 3.34.0 (jeremyevans)

=== 1.2.2 (2012-03-09)

* Get microsecond accuracy when using datetime_class = DateTime with 1.8-1.9.2 stdlib date library via Rational (jeremyevans)

=== 1.2.1 (2012-02-22)

* Handle NaN, Infinity, and -Infinity for double precision values correctly (jeremyevans)

=== 1.2.0 (2011-11-01)

* Add
optimize_model_load setting to speedup loading of model objects, off by default (jeremyevans)
* Add major speedup to Dataset#map, #to_hash, #select_map, #select_order_map, and #select_hash (jeremyevans)
* Work with the new Database#timezone setting in Sequel 3.29.0 (jeremyevans)

=== 1.1.1 (2011-09-01)

* Work with new Sequel::SQLTime for time columns in Sequel 3.27.0 (jeremyevans)

=== 1.1.0 (2011-06-01)

* Work with new Database#conversion_procs method in Sequel 3.24.0 (jeremyevans)

=== 1.0.2 (2011-03-16)

* Build the Windows gem against PostgreSQL 9.0.1 to support the new default bytea serialization format (jeremyevans)
* Allow use of Sequel::Postgres::PG_TYPES to add custom conversion support for types not handled by default (funny-falcon) (#2)
* Fix handling of timestamps with fractional seconds and offsets (funny-falcon) (#1)

=== 1.0.1 (2010-09-12)

* Correctly handle timestamps with negative offsets and fractional hours (jeremyevans)

=== 1.0.0 (2010-08-31)

* Initial Public Release

==> sequel_pg-1.14.0/MIT-LICENSE <==

Copyright (c) 2010-2020 Jeremy Evans

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

The original array parsing code (parse_pg_array, read_array) was taken from the pg_array_parser library (https://github.com/dockyard/pg_array_parser) and has the following license:

Copyright (c) 2012 Dan McClain

MIT License

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Some improvements were taken from the ruby pg library (https://bitbucket.org/ged/ruby-pg/wiki/Home), under the following license:

Copyright (C) 1993-2013 Yukihiro Matsumoto. All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

2.
Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

==> sequel_pg-1.14.0/README.rdoc <==

= sequel_pg

sequel_pg overwrites the inner loop of the Sequel postgres adapter row fetching code with a C version. The C version is significantly faster than the pure ruby version that Sequel uses by default.

== Real world difference

The speed up that sequel_pg gives you depends on what you are selecting, but it should be noticeable whenever many rows are selected.
Here's an example that shows the difference it makes on a couple of models:

  Track.count # => 202261
  Album.count # => 7264

Without sequel_pg:

  puts Benchmark.measure{Track.each{}}
  # 3.400000 0.290000 3.690000 ( 4.005150)
  puts Benchmark.measure{10.times{Album.each{}}}
  # 2.180000 0.120000 2.300000 ( 2.479352)

With sequel_pg:

  puts Benchmark.measure{Track.each{}}
  # 1.660000 0.260000 1.920000 ( 2.287216)
  puts Benchmark.measure{10.times{Album.each{}}}
  # 0.960000 0.110000 1.070000 ( 1.260913)

sequel_pg also speeds up the following Dataset methods:

* map
* as_hash/to_hash
* to_hash_groups
* select_hash
* select_hash_groups
* select_map
* select_order_map

Additionally, in most cases sequel_pg also speeds up the loading of model datasets by optimizing model instance creation.

== Streaming

If you are using PostgreSQL 9.2+ on the client, then sequel_pg should enable streaming support. This allows you to stream returned rows one at a time, instead of collecting the entire result set in memory (which is how PostgreSQL works by default). You can check if streaming is supported by:

  Sequel::Postgres.supports_streaming?

If streaming is supported, you can load the streaming support into the database:

  DB.extension(:pg_streaming)

Then you can call the Dataset#stream method to have the dataset use the streaming support:

  DB[:table].stream.each{|row| ...}

If you want to enable streaming for all of a database's datasets, you can do the following:

  DB.stream_all_queries = true

== Installing the gem

  gem install sequel_pg

Make sure the pg_config binary is in your PATH so the installation can find the PostgreSQL shared library and header files. Alternatively, you can use the POSTGRES_LIB and POSTGRES_INCLUDE environment variables to specify the shared library and header directories.

== Running the specs

sequel_pg doesn't ship with its own specs. It's designed to replace a part of Sequel, so it just uses Sequel's specs, specifically the spec_postgres rake task from Sequel.
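Independent of the C implementation, the semantics of the hash-producing Dataset methods listed above can be sketched in plain Ruby. This is an illustration only, using hypothetical rows as an array of hashes rather than a real database connection:

```ruby
# Plain-Ruby sketch of what Dataset#as_hash and Dataset#to_hash_groups
# compute. The rows and column names here are made up for illustration.
rows = [
  {id: 1, artist: 'a', name: 'x'},
  {id: 2, artist: 'a', name: 'y'},
  {id: 3, artist: 'b', name: 'z'}
]

# as_hash(:id, :name): one value per key; later rows overwrite earlier ones.
as_hash = rows.each_with_object({}) { |r, h| h[r[:id]] = r[:name] }
# => {1=>"x", 2=>"y", 3=>"z"}

# to_hash_groups(:artist, :name): all values for a key collected in an array.
groups = rows.each_with_object({}) { |r, h| (h[r[:artist]] ||= []) << r[:name] }
# => {"a"=>["x", "y"], "b"=>["z"]}
```

sequel_pg speeds up these methods by building the hashes directly in C while the result set is being read, instead of materializing intermediate row hashes first.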
== Reporting issues/bugs

sequel_pg uses GitHub Issues for tracking issues/bugs:

  http://github.com/jeremyevans/sequel_pg/issues

== Contributing

The source code is on GitHub:

  http://github.com/jeremyevans/sequel_pg

To get a copy:

  git clone git://github.com/jeremyevans/sequel_pg.git

There are only a few requirements, which you should probably have before considering use of the library:

* Rake
* Sequel
* pg
* libpq headers and library

== Building

To build the library from a git checkout, after installing the requirements:

  rake build

== Platforms Supported

sequel_pg has been tested on the following:

* ruby 1.9.3
* ruby 2.0
* ruby 2.1
* ruby 2.2
* ruby 2.3
* ruby 2.4
* ruby 2.5
* ruby 2.6
* ruby 2.7

== Known Issues

* You must be using the ISO PostgreSQL date format (which is the default). Using the SQL, POSTGRESQL, or GERMAN date formats will result in incorrect date/timestamp handling. In addition to PostgreSQL defaulting to ISO, Sequel also manually sets the date format to ISO by default, so unless you are overriding that setting (via DB.use_iso_date_format = false), you should be OK.

* Adding your own type conversion procs only has an effect if those types are not handled by default.

* You do not need to require the library, the sequel postgres adapter will require it automatically. If you are using bundler, you should add it to your Gemfile like so:

    gem 'sequel_pg', :require=>'sequel'

* sequel_pg currently calls functions defined in the pg gem, which does not work on Windows and does not work in some unix-like operating systems that disallow undefined functions in shared libraries. If RbConfig::CONFIG['LDFLAGS'] contains -Wl,--no-undefined, you'll probably have issues installing sequel_pg. You should probably fix RbConfig::CONFIG['LDFLAGS'] in that case.
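To make the ISO date format requirement above concrete, here is a pure-Ruby sketch of splitting an ISO-style timestamp string of the shape PostgreSQL emits in ISO DateStyle. This is only an illustration of the field layout the extension expects, not the extension's actual C code path; the method name is hypothetical:

```ruby
# Pure-Ruby sketch of the ISO timestamp layout the C parser relies on,
# e.g. "2020-09-22 12:34:56.789+05:30". Illustration only.
def parse_iso_timestamp(s)
  m = /\A(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})(\.\d+)?([+-]\d{2}(?::?\d{2})?)?\z/.match(s)
  raise ArgumentError, "unexpected timestamp format" unless m
  year, month, day, hour, min, sec = m[1..6].map(&:to_i)
  # Fractional seconds become microseconds, like the usec field in the C code.
  usec = m[7] ? (m[7].to_f * 1_000_000).round : 0
  {year: year, month: month, day: day, hour: hour, min: min, sec: sec,
   usec: usec, offset: m[8]}
end

parse_iso_timestamp("2020-09-22 12:34:56.789+05:30")
# => {:year=>2020, :month=>9, :day=>22, :hour=>12, :min=>34, :sec=>56,
#     :usec=>789000, :offset=>"+05:30"}
```

A string in the SQL, POSTGRESQL, or GERMAN DateStyle (e.g. "09/22/2020") does not match this layout, which is why those formats produce incorrect results with sequel_pg.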
== Author

Jeremy Evans

==> sequel_pg-1.14.0/Rakefile <==

require "rake"
require "rake/clean"

CLEAN.include %w'**.rbc rdoc'

desc "Do a full cleaning"
task :distclean do
  CLEAN.include %w'tmp pkg sequel_pg*.gem lib/*.so'
  Rake::Task[:clean].invoke
end

desc "Build the gem"
task :gem do
  sh %{gem build sequel_pg.gemspec}
end

begin
  require 'rake/extensiontask'
  Rake::ExtensionTask.new('sequel_pg')
rescue LoadError
end

==> sequel_pg-1.14.0/ext/sequel_pg/extconf.rb <==

require 'mkmf'

$CFLAGS << " -O0 -g" if ENV['DEBUG']
$CFLAGS << " -Drb_tainted_str_new=rb_str_new -DNO_TAINT" if RUBY_VERSION >= '2.7'
$CFLAGS << " -Wall " unless RUBY_PLATFORM =~ /solaris/

dir_config('pg',
  ENV["POSTGRES_INCLUDE"] || (IO.popen("pg_config --includedir").readline.chomp rescue nil),
  ENV["POSTGRES_LIB"] || (IO.popen("pg_config --libdir").readline.chomp rescue nil))

if (have_library('pq') || have_library('libpq') || have_library('ms/libpq')) && have_header('libpq-fe.h')
  have_func 'PQsetSingleRowMode'
  have_func 'timegm'
  create_makefile("sequel_pg")
else
  puts 'Could not find PostgreSQL build environment (libraries & headers): Makefile not created'
end

==> sequel_pg-1.14.0/ext/sequel_pg/sequel_pg.c <==

#define SEQUEL_PG_VERSION_INTEGER 11400 #include #include #include #include #include #include #include #include #include #include #include #include #include #define SPG_MINUTES_PER_DAY 1440.0 #define SPG_SECONDS_PER_DAY 86400.0 #define SPG_DT_ADD_USEC if (usec != 0) { dt = rb_funcall(dt, spg_id_op_plus, 1, rb_Rational2(INT2NUM(usec), spg_usec_per_day)); } #ifndef RARRAY_AREF #define
RARRAY_AREF(a, i) (RARRAY_PTR(a)[i]) #endif #define ntohll(c) ((uint64_t)( \ (((uint64_t)(*((unsigned char*)(c)+0)))<<56LL) | \ (((uint64_t)(*((unsigned char*)(c)+1)))<<48LL) | \ (((uint64_t)(*((unsigned char*)(c)+2)))<<40LL) | \ (((uint64_t)(*((unsigned char*)(c)+3)))<<32LL) | \ (((uint64_t)(*((unsigned char*)(c)+4)))<<24LL) | \ (((uint64_t)(*((unsigned char*)(c)+5)))<<16LL) | \ (((uint64_t)(*((unsigned char*)(c)+6)))<< 8LL) | \ (((uint64_t)(*((unsigned char*)(c)+7))) ) \ )) #define SPG_DB_LOCAL (1) #define SPG_DB_UTC (1<<1) #define SPG_DB_CUSTOM (1<<2) #define SPG_APP_LOCAL (1<<3) #define SPG_APP_UTC (1<<4) #define SPG_APP_CUSTOM (1<<5) #define SPG_TZ_INITIALIZED (1<<6) #define SPG_USE_TIME (1<<7) #define SPG_HAS_TIMEZONE (1<<8) #define SPG_YEAR_SHIFT 16 #define SPG_MONTH_SHIFT 8 #define SPG_MONTH_MASK 0x0000ffff #define SPG_DAY_MASK 0x0000001f #define SPG_TIME_UTC 32 #define SPG_YIELD_NORMAL 0 #define SPG_YIELD_COLUMN 1 #define SPG_YIELD_COLUMNS 2 #define SPG_YIELD_FIRST 3 #define SPG_YIELD_ARRAY 4 #define SPG_YIELD_KV_HASH 5 #define SPG_YIELD_MKV_HASH 6 #define SPG_YIELD_KMV_HASH 7 #define SPG_YIELD_MKMV_HASH 8 #define SPG_YIELD_MODEL 9 #define SPG_YIELD_KV_HASH_GROUPS 10 #define SPG_YIELD_MKV_HASH_GROUPS 11 #define SPG_YIELD_KMV_HASH_GROUPS 12 #define SPG_YIELD_MKMV_HASH_GROUPS 13 /* External functions defined by ruby-pg */ PGconn* pg_get_pgconn(VALUE); PGresult* pgresult_get(VALUE); int pg_get_result_enc_idx(VALUE); static int spg_use_ipaddr_alloc; static int spg_use_pg_get_result_enc_idx; static VALUE spg_Sequel; static VALUE spg_PGArray; static VALUE spg_Blob; static VALUE spg_Blob_instance; static VALUE spg_Date; static VALUE spg_DateTime; static VALUE spg_SQLTime; static VALUE spg_PGError; static VALUE spg_IPAddr; static VALUE spg_vmasks4; static VALUE spg_vmasks6; static VALUE spg_sym_utc; static VALUE spg_sym_local; static VALUE spg_sym_map; static VALUE spg_sym_first; static VALUE spg_sym_array; static VALUE spg_sym_hash; static VALUE 
spg_sym_hash_groups; static VALUE spg_sym_model; static VALUE spg_sym__sequel_pg_type; static VALUE spg_sym__sequel_pg_value; static VALUE spg_sym_text; static VALUE spg_sym_character_varying; static VALUE spg_sym_integer; static VALUE spg_sym_timestamp; static VALUE spg_sym_timestamptz; static VALUE spg_sym_time; static VALUE spg_sym_timetz; static VALUE spg_sym_bigint; static VALUE spg_sym_numeric; static VALUE spg_sym_double_precision; static VALUE spg_sym_boolean; static VALUE spg_sym_bytea; static VALUE spg_sym_date; static VALUE spg_sym_smallint; static VALUE spg_sym_oid; static VALUE spg_sym_real; static VALUE spg_sym_xml; static VALUE spg_sym_money; static VALUE spg_sym_bit; static VALUE spg_sym_bit_varying; static VALUE spg_sym_uuid; static VALUE spg_sym_xid; static VALUE spg_sym_cid; static VALUE spg_sym_name; static VALUE spg_sym_tid; static VALUE spg_sym_int2vector; static VALUE spg_sym_inet; static VALUE spg_sym_cidr; static VALUE spg_nan; static VALUE spg_pos_inf; static VALUE spg_neg_inf; static VALUE spg_usec_per_day; static ID spg_id_BigDecimal; static ID spg_id_new; static ID spg_id_date; static ID spg_id_local; static ID spg_id_year; static ID spg_id_month; static ID spg_id_day; static ID spg_id_output_identifier; static ID spg_id_datetime_class; static ID spg_id_application_timezone; static ID spg_id_to_application_timestamp; static ID spg_id_timezone; static ID spg_id_op_plus; static ID spg_id_utc; static ID spg_id_utc_offset; static ID spg_id_localtime; static ID spg_id_new_offset; static ID spg_id_convert_infinite_timestamps; static ID spg_id_infinite_timestamp_value; static ID spg_id_call; static ID spg_id_get; static ID spg_id_opts; static ID spg_id_db; static ID spg_id_conversion_procs; static ID spg_id_columns_equal; static ID spg_id_columns; static ID spg_id_encoding; static ID spg_id_values; static ID spg_id_lshift; static ID spg_id_mask; static ID spg_id_family; static ID spg_id_addr; static ID spg_id_mask_addr; #if 
HAVE_PQSETSINGLEROWMODE static ID spg_id_get_result; static ID spg_id_clear; static ID spg_id_check; #endif struct spg_blob_initialization { char *blob_string; size_t length; }; static int enc_get_index(VALUE val) { int i = ENCODING_GET_INLINED(val); if (i == ENCODING_INLINE_MAX) { i = NUM2INT(rb_ivar_get(val, spg_id_encoding)); } return i; } #define PG_ENCODING_SET_NOCHECK(obj,i) \ do { \ if ((i) < ENCODING_INLINE_MAX) \ ENCODING_SET_INLINED((obj), (i)); \ else \ rb_enc_set_index((obj), (i)); \ } while(0) static VALUE pg_text_dec_integer(char *val, int len) { long i; int max_len; if( sizeof(i) >= 8 && FIXNUM_MAX >= 1000000000000000000LL ){ /* 64 bit system can safely handle all numbers up to 18 digits as Fixnum */ max_len = 18; } else if( sizeof(i) >= 4 && FIXNUM_MAX >= 1000000000LL ){ /* 32 bit system can safely handle all numbers up to 9 digits as Fixnum */ max_len = 9; } else { /* unknown -> don't use fast path for int conversion */ max_len = 0; } if( len <= max_len ){ /* rb_cstr2inum() seems to be slow, so we do the int conversion by hand. * This proved to be 40% faster by the following benchmark: * * conn.type_mapping_for_results = PG::BasicTypeMapForResults.new conn * Benchmark.measure do * conn.exec("select generate_series(1,1000000)").values } * end */ char *val_pos = val; char digit = *val_pos; int neg; int error = 0; if( digit=='-' ){ neg = 1; i = 0; }else if( digit>='0' && digit<='9' ){ neg = 0; i = digit - '0'; } else { error = 1; } while (!error && (digit=*++val_pos)) { if( digit>='0' && digit<='9' ){ i = i * 10 + (digit - '0'); } else { error = 1; } } if( !error ){ return LONG2FIX(neg ? -i : i); } } /* Fallback to ruby method if number too big or unrecognized. 
*/ return rb_cstr2inum(val, 10); } static VALUE spg__array_col_value(char *v, size_t length, VALUE converter, int enc_index, int oid, VALUE db); static VALUE read_array(int *index, char *c_pg_array_string, int array_string_length, VALUE buf, VALUE converter, int enc_index, int oid, VALUE db) { int word_index = 0; char *word = RSTRING_PTR(buf); /* The current character in the input string. */ char c; /* 0: Currently outside a quoted string, current word never quoted * 1: Currently inside a quoted string * -1: Currently outside a quoted string, current word previously quoted */ int openQuote = 0; /* Inside quoted input means the next character should be treated literally, * instead of being treated as a metacharacter. * Outside of quoted input, means that the word shouldn't be pushed to the array, * used when the last entry was a subarray (which adds to the array itself). */ int escapeNext = 0; VALUE array = rb_ary_new(); RB_GC_GUARD(array); /* Special case the empty array, so it doesn't need to be handled manually inside * the loop. 
*/ if(((*index) < array_string_length) && c_pg_array_string[(*index)] == '}') { return array; } for(;(*index) < array_string_length; ++(*index)) { c = c_pg_array_string[*index]; if(openQuote < 1) { if(c == ',' || c == '}') { if(!escapeNext) { if(openQuote == 0 && word_index == 4 && !strncmp(word, "NULL", word_index)) { rb_ary_push(array, Qnil); } else { word[word_index] = '\0'; rb_ary_push(array, spg__array_col_value(word, word_index, converter, enc_index, oid, db)); } } if(c == '}') { return array; } escapeNext = 0; openQuote = 0; word_index = 0; } else if(c == '"') { openQuote = 1; } else if(c == '{') { (*index)++; rb_ary_push(array, read_array(index, c_pg_array_string, array_string_length, buf, converter, enc_index, oid, db)); escapeNext = 1; } else { word[word_index] = c; word_index++; } } else if (escapeNext) { word[word_index] = c; word_index++; escapeNext = 0; } else if (c == '\\') { escapeNext = 1; } else if (c == '"') { openQuote = -1; } else { word[word_index] = c; word_index++; } } RB_GC_GUARD(buf); return array; } static VALUE check_pg_array(int* index, char *c_pg_array_string, int array_string_length) { if (array_string_length == 0) { rb_raise(rb_eArgError, "unexpected PostgreSQL array format, empty"); } else if (array_string_length == 2 && c_pg_array_string[0] == '{' && c_pg_array_string[1] == '}') { return rb_ary_new(); } switch (c_pg_array_string[0]) { case '[': /* Skip explicit subscripts, scanning until opening array */ for(;(*index) < array_string_length && c_pg_array_string[(*index)] != '{'; ++(*index)) /* nothing */; if ((*index) >= array_string_length) { rb_raise(rb_eArgError, "unexpected PostgreSQL array format, no {"); } else { ++(*index); } case '{': break; default: rb_raise(rb_eArgError, "unexpected PostgreSQL array format, doesn't start with { or ["); } return Qnil; } static VALUE parse_pg_array(VALUE self, VALUE pg_array_string, VALUE converter) { /* convert to c-string, create additional ruby string buffer of * the same length, as that
will be the worst case. */ char *c_pg_array_string = StringValueCStr(pg_array_string); int array_string_length = RSTRING_LEN(pg_array_string); int index = 1; VALUE ary; if(RTEST(ary = check_pg_array(&index, c_pg_array_string, array_string_length))) { return ary; } ary = rb_str_buf_new(array_string_length); rb_str_set_len(ary, array_string_length); rb_obj_freeze(ary); return read_array(&index, c_pg_array_string, array_string_length, ary, converter, enc_get_index(pg_array_string), 0, Qnil); } static VALUE spg_timestamp_error(const char *s, VALUE self, const char *error_msg) { self = rb_funcall(self, spg_id_db, 0); if(RTEST(rb_funcall(self, spg_id_convert_infinite_timestamps, 0))) { if((strcmp(s, "infinity") == 0) || (strcmp(s, "-infinity") == 0)) { return rb_funcall(self, spg_id_infinite_timestamp_value, 1, rb_tainted_str_new(s, strlen(s))); } } rb_raise(rb_eArgError, "%s", error_msg); } static inline int char_to_digit(char c) { return c - '0'; } static int str4_to_int(const char *str) { return char_to_digit(str[0]) * 1000 + char_to_digit(str[1]) * 100 + char_to_digit(str[2]) * 10 + char_to_digit(str[3]); } static int str2_to_int(const char *str) { return char_to_digit(str[0]) * 10 + char_to_digit(str[1]); } static VALUE spg_time(const char *p, size_t length, int current) { int hour, minute, second, i; int usec = 0; ID meth = spg_id_local; if (length < 8) { rb_raise(rb_eArgError, "unexpected time format, too short"); } if (p[2] == ':' && p[5] == ':') { hour = str2_to_int(p); minute = str2_to_int(p+3); second = str2_to_int(p+6); p += 8; if (length >= 10 && p[0] == '.') { static const int coef[6] = { 100000, 10000, 1000, 100, 10, 1 }; p++; for (i = 0; i < 6 && isdigit(*p); i++) { usec += coef[i] * char_to_digit(*p++); } } } else { rb_raise(rb_eArgError, "unexpected time format"); } if (current & SPG_TIME_UTC) { meth = spg_id_utc; } return rb_funcall(spg_SQLTime, meth, 7, INT2NUM(current >> SPG_YEAR_SHIFT), INT2NUM((current & SPG_MONTH_MASK) >> SPG_MONTH_SHIFT), 
INT2NUM(current & SPG_DAY_MASK), INT2NUM(hour), INT2NUM(minute), INT2NUM(second), INT2NUM(usec)); } /* Caller should check length is at least 4 */ static int parse_year(const char **str, size_t *length) { int year, i; size_t remaining = *length; const char * p = *str; year = str4_to_int(p); p += 4; remaining -= 4; for(i = 0; isdigit(*p) && i < 3; i++, p++, remaining--) { year = 10 * year + char_to_digit(*p); } *str = p; *length = remaining; return year; } static VALUE spg_date(const char *s, VALUE self, size_t length) { int year, month, day; const char *p = s; if (length < 10) { return spg_timestamp_error(s, self, "unexpected date format, too short"); } year = parse_year(&p, &length); if (length >= 5 && p[0] == '-' && p[3] == '-') { month = str2_to_int(p+1); day = str2_to_int(p+4); } else { return spg_timestamp_error(s, self, "unexpected date format"); } if(s[10] == ' ' && s[11] == 'B' && s[12] == 'C') { year = -year; year++; } return rb_funcall(spg_Date, spg_id_new, 3, INT2NUM(year), INT2NUM(month), INT2NUM(day)); } static VALUE spg_timestamp(const char *s, VALUE self, size_t length, int tz) { VALUE dt; int year, month, day, hour, min, sec, utc_offset; char offset_sign = 0; int offset_seconds = 0; int usec = 0; int i; const char *p = s; size_t remaining = length; if (tz & SPG_DB_CUSTOM || tz & SPG_APP_CUSTOM) { return rb_funcall(rb_funcall(self, spg_id_db, 0), spg_id_to_application_timestamp, 1, rb_str_new2(s)); } if (remaining < 19) { return spg_timestamp_error(s, self, "unexpected timestamp format, too short"); } year = parse_year(&p, &remaining); if (remaining >= 15 && p[0] == '-' && p[3] == '-' && p[6] == ' ' && p[9] == ':' && p[12] == ':') { month = str2_to_int(p+1); day = str2_to_int(p+4); hour = str2_to_int(p+7); min = str2_to_int(p+10); sec = str2_to_int(p+13); p += 15; remaining -= 15; if (remaining >= 2 && p[0] == '.') { /* microsecond part, up to 6 digits */ static const int coef[6] = { 100000, 10000, 1000, 100, 10, 1 }; p++; remaining--; for (i = 0; i <
6 && isdigit(*p); i++) { usec += coef[i] * char_to_digit(*p++); remaining--; } } if ((tz & SPG_HAS_TIMEZONE) && remaining >= 3 && (p[0] == '+' || p[0] == '-')) { offset_sign = p[0]; offset_seconds += str2_to_int(p+1)*3600; p += 3; remaining -= 3; if (p[0] == ':') { p++; remaining--; } if (remaining >= 2 && isdigit(p[0]) && isdigit(p[1])) { offset_seconds += str2_to_int(p)*60; p += 2; remaining -= 2; } if (p[0] == ':') { p++; remaining--; } if (remaining >= 2 && isdigit(p[0]) && isdigit(p[1])) { offset_seconds += str2_to_int(p); p += 2; remaining -= 2; } if (offset_sign == '-') { offset_seconds *= -1; } } if (remaining == 3 && p[0] == ' ' && p[1] == 'B' && p[2] == 'C') { year = -year; year++; } else if (remaining != 0) { return spg_timestamp_error(s, self, "unexpected timestamp format, remaining data left"); } } else { return spg_timestamp_error(s, self, "unexpected timestamp format"); } if (tz & SPG_USE_TIME) { #if (RUBY_API_VERSION_MAJOR > 2 || (RUBY_API_VERSION_MAJOR == 2 && RUBY_API_VERSION_MINOR >= 3)) && defined(HAVE_TIMEGM) /* Fast path for time conversion */ struct tm tm; struct timespec ts; time_t time; tm.tm_year = year - 1900; tm.tm_mon = month - 1; tm.tm_mday = day; tm.tm_hour = hour; tm.tm_min = min; tm.tm_sec = sec; tm.tm_isdst = -1; ts.tv_nsec = usec*1000; if (offset_sign) { time = timegm(&tm); if (time != -1) { ts.tv_sec = time - offset_seconds; dt = rb_time_timespec_new(&ts, offset_seconds); if (tz & SPG_APP_UTC) { dt = rb_funcall(dt, spg_id_utc, 0); } else if (tz & SPG_APP_LOCAL) { dt = rb_funcall(dt, spg_id_localtime, 0); } return dt; } } else { if (tz & SPG_DB_UTC) { time = timegm(&tm); } else { time = mktime(&tm); } if (time != -1) { ts.tv_sec = time; if (tz & SPG_APP_UTC) { offset_seconds = INT_MAX-1; } else { offset_seconds = INT_MAX; } return rb_time_timespec_new(&ts, offset_seconds); } } #endif if (offset_sign) { /* Offset given, convert to local time if not already in local time. 
* While PostgreSQL generally returns timestamps in local time, it's unwise to rely on this. */ dt = rb_funcall(rb_cTime, spg_id_local, 7, INT2NUM(year), INT2NUM(month), INT2NUM(day), INT2NUM(hour), INT2NUM(min), INT2NUM(sec), INT2NUM(usec)); utc_offset = NUM2INT(rb_funcall(dt, spg_id_utc_offset, 0)); if (utc_offset != offset_seconds) { dt = rb_funcall(dt, spg_id_op_plus, 1, INT2NUM(utc_offset - offset_seconds)); } if (tz & SPG_APP_UTC) { dt = rb_funcall(dt, spg_id_utc, 0); } return dt; } else if (!(tz & (SPG_APP_LOCAL|SPG_DB_LOCAL|SPG_APP_UTC|SPG_DB_UTC))) { return rb_funcall(rb_cTime, spg_id_local, 7, INT2NUM(year), INT2NUM(month), INT2NUM(day), INT2NUM(hour), INT2NUM(min), INT2NUM(sec), INT2NUM(usec)); } /* No offset given, and some timezone combination given */ if (tz & SPG_DB_UTC) { dt = rb_funcall(rb_cTime, spg_id_utc, 7, INT2NUM(year), INT2NUM(month), INT2NUM(day), INT2NUM(hour), INT2NUM(min), INT2NUM(sec), INT2NUM(usec)); if (tz & SPG_APP_LOCAL) { return rb_funcall(dt, spg_id_localtime, 0); } else { return dt; } } else { dt = rb_funcall(rb_cTime, spg_id_local, 7, INT2NUM(year), INT2NUM(month), INT2NUM(day), INT2NUM(hour), INT2NUM(min), INT2NUM(sec), INT2NUM(usec)); if (tz & SPG_APP_UTC) { return rb_funcall(dt, spg_id_utc, 0); } else { return dt; } } } else { /* datetime.class == DateTime */ double offset_fraction; if (offset_sign) { /* Offset given, handle correct local time. * While PostgreSQL generally returns timestamps in local time, it's unwise to rely on this. 
*/ offset_fraction = offset_seconds/(double)SPG_SECONDS_PER_DAY; dt = rb_funcall(spg_DateTime, spg_id_new, 7, INT2NUM(year), INT2NUM(month), INT2NUM(day), INT2NUM(hour), INT2NUM(min), INT2NUM(sec), rb_float_new(offset_fraction)); SPG_DT_ADD_USEC if (tz & SPG_APP_LOCAL) { offset_fraction = NUM2INT(rb_funcall(rb_funcall(rb_cTime, spg_id_local, 6, INT2NUM(year), INT2NUM(month), INT2NUM(day), INT2NUM(hour), INT2NUM(min), INT2NUM(sec)), spg_id_utc_offset, 0))/SPG_SECONDS_PER_DAY; dt = rb_funcall(dt, spg_id_new_offset, 1, rb_float_new(offset_fraction)); } else if (tz & SPG_APP_UTC) { dt = rb_funcall(dt, spg_id_new_offset, 1, INT2NUM(0)); } return dt; } else if (!(tz & (SPG_APP_LOCAL|SPG_DB_LOCAL|SPG_APP_UTC|SPG_DB_UTC))) { dt = rb_funcall(spg_DateTime, spg_id_new, 6, INT2NUM(year), INT2NUM(month), INT2NUM(day), INT2NUM(hour), INT2NUM(min), INT2NUM(sec)); SPG_DT_ADD_USEC return dt; } /* No offset given, and some timezone combination given */ if (tz & SPG_DB_LOCAL) { offset_fraction = NUM2INT(rb_funcall(rb_funcall(rb_cTime, spg_id_local, 6, INT2NUM(year), INT2NUM(month), INT2NUM(day), INT2NUM(hour), INT2NUM(min), INT2NUM(sec)), spg_id_utc_offset, 0))/SPG_SECONDS_PER_DAY; dt = rb_funcall(spg_DateTime, spg_id_new, 7, INT2NUM(year), INT2NUM(month), INT2NUM(day), INT2NUM(hour), INT2NUM(min), INT2NUM(sec), rb_float_new(offset_fraction)); SPG_DT_ADD_USEC if (tz & SPG_APP_UTC) { return rb_funcall(dt, spg_id_new_offset, 1, INT2NUM(0)); } else { return dt; } } else { dt = rb_funcall(spg_DateTime, spg_id_new, 6, INT2NUM(year), INT2NUM(month), INT2NUM(day), INT2NUM(hour), INT2NUM(min), INT2NUM(sec)); SPG_DT_ADD_USEC if (tz & SPG_APP_LOCAL) { offset_fraction = NUM2INT(rb_funcall(rb_funcall(rb_cTime, spg_id_local, 6, INT2NUM(year), INT2NUM(month), INT2NUM(day), INT2NUM(hour), INT2NUM(min), INT2NUM(sec)), spg_id_utc_offset, 0))/SPG_SECONDS_PER_DAY; dt = rb_funcall(dt, spg_id_new_offset, 1, rb_float_new(offset_fraction)); return dt; } else { return dt; } } } } static VALUE spg_inet(char 
*val, size_t len) {
  VALUE ip;
  VALUE ip_int;
  VALUE vmasks;
  unsigned int dst[4];
  char buf[64];
  int af = strchr(val, '.') ? AF_INET : AF_INET6;
  int mask = -1;

  if (len >= 64) {
    rb_raise(rb_eTypeError, "unable to parse IP address, too long");
  }

  if (len >= 4) {
    if (val[len-2] == '/') {
      mask = val[len-1] - '0';
      memcpy(buf, val, len-2);
      val = buf;
      val[len-2] = '\0';
    } else if (val[len-3] == '/') {
      mask = (val[len-2] - '0')*10 + val[len-1] - '0';
      memcpy(buf, val, len-3);
      val = buf;
      val[len-3] = '\0';
    } else if (val[len-4] == '/') {
      mask = (val[len-3] - '0')*100 + (val[len-2] - '0')*10 + val[len-1] - '0';
      memcpy(buf, val, len-4);
      val = buf;
      val[len-4] = '\0';
    }
  }

  if (1 != inet_pton(af, val, (char *)dst)) {
    rb_raise(rb_eTypeError, "unable to parse IP address: %s", val);
  }

  if (af == AF_INET) {
    unsigned int ip_int_native;

    if (mask == -1) {
      mask = 32;
    } else if (mask < 0 || mask > 32) {
      rb_raise(rb_eTypeError, "invalid mask for IPv4: %d", mask);
    }
    vmasks = spg_vmasks4;
    ip_int_native = ntohl(*dst);

    /* Work around broken IPAddr behavior of converting portion of address after netmask to 0 */
    switch (mask) {
      case 0:
        ip_int_native = 0;
        break;
      case 32:
        /* nothing to do */
        break;
      default:
        ip_int_native &= ~((1UL<<(32-mask))-1);
        break;
    }
    ip_int = UINT2NUM(ip_int_native);
  } else {
    unsigned long long *dstllp = (unsigned long long *)dst;
    unsigned long long ip_int_native1;
    unsigned long long ip_int_native2;

    if (mask == -1) {
      mask = 128;
    } else if (mask < 0 || mask > 128) {
      rb_raise(rb_eTypeError, "invalid mask for IPv6: %d", mask);
    }
    vmasks = spg_vmasks6;

    ip_int_native1 = ntohll(dstllp);
    dstllp++;
    ip_int_native2 = ntohll(dstllp);

    if (mask == 128) {
      /* nothing to do */
    } else if (mask == 64) {
      ip_int_native2 = 0;
    } else if (mask == 0) {
      ip_int_native1 = 0;
      ip_int_native2 = 0;
    } else if (mask < 64) {
      ip_int_native1 &= ~((1ULL<<(64-mask))-1);
      ip_int_native2 = 0;
    } else {
      ip_int_native2 &= ~((1ULL<<(128-mask))-1);
    }

    /* 4 Bignum allocations */
    ip_int = ULL2NUM(ip_int_native1);
    ip_int = 
rb_funcall(ip_int, spg_id_lshift, 1, INT2NUM(64)); ip_int = rb_funcall(ip_int, spg_id_op_plus, 1, ULL2NUM(ip_int_native2)); } if (spg_use_ipaddr_alloc) { ip = rb_obj_alloc(spg_IPAddr); rb_ivar_set(ip, spg_id_family, INT2NUM(af)); rb_ivar_set(ip, spg_id_addr, ip_int); rb_ivar_set(ip, spg_id_mask_addr, RARRAY_AREF(vmasks, mask)); } else { VALUE ip_args[2]; ip_args[0] = ip_int; ip_args[1] = INT2NUM(af); ip = rb_class_new_instance(2, ip_args, spg_IPAddr); ip = rb_funcall(ip, spg_id_mask, 1, INT2NUM(mask)); } return ip; } static VALUE spg_create_Blob(VALUE v) { struct spg_blob_initialization *bi = (struct spg_blob_initialization *)v; if (bi->blob_string == NULL) { rb_raise(rb_eNoMemError, "PQunescapeBytea failure: probably not enough memory"); } v = rb_str_new_with_class(spg_Blob_instance, bi->blob_string, bi->length); #ifndef NO_TAINT rb_obj_taint(v); #endif return v; } static VALUE spg_fetch_rows_set_cols(VALUE self, VALUE ignore) { return Qnil; } static VALUE spg__array_col_value(char *v, size_t length, VALUE converter, int enc_index, int oid, VALUE db) { VALUE rv; struct spg_blob_initialization bi; switch(oid) { case 16: /* boolean */ rv = *v == 't' ? 
Qtrue : Qfalse; break; case 17: /* bytea */ bi.blob_string = (char *)PQunescapeBytea((unsigned char*)v, &bi.length); rv = rb_ensure(spg_create_Blob, (VALUE)&bi, (VALUE(*)())PQfreemem, (VALUE)bi.blob_string); break; case 20: /* integer */ case 21: case 23: case 26: rv = pg_text_dec_integer(v, length); break; case 700: /* float */ case 701: switch(*v) { case 'N': rv = spg_nan; break; case 'I': rv = spg_pos_inf; break; case '-': if (v[1] == 'I') { rv = spg_neg_inf; } else { rv = rb_float_new(rb_cstr_to_dbl(v, Qfalse)); } break; default: rv = rb_float_new(rb_cstr_to_dbl(v, Qfalse)); break; } break; case 1700: /* numeric */ rv = rb_funcall(rb_mKernel, spg_id_BigDecimal, 1, rb_str_new(v, length)); break; case 1082: /* date */ rv = spg_date(v, db, length); break; case 1083: /* time */ case 1266: rv = spg_time(v, length, (int)converter); break; case 1114: /* timestamp */ case 1184: rv = spg_timestamp(v, db, length, (int)converter); break; case 18: /* char */ case 25: /* text */ case 1043: /* varchar*/ rv = rb_tainted_str_new(v, length); PG_ENCODING_SET_NOCHECK(rv, enc_index); break; case 869: /* inet */ case 650: /* cidr */ if (!RTEST(converter)) { rv = spg_inet(v, length); break; } default: rv = rb_tainted_str_new(v, length); PG_ENCODING_SET_NOCHECK(rv, enc_index); if (RTEST(converter)) { rv = rb_funcall(converter, spg_id_call, 1, rv); } } return rv; } static VALUE spg_array_value(char *c_pg_array_string, int array_string_length, VALUE converter, int enc_index, int oid, VALUE self, VALUE array_type) { int index = 1; VALUE buf; VALUE args[2]; args[1] = array_type; if(RTEST(args[0] = check_pg_array(&index, c_pg_array_string, array_string_length))) { return rb_class_new_instance(2, args, spg_PGArray); } buf = rb_str_buf_new(array_string_length); rb_str_set_len(buf, array_string_length); rb_obj_freeze(buf); args[0] = read_array(&index, c_pg_array_string, array_string_length, buf, converter, enc_index, oid, self); return rb_class_new_instance(2, args, spg_PGArray); } static 
int spg_time_info_bitmask(void) { int info = 0; VALUE now = rb_funcall(spg_SQLTime, spg_id_date, 0); info = NUM2INT(rb_funcall(now, spg_id_year, 0)) << SPG_YEAR_SHIFT; info += NUM2INT(rb_funcall(now, spg_id_month, 0)) << SPG_MONTH_SHIFT; info += NUM2INT(rb_funcall(now, spg_id_day, 0)); if (rb_funcall(spg_Sequel, spg_id_application_timezone, 0) == spg_sym_utc) { info += SPG_TIME_UTC; } return info; } static int spg_timestamp_info_bitmask(VALUE self) { VALUE db, rtz; int tz = 0; db = rb_funcall(self, spg_id_db, 0); rtz = rb_funcall(db, spg_id_timezone, 0); if (rtz != Qnil) { if (rtz == spg_sym_local) { tz |= SPG_DB_LOCAL; } else if (rtz == spg_sym_utc) { tz |= SPG_DB_UTC; } else { tz |= SPG_DB_CUSTOM; } } rtz = rb_funcall(spg_Sequel, spg_id_application_timezone, 0); if (rtz != Qnil) { if (rtz == spg_sym_local) { tz |= SPG_APP_LOCAL; } else if (rtz == spg_sym_utc) { tz |= SPG_APP_UTC; } else { tz |= SPG_APP_CUSTOM; } } if (rb_cTime == rb_funcall(spg_Sequel, spg_id_datetime_class, 0)) { tz |= SPG_USE_TIME; } tz |= SPG_TZ_INITIALIZED; return tz; } static VALUE spg__col_value(VALUE self, PGresult *res, long i, long j, VALUE* colconvert, int enc_index) { char *v; VALUE rv; int ftype = PQftype(res, j); VALUE array_type; VALUE scalar_oid; struct spg_blob_initialization bi; if(PQgetisnull(res, i, j)) { rv = Qnil; } else { v = PQgetvalue(res, i, j); switch(ftype) { case 16: /* boolean */ rv = *v == 't' ? 
Qtrue : Qfalse; break; case 17: /* bytea */ bi.blob_string = (char *)PQunescapeBytea((unsigned char*)v, &bi.length); rv = rb_ensure(spg_create_Blob, (VALUE)&bi, (VALUE(*)())PQfreemem, (VALUE)bi.blob_string); break; case 20: /* integer */ case 21: case 23: case 26: rv = pg_text_dec_integer(v, PQgetlength(res, i, j)); break; case 700: /* float */ case 701: switch(*v) { case 'N': rv = spg_nan; break; case 'I': rv = spg_pos_inf; break; case '-': if (v[1] == 'I') { rv = spg_neg_inf; } else { rv = rb_float_new(rb_cstr_to_dbl(v, Qfalse)); } break; default: rv = rb_float_new(rb_cstr_to_dbl(v, Qfalse)); break; } break; case 1700: /* numeric */ rv = rb_funcall(rb_mKernel, spg_id_BigDecimal, 1, rb_str_new(v, PQgetlength(res, i, j))); break; case 1082: /* date */ rv = spg_date(v, self, PQgetlength(res, i, j)); break; case 1083: /* time */ case 1266: rv = spg_time(v, PQgetlength(res, i, j), (int)colconvert[j]); break; case 1114: /* timestamp */ case 1184: rv = spg_timestamp(v, self, PQgetlength(res, i, j), (int)colconvert[j]); break; case 18: /* char */ case 25: /* text */ case 1043: /* varchar*/ rv = rb_tainted_str_new(v, PQgetlength(res, i, j)); PG_ENCODING_SET_NOCHECK(rv, enc_index); break; /* array types */ case 1009: case 1014: case 1015: case 1007: case 1115: case 1185: case 1183: case 1270: case 1016: case 1231: case 1022: case 1000: case 1001: case 1182: case 1005: case 1028: case 1021: case 143: case 791: case 1561: case 1563: case 2951: case 1011: case 1012: case 1003: case 1010: case 1006: case 1041: case 651: switch(ftype) { case 1009: case 1014: array_type = spg_sym_text; scalar_oid = 25; break; case 1015: array_type = spg_sym_character_varying; scalar_oid = 25; break; case 1007: array_type = spg_sym_integer; scalar_oid = 23; break; case 1115: array_type = spg_sym_timestamp; scalar_oid = 1114; break; case 1185: array_type = spg_sym_timestamptz; scalar_oid = 1184; break; case 1183: array_type = spg_sym_time; scalar_oid = 1083; break; case 1270: array_type = 
spg_sym_timetz; scalar_oid = 1266; break; case 1016: array_type = spg_sym_bigint; scalar_oid = 20; break; case 1231: array_type = spg_sym_numeric; scalar_oid = 1700; break; case 1022: array_type = spg_sym_double_precision; scalar_oid = 701; break; case 1000: array_type = spg_sym_boolean; scalar_oid = 16; break; case 1001: array_type = spg_sym_bytea; scalar_oid = 17; break; case 1182: array_type = spg_sym_date; scalar_oid = 1082; break; case 1005: array_type = spg_sym_smallint; scalar_oid = 21; break; case 1028: array_type = spg_sym_oid; scalar_oid = 26; break; case 1021: array_type = spg_sym_real; scalar_oid = 700; break; case 143: array_type = spg_sym_xml; scalar_oid = 142; break; case 791: array_type = spg_sym_money; scalar_oid = 790; break; case 1561: array_type = spg_sym_bit; scalar_oid = 1560; break; case 1563: array_type = spg_sym_bit_varying; scalar_oid = 1562; break; case 2951: array_type = spg_sym_uuid; scalar_oid = 2950; break; case 1011: array_type = spg_sym_xid; scalar_oid = 28; break; case 1012: array_type = spg_sym_cid; scalar_oid = 29; break; case 1003: array_type = spg_sym_name; scalar_oid = 19; break; case 1010: array_type = spg_sym_tid; scalar_oid = 27; break; case 1006: array_type = spg_sym_int2vector; scalar_oid = 22; break; case 1041: if (RTEST(colconvert[j])) { goto default_cond; } array_type = spg_sym_inet; scalar_oid = 869; break; case 651: if (RTEST(colconvert[j])) { goto default_cond; } array_type = spg_sym_cidr; scalar_oid = 650; break; } rv = spg_array_value(v, PQgetlength(res, i, j), colconvert[j], enc_index, scalar_oid, self, array_type); break; case 869: /* inet */ case 650: /* cidr */ if (colconvert[j] == Qnil) { rv = spg_inet(v, PQgetlength(res, i, j)); break; } default: default_cond: rv = rb_tainted_str_new(v, PQgetlength(res, i, j)); PG_ENCODING_SET_NOCHECK(rv, enc_index); if (colconvert[j] != Qnil) { rv = rb_funcall(colconvert[j], spg_id_call, 1, rv); } } } return rv; } static VALUE spg__col_values(VALUE self, VALUE v, VALUE 
*colsyms, long nfields, PGresult *res, long i, VALUE *colconvert, int enc_index) { long j; VALUE cur; long len = RARRAY_LEN(v); VALUE a = rb_ary_new2(len); for (j=0; j= 1.2"))) { spg_use_pg_get_result_enc_idx = 1; } rb_const_set(spg_Postgres, rb_intern("SEQUEL_PG_VERSION_INTEGER"), INT2FIX(SEQUEL_PG_VERSION_INTEGER)); spg_id_BigDecimal = rb_intern("BigDecimal"); spg_id_new = rb_intern("new"); spg_id_date = rb_intern("date"); spg_id_local = rb_intern("local"); spg_id_year = rb_intern("year"); spg_id_month = rb_intern("month"); spg_id_day = rb_intern("day"); spg_id_output_identifier = rb_intern("output_identifier"); spg_id_datetime_class = rb_intern("datetime_class"); spg_id_application_timezone = rb_intern("application_timezone"); spg_id_to_application_timestamp = rb_intern("to_application_timestamp"); spg_id_timezone = rb_intern("timezone"); spg_id_op_plus = rb_intern("+"); spg_id_utc = rb_intern("utc"); spg_id_utc_offset = rb_intern("utc_offset"); spg_id_localtime = rb_intern("localtime"); spg_id_new_offset = rb_intern("new_offset"); spg_id_convert_infinite_timestamps = rb_intern("convert_infinite_timestamps"); spg_id_infinite_timestamp_value = rb_intern("infinite_timestamp_value"); spg_id_call = rb_intern("call"); spg_id_get = rb_intern("[]"); spg_id_opts = rb_intern("opts"); spg_id_db = rb_intern("db"); spg_id_conversion_procs = rb_intern("conversion_procs"); spg_id_columns_equal = rb_intern("columns="); spg_id_columns = rb_intern("@columns"); spg_id_encoding = rb_intern("@encoding"); spg_id_values = rb_intern("@values"); spg_id_family = rb_intern("@family"); spg_id_addr = rb_intern("@addr"); spg_id_mask_addr = rb_intern("@mask_addr"); spg_id_lshift = rb_intern("<<"); spg_id_mask = rb_intern("mask"); spg_sym_utc = ID2SYM(rb_intern("utc")); spg_sym_local = ID2SYM(rb_intern("local")); spg_sym_map = ID2SYM(rb_intern("map")); spg_sym_first = ID2SYM(rb_intern("first")); spg_sym_array = ID2SYM(rb_intern("array")); spg_sym_hash = ID2SYM(rb_intern("hash")); 
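The timestamp layout that spg_timestamp parses above can be sketched as a Ruby regexp. This is an illustrative re-implementation, not code from the extension: PostgreSQL renders timestamps as `YYYY-MM-DD HH:MM:SS`, optionally followed by up to six fractional-second digits, a UTC offset (with optional minute and second components), and a trailing ` BC` marker.

```ruby
# Sketch of the timestamp format the C parser accepts. Names here are
# hypothetical; the extension parses the bytes directly without a regexp.
TIMESTAMP_RE = /\A(\d{4,7})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})(?:\.(\d{1,6}))?([+-]\d{2}(?::?\d{2}(?::?\d{2})?)?)?( BC)?\z/

def parse_pg_timestamp(s)
  m = TIMESTAMP_RE.match(s) or raise ArgumentError, "unexpected timestamp format"
  year = m[1].to_i
  year = -year + 1 if m[9]                    # "... BC": year 1 BC is year 0
  usec = m[7] ? m[7].ljust(6, "0").to_i : 0   # ".5" means 500000 microseconds
  offset = 0
  if (o = m[8])
    sign = o.start_with?("-") ? -1 : 1
    parts = o[1..-1].delete(":").scan(/\d\d/).map(&:to_i)
    offset = sign * (parts[0] * 3600 + parts.fetch(1, 0) * 60 + parts.fetch(2, 0))
  end
  [year, m[2].to_i, m[3].to_i, m[4].to_i, m[5].to_i, m[6].to_i, usec, offset]
end
```

The BC adjustment mirrors the C code's `year = -year; year++;`, mapping 1 BC to astronomical year 0.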
spg_sym_hash_groups = ID2SYM(rb_intern("hash_groups")); spg_sym_model = ID2SYM(rb_intern("model")); spg_sym__sequel_pg_type = ID2SYM(rb_intern("_sequel_pg_type")); spg_sym__sequel_pg_value = ID2SYM(rb_intern("_sequel_pg_value")); spg_sym_text = ID2SYM(rb_intern("text")); spg_sym_character_varying = ID2SYM(rb_intern("character varying")); spg_sym_integer = ID2SYM(rb_intern("integer")); spg_sym_timestamp = ID2SYM(rb_intern("timestamp")); spg_sym_timestamptz = ID2SYM(rb_intern("timestamptz")); spg_sym_time = ID2SYM(rb_intern("time")); spg_sym_timetz = ID2SYM(rb_intern("timetz")); spg_sym_bigint = ID2SYM(rb_intern("bigint")); spg_sym_numeric = ID2SYM(rb_intern("numeric")); spg_sym_double_precision = ID2SYM(rb_intern("double precision")); spg_sym_boolean = ID2SYM(rb_intern("boolean")); spg_sym_bytea = ID2SYM(rb_intern("bytea")); spg_sym_date = ID2SYM(rb_intern("date")); spg_sym_smallint = ID2SYM(rb_intern("smallint")); spg_sym_oid = ID2SYM(rb_intern("oid")); spg_sym_real = ID2SYM(rb_intern("real")); spg_sym_xml = ID2SYM(rb_intern("xml")); spg_sym_money = ID2SYM(rb_intern("money")); spg_sym_bit = ID2SYM(rb_intern("bit")); spg_sym_bit_varying = ID2SYM(rb_intern("bit varying")); spg_sym_uuid = ID2SYM(rb_intern("uuid")); spg_sym_xid = ID2SYM(rb_intern("xid")); spg_sym_cid = ID2SYM(rb_intern("cid")); spg_sym_name = ID2SYM(rb_intern("name")); spg_sym_tid = ID2SYM(rb_intern("tid")); spg_sym_int2vector = ID2SYM(rb_intern("int2vector")); spg_sym_inet = ID2SYM(rb_intern("inet")); spg_sym_cidr = ID2SYM(rb_intern("cidr")); spg_Blob = rb_const_get(rb_const_get(spg_Sequel, rb_intern("SQL")), rb_intern("Blob")); rb_gc_register_mark_object(spg_Blob); spg_Blob_instance = rb_obj_freeze(rb_funcall(spg_Blob, spg_id_new, 0)); rb_gc_register_mark_object(spg_Blob_instance); spg_SQLTime = rb_const_get(spg_Sequel, rb_intern("SQLTime")); rb_gc_register_mark_object(spg_SQLTime); spg_Date = rb_const_get(rb_cObject, rb_intern("Date")); rb_gc_register_mark_object(spg_Date); spg_DateTime = 
rb_const_get(rb_cObject, rb_intern("DateTime")); rb_gc_register_mark_object(spg_DateTime); spg_PGError = rb_const_get(rb_const_get(rb_cObject, rb_intern("PG")), rb_intern("Error")); rb_gc_register_mark_object(spg_PGError); spg_nan = rb_eval_string("0.0/0.0"); rb_gc_register_mark_object(spg_nan); spg_pos_inf = rb_eval_string("1.0/0.0"); rb_gc_register_mark_object(spg_pos_inf); spg_neg_inf = rb_eval_string("-1.0/0.0"); rb_gc_register_mark_object(spg_neg_inf); spg_usec_per_day = ULL2NUM(86400000000ULL); rb_gc_register_mark_object(spg_usec_per_day); rb_require("ipaddr"); spg_IPAddr = rb_const_get(rb_cObject, rb_intern("IPAddr")); rb_gc_register_mark_object(spg_IPAddr); spg_use_ipaddr_alloc = RTEST(rb_eval_string("IPAddr.new.instance_variables.sort == [:@addr, :@family, :@mask_addr]")); spg_vmasks4 = rb_eval_string("a = [0]*33; a[0] = 0; a[32] = 0xffffffff; 31.downto(1){|i| a[i] = a[i+1] - (1 << (31 - i))}; a.freeze"); rb_gc_register_mark_object(spg_vmasks4); spg_vmasks6 = rb_eval_string("a = [0]*129; a[0] = 0; a[128] = 0xffffffffffffffffffffffffffffffff; 127.downto(1){|i| a[i] = a[i+1] - (1 << (127 - i))}; a.freeze"); rb_gc_register_mark_object(spg_vmasks6); c = rb_const_get(spg_Postgres, rb_intern("Dataset")); rb_undef_method(c, "yield_hash_rows"); rb_define_private_method(c, "yield_hash_rows", spg_yield_hash_rows, 2); rb_undef_method(c, "fetch_rows_set_cols"); rb_define_private_method(c, "fetch_rows_set_cols", spg_fetch_rows_set_cols, 1); rb_define_singleton_method(spg_Postgres, "supports_streaming?", spg_supports_streaming_p, 0); #if HAVE_PQSETSINGLEROWMODE spg_id_get_result = rb_intern("get_result"); spg_id_clear = rb_intern("clear"); spg_id_check = rb_intern("check"); rb_define_private_method(c, "yield_each_row", spg_yield_each_row, 1); c = rb_const_get(spg_Postgres, rb_intern("Adapter")); rb_define_private_method(c, "set_single_row_mode", spg_set_single_row_mode, 0); #endif rb_define_singleton_method(spg_Postgres, "parse_pg_array", parse_pg_array, 2); 
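The vmasks tables that Init builds with rb_eval_string can be written as an ordinary Ruby method for readability; `vmasks4[i]` is the IPv4 netmask integer for a `/i` prefix. This is a sketch of the same computation, not how the extension loads it.

```ruby
# Build the 33-entry IPv4 netmask table: entry i is the mask for prefix /i,
# computed from /32 downward by clearing one more host bit each step.
def build_vmasks4
  a = [0] * 33
  a[32] = 0xffffffff
  31.downto(1) { |i| a[i] = a[i + 1] - (1 << (31 - i)) }
  a.freeze
end
```

Precomputing the table lets spg_inet assign `@mask_addr` with a single array lookup instead of constructing a mask integer per row.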
rb_require("sequel_pg/sequel_pg"); rb_require("sequel/extensions/pg_array"); spg_PGArray = rb_const_get(spg_Postgres, rb_intern("PGArray")); rb_gc_register_mark_object(spg_PGArray); } sequel_pg-1.14.0/lib/000077500000000000000000000000001373241172400143335ustar00rootroot00000000000000sequel_pg-1.14.0/lib/sequel/000077500000000000000000000000001373241172400156315ustar00rootroot00000000000000sequel_pg-1.14.0/lib/sequel/extensions/000077500000000000000000000000001373241172400200305ustar00rootroot00000000000000sequel_pg-1.14.0/lib/sequel/extensions/pg_streaming.rb000066400000000000000000000104061373241172400230350ustar00rootroot00000000000000unless Sequel::Postgres.respond_to?(:supports_streaming?) raise LoadError, "either sequel_pg not loaded, or an old version of sequel_pg loaded" end unless Sequel::Postgres.supports_streaming? raise LoadError, "streaming is not supported by the version of libpq in use" end # Database methods necessary to support streaming. You should load this extension # into your database object: # # DB.extension(:pg_streaming) # # Then you can call #stream on your datasets to use the streaming support: # # DB[:table].stream.each{|row| ...} # # Or change a set so that all dataset calls use streaming: # # DB.stream_all_queries = true module Sequel::Postgres::Streaming attr_accessor :stream_all_queries # Also extend the database's datasets to support streaming. # This extension requires modifying connections, so disconnect # so that new connections will get the methods. def self.extended(db) db.extend_datasets(DatasetMethods) db.stream_all_queries = false db.disconnect end # Make sure all new connections have the appropriate methods added. def connect(server) conn = super conn.extend(AdapterMethods) conn end private # If streaming is requested, and a prepared statement is not # used, tell the connection to use single row mode for the query. 
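The handoff described in the comment above is a one-shot flag: single row mode is requested for the next query only, and the adapter clears the flag as soon as it is consumed. The pattern can be sketched independently of a real PG connection (class name is hypothetical):

```ruby
# One-shot flag pattern: the adapter checks @single_row_mode, immediately
# clears it, and only then decides how to send the query. Sketch only.
class SingleRowModeFlag
  attr_accessor :single_row_mode

  # Returns true at most once per assignment of single_row_mode = true.
  def consume
    mode = @single_row_mode
    @single_row_mode = false
    !!mode
  end
end
```

Because the flag is cleared before the query is sent, an error raised while streaming does not leave the connection stuck in single row mode for subsequent queries.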
def _execute(conn, sql, opts={}, &block) if opts[:stream] && !sql.is_a?(Symbol) conn.single_row_mode = true end super end # If streaming is requested, send the prepared statement instead # of executing it and blocking. def _execute_prepared_statement(conn, ps_name, args, opts) if opts[:stream] conn.send_prepared_statement(ps_name, args) else super end end module AdapterMethods # Whether the next query on this connection should use # single_row_mode. attr_accessor :single_row_mode # Send the prepared statement on this connection using # single row mode. def send_prepared_statement(ps_name, args) send_query_prepared(ps_name, args) set_single_row_mode block self end private if Sequel::Database.instance_methods.map(&:to_s).include?('log_connection_yield') # If using single row mode, send the query instead of executing it. def execute_query(sql, args) if @single_row_mode @single_row_mode = false @db.log_connection_yield(sql, self, args){args ? send_query(sql, args) : send_query(sql)} set_single_row_mode block self else super end end else def execute_query(sql, args) if @single_row_mode @single_row_mode = false @db.log_yield(sql, args){args ? send_query(sql, args) : send_query(sql)} set_single_row_mode block self else super end end end end # Dataset methods used to implement streaming. module DatasetMethods # If streaming has been requested and the current dataset # can be streamed, request the database use streaming when # executing this query, and use yield_each_row to process # the separate PGresult for each row in the connection. def fetch_rows(sql) if stream_results? execute(sql, :stream=>true) do |conn| yield_each_row(conn){|h| yield h} end else super end end # Use streaming to implement paging. def paged_each(opts=Sequel::OPTS, &block) unless block_given? return enum_for(:paged_each, opts) end stream.each(&block) end # Return a clone of the dataset that will use streaming to load # rows. 
def stream clone(:stream=>true) end private # Only stream results if streaming has been specifically requested # and the query is streamable. def stream_results? (@opts[:stream] || db.stream_all_queries) && streamable? end # Queries using cursors are not streamable, and queries that use # the map/select_map/to_hash/to_hash_groups optimizations are not # streamable, but other queries are streamable. def streamable? spgt = (o = @opts)[:_sequel_pg_type] (spgt.nil? || spgt == :model) && !o[:cursor] end end end Sequel::Database.register_extension(:pg_streaming, Sequel::Postgres::Streaming) sequel_pg-1.14.0/lib/sequel_pg/000077500000000000000000000000001373241172400163175ustar00rootroot00000000000000sequel_pg-1.14.0/lib/sequel_pg/sequel_pg.rb000066400000000000000000000102121373241172400206240ustar00rootroot00000000000000# Add speedup for model class creation from dataset class Sequel::Postgres::Database def optimize_model_load=(v) Sequel::Deprecation.deprecate("Database#optimize_model_load= is deprecated. Optimized model loading is now enabled by default, and can only be disabled on a per-Dataset basis.") v end def optimize_model_load Sequel::Deprecation.deprecate("Database#optimize_model_load is deprecated. Optimized model loading is now enabled by default, and can only be disabled on a per-Dataset basis.") true end end # Add faster versions of Dataset#map, #as_hash, #to_hash_groups, #select_map, #select_order_map, and #select_hash class Sequel::Postgres::Dataset def optimize_model_load=(v) Sequel::Deprecation.deprecate("Dataset#optimize_model_load= mutation method is deprecated. Switch to using Dataset#with_optimize_model_load, which returns a modified dataset") opts[:optimize_model_load] = v end def optimize_model_load Sequel::Deprecation.deprecate("Dataset#optimize_model_load method is deprecated. Optimized model loading is enabled by default.") opts.has_key?(:optimize_model_load) ? 
opts[:optimize_model_load] : true end # In the case where an argument is given, use an optimized version. def map(sym=nil) if sym if block_given? super else rows = [] clone(:_sequel_pg_type=>:map, :_sequel_pg_value=>sym).fetch_rows(sql){|s| rows << s} rows end else super end end # Return a modified copy with the optimize_model_load setting changed. def with_optimize_model_load(v) clone(:optimize_model_load=>v) end # In the case where both arguments given, use an optimized version. def as_hash(key_column, value_column = nil, opts = Sequel::OPTS) if value_column && !opts[:hash] clone(:_sequel_pg_type=>:hash, :_sequel_pg_value=>[key_column, value_column]).fetch_rows(sql){|s| return s} {} elsif opts.empty? super(key_column, value_column) else super end end unless Sequel::Dataset.method_defined?(:as_hash) # Handle previous versions of Sequel that use to_hash instead of as_hash alias to_hash as_hash remove_method :as_hash end # In the case where both arguments given, use an optimized version. def to_hash_groups(key_column, value_column = nil, opts = Sequel::OPTS) if value_column && !opts[:hash] clone(:_sequel_pg_type=>:hash_groups, :_sequel_pg_value=>[key_column, value_column]).fetch_rows(sql){|s| return s} {} elsif opts.empty? super(key_column, value_column) else super end end if defined?(Sequel::Model::ClassMethods) # If model loads are being optimized and this is a model load, use the optimized # version. def each(&block) if optimize_model_load? 
clone(:_sequel_pg_type=>:model, :_sequel_pg_value=>row_proc).fetch_rows(sql, &block) else super end end end protected # Always use optimized version def _select_map_multiple(ret_cols) rows = [] clone(:_sequel_pg_type=>:array).fetch_rows(sql){|s| rows << s} rows end # Always use optimized version def _select_map_single rows = [] clone(:_sequel_pg_type=>:first).fetch_rows(sql){|s| rows << s} rows end private if defined?(Sequel::Model::ClassMethods) # The model load can only be optimized if it's for a model and it's not a graphed dataset # or using a cursor. def optimize_model_load? (rp = row_proc) && rp.is_a?(Class) && rp < Sequel::Model && rp.method(:call).owner == Sequel::Model::ClassMethods && opts[:optimize_model_load] != false && !opts[:use_cursor] && !opts[:graph] end end end if defined?(Sequel::Postgres::PGArray) # pg_array extension previously loaded class Sequel::Postgres::PGArray::Creator # Override Creator to use sequel_pg's C-based parser instead of the pure ruby parser. def call(string) Sequel::Postgres::PGArray.new(Sequel::Postgres.parse_pg_array(string, @converter), @type) end end # Remove the pure-ruby parser, no longer needed. 
Sequel::Postgres::PGArray.send(:remove_const, :Parser)
end
sequel_pg-1.14.0/sequel_pg.gemspec
version_integer = File.readlines('ext/sequel_pg/sequel_pg.c').first.split.last.to_i
raise "invalid version" unless version_integer >= 10617

SEQUEL_PG_GEMSPEC = Gem::Specification.new do |s|
  s.name = 'sequel_pg'
  s.version = "#{version_integer/10000}.#{(version_integer%10000)/100}.#{version_integer%100}"
  s.platform = Gem::Platform::RUBY
  s.extra_rdoc_files = ["README.rdoc", "CHANGELOG", "MIT-LICENSE"]
  s.rdoc_options += ["--quiet", "--line-numbers", "--inline-source", '--title', 'sequel_pg: Faster SELECTs when using Sequel with pg', '--main', 'README.rdoc']
  s.summary = "Faster SELECTs when using Sequel with pg"
  s.author = "Jeremy Evans"
  s.email = "code@jeremyevans.net"
  s.homepage = "http://github.com/jeremyevans/sequel_pg"
  s.required_ruby_version = ">= 1.9.3"
  s.files = %w(MIT-LICENSE CHANGELOG README.rdoc Rakefile ext/sequel_pg/extconf.rb ext/sequel_pg/sequel_pg.c lib/sequel_pg/sequel_pg.rb lib/sequel/extensions/pg_streaming.rb)
  s.license = 'MIT'
  s.extensions << 'ext/sequel_pg/extconf.rb'
  s.add_dependency("pg", [">= 0.18.0", "!= 1.2.0"])
  s.add_dependency("sequel", [">= 4.38.0"])
  s.metadata = {
    'bug_tracker_uri' => 'https://github.com/jeremyevans/sequel_pg/issues',
    'changelog_uri' => 'https://github.com/jeremyevans/sequel_pg/blob/master/CHANGELOG',
    'documentation_uri' => 'https://github.com/jeremyevans/sequel_pg/blob/master/README.rdoc',
    'mailing_list_uri' => 'https://groups.google.com/forum/#!forum/sequel-talk',
    'source_code_uri' => 'https://github.com/jeremyevans/sequel_pg',
  }
  s.description = <