ruby-sequel-4.1.1/.gitignore:

*.lock
*.rbc
*.swp
/coverage
/rdoc
/sequel-*.gem
/spec/bin-sequel-*
/spec/spec_config.rb
/www/public/*.html
/www/public/rdoc*

ruby-sequel-4.1.1/.travis.gemfile:

# This file is only used for TravisCI integration.

source 'http://rubygems.org'

gem 'rake', '<10.0.0'
gem 'rspec', '<2.10.0'

# Plugin/Extension Dependencies
gem 'tzinfo'
gem 'activemodel', '<4.0.0'
gem 'nokogiri', '<1.6.0'
gem 'json'

# MRI/Rubinius Adapter Dependencies
gem "sqlite3", :platform => :ruby
gem "mysql2", :platform => :ruby
gem "pg", :platform => :ruby

# JRuby Adapter Dependencies
gem 'jdbc-sqlite3', :platform => :jruby
gem 'jdbc-mysql', :platform => :jruby
gem 'jdbc-postgres', :platform => :jruby

ruby-sequel-4.1.1/.travis.yml:

language: ruby
rvm:
  - 1.8.7
  - 1.9.2
  - 1.9.3
  - 2.0.0
  - jruby-18mode
  - jruby-19mode
  - rbx-18mode
  - rbx-19mode
script: bundle exec rake spec_travis
gemfile: .travis.gemfile
before_script:
  - mysql -e 'create database sequel_test;'
  - psql -c 'create database sequel_test;' -U postgres

ruby-sequel-4.1.1/CHANGELOG:

=== 4.1.1 (2013-08-01)

* Fix select_map, select_order_map, and single_value methods on eager_graphed datasets (jeremyevans)

=== 4.1.0 (2013-08-01)

* Support :inherits option in Database#create_table on PostgreSQL, for table inheritance (jeremyevans)
* Handle dropping indexes for schema qualified tables on PostgreSQL (jeremyevans)
* Add Database#error_info on PostgreSQL 9.3+ if pg-0.16.0+ is used, to get a hash of metadata for a given database exception (jeremyevans)
* Allow prepared_statements plugin to work with instance_filters and update_primary_key plugins (jeremyevans)
* Support deferrable exclusion constraints on PostgreSQL using the :deferrable option (mfoody) (#687)
* Make Database#run and #<< accept SQL::PlaceholderLiteralString values (jeremyevans)
* Deprecate :driver option in odbc adapter since it appears to be broken (jeremyevans)
* Support :drvconnect option in odbc adapter for supplying the ODBC connection string directly (jeremyevans)
* Support mysql2 0.3.12+ result streaming via Dataset#stream (jeremyevans)
* Convert Java::JavaUtil::HashMap to ruby Hash in jdbc adapter, for better handling of PostgreSQL hstore type (jeremyevans) (#686)
* Raise NoMatchingRow if calling add_association with a primary key value that doesn't match an existing row (jeremyevans)
* Allow PostgreSQL add_constraint to support :not_valid option (jeremyevans)
* Allow CHECK constraints to have options by using an options hash as the constraint name (jeremyevans)
* Correctly raise error when using an invalid virtual row block function call (jeremyevans)
* Support REPLACE on SQLite via Dataset#replace and #multi_replace (etehtsea) (#681)
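For illustration, a minimal sketch of the new :inherits option (assuming DB is a Sequel::Database connected to PostgreSQL; the cities/capitals tables are hypothetical):

  DB.create_table(:cities) do
    primary_key :id
    String :name
    Integer :population
  end

  # capitals gets cities' columns via PostgreSQL table inheritance
  DB.create_table(:capitals, :inherits => :cities) do
    String :country
  end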
=== 4.0.0 (2013-07-01)

* Correctly parse composite primary keys on SQLite 3.7.16+ (jeremyevans)
* Recognize another disconnect error in the jdbc/oracle adapter (jeremyevans)
* Add pg_json_ops extension for calling JSON functions and operators in PostgreSQL 9.3+ (jeremyevans)
* Handle non-JSON plain strings, integers, and floats in PostgreSQL JSON columns in pg_json extension (jeremyevans)
* Dataset#from now accepts virtual row blocks (jeremyevans)
* Add Database#refresh_view on PostgreSQL to support refreshing materialized views (jeremyevans)
* Support the Database#drop_view :if_exists option on PostgreSQL (jeremyevans)
* Support the Database#{create,drop}_view :materialized option for creating materialized views in PostgreSQL 9.3+ (jeremyevans)
* Support the Database#create_view :recursive option for creating recursive views in PostgreSQL 9.3+ (jeremyevans)
* Support the Database#create_view :columns option for using explicit columns (jeremyevans)
* Support the Database#create_schema :owner and :if_not_exists options on PostgreSQL (jeremyevans)
* Support :index_type=>:gist option to create GIST full text indexes on PostgreSQL (jeremyevans)
* Add Postgres::ArrayOp#replace for the array_replace function in PostgreSQL 9.3+ (jeremyevans)
* Add Postgres::ArrayOp#remove for the array_remove function in PostgreSQL 9.3+ (jeremyevans)
* Add Postgres::ArrayOp#hstore for creating hstores from arrays (jeremyevans)
* Make Postgres::ArrayOp#[] return ArrayOp if given a range (jeremyevans)
* Ensure that CHECK constraints are surrounded with parentheses (jeremyevans)
* Ensure Dataset#unbind returned variable hash uses symbol keys (jeremyevans)
* Add pg_array_associations plugin, for associations based on PostgreSQL arrays containing foreign keys (jeremyevans)
* Add Sequel.deep_qualify, for easily doing a deep qualification (jeremyevans)
* Enable use of window functions for limited eager loading by default (jeremyevans)
* Handle offsets correctly when eager loading one_to_one associations (jeremyevans)
* Raise exception for infinite and NaN floats on MySQL (jeremyevans) (#677)
* Make dataset string literalization that requires database connection use dataset's chosen server (jeremyevans)
* Make sure an offset without a limit is handled correctly when eager loading (jeremyevans)
* Allow providing ranges as subscripts for array[start:end] (jeremyevans)
* Prepare one_to_one associations in the prepared_statements_associations plugin (jeremyevans)
* Use prepared statements when the association has :conditions in the prepared_statements_associations plugin (jeremyevans)
* Fix prepared statement usage in some additional cases in the prepared_statements_associations plugin (jeremyevans)
* Hex escape blob input on MySQL (jeremyevans)
* Handle more disconnect errors when using the postgres adapter with the postgres-pr driver (jeremyevans)
* Model#setter_methods private method now accepts 1 argument instead of 2 (jeremyevans)
* Model#set_restricted and #update_restricted private methods now accept 2 arguments instead of 3 (jeremyevans)
* ungraphed on an eager_graph dataset now resets the original row_proc (jeremyevans)
* eager_graph now returns a naked dataset (jeremyevans)
* All behavior deprecated in Sequel 3.48.0 has been removed (jeremyevans)
* Make adapter/integration spec environment variables more consistent (jeremyevans)
* Sequel no longer provides default databases for adapter/integration specs (jeremyevans)
* Model#save no longer calls #_refresh internally (jeremyevans)
* Model#set_all and #update_all can now update the primary key (jeremyevans)
* Integrate many_to_one_pk_lookup and association_autoreloading plugins into main associations plugin (jeremyevans)
* Make defaults_setter plugin operate in a lazy manner (jeremyevans)
* Plugins now extend the model class with ClassMethods before including InstanceMethods (jeremyevans)
* Remove Model::EMPTY_INSTANCE_VARIABLES (jeremyevans)
* Model.raise_on_typecast_failure now defaults to false (jeremyevans)
* Model#_save private method now only takes a single argument (jeremyevans)
* Remove Dataset#columns_without_introspection from columns_introspection extension (jeremyevans)
* Make boolean prepared statement arguments work on sqlite adapter when integer_booleans is true (jeremyevans)
* Make Database#tables and #views reflect search_path on PostgreSQL (jeremyevans)
* SQLite now defaults to true for integer_booleans and false for use_timestamp_timezones (jeremyevans)
* Make the default value for most option hashes a shared frozen hash (jeremyevans)
* Remove Sequel::NotImplemented exception (jeremyevans)
* Automatically alias single expressions in Dataset#get, #select_map, and #select_order_map, to work around possible DoS issues (jeremyevans)
* Use a connection queue instead of stack by default for threaded connection pools (jeremyevans)
* Remove SQL::SQLArray alias for SQL::ValueList (jeremyevans)
* Remove SQL::NoBooleanInputMethods empty module (jeremyevans)
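A minimal sketch of Dataset#from accepting a virtual row block (assuming a PostgreSQL database for the generate_series function):

  # SELECT * FROM generate_series(1, 10)
  DB.from{generate_series(1, 10)}.all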
=== 3.48.0 (2013-06-01)

* Make named_timezones extension usable by databases allowing timezone strings to be given to Database#timezone= (jeremyevans)
* Make Dataset#or just clone if given an empty argument (jeremyevans)
* Deprecate using a mismatched number of placeholders and arguments in a placeholder literal string (jeremyevans)
* Add Dataset#qualify_to and #qualify_to_first_source to sequel_3_dataset_methods extension (jeremyevans)
* Add scissors plugin for Model.update, .delete, and .destroy (jeremyevans)
* Validate against explicit nil values in NOT NULL columns with default values in the auto_validations plugin (jeremyevans)
* Support :not_null=>:presence option for auto_validations plugin, for using presence validation for not null columns (jeremyevans)
* Rename auto_validate_presence_columns to auto_validate_not_null_columns (jeremyevans)
* Make pg_hstore_ops extension integrate with pg_array, pg_hstore, and pg_array_ops extensions (jeremyevans)
* Add Sequel.json_parser_error_class and Sequel.object_to_json to allow the use of alternative JSON implementations (jeremyevans) (#662)
* Deprecate JSON.create_id usage in the json_serializer plugin (jeremyevans)
* Emulate offsets on Microsoft Access using reverse orders and total counts (jeremyevans) (#661)
* Make ado adapter handle disconnecting an already disconnected connection (jeremyevans)
* Deprecate parsing columns for the same table name in multiple schemas on jdbc (jeremyevans)
* Allow association_proxies plugin to accept a block to give user control over which methods are proxied to the dataset (jeremyevans) (#660)
* Deprecate calling Dataset#add_graph_aliases before #graph or #set_graph_aliases (jeremyevans)
* Deprecate Model.add_graph_aliases, .insert_multiple, .query, .set_overrides, .set_defaults, .to_csv, and .paginate (jeremyevans)
* Add guide for ordering code with Sequel (jeremyevans)
* Deprecate Database#transaction :disconnect=>:retry option (jeremyevans)
* Deprecate Model.set, .update, .delete, and .destroy (jeremyevans)
* Deprecate Dataset#set (jeremyevans)
* Add specs for bin/sequel (jeremyevans)
* Make constraint_validations plugin reflect validations by column (jeremyevans)
* Allow for per-model/per-validation type customization of validation options in constraint_validations plugin (jeremyevans)
* Make Database#constraint_validations in the constraint_validations plugin have raw row values (jeremyevans)
* Fix statement freeing in the ibmdb adapter (jeremyevans)
* Make single and class table inheritance plugins handle usage of set_dataset in a subclass (jeremyevans)
* Allow validates_schema_types in validation_helpers plugin to accept an options hash (jeremyevans)
* Deprecate Model.set_primary_key taking multiple arguments (jeremyevans)
* Make auto_validations plugin work with databases that don't support index parsing (jeremyevans)
* Model classes will no longer call Database#schema if it isn't supported (jeremyevans)
* Speed up Model.with_pk and with_pk! class methods (jeremyevans)
* Speed up Dataset#clone when called without an argument (jeremyevans)
* Deprecate Postgres::PGRangeOp#{starts_before,ends_after} (jeremyevans)
* Deprecate global use of null_dataset, pagination, pretty_table, query, select_remove, schema_caching, schema_dumper, and to_dot extensions (jeremyevans)
* Deprecate Dataset.introspect_all_columns in the columns_introspection extension (jeremyevans)
* Add empty_array_ignore_nulls extension for ignoring null handling for IN/NOT with an empty array (jeremyevans)
* Deprecate Sequel.empty_array_handle_nulls accessor (jeremyevans)
* Deprecate Sequel.{k,ts,tsk}_require and Sequel.check_requiring_thread (jeremyevans)
* Discontinue use of manual thread-safe requiring (jeremyevans)
* Deprecate using an unsupported client_min_messages setting on PostgreSQL (jeremyevans)
* Deprecate passing non-hash 4th argument to Dataset#join_table (jeremyevans)
* Deprecate passing non-hash 2nd argument to Dataset#union/intersect/except (jeremyevans)
* Deprecate one_to_many with :one_to_one option raising an error (jeremyevans)
* Automatically typecast hash and array to string for string columns in the looser_typecasting extension (jeremyevans)
* Deprecate automatic typecasting of hash and array to string for string columns (jeremyevans)
* Deprecate :eager_loader and :eager_grapher association options getting passed 3 separate arguments (jeremyevans)
* Deprecate validates_not_string (jeremyevans)
* Deprecate casting via __type suffix for prepared type placeholders in the postgres adapter (jeremyevans)
* Deprecate json_serializer's Model.json_create (jeremyevans)
* Deprecate json_serializer from_json and xml_serializer from_xml :all_columns and :all_associations options (jeremyevans)
* Deprecate passing an unsupported lock mode to Dataset#lock on PostgreSQL (jeremyevans)
* Deprecate Model::InstanceMethods.class_attr_{overridable,reader} (jeremyevans)
* Deprecate all methods in Dataset::PUBLIC_APPEND_METHODS except for literal, quote_identifier, quote_schema_table (jeremyevans)
* Deprecate all methods in Dataset::PRIVATE_APPEND_METHODS (jeremyevans)
* Deprecate Dataset.def_append_methods (jeremyevans)
* Deprecate Dataset#table_ref_append (jeremyevans)
* Deprecate SQL::Expression#to_s taking an argument and returning a literal SQL string (jeremyevans)
* Deprecate creating Model class methods automatically from plugin public dataset methods (jeremyevans)
* Add Sequel.cache_anonymous_models accessor (jeremyevans)
* Deprecate Sequel::Model.cache_anonymous_models accessor (jeremyevans)
* Deprecate identity_map plugin (jeremyevans)
* Deprecate Model#set_values (jeremyevans)
* Deprecate pg_auto_parameterize and pg_statement_cache extensions (jeremyevans)
* Deprecate Model#pk_or_nil (jeremyevans)
* Deprecate Model.print and Model.each_page (jeremyevans)
* Deprecate Dataset checking that the Database implements the identifier mangling methods (jeremyevans)
* Deprecate Database#reset_schema_utility_dataset private method (jeremyevans)
* Speed up Database#fetch, #from, #select, and #get by using a cached dataset (jeremyevans)
* Make sure adapters with subadapters have fully initialized database instances before calling Database.after_initialize (jeremyevans)
* Set identifier mangling methods on Database initialization (jeremyevans)
* Switch internal use of class variables to instance variables (jeremyevans)
* Deprecate passing an options hash to Database#dataset or Dataset.new (jeremyevans)
* Speed up Dataset#clone (jeremyevans)
* Add sequel_3_dataset_methods extension for Dataset#[]=, #insert_multiple, #set, #to_csv, #db=, and #opts= (jeremyevans)
* Deprecate Dataset#[]=, #insert_multiple, #to_csv, #db=, and #opts= (jeremyevans)
* Add blacklist_security plugin for Model.restricted_columns, Model.set_restricted_columns, Model#set_except, and Model#update_except (jeremyevans)
* Deprecate Model.restricted_columns, Model.set_restricted_columns, Model#set_except, and Model#update_except (jeremyevans)
* Deprecate Database#default_schema (jeremyevans)
* Deprecate Sequel::NotImplemented and defining methods that raise it (jeremyevans)
* Add Database#supports_{index_parsing,foreign_key_parsing,table_listing,view_listing}? (jeremyevans)
* Deprecate Sequel.virtual_row_instance_eval accessor (jeremyevans)
* Deprecate sequel_core.rb and sequel_model.rb (jeremyevans)
* Add graph_each extension for Dataset#graph_each (jeremyevans)
* Deprecate Dataset#graph_each (jeremyevans)
* Add set_overrides extension for Dataset#set_overrides and #set_defaults (jeremyevans)
* Deprecate Dataset#set_overrides and #set_defaults (jeremyevans)
* Deprecate Database#query in the informix adapter (jeremyevans)
* Deprecate Database#do as an alias to execute/execute_dui in some adapters (jeremyevans)
* Deprecate modifying initial Dataset hash if the hash wasn't provided as an argument (jeremyevans)
* Make active_model plugin use an errors class with autovivification (jeremyevans)
* Deprecate Model::Errors#[] autovivification (returning empty array when missing) (jeremyevans)
* Add Model#errors_class private method for choosing the errors class on a per-model basis (jeremyevans)
* Add after_initialize plugin for the after_initialize hook (jeremyevans)
* Deprecate Model after_initialize hook (jeremyevans)
* Deprecate passing two arguments to Model.new (jeremyevans)
* Deprecate choosing reciprocal associations with conditions, blocks, or differing primary keys (jeremyevans)
* Deprecate choosing first from ambiguous reciprocal associations (jeremyevans)
* Deprecate validates_type allowing nil values by default (jeremyevans)
* Deprecate the correlated_subquery eager limit strategy (jeremyevans)
* Add hash_aliases extension for making Dataset#select and #from treat hashes as alias specifiers (jeremyevans)
* Deprecate having Dataset#select and #from treat hashes as alias specifiers (jeremyevans)
* Do not automatically convert virtual row block return values to arrays by some Dataset methods (jeremyevans)
* Add filter_having extension for making Dataset#{and,filter,exclude,or} affect the HAVING clause if present (jeremyevans)
* Deprecate Dataset#select_more meaning Dataset#select when called without an existing selection (jeremyevans)
* Deprecate Dataset#and, #or, and #invert raising exceptions for no existing filter (jeremyevans)
* Deprecate Dataset#{and,filter,exclude,or} affecting the HAVING clause (jeremyevans)
* Deprecate passing explicit columns to update as separate arguments to Model#save (jeremyevans)
* Allow specifying explicit columns to update in Model#save via the :columns option (jeremyevans)
* Add ability to set the default for join_table's :qualify option via Dataset#default_join_table_qualification (jeremyevans)
* Deprecate :root=>true meaning :root=>:both in the json_serializer (jeremyevans)
* Deprecate core extension usage if the core_extensions have not been explicitly loaded (jeremyevans)
* Deprecate Symbol#{[],<,<=,>,>=} methods when using the core_extensions (jeremyevans)
* Add ruby18_symbol_extensions extension for the Symbol#{[],<,<=,>,>=} methods (jeremyevans)
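A short sketch of the :columns option to Model#save, which replaces passing update columns as separate arguments (Album is a hypothetical model):

  album = Album[1]
  album.set(:name => 'RF', :copies_sold => 110000)
  # only the name column is included in the UPDATE statement
  album.save(:columns => [:name])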
=== 3.47.0 (2013-05-01)

* Don't fail for missing conversion proc in pg_typecast_on_load plugin (jeremyevans)
* Rename PGRangeOp #starts_before and #ends_after to #ends_before and #starts_after (soupmatt) (#655)
* Add Database#supports_schema_parsing? for checking for schema parsing support (jeremyevans)
* Handle hstore[] types on PostgreSQL if using pg_array and pg_hstore extensions (jeremyevans)
* Don't reset conversion procs when loading pg_* extensions (jeremyevans)
* Handle domain types when parsing the schema on PostgreSQL (jeremyevans)
* Handle domain types in composite types in the pg_row extension (jeremyevans)
* Add Database.extension, for loading an extension into all future databases (jeremyevans)
* Support a :search_path Database option for setting PostgreSQL search_path (jeremyevans)
* Support a :convert_infinite_timestamps Database option in the postgres adapter (jeremyevans)
* Support a :use_iso_date_format Database option in the postgres adapter, for per-Database specific behavior (jeremyevans)
* Add Model.default_set_fields_options, for having a model-wide default setting (jeremyevans)
* Make Model.map, .to_hash, and .to_hash_groups work without a query when using the static_cache plugin (jeremyevans)
* Support :hash_dup and Proc Model inherited instance variable types (jeremyevans)
* Handle aliased tables in the pg_row plugin (jeremyevans)
* Add input_transformer plugin, for automatically transforming input to model column setters (jeremyevans)
* Add auto_validations plugin, for automatically adding not null, type, and unique validations (jeremyevans)
* Add validates_not_null to validation_helpers (jeremyevans)
* Add :setter, :adder, :remover, and :clearer association options for overriding the default modification behavior (jeremyevans)
* Add Database#register_array_type to the pg_array extension, for registering database-specific array types (jeremyevans)
* Speed up fetching model instances when using update_primary_key plugin (jeremyevans)
* In the update_primary_key plugin, if the primary key column changes, clear related associations (jeremyevans)
* Add :allow_missing_migration_files option to migrators, for not raising if migration files are missing (bporterfield) (#652)
* Fix race condition related to prepared_sql for newly prepared statements (jeremyevans) (#651)
* Support :keep_reference=>false Database option for not adding reference to Sequel::DATABASES (jeremyevans)
* Make Postgres::HStoreOp#- explicitly cast a string argument to text, to avoid PostgreSQL assuming it is an hstore (jeremyevans)
* Add validates_schema_types validation for validating column values are instances of an appropriate class (jeremyevans)
* Allow validates_type validation to accept an array of allowable classes (jeremyevans)
* Add Database#schema_type_class for getting the ruby class or classes related to the type symbol (jeremyevans)
* Add error_splitter plugin, for splitting multi-column errors into separate errors per column (jeremyevans)
* Skip validates_unique validation if underlying columns are not valid (jeremyevans)
* Allow Model#modified! to take an optional column argument and mark that column as being modified (jeremyevans)
* Allow Model#modified? to take an optional column argument and check if that column has been modified (jeremyevans)
* Make Model.count not issue a database query if using the static_cache plugin (jeremyevans)
* Handle more corner cases in the many_to_one_pk_lookup plugin (jeremyevans)
* Handle database connection during initialization in jdbc adapter (jeremyevans) (#646)
* Add Database.after_initialize, which takes a block and calls the block with each newly created Database instance (ged) (#641)
* Add a guide detailing PostgreSQL-specific support (jeremyevans)
* Make model plugins deal with frozen instances (jeremyevans)
* Allow freezing of model instances for models without primary keys (jeremyevans)
* Reflect constraint_validations extension :allow_nil=>true setting in the database constraints (jeremyevans)
* Add Plugins.after_set_dataset for easily running code after set_dataset (jeremyevans)
* Add Plugins.inherited_instance_variables for easily setting class instance variables when subclassing (jeremyevans)
* Add Plugins.def_dataset_methods for easily defining class methods that call dataset methods (jeremyevans)
* Make lazy_attributes plugin no longer depend on identity_map plugin (jeremyevans)
* Make Dataset#get with an array of values handle case where no row is returned (jeremyevans)
* Make caching plugin handle memcached API for deletes if ignore_exceptions option is used (rintaun) (#639)
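A sketch of enabling the new auto_validations plugin for all models, deriving NOT NULL, type, and unique validations from the database schema (Album is a hypothetical model):

  Sequel::Model.plugin :auto_validations

  class Album < Sequel::Model; end
  Album.new.valid?  # false if the schema has NOT NULL columns without defaults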
=== 3.46.0 (2013-04-02)

* Add Dataset#cross_apply and Dataset#outer_apply on Microsoft SQL Server (jeremyevans)
* Speed up threaded connection pools when :connection_handling=>:queue is used (jeremyevans)
* Allow external connection pool classes to be loaded automatically (jeremyevans)
* Add Dataset#with_pk! for model datasets, like #with_pk, but raising instead of returning nil (jeremyevans)
* Add Dataset#first!, like #first, but raising a Sequel::NoMatchingRow exception instead of returning nil (jeremyevans)
* Dataset #select_map, #select_order_map, and #get no longer support a plain string inside an array of arguments (jeremyevans)
* Escape ] characters in identifiers on Microsoft SQL Server (jeremyevans)
* Add security guide (jeremyevans)
* Make validates_type handle false values correctly (jeremyevans) (#636)
* Have associations, composition, serialization, and dirty plugins clear caches in some additional cases (jeremyevans) (#635)
* Add alter_table drop_foreign_key method for dropping foreign keys by column names (raxoft, jeremyevans) (#627)
* Allow creation of named column constraints via :*_constraint_name column options (jeremyevans)
* Handle drop_constraint :type=>:primary_key on H2 (jeremyevans)
* Handle infinite dates in the postgres adapter using Database#convert_infinite_timestamps (jeremyevans)
* Make the looser_typecasting extension use looser typecasting for decimal columns as well as integers and floats (jeremyevans)
* Do strict typecasting of decimal columns by default, similar to integer/float typecasting (jeremyevans)
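A sketch of the new bang methods on model datasets (Album is a hypothetical model); both raise Sequel::NoMatchingRow instead of returning nil:

  Album.dataset.with_pk!(1)          # like with_pk, but raises if no row has pk 1
  Album.where(:name => 'RF').first!  # like first, but raises if no row matches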
=== 3.45.0 (2013-03-01)

* Remove bad model typecasting of money type on PostgreSQL (jeremyevans) (#624)
* Use simplecov instead of rcov for coverage testing on 1.9+ (jeremyevans)
* Make the Database#quote_identifier method public (jeremyevans)
* Make PostgreSQL metadata parsing handle tables with the same name in multiple schemas (jeremyevans)
* Switch query extension to use a proxy instead of Object#extend (chanks, jeremyevans)
* Remove Dataset#def_mutation_method instance method (jeremyevans)
* Make foreign key parsing on MySQL not pick up foreign keys in other databases (jeremyevans)
* Allow per-instance overrides of Postgres.force_standard_strings and .client_min_messages (jeremyevans) (#618)
* Add Sequel.tzinfo_disambiguator= to the named_timezones plugin for automatically handling TZInfo::AmbiguousTime exceptions (jeremyevans) (#616)
* Add Dataset#escape_like, for escaping LIKE metacharacters (jeremyevans) (#614)
* The LIKE operators now use an explicit ESCAPE '\' clause for similar behavior across databases (jeremyevans)
* Make Database#tables and #views accept a :qualify option on PostgreSQL to return qualified identifiers (jeremyevans)
* Make json_serializer and xml_serializer plugins secure by default (jeremyevans)
* Address JSON.parse vulnerabilities (jeremyevans)
* Fix Dataset#from_self! to no longer create a self-referential dataset (jeremyevans)
* Use SQLSTATE or database error codes if available instead of regexp parsing for more specific DatabaseErrors (jeremyevans)
* Add unlimited_update plugin to work around MySQL warning in replicated environments (jeremyevans)
* Add the :retry_on and :num_retries transaction options for automatically retrying transactions (jeremyevans)
* Raise serialization failures/deadlocks as Sequel::SerializationFailure exceptions (jeremyevans)
* Support transaction isolation levels on Oracle and DB2 (jeremyevans)
* Support transaction isolation levels when using the JDBC transaction support (jeremyevans)
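A sketch of the new transaction retry options, assuming a database that can raise serialization failures (the accounts table is hypothetical):

  DB.transaction(:isolation => :serializable,
                 :retry_on => Sequel::SerializationFailure,
                 :num_retries => 5) do
    # the block is automatically rerun (up to 5 times) on serialization failure
    DB[:accounts].where(:id => 1).update(:balance => 100)
  end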
=== 3.44.0 (2013-02-04)

* Speedup mysql2 adapter with identifier output method fetch speed by up to 50% (jeremyevans)
* Speedup tinytds adapter fetch speed by up to 60% (jeremyevans)
* Expand columns_introspection extension to consider cached schema values in the database (jeremyevans)
* Expand columns_introspection extension to handle subselects (jeremyevans)
* Have #last and #paged_each for model datasets order by the model's primary key by default (jeremyevans)
* Improve emulated offset support to handle subqueries (jeremyevans)
* Remove use of Object#extend from the eager_each plugin (jeremyevans)
* Add support for temporary views on SQLite and PostgreSQL via the :temp option to create_view (chanks, jeremyevans)
* Emulate Database#create_or_replace_view if not supported directly (jeremyevans)
* Add Dataset#paged_each, for processing entire datasets without keeping all rows in memory (jeremyevans)
* Add Sequel::ConstraintViolation exception class and subclasses for easier exception handling (jeremyevans)
* Fix use of identity_map plugin with many_to_many associations with right composite keys (chanks) (#603)
* Increase virtual row performance by using a shared VirtualRow instance (jeremyevans)
* Allow the :dataset association option to accept the association reflection as an argument (jeremyevans)
* Improve association method performance by caching intermediate dataset (jeremyevans)
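A minimal sketch of Dataset#paged_each, which fetches rows in limited batches so the whole result set is never held in memory (the table name and the :rows_per_fetch option value are assumptions):

  DB[:big_table].order(:id).paged_each(:rows_per_fetch => 500) do |row|
    # rows are yielded one at a time, fetched 500 per query
  end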
=== 3.43.0 (2013-01-08)

* Move the #meta_def support for Database, Dataset, and Model to the meta_def extension (jeremyevans)
* Fix Database#copy_into on jdbc/postgres when an exception is raised (jeremyevans)
* Add core_refinements extension, providing refinement versions of Sequel's core extensions (jeremyevans)
* Make Database#copy_into raise a DatabaseError if the database signals an error in the postgres adapter (jeremyevans)
* Define respond_to_missing? where method_missing is defined and the object supports respond_to? (jeremyevans)
* Allow lambda procs with 0 arity as virtual row blocks on ruby 1.9 (jeremyevans)
* Handle schema-qualified row_types in the pg_array integration in the pg_row extension (jeremyevans) (#595)
* Support default_schema when resetting primary key sequences on PostgreSQL (jeremyevans) (#596)
* Allow treating tinyint(1) unsigned columns as booleans in the mysql adapters (jeremyevans)
* Support the jdbc-hsqldb gem in the jdbc adapter, since it has been updated to 2.2.9 (jeremyevans)
* Work with new jdbc-* gems that require manual driver loading (kares) (#598)
* Cast blobs correctly on DB2 when use_clob_as_blob is false (mluu, jeremyevans) (#594)
* Add date_arithmetic extension for database-independent date calculations (jeremyevans)
* Make Database#schema handle [host.]database.schema.table qualified tables on Microsoft SQL Server (jeremyevans)
* Add Dataset#split_qualifiers helper method for splitting a qualifier identifier into array of strings (jeremyevans)
* Make Database#schema_and_table always return strings for the schema and table (jeremyevans)
* Skip stripping of blob columns in the string_stripper plugin (jeremyevans) (#593)
* Allow Dataset#get to take an array to return multiple values, similar to map/select_map (jeremyevans)
* Default :prefetch_rows to 100 in the Oracle adapter (andrewhr) (#592)

=== 3.42.0 (2012-12-03)

* If an exception occurs while committing a transaction, attempt to rollback (jeremyevans)
* Support setting default string column sizes on a per-Database basis via default_string_column_size (jeremyevans)
* Reset Model.instance_dataset when extending the model's dataset (jeremyevans)
* Make the force_encoding plugin work with frozen strings (jeremyevans)
* Add Database#do on PostgreSQL for using the DO anonymous code block execution statement (jeremyevans)
* Remove Model.dataset_methods (jeremyevans)
* Allow subset to be called inside a dataset_module block (jeremyevans)
* Make Dataset#avg, #interval, #min, #max, #range, and #sum accept virtual row blocks (jeremyevans)
* Make Dataset#count use a subselect when the dataset has an offset without a limit (jeremyevans) (#587)
* Dump deferrable status of unique indexes on PostgreSQL (radford) (#583)
* Extend deferrable constraint support to all types of constraints, not just foreign keys (radford, jeremyevans) (#583)
* Support Database#copy_table and #copy_into on jdbc/postgres (bdon) (#580)
* Make Dataset#update not use a limit (TOP) on Microsoft SQL Server 2000 (jeremyevans) (#578)
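A quick sketch of the aggregate methods accepting virtual row blocks (the items table and its columns are hypothetical):

  DB[:items].sum{price * quantity}  # SELECT sum(price * quantity) FROM items
  DB[:items].min{length(name)}      # SELECT min(length(name)) FROM items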
=== 3.41.0 (2012-11-01)

* Add bin/sequel usage guide (jeremyevans)
* Make Dataset#reverse and #reverse_order accept virtual row blocks (jeremyevans)
* Add Sequel.delay for generic delayed evaluation (jeremyevans)
* Make uniqueness validations correctly handle nil values (jeremyevans)
* Support :unlogged option for create_table on PostgreSQL (JonathanTron) (#575)
* Add ConnectionPool#pool_type to get the type of connection pool in use (jeremyevans)
* Explicitly mark primary keys as NOT NULL on SQLite (jeremyevans)
* Add support for renaming primary key columns on MySQL (jeremyevans)
* Add connection_validator extension for automatically checking connections and transparently handling disconnects (jeremyevans)
* Add Database#valid_connection? for checking whether a given connection is valid (jeremyevans)
* Make dataset.limit(nil, nil) reset offset as well as limit (jeremyevans) (#571)
* Support IMMEDIATE/EXCLUSIVE/DEFERRED transaction modes on SQLite (Eric Wong)
* Major change in the Database <-> ConnectionPool interface (jeremyevans)
* Make touch plugin handle touching of many_*_many associations (jeremyevans)
* Make single_table_inheritance plugin handle non-bijective mappings (hannesg) (#567)
* Support foreign key parsing on MSSQL (munkyboy) (#564)
* Include SQL::AliasMethods in most pg_* extension objects (treydempsey, jeremyevans) (#563)
* Handle failure to create a prepared statement better in the postgres, mysql, and mysql2 adapters (jeremyevans) (#560)
* Treat clob columns as strings instead of blobs (jeremyevans)

=== 3.40.0 (2012-09-26)

* Add a cubrid adapter for accessing CUBRID databases via the cubrid gem (jeremyevans)
* Add a jdbc/cubrid adapter for accessing CUBRID databases via JDBC on JRuby (jeremyevans)
* Return OCI8::CLOB values as ruby Strings in the Oracle adapter (jeremyevans)
* Use clob for String :text=>true types on Oracle, DB2, HSQLDB, and Derby (jeremyevans) (#555)
* Allow marshalling of Sequel::Postgres::HStore (jeremyevans) (#556)
* Quote channel identifier names when using LISTEN/NOTIFY on PostgreSQL (jeremyevans)
* Handle nil values when formatting bound variable arguments in the pg_row extension (jeremyevans) (#548)
* Handle nil values when parsing composite types in the pg_row extension (jeremyevans) (#548)
* Add :disconnect=>:retry option to Database#transaction, for automatically retrying the transaction on disconnect (jeremyevans)
* Greatly improved support on Microsoft Access (jeremyevans)
* Support Database#{schema,tables,views,indexes,foreign_key_list} when using ado/access adapter (ericgj) (#545, #546)
* Add ado/access adapter for accessing Microsoft Access via the ado adapter (jeremyevans)
* Combine disconnect error detection for mysql and mysql2 adapters (jeremyevans)
* Update the association_pks plugin to handle composite primary keys (chanks, jeremyevans) (#544)
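A sketch of the :disconnect=>:retry transaction option; the block should be safe to rerun, since it is retried if the connection is lost (the events table is hypothetical):

  DB.transaction(:disconnect => :retry) do
    DB[:events].insert(:happened_at => Time.now)
  end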
=== 3.39.0 (2012-09-01)

* Fix defaults_setter to set false default values (jeremyevans)
* Fix serial sequence query in Database#primary_key_sequence on PostgreSQL (jeremyevans) (#538)
* Add Database#copy_into when using postgres adapter with pg driver, for very fast inserts into tables (jeremyevans)
* Combine multiple alter_table operations into a single query where possible on MySQL and PostgreSQL (jeremyevans)
* Handle sets of alter_table operations on MySQL and MSSQL where later operations depend on earlier ones (jeremyevans)
* Add constraint_validations plugin for automatic validations of constraints defined by the extension (jeremyevans)
* Add constraint_validations extension for defining database constraints similar to validations (jeremyevans)
* Add Database#supports_regexp? for checking for regular expression support (jeremyevans)
* Add Sequel.trim for cross platform trim function (jeremyevans)
* Add Sequel.char_length for cross platform char_length function (jeremyevans)
* Fix caching of MySQL server version (hannesg) (#536)
* Allow overriding the convert_tinyint_to_bool setting on a per-Dataset basis in the mysql and mysql2 adapters (jeremyevans)
* Make ValidationFailed and HookFailed exceptions have model method that returns the related model (jeremyevans)
* Automatically wrap array arguments to most PGArrayOp methods in PGArrays (jeremyevans)
* Add set_column_not_null to alter table generator for marking a column as not null (jeremyevans)
* Default second argument of set_column_allow_null to true in alter table generator (jeremyevans)
* Allow Dataset#count to take an argument or virtual row block (jeremyevans)
* Attempt to recognize CURRENT_{DATE,TIMESTAMP} defaults and return them as Sequel::CURRENT_{DATE,TIMESTAMP} (jeremyevans)
* Make dataset.insert(model) assume a single column if model uses the pg_row plugin (jeremyevans)
* No longer handle model instances in plain (non-model) datasets when inserting (jeremyevans)
* Use subselects for model classes as tables in join methods in model datasets if the model's dataset isn't a simple select (jeremyevans)
* No longer handle model classes as tables in join/graph methods in plain (non-model) datasets (jeremyevans)
* Make Time->DateTime and DateTime->Time typecasts retain fractional seconds on ruby 1.8 (jeremyevans) (#531)
* Add bin/sequel -c support, for running a code string instead of using an IRB prompt (jeremyevans)
* Allow subclasses plugin to take a block, which is called with each subclass created (jeremyevans)
* Add :where option to validates_unique, for custom uniqueness filters (jeremyevans)
* Add :connection_handling=>:disconnect option for threaded connection pools (jeremyevans)
* Add Postgres::PGRowOp#* for referencing the members of the composite type as separate columns (jeremyevans)
* Make identity_map plugin work with models lacking a primary key (jeremyevans)
* Recognize MySQL set type and default value (jeremyevans) (#529)
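A sketch of the new cross-platform string function helpers (the items table and name column are hypothetical):

  DB[:items].select_map(Sequel.char_length(:name))   # char_length or length, per database
  DB[:items].where(Sequel.trim(:name) => '').count   # rows whose name is all whitespace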
=== 3.38.0 (2012-08-01)

* Sequel now recognizes the double(x, y) and double(x, y) unsigned MySQL types (Slike9, jeremyevans) (#528)
* The swift subadapters now require swift-db-* instead of swift itself (deepfryed, jeremyevans) (#526)
* Add :textsize option to tinytds adapter to override the default TEXTSIZE (jeremyevans, wardrop) (#525)
* Support an output identifier method in the swift adapter (jeremyevans)
* Add Model#to_hash as an alias to Model#values (jeremyevans)
* When loading multiple pg_* extensions via Database#extension, only reset the conversion procs once (jeremyevans)
* Don't allow model typecasting from string to postgres array, hstore, or composite types (jeremyevans)
* Add pg_typecast_on_load plugin for converting advanced PostgreSQL types on load in the {jdbc,do,swift}/postgres adapters (jeremyevans)
* Make all adapters that connect to PostgreSQL store type conversion procs (jeremyevans)
* Add type oid to column schema on PostgreSQL (jeremyevans)
* Add pg_row plugin, for using Sequel::Model classes to represent PostgreSQL row-valued/composite types (jeremyevans)
* Add pg_row_ops extension for DSL support for PostgreSQL row-valued/composite types (jeremyevans)
* Add pg_row extension for dealing with PostgreSQL row-valued/composite types (jeremyevans)
* Allow custom registered array types in the pg_array extension to be Database instance specific (jeremyevans)
* Remove Sequel::SQL::IdentifierMethods (jeremyevans)
* Don't have the schema_dumper extension produce code that relies on the core_extensions (jeremyevans)
* Fix dropping of columns with constraints on Microsoft SQL Server (mluu, jeremyevans) (#515, #518)
* Don't have pg_* extensions add methods to core classes unless the core_extensions extension is loaded (jeremyevans)
* Use real boolean literals on derby 10.7+ (jeremyevans, matthauck) (#514)
* Work around JRuby 1.6 ruby 1.9 mode bug in Time#nsec for Time prepared statement arguments on jdbc (jeremyevans)
* Handle blob prepared statement arguments on jdbc/db2 and jdbc/oracle (jeremyevans)
* Handle blob values in the swift adapter (jeremyevans)
* Better handle nil prepared statement arguments on jdbc (jeremyevans) (#513)
* Make SQL::Blob objects handle as, cast, and lit methods even if the core extensions are not loaded (jeremyevans)
* Make #* with no arguments produce a ColumnAll for Identifier and QualifiedIdentifier (jeremyevans)
* Sequel.expr(:symbol) now returns Identifier, QualifiedIdentifier, or AliasedExpression instead of Wrapper (jeremyevans)
* Treat clob columns as string instead of blob on Derby (jeremyevans) (#509)
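A sketch of the Sequel.expr and #* changes (the albums/artists tables are hypothetical):

  Sequel.expr(:albums__name)  # SQL::QualifiedIdentifier: albums.name
  Sequel.expr(:albums).*      # SQL::ColumnAll: albums.*

  DB[:albums].join(:artists, :id => :artist_id).select(Sequel.expr(:albums).*)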
=== 3.37.0 (2012-07-02)

* Allow specifying eager_graph alias base on a per-call basis using an AliasedExpression (jeremyevans)
* Allow bin/sequel to respect multiple -l options for logging to multiple files (jeremyevans)
* Correctly handle cases where SCOPE_IDENTITY is nil in the odbc/mssql adapter (stnoonan, jeremyevans)
* Add pg_interval extension, for returning interval types as ActiveSupport::Duration instances (jeremyevans)
* Save a new one_to_one associated object once instead of twice in the nested_attributes plugin (jeremyevans)
* Don't add unnecessary filter condition when passing a new object to a one_to_one setter method (jeremyevans)
* Differentiate between column references and method references in many_through_many associations (jeremyevans)
* Use :qualify=>:deep option when joining tables in model association datasets (jeremyevans)
* Support :qualify=>:deep option to Dataset#join_table to qualify subexpressions in the expression tree (jeremyevans)
* Support :qualify=>false option to Dataset#join_table to not automatically qualify keys/values (jeremyevans)
* Make filter by associations support use column references and method references correctly (jeremyevans)
* Call super in list plugin before_create (jeremyevans) (#504)
* Do not automatically cast String to text in pg_auto_parameterize extension (jeremyevans)
* Support alter_table validate_constraint on PostgreSQL for validating constraints previously declared with NOT VALID (jeremyevans)
* Support :not_valid option when adding foreign key constraints on PostgreSQL (jeremyevans)
* Support exclusion constraints on PostgreSQL (jeremyevans)
* Allow for overriding the create/alter table generators used per Database object (jeremyevans)
* Make casting to Date/(Time/DateTime) use date/datetime functions on SQLite (jeremyevans)
* Add pg_range_ops extension for DSL support for PostgreSQL range operators and functions (jeremyevans)
* The json library is now required when running the plugin/extension specs (jeremyevans)
* Use change migrations instead of up/down migrations in the schema_dumper (jeremyevans)
* Dump unsigned integer columns with a check >= 0 constraint in the schema_dumper (stu314)
* Switch the :key_hash entry to the association :eager_loader option to use the method symbol(s) instead of the column symbol(s) (jeremyevans)
* Add :id_map entry to the hash passed to the association :eager_loader option, for easier custom eager loading (jeremyevans)
* Fix dumping of non-integer foreign key columns in the schema_dumper (jeremyevans) (#502)
* Add nested_attributes :fields option to be a proc that is called with the associated object (chanks) (#498)
* Add split_array_nil extension, for compiling :col=>[1, nil] to col IN (1) OR col IS NULL (jeremyevans)
* Add Database#extension and Dataset#extension for loading extension modules into objects automatically (jeremyevans)
* Respect an existing dataset limit when updating on Microsoft SQL Server (jeremyevans)
* Add pg_range extension, for dealing with PostgreSQL 9.2+ range types (jeremyevans)
* Make pg_array extension convert array members when typecasting Array to PGArray (jeremyevans)
* Make jdbc/postgres adapter convert array type elements (e.g. date[] arrays are returned as arrays of Date instances) (jeremyevans)
* Make the pg_inet extension handle inet[]/cidr[]/macaddr[] types when used with the pg_array extension (jeremyevans)
* Make the pg_json extension handle json[] type when used with the pg_array extension (jeremyevans)
* Fix schema parsing of h2 clob types (jeremyevans)
* Make the pg_array extension handle array types for scalar types handled by the native postgres adapter (jeremyevans)
* Generalize handling of array types in the pg_array extension, allowing easy support of custom array types (jeremyevans)
* Remove type conversion of int2vector and money types on PostgreSQL, since previous conversions were wrong (jeremyevans)
* Add eval_inspect extension, which makes Sequel::SQL::Expression#inspect attempt to return a string suitable for eval (jeremyevans)
* When emulating offset with ROW_NUMBER, default to ordering by all columns if no specific order is given (stnoonan, jeremyevans) (#490)
* Work around JRuby 1.6 ruby 1.9 mode bug in Time -> SQLTime conversion (jeremyevans)
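A sketch of the new per-object extension loading (the choice of extensions is illustrative):

  DB.extension :pg_array, :pg_range            # load extensions into this Database
  ds = DB[:items].extension(:split_array_nil)  # load an extension into one dataset
  ds.where(:col => [1, nil])                   # col IN (1) OR col IS NULL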
=== 3.36.1 (2012-06-01)

* Fix jdbc adapter when DriverManager#getConnection fails (aportnov) (#488)

=== 3.36.0 (2012-06-01)

* Use Bignum generic type when dumping unsigned integer types that could potentially overflow 32-bit signed integer values (stu314)
* Support :transform option in the nested_attributes plugin, for automatically preprocessing input hashes (chanks)
* Support :unmatched_pk option in the nested_attributes plugin, can be set to :create for associated objects with natural keys (chanks)
* Support composite primary keys in the nested_attributes plugin (chanks)
* Allow Model#from_json in the json_serializer plugin to use set_fields if a :fields option is given (jeremyevans)
* Support :using option to set_column_type on PostgreSQL, to force a specific conversion from the old value to the new value (jeremyevans)
* Drop indexes in the reverse order that they were added in the schema dumper (jeremyevans)
* Add :index_names option to schema dumper method, can be set to false or :namespace (stu314, jeremyevans)
* Add Database#global_index_namespace? for checking if index namespace is global or per table (jeremyevans)
* Fix typecasting of time columns on jdbc/postgres, before could be off by a millisecond (jeremyevans)
* Add document explaining Sequel's object model (jeremyevans)
* Attempt to detect more disconnect errors in the mysql2 adapter (jeremyevans)
* Add is_current? and check_current to the migrators, for checking/raising if there are unapplied migrations (pvh, jeremyevans) (#487)
* Add a jdbc subadapter for the Progress database (Michael Gliwinski, jeremyevans)
* Add pg_inet extension, for working with PostgreSQL inet and cidr types (jeremyevans)
* Fix bug in model column setters when passing an object that raises an exception for ==('') (jeremyevans)
* Add eager_each plugin, which makes each on an eagerly loaded dataset do eager loading (jeremyevans)
* Fix bugs when parsing foreign keys for tables with explicit schema on PostgreSQL (jeremyevans)
* Remove Database#case_sensitive_like on SQLite (jeremyevans)
* Remove Database#single_value in the native sqlite adapter (jeremyevans)
* Make Dataset#get work with nil and false arguments (jeremyevans)
* Make json_serializer plugin respect :root=>:collection and :root=>:instance options (jeremyevans)
* Support savepoints in prepared transactions on MySQL 5.5.23+ (jeremyevans)
* Add pg_json extension, for working with PostgreSQL 9.2's new json type (jeremyevans)
* In the optimistic locking plugin, make refresh and save after a failed save work correctly (jeremyevans)
* Support partial indexes on Microsoft SQL Server 2008 (jeremyevans)
* Make Database#call pass blocks (jeremyevans)
* Support :each when preparing statements, useful for iterating over large datasets (jeremyevans)
* Support :if_exists and :cascade options when dropping indexes on PostgreSQL (jeremyevans)
* Support :concurrently option when adding and dropping indexes on PostgreSQL (jeremyevans)
* Make Database#transaction on PostgreSQL recognize :synchronous, :read_only, and :deferrable options (jeremyevans)
* Support :sql_mode option when connecting to MySQL (jeremyevans)
* Apply :timeout MySQL connection setting on do, jdbc, and swift adapters (jeremyevans)
* Don't set Sequel::Model.db automatically when creating an anonymous class with an associated database object (jeremyevans)
* Add :connection_handling=>:queue option to the threaded connection pools, may reduce chance of stale connections (jeremyevans) (#481)
* Handle JRuby 1.7 exception handling changes when connecting in the jdbc adapter (jeremyevans) (#477)
* Make *_to_one association setters be noops if you pass a value that is the same as the cached value (jeremyevans)
* Make Model#refresh return self when using dirty plugin (jeremyevans)
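A sketch of the new migrator checks (the migration directory path is hypothetical):

  Sequel.extension :migration
  Sequel::Migrator.is_current?(DB, 'db/migrations')    # => true or false
  Sequel::Migrator.check_current(DB, 'db/migrations')  # raises if migrations are unapplied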
=== 3.35.0 (2012-05-01)

* Correctly handle parsing schema for tables in other databases on MySQL (jeremyevans)
* Add DSL support for the modulus operator (%), similar to the bitwise operators (jeremyevans)
* Fix possible thread-safety issues on non-GVL ruby implementations (jeremyevans)
* Allow truncation of multiple tables at the same time on PostgreSQL (jeremyevans)
* Allow truncate to take :cascade, :only, and :restart options on PostgreSQL (hgimenez, jeremyevans)
* Allow json and xml serializers to support :array option in class to_json method to serialize existing array of model instances (jeremyevans)
* Add dirty plugin, which saves the initial value of the column when the value is changed (jeremyevans)
* create_table now supports an :as option to create a table directly from the results of a query (jeremyevans)
* The :index option when creating columns in the schema generator can now be a hash of options passed to index (jeremyevans)
* Parsing the default column values in the oracle adapter no longer requires superuser privileges (Jason Hines)
* Add Database#cache_schema to allow schema caching to be turned off, useful for development modes where models are reloaded (jeremyevans)
* Correctly handle errors that occur when rolling back transactions (jeremyevans)
* Recognize identity type in the schema dumper (jeremyevans) (#468)
* Don't assign instance variables to Java objects, for future JRuby 2.0 support (jeremyevans) (#466)
* Use date and timestamp formats that are multilanguage and not DATEFORMAT dependent on Microsoft SQL Server (jeremyevans)
* Add Database#log_exception, which logs when a query raises an exception, for easier overriding (jeremyevans) (#465)
* Make the migrators only use transactions by default if the database supports transactional DDL (jeremyevans)
* Add Database#supports_transactional_ddl? for checking if DDL statements can be rolled back in transactions (jeremyevans)
* Don't use auto parameterization when using cursors in the pg_auto_parameterize extension (jeremyevans) (#463)
* No longer escape backslashes in strings by default, fixes doubled backslashes on some adapters (jeremyevans)
* Escape backslash-carriage return-line feed in strings on Microsoft SQL Server (mluu, jeremyevans) (#462, #461)
* Remove Array#all_two_pairs? (jeremyevans)
* Remove Dataset#disable_insert_returning on PostgreSQL (jeremyevans)
* Remove support for PostgreSQL <8.2 (jeremyevans)
* Remove support for Ruby <1.8.7 (jeremyevans)
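A sketch of create_table's new :as option plus the PostgreSQL truncate options (table and column names hypothetical):

  # CREATE TABLE recent_items AS SELECT * FROM items WHERE ...
  DB.create_table(:recent_items, :as => DB[:items].where{updated_at > Date.today - 7})

  DB[:items].truncate(:cascade => true, :restart => true)  # PostgreSQL only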
=== 3.34.1 (2012-04-02)

* Fix bug in optimization of primary key lookup (jeremyevans) (#460)

=== 3.34.0 (2012-04-02)

* Fix connection failures when connecting to PostgreSQL with newer versions of swift (jeremyevans)
* Fix using a bound variable for a limit in the ibmdb adapter on ruby 1.9 (jeremyevans)
* primary_key :column, :type=>Bignum now works correctly on H2 (jeremyevans)
* Add query_literals extension for treating regular strings like literal strings in select, group, and order methods (jeremyevans)
* Actually use RETURNING for deletes/updates on PostgreSQL 8.2-9.0 (jeremyevans)
* You can now require 'sequel/no_core_ext' to load Sequel without the core extensions (jeremyevans)
* The core extensions have now been made a real Sequel extension (still loaded by default) (jeremyevans)
* VirtualRow#` has been added for creating literal strings (jeremyevans)
* VirtualRow instances now have operator methods defined {+,-,*,/,&,|,~,>,<,>=,<=} (jeremyevans)
* Array#all_two_pairs? is now deprecated and will be removed after 3.34.0 is released (jeremyevans)
* All of Sequel's core extensions now have equivalent methods defined on the Sequel module (jeremyevans)
* Add Sequel.core_extensions? for checking if the core extensions are enabled (jeremyevans)
* Increase speed of Model#this by about 85% (jeremyevans)
* Increase speed of Model#delete and #destroy by about 75% for models with simple datasets (jeremyevans)
* Make nested_attributes plugin work when destroying/removing associated objects when strict_param_setting is true (r-stu31) (#455)
* Dataset#disable_insert_returning on PostgreSQL is now deprecated and will be removed after 3.34.0 is released (jeremyevans)
* Double speed of Model[pk] for models with simple datasets (most models) (jeremyevans)
* Support for ruby <1.8.7 and PostgreSQL <8.2 is now deprecated and will be removed after 3.34.0 is released (jeremyevans)
* Add select_remove extension which adds Dataset#select_remove for removing columns/expressions from a dataset selection (jeremyevans)
* Add static_cache plugin, for statically caching all model instances, useful for model tables that don't change (jeremyevans)
* Add Model#freeze implementation to get a working frozen model object (jeremyevans)
* Add many_to_one_pk_lookup plugin, for using a simple primary key lookup for many_to_one associations (great with caching) (jeremyevans)
* Use bigint type instead of integer for Bignum generic type on SQLite, except for auto incrementing primary keys (jeremyevans)
* Add Database#dump_foreign_key_migration for just dumping foreign key constraints to the schema dumper extension (jeremyevans)
* Dump foreign key constraints by default when using the schema dumper extension (jeremyevans)
* Don't raise an error when no indexes exist for a table when calling Database#indexes on the jdbc/sqlite adapter (jeremyevans)
* Copy composite foreign key constraints when emulating alter_table on SQLite (jeremyevans)
* Add Database#foreign_key_list for getting foreign key metadata for a given table on SQLite, MySQL, and PostgreSQL (jeremyevans)
* Add Dataset#to_hash_groups and #select_hash_groups for getting a hash with arrays of matching values (jeremyevans)
* Model#set_fields and #update_fields now respect :missing=>:skip and :missing=>:raise options for handling missing values (jeremyevans)
* The :on_update and :on_delete entries for foreign key can now take strings, which are used literally (jeremyevans)
* Add Database#convert_infinite_timestamps to the postgres adapter, can be set to :nil, :string, or :float (jeremyevans) (#454)
* Add Database#create_join_table and #drop_join_table for easily creating many-to-many join tables (jeremyevans)
* Fix Dataset#group_rollup/#group_cube on Microsoft SQL Server 2005 (jeremyevans)
* Add Dataset#explain on MySQL (jeremyevans)
* Change formatting and return value of Dataset#explain on SQLite (jeremyevans)
* Recognize unsigned tinyint types in the schema dumper (jeremyevans)
* Add null_dataset extension, for creating a dataset that never issues a database query (jeremyevans)
* Database#uri and #url now return nil if a connection string was not used when connecting (jeremyevans) (#453)
* Add schema_caching extension, to speed up loading a large number of models by loading cached schema information from a file (jeremyevans)
* Add Dataset#multi_replace on MySQL, allowing you to REPLACE multiple rows in a single query (danielb2) (#452)
* Double speed of Model#new with empty hash, and quadruple speed of Model#set with empty hash (jeremyevans)
* Allow SQL::QualifiedIdentifier objects to contain arbitrary Sequel expressions (jeremyevans)
* Add pg_hstore_ops extension, for easily calling PostgreSQL hstore functions and operators (jeremyevans)
* Add Sequel::SQL::Wrapper class for easier dealing with wrapper objects (jeremyevans)
* Add pg_hstore extension, for dealing with the PostgreSQL hstore (key/value table) type (jeremyevans)
* Add Database#type_supported? method on PostgreSQL for checking if the given type symbol/string is supported (jeremyevans)
* Convert Java::OrgPostgresqlUtil::PGobject instances to ruby strings in jdbc/postgres type conversion (jeremyevans)
* Allow PlaceholderLiteralString objects to store placeholder string as an array for improved performance (jeremyevans)
* Work around ruby-pg bugs 111 (Time/DateTime fractional seconds) and 112 ("\0" in bytea) in bound variable arguments (jeremyevans) (#450)
* Handle fractional seconds correctly for time type on jdbc/postgres (jeremyevans)
* Add pg_array_ops extension, for easily calling PostgreSQL array functions and operators (jeremyevans)
* Add SQL::Subscript#[] for using nested subscripts (accessing member of multi-dimensional array) (jeremyevans)
* Add Model.cache_anonymous_models accessor so you can disable the caching of classes created by Sequel::Model() (jeremyevans)
* Convert PostgreSQL JDBC arrays to Ruby arrays in the jdbc/postgres adapter (jeremyevans)
* The typecast_on_load extension now works correctly when saving new model objects when insert_select is enabled (jeremyevans)
* Add pg_array extension, for dealing with string and numeric PostgreSQL arrays (jeremyevans)
* Add Database#reset_conversion_procs to the postgres adapter, for use with extensions that modify default conversion procs (jeremyevans)
* Escape table and schema names when getting primary key or sequence information on PostgreSQL (jeremyevans)
* Escape identifiers when quoting on MySQL and SQLite (jeremyevans)
* Add Database#supports_drop_table_if_exists? for checking if DROP TABLE supports IF EXISTS (jeremyevans)
* Add Database#drop_table? for dropping a table if it already exists (jeremyevans)
* Log full SQL string by default for prepared statements created automatically by model prepared_statements* plugins (jeremyevans)
* Add ability for prepared statements to log full SQL string (jeremyevans)
* Add pg_statement_cache extension, for automatically preparing queries when using postgres adapter with pg driver (jeremyevans)
* Add pg_auto_parameterize extension, for automatically parameterizing queries when using postgres adapter with pg driver (jeremyevans)
* Add ConnectionPool#disconnection_proc= method, to modify disconnection_proc after the pool has been created (jeremyevans)
* Add ConnectionPool#after_connect= method, to modify after_connect proc after the pool has been created (jeremyevans)
* Add ConnectionPool#all_connections method, which yields all available connections in the pool (jeremyevans)

=== 3.33.0 (2012-03-01)

* Add ability to force or disable transactions completely in the migrators using the :use_transactions option (jeremyevans)
* Add ability to turn off transactions for migrations by calling no_transaction inside the Sequel.migration block (jeremyevans)
* Allow specifically choosing which migrator to use via TimestampMigrator.apply or IntegerMigrator.apply (jeremyevans)
* Add arbitrary_servers extension to allow the use of arbitrary servers/shards by providing a hash of options as the server (jeremyevans)
* Add server_block extension to scope database access inside the block to a specific default server/shard (jeremyevans)
* Respect :collate column option on MySQL (jeremyevans) (#445)
* Use Mysql2::Client::FOUND_ROWS to get accurate number of rows matched in the mysql2 adapter (jeremyevans)
* Use Mysql#info to get accurate number of rows matched in the mysql adapter (jeremyevans)
* Make mock adapter with specific SQL dialect use appropriate defaults for quoting identifiers (jeremyevans)
* Make list plugin automatically set position field value on creation if not already set (jeremyevans)
* Add Database#integer_booleans setting on SQLite to store booleans as integers (jeremyevans)
* Typecast columns stored as integers/floats in the SQLite adapter (jeremyevans)
* In the instance_hooks plugin, (before|after)_*_hook instance methods now return self (jeremyevans)
* Handle NaN, Infinity, and -Infinity floats on PostgreSQL (kf8a, jeremyevans) (#444)
* Support an :sslmode option when using the postgres adapter with the pg driver (jeremyevans)
* Add Database#create_schema and #drop_schema to the shared postgres adapter (tkellen, jeremyevans) (#440)
* Add Database#supports_savepoints_in_prepared_transactions?, false on MySQL >=5.5.12 (jeremyevans) (#437)
* Support an identifier output method in the mysql2 adapter (jeremyevans)
* Make foreign key creation work on MySQL with InnoDB engine without specifying :key option (jeremyevans)
* Allow disabling use of sudo with SUDO='' when running the rake install/uninstall tasks (jeremyevans) (#433)
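A sketch of a migration that opts out of the wrapping transaction, e.g. so PostgreSQL's :concurrently index option can be used (the items table is hypothetical):

  Sequel.migration do
    no_transaction
    up{add_index :items, :name, :concurrently => true}
    down{drop_index :items, :name, :concurrently => true}
  end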

=== 3.32.0 (2012-02-01)

* Make serialization_modification_detection plugin work correctly with new objects and after saving existing objects (jeremyevans) (#432)
* Make refreshes after model creation clear the deserialized values in the serialization plugin (jeremyevans)
* Add Dataset#update_ignore on MySQL, for using UPDATE IGNORE in queries (danielb2) (#429)
* Allow select_map/select_order_map to take both a column argument and a block (jeremyevans)
* Fix virtual row block handling in select_map/select_order_map if block returns an array (jeremyevans) (#428)
* Add Sequel.empty_array_handle_nulls setting, can be set to false for possible better performance on some databases (jeremyevans)
* Change exclude(:b=>[]) to not return rows where b is NULL (jeremyevans) (#427)
* Support ActiveModel 3.2 in the active_model plugin, by adding support for to_partial_path (jeremyevans)
* Fix metadata methods (e.g. tables) on Oracle when custom identifier input methods are used (jeremyevans)
* Fix Database#indexes on DB2 (jeremyevans)
* Make DateTime/Time columns with Sequel::CURRENT_TIMESTAMP default values use timestamp column on MySQL (jeremyevans)
* Wrap column default values in extra parens on SQLite, fixes some cases (jeremyevans)
* Make Database#indexes not include primary key indexes on Derby, HSQLDB, Oracle, and DB2 using the jdbc adapter (jeremyevans)
* Support Database#indexes in shared MSSQL adapter (jeremyevans)
* Support :include option when creating indexes on MSSQL, for storing column values in the index (crawlik) (#426)
* Make set_column_type not modify defaults and NULL/NOT NULL setting on MSSQL, H2, and SQLite (jeremyevans)
* Qualify identifiers when filtering/excluding by associations (jeremyevans)
* Make table_exists? better handle tables where you don't have permissions for all columns (jeremyevans) (#422)
* Using new association options, support associations based on columns that clash with ruby method names (jeremyevans) (#417)
* Add use_after_commit_rollback setting to models, can be turned off to allow model usage with prepared transactions (jeremyevans)
* Fix alter table emulation on SQLite when foreign keys reference the table being altered (jeremyevans)
* Fix progress shared adapter, broken since the dataset literalization refactoring (jeremyevans) (#414)
* Support :map and :to_hash prepared statement types (jeremyevans)
* Make Dataset#naked! work correctly (jeremyevans)
* Remove Dataset#paginate!, as it was broken (jeremyevans)
* Fix query extension to not break usage of #clone without arguments (jeremyevans) (#413)
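A short sketch of the empty array handling above (table and column are hypothetical):

  DB[:albums].where(:artist_id=>[]).all    # matches no rows
  DB[:albums].exclude(:artist_id=>[]).all  # matches only rows where artist_id is not NULL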

=== 3.31.0 (2012-01-03)

* Dataset#from no longer handles :a__b__c___d as a.b.c AS d (jeremyevans)
* Support many_to_one associations with the same name as their column, using the :key_column option (jeremyevans)
* Add Model.def_column_alias for defining alias methods for columns (jeremyevans)
* Support :server option in Dataset#import and #multi_insert (jeremyevans)
* Respect existing RETURNING/OUTPUT clauses in #import/#multi_insert on PostgreSQL/MSSQL (jeremyevans)
* Support :return=>:primary_key option to Dataset#import and #multi_insert (jeremyevans)
* Correctly handle return value for Dataset#insert with column array and value array on PostgreSQL <8.2 (jeremyevans)
* Dataset#insert_multiple now returns an array of inserted primary keys (jeremyevans) (#408)
* Support RETURNING with DELETE and UPDATE on PostgreSQL 8.2+ (funny-falcon)
* Raise error if tables from two separate schemas are detected when parsing the schema for a single table on PostgreSQL (jeremyevans)
* Handle clob types as string instead of blob on H2 (jeremyevans)
* Add database type support to the mock adapter, e.g. mock://postgres (jeremyevans)
* Allow creation of full text indexes on Microsoft SQL Server, but you need to provide a :key_index option (jeremyevans)
* Allow Dataset#full_text_search usage with prepared statements (jeremyevans)
* Make Dataset#exists use a PlaceholderLiteralString so it works with prepared statements (jeremyevans)
* Fix Dataset#empty? for datasets with offsets when offset support is emulated (jeremyevans)
* Add Dataset#group_rollup and #group_cube methods for GROUP BY ROLLUP and CUBE support (jeremyevans)
* Add support for custom serialization formats to the serialization plugin (jeremyevans)
* Support a :login_timeout option in the jdbc adapter (glebpom) (#406)
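A sketch of the new :return option to import/multi_insert (table and columns are hypothetical):

  DB[:albums].import([:name], [['A'], ['B']], :return=>:primary_key)
  # => [1, 2], the autogenerated primary keys, where supported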

=== 3.30.0 (2011-12-01)

* Handle usage of on_duplicate_key_update in MySQL prepared statements (jeremyevans) (#404)
* Make after_commit and after_rollback respect :server option (jeremyevans) (#401)
* Respect :connect_timeout option in the postgres adapter when using pg (glebpom, jeremyevans) (#402)
* Make Dataset#destroy for model datasets respect dataset shard when using a transaction (jeremyevans)
* Make :server option to Model#save set the shard to use (jeremyevans)
* Move Model#set_server from the sharding plugin to the base plugin (jeremyevans)
* Add :graph_alias_base association option for setting base name to use for table aliases when eager graphing (jeremyevans)
* Make ILIKE work correctly on Microsoft SQL Server if database/column collation is case sensitive (jfirebaugh) (#398)
* When starting a new dataset graph, assume existing selection is the columns to select from the current table (jeremyevans)
* Allow specifying nanoseconds and offsets when converting a hash or array to a timestamp (jeremyevans, jfirebaugh) (#395)
* Improve performance when converting Java types to ruby types in the jdbc adapter (jeremyevans, jfirebaugh) (#395)
* Fix tinytds adapter if DB.identifier_output_method = nil (jeremyevans)
* Explicitly order by the row number column when emulating offsets (jfirebaugh) (#393)
* Fix Dataset#graph and #eager_graph modifying the receiver if the receiver is already graphed (jeremyevans) (#392)
* Change dataset literalization to an append-only-all-the-way-down design (jeremyevans)

=== 3.29.0 (2011-11-01)

* Allow Model.dataset_module to take a Module instance (jeremyevans)
* Apply Model.[] optimization in more cases (jeremyevans)
* Fix Model.[] optimization when dataset uses identifier_input_method different than database (jeremyevans)
* Work around pragma bug on jdbc/sqlite when emulating alter table support (jeremyevans)
* Database#<< and Dataset#<< now return self so they can be safely chained (jeremyevans)
* Fully support using an aliased table name as the :join_table option for a many_to_many association (jeremyevans)
* Make like case sensitive on SQLite and Microsoft SQL Server (use ilike for case insensitive matching) (jeremyevans)
* Add Database#extend_datasets for the equivalent of extending all of the Database object's datasets with a module (jeremyevans)
* Speed up Dataset #map, #to_hash, and related methods if an array of symbols is given (jeremyevans)
* Add Database#dataset_class for modifying the class used for datasets for a single Database object (jeremyevans)
* Plugins that override Model.load should be modified to override Model.call instead (jeremyevans)
* Speed up loading model objects from the database by up to 7-16% (jeremyevans)
* Create accessor methods for all columns in a model's table, even if the dataset doesn't select the columns (jeremyevans)
* Add mock adapter for better mocking of a database connection (jeremyevans)
* Have models pass their dataset instead of table name to Database#schema (jeremyevans)
* Allow Database#schema to take a dataset as the table argument, and use its identifier input/output methods (jeremyevans)
* Significant improvements to the db2 adapter (jeremyevans)
* Handle methods with names that can't be called directly in Model.def_dataset_method (jeremyevans)
* Add dataset_associations plugin for making dataset methods that return datasets of associated objects (jeremyevans)
* Don't allow Model.def_dataset_method to override private model methods (jeremyevans)
* Parse primary key information from system tables in the shared MSSQL adapter (jeremyevans)
* Fix handling of composite primary keys when emulating alter table operations on SQLite (jeremyevans)
* Emulate add_constraint and drop_constraint alter table operations on SQLite (jeremyevans)
* Apply the correct pragmas when connecting to SQLite via the Amalgalite and Swift adapters (jeremyevans)
* Fix bound variable usage for some types (e.g. Date) when used outside of prepared statements on SQLite (jeremyevans)
* Work around SQLite column naming bug when using subselects (jeremyevans)
* Make prepared_statements plugin work with adapters that require type specifiers for variable placeholders, such as oracle (jeremyevans)
* Add savepoint support to the generic JDBC transaction support (used by 6 jdbc subadapters) (jeremyevans)
* Add native prepared statement support to the oracle adapter (jeremyevans)
* Support sharding correctly by default when using transactions in model saving/destroying (jeremyevans)
* Add Database#in_transaction? method for checking if you are already in a transaction (jeremyevans)
* Add after_commit, after_rollback, after_destroy_commit, and after_destroy_rollback hooks to Model objects (jeremyevans)
* Add after_commit and after_rollback hooks to Database objects (jeremyevans) (#383)
* Support savepoints inside prepared transactions on MySQL (jeremyevans)
* Support opening transactions to multiple shards of the same Database object in the same Thread (jeremyevans)
* Add Sequel.transaction for running transactions on multiple databases at the same time (jeremyevans)
* Support :rollback => :always option in Database#transaction to always rollback the transaction (jeremyevans)
* Support :rollback => :reraise option in Database#transaction to reraise the Sequel::Rollback exception (jeremyevans)
* Add support for connecting to Apache Derby databases using the jdbc adapter (jeremyevans)
* Add support for connecting to HSQLDB databases using the jdbc adapter (jeremyevans)
* Fix inserting all default values into a table on DB2 (jeremyevans)
* Add :qualify option to many_to_one associations for whether to qualify the primary key column with the associated table (jeremyevans)
* Modify rcte_tree plugin to use column aliases if recursive CTEs require them (jeremyevans)
* Add Dataset#recursive_cte_requires_column_aliases? method to check if you must provide an argument list for a recursive CTE (jeremyevans)
* Much better support for Oracle in both the oci8-based oracle adapter and the jdbc oracle subadapter (jeremyevans)
* Handle CTEs in subselects in more places on databases that don't natively support CTEs in subselects (jeremyevans)
* Change Dataset#to_hash to not call the row_proc if 2 arguments are given (jeremyevans)
* Change Dataset#map to not call the row_proc if an argument is given (jeremyevans)
* Make Dataset#select_map and #select_order_map return an array of single element arrays if given an array with a single symbol (jeremyevans)
* Make Dataset#columns work correctly on jdbc, odbc, ado, and dbi adapters when using an emulated offset on MSSQL and DB2 (jeremyevans)
* Add Database#listen and #notify to the postgres adapter, for LISTEN and NOTIFY support (jeremyevans)
* Emulate the bitwise complement operator on H2 (jeremyevans)
* Fix improper handling of emulated bitwise operators with more than two arguments (jeremyevans)
* Allow convert_invalid_date_time to be set on a per-Database basis in the mysql adapter (jeremyevans)
* Allow convert_tinyint_to_bool to be set on a per-Database basis in the mysql and mysql2 adapters (jeremyevans)
* Allow per-Database override of the type conversion procs on the mysql, sqlite, and ibmdb adapters (jeremyevans)
* Add Database#timezone accessor, for overriding Sequel.database_timezone per Database object (jeremyevans)
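A brief sketch of the new transaction options and hooks (assuming a connected DB; db1 and db2 are hypothetical):

  DB.transaction(:rollback=>:always) do
    DB.after_rollback{puts 'rolled back'}
    DB[:albums].insert(:name=>'scratch')  # always undone at the end of the block
  end

  Sequel.transaction([db1, db2]){db1[:albums].insert(:name=>'A')}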

=== 3.28.0 (2011-10-03)

* Add firebird jdbc subadapter (jeremyevans)
* Add SQLTime.create method for easier creation of SQLTime instances (jeremyevans)
* Make Dataset#with_pk use a qualified primary key, so it works correctly on joined datasets (jeremyevans)
* Support the :limit association option when using eager_graph (jeremyevans)
* Fix eager loading via eager_graph of one_to_one associations that match multiple associated objects and use order to pick the first one (jeremyevans)
* Make after_load association hooks apply when using eager_graph (jeremyevans)
* Make Dataset#with_sql treat a symbol given as a first argument as a method name to call to get the SQL (jeremyevans)
* Make Dataset #delete, #insert, #update return array of plain hashes if block not given and Dataset#returning is used (jeremyevans)
* Allow Dataset #map, #to_hash, #select_map, #select_order_map, and #select_hash to take arrays of columns instead of single columns (jeremyevans)
* Make Dataset #delete, #insert, #update yield plain hashes to a block if Dataset#returning is used (jeremyevans)
* Add Dataset#returning for setting the columns to return in INSERT/UPDATE/DELETE statements, used by PostgreSQL 9.1 (jeremyevans)
* Support WITH clause in INSERT/UPDATE/DELETE on PostgreSQL 9.1+ (jeremyevans)
* Add Database#copy_table for PostgreSQL COPY support when using the postgres adapter with pg (jeremyevans)
* Support CREATE TABLE IF NOT EXISTS on PostgreSQL 9.1+ (jeremyevans)
* Add support for Sequel::Model.default_eager_limit_strategy to set the default :eager_limit_strategy for *_many associations (jeremyevans)
* Add support for an :eager_limit_strategy => :correlated_subquery value for limiting using correlated subqueries (jeremyevans)
* Allow use of a dataset that uses the emulated offset support on MSSQL and DB2 in an IN subquery by using a nested subquery (jeremyevans)
* Allow use of a dataset that uses LIMIT in an IN subquery on MySQL by using a nested subquery (jeremyevans)
* Work around serious ActiveSupport bug in Time.=== that breaks literalization of Time values (jeremyevans)
* Speed up SQL operator methods by using module_eval instead of define_method (jeremyevans)
* Support sql_(boolean,number,string) methods on ComplexExpressions, allowing you to do (x + 1).sql_string + 'a' for (x + 1) || 'a' (jeremyevans)
* Don't disallow SQL expression creation based on types, leave that to the database server (jeremyevans)
* Make :column [&|] 1 use an SQL bitwise [&|] expression instead of a logical (AND|OR) expression (jeremyevans)
* Make :column + 'a' use an SQL string concatenation expression instead of an addition expression (jeremyevans)
* Fix :time typecasting from Time to SQLTime for fractional seconds on ruby 1.9 (jeremyevans)
* Have Dataset#select_append check supports_select_all_and_column? and select all from all FROM and JOIN tables if no columns selected (jeremyevans)
* Add Dataset#supports_select_all_and_column? for checking if you can do SELECT *, column (jeremyevans)
* Add support for an :eager_limit_strategy => :window_function value for limiting using window functions (jeremyevans)
* Add support for an :eager_limit_strategy => :distinct_on value for one_to_one associations for using DISTINCT ON (jeremyevans)
* Add support for an :eager_limit_strategy association option, for manual control over how limiting is done (jeremyevans)
* Add Dataset#supports_ordered_distinct_on? for checking if the dataset can use distinct on while respecting order (jeremyevans)
* Add support for the association :limit option when eager loading via .eager for *_many associations (jeremyevans)
* Add db2 jdbc subadapter (jeremyevans)
* Fix the db2 adapter so it actually works (jeremyevans)
* Add ibmdb adapter for accessing DB2 (roylez, jeremyevans) (#376)
* Add much better support for DB2 databases (roylez, jeremyevans) (#376)
* Handle SQL::AliasedExpressions and SQL::JoinClauses in Dataset#select_all (jeremyevans)
* Speed up type translation slightly in mysql, postgres, and sqlite adapters (jeremyevans)
* Add Dataset#supports_cte_in_subqueries? for checking whether database supports WITH in subqueries (jeremyevans)
* Allow Model.set_dataset to accept Sequel::LiteralString arguments as table names (jeremyevans)
* Association :after_load hooks in lazy loading are now called after the associated objects have been cached (jeremyevans)
* Emulate handling of extract on MSSQL, using datepart (jeremyevans)
* Emulate handling of extract on SQLite, but you need to set Database#use_timestamp_timezones = false (jeremyevans)
* Abstract handling of ComplexExpressionMethods#extract so that it can work on databases that don't implement extract (jeremyevans)
* Emulate xor operator on SQLite (jeremyevans)
* Add Dataset#supports_where_true? for checking if the database supports WHERE true (or WHERE 1 if 1 is true) (jeremyevans)
* Fix eager loading via eager of one_to_one associations that match multiple associated objects and use order to pick the first one (jeremyevans)
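A sketch of Dataset#returning on PostgreSQL 9.1 (table and columns are hypothetical):

  DB[:albums].returning(:id).insert(:name=>'A')  # => [{:id=>1}]
  DB[:albums].returning(:id, :name).update(:name=>'B')
  # => an array of hashes, one per updated row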

=== 3.27.0 (2011-09-01)

* Add support for native prepared statements to the tinytds adapter (jeremyevans)
* Add support for native prepared statements and stored procedures to the mysql2 adapter (jeremyevans)
* Support dropping primary key, foreign key, and unique constraints on MySQL via the drop_constraint :type option (jeremyevans)
* Add Sequel::SQLTime class for handling SQL time columns (jeremyevans)
* Typecast DateTime objects to Date for date columns (jeremyevans)
* When typecasting Date objects to timestamps, make the resulting objects always have no fractional date components (jeremyevans)
* Add Model.dataset_module for simplifying many def_dataset_method calls (jeremyevans)
* Make prepared_statements_safe plugin work on classes without datasets (jeremyevans)
* Make Dataset#hash work correctly when referencing SQL::Expression instances (jeremyevans)
* Handle allowed mass assignment methods correctly when including modules in classes or extending instances with modules (jeremyevans)
* Fix Model#hash to work correctly with composite primary keys and with no primary key (jeremyevans)
* Model#exists? now returns false without issuing a query for new model objects (jeremyevans)
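A minimal sketch of Model.dataset_module (the Album model is hypothetical):

  class Album < Sequel::Model
    dataset_module do
      def titled
        exclude(:title=>nil)
      end
    end
  end

  Album.titled.count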

=== 3.26.0 (2011-08-01)

* Fix bug in default connection pool if a disconnect error is raised and the disconnection_proc also raises an error (jeremyevans)
* Disallow eager loading via eager of many_*_many associations with :eager_graph option (jeremyevans)
* Major speedup in dataset creation (jeremyevans)
* Replace internal implementation of eager_graph with much faster version (jeremyevans)
* Don't treat strings with leading zeros as octal format in the default typecasting (jeremyevans)
* Fix literalization of Date, Time, and DateTime values on Microsoft Access (jeremyevans)
* Fix handling of nil values with the pure-Java version of nokogiri in the xml_serializer plugin (jeremyevans)
* Make identity_map plugin work with standard eager loading of many_to_many and many_through_many associations (jeremyevans)
* Make create_table! only attempt to drop the table if it already exists (jeremyevans)
* Remove custom table_exists? implementations in the oracle and postgres adapters (jeremyevans)
* Handle another type of disconnection in the postgres adapter (jeremyevans)
* Handle disconnections in the ado adapter and do postgres subadapter (jeremyevans)
* Recognize disconnections when issuing BEGIN/ROLLBACK/COMMIT statements (jeremyevans) (#368)

=== 3.25.0 (2011-07-01)

* Work with tiny_tds-0.4.5 in the tinytds adapter, older versions are no longer supported (jeremyevans)
* Make association_pks plugin typecast provided values to integer if the primary key column type is integer (jeremyevans)
* Model.set_dataset now accepts Identifier, QualifiedIdentifier, and AliasedExpression arguments (jeremyevans)
* Fix handling of nil values in bound variables and prepared statement and stored procedure arguments in the jdbc adapter (jeremyevans, wei)
* Allow treating Datasets as Expressions, e.g. DB[:table1].select(:column1) > DB[:table2].select(:column2) (jeremyevans)
* No longer use CASCADE by default when dropping tables on PostgreSQL (jeremyevans)
* Support :cascade option to #drop_table, #drop_view, #drop_column, and #drop_constraint for using CASCADE (jeremyevans)
* If validation error messages are LiteralStrings, don't add the column name to them in Errors#full_messages (jeremyevans)
* Fix bug loading plugins on 1.9 where ::ClassMethods, ::InstanceMethods, or ::DatasetMethods is defined (jeremyevans)
* Add Dataset#exclude_where and Dataset#exclude_having methods, so you can force use of having or where clause (jeremyevans)
* Allow Dataset#select_all to take table name arguments and select all columns from each given table (jeremyevans)
* Add Dataset#select_group method, for selecting and grouping on the same columns (jeremyevans)
* Allow Dataset#group and Dataset#group_and_count to accept a virtual row block (jeremyevans)

=== 3.24.1 (2011-06-03)

* Ignore index creation errors if using create_table? with the IF NOT EXISTS syntax (jeremyevans) (#362)

=== 3.24.0 (2011-06-01)

* Add prepared_statements_association plugin, for using prepared statements by default for regular association loading (jeremyevans)
* Add prepared_statements_safe plugin, for making prepared statement use with models safer (jeremyevans)
* Add prepared_statements_with_pk plugin, for using prepared statements for dataset lookups by primary key (jeremyevans)
* Fix bug in emulated prepared statement support not supporting nil or false as bound values (jeremyevans)
* Add Dataset#unbind for unbinding values from a dataset, for use with creating prepared statements (jeremyevans)
* Add prepared_statements plugin for using prepared statements for updates, inserts, deletes, and lookups by primary key (jeremyevans)
* Make Dataset#[] for model datasets consider a single integer argument as a lookup by primary key (jeremyevans)
* Add Dataset#with_pk for model datasets, for finding first record with matching primary key value (jeremyevans)
* Add defaults_setter plugin for setting default values when initializing model instances (jeremyevans)
* Add around hooks (e.g. around_save) to Sequel::Model (jeremyevans)
* Add Model#initialize_set private method to ease extension writing (jeremyevans)
* Only typecast bit fields to booleans on MSSQL, the MySQL bit type is a bitfield, not a boolean (jeremyevans)
* Set SQL_AUTO_IS_NULL=0 by default when connecting to MySQL via the swift and jdbc adapters (jeremyevans)
* Fix bug in multiple column IN/NOT IN emulation when a model dataset is used (jeremyevans)
* Add support for filtering and excluding by association datasets (jeremyevans)
* Fix literalization of boolean values in filters on SQLite and MSSQL (jeremyevans)
* Add support for filtering and excluding by multiple associations (jeremyevans)
* Add support for inverting some SQL::Constant instances such as TRUE, FALSE, NULL, and NOTNULL (jeremyevans)
* Add support for excluding by associations to model datasets (jeremyevans)
* The Sequel::Postgres.use_iso_date_format setting now only affects future Database objects (jeremyevans)
* Add Sequel::Postgres::PG_NAMED_TYPES hash for extensions to register type conversions for non-standard types (jeremyevans, pvh)
* Make create_table? use IF NOT EXISTS instead of using SELECT to determine existence, if supported (jeremyevans)
* Fix bug in association_pks plugin when associated table has a different primary key column name (jfirebaugh)
* Fix limiting rows when connecting to DB2 (semmons99)
* Exclude columns from tables in the INFORMATION_SCHEMA when parsing table schema on JDBC (jeremyevans)
* Fix limiting rows when connecting to Microsoft Access (jeremyevans)
* Add Database#views for getting an array of symbols of view names for the database (jeremyevans, christian.michon)
* Make Database#tables no longer include view names on MySQL (jeremyevans)
* Convert Java CLOB objects to ruby strings when using the JDBC JTDS subadapter (christian.michon)
* If Thread#kill is called on a thread with an open transaction, roll the transaction back on ruby 1.8 and rubinius (jeremyevans)
* Split informix adapter into shared/specific parts, add JDBC informix subadapter (jeremyevans)
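A sketch of the new primary key lookups (the Album model is hypothetical):

  Album.dataset.with_pk(1)  # first record with primary key 1, or nil
  Album.dataset[1]          # an integer argument is now also a primary key lookup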

=== 3.23.0 (2011-05-02)

* Migrate issue tracker from Google Code to GitHub Issues (jeremyevans)
* Add support for filtering by associations to model datasets (jeremyevans)
* Don't call insert_select when saving a model that doesn't select all columns of the table (jeremyevans)
* Fix bug when using :select=>[] option for a many_to_many association (jeremyevans)
* Add a columns_introspection extension that attempts to skip database queries by introspecting selected columns (jeremyevans)
* When combining old integer migrations and new timestamp migrations, make sure old integer migrations are all applied first (jeremyevans)
* Support dynamic callbacks to customize regular association loading at query time (jeremyevans)
* Support cascading of eager loading with dynamic callbacks for both eager and eager_graph (jeremyevans)
* Make the xml_serializer plugin handle namespaced models by using __ instead of / as a separator (jeremyevans)
* Allow the :eager_grapher association proc to accept a single hash instead of 3 arguments (jfirebaugh)
* Support dynamic callbacks to customize eager loading at query time (jfirebaugh, jeremyevans)
* Fix bug in the identity_map plugin for many_to_one associations when the association reflection hadn't been filled in yet (funny-falcon)
* Add serialization_modification_detection plugin for detecting changes in serialized columns (jeremyevans) (#333)
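A sketch of filtering a model dataset by an association (models are hypothetical):

  artist = Artist[1]
  Album.where(:artist=>artist).all  # albums whose artist association matches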

=== 3.22.0 (2011-04-01)

* Add disconnect detection to tinytds adapter, though correct behavior may require an update to tiny_tds (cult_hero)
* Add Dataset/Database#mssql_unicode_strings accessor when connecting to MSSQL to control string literalization (semmons99, jeremyevans)
* Fix ODBC::Time instance handling in the odbc adapter (jeremyevans)
* Use Sequel.application_timezone when connecting in the oracle adapter to set the connection's session's timezone (jmthomas)
* In the ADO adapter, assume access to SQL Server if a :conn_string option is given that doesn't indicate Access/Jet (damir.si) (#332)
* Use the correct class when loading instances for descendents of model classes that use single table inheritance (jeremyevans)
* Support for COLLATE in column definitions (jfirebaugh)
* Don't use a schema when creating a temporary table (jeremyevans)
* Make migrator work correctly when a default_schema is set (jeremyevans) (#331)

=== 3.21.0 (2011-03-01)

* Make symbol splitting (:table__column___alias) work correctly for identifiers that are not in the \w character class (authorNari)
* Enable row locks in Oracle (authorNari)
* Prefer cover? over include? for validates_includes/validates_inclusion_of (jeremyevans)
* Make using NULL/NOT NULL, DEFAULT, and UNIQUE column options work correctly on H2 and possibly Oracle (jeremyevans)
* Make bin/sequel accept file arguments and work correctly when $stdin is not a tty (jeremyevans)
* Add support for -I and -r options to bin/sequel (jeremyevans)
* Sequel::Model.plugin can now be overridden just like the other Model methods (jeremyevans)
* Add tinytds adapter, the best way to connect to MSSQL from a C based ruby running on *nix (jeremyevans)
* Recognize bigint unsigned as a Bignum type in the schema dumper (gamespy-tech) (#327)
* Add Dataset#calc_found_rows for MySQL datasets (macks)
* Add association_autoreloading plugin for clearing association cache when foreign key value changes (jfirebaugh, jeremyevans)
* Fix join_table on MySQL ignoring the block (jfirebaugh)
* Transfer CTE WITH clauses in subselect to main query when joining on MSSQL (jfirebaugh)
* Make specs support both RSpec 1 and RSpec 2 (jeremyevans)
* Work with ruby-informix versions >= 0.7.3 in the informix adapter (jeremyevans) (#326)

=== 3.20.0 (2011-02-01)

* Allow a :partial option to Database#indexes on MySQL to include partial indexes (roland.swingler) (#324)
* Add a SQLite subadapter to the swift adapter, now that swift supports it (jeremyevans)
* Update swift adapter to support swift 0.8.1, older versions no longer supported (jeremyevans)
* Allow setting arbitrary JDBC properties in the jdbc adapter with the :jdbc_properties option (jeremyevans)
* Use a better error message if a validates_max_length validation is applied to a nil value (jeremyevans) (#322)
* Add some basic Microsoft Access support to the ado adapter, autoincrementing primary keys now work (jeremyevans)
* Make class_table_inheritance plugin handle subclass associations better (jeremyevans) (#320)

=== 3.19.0 (2011-01-03)

* Handle Date and DateTime types in prepared statements when using the jdbc adapter (jeremyevans)
* Handle Date, DateTime, Time, SQL::Blob, true, and false in prepared statements when using the SQLite adapter (jeremyevans)
* Use varbinary(max) instead of image for the generic blob type on MSSQL (jeremyevans)
* Close prepared statements when disconnecting when using SQLite (jeremyevans)
* Allow reflecting on validations in the validation_class_methods plugin (jeremyevans)
* Allow passing a primary key value to the add_* association method (gucki)
* When typecasting model column values, check the classes of the new and existing values (jeremyevans)
* Improve type translation performance in the postgres, mysql, and sqlite adapters by using methods instead of procs (jeremyevans)

=== 3.18.0 (2010-12-01)

* Allow the user to control how the connection pool deals with attempts to access shards that aren't configured (jeremyevans)
* Typecast columns when creating model objects from JSON in the json_serializer plugin (jeremyevans)
* When parsing the schema for a model that uses an aliased table, use the unaliased table name (jeremyevans)
* When emulating schema methods such as drop_column on SQLite, recreate applicable indexes on the recreated table (jeremyevans)
* Only remove hook pairs that have been run successfully in the instance_hooks plugin (jeremyevans)
* Add reversible migration support to the migration extension (jeremyevans)
* Add to_dot extension, for producing visualizations of Dataset abstract syntax trees with Graphviz (jeremyevans)
* Switch to using manual type translation in the SQLite adapter (jeremyevans)
* Support :read_timeout option in the native mysql adapter (tmm1)
* Support :connect_timeout option in the native mysql and mysql2 adapters (tmm1)
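A minimal sketch of a reversible migration using the change block added above (the table is hypothetical); the down direction is derived automatically:

  Sequel.migration do
    change do
      create_table(:artists) do
        primary_key :id
        String :name, :null=>false
      end
    end
  end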

=== 3.17.0 (2010-11-05)

* Ensure that the optimistic locking plugin increments the lock column when using Model#modified! (jfirebaugh)
* Correctly handle nil values in the xml_serializer plugin, instead of converting them to empty strings (george.haff) (#313)
* Use a default wait_timeout that's allowed on Windows for the mysql and mysql2 adapters (jeremyevans) (#314)
* Add support for connecting to MySQL over SSL using the :sslca, :sslkey, and related options (jeremyevans)
* Fix Database#each_server when used with jdbc or do connection strings without separate :adapter option (jeremyevans) (#312)
* Much better support in the AS400 JDBC subadapter (bhauff)
* Allow cloning of many_through_many associations (gucki, jeremyevans)
* In the nested_attributes plugin, don't make unnecessary update calls to modify associated objects that are about to be deleted (jeremyevans, gucki)
* Allow Dataset#(add|set)_graph_aliases to accept as hash values symbols and arrays with a single element (jeremyevans)
* Add Database#views and #view_exists? to the Oracle adapter (gpheruson)
* Add Database#sql_log_level for changing the level at which SQL queries are logged (jeremyevans)
* Remove unintended use of prepared statements in swift adapter (jeremyevans)
* Fix logging in the swift PostgreSQL subadapter (jeremyevans)

=== 3.16.0 (2010-10-01)

* Support composite foreign keys for associations in the identity_map plugin (harukizaemon, jeremyevans) (#310)
* Handle INTERSECT and EXCEPT on Microsoft SQL Server 2005+ (jfirebaugh)
* Add :replace option to Database#create_language in the postgresql adapter (jeremyevans)
* Make rcte_tree plugin work when not all columns are selected (jeremyevans)
* Add swift adapter (jeremyevans)
* Fix literalization of DateTime objects on 1.9 for databases that support fractional seconds (jeremyevans)

=== 3.15.0 (2010-09-01)

* Make emulated alter_table tasks on SQLite correctly preserve foreign keys (DirtYiCE, jeremyevans)
* Add support for sequel_pg to the native postgres adapter when pg is used (jeremyevans)
* Make class MyModel < Sequel::Model(DB[:table]) reload safe (jeremyevans)
* Fix a possible error when using the do (DataObjects) adapter with postgres (jeremyevans)
* Handle a many_to_many :join_table option that uses an implicit alias (mluu, jeremyevans)
* Work around bug in Microsoft's SQL Server JDBC Adapter version 3.0 (jfirebaugh)
* Make eager graphing a model that uses an aliased table name work correctly (jeremyevans)
* Make class_table_inheritance plugin work with non integer primary keys on SQLite (jeremyevans, russm)
* Add :auto_increment field to column schema values on MySQL if the column is auto incrementing (dbd)
* Handle DSN-less ODBC connections better (Ricardo Ramalho)
* Exclude temporary tables when parsing the schema on PostgreSQL (jeremyevans) (#306)
* Add Mysql2 adapter (brianmario)
* Handle Mysql::Error exceptions when disconnecting in the MySQL adapter (jeremyevans)
* Make typecasting work correctly for attributes loaded lazily when using the lazy attributes plugin (jeremyevans)

=== 3.14.0 (2010-08-02)

* Handle OCIInvalidHandle errors when disconnecting in the Oracle adapter (jeremyevans)
* Allow calling Model.create_table, .create_table! and .create_table? with blocks containing the schema in the schema plugin (jfirebaugh)
* Fix handling of a :conditions option in the rcte plugin (mluu)
* Fix aggregate methods such as Dataset#sum and #avg on MSSQL on datasets with an order but no limit (mluu)
* Fix rename_table on MSSQL for case sensitive collations and schemas (mluu)
* Add a :single_root option to the tree plugin, for enforcing a single root value via a before_save hook (jfirebaugh)
* Add a Model#root? method to the tree plugin, for checking if the current node is a root (jfirebaugh)
* Add a :raise_on_failure option to Model#save to override the raise_on_save_failure setting (jfirebaugh)
* Handle class discriminator column names that are existing ruby method names in the single table inheritance plugin (jeremyevans)
* Fix times and datetimes when timezone support is used and you are loading a standard time when in daylight time or vice versa (gcampbell)
* Handle literalization of OCI8::CLOB objects in the native oracle adapter (jeremyevans)
* Raise a Sequel::Error instead of an ArgumentError if the migration current or target version does not exist (jeremyevans)
* Fix Database#schema on Oracle when the same table exists in multiple schemas (djwhitt)
* Fix Database#each_server when using a connection string to connect (jeremyevans)
* Make Model dataset's destroy method respect the model's use_transactions setting, instead of always using a transaction (jeremyevans)
* Add Database#adapter_scheme, for checking which adapter a Database uses (jeremyevans)
* Allow Dataset#grep to take :all_patterns, :all_columns, and :case_insensitive options (mighub, jeremyevans)
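A sketch of the new Dataset#grep options (table and columns are hypothetical):

  DB[:albums].grep([:title, :artist], '%love%', :case_insensitive=>true)
  # matches the pattern against either column, ignoring case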

=== 3.13.0 (2010-07-01)

* Allow Model.find_or_create to take a block which is yielded the object to be created, if no object is found (zaius, jeremyevans)
* Make PlaceholderLiteralString a GenericExpression subclass (jeremyevans)
* Allow nil/NULL to be used as a CASE expression value (jeremyevans)
* Support bitwise operators on more databases (jeremyevans)
* Make PostgreSQL do bitwise xor instead of exponentiation for ^ operator (jeremyevans)
* Fix handling of tinyint(1) columns when connecting to MySQL via JDBC (jeremyevans)
* Handle arrays of two element arrays as filter hash values automatically (jeremyevans)
* Allow :frame option for windows to take a string that is used literally (jeremyevans)
* Support transaction isolation levels on PostgreSQL, MySQL, and MSSQL (jeremyevans)
* Support prepared transactions/two-phase commit on PostgreSQL, MySQL, and H2 (jeremyevans)
* Allow NULLS FIRST/LAST when ordering using the :nulls=>:first/:last option to asc and desc (jeremyevans)
* On PostgreSQL, if no :schema option is provided for #tables, #table_exists?, or #schema, assume all schemas except the default non-public ones (jeremyevans) (#305)
* Cache prepared statements when using the native sqlite driver, improving performance (jeremyevans)
* Add a Tree plugin for treating model objects as being part of a tree (jeremyevans, mwlang)
* Add a :methods_module association option, for choosing the module into which association methods are placed (jeremyevans)
* Add a List plugin for treating model objects as being part of a list (jeremyevans, aemadrid)
* Don't attempt to use class polymorphism in the class_table_inheritance plugin if no cti_key is defined (jeremyevans)
* Add a XmlSerializer plugin for serializing/deserializing model objects to/from XML (jeremyevans)
* Add a JsonSerializer plugin for serializing/deserializing model objects to/from JSON (jeremyevans)
* Handle unsigned integers in the schema dumper (jeremyevans)
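A sketch of the block form of find_or_create (model and columns are hypothetical):

  Album.find_or_create(:name=>'RF') do |a|
    a.copies_sold = 0  # the block only runs if a new object is created
  end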

=== 3.12.1 (2010-06-09)

* Make :encoding option work on MySQL even if config file specifies different encoding (jeremyevans) (#300)

=== 3.12.0 (2010-06-01)

* Add a :deferrable option to foreign_key for creating deferrable foreign keys (hydrow)
* Add a :join_table_block many_to_many association option used by the add/remove/remove_all methods (jeremyevans)
* Add an AssociationPks plugin that adds association_pks and association_pks= methods for *_to_many associations (jeremyevans)
* Add an UpdatePrimaryKey plugin that allows you to update the primary key of a model object (jeremyevans)
* Add a SkipCreateRefresh plugin that skips the refresh when saving new model objects (jeremyevans)
* Add a StringStripper plugin that strips strings before assigning them to model attributes (jeremyevans)
* Allow the :eager_loader association proc to accept a single hash instead of 3 arguments (jeremyevans)
* Add a Dataset#order_append alias for order_more, for consistency with order_prepend (jeremyevans)
* Add a Dataset#order_prepend method that adds to the beginning of an existing order (jeremyevans)
* Add a Sequel::NotImplemented exception class, use instead of NotImplementedError (jeremyevans)
* Correctly handle more than 2 hierarchy levels in the single table inheritance plugin (jeremyevans)
* Allow using a custom column value<->class mapping in the single_table_inheritance plugin (jeremyevans, tmm1)
* Handle SQL::Identifiers in the schema_dumper extension (jeremyevans) (#304)
* Make sure certain alter table operations clear the schema correctly on MySQL (jeremyevans) (#301)
* Fix leak of JDBC Statement objects when using transactions on JDBC on databases that support savepoints (jeremyevans)
* Add DatabaseDisconnectError support to the ODBC adapter (Joshua Hansen)
* Make :encoding option work on MySQL in some cases where it was ignored (jeremyevans) (#300)
* Make Model::Errors#on always return nil if there are no errors on that attribute (jeremyevans)
* When using multiple plugins that add before hooks, the order that the hooks are called may have changed (jeremyevans)
* The hook_class_methods plugin no longer skips later after hooks if earlier after hooks return false (jeremyevans)
* Add Model#set_fields and update_fields, similar to set_only and update_only but ignoring other keys in the hash (jeremyevans)
* Add Model.qualified_primary_key_hash, similar to primary_key_hash but with qualified columns (jeremyevans)
* Make Model::Errors#empty? handle attributes with empty error arrays (jeremyevans)
* No longer apply association options to join table dataset when removing all many_to_many associated objects (jeremyevans)
* Log the execution times of migrations to the database's loggers (jeremyevans)
* Add a TimestampMigrator that can work with migrations where versions are timestamps, and handle migrations applied out of order (jeremyevans)
* Completely refactor Sequel::Migrator, now a class instead of a module (jeremyevans)
* Save migration version after each migration, instead of after all migrations (jeremyevans)
* Raise an error if missing a migration version (jeremyevans)
* Raise an error if using a duplicate migration version (jeremyevans)
* Add a Sequel.migration DSL for defining migrations (jeremyevans)
* Add a sharding plugin giving Sequel::Model objects support for dealing with sharding (jeremyevans)
* Handle timestamp(N) with time zone data types (hone)
* Fix MSSQL temporary table creation, but watch out as it changes the table name (gpd, jeremyevans) (#299)
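A sketch of the new restricted setters (the params hash and columns are hypothetical):

  album.set_fields(params, [:name, :year])     # only :name and :year are set
  album.update_fields(params, [:name, :year])  # same, but also saves the changes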

=== 3.11.0 (2010-05-03)

* Allow shared postgresql adapter to work with ruby 1.9 with the -Ku switch (golubev.pavel) (#298)
* Add support for connecting to MSSQL via JTDS in the JDBC adapter (jeremyevans)
* Support returning the number of rows updated/deleted on MSSQL when using the ADO adapter with an explicit :provider (jeremyevans)
* Support transactions in the ADO adapter if not using the default :provider (jeremyevans)
* Make Database#disconnect not raise an exception when using the unsharded single connection pool (jeremyevans)
* Attempt to handle JDBC connection problems in cases where driver auto loading doesn't work (e.g. Tomcat) (elskwid)
* Make native MySQL adapter's tinyint to boolean conversion only convert tinyint(1) columns and not larger tinyint columns (roland.swingler) (#294)
* Fix use of limit with distinct on Microsoft SQL Server (jeremyevans) (#297)
* Correctly swallow errors when using :ignore_index_errors in Database#create_table when using unsupported indexes (jeremyevans) (#295)
* Fix insert returning the autogenerated key when using the 5.1.12 MySQL JDBC driver (viking)
* Consider number/numeric/decimal columns with a 0 scale to be integer columns (e.g. numeric(10, 0)) (jeremyevans, QaDes)
* Fix Database#rename_table on Microsoft SQL Server (rohit.namjoshi) (#293)
* Add Dataset#provides_accurate_rows_matched?, for seeing if update and delete are likely to return correct numbers (jeremyevans)
* Add require_modification to Sequel::Model, for checking that model instance updating and deleting affects a single row (jeremyevans)
* Fix leak of ResultSets when getting metadata in the jdbc adapter (jrun)
* Make Dataset#filter and related methods just clone receiver if given an empty argument, such as {}, [], or '' (jeremyevans)
* Add instance_filters plugin, for adding arbitrary filters when updating/destroying the instance (jeremyevans)
* No longer create the #{plugin}_opts methods for plugins (jeremyevans)
* Support :auto_vacuum, :foreign_keys, :synchronous, and :temp_store Database options on SQLite, for thread-safe PRAGMA setting (jeremyevans)
* Add foreign_keys accessor to SQLite Database objects (enabled by default), which modifies the foreign_keys PRAGMA available in 3.6.19+ (jeremyevans)
* Add a Database#sqlite_version method when connecting to SQLite, used to determine feature support (jeremyevans)
* Fix rolling back transactions when connecting to Oracle via JDBC (jeremyevans)
* Fix syntax errors when connecting to MSSQL via the dbi adapter (jeremyevans) (#292)
* Add support for an :after_connect option when connecting, called with each new connection made (jeremyevans)
* Add support for a :test option when connecting to automatically test the connection (jeremyevans)
* Add Dataset#select_append, which always appends to the existing SELECTed columns (jeremyevans)
* Emulate DISTINCT ON on MySQL using GROUP BY (jeremyevans)
* Make MSSQL shared adapter emulate set_column_null alter table op better with types containing sizes (jeremyevans) (#291)
* Add :config_default_group and :config_local_infile options to the native MySQL adapter (jeremyevans)
* Add log_warn_duration attribute to Database, queries that take longer than it will be logged at warn level (jeremyevans)
* Switch Database logging to use log_yield instead of log_info, queries that raise errors are now logged at error level (jeremyevans)
* Update active_model plugin to work with the ActiveModel::Lint 3.0.0beta2 specs (jeremyevans)
* Support JNDI connection strings in the JDBC adapter (jrun)

=== 3.10.0 (2010-04-02)

* Make one_to_one setter and *_to_many remove_all methods apply the association options (jeremyevans)
* Make nested_attributes plugin handle invalid many_to_one associations better (john_firebaugh)
* Remove private methods from Sequel::BasicObject on ruby 1.8 (i.e. most Kernel methods) (jeremyevans)
* Add Sequel::BasicObject.remove_methods!, useful on 1.8 if libraries required after Sequel add methods to Object (jeremyevans)
* Change Sequel.connect with a block to return the block's value (jonas11235)
* Add an rcte_tree plugin, which uses recursive common table expressions for loading trees stored as adjacency lists (jeremyevans)
* Make typecast_on_load plugin also typecast when refreshing the object (either explicitly or implicitly after creation) (jeremyevans)
* Fix schema parsing and dumping of tinyint columns when connecting to MySQL via the do adapter (ricardochimal)
* Fix transactions when connecting to Oracle via JDBC (jeremyevans)
* Fix plugin loading when plugin module name is the same as an already defined top level constant (jeremyevans)
* Add an AS400 JDBC subadapter (need jt400.jar in classpath) (jeremyevans, bhauff)
* Fix the emulated MSSQL offset support when core extensions are not used (jeremyevans)
* Make Sequel::BasicObject work correctly on Rubinius (kronos)
* Add the :eager_loader_key option to associations, useful for custom eager loaders (jeremyevans)
* Dataset#group_and_count no longer orders by the count (jeremyevans)
* Fix Dataset#limit on MSSQL 2000 (jeremyevans)
* Support eagerly loading nested associations when lazily loading *_to_one associations using the :eager option (jeremyevans)
* Fix the one_to_one setter to work with a nil argument (jeremyevans)
* Cache one_to_one associations like many_to_one associations instead of one_to_many associations (jeremyevans)
* Use the singular form for one_to_one association names instead of the plural form (john_firebaugh)
* Add real one_to_one associations, using the :one_to_one option of one_to_many is now an error (jeremyevans)
* Add Model#lock! which uses Dataset#for_update to lock model rows (jeremyevans)
* Add Dataset#for_update as a standard dataset method (jeremyevans)
* Add composition plugin, similar to ActiveRecord's composed_of (jeremyevans)
* Combine multiple complex expressions for simpler SQL and object tree (jeremyevans)
* Add Dataset#first_source_table, for the unaliased version of the table for the first source (jeremyevans)
* Raise a more explicit error if attempting to use the sqlite adapter with sqlite3 instead of sqlite3-ruby (jeremyevans)
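A sketch of the new row locking support (table and model are hypothetical):

  DB.transaction do
    row = DB[:albums].for_update.first(:id=>1)  # SELECT ... FOR UPDATE
  end

  DB.transaction do
    album = Album[1]
    album.lock!  # reloads the row using FOR UPDATE, holding a row lock
  end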

=== 3.9.0 (2010-03-04)

* Allow loading adapters and extensions from outside of the Sequel lib directory (jeremyevans)
* Make limit and offset work as bound variables in prepared statements (jeremyevans)
* In the single_table_inheritance plugin, handle case where the sti_key is nil or '' specially (jeremyevans) (#287)
* Handle IN/NOT IN with an empty array (jeremyevans)
* Emulate IN/NOT IN with multiple columns where the database doesn't support it and a dataset is given (jeremyevans)
* Add Dataset#unused_table_alias, for generating a table alias that has not yet been used in the query (jeremyevans)
* Support an empty database argument in bin/sequel, useful for testing things without a real database (jeremyevans)
* Support for schemas and aliases when eager graphing (jeremyevans)
* Handle using an SQL::Identifier as a 4th option to Dataset#join_table (jeremyevans)
* Move gem spec from Rakefile to a .gemspec file, for compatibility with gem build and builder (jeremyevans) (#285)
* Fix MSSQL 2005+ offset emulation on ruby 1.9 (jeremyevans)
* Make active_model plugin work with ActiveModel 3.0 beta Lint specs, now requires active_model (jeremyevans)
* Correctly create foreign key constraints on MySQL with the InnoDB engine, but you must specify the :key option (jeremyevans)
* Add an optimistic_locking plugin for models, similar to ActiveRecord's optimistic locking support (jeremyevans)
* Handle implicitly qualified symbols in UPDATE statements, useful for updating joined datasets (jeremyevans)
* Have schema_dumper extension pass options hash to Database#tables (jeremyevans) (#283)
* Make all internal uses of require thread-safe (jeremyevans)
* Refactor connection pool into 4 separate pools, increase performance for unsharded setups (jeremyevans)
* Change a couple instance_evaled lambdas into procs, for 1.9.2 compatibility (jeremyevans)
* Raise error message earlier if DISTINCT ON is used on SQLite (jeremyevans)
* Speed up prepared statements on SQLite (jeremyevans)
* Correctly handle ODBC timestamps when database_timezone is nil (jeremyevans)
* Add Sequel::ValidationFailed#errors (tmm1)

=== 3.8.0 (2010-01-04)

* Catch cases in the postgres adapter where exceptions weren't converted or raised appropriately (jeremyevans)
* Don't double escape backslashes in string literals in the mssql shared adapter (john_firebaugh)
* Fix order of ORDER and HAVING clauses in the mssql shared adapter (mluu)
* Add validates_type to the validation_helpers plugin (mluu)
* Attempt to detect database disconnects in the JDBC adapter (john_firebaugh)
* Add Sequel::SQL::Expression#==, so arbitrary expressions can be compared by value (dlee)
* Respect the :size option for the generic File type on MySQL to create tinyblob, mediumblob, and longblob (ibc)
* Don't use the OUTPUT clause on SQL Server versions that don't support it (pre-2005) (jeremyevans) (#281)
* Raise DatabaseConnectionErrors in the single-threaded connection pool if unable to connect (jeremyevans)
* Fix handling of non-existent server in single-threaded connection pool (jeremyevans)
* Default to using mysqlplus driver in the native mysql adapter, fall back to mysql driver (ibc, jeremyevans)
* Handle 64-bit integers in JDBC prepared statements (paulfras)
* Improve blob support when using the H2 JDBC subadapter (nullstyle, jeremyevans, paulfras)
* Add Database#each_server, which yields a new Database object for each server in the connection pool which is connected to only that server (jeremyevans)
* Add Dataset#each_server, which yields a dataset for each server in the connection pool which will execute on that server (jeremyevans)
* Remove meta_eval and metaclass private methods from Sequel::Metaprogramming (jeremyevans)
* Merge Dataset::FROM_SELF_KEEP_OPTS into Dataset::NON_SQL_OPTIONS (jeremyevans)
* Add Database#remove_servers for removing servers from the pool on the fly (jeremyevans)
* When disconnecting servers, if there are any connections to the server currently in use, schedule them to be disconnected (jeremyevans)
* Allow disconnecting specific server(s)/shard(s) in Database#disconnect via a :servers option (jeremyevans)
* Handle multiple statements in a single query in the native MySQL adapter in all cases, not just when selecting via Dataset#each (jeremyevans)
* In the boolean_readers plugin, don't raise an error if the model's columns can't be determined (jeremyevans)
* In the identity_map plugin, remove instances from the cache if they are deleted/destroyed (jeremyevans)
* Add Database#add_servers, for adding new servers/shards on the fly (chuckremes, jeremyevans)

=== 3.7.0 (2009-12-01)

* Add Dataset#sequence to the shared Oracle adapter, for returning autogenerated primary key values on insert (jeremyevans) (#280)
* Bring support for modifying joined datasets into Sequel proper, supported on MySQL and PostgreSQL (jeremyevans)
* No longer use native autoreconnection in the mysql adapter (jeremyevans)
* Add NULL, NOTNULL, TRUE, SQLTRUE, FALSE, and SQLFALSE constants (jeremyevans)
* Add Dataset #select_map, #select_order_map, and #select_hash (jeremyevans)
* Make Dataset#group_and_count handle arguments other than Symbols (jeremyevans)
* Add :only_if_modified option to validates_unique method in validation_helpers plugin (jeremyevans)
* Allow specifying the dataset alias via :alias option when using union/intersect/except (jeremyevans)
* Allow Model#destroy to take an options hash and respect a :transaction option (john_firebaugh)
* If a transaction is being used, raise_on_save_failure is false, and a before hook returns false, rollback the transaction (john_firebaugh, jeremyevans)
* In the schema_dumper, explicitly specify the :type option if it isn't Integer (jeremyevans)
* On postgres, use bigserial type if :type=>Bignum is given as an option to primary_key (jeremyevans)
* Use READ_DEFAULT_GROUP in the mysql adapter to load the options in the client section of the my.cnf file (crohr)
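A sketch of the new convenience methods (table and columns are hypothetical):

  DB[:albums].select_map(:name)        # => ['A', 'B', ...]
  DB[:albums].select_order_map(:name)  # same values, ordered by name
  DB[:albums].select_hash(:id, :name)  # => {1=>'A', 2=>'B', ...}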
(jeremyevans) * Add Model#modified! for explicitly marking an object as modified, so save_changes/update will run callbacks even if no columns have been modified (jeremyevans) * Add support for a :fields option in the nested attributes plugin, and only allow updating of the fields specified (jeremyevans) * Don't allow modifying keys related to the association when updating existing objects in the nested_attributes plugin (jeremyevans) * Add associated_object_keys method to AssociationReflection objects, specifying the key(s) in the associated model table related to the association (jeremyevans) * Support the memcached protocol in the caching plugin via the new :ignore_exceptions option (EppO, jeremyevans) * Don't modify array with a string and placeholders passed to Dataset#filter or related methods (jeremyevans) * Speed up Amalgalite adapter (copiousfreetime) * Fix bound variables on PostgreSQL when using nil and potentially other values (jeremyevans) * Allow easier overriding of default options used in the validation_helpers plugin (jeremyevans) * Have Dataset#literal_other call sql_literal on the object if it responds to it (heda, michaeldiamond) * Fix Dataset#explain in the amalgalite adapter (jeremyevans) * Have Model.table_name respect table aliases (jeremyevans) * Allow marshalling of saved model records after calling #marshallable! (jeremyevans) * one_to_many association methods now make sure that the removed object is currently associated to the receiver (jeremyevans) * Model association add_ and remove_ methods now have more descriptive error messages (jeremyevans) * Model association add_ and remove_ methods now make sure passed object is of the correct class (jeremyevans) * Model association remove_ methods now accept a primary key value and disassociate the associated model object (natewiger, jeremyevans) * Model association add_ methods now accept a hash and create a new associated model object (natewiger, jeremyevans) * Dataset#window for PostgreSQL datasets now respects previous windows (jeremyevans) * Dataset#simple_select_all? now ignores options that don't affect the SQL being issued (jeremyevans) * Account for table aliases in eager_graph (mluu) * Add support for MSSQL clustered index creation (mluu) * Implement insert_select in the MSSQL adapter via OUTPUT. Can be disabled via disable_insert_output. (jfirebaugh, mluu) * Correct error handling when beginning a transaction fails (jfirebaugh, mluu) * Correct JDBC binding for Time objects in prepared statements (jfirebaugh, jeremyevans) * Emulate JOIN USING clause poorly using JOIN ON if the database doesn't support JOIN USING (e.g. MSSQL, H2) (jfirebaugh, jeremyevans) * Support column aliases in Dataset#group_and_count (jfirebaugh) * Support preparing insert statements of the form insert(1,2,3) and insert(columns, values) (jfirebaugh) * Fix add_index for tables in non-default schema (jfirebaugh) * Allow named placeholders in placeholder literal strings (jeremyevans) * Allow the force_encoding plugin to work when refreshing (jeremyevans) * Add Dataset#bind for setting bound variable values before calling #call (jeremyevans) * Add additional join methods to Dataset: (cross|natural|(natural_)?(full|left|right))_join (jeremyevans) * Fix use a dataset aggregate methods (e.g. sum) on limited/grouped/etc. 
datasets (jeremyevans) * Clear changed_columns when saving new model objects with a database adapter that supports insert_select, such as postgres (jeremyevans) * Fix Dataset#replace with default values on MySQL, and respect insert-related options (jeremyevans) * Fix Dataset#lock on PostgreSQL (jeremyevans) * Fix Dataset#explain on SQLite (jeremyevans) * Add Dataset#use_cursor to the native postgres adapter, for processing large datasets (jeremyevans) * Don't ignore Class.inherited in Sequel::Model.inherited (antage) (#277) * Optimize JDBC::MySQL::DatabaseMethods#last_insert_id to prevent additional queries (tmm1) * Fix use of MSSQL with ruby 1.9 (cult hero) * Don't try to load associated objects when the current object has NULL for one of the key fields (jeremyevans) * No longer require GROUP BY to use HAVING, except on SQLite (jeremyevans) * Add emulated support for the lack of multiple column IN/NOT IN support in MSSQL and SQLite (jeremyevans) * Add emulated support for #ilike on MSSQL and H2 (jeremyevans) * Add a :distinct option for all associations, which uses the SQL DISTINCT clause (jeremyevans) * Don't require :: prefix for constant lookups in instance_evaled virtual row blocks on ruby 1.9 (jeremyevans) === 3.5.0 (2009-10-01) * Correctly literalize timezones in timestamps when using Oracle (jeremyevans) * Add class_table_inheritance plugin, supporting inheritance in the database using a table-per-model-class approach (jeremyevans) * Allow easier overriding of model code to insert and update individual records (jeremyevans) * Allow graphing to work on previously joined datasets, and eager graphing of models backed by joined datasets (jeremyevans) * Fix MSSQL emulated offset support for datasets with row_procs (e.g. Model datasets) (jeremyevans) * Support composite keys with set_primary_key when called with an array of multiple symbols (jeremyevans) * Fix select_more and order_more to not affect receiver (tamas.denes, jeremyevans) * Support composite keys in model associations, including many_through_many plugin support (jeremyevans) * Add the force_encoding plugin for forcing encoding of strings for models (requires ruby 1.9) (jeremyevans) * Support DataObjects 0.10 (previous DataObjects versions are now unsupported) (jeremyevans) * Allow the user to specify the ADO connection string via the :conn_string option (jeremyevans) * Add thread_local_timezones extension for allow per-thread overrides of the global timezone settings (jeremyevans) * Add named_timezones extension for using named timezones such as "America/Los_Angeles" using TZInfo (jeremyevans) * Pass through unsigned/elements/size and other options when altering columns on MySQL (tmm1) * Replace Dataset#virtual_row_block_call with Sequel.virtual_row (jeremyevans) * Allow Dataset #delete, #update, and #insert to respect existing WITH clauses on MSSQL (dlee, jeremyevans) * Add touch plugin, which adds Model#touch for updating an instance's timestamp, as well as touching associations when an instance is updated or destroyed (jeremyevans) * Add sql_expr extension, which adds the sql_expr to all objects, giving them easy access to Sequel's DSL (jeremyevans) * Add active_model plugin, which gives Sequel::Model an ActiveModel compliant API, passes the ActiveModel::Lint tests (jeremyevans) * Fix MySQL commands out of sync error when using queries with multiple result sets without retrieving all result sets (jeremyevans) * Allow splitting of multiple result sets into separate arrays when using multiple statements in a single query 

=== 3.4.0 (2009-09-02)

* Allow datasets without tables to work correctly on Oracle (mikegolod)
* Add #invert, #asc, and #desc to OrderedExpression (dlee)
* Allow validates_unique to take a block used to scope the uniqueness constraint (drfreeze, jeremyevans)
* Automatically save a new many_to_many associated object when associating the object via add_* (jeremyevans)
* Add a nested_attributes plugin for modifying associated objects directly through a model object (jeremyevans) (see the sketch below)
* Add an instance_hooks plugin for adding hooks to specific model instances (jeremyevans)
* Add a boolean_readers plugin for creating attribute? methods for boolean columns (jeremyevans)
* Add Dataset#ungrouped which removes existing grouping (jeremyevans)
* Make Dataset#group with nil or no arguments remove existing grouping (dlee)
* Fix using multiple emulated ALTER TABLE statements (e.g. drop_column) in a single alter_table block on SQLite (jeremyevans)
* Don't allow inserting on a grouped dataset or a dataset that selects from multiple tables (jeremyevans)
* Allow class Item < Sequel::Model(DB2) to work (jeremyevans)
* Add Dataset#truncate for truncating tables (jeremyevans)
* Add Database#run method for executing arbitrary SQL on a database (jeremyevans)
* Handle index parsing correctly for tables in a non-default schema on JDBC (jfirebaugh)
* Handle unique index parsing correctly when connecting to MSSQL via JDBC (jfirebaugh)
* Add support for converting Time/DateTime to local or UTC time upon storage, retrieval, or typecasting (jeremyevans)
* Accept a hash when typecasting values to date, time, and datetime types (jeremyevans)
* Make JDBC adapter prepared statements support booleans, blobs, and potentially any type of object (jfirebaugh)
* Refactor the inflection support and modify the default inflections (jeremyevans, dlee)
* Make the serialization and lazy_attribute plugins add accessor methods to modules included in the class (jeremyevans)
* Make Database#schema on JDBC include a :column_size entry specifying the maximum length/precision for the column (jfirebaugh)
* Make Database#schema on JDBC accept a :schema option (dlee)
* Fix Dataset#import when called with a dataset (jeremyevans)
* Give a much more descriptive error message if the mysql.rb driver is detected (jeremyevans)
* Make postgres adapter work with a modified postgres-pr that raises PGError (jeremyevans)
* Make ODBC adapter respect Sequel.datetime_class (jeremyevans)
* Add support for generic concepts of CURRENT_{DATE,TIME,TIMESTAMP} (jeremyevans)
* Add a timestamps plugin for automatically creating hooks for create and update timestamps (jeremyevans)
* Add support for serializing to json (derdewey)
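
A minimal sketch of the nested_attributes plugin mentioned above, assuming hypothetical Album/Track models backed by albums/tracks tables:

  class Album < Sequel::Model
    plugin :nested_attributes
    one_to_many :tracks
    nested_attributes :tracks  # adds a tracks_attributes= setter
  end
  album = Album.new(:name => 'X', :tracks_attributes => [{:name => 'Track 1'}])
  album.save  # saves the album and creates the associated track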

=== 3.3.0 (2009-08-03)

* Add an association_proxies plugin that uses proxies for associations (jeremyevans)
* Have the add/remove/remove_all methods take additional arguments and pass them to the internal methods (clivecrous)
* Move convert_tinyint_to_bool method from Sequel to Sequel::MySQL (jeremyevans)
* Model associations now default to associating to classes in the same scope (jeremyevans, nougad) (#274)
* Add Dataset#unlimited, similar to unfiltered and unordered (jeremyevans)
* Make Dataset#from_self take an options hash and respect an :alias option, giving the alias to use (Phrogz) (see the sketch below)
* Make the JDBC adapter accept a :convert_types option to turn off Java type conversion and double performance (jeremyevans)
* Slight increase in ConnectionPool performance (jeremyevans)
* SQL::WindowFunction can now be aliased/casted etc. just like SQL::Function (jeremyevans)
* Model#save no longer attempts to update primary key columns (jeremyevans)
* Sequel will now unescape values provided in connection strings (e.g. ado:///db?host=server%5cinstance) (jeremyevans)
* Significant improvements to the ODBC and ADO adapters in general (jeremyevans)
* The ADO adapter no longer attempts to use database transactions, since they never worked (jeremyevans)
* Much better support for Microsoft SQL Server using the ADO, ODBC, and JDBC adapters (jeremyevans)
* Support rename_column, set_column_null, set_column_type, and add_foreign_key on H2 (jeremyevans)
* Support adding a column with a primary key or unique constraint to an existing table on SQLite (jeremyevans)
* Support altering a column's type, null status, or default on SQLite (jeremyevans)
* Fix renaming a NOT NULL column without a default on MySQL (nougad, jeremyevans) (#273)
* Don't swallow DatabaseConnectionErrors when creating model subclasses (tommy.midttveit)
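
A minimal sketch of the from_self :alias option, assuming a DB connection and a hypothetical items table:

  # SELECT * FROM (SELECT * FROM items LIMIT 5) AS i WHERE (i.price = 10)
  DB[:items].limit(5).from_self(:alias => :i).filter(:i__price => 10)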

=== 3.2.0 (2009-07-02)

* In the STI plugin, don't overwrite the STI field if it is already set (jeremyevans)
* Add support for Common Table Expressions, which use the SQL WITH clause (jeremyevans)
* Add SQL::WindowFunction, expand virtual row blocks to support them and other constructions (jeremyevans)
* Add Model#autoincrementing_primary_key, for when the autoincrementing key isn't the same as the primary key (jeremyevans)
* Add Dataset#ungraphed, to remove the splitting of results into subhashes or associated records (jeremyevans)
* Support :opclass option for PostgreSQL indexes (tmi, jeremyevans)
* Make parsing of server's version more reliable for PostgreSQL (jeremyevans)
* Add Dataset#qualify, which is qualify_to with a first_source default (jeremyevans) (see the sketch below)
* Add :ruby_default to parsed schema information, which contains a ruby object representing the database default (jeremyevans)
* Fix changing a column's name, type, or null status on MySQL when column has a string default (jeremyevans)
* Remove Dataset#to_table_reference protected method, no longer used (jeremyevans)
* Fix thread-safety issue in stored procedure code (jeremyevans)
* Remove SavepointTransactions module, integrate into Database code (jeremyevans)
* Add supports_distinct_on? method (jeremyevans)
* Remove SQLStandardDateFormat, replace with requires_sql_standard_datetimes? method (jeremyevans)
* Remove UnsupportedIsTrue module, replace with supports_is_true? method (jeremyevans)
* Remove UnsupportedIntersectExcept(All)? modules, replace with methods (jeremyevans)
* Make Database#indexes work on PostgreSQL versions prior to 8.3 (tested on 7.4) (jeremyevans)
* Fix bin/sequel using a YAML file on 1.9 (jeremyevans)
* Allow connection pool options to be specified in connection string (jeremyevans)
* Handle :user and :password options in the JDBC adapter (jeremyevans)
* Fix warnings when using the ODBC adapter (jeremyevans)
* Add opening_databases.rdoc file for describing how to connect to a database (mwlang, jeremyevans)
* Significantly increase JDBC select performance (jeremyevans)
* Slightly increase SQLite select performance using the native adapter (jeremyevans)
* Majorly increase MySQL select performance using the native adapter (jeremyevans)
* Pass through unsigned/elements/size and other options when altering columns on MySQL (tmm1)
* Allow on_duplicate_key_update to affect Dataset#insert on MySQL (tmm1)
* Support using a given table and column to store schema versions, using new Migrator.run method (bougyman, jeremyevans)
* Fix foreign key table constraints on MySQL (jeremyevans)
* Remove Dataset#table_exists?, use Database#table_exists? instead (jeremyevans)
* Fix graphing of datasets with dataset sources (jeremyevans) (#271)
* Raise a Sequel::Error if Sequel.connect is called with something other than a Hash or String (jeremyevans) (#272)
* Add -N option to bin/sequel to not test the database connection (jeremyevans)
* Make Model.grep call Dataset#grep instead of Enumerable#grep (jeremyevans)
* Support the use of Regexp as first argument to StringExpression.like (jeremyevans)
* Fix Database#indexes on PostgreSQL when the schema used is a symbol (jeremyevans)
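
A minimal sketch of Dataset#qualify, assuming a DB connection and a hypothetical items table:

  DB[:items].filter(:id => 1).qualify
  # roughly: SELECT items.* FROM items WHERE (items.id = 1)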

=== 3.1.0 (2009-06-04)

* Require the classes match to consider an association a reciprocal (jeremyevans) (#270)
* Make Migrator work correctly with file names like 001_873465873465873465_some_name.rb (jeremyevans) (#267)
* Add Dataset#qualify_to and #qualify_to_first_source, for qualifying unqualified identifiers in the dataset (jeremyevans)
* Allow the use of #sql_subscript on most SQL::* objects, and support non-integer subscript values (jeremyevans)
* Add reflection.rdoc file which explains and gives examples of many of Sequel's reflection methods (jeremyevans)
* Add many_through_many plugin, allowing you to construct an association to multiple objects through multiple join tables (jeremyevans) (see the sketch below)
* Add the :cartesian_product_number option to associations, for specifying if they can cause a cartesian product (jeremyevans)
* Make :eager_graph association option work correctly when lazily loading many_to_many associations (jeremyevans)
* Make eager_unique_table_alias consider joined tables as well as tables in the FROM clause (jeremyevans)
* Make add_graph_aliases work correctly even if set_graph_aliases hasn't been used (jeremyevans)
* Fix using :conditions that are a placeholder string in an association (e.g. :conditions=>['a = ?', 42]) (jeremyevans)
* On MySQL, make Dataset#insert_ignore affect #insert as well as #multi_insert and #import (jeremyevans, tmm1)
* Add -t option to bin/sequel to output the full backtrace if an exception is raised (jeremyevans)
* Make schema_dumper extension ignore errors with indexes unless it is dumping in the database-specific type format (jeremyevans)
* Don't dump partial indexes in the MySQL adapter (jeremyevans)
* Add :ignore_index_errors option to Database#create_table and :ignore_errors option to Database#add_index (jeremyevans)
* Make graphing a complex dataset work correctly (jeremyevans)
* Fix MySQL command out of sync errors, disconnect from database if they occur (jeremyevans)
* In the schema_dumper extension, do a much better job of parsing defaults from the database (jeremyevans)
* On PostgreSQL, assume the public schema if one is not given and there is no default in Database#tables (jeremyevans)
* Ignore a :default value if creating a String :text=>true or File column on MySQL, since it doesn't support defaults on text/blob columns (jeremyevans)
* On PostgreSQL, do not raise an error when attempting to reset the primary key sequence for a table without a primary key (jeremyevans)
* Allow plugins to have a configure method that is called on every attempt to load them (jeremyevans)
* Attempting to load an already loaded plugin no longer calls the plugin's apply method (jeremyevans)
* Make plugin's plugin_opts methods return an array of arguments if multiple arguments were given, instead of just the first argument (jeremyevans)
* Keep track of loaded plugins at Model.plugins, allows plugins to depend on other plugins (jeremyevans)
* Make Dataset#insert on PostgreSQL work with static SQL (jeremyevans)
* Add lazy_attributes plugin, for creating attributes that can be lazily loaded from the database (jeremyevans)
* Add tactical_eager_loading plugin, similar to DataMapper's strategic eager loading (jeremyevans)
* Don't raise an error when loading a plugin with DatasetMethods where none of the methods are public (jeremyevans)
* Add identity_map plugin, for creating temporary thread-local identity maps with some caching (jeremyevans)
* Support savepoints when using MySQL and SQLite (jeremyevans)
* Add -C option to bin/sequel that copies one database to another (jeremyevans)
* In the schema_dumper extension, don't include defaults that contain literal strings unless the DBs are the same (jeremyevans)
* Only include valid non-partial indexes of simple column references in the PostgreSQL adapter (jeremyevans)
* Add -h option to bin/sequel for outputting the usage, alias for -? (jeremyevans)
* Add -d and -D options to bin/sequel for dumping schema migrations (jeremyevans)
* Support eager graphing for model tables that lack primary keys (jeremyevans)
* Add Model.create_table? to the schema plugin, similar to Database#create_table? (jeremyevans)
* Add Database#create_table?, which creates the table if it doesn't already exist (jeremyevans)
* Handle ordered and limited datasets correctly when using UNION, INTERSECT, or EXCEPT (jeremyevans)
* Fix unlikely threading bug with class level validations (jeremyevans)
* Make the schema_dumper extension dump tables in alphabetical order in migrations (jeremyevans)
* Add Sequel.extension method for loading extensions, so you don't have to use require (jeremyevans)
* Allow bin/sequel to respect multiple -L options instead of ignoring all but the last one (jeremyevans)
* Add :command_timeout and :provider options to ADO adapter (hgimenez)
* Fix exception messages when Sequel.string_to_* fail (jeremyevans)
* Fix String :type=>:text generic type in the Firebird adapter (wishdev)
* Add Sequel.amalgalite adapter method (jeremyevans)
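
A minimal sketch of the many_through_many plugin, assuming hypothetical artists/tags tables joined through albums_artists and albums_tags:

  class Artist < Sequel::Model
    plugin :many_through_many
    many_through_many :tags, [[:albums_artists, :artist_id, :album_id],
                              [:albums_tags, :album_id, :tag_id]]
  end
  Artist.first.tags  # tags across all of the artist's albums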

=== 3.0.0 (2009-05-04)

* Remove dead threads from connection pool if the pool is full and a connection is requested (jeremyevans)
* Add autoincrementing primary key support in the Oracle adapter, using a sequence and trigger (jeremyevans, Mike Golod)
* Make Model#save use the same server it uses for saving as for retrieving the saved record (jeremyevans)
* Add Database#database_type method, for identifying which type of database the object is connecting to (jeremyevans)
* Add ability to reset primary key sequences in the PostgreSQL adapter (jeremyevans)
* Fix parsing of non-simple sequence names (that contain uppercase, spaces, etc.) in the PostgreSQL adapter (jeremyevans)
* Support dumping indexes in the schema_dumper extension (jeremyevans)
* Add index parsing to PostgreSQL, MySQL, SQLite, and JDBC adapters (jeremyevans)
* Correctly quote SQL Array references, and handle qualified identifiers with them (e.g. :table__column.sql_subscript(1)) (jeremyevans)
* Allow dropping an index with a name different than the default name (jeremyevans)
* Allow Dataset#from to remove existing FROM tables when called without an argument, instead of raising an error later (jeremyevans)
* Fix string quoting on Oracle so it doesn't double backslashes (jeremyevans)
* Alias the count function call in Dataset#count, fixes use on MSSQL (akitaonrails, jeremyevans)
* Allow QualifiedIdentifiers to be qualified, to allow :column.qualify(:table).qualify(:schema) (jeremyevans) (see the sketches below)
* Allow :db_type=>'mssql' option to be respected when using the DBI adapter (akitaonrails)
* Add schema_dumper extension, for dumping schema of tables (jeremyevans)
* Allow generic database types specified as ruby types to take options (jeremyevans)
* Change Dataset#exclude to invert given hash argument, not negate it (jeremyevans)
* Make Dataset#filter and related methods treat multiple arguments more intuitively (jeremyevans)
* Fix full text searching with multiple search terms on MySQL (jeremyevans)
* Fix altering a column name, type, default, or NULL/NOT NULL status on MySQL (jeremyevans)
* Fix index type syntax on MySQL (jeremyevans)
* Add temporary table support, via :temp option to Database#create_table (EppO, jeremyevans)
* Add Amalgalite adapter (jeremyevans)
* Remove Sequel::Metaprogramming#metaattr_accessor and metaattr_reader (jeremyevans)
* Remove Dataset#irregular_function_sql (jeremyevans)
* Add Dataset#full_text_sql to the MySQL adapter (dusty)
* Fix schema type parsing of decimal types on MySQL (jeremyevans)
* Make Dataset#quote_identifier work with SQL::Identifiers (jeremyevans)
* Remove methods and features deprecated in 2.12.0 (jeremyevans)
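
Two small sketches of the features noted above, assuming a DB connection (table and column names are hypothetical):

  DB.database_type                         # e.g. :postgres, regardless of the adapter in use
  :price.qualify(:items).qualify(:public)  # multi-level qualification: public.items.price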

=== 2.12.0 (2009-04-03)

* Deprecate Java::JavaSQL::Timestamp#usec (jeremyevans)
* Fix Model.[] optimization introduced in 2.11.0 for databases that don't use LIMIT (jacaetevha)
* Don't use the model association plugin if SEQUEL_NO_ASSOCIATIONS constant or environment variable is defined (jeremyevans)
* Don't require core_sql if SEQUEL_NO_CORE_EXTENSIONS constant or environment variable is defined (jeremyevans)
* Add validation_helpers model plugin, which adds instance level validation support similar to previously standard validations, with a different API (jeremyevans) (see the sketch below)
* Split multi_insert into 2 methods with separate APIs, multi_insert for hashes, import for arrays of columns and values (jeremyevans)
* Deprecate Dataset#transform and Model.serialize, add model serialization plugin (jeremyevans)
* Add multi_insert_update to the MySQL adapter, used for setting specific update behavior when an error occurs when using multi_insert (dusty)
* Add multi_insert_ignore to the MySQL adapter, used for skipping errors on row inserts when using multi_insert (dusty)
* Add Sequel::MySQL.convert_invalid_date_time accessor for dealing with dates like "0000-00-00" and times like "25:00:00" (jeremyevans, epugh)
* Eliminate internal dependence on core_sql extensions (jeremyevans)
* Deprecate Migration and Migrator, require 'sequel/extensions/migration' if you want them (jeremyevans)
* Denamespace Sequel::Error descendants (e.g. use Sequel::Rollback instead of Sequel::Error::Rollback) (jeremyevans)
* Deprecate Error::InvalidTransform, Error::NoExistingFilter, and Error::InvalidStatement (jeremyevans)
* Deprecate Dataset#[] when called without an argument, and Dataset#map when called with an argument and a block (jeremyevans)
* Fix aliasing columns in the JDBC adapter (per.melin) (#263)
* Make Database#rename_table remove the cached schema entry for the table (jeremyevans)
* Make Database schema sql methods private (jeremyevans)
* Deprecate Database #multi_threaded? and #logger (jeremyevans)
* Make Dataset#where always affect the WHERE clause (jeremyevans)
* Deprecate Object#blank? and related extensions, require 'sequel/extensions/blank' to get them back (jeremyevans)
* Move lib/sequel_core into lib/sequel and lib/sequel_model into lib/sequel/model (jeremyevans)
* Remove Sequel::Schema::SQL module, move methods into Sequel::Database (jeremyevans)
* Support creating and dropping schema qualified views (jeremyevans)
* Fix saving a newly inserted record in an after_create or after_save hook (jeremyevans)
* Deprecate Dataset#print and PrettyTable, require 'sequel/extensions/pretty_table' if you want them (jeremyevans)
* Deprecate Database#query and Dataset#query, require 'sequel/extensions/query' if you want them (jeremyevans)
* Deprecate Dataset#paginate and #each_page, require 'sequel/extensions/pagination' if you want them (jeremyevans)
* Fix ~{:bool_col=>true} and related inversions of boolean values (jeremyevans)
* Add disable_insert_returning method to PostgreSQL datasets, so they fallback to just using INSERT (jeremyevans)
* Don't use savepoints by default on PostgreSQL, use the :savepoint option to Database#transaction to use a savepoint (jeremyevans)
* Deprecate Database#transaction accepting a server symbol argument, use an options hash with the :server option (jeremyevans)
* Add Model.use_transactions for setting whether models should use transactions when destroying/saving records (jeremyevans, mjwillson)
* Deprecate Model::Validation::Errors, use Model::Errors (jeremyevans)
* Deprecate string inflection methods, require 'sequel/extensions/inflector' if you use them (jeremyevans)
* Deprecate Model validation class methods, override Model#validate instead or Model.plugin validation_class_methods (jeremyevans)
* Deprecate Model schema methods, use Model.plugin :schema (jeremyevans)
* Deprecate Model hook class methods, use instance methods instead or Model.plugin :hook_class_methods (jeremyevans)
* Deprecate Model.set_sti_key, use Model.plugin :single_table_inheritance (jeremyevans)
* Deprecate Model.set_cache, use Model.plugin :caching (jeremyevans)
* Move most model instance methods into Model::InstanceMethods, for easier overriding of instance methods for all models (jeremyevans)
* Move most model class methods into Model::ClassMethods, for easier overriding of class methods for all models (jeremyevans)
* Deprecate String#to_date, #to_datetime, #to_time, and #to_sequel_time, use require 'sequel/extensions/string_date_time' if you want them (jeremyevans)
* Deprecate Array#extract_options! and Object#is_one_of? (jeremyevans)
* Deprecate Object#meta_def, #meta_eval, and #metaclass (jeremyevans)
* Deprecate Module#class_def, #class_attr_overridable, #class_attr_reader, #metaalias, #metaattr_reader, and #metaattr_accessor (jeremyevans)
* Speed up the calling of most column accessor methods, and reduce memory overhead of creating them (jeremyevans)
* Deprecate Model#set_restricted using Model#[] if no setter method exists, a symbol is used, and the columns are not set (jeremyevans)
* Deprecate Model#set_with_params and #update_with_params (jeremyevans)
* Deprecate Model#save!, use Model#save(:validate=>false) (jeremyevans)
* Deprecate Model#dataset (jeremyevans)
* Deprecate Model.is and Model.is_a, use Model.plugin for plugins (jeremyevans)
* Deprecate Model.str_columns, Model#str_columns, #set_values, #update_values (jeremyevans)
* Deprecate Model.delete_all, .destroy_all, .size, and .uniq (jeremyevans)
* Copy all current dataset options when calling Model.db= (jeremyevans)
* Deprecate Model.belongs_to, Model.has_many, and Model.has_and_belongs_to_many (jeremyevans)
* Remove SQL::SpecificExpression, have subclasses inherit from SQL::Expression instead (jeremyevans)
* Deprecate SQL::CastMethods#cast_as (jeremyevans)
* Deprecate calling Database#schema without a table argument (jeremyevans)
* Remove cached version of @db_schema in model instances to reduce memory and marshalling overhead (tmm1)
* Deprecate Dataset#quote_column_ref and Dataset#symbol_to_column_ref (jeremyevans)
* Deprecate Dataset#size and Dataset#uniq (jeremyevans)
* Deprecate passing options to Dataset#each, #all, #single_record, #single_value, #sql, #select_sql, #update, #update_sql, #delete, #delete_sql, and #exists (jeremyevans)
* Deprecate Dataset#[Integer] (jeremyevans)
* Deprecate Dataset#create_view and Dataset#create_or_replace_view (jeremyevans)
* Model datasets now have a model accessor that returns the related model (jeremyevans)
* Model datasets no longer have :models and :polymorphic_key options (jeremyevans)
* Deprecate Dataset.dataset_classes, Dataset#model_classes, Dataset#polymorphic_key, and Dataset#set_model (jeremyevans)
* Allow Database#get and Database#select to take a block (jeremyevans)
* Deprecate Database#>> (jeremyevans)
* Deprecate String#to_blob and Sequel::SQL::Blob#to_blob (jeremyevans)
* Deprecate use of Symbol#| for SQL array subscripts, add Symbol#sql_subscript (jeremyevans)
* Deprecate Symbol#to_column_ref (jeremyevans)
* Deprecate String#expr (jeremyevans)
* Deprecate Array#to_sql, String#to_sql, and String#split_sql (jeremyevans)
* Deprecate passing an array to Database#<< (jeremyevans)
* Deprecate Range#interval (jeremyevans)
* Deprecate Enumerable#send_each (jeremyevans)
* Deprecate Hash#key on ruby 1.8, change some SQLite adapter constants (jeremyevans)
* Deprecate Sequel.open, Sequel.use_parse_tree=?, and the upcase_identifier methods (jeremyevans)
* Deprecate virtual row blocks without block arguments, unless Sequel.virtual_row_instance_eval is enabled (jeremyevans)
* Support schema parsing in the Oracle adapter (jacaetevha)
* Allow virtual row blocks to be instance_evaled, add Sequel.virtual_row_instance_eval= (jeremyevans)
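
A minimal sketch of the validation_helpers plugin and the new import API, assuming a hypothetical items table with a name column:

  class Item < Sequel::Model
    plugin :validation_helpers
    def validate
      super
      validates_presence :name  # instance-level validation
    end
  end
  DB[:items].import([:name], [['a'], ['b']])  # columns array plus an array of value rows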

=== 2.11.0 (2009-03-02)

* Optimize Model.[] by using static sql when possible, for a 30-40% speed increase (jeremyevans)
* Add Dataset#with_sql, which returns a clone of the dataset with static SQL (jeremyevans) (see the sketch below)
* Refactor Dataset#literal so it doesn't need to be overridden in subadapters, for a 20-25% performance increase (jeremyevans)
* Remove SQL::IrregularFunction, no longer used internally (jeremyevans)
* Allow String#lit to take arguments and return a SQL::PlaceholderLiteralString (jeremyevans)
* Add Model#set_associated_object, used by the many_to_one setter method, for easier overriding (jeremyevans)
* Allow use of database independent types when casting (jeremyevans)
* Give association datasets knowledge of the model object that created them and the related association reflection (jeremyevans)
* Make Dataset#select, #select_more, #order, #order_more, and #get take a block that yields a SQL::VirtualRow, similar to #filter (jeremyevans)
* Fix stored procedures in MySQL adapter when multiple arguments are used (clivecrous)
* Add :conditions association option, for easier filtering of associated objects (jeremyevans)
* Add :clone association option, for making clones of existing associations (jeremyevans)
* Handle typecasting invalid date strings (and possibly other types) correctly (jeremyevans)
* Add :compress=>false option to MySQL adapter to turn off compression of client-server connection (tmm1)
* Set SQL_AUTO_IS_NULL=0 on MySQL connections, disable with :auto_is_null=>false (tmm1)
* Add :timeout option to MySQL adapter, default to 30 days (tmm1)
* Set MySQL encoding using Mysql#options so it works across reconnects (tmm1)
* Fully support blobs on SQLite (jeremyevans)
* Add String#to_sequel_blob, alias String#to_blob to that (jeremyevans)
* Fix default index names when a non-String or Symbol column is used (jeremyevans)
* Fix some ruby -w warnings (jeremyevans) (#259)
* Fix issues with default column values, table names, and quoting in the rename_column and drop_column support in shared SQLite adapter (jeremyevans)
* Add rename_column support to SQLite shared adapter (jmhodges)
* Add validates_inclusion_of validation (jdunphy)
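
A minimal sketch of Dataset#with_sql and the block form of Dataset#select, assuming a DB connection and a hypothetical items table:

  DB[:items].with_sql("SELECT * FROM items WHERE id = 1").all  # clone with static SQL
  DB[:items].select{|o| o.max(:price)}  # the block yields a SQL::VirtualRow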

=== 2.10.0 (2009-02-03)

* Don't use a default schema any longer in the shared PostgreSQL adapter (jeremyevans)
* Make Dataset#quote_identifier return LiteralStrings as-is (jeremyevans)
* Support symbol keys and unnested hashes in the sequel command line tool's yaml config support (jeremyevans)
* Add schema parsing support to the JDBC adapter (jeremyevans)
* Add per-database type translation support for schema changes, translating ruby classes to database specific types (jeremyevans)
* Add Sequel::DatabaseConnectionError, for indicating that Sequel wasn't able to connect to the database (jeremyevans)
* Add validates_not_string validation, useful in conjunction with raise_on_typecast_failure = false (jeremyevans)
* Don't modify Model#new? and Model#changed_columns when saving a record until after the after hooks have been run (tamas, jeremyevans)
* Database#quote_identifiers= now affects future schema modification statements, even if it is not used before one of the schema modification statements (jeremyevans)
* Fix literalization of blobs when using the PostgreSQL JDBC subadapter (jeremyevans)
* Fix literalization of date and time types when using the MySQL JDBC subadapter (jeremyevans)
* Convert some Java specific types to ruby types on output in the JDBC adapter (jeremyevans)
* Add Database#tables method to JDBC adapter (jeremyevans)
* Add H2 JDBC subadapter (logan_barnett, david_koontz, james_britt, jeremyevans)
* Add identifier_output_method, used for converting identifiers coming out of the database, replacing the lowercase support on some databases (jeremyevans)
* Add identifier_input_method, used for converting identifiers going into the database, replacing upcase_identifiers (jeremyevans) (see the sketch below)
* Add :allow_missing validation option, useful if the database provides a good default (jeremyevans)
* Fix literalization of SQL::Blobs in DataObjects and JDBC adapter's postgresql subadapters when ruby 1.9 is used (jeremyevans)
* When using standard strings in the postgres adapter with the postgres-pr driver, use custom string escaping to prevent errors (jeremyevans)
* Before hooks now run in reverse order of being added, so later ones are run first (tamas)
* Add Firebird adapter, requires Firebird ruby driver located at http://github.com/wishdev/fb (wishdev)
* Don't clobber the following Symbol instance methods when using ruby 1.9: [], <, <=, >, >= (jeremyevans)
* Quote the table name and the index for PostgreSQL index creation (jeremyevans)
* Add DataObjects adapter, supporting PostgreSQL, MySQL, and SQLite (jeremyevans)
* Add ability for Database#create_table to take options, support specifying MySQL engine, charset, and collate per table (pusewicz, jeremyevans)
* Add Model.add_hook_type class method, for adding your own hook types, mostly for use by plugin authors (pkondzior, jeremyevans)
* Add Sequel.version for getting the internal version of Sequel (pusewicz, jeremyevans)
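
A minimal sketch of the new identifier conversion settings, assuming a DB connection:

  DB.identifier_input_method = :upcase     # identifiers sent to the database are upcased
  DB.identifier_output_method = :downcase  # identifiers read back are downcased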

=== 2.9.0 (2009-01-12)

* Add -L option to sequel command line tool to load all .rb files in the given directory (pkondzior, jeremyevans)
* Fix Dataset#destroy for model datasets that can't handle nested queries (jeremyevans)
* Improve the error messages in parts of Sequel::Model (jeremyevans, pusewicz)
* Much better support for Dataset#{union,except,intersect}, allowing chaining and respecting order (jeremyevans) (see the sketch below)
* Default to logging only WARNING level messages when connecting to PostgreSQL (jeremyevans)
* Fix add_foreign_key for MySQL (jeremyevans, aphyr)
* Correctly literalize BigDecimal NaN and (+-)Infinity values (jeremyevans) (#256)
* Make Sequel raise an Error if you attempt to subclass Sequel::Model before setting up a database connection (jeremyevans)
* Add Sequel::BeforeHookFailed exception to be raised when a record fails because a before hook fails (bougyman)
* Add Sequel::ValidationFailed exception to be raised when a record fails because a validation fails (bougyman)
* Make Database#schema raise an error if given a table that doesn't exist (jeremyevans) (#255)
* Make Model#inspect call Model#inspect_values private method for easier overloading (bougyman)
* Add methods to create and drop functions, triggers, and procedural languages on PostgreSQL (jeremyevans)
* Fix Dataset#count when using UNION, EXCEPT, or INTERSECT (jeremyevans)
* Make SQLite keep table's primary key information when dropping columns (jmhodges)
* Support dropping indices on SQLite (jmhodges)
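
A minimal sketch of the improved compound dataset support, assuming a DB connection and hypothetical a/b/c tables with compatible columns:

  DB[:a].union(DB[:b]).except(DB[:c])  # compounds can now be chained, and order is respected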

=== 2.8.0 (2008-12-05)

* Support drop column operations inside a transaction on sqlite (jeremyevans)
* Support literal strings with placeholders and subselects in prepared statements (jeremyevans)
* Have the connection pool remove disconnected connections when the adapter supports it (jeremyevans)
* Make Dataset#exists return a LiteralString (jeremyevans)
* Support multiple SQL statements in one query in the MySQL adapter (jeremyevans)
* Add stored procedure support for the MySQL and JDBC adapters (jeremyevans, krsgoss) (#252)
* Support options when altering a column's type (for changing enums, varchar size, etc.) (jeremyevans)
* Support AliasedExpressions in tables when using implicitly qualified arguments in joins (jeremyevans)
* Support Dataset#except on Oracle (jeremyevans)
* Raise errors when EXCEPT/INTERSECT is used when not supported (jeremyevans)
* Fix ordering of UNION, INTERSECT, and EXCEPT statements (jeremyevans) (#253)
* Support aliasing subselects in the Oracle adapter (jeremyevans)
* Add a subadapter for the Progress RDBMS to the ODBC adapter (:db_type=>'progress') (groveriffic) (#251)
* Make MySQL and Oracle adapters raise an Error if asked to do a SELECT DISTINCT ON (jeremyevans)
* Set standard_conforming_strings = ON by default when using PostgreSQL, turn off with Sequel::Postgres.force_standard_strings = false (jeremyevans) (#247)
* Fix Database#rename_table when using PostgreSQL (jeremyevans) (#248)
* Whether to upcase or quote identifiers can now be set separately, via Sequel.upcase_identifiers= or the :upcase_identifiers database option (jeremyevans)
* Support transactions in the ODBC adapter (dlee)
* Support multi_insert_sql and unicode string literals in MSSQL shared adapter (dlee)
* Make PostgreSQL use the default schema if parsing the schema for all tables at once, even if :schema=>nil option is used (jeremyevans)
* Make MySQL adapter not raise an error when giving an SQL::Identifier object to the schema modification methods such as create_table (jeremyevans)
* The keys of the hash returned by Database#schema without a table name are now quoted strings instead of symbols (jeremyevans)
* Make Database#schema handle implicit schemas on all databases and multiple identifier object types (jeremyevans)
* Remove Sequel.odbc_mssql method (jeremyevans) (#249)
* More optimization of Model#initialize (jeremyevans)
* Treat interval as its own type, not an integer type (jeremyevans)
* Allow use of implicitly qualified symbol as argument to Symbol#qualify (:a.qualify(:b__c)=>b.c.a), fixes model associations in different schemas (jeremyevans) (#246)

=== 2.7.1 (2008-11-04)

* Fix PostgreSQL Date optimization so that it doesn't reject dates like 11/03/2008 (jeremyevans)

=== 2.7.0 (2008-11-03)

* Transform AssociationReflection from a single class to a class hierarchy (jeremyevans)
* Optimize Date object creation in PostgreSQL adapter (jeremyevans)
* Allow easier creation of custom association types, though support for them may still be suboptimal (jeremyevans)
* Add :eager_grapher option to associations, which the user can use to override the default eager_graph code (jeremyevans)
* Associations are now inherited when a model class is subclassed (jeremyevans)
* Instance methods added by associations are now added to an anonymous module the class includes, allowing you to override them and use super (jeremyevans)
* Add #add_graph_aliases (select_more for graphs), and allow use of arbitrary expressions when graphing (jeremyevans)
* Fix a corner case where the wrong table name is used in eager_graph (jeremyevans)
* Make Dataset#join_table take an option hash instead of a table_alias argument, add support for :implicit_qualifier option (jeremyevans) (see the sketch below)
* Add :left_primary_key and :right_primary_key options to many_to_many associations (jeremyevans)
* Add :primary_key option to one_to_many and many_to_one associations (jeremyevans)
* Make after_load association callbacks take effect when eager loading via eager (jeremyevans)
* Add a :uniq association option to many_to_many associations (jeremyevans)
* Support using any expression as the argument to Symbol#like (jeremyevans)
* Much better support for multiple schemas in PostgreSQL (jeremyevans) (#243)
* The first argument to Model#initialize can no longer be nil, it must be a hash if it is given (jeremyevans)
* Remove Sequel::Model.lazy_load_schema= setting (jeremyevans)
* Lazily load model instance options such as raise_on_save_failure, for better performance (jeremyevans)
* Make Model::Validation::Errors more Rails-compatible (jeremyevans)
* Refactor model hooks for performance (jeremyevans)
* Major performance enhancement when fetching rows using PostgreSQL (jeremyevans)
* Don't typecast serialized columns in models (jeremyevans)
* Add Array#sql_array to handle ruby arrays of all two element pairs as SQL arrays (jeremyevans) (#245)
* Add ComplexExpression#== and #eql?, for checking equality (rubymage) (#244)
* Allow full text search on PostgreSQL to include rows where a search column is NULL (jeremyevans)
* PostgreSQL full text search queries with multiple columns are joined with space to prevent joining border words to one (michalbugno)
* Don't modify a dataset's cached column information if calling #each with an option that modifies the columns (jeremyevans)
* The PostgreSQL adapter will now generally default to using a unix socket in /tmp if no host is specified, instead of a tcp socket to localhost (jeremyevans)
* Make Dataset#sql call Dataset#select_sql instead of being an alias, to allow for easier subclassing (jeremyevans)
* Split Oracle adapter into shared and unshared parts, so Oracle is better supported when using JDBC (jeremyevans)
* Fix automatic loading of Oracle driver when using JDBC adapter (bburton333) (#242)
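
A minimal sketch of the join_table options hash, assuming a DB connection and hypothetical albums/artists tables:

  # INNER JOIN artists AS a ON (a.id = albums.artist_id)
  DB[:albums].join_table(:inner, :artists, {:id => :artist_id}, :table_alias => :a)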

=== 2.6.0 (2008-10-11)

* Make the sqlite adapter respect the Sequel.datetime_class setting, for timestamp and datetime types (jeremyevans)
* Enhance the CASE statement support to include an optional expression (jarredholman)
* Default to using the simple language if no language is specified for a full text index on PostgreSQL (michalbugno)
* Add Model.raise_on_typecast_failure=, which makes it possible to not raise errors on invalid typecasts (michalbugno)
* Add schema.rdoc file, which provides a brief description of the various parts of Sequel related to schema modification (jeremyevans)
* Fix constraint generation when not using a proc or interpolated string (jeremyevans)
* Make eager_graph respect associations' :order options (use :order_eager_graph=>false to disable) (jeremyevans)
* Cache negative lookup when eagerly loading many_to_one associations where no objects have an associated object (jeremyevans)
* Allow string keys to be used when using Dataset#multi_insert (jeremyevans)
* Fix join_table when doing the first join for a dataset where the first source is a dataset when using unqualified columns (jeremyevans)
* Fix a few corner cases in eager_graph (jeremyevans)
* Support transactions on MSSQL (jeremyevans)
* Use string literals in AS clauses on SQLite (jeremyevans) (#241)
* AlterTableGenerator#set_column_allow_null was added to SET/DROP NOT NULL for columns (divoxx)
* Database#tables now works for MySQL databases using the JDBC adapter (jeremyevans)
* Database#drop_view can now take multiple arguments to drop multiple views at once (jeremyevans)
* Schema modification methods (e.g. drop_table, create_table!) now remove the cached schema entry (jeremyevans)
* Models can now determine their primary keys by looking at the schema (jeremyevans)
* No longer include :numeric_precision and :max_chars entries in the schema column hashes, use the :db_type entry instead (jeremyevans)
* Make schema parsing on PostgreSQL handle implicit schemas (e.g. schema(:schema__table)), so it works with models for tables outside the public schema (jeremyevans)
* Significantly speed up schema parsing on MySQL (jeremyevans)
* Include primary key information when parsing the schema (jeremyevans)
* Fix schema generation of composite foreign keys on MySQL (clivecrous, jeremyevans)

=== 2.5.0 (2008-09-03)

* Add Dataset #set_defaults and #set_overrides, used for scoping the values used in insert/update statements (jeremyevans)
* Allow Models to use the RETURNING clause when inserting records on PostgreSQL (jeremyevans)
* Raise Sequel::DatabaseError instead of generic Sequel::Error for database errors, don't swallow tracebacks (jeremyevans)
* Use INSERT ... RETURNING ... with PostgreSQL 8.2 and higher (jeremyevans)
* Make insert_sql, delete_sql, and update_sql respect the :sql option (jeremyevans)
* Default to converting 2 digit years, use Sequel.convert_two_digit_years = false to get back the old behavior (jeremyevans)
* Make the PostgreSQL adapter with the pg driver use async_exec, so it doesn't block the entire interpreter (jeremyevans)
* Make the schema generators support composite primary and foreign keys and unique constraints (jarredholman)
* Work with the 2008.08.17 version of the pg gem (erikh)
* Disallow abuse of SQL function syntax for types (use :type=>:varchar, :size=>255 instead of :type=>:varchar[255]) (jeremyevans)
* Quote index names when creating or dropping indexes (jeremyevans, SanityInAnarchy)
* Don't have column accessor methods override plugin instance methods (jeremyevans)
* Allow validation of multiple attributes at once, with built in support for uniqueness checking of multiple columns (jeremyevans)
* In PostgreSQL adapter, fix inserting a row with a primary key value inside a transaction (jeremyevans)
* Allow before_save and before_update to affect the columns saved by save_changes (jeremyevans)
* Make Dataset#single_value work when graphing, which fixes count and paginate on graphed datasets (jeremyevans)

=== 2.4.0 (2008-08-06)

* Handle Java::JavaSql::Date type in the JDBC adapter (jeremyevans)
* Add support for read-only slave/writable master databases and database sharding (jeremyevans)
* Remove InvalidExpression, InvalidFilter, InvalidJoinType, and WorkerStop exceptions (jeremyevans)
* Add prepared statement/bound variable support (jeremyevans) (see the sketch below)
* Fix anonymous column names in the ADO adapter (nusco)
* Remove odbc_mssql adapter, use :db_type=>'mssql' option instead (jeremyevans)
* Split MSSQL specific syntax into separate file, usable by ADO and ODBC adapters (nusco, jeremyevans)
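
A minimal sketch of the prepared statement/bound variable support added in 2.4.0, assuming a DB connection and a hypothetical items table:

  ds = DB[:items].filter(:name => :$n)      # :$n marks a bound variable
  ps = ds.prepare(:select, :items_by_name)  # create a named prepared statement
  ps.call(:n => 'Widget')                   # execute it with a bound value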

=== 2.3.0 (2008-07-25)

* Enable almost full support for MySQL using JDBC (jeremyevans)
* Fix ODBC adapter's conversion of ::ODBC::Time values (Michael Xavier)
* Enable full support for SQLite-JDBC using the JDBC adapter (jeremyevans)
* Minor changes to allow for full Ruby 1.9 compatibility (jeremyevans)
* Make Database#disconnect work for the ADO adapter (spicyj)
* Don't raise an exception in the ADO adapter if the dataset contains no records (nusco)
* Enable almost full support of PostgreSQL-JDBC using the JDBC adapter (jeremyevans)
* Remove Sequel::Worker (jeremyevans)
* Make PostgreSQL adapter not raise an error when inserting records into a table without a primary key (jeremyevans)
* Make Database.uri_to_options a private class method (jeremyevans)
* Make JDBC load drivers automatically for PostgreSQL, MySQL, SQLite, Oracle, and MSSQL (jeremyevans)
* Make Oracle adapter work with a nonstandard Oracle database port (pavel.lukin)
* Typecast '' to nil by default for non-string non-blob columns, add typecast_empty_string_to_nil= model class and instance methods (jeremyevans)
* Use a simpler select in Dataset#empty?, fixes use with MySQL (jeremyevans)
* Add integration test suite, testing sequel against a real database, with nothing mocked (jeremyevans)
* Make validates_length_of default tag depend on presence of options passed to it (jeremyevans)
* Combine the directory structure for sequel_model and sequel_core, now there is going to be only one gem named sequel (jeremyevans)

=== 2.2.0 (2008-07-05)

* Add :extend association option, extending the dataset with module(s) (jeremyevans)
* Add :after_load association callback option, called after associated objects have been loaded from the database (jeremyevans)
* Make validation methods support a :tag option, to work correctly with source reloading (jeremyevans)
* Add :before_add, :after_add, :before_remove, :after_remove association callback options (jeremyevans) (see the sketch below)
* Break many_to_one association setter method in two parts, for easier overriding (jeremyevans)
* Model.validates_presence_of now considers false as present instead of absent (jeremyevans)
* Add Model.raise_on_save_failure, raising errors on save failure instead of return false (now nil), default to true (jeremyevans)
* Add :eager_loader association option, to specify code to be run when eager loading (jeremyevans)
* Make :many_to_one associations support :dataset, :order, :limit association options, as well as block arguments (jeremyevans)
* Add :dataset association option, which overrides the default base dataset to use (jeremyevans)
* Add :eager_graph association option, works just like :eager except it uses #eager_graph (jeremyevans)
* Add :graph_join_table_join_type association option (jeremyevans)
* Add :graph_only_conditions and :graph_join_table_only_conditions association options (jeremyevans)
* Add :graph_block and :graph_join_table_block association options (jeremyevans)
* Set the model's dataset's columns in addition to the model's columns when loading the schema for a model (jeremyevans)
* Make caching work correctly with subclasses (jeremyevans)
* Add the Model.to_hash dataset method (jeremyevans)
* Filter blocks now yield a SQL::VirtualRow argument, which is useful if another library defines operator methods on Symbol (jeremyevans)
* Add Symbol#identifier method, to make x__a be treated as "x__a" instead of "x"."a" (jeremyevans)
* Dataset#update no longer takes a block, please use a hash argument with the expression syntax instead (jeremyevans)
* ParseTree support has been removed from Sequel (jeremyevans)
* Database#drop_column is now supported in the SQLite adapter (abhay)
* Tinyint columns can now be considered integers instead of booleans by setting Sequel.convert_tinyint_to_bool = false (samsouder)
* Allow the use of URL parameters in connection strings (jeremyevans)
* Ignore any previously selected columns when using Dataset#graph for the first time (jeremyevans)
* Dataset#graph now accepts a block which is passed to join_table (jeremyevans)
* Make Dataset#columns ignore any filtering, ordering, and distinct clauses (jeremyevans)
* Use the safer connection-specific string escaping methods for PostgreSQL (jeremyevans)
* Database#transaction now yields a connection when using the Postgres adapter, just like it does for other adapters (jeremyevans)
* Dataset#count now works for a limited dataset (divoxx)
* Database#add_index is now supported in the SQLite adapter (abhay)
* Sequel's MySQL adapter should no longer conflict with ActiveRecord's use of MySQL (careo)
* Treat Hash as expression instead of column alias when used in DISTINCT, ORDER BY, and GROUP BY clauses (jeremyevans)
* PostgreSQL bytea fields are now fully supported (dlee)
* For PostgreSQL, don't raise an error when assigning a value to a SERIAL PRIMARY KEY field when inserting records (jeremyevans)
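
A minimal sketch of the association callback options added in 2.2.0, assuming a hypothetical Album model with a tracks association:

  class Album < Sequel::Model
    one_to_many :tracks,
      :before_add => :check_track,                 # instance method run before add_track
      :after_load => proc{|album, tracks| tracks}  # run after associated objects are loaded
  end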

=== 2.1.0 (2008-06-17)

* Break association add_/remove_/remove_all_ methods into two parts, for easier overriding (jeremyevans)
* Add Model.strict_param_setting, on by default, which raises errors if a missing/restricted method is called via new/set/update/etc. (jeremyevans)
* Raise errors when using association methods on objects without valid primary keys (jeremyevans)
* The model's primary key is a restricted column by default, add Model.unrestrict_primary_key to get the old behavior (jeremyevans)
* Add Model.set_(allowed|restricted)_columns, which affect which columns create/new/set/update/etc. modify (jeremyevans)
* Calls to Model.def_dataset_method with a block are cached and reapplied to the new dataset if set_dataset is called, even in a subclass (jeremyevans)
* The :reciprocal option to associations should now be the symbol name of the reciprocal association, not an instance variable symbol (jeremyevans)
* Add Model#associations, which is a hash holding a cache of associated objects, with each association being a separate key (jeremyevans)
* Make all associations support a :graph_select option, specifying a column or array of columns to select when using eager_graph (jeremyevans)
* Bring back Model#set and Model#update, now the same as Model#set_with_params and Model#update_with_params (jeremyevans)
* Allow model datasets to call to_hash without any arguments, which allows easy creation of identity maps (jeremyevans)
* Add Model.set_sti_key, for easily setting up single table inheritance (jeremyevans)
* Make all associations support a :read_only option, which doesn't add methods that modify the database (jeremyevans)
* Make *_to_many associations support a :limit option, for specifying a limit to the resulting records (and possibly an offset) (jeremyevans)
* Make association block argument and :eager option affect the _dataset method (jeremyevans)
* Add a :one_to_one option to one_to_many associations, which creates a getter and setter similar to many_to_one (a.k.a. has_one) (jeremyevans)
* add_ and remove_ one_to_many association methods now raise an error if the passed object cannot be saved, instead of saving without validation (jeremyevans)
* Add support for :if option on validations, using a symbol (specifying an instance method) or a proc (dtsato)
* Support bitwise operators for NumericExpressions: &, |, ^, ~, <<, >> (jeremyevans)
* No longer raise an error for Dataset#filter(true) or Dataset#filter(false) (jeremyevans)
* Allow Dataset #filter, #or, #exclude and other methods that call them to use both the block and regular arguments (jeremyevans)
* ParseTree support is now officially deprecated, use Sequel.use_parse_tree = false to use the expression (blockless) filters inside blocks (jeremyevans)
* Remove :pool_reuse_connections ConnectionPool/Database option, MySQL users need to be careful with nested queries (jeremyevans)
* Allow Dataset#graph :select option to take an array of columns to select (jeremyevans)
* Allow Dataset#to_hash to be called with only one argument, allowing for easy creation of lookup tables for a single key (jeremyevans)
* Allow join_table to accept a block providing the aliases and previous joins, that allows you to specify arbitrary conditions properly qualified (jeremyevans)
* Support NATURAL, CROSS, and USING joins in join_table (jeremyevans)
* Make sure HAVING comes before ORDER BY, per the SQL standard and at least MySQL, PostgreSQL, and SQLite (juco)
* Add cast_numeric and cast_string methods for use in the Sequel DSL, that have default types and wrap the object in the correct class (jeremyevans)
* Add Symbol#qualify, for adding a table/schema qualifier to a column/table name (jeremyevans)
* Remove Module#metaprivate, since it duplicates the standard Module#private_class_method (jeremyevans)
* Support the SQL CASE expression via Array#case and Hash#case (jeremyevans) (see the sketch below)
* Support the SQL EXTRACT function: :date.extract(:year) (jeremyevans)
* Convert numeric fields to BigDecimal in PostgreSQL adapter (jeremyevans)
* Add :decimal fields to the schema parser (jeremyevans)
* The expr argument in join table now allows the same argument as filter, so it can take a string or a blockless filter expression (brushbox, jeremyevans)
* No longer assume the expr argument to join_table references the primary key column (jeremyevans)
* Rename the Sequel.time_class setting to Sequel.datetime_class (jeremyevans)
* Add savepoint/nesting support to postgresql transactions (elven)
* Use the specified table alias when joining a dataset, instead of the automatically generated alias (brushbox)
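
Small sketches of the CASE and EXTRACT support added in 2.1.0, assuming a DB connection and a hypothetical items table:

  DB[:items].select({:active => 'yes'}.case('no'))  # CASE WHEN active THEN 'yes' ELSE 'no' END
  DB[:items].select(:created_at.extract(:year))     # EXTRACT(year FROM created_at)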

=== 2.0.1 (2008-06-04)

* Make the choice of Time or DateTime optional for typecasting :datetime types, default to Time (jeremyevans)
* Reload database schema for table when calling Model.create_table (jeremyevans)
* Have PostgreSQL money type use BigDecimal instead of Float (jeremyevans)
* Have the PostgreSQL and MySQL adapters use the Sequel.time_class setting for datetime/timestamp types (jeremyevans)
* Add Sequel.time_class and String#to_sequel_time, used for converting time values from the database to either Time (default) or DateTime (jeremyevans)
* Make identifier quoting uppercase by default, to work better with the SQL standard, override in PostgreSQL (jeremyevans) (#232)
* Add StringExpression#+, for simple SQL string concatenation (:x.sql_string + :y) (jeremyevans) (see the sketch below)
* Make StringMethods.like do a case sensitive search on MySQL (use ilike for the old behavior) (jeremyevans)
* Add StringMethods.ilike, for case insensitive pattern matching (jeremyevans)
* Refactor ComplexExpression into three subclasses and a few modules, so operators that don't make sense are not defined for the class (jeremyevans)
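
Small sketches of the string DSL additions in 2.0.1, assuming a hypothetical names table:

  DB[:names].select(:first.sql_string + :last)  # SQL string concatenation: first || last
  DB[:names].filter(:first.ilike('A%'))         # case insensitive pattern match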

=== 2.0.0 (2008-06-01)

* Comprehensive update of all documentation (jeremyevans)
* Remove methods deprecated in 1.5.0 (jeremyevans)
* Add typecasting on attribute assignment to Sequel::Model objects, optional but enabled by default (jeremyevans)
* Returning false in one of the before_ hooks now causes the appropriate method(s) to immediately return false (jeremyevans)
* Add remove_all_* association method for *_to_many associations, which removes the association with all currently associated objects (jeremyevans)
* Add Model.lazy_load_schema=, when set to true, it loads the schema on first instantiation (jeremyevans)
* Add before_validation and after_validation hooks, called whenever the model is validated (jeremyevans)
* Add Model.default_foreign_key, a private class method that allows changing the default foreign key that Sequel will use in associations (jeremyevans)
* Cache negative lookup when eagerly loading many_to_one associations (jeremyevans)
* Make all associations support the :select option, not just many_to_many (jeremyevans)
* Allow the use of blocks when eager loading, and add the :eager_block and :allow_eager association options for configuration (jeremyevans)
* Add the :graph_join_type, :graph_conditions, and :graph_join_table_conditions association options, used when eager graphing (jeremyevans)
* Add AssociationReflection class (subclass of Hash), to make calling a couple of private Model methods unnecessary (jeremyevans)
* Change hook methods so that if a tag/method is specified it overwrites an existing hook block with the same tag/method (jeremyevans)
* Refactor String inflection support, you must use String.inflections instead of Inflector.inflections now (jeremyevans)
* Allow connection to ODBC-MSSQL via a URL (petersumskas) (#230)
* Comprehensive update of all documentation, except for the block filters and adapters (jeremyevans)
* Handle Date and DateTime value literalization correctly in adapters (jeremyevans)
* Literalize DateTime values the same as Time values (jeremyevans)
* MySQL tinyints are now returned as boolean values instead of integers (jeremyevans)
* Set additional MySQL charset options required for creating tables and databases (tmm1)
* Remove methods deprecated in 1.5.0 (jeremyevans)
* Add Module#metaattr_accessor for creating attr_accessors for the metaclass (jeremyevans)
* Add SQL string concatenation support to blockless filters, via Array#sql_string_join (jeremyevans)
* Add Pagination#last_page? and Pagination#first_page? (apeiros)
* Add limited column reflection support, tested on PostgreSQL, MySQL, and SQLite (jeremyevans)
* Allow the use of :schema__table___table_alias syntax for tables, similar to the column support (jeremyevans)
* Merge metaid gem into core_ext.rb and clean it up, so sequel now has no external dependencies (jeremyevans)
* Add Dataset#as, so using a dataset as a column with an alias is not deprecated (jeremyevans)
* Add Dataset#invert, which returns a dataset with inverted HAVING and WHERE clauses (jeremyevans)
* Add blockless filter syntax support (jeremyevans) (see the sketch below)
* Passing an array to Dataset#order and Dataset#select no longer works, you need to pass multiple arguments (jeremyevans)
* You should use '?' instead of '(?)' when using interpolated strings with array arguments (jeremyevans)
* Dataset.literal now surrounds the literalization of arrays with parentheses (jeremyevans)
* Add echo option (back?) to sequel command line tool, via -E or --echo (jeremyevans)
* Allow databases to have multiple loggers (jeremyevans)
* The sequel command line tool now also accepts a path to a database config YAML file in addition to a URI (mtodd)
* Major update of the postgresql adapter (jdavis, jeremyevans) (#225)
* Make returning inside of a database transaction commit the transaction (ahoward, jeremyevans)
* Dataset#to_table_reference is now protected, and it has a different API (jeremyevans)
* Dataset#join_table and related functions now take an explicit optional table_alias argument, you can no longer include the table alias in the table argument (jeremyevans)
* Aliased and/or qualified columns with embedded spaces can now be specified as symbols (jeremyevans)
* When identifier quoting is enabled, the SQL standard double quote is used by default (jeremyevans)
* When identifier quoting is enabled, quote tables as well as columns (jeremyevans)
* Make identifier quoting optional, enabled by default (jeremyevans)
* Allow Sequel::Database.connect and related methods to take a block that disconnects the database when the block finishes (jeremyevans)
* Add Dataset#unfiltered, for removing filters from dataset (jeremyevans)
* Add add_foreign_key and add_primary_key methods to the AlterTableGenerator (jeremyevans)
* Allow migration files to have more than 3 digits (jeremyevans)
* Add methods directly to Dataset instead of including modules (jeremyevans)
* Make some Dataset instance methods private: invert_order, insert_default_values_sql (jeremyevans)
* Don't add methods that depend on ParseTree unless you can load ParseTree (jeremyevans)
* Don't wipe out the cached columns every time a dataset is cloned, but only on changes to :select, :sql, :from, or :join (jeremyevans)
* Fix Oracle Adapter (yasushi.abe)
* Fixed sqlite uri so that sqlite:// works just like file:// (2 slashes for a relative path, 3 for an absolute) (dlee)
* Raise a Sequel::Error if an invalid limit or offset is used (jeremyevans)
* Refactor and beef up Dataset#first and Dataset#last, with some change in functionality (jeremyevans)
* Add String#to_datetime, for consistency (jeremyevans)
* Fix Range#interval so that it returns 1 less for an exclusive range
* Change SQLite adapter so it doesn't swallow exceptions other than SQLite3::Exception (such as Interrupt) (jeremyevans)
* Change PostgreSQL and MySQL adapters to raise Sequel::Error instead of database specific errors if a database error occurs (jeremyevans)
* Using a memory database with SQLite now defaults to a single connection, so all queries it uses run against the same database (jeremyevans)
* Fix attempting to query MySQL using the same connection being used to concurrently execute another query (jeremyevans)
* Add options to the connection pool to configure reusing connections and converting exceptions (jeremyevans)
* Use the database driver provided string quoting methods for MySQL and SQLite (jeremyevans) (#223)
* Add ColumnAll#==, for checking the equality of two ColumnAlls (jeremyevans)
* Allow an array of arrays instead of a hash when specifying conditions (jeremyevans)
* Add Sequel::DBI::Database#lowercase, for lowercasing column names (jamesearl)
* Remove Dataset#extend_with_destroy, which may break code that uses Dataset#set_model directly and expects the destroy method to be added (jeremyevans)
* Fix some issues when running on Ruby 1.9 (Zverok, jeremyevans)
* Make the DBI adapter work (partially) with PostgreSQL (Seb)
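
A minimal sketch of the blockless filter syntax added in 2.0.0, assuming a DB connection and a hypothetical items table:

  DB[:items].filter(:category => 'ruby', :price => 1..10)  # hash and range conditions
  DB[:items].filter(:name.like('Seq%'))                    # pattern matching without a block
  DB[:items].exclude(:active => true)                      # inverted conditions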
added (jeremyevans) * Fix some issues when running on Ruby 1.9 (Zverok, jeremyevans) * Make the DBI adapter work (partially) with PostgreSQL (Seb) === 1.5.1 (2008-04-30) * Fix Dataset#eager_graph when not all objects have associated objects (jeremyevans) * Have Dataset#graph give a nil value instead of a hash with all nil values if no matching rows exist in the graphed table (jeremyevans) === 1.5.0 (2008-04-29) * Make the validation errors API compatible with Merb (Inviz) * Add validates_uniqueness_of, for protecting against duplicate entries in the database (neaf, jeremyevans) * Alias Model#dataset= to Model#set_dataset (tmm1) * Make some Model class methods private: def_hook_method, hooks, add_hook, plugin_module, plugin_gem (jeremyevans) * Add the eager! and eager_graph! mutation methods to model datasets (jeremyevans) * Remove Model.database_opened (jeremyevans) * Remove Model.super_dataset (jeremyevans) * Deprecate .create_with_params, .create_with, #set, #update, #update_with, and #new_record from Sequel::Model (jeremyevans) * Add Model.def_dataset_method, for defining methods on the model that reference methods on the dataset (jeremyevans) * Deprecate Model.method_missing, add dataset methods to Model via metaprogramming (jeremyevans) * Remove Model.join, so it is the same as Dataset#join (jeremyevans) * Use reciprocal associations for all types of associations in the getter/setter/add_/remove_ methods (jeremyevans) * Fix many_to_one associations to cache negative lookups (jeremyevans) * Change Model#=== to always be false if the primary key is nil (jeremyevans) * Add Model#hash, which should be unique for a given class and primary key (or values if primary key is nil) (jeremyevans) * Add Model#eql? as a alias to Model#== (jeremyevans) * Make Model#reload clear any cached associations (jeremyevans) * No longer depend on the assistance gem, merge the Inflector and Validations code (jeremyevans) * Add Model#set_with_params, which is Model#update_with_params without the save (jeremyevans) * Fix Model#destroy so that it returns self, not the result of after_destroy (jeremyevans) * Define Model column accessors in set_dataset, so they should always be avaiable, deprecate Model#method_missing (jeremyevans) * Add eager loading of associations via new sequel_core object graphing feature (jeremyevans) * Fix many_to_many associations with classes inside modules without an explicit join table (jeremyevans) * Allow creation of new records that don't have primary keys when the cache is on (jeremyevans) (#213) * Make Model#initialize, Model#set, and Model#update_with_params invulnerable to memory exhaustion (jeremyevans) (#210) * Add Model.str_columns, which gives a list of columns as frozen strings (jeremyevans) * Remove pretty_table.rb from sequel, since it is in sequel_core (jeremyevans) * Set a timeout in the Sqlite adapter, default to 5 seconds (hrvoje.marjanovic) (#218) * Document that calling Sequel::ODBC::Database#execute manually requires you to manually drop the returned object (jeremyevans) (#217) * Paginating an already paginated/limited dataset now raises an error (jeremyevans) * Add support for PostgreSQL partial indexes (dlee) * Added support for arbitrary index types (including spatial indexes) (dlee) * Quote column names in SQL generated for SQLite (tmm1) * Deprecate Object#rollback! 
(jeremyevans) * Make some Dataset methods private (qualified_column_name, column_list, table_ref, source_list) (jeremyevans) * Deprecate Dataset methods #set_options, #set_row_proc, #remove_row_proc, and #clone_merge (jeremyevans) * Add Symbol#*, a replacement for Symbol#all (jeremyevans) * Deprecate including ColumnMethods in Object, include it in Symbol, String, and Sequel::SQL::Expression (jeremyevans) * Deprecate Symbol#method_missing, and #AS, #DESC, #ASC, #ALL, and #all from ColumnMethods (jeremyevans) * Fix table joining in MySQL (jeremyevans) * Deprecate Sequel.method_missing and Object#Sequel, add real Sequel.adapter methods (jeremyevans) * Move dataset methods applicable only to paginated datasets into Sequel::Dataset::Pagination (jeremyevans) * Make Sequel::Dataset::Sequelizer methods private (jeremyevans) * Deprecate Dataset#method_missing, add real mutation methods (e.g. filter!) (jeremyevans) * Fix connecting to an MSSQL server via ODBC using domain user credentials (jeremyevans) (#216) * No longer depend on the assistance gem, merge in the ConnectionPool and .blank methods (jeremyevans) * No longer depend on ParseTree, RubyInline, or ruby2ruby, but you still need them if you want to use the block filters (jeremyevans) * Fix JDBC adapter by issuing index things start at 1 (pdamer) * Fix connecting to a database via the ADO adapter (now requires options instead of URI) (timuckun, jeremyevans) (#204) * Support storing microseconds in postgres timestamp fields (schnarch...@rootimage.msu.edu) (#215) * Allow joining of multiple datasets, by making the table alias different for each dataset joined (jeremyevans) * SECURITY: Fix backslash escaping of strings (dlee) * Add ability to create a graph of objects from a query, with the result split into corresponding tables (jeremyevans) (#113) * Add attr_accessor for dataset row_proc (jeremyevans) * Don't redefine Dataset#each when adding a transform or row_proc (jeremyevans) * Remove array_keys.rb from sequel_core, it was partially broken (since the arrays came from hashes), and redefined Dataset#each (jeremyevans) * Fix MySQL default values insert (matt.binary) (#196) * Fix ODBC adapter improperly escaping date and timestamp values (leo.borisenko) (#165) * Fix renaming columns on MySQL with type :varchar (jeremyevans) (#206) * Add Sequel::SQL::Function#==, for comparing SQL Functions (jeremyevans) (#209) * Update Informix adapter to work with Ruby/Informix 0.7.0 (gerardo.santana@gmail.com) * Remove sequel_core's knowledge of Sequel::Model (jeremyevans) * Use "\n" instead of $/ (since $/ can be redefined in ways we do not want) (jeremyevans) === 1.4.0 (2008-04-08) * Don't mark a column as changed unless the new value is different from the current value (tamas.denes, jeremyevans) (#203). * Switch gem name from "sequel_model" to just "sequel", which required large version bump (jeremyevans). * Add :select option to many_to_many associations, default to selecting only the associated model table and not the join table (jeremyevans) (#208). * Add :reciprocal one_to_many association option, for setting corresponding many_to_one instance variable (jeremyevans). * Add eager loading implementation (jeremyevans). * Change *_to_many associations so that the all associations are considered :cache=>true (jeremyevans). * Fix associations with block arguments and :cache=>true (jeremyevans). * Merge 3 mysql patches from the bugtracker (mvyver) (#200, #201, #202). * Merge 2 postgresql patches from the bugtracker (a...@mellowtone.co.jp) (#211, 212). 
* Allow overriding of default postgres spec database via ENV['SEQUEL_PG_SPEC_DB'] (jeremyevans). * Allow using the Sequel::Model as the first argument in a dataset join selection (jeremyevans) (#170). * Add simple callback mechanism to make model eager loading implementation easier (jeremyevans). * Added Sequel::Error::InvalidOperation class for invalid operations (#198). * Implemented MySQL::Database#server_version (#199). * Added spec configuration for MySQL socket file. * Fixed transform with array tuples in postgres adapter. * Changed spec configuration to Database objects instead of URIs in order to support custom options for spec databases. * Renamed schema files. * Fixed Dataset#from to work correctly with SQL functions (#193). === Previous to 1.4.0, Sequel model and Sequel core versioning differed, see the bottom of this file for the changelog to Sequel model prior to 1.4.0. === 1.3 (2008-03-08) * Added configuration file for running specs (#186). * Changed Database#drop_index to accept fixed arity (#173). * Changed column definition sql to put UNSIGNED constraint before unique in order to satisfy MySQL (#171). * Enhanced MySQL adapter to support load data local infile, added compress option for mysql connection by default (#172). * Fixed bug when inserting hashes in array tuples mode. * Changed SQLite adapter to catch RuntimeError raised when executing a statement and raise an Error::InvalidStatement with the offending SQL and error message (#188). * Added Error::InvalidStatement class. * Fixed Dataset#reverse to not raise for unordered dataset (#189). * Added Dataset#unordered method and changed #order to remove order if nil is specified (#190). * Fixed reversing order of ASC expression (#164). * Added support for :null => true option when defining table columns (#192). * Fixed Symbol#method_missing to accept variable arity (#185). === 1.2.1 (2008-02-29) * Added add_constraint and drop_constraint functionality to Database#alter_table (#182). * Enhanced Dataset#multi_insert to accept datasets (#179). * Added MySQL::Database#use method for switching database (#180). * Enhanced Database.uri_to_options to accept uri strings (#178). * Added Dataset#columns! method that always makes a roundtrip to the DB (#177). * Added new Dataset#each_page method that iterates over all pages in the result set (#175). * Added Dataset#reverse alias to Dataset#reverse_order (#174). * Fixed Dataset#transform_load and #transform_save to create a transformed copy of the supplied hash instead of transforming it in place (#184). * Implemented MySQL::Dataset#replace (#163). === 1.2 (2008-02-15) * Added support for :varchar[100] like type declarations in #create_table. * Fixed #rename_column in mysql adapter to support types like varchar(255) (#159). * Added support for order and limit in DELETE statement in MySQL adapter (#160). * Added checks to Dataset#multi_insert to prevent work if no values are given (#162). * Override ruby2ruby implementation of Proc#to_sexp which leaks memory (#161). * Added log option, help for sequel script (#157). === 1.1 (2008-02-15) * Fixed Dataset#join_table to support joining of datasets (#156). * Changed Dataset#empty? to use EXISTS condition instead of counting records, for much better performance (#158). * Implemented insertion of multiple records in a single statement for postgres adapter. This feature is available only in postgres 8.2 and newer. * Implemented Postgres::Database#server_version. * Implemented Database#get, short for dataset.get(...).
* Refactored Dataset#multi_insert, added #import alias, added support for calling #multi_insert using array of columns and array of value arrays (thanks David Lee). * Implemented Dataset#get, a replacement for select(column).first[column]. * Implemented Dataset#grep method, poor man's text search. === 1.0.10 (2008-02-13) * Fixed Dataset#group_and_count to work inside a query block (#152). * Changed datasets with transforms to automatically transform hash filters (#155). * Changed Marshal stock transform to use Base64 encoding with backward-compatibility to support existing marshaled values (#154). * Added support for inserting multiple records in a single statement using #multi_insert in MySQL adapter (#153). * Added support for :slice option (same as :commit_every) in Dataset#multi_insert. * Changed Dataset#all to accept opts and iteration block. === 1.0.9 (2008-02-10) * Implemented Dataset#inspect and Database#inspect (#151). * Added full-text searching for odbc_mssql adapter (thanks Joseph Love). * Added AlterTableGenerator#add_full_text_index method. * Implemented full_text indexing and searching for PostgreSQL adapter (thanks David Lee). * Implemented full_text indexing and searching for MySQL adapter (thanks David Lee). * Fixed Dataset#insert_sql to work with array subscript references (thanks Jim Morris). === 1.0.8 (2008-02-08) * Added support for multiple choices in string matching expressions (#147). * Renamed Dataset#clone_merge to Dataset#clone, works with or without options for merging (#148). * Fixed MySQL::Database#<< method to always free the result in order to allow multiple calls in a row (#149). Same also for PostgreSQL adapter. === 1.0.7 (2008-02-05) * Added support for conditional filters (using if else statements) inside block filters (thanks Kee). === 1.0.6 (2008-02-05) * Removed code pollution introduced in revs 814, 817 (really bad patch, IMO). * Fixed joining datasets using aliased tables (#140). * Added support for additional field types in postgresql adapter (#146). * Added support for date field types in postgresql adapter (#145). * Fixed Dataset#count to work correctly for grouped datasets (#144). * Added Dataset#select_more, Dataset#order_more methods (#129). === 1.0.5 (2008-01-25) * Added support for instantiating models by using the load constructor method. === 1.0.4.1 (2008-01-24) * Fixed bin/sequel to require sequel_model if available. === 1.0.4 (2008-01-24) * Added Dataset#select_all method. * Changed ODBC::Database to support connection using driver and database name, also added support for untitled columns in ODBC::Dataset (thanks Leonid Borisenko). * Fixed MySQL adapter to correctly format foreign key definitions (#123). * Changed MySQL::Dataset to allow HAVING clause on ungrouped datasets, and put HAVING clause before ORDER BY clause (#133). * Changed Dataset#group_and_count to accept multiple columns (#134). * Fixed database spec to open YAML file in binary mode (#131). * Cleaned up gem spec (#132). * Added Dataset#table_exists? convenience method. === 1.0.3 (2008-01-17) * Added support for UNSIGNED constraint, used in MySQL? (#127). * Implemented constraint definitions inside Database#create_table. * Fixed postgres adapter to define PGconn#async_exec as alias to #exec if not defined (for pure-ruby postgres driver). * Added String#to_date. Updated mysql adapter to use String#to_date for mysql date types (thanks drfreeze). === 1.0.2 (2008-01-14) * Removed ConnectionPool, NumericExtensions. Added dependency on assistance.
=== 1.0.1 (2008-01-12) * Changed postgres adapter to quote column references using double quotes. * Applied patch for oracle adapter: fix behavior of limit and offset, transactions, #table_exists?, #tables and additional specs (thanks Liming Lian #122). * Allow for additional filters on a grouped dataset (#119 and #120) * Changed mysql adapter to default to localhost if :host option is not specified (#114). * Refactored Sequelizer to use Proc#to_sexp (method provided by r2r). * Enhanced Database.connect to accept options with string keys, so it can now accept options loaded from YAML files. Database.connect also automatically converts :username option into :user for compatibility with existing YAML configuration files for AR and DataMapper. === 1.0.0.1 (2008-01-03) * Changed MySQL adapter to support specifying socket option. * Added support for limiting and paginating datasets with fixed SQL, gotten with DB#fetch (thanks Ruy Diaz). * Added new Dataset#from_self method that returns a dataset selecting from the original dataset. === 1.0 (2008-01-02) * Removed deprecated adapter stubs. * Removed Sequel::Model() stub. * Changed name to sequel_core. * 100% code coverage. * Fixed error behavior when sequel_model is not available. * Fixed error behavior when parse_tree or ruby2ruby are not available. === 0.5.0.2 (2008-01-01) * Fixed String#to_time to raise error correctly for invalid time stamps. * Improved code coverage - now at 99.2%. === 0.5.0.1 (2007-12-31) * Added a stub for Sequel::Model that auto-loads sequel_model. * Changed Sequel.method_missing and Database.adapter_class to raise AdapterNotFound if an adapter could not be loaded. * Fixed behavior of error trap in sequel command line tool. === 0.5 (2007-12-30) * Removed model code into separate sub-project. Rearranged trunk into core, model and model_plugins. === 0.4.5 (2007-12-25) * Added rdoc for new alter_table functionality (#109). * Fixed update_sql with array sub-item keys (#110). * Refactored model specs. * Added Model#update as alias to #set. * Refactored validations code. Renamed Model.validations? into Model.has_validations?. * Added initial Model validations (Thanks Lance Carlson) * Added Database#set_column_default method (thanks Jim Morris.) * Removed warning on uninitialized @transform value (thanks Jim Morris). === 0.4.4.2 (2007-12-20) * Fixed parsing errors in Ruby 1.9. * Fixed sync problem in connection_pool_spec. * Changed String#to_time to raise Error::InvalidValue if Time.parse fails. * Refactored sequel error classes. === 0.4.4.1 (2007-12-19) * Fixed schema generation code to use field quoting and support adapter-specific literalization of default values (#108). === 0.4.4 (2007-12-17) * Implemented Database#rename_table (#104). * Fixed drop_index in mysql adapter (#103). * Added ALTER TABLE specs for postgres, sqlite and mysql adapters. Added custom alter_table behavior for sqlite and mysql adapters (#101, #102). * Added direct Database API for altering tables. * Added Database#alter_table method with support for adding, dropping, renaming, modifying columns and adding and dropping indexes. * Added #unique schema method for defining unique indexes (thanks Dado). * Implemented unfolding of #each calls inside sequelizer blocks (thanks Jim Morris). === 0.4.3 (2007-12-15) * Fixed Dataset#update to accept strings (#98). * Fixed Model.[] to raise for boolean argument (#97). * Added Database#add_index method (thanks coda.hale). * Added error reporting for filtering on comparison not in a block (thanks Jim Morris).
* Added support for inline index definition (thanks Dado). * Added Database#create_table! method for forcibly creating a table (thanks Dado). * Added support for using Dataset#update with block. * Changed subscript access to use | operator. * Fixed subscript access in sequelizer. * Added support for subscript access using Symbol#/ operator. === 0.4.2.2 (2007-12-10) * Improved code coverage. * Fixed Dataset#count to work properly with datasets with fixed SQL (when using #fetch). * Added Model.create_with_params method that filters the given parameters according to the model's columns (thanks Aman Gupta). === 0.4.2.1 (2007-12-09) * Refactored and fixed Dataset#reverse_order to work with field quoting (thanks Christian). * Fixed problem with field quoting in insert statements. * Changed sequelizer code to silently fail on any error when requiring parsetree and ruby2ruby. * Added Database#create_view, #create_or_replace_view and #drop_view methods. Also implemented Dataset#create_view and #create_or_replace_view convenience methods. * Keep DRY by re-using Model#[]= from method_missing. * Added Model.fetch alias for DB.fetch.set_model(Model) === 0.4.2 (2007-12-07) * Implemented Model#save_changes. * Extended Model#save to accept specific columns to update. * Implemented experimental JDBC adapter. * Added adapter skeleton as starting point for new adapters. * Cleaned-up adapters and moved automatic requiring of 'sequel' to adapter stubs. === 0.4.1.3 (2007-12-05) * Better plugin conventions. * Added experimental OpenBase adapter. * Fixed Sequel. methods to accept options hash as well as database name. Fixed Sequel.connect to accept options hash as well as URI (Wayne). === 0.4.1.2 (2007-12-04) * Added release rake task (using RubyForge). * Changed Model.is to accept variable arity. * Implemented plugin loading for model classes. * Fixed odbc-mssql and odbc adapters (thanks Dusty.) * Implemented odbc-mssql adapter (thanks Dusty.) === 0.4.1.1 (2007-11-27) * Fixed #first and #last functionality in Informix::Dataset (thanks Gerardo Santana). === 0.4.1 (2007-11-25) * Put adapter files in lib/sequel/adapters. Requiring sequel/ is now deprecated. Users can now just require 'sequel' and adapters are automagically loaded (#93). === 0.4.0 (2007-11-24) * Reorganized lib directory structure. * Added support for dbi-xxx URI schemes (#86). * Fixed problem in Database#uri where setting the password would raise an error (#87). * Improved Dataset#insert_sql to correctly handle string keys (#92). * Improved error-handling for worker threads. Errors are saved to an array and are accessible through #errors (#91). * Dataset#uniq/distinct can now accept a column list for DISTINCT ON clauses. * Fixed Model.all. * Fixed literalization of strings with escape sequences in postgres adapter (#90). * Added support for literalizing BigDecimal values (#89). * Fixed column qualification for joined datasets (thanks Christian). * Implemented experimental informix adapter. === 0.3.4.1 (2007-11-10) * Changed Dataset#select_sql to support queries without a FROM clause. === 0.3.4 (2007-11-10) * Fixed MySQL adapter to allow calling stored procedures (thanks Sebastian). * Changed Dataset#each to always return self. * Fixed SQL functions without arguments in block filters. * Implemented super-cool Symbol#cast_as method. * Fixed error message in command-line tool if failed to load adapter (#85). * Refactored code relating to column references for better extendibility (#88). * Tiny fix to Model#run_hooks.
=== 0.3.3 (2007-11-04) * Revised code to generate SQL statements without trailing semicolons. * Added Sequel::Worker implementation of a simple worker thread for asynchronous execution. * Added spec for Oracle adapter. * Fixed Oracle adapter to format INSERT statements without semicolons (thanks Liming Lian). * Renamed alias to Array#keys as Array#columns instead of Array#fields. * Renamed FieldCompositionMethods as ColumnCompositionMethods. * Implemented Sequel::NumericExtensions to provide stuff like 30.days.ago. === 0.3.2 (2007-11-01) * Added #to_column_name as alias to #to_field_name, #column_title as alias to #field_title. * Added Dataset#interval method for getting interval between minimum/maximum values for a column. * Fixed Oracle::Database#execute (#84). * Added group_and_count as general implementation for count_by_xxx. * Added count_by magic method. * Added Dataset#range method for getting the minimum/maximum values for a column. * Fixed timestamp translation in SQLite adapter (#83). * Experimental DB2 adapter. * Added Dataset#set as alias to Dataset#update. * Removed long deprecated expressions.rb code. * Better documentation. * Implemented Dataset magic methods: order_by_xxx, group_by_xxx, filter_by_xxx, all_by_xxx, first_by_xxx, last_by_xxx. * Changed Model.create and Model.new to accept a block. === 0.3.1 (2007-10-30) * Typo fixes (#79). * Added require 'yaml' to dataset.rb (#78). * Changed postgres adapter to use the ruby-postgres library's type conversion if available (#76). * Fixed string literalization in mysql adapter for strings with comment backslashes in them (#75). * Fixed ParseTree dependency to work with version 2.0.0 and later (#74). * foreign_key definitions now accept :key option for specifying the remote key (#73). * Fixed Model#method_missing to not raise error for columns not in the table but for which a value exists (#77). * New documentation for Model. * Implemented Oracle adapter based on ruby-oci8 library. * Implemented Model#pk_hash. Is it really necessary? * Deprecated Model#pkey. Implemented better Model#pk method. * Specs and docs for Model.one_to_one, Model.one_to_many macros. === 0.3.0.1 (2007-10-20) * Changed Database#fetch to return a modified dataset. === 0.3 (2007-10-20) * Added stock transforms to Dataset#transform. Refactored Model.serialize. * Added Database#logger= method for setting the database logger object. * Fixed Model.[] to act as shortcut to Model.find when a hash is given (#71). * Added support for old and new decimal types in MySQL adapter, and updated MYSQL_TYPES with MySQL 5.0 constants (#72). * Implemented Database#disconnect method for all adapters. * Fixed small bug in ArrayKeys module. * Implemented model caching by primary key. * Separated Model.find and Model.[] functionality. Model.find takes a filter. Model.[] is strictly for finding by primary keys. * Enhanced Dataset#first to accept a filter block. Model#find can also now accept a filter block. * Changed Database#[] to act as shortcut to #fetch if a string is given. * Renamed Database#each to #fetch. If no block is given, the method returns an enumerator. * Changed Dataset#join methods to correctly literalize values in join conditions (#70). * Fixed #filter with ranges to correctly literalize field names (#69). * Implemented Database#each method for quickly retrieving records with arbitrary SQL (thanks Aman Gupta). * Fixed bug in postgres adapter where a LiteralString would be literalized as a regular String. * Fixed SQLite insert with subquery (#68). 
* Reverted back to hashes as default mode. Added Sequel.use_array_tuples and Sequel.use_hash_tuples methods. * Fixed problem with arrays with keys when using #delete. * Implemented ArrayKeys as substitute for ArrayFields. * Added Dataset#each_hash method. * Rewrote SQLite::Database#transaction to use sqlite3-ruby library implementation of transactions. * Fixed Model.destroy_all to work correctly in cases where no before_destroy hook is defined and an after_destroy hook is defined. * Restored Model.has_hooks? implementation. * Changed Database#<< to strip comments and whitespace only when an array is given. * Changed Schema::Generator#primary_key to accept calls with the type argument omitted. * Hooks can now be prepended or appended by choice. * Changed Model.subset to define filter method on the underlying dataset instead of the model class. * Fixed Dataset#transform to work with array fields. * Added Dataset#to_csv method. * PrettyTable can now extract column names from arrayfields. * Converted ado, dbi, odbc adapters to use arrayfields instead of hashes. * Fixed composite key support. * Fixed Dataset#insert_sql, update_sql to support array fields. * Converted sqlite, mysql, postgres adapters to use arrayfields instead of hashes. * Extended Dataset#from to auto alias sub-queries. * Extended Dataset#from to accept hash for aliasing tables. * Added before_update, after_update hooks. === 0.2.1.1 (2007-10-07) * Added Date literalization to sqlite adapter (#60). * Changed Model.serialize to allow calling it after the class is defined (#59). * Fixed after_create hooks to allow calling save inside the hook (#58). * Fixed MySQL quoting of sql functions (#57). * Implemented rollback! global method for cancelling transactions in progress. * Fixed =~ operator in Sequelizer. * Fixed ODBC::Dataset#fetch_rows (thanks Dusty). * Renamed Model.recreate_table to create_table!. recreate_table is deprecated and will issue a warning (#56). === 0.2.1 (2007-09-24) * Added default implementation of Model.primary_key_hash. * Fixed Sequel::Model() to set dataset for inherited classes. * Rewrote Model.serialize to use Dataset#transform. * Implemented Dataset#transform. * Added gem spec for Windows (without ParseTree dependency). * Added support for dynamic strings in Sequelizer (#49). * Query branch merged into trunk. * Implemented self-changing methods. * Add support for ternary operator to Sequelizer. * Fixed sequelizer to evaluate expressions if they don't involve symbols or literal strings. * Added protection against using #each, #delete, #insert, #update inside query blocks. * Improved Model#method_missing to deal with invalid attributes. * Implemented Dataset#query. * Added Dataset#group_by as alias for Dataset#group. * Added Dataset#order_by as alias for Dataset#order. * More model refactoring. Added support for composite keys. * Added Dataset#empty? method (#46). * Fixed Symbol#to_field_name to support names with numbers and upper-case characters (#45). * Added install_no_doc rake task. * Partial refactoring of model code. * Refactored dataset-model association and added Dataset#set_row_filter method. * Added support for case-sensitive regexps to mysql adapter. * Changed mysql adapter to support encoding option as well. * Added charset/encoding option to postgres adapter. * Implemented Model.serialize (thanks Aman Gupta.) * Changed Model.create to INSERT DEFAULT VALUES instead of (id) VALUES (null) (brings back #41.) * Fixed Model.new to work without arguments. 
* Added Model.no_primary_key method to allow models without primary keys. * Added Model#this method (#42 thanks Duane Johnson). * Fixed Dataset#insert_sql to use DEFAULT VALUES clause if argument is an empty hash. * Fixed Model.create to work correctly when no argument is passed (#41). === 0.2.0.2 (2007-09-07) * Dataset#insert can now accept subqueries. * Changed Migrator.apply to return the version. * Changed Sequel::Model() to cache intermediate classes so descendant classes can be reopened (#39). * Added :charset option to MySQL adapter (#40). * Fixed Dataset#exclude to add parens around NOT expression (#38). * Fixed use of sub-queries with all comparison operators in block filters (#38). * Fixed arithmetic expressions in block filters to not be literalized. * Changed Symbol#method_missing to return LiteralString. * Changed PrettyTable to right-align numbers. * Fixed Model.create_table (thanks Duane Johnson.) === 0.2.0.1 (2007-09-04) * Improved support for invoking methods with inline procs inside block filters. === 0.2.0 (2007-09-02) * Fixed Model.drop_table (thanks Duane Johnson.) * Dataset#each can now return rows for arbitrary SQL by specifying :sql option. * Added spec for postgres adapter. * Fixed Model.method_missing to work with new SQL generation. * Fixed #compare_expr to support regexps. * Fixed postgres, mysql adapters to support regexps. * More specs for block filters. Updated README. * Added support for globals and $X macros in block filters. * Fixed Sequelizer to not fail if ParseTree or Ruby2Ruby gems are missing. * Renamed String#expr into String#lit (#expr should be deprecated in future versions). * Renamed Sequel::ExpressionString into LiteralString. * Fixed Symbol#[] to return an ExpressionString, so as not to be literalized. * Renamed Dataset::Expressions to Dataset::Sequelizer. * Renamed Expressions#format_re_expression to match_expr. * Renamed Expressions#format_eq_expression to compare_expr. * Added support for Regexp in MySQL adapter. * Refactored Regexp expressions into a separate #format_re_expression method. * Added support for arithmetic in proc filters. * Added support for nested proc expressions, more specs. * Added support for SQL function using symbols, e.g. :sum[:x]. * Fixed deadlock bug in ConnectionPool. * Removed deprecated old expressions.rb. * Rewrote Proc filter feature using ParseTree. * Added support for additional functions on columns using Symbol#method_missing. * Added support for supplying filter block to DB#[] method, to allow stuff like DB[:nodes] {:path =~ /^icex1/}. === 0.1.9.12 (2007-08-26) * Added spec for PrettyTable. * Added specs for Schema::Generator and Model (#36 thanks technoweenie). * Fixed Sequel::Model.set_schema (#36 thanks technoweenie.) * Added support for no options on Schema::Generator#foreign_key (#36 thanks technoweenie.) * Implemented (restored?) Schema::Generator#primary_key_name (#36 thanks technoweenie.) * Better spec code coverage. === 0.1.9.11 (2007-08-24) * Changed Dataset#set_model to allow supplying additional arguments to the model's initialize method (#35). Thanks Sunny Hirai. === 0.1.9.10 (2007-08-22) * Changed schema generation code to generate separate statements for CREATE TABLE and each CREATE INDEX (#34). * Refactored Dataset::SQL#field_name for better support of different field quoting standards by specific adapters. * Added #current_page_record_count for paginated datasets. * Removed Database#literal and included Dataset::SQL instead. 
* Sequel::Dataset::SQL#field_name can now take a hash (as well as #select and any method that uses #field_name) for aliasing column names. E.g. DB[:test].select(:_qqa => 'Date').sql #=> 'SELECT _qqa AS Date FROM test'. * Moved SingleThreadedPool to lib/sequel/connection_pool.rb. * Changed SQLite::Dataset to return affected rows for #delete and #update (#33). * ADO adapter: Added use of Enumerable for Recordset#Fields, playing it safe and moving to the first row before getting results, and changing the auto_increment constant to work for MSSQL. === 0.1.9.9 (2007-08-18) * New ADO adapter by cdcarter (#31). * Added automatic column aliasing to #avg, #sum, #min and #max (#30). * Fixed broken Sequel::DBI::Dataset#fetch_rows (#29 thanks cdcarter.) === 0.1.9.8 (2007-08-15) * Fixed DBI adapter. === 0.1.9.7 (2007-08-15) * Added support for executing batch statements in sqlite adapter. * Changed #current_page_record_range to return 0..0 for an invalid page. * Fixed joining of aliased tables. * Improved Symbol#to_field_name to prevent false positives. * Implemented Dataset#multi_insert with :commit_every option. * More docs for Dataset#set_model. * Implemented automatic creation of convenience methods for each adapter (e.g. Sequel.sqlite etc.) === 0.1.9.6 (2007-08-13) * Refactored schema definition code. Gets rid of famous primary_key problem as well as other issues (e.g. issue #22). * Added #pagination_record_count, #page_range and #current_page_record_range for paginated datasets. * Changed MySQL adapter to automatically reconnect (issue #26). * Changed Sequel() to accept variable arity. * Added :elements option to column definition, in order to support ENUM and SET types. === 0.1.9.5 (2007-08-12) * Fixed migration docs. * Removed dependency on PGconn in Schema class. === 0.1.9.4 (2007-08-11) * Added Sequel.dbi convenience method for using DBI connection strings to open DBI databases. === 0.1.9.3 (2007-08-10) * Added support for specifying field size in schema definitions (thanks Florian Assmann.) * Added migration code based on work by Florian Assmann. * Reintroduced metaid dependency. No need to keep a local copy of it. === 0.1.9.2 (2007-07-24) * Removed metaid dependency. Re-factored requires in lib/sequel.rb. === 0.1.9.1 (2007-07-22) * Improved robustness of MySQL::Dataset#field_name. * Added Sequel.single_threaded= convenience method. === 0.1.9 (2007-07-21) * Fixed #update_sql and #insert_sql to support field quoting by calling #field_name. * Implemented automatic data type conversion in mysql adapter. * Added support for boolean literals in mysql adapter. * Added support for ORDER and LIMIT clauses in UPDATE statements in mysql adapter. * Implemented correct field quoting (using back-ticks) in mysql adapter. * Wrote basic MySQL spec. * Fixed MySQL::Dataset to return correct data types with symbols as hash keys. * Removed dysfunctional MySQL::Database#transaction. * Added support for single threaded operation. * Fixed bug in Dataset#format_eq_expression where Range objects would not be literalized correctly. * Added parens around postgres LIKE expressions using regexps. === 0.1.8 (2007-07-10) * Implemented Dataset#columns for retrieving the columns in the result set. * Updated Model with changes to how model-associated datasets work. * Beefed-up specs. Coverage is now at 95.0%. * Added support for polymorphic datasets. * The adapter dataset interface was simplified and standardized. Only four methods need be overridden: fetch_rows, update, insert and delete. * The Dataset class was refactored.
The bulk of the dataset code was moved into separate modules. * Renamed Dataset#hash_column to Dataset#to_hash. * Added some common pragmas to sqlite adapter. * Added Postgres::Dataset#analyze for EXPLAIN ANALYZE queries. * Fixed broken Postgres::Dataset#explain. === 0.1.7 * Removed db.synchronize wrapping calls in sqlite adapter. * Implemented Model.join method to restrict returned columns to the model table (thanks Pedro Gutierrez). * Implemented Dataset#paginate method. * Fixed after_destroy hook. * Improved Dataset#first and #last to accept a filter hash. * Added Dataset#[]= method. * Added Sequel() convenience method. * Fixed Dataset#first to include a LIMIT clause for a single record. * Small fix to Postgres driver to return a primary_key value for the inserted record if it is specified in the insertion values (thanks Florian Assmann and Pedro Gutierrez). * Fixed Symbol#DESC to support qualified notation (thanks Pedro Gutierrez). === 0.1.6 * Fixed Model#method_missing to raise for an invalid attribute. * Fixed PrettyTable to print model objects (thanks snok.) * Fixed ODBC timestamp conversion to return DateTime rather than Time object (thanks snok.) * Fixed Model.method_missing (thanks snok.) * Model.method_missing now creates stubs for calling Model.dataset methods. Methods like Model.each etc are removed. * Changed default join type to INNER JOIN (thanks snok.) * Added support for literal expressions, e.g. DB[:items].filter(:col1 => 'col2 - 10'.expr). * Added Dataset#and. * SQLite adapter opens a memory DB if no database is specified, e.g. Sequel.open 'sqlite:/'. * Added Dataset#or, pretty nifty. === 0.1.5 * Fixed Dataset#join to support multiple joins. Added #left_outer_join, #right_outer_join, #full_outer_join, #inner_join methods. === 0.1.4 * Added String#split_sql. * Implemented Array#to_sql and String#to_sql. Database#to_sql can now take an array of strings and convert into an SQL string. Comments and excessive white-space are removed. * Improved Schema generator to support data types as method names: DB.create_table :test do integer :abc text :def ... end * Implemented ODBC adapter. === 0.1.3 * Implemented DBI adapter. * Refactored database connection code. Now handled through Database#connect. === 0.1.2 * The first opened database is automatically assigned to Model.db. * Removed SequelConnectionError. Exception class errors are converted to RuntimeError. * Added support for UNION, INTERSECT and EXCEPT set operations. * Fixed Dataset#single_record to return nil if no record is found. * Updated specs to conform to RSpec 1.0. * Added Model#find_or_create method. * Fixed MySQL::Dataset#query_single (thanks Dries Harnie.) * Added Model.subset method. Fixed Model.filter and Model.exclude to accept blocks. * Added Database#uri method. * Refactored and removed deprecated code in postgres adapter. === 0.1.1 * More documentation for Dataset. * Added Dataset#size as alias to Dataset#count. * Changed Database#<< to call execute (instead of being an alias). Thus it will work for descendants as well. * Fixed Sequel.open to accept variable arity. * Refactored Model#refresh, Model.create. Removed Model#reload. * Refactored Model hooks. * Cleaned up Dataset API. === 0.1.0 * Changed Database#create_table to only accept a block. Nobody's gonna use the other way. * Removed Dataset#[]= method. Too confusing and not really useful. * Fixed ConnectionPool#hold to wrap exceptions only once. * Renamed Dataset#where_list to Dataset#expression_list.
* Added support for qualified fields in Proc expressions (e.g. filter {items.id == 1}.) * Added like? and in? Proc expression operators. * Added require 'date' in dataset.rb. Is this a 1.8.5 thing? * Refactored Dataset to use literal strings instead of format strings (slight performance improvement and better readability.) * Added support for literalizing Date objects. * Refactored literalization of Time objects. === 0.0.20 * Refactored Dataset where clause construction to use expressions. * Implemented Proc expressions (adapted from a great idea by Sam Smoot.) * Fixed Model#map. * Documentation for ConnectionPool. * Specs for Database. === 0.0.19 * More specs for Dataset. * Fixed Dataset#invert_order to work correctly with strings. * Fixed Model#== to check equality of values. * Added Model#exclude and Model#order. * Fixed Dataset#order and Dataset#group to behave correctly when supplied with qualified field name symbols. * Removed Database#literal. Shouldn't have been there. * Added SQLite::Dataset#explain. Returns an array of opcode hashes. * Specs for ConnectionPool. === 0.0.18 * Implemented SequelError and SequelConnectionError classes. ConnectionPool#hold now catches any connection errors and reraises them as SequelConnectionError. * Removed duplication in Database#[]. * :from and :select options are now always arrays (patch by Alex Bradbury.) * Fixed Dataset#exclude to work correctly (patch and specs by Alex Bradbury.) === 0.0.17 * Fixed Postgres::Database#tables to return table names as symbols (caused problem when using Database#table_exists?). * Fixed Dataset#from to have variable arity, like Dataset#select and Dataset#where (patch by Alex Bradbury.) * Added support for GROUP BY and HAVING clauses (patches by Alex Bradbury.) Refactored Dataset#filter. * More specs. * Refactored Dataset#where for better composability. * Added Dataset#[]= method. * Added support for DISTINCT and OFFSET clauses (patches by Alex Bradbury.) Dataset#limit now accepts ranges. Added Dataset#uniq and distinct methods. === 0.0.16 * More documentation. * Added support for subqueries in Dataset#literal. * Added support for Model.all_by_XXX methods through Model.method_missing. * Added basic SQL logging to Database. * Added Enumerable#send_each convenience method. * Changed Dataset#destroy to return the number of deleted records. === 0.0.15 * Improved Dataset#insert_sql to allow arrays as well as hashes. * Database#drop_table now accepts a list of table names. * Added Model#id to return the id column. === 0.0.14 * Fixed Model's attribute accessors (hopefully for the last time). * Changed Model.db and Model.db= to allow different databases for different model classes. * Fixed bug in aggregate methods (max, min, etc) for datasets using record classes. === 0.0.13 * Fixed Model#method_missing to do both find, filter and attribute accessors. duh. * Fixed bug in Dataset#literal when quoting arrays of strings (thanks Douglas Koszerek.) === 0.0.12 * Model#save now correctly performs an INSERT for new objects. * Added Model#reload for reloading an object from the database. * Added Dataset#naked method for getting a version of a dataset that fetches records as hashes. * Implemented attribute accessors for column values ala ActiveRecord models. * Fixed filtering using nil values (e.g. dataset.filter(:parent_id => nil)). === 0.0.11 * Renamed Model.schema to Model.set_schema and Model.get_schema to Model.schema. * Improved Model class to allow descendants of model classes (thanks Pedro Gutierrez.)
* Removed require 'postgres' in schema.rb (thanks Douglas Koszerek.) === 0.0.10 * Added some examples. * Added Dataset#print method for pretty-printing tables. === 0.0.9 * Fixed Postgres::Database#tables and #locks methods. * Added PGconn#last_insert_id method that should support all 7.x and 8.x versions of Postgresql. * Added Dataset#exists method for EXISTS where clauses. * Changed behavior of Dataset#literal to regard symbols as field names. * Refactored and DRY'd Dataset#literal and overrides thereof. Added support for subqueries in where clause. === 0.0.8 * Fixed Dataset#reverse_order to provide chainability. This method can be called without arguments to invert the current order or with arguments to provide a descending order. * Fixed literal representation of literals in SQLite adapter (thanks Christian Neukirchen!) * Refactored insert code in Postgres adapter (in preparation for fetching the last insert id for pre-8.1 versions). === 0.0.7 * Fixed bug in Model.schema, duh! === 0.0.6 * Added Dataset#sql as alias to Dataset#select_sql. * Dataset#where and Dataset#exclude can now be used for refining dataset conditions, enabling stuff like posts.where(:title => 'abcdef').exclude(:user_id => 3). * Implemented Dataset#exclude method. * Added Sequel::Schema#auto_primary_key method for setting an automatic primary key to be added to every table definition. Changed the schema generator to not define a primary key by default. * Changed Sequel::Database#table_exists? to rely on the tables method if it is available. * Implemented SQLite::Database#tables. === 0.0.5 * Added Dataset#[] method. Refactored Model#find and Model#[]. * Renamed Pool#conn_maker to Pool#connection_proc. * Added automatic require 'sequel' to all adapters for convenience. === 0.0.4 * Added preliminary MySQL support. * Code cleanup. === 0.0.3 * Add Dataset#sum method. * Added support for exclusive ranges (thanks Christian Neukirchen.) * Added sequel console for quick'n'dirty access to databases. * Fixed small bug in Dataset#qualified_field_name for better join support. === 0.0.2 * Added Sequel.open as alias to Sequel.connect. * Refactored Dataset#where_equal_condition into Dataset#where_condition, allowing arrays and ranges, e.g. posts.filter(:stamp => (3.days.ago)..(1.day.ago)), or posts.filter(:category => ['ruby', 'postgres', 'linux']). * Added Model#[]= method for changing column values and Model#save method for saving them. * Added Dataset#destroy for deleting each record individually as support for models. Renamed Model#delete to Model#destroy (and Model#destroy_all) ala ActiveRecord. * Refactored Dataset#first and Dataset#last code. These methods can now accept the number of records to fetch. === 0.0.1 * More documentation for Dataset. * Renamed Database#query to Database#dataset. * Added Dataset#insert_multiple for inserting multiple records. * Added Dataset#<< as shorthand for inserting records. * Added Database#<< method for executing arbitrary SQL. * Imported Sequel code. == Sequel::Model CHANGELOG 0.1 - 0.5.0.2 === 0.5.0.2 (2008-03-12) * More fixes for Model.associate to accept strings and symbols as class references. === 0.5.0.1 (2008-03-09) * Fixed Model.associate to accept class and class name in :class option. === 0.5 (2008-03-08) * Merged new associations branch into trunk. * Rewrote RDoc for associations. * Added has_and_belongs_to_many alias for many_to_many. * Added support for optional dataset block. * Added :order option to order association datasets.
* Added :cache option to return and cache array of objects for association. * Changed one_to_many, many_to_many associations to return dataset by default. * Added has_many, belongs_to aliases. * Refactored associations code. * Added deprecations for old-style relations. * Completed specs for new associations code. * New associations code by Jeremy Evans (replaces relations code.) === 0.4.2 (2008-02-29) * Fixed one_to_many implicit key to work correctly for namespaced classes (#167). * Fixed Model.db= to affect the underlying dataset (#183). * Fixed Model.implicit_table_name to disregard namespaces. === 0.4.1 (2008-02-10) * Implemented Model#inspect (#151). * Changed Model#method_missing to short-circuit and bypass checking #columns if the values hash already contains the relevant column (#150). * Updated to reflect changes in sequel_core (Dataset#clone_merge renamed to Dataset#clone). === 0.4 (2008-02-05) * Fixed Model#set to work with string keys (#143). * Fixed Model.create to correctly initialize instances marked as new (#135). * Fixed Model#initialize to convert string keys into symbol keys. This also fixes problem with validating objects initialized with string keys (#136). === 0.3.3 (2008-01-25) * Finalized support for virtual attributes. === 0.3.2.1 (2008-01-24) * Fixed Model.dataset to correctly set the dataset if using implicit naming or inheriting the superclass dataset (thanks celldee). === 0.3.2 (2008-01-24) * Added Model#update_with_params method with support for virtual attributes and auto-filtering of unrelated parameters, and changed Model.create_with_params to support virtual attributes (#128). * Cleaned up gem spec (#132). * Removed validations code. Now relying on validations in assistance gem. === 0.3.1 (2008-01-21) * Changed Model.dataset to use inflector to pluralize the class name into the table name. Works in similar fashion to table names in AR or DM. === 0.3 (2008-01-18) * Implemented Validatable::Errors class. * Added Model#reload as alias to Model#refresh. * Changed Model.create to accept a block (#126). * Rewrote validations. * Fixed Model#initialize to accept nil values (#115). === 0.2 (2008-01-02) * Removed deprecated Model.recreate_table method. * Removed deprecated :class and :on options from one_to_many macro. * Removed deprecated :class option from one_to_one macro. * Removed deprecated Model#pkey method. * Changed dependency to sequel_core. * Removed examples from sequel core. * Additional specs. We're now at 100% coverage. * Refactored hooks code. Hooks are now inheritable, and can be defined by supplying a block or a method name, or by overriding the hook instance method. Hook chains can now be broken by returning false (#111, #112). === 0.1 (2007-12-30) * Moved model code from sequel into separate model sub-project. ruby-sequel-4.1.1/MIT-LICENSE000066400000000000000000000020741220156535500154350ustar00rootroot00000000000000Copyright (c) 2007-2008 Sharon Rosner Copyright (c) 2008-2013 Jeremy Evans Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ruby-sequel-4.1.1/README.rdoc000066400000000000000000000700361220156535500156120ustar00rootroot00000000000000== Sequel: The Database Toolkit for Ruby Sequel is a simple, flexible, and powerful SQL database access toolkit for Ruby. * Sequel provides thread safety, connection pooling and a concise DSL for constructing SQL queries and table schemas. * Sequel includes a comprehensive ORM layer for mapping records to Ruby objects and handling associated records. * Sequel supports advanced database features such as prepared statements, bound variables, stored procedures, savepoints, two-phase commit, transaction isolation, master/slave configurations, and database sharding. * Sequel currently has adapters for ADO, Amalgalite, CUBRID, DataObjects, DB2, DBI, Firebird, IBM_DB, Informix, JDBC, MySQL, Mysql2, ODBC, OpenBase, Oracle, PostgreSQL, SQLite3, Swift, and TinyTDS. == Resources * {Website}[http://sequel.rubyforge.org] * {Blog}[http://sequel.heroku.com] * {Source code}[http://github.com/jeremyevans/sequel] * {Bug tracking}[http://github.com/jeremyevans/sequel/issues] * {Google group}[http://groups.google.com/group/sequel-talk] * {RDoc}[http://sequel.rubyforge.org/rdoc] To check out the source code: git clone git://github.com/jeremyevans/sequel.git === Contact If you have any comments or suggestions please post to the Google group. == Installation sudo gem install sequel == A Short Example require 'sequel' DB = Sequel.sqlite # memory database DB.create_table :items do primary_key :id String :name Float :price end items = DB[:items] # Create a dataset # Populate the table items.insert(:name => 'abc', :price => rand * 100) items.insert(:name => 'def', :price => rand * 100) items.insert(:name => 'ghi', :price => rand * 100) # Print out the number of records puts "Item count: #{items.count}" # Print out the average price puts "The average price is: #{items.avg(:price)}" == The Sequel Console Sequel includes an IRB console for quick access to databases (usually referred to as bin/sequel). You can use it like this: sequel sqlite://test.db # test.db in current directory You get an IRB session with the database object stored in DB. In addition to providing an IRB shell (the default behavior), bin/sequel also has support for migrating databases, dumping schema migrations, and copying databases. See the {bin/sequel guide}[link:files/doc/bin_sequel_rdoc.html] for more details. == An Introduction Sequel is designed to take the hassle away from connecting to databases and manipulating them. Sequel deals with all the boring stuff like maintaining connections, formatting SQL correctly and fetching records so you can concentrate on your application. Sequel uses the concept of datasets to retrieve data. A Dataset object encapsulates an SQL query and supports chainability, letting you fetch data using a convenient Ruby DSL that is both concise and flexible. 
For example, the following one-liner returns the average GDP for countries in the middle east region: DB[:countries].filter(:region => 'Middle East').avg(:GDP) Which is equivalent to: SELECT avg(GDP) FROM countries WHERE region = 'Middle East' Since datasets retrieve records only when needed, they can be stored and later reused. Records are fetched as hashes (or custom model objects), and are accessed using an +Enumerable+ interface: middle_east = DB[:countries].filter(:region => 'Middle East') middle_east.order(:name).each{|r| puts r[:name]} Sequel also offers convenience methods for extracting data from Datasets, such as an extended +map+ method: middle_east.map(:name) #=> ['Egypt', 'Turkey', 'Israel', ...] Or getting results as a hash via +to_hash+, with one column as key and another as value: middle_east.to_hash(:name, :area) #=> {'Israel' => 20000, 'Turkey' => 120000, ...} == Getting Started === Connecting to a database To connect to a database you simply provide Sequel.connect with a URL: require 'sequel' DB = Sequel.connect('sqlite://blog.db') The connection URL can also include such stuff as the user name, password, and port: DB = Sequel.connect('postgres://user:password@host:port/database_name') You can also specify optional parameters, such as the connection pool size, or loggers for logging SQL queries: DB = Sequel.connect("postgres://user:password@host:port/database_name", :max_connections => 10, :logger => Logger.new('log/db.log')) You can specify a block to connect, which will disconnect from the database after it completes: Sequel.connect('postgres://user:password@host:port/database_name'){|db| db[:posts].delete} === The DB convention Throughout Sequel's documentation, you will see the +DB+ constant used to refer to the Sequel::Database instance you create. This reflects the recommendation that for an app with a single Sequel::Database instance, the Sequel convention is to store the instance in the +DB+ constant. This is just a convention, it's not required, but it is recommended. Note that some frameworks that use Sequel may create the Sequel::Database instance for you, and you might not know how to access it. In most cases, you can access the Sequel::Database instance through Sequel::Model.db. === Arbitrary SQL queries You can execute arbitrary SQL code using Database#run: DB.run("create table t (a text, b text)") DB.run("insert into t values ('a', 'b')") You can also create datasets based on raw SQL: dataset = DB['select id from items'] dataset.count # will return the number of records in the result set dataset.map(:id) # will return an array containing all values of the id column in the result set You can also fetch records with raw SQL through the dataset: DB['select * from items'].each do |row| p row end You can use placeholders in your SQL string as well: name = 'Jim' DB['select * from items where name = ?', name].each do |row| p row end === Getting Dataset Instances Datasets are the primary way records are retrieved and manipulated. They are generally created via the Database#from or Database#[] methods: posts = DB.from(:posts) posts = DB[:posts] # same Datasets will only fetch records when you tell them to. They can be manipulated to filter records, change ordering, join tables, etc.. === Retrieving Records You can retrieve all records by using the +all+ method: posts.all # SELECT * FROM posts The all method returns an array of hashes, where each hash corresponds to a record. 
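For instance, assuming a posts table with id and title columns (hypothetical column names, used here only for illustration), the returned value would look something like:

  posts.all
  # => [{:id=>1, :title=>'First post'}, {:id=>2, :title=>'Second post'}]

Since each row is a plain Hash, the usual Enumerable methods work directly on the result.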
You can also iterate through records one at a time using +each+: posts.each{|row| p row} Or perform more advanced stuff: names_and_dates = posts.map([:name, :date]) old_posts, recent_posts = posts.partition{|r| r[:date] < Date.today - 7} You can also retrieve the first record in a dataset: posts.first # SELECT * FROM posts LIMIT 1 Or retrieve a single record with a specific value: posts[:id => 1] # SELECT * FROM posts WHERE id = 1 LIMIT 1 If the dataset is ordered, you can also ask for the last record: posts.order(:stamp).last # SELECT * FROM posts ORDER BY stamp DESC LIMIT 1 === Filtering Records An easy way to filter records is to provide a hash of values to match to +where+: my_posts = posts.where(:category => 'ruby', :author => 'david') # WHERE category = 'ruby' AND author = 'david' You can also specify ranges: my_posts = posts.where(:stamp => (Date.today - 14)..(Date.today - 7)) # WHERE stamp >= '2010-06-30' AND stamp <= '2010-07-07' Or arrays of values: my_posts = posts.where(:category => ['ruby', 'postgres', 'linux']) # WHERE category IN ('ruby', 'postgres', 'linux') Sequel also accepts expressions: my_posts = posts.where{stamp > Date.today << 1} # WHERE stamp > '2010-06-14' Some adapters will also let you specify Regexps: my_posts = posts.where(:category => /ruby/i) # WHERE category ~* 'ruby' You can also use an inverse filter via +exclude+: my_posts = posts.exclude(:category => ['ruby', 'postgres', 'linux']) # WHERE category NOT IN ('ruby', 'postgres', 'linux') You can also specify a custom WHERE clause using a string: posts.where('stamp IS NOT NULL') # WHERE stamp IS NOT NULL You can use parameters in your string, as well: author_name = 'JKR' posts.where('(stamp < ?) AND (author != ?)', Date.today - 3, author_name) # WHERE (stamp < '2010-07-11') AND (author != 'JKR') Datasets can also be used as subqueries: DB[:items].where('price > ?', DB[:items].select{avg(price) + 100}) # WHERE price > (SELECT avg(price) + 100 FROM items) After filtering, you can retrieve the matching records by using any of the retrieval methods: my_posts.each{|row| p row} See the {Dataset Filtering}[link:files/doc/dataset_filtering_rdoc.html] file for more details. === Security Designing apps with security in mind is a best practice. Please read the {Security Guide}[link:files/doc/security_rdoc.html] for details on security issues that you should be aware of when using Sequel. 
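As a minimal illustration of the most common issue, never interpolate untrusted input directly into an SQL string; pass it as a placeholder value or a hash condition instead (user_input below is a hypothetical variable holding untrusted data):

  # Vulnerable: user_input becomes part of the SQL text
  posts.where("name = '#{user_input}'")

  # Safe: Sequel literalizes the value for you
  posts.where('name = ?', user_input)
  posts.where(:name => user_input)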
=== Summarizing Records Counting records is easy using +count+: posts.where(Sequel.like(:category, '%ruby%')).count # SELECT COUNT(*) FROM posts WHERE category LIKE '%ruby%' And you can also query maximum/minimum values via +max+ and +min+: max = DB[:history].max(:value) # SELECT max(value) FROM history min = DB[:history].min(:value) # SELECT min(value) FROM history Or calculate a sum or average via +sum+ and +avg+: sum = DB[:items].sum(:price) # SELECT sum(price) FROM items avg = DB[:items].avg(:price) # SELECT avg(price) FROM items === Ordering Records Ordering datasets is simple using +order+: posts.order(:stamp) # ORDER BY stamp posts.order(:stamp, :name) # ORDER BY stamp, name Chaining +order+ doesn't work the same as +where+: posts.order(:stamp).order(:name) # ORDER BY name The +order_append+ method chains this way, though: posts.order(:stamp).order_append(:name) # ORDER BY stamp, name The +order_prepend+ method can be used as well: posts.order(:stamp).order_prepend(:name) # ORDER BY name, stamp You can also specify descending order: posts.reverse_order(:stamp) # ORDER BY stamp DESC posts.order(Sequel.desc(:stamp)) # ORDER BY stamp DESC === Core Extensions Note the use of Sequel.desc(:stamp) in the above example. Much of Sequel's DSL uses this style, calling methods on the Sequel module that return SQL expression objects. Sequel also ships with a {core_extensions extension}[link:files/doc/core_extensions_rdoc.html] that integrates Sequel's DSL better into the ruby language, allowing you to write: :stamp.desc instead of: Sequel.desc(:stamp) === Selecting Columns Selecting specific columns to be returned is also simple using +select+: posts.select(:stamp) # SELECT stamp FROM posts posts.select(:stamp, :name) # SELECT stamp, name FROM posts Chaining +select+ works like +order+, not +where+: posts.select(:stamp).select(:name) # SELECT name FROM posts As you might expect, there is an +order_append+ equivalent for +select+ called +select_append+: posts.select(:stamp).select_append(:name) # SELECT stamp, name FROM posts === Deleting Records Deleting records from the table is done with +delete+: posts.where('stamp < ?', Date.today - 3).delete # DELETE FROM posts WHERE stamp < '2010-07-11' Be very careful when deleting, as +delete+ affects all rows in the dataset.
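For example, calling +delete+ on an unfiltered dataset removes every row in the table:

  posts.delete
  # DELETE FROM posts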
Call +where+ first and +delete+ second: # DO THIS: posts.where('stamp < ?', Date.today - 7).delete # NOT THIS: posts.delete.where('stamp < ?', Date.today - 7) === Inserting Records Inserting records into the table is done with +insert+: posts.insert(:category => 'ruby', :author => 'david') # INSERT INTO posts (category, author) VALUES ('ruby', 'david') === Updating Records Updating records in the table is done with +update+: posts.where('stamp < ?', Date.today - 7).update(:state => 'archived') # UPDATE posts SET state = 'archived' WHERE stamp < '2010-07-07' You can reference table columns when choosing what values to set: posts.where{|o| o.stamp < Date.today - 7}.update(:backup_number => Sequel.+(:backup_number, 1)) # UPDATE posts SET backup_number = backup_number + 1 WHERE stamp < '2010-07-07' As with +delete+, +update+ affects all rows in the dataset, so +where+ first, +update+ second: # DO THIS: posts.where('stamp < ?', Date.today - 7).update(:state => 'archived') # NOT THIS: posts.update(:state => 'archived').where('stamp < ?', Date.today - 7) === Transactions You can wrap some code in a database transaction using the Database#transaction method: DB.transaction do posts.insert(:category => 'ruby', :author => 'david') posts.where('stamp < ?', Date.today - 7).update(:state => 'archived') end If the block does not raise an exception, the transaction will be committed. If the block does raise an exception, the transaction will be rolled back, and the exception will be reraised. If you want to roll back the transaction and not raise an exception outside the block, you can raise the Sequel::Rollback exception inside the block: DB.transaction do posts.insert(:category => 'ruby', :author => 'david') if posts.where('stamp < ?', Date.today - 7).update(:state => 'archived') == 0 raise Sequel::Rollback end end === Joining Tables Sequel makes it easy to join tables: order_items = DB[:items].join(:order_items, :item_id => :id). where(:order_id => 1234) # SELECT * FROM items INNER JOIN order_items # ON order_items.item_id = items.id # WHERE order_id = 1234 The important thing to note here is that item_id is automatically qualified with the table being joined, and id is automatically qualified with the last table joined. You can then do anything you like with the dataset: order_total = order_items.sum(:price) # SELECT sum(price) FROM items INNER JOIN order_items # ON order_items.item_id = items.id # WHERE order_items.order_id = 1234 == Column references in Sequel Sequel expects column names to be specified using symbols. In addition, returned hashes always use symbols as their keys. This allows you to freely mix literal values and column references in many cases. For example, the two following lines produce equivalent SQL: items.where(:x => 1) # SELECT * FROM items WHERE (x = 1) items.where(1 => :x) # SELECT * FROM items WHERE (1 = x) Ruby strings are generally treated as SQL strings: items.where(:x => 'x') # SELECT * FROM items WHERE (x = 'x') === Qualifying identifiers (column/table names) An identifier in SQL is a name that represents a column, table, or schema.
Identifiers can be qualified by using the double underscore special notation :table__column: items.literal(:items__price) # items.price Another way to qualify columns is to use the Sequel.qualify method: items.literal(Sequel.qualify(:items, :price)) # items.price While it is more common to qualify column identifiers with table identifiers, you can also qualify table identifiers with schema identifiers to select from a qualified table: posts = DB[:some_schema__posts] # SELECT * FROM some_schema.posts === Identifier aliases You can also alias identifiers by using the triple underscore special notation :column___alias or :table__column___alias: items.literal(:price___p) # price AS p items.literal(:items__price___p) # items.price AS p Another way to alias columns is to use the Sequel.as method: items.literal(Sequel.as(:price, :p)) # price AS p You can use the Sequel.as method to alias arbitrary expressions, not just identifiers: items.literal(Sequel.as(DB[:posts].select{max(id)}, :p)) # (SELECT max(id) FROM posts) AS p == Sequel Models A model class wraps a dataset, and an instance of that class wraps a single record in the dataset. Model classes are defined as regular Ruby classes inheriting from Sequel::Model: DB = Sequel.connect('sqlite://blog.db') class Post < Sequel::Model end When a model class is created, it parses the schema for the table from the database, and automatically sets up accessor methods for all of the columns in the table (Sequel::Model implements the active record pattern). Sequel model classes assume that the table name is an underscored plural of the class name: Post.table_name #=> :posts You can explicitly set the table name or even the dataset used: class Post < Sequel::Model(:my_posts) end # or: Post.set_dataset :my_posts If you call +set_dataset+ with a symbol, it assumes you are referring to the table with the same name. You can also call it with a dataset, which will set the defaults for all retrievals for that model: Post.set_dataset DB[:my_posts].where(:category => 'ruby') Post.set_dataset DB[:my_posts].select(:id, :name).order(:date) === Model instances Model instances are identified by a primary key. In most cases, Sequel can query the database to determine the primary key, but if not, it defaults to using :id. The Model.[] method can be used to fetch records by their primary key: post = Post[123] The +pk+ method is used to retrieve the record's primary key value: post.pk #=> 123 Sequel models allow you to use any column as a primary key, and even composite keys made from multiple columns: class Post < Sequel::Model set_primary_key [:category, :title] end post = Post['ruby', 'hello world'] post.pk #=> ['ruby', 'hello world'] You can also define a model class that does not have a primary key via +no_primary_key+, but then you lose the ability to easily update and delete records: Post.no_primary_key A single model instance can also be fetched by specifying a condition: post = Post[:title => 'hello world'] post = Post.first{num_comments < 10} === Acts like a dataset A model class forwards many methods to the underlying dataset.
This means that you can use most of the +Dataset+ API to create customized queries that return model instances, e.g.: Post.where(:category => 'ruby').each{|post| p post} You can also manipulate the records in the dataset: Post.where{num_comments < 7}.delete Post.where(Sequel.like(:title, /ruby/)).update(:category => 'ruby') === Accessing record values A model instance stores its values as a hash with column symbol keys, which you can access directly via the +values+ method: post.values #=> {:id => 123, :category => 'ruby', :title => 'hello world'} You can read the record values as object attributes, assuming the attribute names are valid columns in the model's dataset: post.id #=> 123 post.title #=> 'hello world' If the record's attribute names are not valid columns in the model's dataset (maybe because you used +select_append+ to add a computed value column), you can use Model#[] to access the values: post[:id] #=> 123 post[:title] #=> 'hello world' You can also modify record values using attribute setters or the []= method. post.title = 'hey there' post[:title] = 'hey there' That will just change the value for the object; it will not update the row in the database. To update the database row, call the +save+ method: post.save === Mass assignment You can also set the values for multiple columns in a single method call, using one of the mass-assignment methods. See the {mass assignment guide}[link:files/doc/mass_assignment_rdoc.html] for details. For example, +set+ updates the model's column values without saving: post.set(:title=>'hey there', :updated_by=>'foo') and +update+ updates the model's column values and then saves the changes to the database: post.update(:title => 'hey there', :updated_by=>'foo') === Creating new records New records can be created by calling Model.create: post = Post.create(:title => 'hello world') Another way is to construct a new instance and save it later: post = Post.new post.title = 'hello world' post.save You can also supply a block to Model.new and Model.create: post = Post.new do |p| p.title = 'hello world' end post = Post.create{|p| p.title = 'hello world'} === Hooks You can execute custom code when creating, updating, or deleting records by defining hook methods. The +before_create+ and +after_create+ hook methods wrap record creation. The +before_update+ and +after_update+ hook methods wrap record updating. The +before_save+ and +after_save+ hook methods wrap record creation and updating. The +before_destroy+ and +after_destroy+ hook methods wrap destruction. The +before_validation+ and +after_validation+ hook methods wrap validation. Example: class Post < Sequel::Model def after_create super author.increase_post_count end def after_destroy super author.decrease_post_count end end Note the use of +super+ if you define your own hook methods. Almost all Sequel::Model class and instance methods (not just hook methods) can be overridden safely, but you have to make sure to call +super+ when doing so; otherwise you risk breaking things. For the example above, you should probably use a database trigger if you can. Hooks can be used for data integrity, but they will only enforce that integrity when you are modifying the database through model instances, and even then they are often subject to race conditions. It's best to use database triggers and constraints to enforce data integrity. === Deleting records You can delete individual records by calling +delete+ or +destroy+.
The only difference between the two methods is that +destroy+ invokes +before_destroy+ and +after_destroy+ hook methods, while +delete+ does not: post.delete # => bypasses hooks post.destroy # => runs hooks Records can also be deleted en masse by calling +delete+ and +destroy+ on the model's dataset. As stated above, you can specify filters for the deleted records: Post.where(:category => 32).delete # => bypasses hooks Post.where(:category => 32).destroy # => runs hooks Please note that if +destroy+ is called, each record is deleted separately, but +delete+ deletes all matching records with a single SQL query. === Associations Associations are used to specify relationships between model classes that reflect relationships between tables in the database, which are usually specified using foreign keys. You specify model associations via the +many_to_one+, +one_to_one+, +one_to_many+, and +many_to_many+ class methods: class Post < Sequel::Model many_to_one :author one_to_many :comments many_to_many :tags end +many_to_one+ and +one_to_one+ create a getter and setter for each model object: post = Post.create(:name => 'hi!') post.author = Author[:name => 'Sharon'] post.author +one_to_many+ and +many_to_many+ create a getter method, a method for adding an object to the association, a method for removing an object from the association, and a method for removing all associated objects from the association: post = Post.create(:name => 'hi!') post.comments comment = Comment.create(:text=>'hi') post.add_comment(comment) post.remove_comment(comment) post.remove_all_comments tag = Tag.create(:tag=>'interesting') post.add_tag(tag) post.remove_tag(tag) post.remove_all_tags Note that the remove_* and remove_all_* methods do not delete the object from the database; they merely disassociate the associated object from the receiver. All associations add a dataset method that can be used to further filter or reorder the returned objects, or modify all of them: # Delete all of this post's comments from the database post.comments_dataset.destroy # Return all tags related to this post with no subscribers, ordered by the tag's name post.tags_dataset.where(:subscribers=>0).order(:name).all === Eager Loading Associations can be eagerly loaded via +eager+ and the :eager association option. Eager loading is used when loading a group of objects. It loads all associated objects for all of the current objects in one query, instead of using a separate query to get the associated objects for each current object. Eager loading requires that you retrieve all model objects at once via +all+ (instead of individually by +each+). Eager loading can be cascaded, loading an association's associated objects. class Person < Sequel::Model one_to_many :posts, :eager=>[:tags] end class Post < Sequel::Model many_to_one :person one_to_many :replies many_to_many :tags end class Tag < Sequel::Model many_to_many :posts many_to_many :replies end class Reply < Sequel::Model many_to_one :person many_to_one :post many_to_many :tags end # Eager loading via .eager Post.eager(:person).all # eager is a dataset method, so it works with filters/orders/limits/etc.
Post.where{topic > 'M'}.order(:date).limit(5).eager(:person).all person = Person.first # Eager loading via :eager (will eagerly load the tags for this person's posts) person.posts # These are equivalent Post.eager(:person, :tags).all Post.eager(:person).eager(:tags).all # Cascading via .eager Tag.eager(:posts=>:replies).all # Will also grab all associated posts' tags (because of :eager) Reply.eager(:person=>:posts).all # No depth limit (other than memory/stack), and will also grab posts' tags # Loads all people, their posts, their posts' tags, replies to those posts, # the person for each reply, the tag for each reply, and all posts and # replies that have that tag. Uses a total of 8 queries. Person.eager(:posts=>{:replies=>[:person, {:tags=>[:posts, :replies]}]}).all In addition to using +eager+, you can also use +eager_graph+, which will use a single query to get the object and all associated objects. This may be necessary if you want to filter or order the result set based on columns in associated tables. It works with cascading as well; the API is very similar. Note that using +eager_graph+ to eagerly load multiple *_to_many associations will cause the result set to be a Cartesian product, so you should be very careful with your filters when using it in that case. You can dynamically customize the eagerly loaded dataset by using a proc. This proc is passed the dataset used for eager loading, and should return a modified copy of that dataset: # Eagerly load only replies containing 'foo' Post.eager(:replies=>proc{|ds| ds.where(Sequel.like(:text, '%foo%'))}).all This also works when using +eager_graph+, in which case the proc is called with the dataset to graph into the current dataset: Post.eager_graph(:replies=>proc{|ds| ds.where(Sequel.like(:text, '%foo%'))}).all You can dynamically customize eager loads for both +eager+ and +eager_graph+ while also cascading, by making the value a single-entry hash with the proc as a key, and the cascaded associations as the value: # Eagerly load only replies containing 'foo', and the person and tags for those replies Post.eager(:replies=>{proc{|ds| ds.where(Sequel.like(:text, '%foo%'))}=>[:person, :tags]}).all === Extending the underlying dataset The recommended way to implement table-wide logic is by defining methods on the dataset using +dataset_module+: class Post < Sequel::Model dataset_module do def posts_with_few_comments where{num_comments < 30} end def clean_posts_with_few_comments posts_with_few_comments.delete end end end This allows you to have access to your model API from filtered datasets as well: Post.where(:category => 'ruby').clean_posts_with_few_comments Sequel models also provide a +subset+ class method that creates a dataset method with a simple filter: class Post < Sequel::Model subset(:posts_with_few_comments){num_comments < 30} subset :invisible, Sequel.~(:visible) end === Model Validations You can define a +validate+ method for your model, which +save+ will check before attempting to save the model in the database. If an attribute of the model isn't valid, you should add an error message for that attribute to the model object's +errors+. If an object has any errors added by the validate method, +save+ will raise an error or return false depending on how it is configured (the +raise_on_save_failure+ flag). class Post < Sequel::Model def validate super errors.add(:name, "can't be empty") if name.empty?
errors.add(:written_on, "should be in the past") if written_on >= Time.now end end ruby-sequel-4.1.1/Rakefile000066400000000000000000000167641220156535500154610ustar00rootroot00000000000000require "rake" require "rake/clean" NAME = 'sequel' VERS = lambda do require File.expand_path("../lib/sequel/version", __FILE__) Sequel.version end CLEAN.include ["**/.*.sw?", "sequel-*.gem", ".config", "rdoc", "coverage", "www/public/*.html", "www/public/rdoc*", '**/*.rbc'] SUDO = ENV['SUDO'] || 'sudo' # Gem Packaging and Release desc "Build sequel gem" task :package=>[:clean] do |p| sh %{#{FileUtils::RUBY} -S gem build sequel.gemspec} end desc "Install sequel gem" task :install=>[:package] do sh %{#{SUDO} #{FileUtils::RUBY} -S gem install ./#{NAME}-#{VERS.call} --local} end desc "Uninstall sequel gem" task :uninstall=>[:clean] do sh %{#{SUDO} #{FileUtils::RUBY} -S gem uninstall #{NAME}} end desc "Publish sequel gem to rubygems.org" task :release=>[:package] do sh %{#{FileUtils::RUBY} -S gem push ./#{NAME}-#{VERS.call}.gem} end ### Website desc "Make local version of website" task :website do sh %{#{FileUtils::RUBY} www/make_www.rb} end desc "Update Non-RDoc section of sequel.rubyforge.org" task :website_rf_base=>[:website] do sh %{rsync -rt www/public/*.html rubyforge.org:/var/www/gforge-projects/sequel/} end ### RDoc RDOC_DEFAULT_OPTS = ["--line-numbers", "--inline-source", '--title', 'Sequel: The Database Toolkit for Ruby'] allow_website_rdoc = begin # Sequel uses hanna-nouveau for the website RDoc. # Due to bugs in older versions of RDoc, and the # fact that hanna-nouveau does not support RDoc 4, # a specific version of rdoc is required. gem 'rdoc', '= 3.12.2' gem 'hanna-nouveau' RDOC_DEFAULT_OPTS.concat(['-f', 'hanna']) true rescue Gem::LoadError false end rdoc_task_class = begin require "rdoc/task" RDoc::Task rescue LoadError begin require "rake/rdoctask" Rake::RDocTask rescue LoadError, StandardError end end if rdoc_task_class RDOC_OPTS = RDOC_DEFAULT_OPTS + ['--main', 'README.rdoc'] rdoc_task_class.new do |rdoc| rdoc.rdoc_dir = "rdoc" rdoc.options += RDOC_OPTS rdoc.rdoc_files.add %w"README.rdoc CHANGELOG MIT-LICENSE lib/**/*.rb doc/*.rdoc doc/release_notes/*.txt" end if allow_website_rdoc desc "Make rdoc for website" task :website_rdoc=>[:website_rdoc_main, :website_rdoc_adapters, :website_rdoc_plugins] rdoc_task_class.new(:website_rdoc_main) do |rdoc| rdoc.rdoc_dir = "www/public/rdoc" rdoc.options += RDOC_OPTS + %w'--no-ignore-invalid' rdoc.rdoc_files.add %w"README.rdoc CHANGELOG MIT-LICENSE lib/*.rb lib/sequel/*.rb lib/sequel/{connection_pool,dataset,database,model}/*.rb doc/*.rdoc doc/release_notes/*.txt lib/sequel/extensions/migration.rb lib/sequel/extensions/core_extensions.rb" end rdoc_task_class.new(:website_rdoc_adapters) do |rdoc| rdoc.rdoc_dir = "www/public/rdoc-adapters" rdoc.options += RDOC_DEFAULT_OPTS + %w'--main Sequel --no-ignore-invalid' rdoc.rdoc_files.add %w"lib/sequel/adapters/**/*.rb" end rdoc_task_class.new(:website_rdoc_plugins) do |rdoc| rdoc.rdoc_dir = "www/public/rdoc-plugins" rdoc.options += RDOC_DEFAULT_OPTS + %w'--main Sequel --no-ignore-invalid' rdoc.rdoc_files.add %w"lib/sequel/{extensions,plugins}/**/*.rb" end desc "Update sequel.rubyforge.org" task :website_rf=>[:website, :website_rdoc] do sh %{rsync -rvt www/public/* rubyforge.org:/var/www/gforge-projects/sequel/} end end end ### Specs begin begin # RSpec 1 require "spec/rake/spectask" spec_class = Spec::Rake::SpecTask spec_files_meth = :spec_files= spec_opts_meth = :spec_opts= rescue LoadError # RSpec 2 require 
"rspec/core/rake_task" spec_class = RSpec::Core::RakeTask spec_files_meth = :pattern= spec_opts_meth = :rspec_opts= end spec = lambda do |name, files, d| lib_dir = File.join(File.dirname(File.expand_path(__FILE__)), 'lib') ENV['RUBYLIB'] ? (ENV['RUBYLIB'] += ":#{lib_dir}") : (ENV['RUBYLIB'] = lib_dir) desc "#{d} with -w, some warnings filtered" task "#{name}_w" do ENV['RUBYOPT'] ? (ENV['RUBYOPT'] += " -w") : (ENV['RUBYOPT'] = '-w') sh "#{FileUtils::RUBY} -S rake #{name} 2>&1 | egrep -v \"(spec/.*: warning: (possibly )?useless use of == in void context|: warning: instance variable @.* not initialized|: warning: method redefined; discarding old|: warning: previous definition of)|rspec\"" end desc d spec_class.new(name) do |t| t.send spec_files_meth, files t.send spec_opts_meth, ENV['SEQUEL_SPEC_OPTS'].split if ENV['SEQUEL_SPEC_OPTS'] end end spec_with_cov = lambda do |name, files, d, &b| spec.call(name, files, d) if RUBY_VERSION < '1.9' t = spec.call("#{name}_cov", files, "#{d} with coverage") t.rcov = true t.rcov_opts = File.file?("spec/rcov.opts") ? File.read("spec/rcov.opts").split("\n") : [] b.call(t) if b else desc "#{d} with coverage" task "#{name}_cov" do ENV['COVERAGE'] = '1' Rake::Task[name].invoke end end t end task :default => [:spec] spec_with_cov.call("spec", Dir["spec/{core,model}/*_spec.rb"], "Run core and model specs"){|t| t.rcov_opts.concat(%w'--exclude "lib/sequel/(adapters/([a-ln-z]|m[a-np-z])|extensions/core_extensions)"')} spec.call("spec_bin", ["spec/bin_spec.rb"], "Run bin/sequel specs") spec.call("spec_core", Dir["spec/core/*_spec.rb"], "Run core specs") spec.call("spec_model", Dir["spec/model/*_spec.rb"], "Run model specs") spec.call("_spec_model_no_assoc", Dir["spec/model/*_spec.rb"].delete_if{|f| f =~ /association|eager_loading/}, '') spec_with_cov.call("spec_core_ext", ["spec/core_extensions_spec.rb"], "Run core extensions specs"){|t| t.rcov_opts.concat(%w'--exclude "lib/sequel/([a-z_]+\.rb|adapters|connection_pool|database|dataset|model)"')} spec_with_cov.call("spec_plugin", Dir["spec/extensions/*_spec.rb"].sort_by{rand}, "Run extension/plugin specs"){|t| t.rcov_opts.concat(%w'--exclude "lib/sequel/([a-z_]+\.rb|adapters|connection_pool|database|dataset|model)"')} spec_with_cov.call("spec_integration", Dir["spec/integration/*_test.rb"], "Run integration tests") %w'postgres sqlite mysql informix oracle firebird mssql db2'.each do |adapter| spec_with_cov.call("spec_#{adapter}", ["spec/adapters/#{adapter}_spec.rb"] + Dir["spec/integration/*_test.rb"], "Run #{adapter} specs"){|t| t.rcov_opts.concat(%w'--exclude "lib/sequel/([a-z_]+\.rb|connection_pool|database|dataset|model|extensions|plugins)"')} end task :spec_travis=>[:spec, :spec_plugin, :spec_core_ext] do if defined?(RUBY_ENGINE) && RUBY_ENGINE == 'jruby' ENV['SEQUEL_SQLITE_URL'] = "jdbc:sqlite::memory:" ENV['SEQUEL_POSTGRES_URL'] = "jdbc:postgresql://localhost/sequel_test?user=postgres" ENV['SEQUEL_MYSQL_URL'] = "jdbc:mysql://localhost/sequel_test?user=root" else ENV['SEQUEL_SQLITE_URL'] = "sqlite:/" ENV['SEQUEL_POSTGRES_URL'] = "postgres://localhost/sequel_test?user=postgres" ENV['SEQUEL_MYSQL_URL'] = "mysql2://localhost/sequel_test?user=root" end Rake::Task['spec_sqlite'].invoke Rake::Task['spec_postgres'].invoke Rake::Task['spec_mysql'].invoke end desc "Run model specs without the associations code" task :spec_model_no_assoc do ENV['SEQUEL_NO_ASSOCIATIONS'] = '1' Rake::Task['_spec_model_no_assoc'].invoke end rescue LoadError task :default do puts "Must install rspec to run the default task (which runs specs)" 
end end desc "Print Sequel version" task :version do puts VERS.call end desc "Check syntax of all .rb files" task :check_syntax do Dir['**/*.rb'].each{|file| print `#{FileUtils::RUBY} -c #{file} | fgrep -v "Syntax OK"`} end ruby-sequel-4.1.1/bin/000077500000000000000000000000001220156535500145465ustar00rootroot00000000000000ruby-sequel-4.1.1/bin/sequel000077500000000000000000000146041220156535500157760ustar00rootroot00000000000000#!/usr/bin/env ruby require 'rubygems' require 'optparse' require 'sequel' code = nil copy_databases = nil dump_migration = nil dump_schema = nil env = nil migrate_dir = nil migrate_ver = nil backtrace = nil test = true load_dirs = [] exclusive_options = [] loggers = [] options = OptionParser.new do |opts| opts.banner = "Sequel: The Database Toolkit for Ruby" opts.define_head "Usage: sequel [options] [file]" opts.separator "" opts.separator "Examples:" opts.separator " sequel sqlite://blog.db" opts.separator " sequel postgres://localhost/my_blog" opts.separator " sequel config/database.yml" opts.separator "" opts.separator "For more information see http://sequel.rubyforge.org" opts.separator "" opts.separator "Options:" opts.on_tail("-h", "-?", "--help", "Show this message") do puts opts exit end opts.on("-c", "--code CODE", "run the given code and exit") do |v| code = v exclusive_options << :c end opts.on("-C", "--copy-databases", "copy one database to another") do copy_databases = true end opts.on("-d", "--dump-migration", "print database migration to STDOUT") do dump_migration = true exclusive_options << :d end opts.on("-D", "--dump-migration-same-db", "print database migration to STDOUT without type translation") do dump_migration = :same_db exclusive_options << :D end opts.on("-e", "--env ENV", "use environment config for database") do |v| env = v end opts.on("-E", "--echo", "echo SQL statements") do require 'logger' loggers << Logger.new($stdout) end opts.on("-I", "--include dir", "specify $LOAD_PATH directory") do |v| $: << v end opts.on("-l", "--log logfile", "log SQL statements to log file") do |v| require 'logger' loggers << Logger.new(v) end opts.on("-L", "--load-dir DIR", "loads all *.rb under specified directory") do |v| load_dirs << v end opts.on("-m", "--migrate-directory DIR", "run the migrations in directory") do |v| migrate_dir = v exclusive_options << :m end opts.on("-M", "--migrate-version VER", "migrate the database to the version given") do |v| migrate_ver = Integer(v) end opts.on("-N", "--no-test-connection", "do not test the connection") do test = false end opts.on("-r", "--require LIB", "require the library, before executing your script") do |v| load_dirs << [v] end opts.on("-S", "--dump-schema filename", "dump the schema for all tables to the file") do |v| dump_schema = v exclusive_options << :S end opts.on("-t", "--trace", "Output the full backtrace if an exception is raised") do backtrace = true end opts.on_tail("-v", "--version", "Show version") do puts "sequel #{Sequel.version}" exit end end opts = options opts.parse! db = ARGV.shift error_proc = lambda do |msg| $stderr.puts(msg) exit 1 end error_proc["Error: Must specify -m if using -M"] if migrate_ver && !migrate_dir error_proc["Error: Cannot specify #{exclusive_options.map{|v| "-#{v}"}.join(' and ')} together"] if exclusive_options.length > 1 connect_proc = lambda do |database| db = if database.nil? || database.empty?
Sequel.connect('mock:///') elsif File.exist?(database) require 'yaml' env ||= "development" db_config = YAML.load_file(database) db_config = db_config[env] || db_config[env.to_sym] || db_config db_config.keys.each{|k| db_config[k.to_sym] = db_config.delete(k)} Sequel.connect(db_config) else Sequel.connect(database) end db.loggers = loggers db.test_connection if test db end begin DB = connect_proc[db] load_dirs.each{|d| d.is_a?(Array) ? require(d.first) : Dir["#{d}/**/*.rb"].each{|f| load(f)}} if migrate_dir Sequel.extension :migration, :core_extensions Sequel::Migrator.apply(DB, migrate_dir, migrate_ver) exit end if dump_migration DB.extension :schema_dumper puts DB.dump_schema_migration(:same_db=>dump_migration==:same_db) exit end if dump_schema DB.extension :schema_caching DB.tables.each{|t| DB.schema(Sequel::SQL::Identifier.new(t))} DB.dump_schema_cache(dump_schema) exit end if copy_databases Sequel.extension :migration DB.extension :schema_dumper db2 = ARGV.shift error_proc["Error: Must specify database connection string or path to yaml file as the second argument for the database you want to copy to"] if db2.nil? || db2.empty? start_time = Time.now TO_DB = connect_proc[db2] same_db = DB.database_type==TO_DB.database_type index_opts = {:same_db=>same_db} index_opts[:index_names] = :namespace if !DB.global_index_namespace? && TO_DB.global_index_namespace? puts "Database connections successful" schema_migration = eval(DB.dump_schema_migration(:indexes=>false, :same_db=>same_db)) index_migration = eval(DB.dump_indexes_migration(index_opts)) fk_migration = eval(DB.dump_foreign_key_migration(:same_db=>same_db)) puts "Migrations dumped successfully" schema_migration.apply(TO_DB, :up) puts "Tables created" puts "Begin copying data" DB.transaction do TO_DB.transaction do DB.tables.each do |table| puts "Begin copying records for table: #{table}" time = Time.now to_ds = TO_DB.from(table) j = 0 DB.from(table).each do |record| if Time.now - time > 5 puts "Status: #{j} records copied" time = Time.now end to_ds.insert(record) j += 1 end puts "Finished copying #{j} records for table: #{table}" end end end puts "Finished copying data" puts "Begin creating indexes" index_migration.apply(TO_DB, :up) puts "Finished creating indexes" puts "Begin adding foreign key constraints" fk_migration.apply(TO_DB, :up) puts "Finished adding foreign key constraints" if TO_DB.database_type == :postgres TO_DB.tables.each{|t| TO_DB.reset_primary_key_sequence(t)} puts "Primary key sequences reset successfully" end puts "Database copy finished in #{Time.now - start_time} seconds" exit end if code eval(code) exit end rescue => e raise e if backtrace error_proc["Error: #{e.class}: #{e.message}#{e.backtrace.first}"] end if !ARGV.empty? ARGV.each{|v| load(v)} elsif !$stdin.isatty eval($stdin.read) else require 'irb' puts "Your database is stored in DB..." IRB.start end ruby-sequel-4.1.1/doc/000077500000000000000000000000001220156535500145435ustar00rootroot00000000000000ruby-sequel-4.1.1/doc/active_record.rdoc000066400000000000000000001153611220156535500202340ustar00rootroot00000000000000= Sequel for ActiveRecord Users This guide is aimed at helping ActiveRecord users transition to Sequel. It assumes you are familiar with ActiveRecord 2, but if you are familiar with a newer ActiveRecord version, the transition should be even easier.
== Introduction Both Sequel and ActiveRecord use the active record pattern of database access, where model instances are objects that wrap a row in a database table or view, encapsulating the database access, and adding domain logic on that data. Just like ActiveRecord, Sequel supports both associations and inheritance, though Sequel does so in a more flexible manner than ActiveRecord. Let's quickly run through the ActiveRecord README and show how it applies to Sequel. == Automatic Mapping Just like ActiveRecord, Sequel maps classes to tables and automatically creates accessor methods for the columns in the table, so if you have an albums table with a primary key named "id" and a string/varchar column named "name", the minimal model class is: class Album < Sequel::Model end Sequel will autogenerate the column accessors, so you can do: album = Album.new album.name = 'RF' If the table name for the class doesn't match the default one Sequel will choose, you can specify it manually: class Album < Sequel::Model(:records) end == Associations Sequel supports most of the same association types as ActiveRecord, but it uses names that reflect the database relationships instead of ones that imply ownership: class Album < Sequel::Model many_to_one :artist one_to_many :tracks many_to_many :tags end == Compositions Sequel's +composition+ plugin allows you to easily create accessor methods that are composed of one or more of the database's columns, similar to ActiveRecord's +composed_of+: class Artist < Sequel::Model plugin :composition composition :address, :mapping=>[:street, :city, :state, :zip] end == Validations Sequel's +validation_class_methods+ plugin is modeled directly on ActiveRecord's validations, but the recommended approach is to use the +validation_helpers+ plugin inside a +validate+ instance method: class Album < Sequel::Model plugin :validation_helpers def validate super validates_presence [:name, :copies_sold] validates_unique [:name, :artist_id] end end == Hooks/Callbacks Sequel's +hook_class_methods+ plugin is modeled directly on ActiveRecord's callbacks, but the recommended approach is to define your hooks/callbacks as instance methods: class Album < Sequel::Model def before_create self.copies_sold ||= 0 super end def after_update super AuditLog.create(:log=>"Updated Album #{id}") end end Observers can be implemented completely by hooks, so Sequel doesn't offer a separate observer class. 
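For example, observer-style logging shared between models can be sketched as a plain Ruby module included into each model class (Auditing and AuditLog are hypothetical names, not part of Sequel):

  module Auditing
    # Runs after every successful save; super invokes the default hook
    def after_save
      super
      AuditLog.create(:model_name=>model.name, :record_pk=>pk.to_s)
    end
  end

  class Album < Sequel::Model
    include Auditing
  end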
== Inheritance Sequel supports both single table inheritance and class table inheritance using plugins: class Employee < Sequel::Model plugin :single_table_inheritance, :class_name_column # or plugin :class_table_inheritance end class Staff < Employee end class Manager < Employee end class Executive < Manager end == Transactions Sequel supports transactions via the Database object (we recommend using the DB constant for single database applications): DB.transaction do album.artist.num_albums -= 1 album.artist.save album.delete end For model classes, you can always access the database via +db+: Album.db.transaction do album.artist.num_albums -= 1 album.artist.save album.delete end == Reflection Just like ActiveRecord, Sequel has full reflection support for columns, associations, and many other things: Album.columns # => [:id, :name, :artist_id, :copies_sold] reflection = Album.association_reflection(:artist) reflection[:type] == :many_to_one == Direct Manipulation Just like ActiveRecord, Sequel doesn't use sessions, it lets you modify objects and have them be saved inside the call to +save+: album = Album[1234] # modify album album.save == Database Abstraction Sequel supports far more database abstractions than ActiveRecord, and setting up the database connection is easy: DB = Sequel.sqlite # memory database DB = Sequel.connect('postgres://user:pass@host/database') # connection string DB = Sequel.connect(:adapter=>'postgres', :user=>'?', :password=>'?', :host=>'?', :database=>'?') # option hash == Logging Sequel supports logging of all database queries by allowing multiple loggers for each database: DB.loggers << Logger.new($stdout) == Migrations Sequel supports migrations and has a migrator similar to ActiveRecord: Sequel.migration do change do create_table(:albums) do primary_key :id String :name end end end == Differences By now, you should have the idea that Sequel supports most things that ActiveRecord supports. The rest of this guide is going to go over how Sequel differs from ActiveRecord in terms of operation. === Method Chaining Unlike ActiveRecord 2 (and similar to ActiveRecord 3+), Sequel uses method chains on datasets for retrieving objects, so instead of: Album.find(:all, :conditions=>['name > ? AND artist_id = ?', 'RF', 1], :order=>'copies_sold', :select=>'id, name') Sequel uses: Album.where{name > 'RF'}.where(:artist_id=>1).order(:copies_sold). select(:id, :name).all Note that the records aren't retrieved until +all+ is called. ActiveRecord 3 adopts this method chaining approach, so if you are familiar with it, it should be even easier to transition to Sequel. === No Need for SQL String Fragments Like the example above, most ActiveRecord code uses SQL string fragments. With Sequel, you rarely need to. Sequel's DSL allows you to create complex queries without ever specifying SQL string fragments (called literal strings in Sequel). If you want to use SQL string fragments, Sequel makes it easy by using the Sequel.lit method: Album.select(Sequel.lit('id, name')) This usage is not encouraged, though. The recommended way is to use symbols to represent the columns: Album.select(:id, :name) Sequel keeps datasets in an abstract format, allowing for powerful capabilities. For example, let's say you wanted to join to the artists table. Sequel can automatically qualify all references in the current dataset, so that it can be safely joined: Album.select(:id, :name).qualify.join(:artists, :id=>:artist_id) This isn't possible when you use an SQL string fragment. 
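Assuming the usual albums/artists schema with an albums.artist_id foreign key, the generated SQL for the qualified join above would look roughly like:

  Album.select(:id, :name).qualify.join(:artists, :id=>:artist_id)
  # SELECT albums.id, albums.name FROM albums
  # INNER JOIN artists ON (artists.id = albums.artist_id)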
Another case where using an SQL string fragment hurts you is when the SQL syntax cannot handle all cases: Album.filter('id NOT IN ?', ids_array) That will work fine if +ids_array+ is not empty, but will not work correctly if it is empty. With Sequel, you do: Album.exclude(:id=>ids_array) That will handle cases where +ids_array+ is empty correctly. A third reason to not use SQL string fragments is database independence. For example, if you want a case-insensitive search that works on both PostgreSQL and MySQL, the following won't work: Album.filter('name LIKE ?', 'A%') This is because LIKE is case sensitive on PostgreSQL, but case insensitive on MySQL. With Sequel, you would do: Album.filter(Sequel.ilike(:name, 'A%')) This will do a case-insensitive search on both databases. If you want a case-sensitive search on both, you can use +like+ instead of +ilike+. String concatenation is a similar area, where MySQL and PostgreSQL handle things differently. With Sequel, the same code can work on both databases: Album.select(Sequel.join([:name, ' - Name'])) == Flexible Overriding Unlike ActiveRecord 2, which forces you to alias methods if you want to override them, with Sequel you just override the methods and call super: class Sequel::Model def after_update super AuditLog.create(:log=>"#{model.name} with primary key #{pk} updated") end end With that code, you have enabled auditing for all model object updates. Let's say you want to override accessor methods. In Sequel, instead of using +read_attribute+ and +write_attribute+, you can just call super: class Track < Sequel::Model # database holds length in integer seconds, # but you want it in minutes as a float def length=(minutes) super((minutes*60).to_i) end def length super/60.0 end end You can override almost all model class or instance methods this way, just remember to call +super+. == +method_missing+ Missing Sequel does not use +method_missing+ unless it's required that the object respond to potentially any method. Neither Sequel::Model nor Sequel::Dataset nor Sequel::Database implements +method_missing+ at either a class or instance level. So if you call +methods+, you can see which methods are available, and if they aren't listed, then the object won't respond to them. Among other things, this means Sequel does not support dynamic finders. So instead of: Album.find_or_create_by_name("RF") You just use: Album.find_or_create(:name=>"RF") At the instance level, this means that if you select columns that aren't in the model's table, you need to use Model#[] to access them: album = Album.join(:artists, :id=>:artist_id). select(:albums__id, :albums__name, :artists__name___artist).first # SELECT albums.id, albums.name, artists.name AS artist album.artist # Error! album[:artist] # Works == Associations Sequel associations are similar to ActiveRecord associations in some ways, and much different in others. Sequel provides four association creation methods that map to ActiveRecord's associations: ActiveRecord :: Sequel +belongs_to+ :: +many_to_one+ +has_one+ :: +one_to_one+ +has_many+ :: +one_to_many+ +has_and_belongs_to_many+ :: +many_to_many+ Like ActiveRecord, when you create an association in Sequel, it creates an instance method with the same name that returns either the matching object or nil for the *_to_one associations, or an array of matching objects for the *_to_many associations. Updating *_to_many associations is very different, however.
ActiveRecord makes the association method return an association proxy that looks like an array, but has a bunch of added methods to manipulate the associated records. Sequel uses instance methods on the object instead of a proxy to modify the association. Here's a basic example: Artist.one_to_many :albums Album.many_to_one :artist artist = Artist[1] album = Album[1] artist.albums # array of albums album.artist # Artist instance or nil artist.add_album(album) # associate album to artist artist.remove_album(album) # disassociate album from artist artist.remove_all_albums # disassociate all albums from artist Sequel doesn't have a has_many :through association; instead, you can use a +many_to_many+ association in most cases. Sequel ships with a +many_through_many+ plugin that allows you to set up a many-to-many relationship through an arbitrary number of join tables. Sequel doesn't come with support for polymorphic associations. Using polymorphic associations is generally bad from a database design perspective, as it violates referential integrity. If you have an old database and must have polymorphic associations, there is an external +sequel_polymorphic+ plugin that can handle them, just by using the standard association options provided by Sequel. Sequel doesn't directly support creating a bunch of associated objects and delaying saving them to the database until the main object is saved, like you can with the association.build methods in ActiveRecord. You can use +before_save+ or +after_save+ hooks, or the +nested_attributes+ or +instance_hooks+ plugins to get similar support. Sequel supports the same basic association hooks/callbacks as ActiveRecord. It also supports :after_load, which is run after the associated objects are loaded. For *_to_one associations, it supports +before_set+ and +after_set+ hooks, since a setter method is used instead of an add/remove method pair. If you pass a block to an association method, it's used to return a modified dataset used for the association, instead of to create an association extension: Artist.one_to_many :gold_albums, :class=>:Album do |ds| ds.where{copies_sold > 500000} end If you want to create an association extension, you can use the :extend association option with a module, which ActiveRecord also supports. In Sequel, the extensions are applied to the association dataset, not to the array of associated objects. You can access the association dataset using the +association_dataset+ method: artist.albums_dataset album.artist_dataset Association datasets are just like any other Sequel dataset, in that you can filter them and manipulate them further: gold_albums = artist.albums_dataset.where{copies_sold > 500000}.order(:name).all Sequel caches associated objects similarly to ActiveRecord, and you can skip the cache by passing +true+ to the association method, just like ActiveRecord. === Eager Loading ActiveRecord 2 tries to guess whether to use preloading or JOINs for eager loading by scanning the SQL string fragments you provide for table names. This is error-prone and Sequel avoids it by giving you separate methods. In Sequel, +eager+ is used for preloading and +eager_graph+ is used for JOINs. Both have the same API: Artist.eager(:albums=>[:tags, :tracks]) Album.eager_graph(:artist, :tracks) With either way of eager loading, you must call +all+ to retrieve all records at once. You cannot use +each+, +map+, or one of the other Enumerable methods.
Just like +each+, +all+ takes a block that iterates over the records: Artist.eager(:albums=>[:tags, :tracks]).each{|a| ...} # No cookie Artist.eager(:albums=>[:tags, :tracks]).all{|a| ...} # Cookie Like ActiveRecord, Sequel supports cascading of eager loading for both methods of eager loading. Unlike ActiveRecord, Sequel allows you to eager load custom associations using the :eager_loader and :eager_grapher association options. See the {Advanced Associations guide}[link:files/doc/advanced_associations_rdoc.html] for more details. Table aliasing when eager loading via +eager_graph+ is different in Sequel than in ActiveRecord. Sequel will always attempt to use the association name, not the table name, for any associations. If the association name has already been used, Sequel will append _N to it, where N starts at 0 and increases by 1. For example, for a self-referential association: class Node < Sequel::Model many_to_one :parent, :class=>self one_to_many :children, :class=>self, :key=>:parent_id end Node.eager_graph(:parent=>:parent, :children=>{:children=>:children}).all # SELECT nodes.id, nodes.parent_id, -- main table # parent.id AS parent_id_0, parent.parent_id AS parent_parent_id, -- parent # parent_0.id AS parent_0_id, parent_0.parent_id AS parent_0_parent_id, -- grandparent # children.id AS children_id, children.parent_id AS children_parent_id, -- children # children_0.id AS children_0_id, children_0.parent_id AS children_0_parent_id, -- grandchildren # children_1.id AS children_1_id, children_1.parent_id AS children_1_parent_id -- great grandchildren # FROM nodes -- main table # LEFT OUTER JOIN nodes AS parent ON (parent.id = nodes.parent_id) -- parent # LEFT OUTER JOIN nodes AS parent_0 ON (parent_0.id = parent.parent_id) -- grandparent # LEFT OUTER JOIN nodes AS children ON (children.parent_id = nodes.id) -- children # LEFT OUTER JOIN nodes AS children_0 ON (children_0.parent_id = children.id) -- grandchildren # LEFT OUTER JOIN nodes AS children_1 ON (children_1.parent_id = children_0.id) -- great grandchildren You can specify aliases on a per-join basis, too: Node.eager_graph(:parent=>Sequel.as(:parent, :grandparent), :children=>{Sequel.as(:children, :grandchildren)=>Sequel.as(:children, :great_grandchildren)}).all # SELECT nodes.id, nodes.parent_id, # parent.id AS parent_id_0, parent.parent_id AS parent_parent_id, # grandparent.id AS grandparent_id, grandparent.parent_id AS grandparent_parent_id, # children.id AS children_id, children.parent_id AS children_parent_id, # grandchildren.id AS grandchildren_id, grandchildren.parent_id AS grandchildren_parent_id, # great_grandchildren.id AS great_grandchildren_id, great_grandchildren.parent_id AS great_grandchildren_parent_id # FROM nodes # LEFT OUTER JOIN nodes AS parent ON (parent.id = nodes.parent_id) # LEFT OUTER JOIN nodes AS grandparent ON (grandparent.id = parent.parent_id) # LEFT OUTER JOIN nodes AS children ON (children.parent_id = nodes.id) # LEFT OUTER JOIN nodes AS grandchildren ON (grandchildren.parent_id = children.id) # LEFT OUTER JOIN nodes AS great_grandchildren ON (great_grandchildren.parent_id = grandchildren.id) === Options Sequel supports many more association options than ActiveRecord, but here's a mapping of ActiveRecord association options to Sequel association options. Note that when you specify columns in Sequel, you use symbols, not strings. Where ActiveRecord would use an SQL string fragment with embedded commas for multiple columns, Sequel would use an array of column symbols.
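For example, where an ActiveRecord 2 association might use :order => 'name, copies_sold DESC', the corresponding Sequel option takes an array of symbols and expression objects (a sketch using a hypothetical albums association):

  Artist.one_to_many :albums, :order=>[:name, Sequel.desc(:copies_sold)]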
=== Shared options These options are shared by more than one ActiveRecord association. ActiveRecord option :: Sequel option :class_name :: :class :conditions :: :conditions :select :: :select :order :: :order :extend :: :extend :limit :: :limit :offset :: :limit with an array with the second element being the offset :uniq :: :uniq :validate :: :validate :dependent :: The +associations_dependencies+ plugin :polymorphic, :as, :source_type :: The +sequel_polymorphic+ external plugin :include :: :eager, :eager_graph :readonly :: No equivalent; the Sequel :read_only option just means the modification methods are not created (it makes the association read only, not records retrieved through the association) :through, :source :: Use a +many_to_many+ association, or the +many_through_many+ plugin :touch :: The +touch+ plugin :autosave :: A +before_save+ or +after_save+ hook :finder_sql :: :dataset to set a custom dataset :counter_sql :: No direct equivalent, but a count on the dataset will use the custom dataset specified by :dataset :group :: Use the association block to add the group to the dataset :having :: Use the association block to add the having to the dataset === +belongs_to+ +belongs_to+ option :: +many_to_one+ option :foreign_key :: :key :primary_key :: :primary_key :counter_cache :: No equivalent === +has_one+, +has_many+ +has_one+, +has_many+ option :: +one_to_one+, +one_to_many+ option :foreign_key :: :key === +has_and_belongs_to_many+ +has_and_belongs_to_many+ option :: +many_to_many+ option :foreign_key :: :left_key :association_foreign_key :: :right_key :join_table :: :join_table :delete_sql :: :remover :insert_sql :: :adder == Validation Errors If there are errors when validating an object in Sequel, they are stored in a Sequel::Model::Errors instance. It's mostly similar to ActiveRecord::Errors, so this section will just go over the differences. Sequel::Model::Errors is a hash subclass where keys are usually column symbols (not required), and values are arrays of error messages. So if you have two error messages on the same column, +each+ will only yield once, not twice. The +add_on_blank+, +add_on_empty+, +add_to_base+, +each_full+, +generate_message+, invalid?, +on_base+, and +to_xml+ methods don't exist. [] should not be used directly; instead, you should use +on+. You can think of Sequel::Model::Errors as a subset of ActiveRecord::Errors if you stick to +on+, +add+, and +full_messages+. == Sequel Configuration Flags Unlike ActiveRecord, Sequel's behavior depends on how you configure it. In Sequel, you can set flags at the global, class, and instance level that change the behavior of Sequel. Here's a brief description of the flags: +raise_on_save_failure+ :: Whether to raise an error instead of returning nil on a failure to save/create/save_changes/etc. due to a validation failure or a before_* hook returning false. By default, an error is raised; when this is set to false, nil is returned instead. +raise_on_typecast_failure+ :: Whether to raise an error when unable to typecast data for a column (default: true). This should be set to false if you want to use validations to display nice error messages to the user (e.g. most web applications). You can use the +validates_schema_types+ validation in connection with this option to check for typecast failures. +require_modification+ :: Whether to raise an error if an UPDATE or DELETE query related to a model instance does not modify exactly 1 row.
If set to false, Sequel will not check the number of rows modified (default: true if the database supports it). +strict_param_setting+ :: Whether new/set/update and their variants should raise an error if an invalid key is used. A key is invalid if no setter method exists for that key or the access to the setter method is restricted (e.g. due to it being a primary key field). If set to false, silently skip any key where the setter method doesn't exist or access to it is restricted. +typecast_empty_string_to_nil+ :: Whether to typecast the empty string ('') to nil for columns that are not string or blob. In most cases the empty string would be the way to specify a NULL SQL value in string form (nil.to_s == ''), and an empty string would not usually be typecast correctly for other types, so the default is true. +typecast_on_assignment+ :: Whether to typecast attribute values on assignment (default: true). If set to false, no typecasting is done, so it will be left up to the database to typecast the value correctly. +use_transactions+ :: Whether to use a transaction by default when saving/deleting records (default: true). If you are sending database queries in before or after hooks, you shouldn't change the default setting without a good reason. == ActiveRecord Method to Sequel Method Mapping This part of the guide will list Sequel equivalents for ActiveRecord methods, hopefully allowing you to convert your existing ActiveRecord code to Sequel code more easily. === Class Methods with Significantly Different Behavior ==== +abstract_class+, abstract_class=, abstract_class? With Sequel, these methods don't exist because it doesn't default to using single table inheritance in subclasses. ActiveRecord assumes that subclasses of Model classes use single table inheritance, and you have to set abstract_class = true to use an abstract class. In Sequel, you must use the +single_table_inheritance+ or +class_table_inheritance+ plugin to configure inheritance in the database. ==== +all+ In both Sequel and ActiveRecord, calling +all+ will give you an array of all records. However, while in ActiveRecord you pass options to +all+ to filter or order the results, in Sequel you call dataset methods to filter or order the results, and then end the method chain with a call to +all+ to return the records. ==== +column_names+ Sequel uses symbols for columns, so the +columns+ method returns an array of symbols. If you want an array of strings: Album.columns.map{|x| x.to_s} ==== +columns+ Sequel::Model.columns returns an array of column name symbols. The closest similar thing would be to get the database schema hash for each column: Album.columns.map{|x| Album.db_schema[x]} ==== +composed_of+ As mentioned earlier, Sequel uses the +composition+ plugin for this: class Artist < Sequel::Model plugin :composition composition :address, :mapping=>[:street, :city, :state, :zip] end ==== connected? Sequel::Model raises an exception if you haven't instantiated a Sequel::Database object before loading the model class. However, if you want to test the connection to the database: Sequel::Model.db.test_connection Note that +test_connection+ will return true if a connection can be made, but will probably raise an exception if it cannot be made. ==== +connection+ Sequel only uses connections for the minimum amount of time necessary, checking them out to do a query, and returning them as soon as the query finishes. If you do want direct access to the connection object: Sequel::Model.db.synchronize do |connection| ... 
end Note that the connection is yielded to the block, and the block ensures it is returned to the pool. Sequel doesn't have a method that returns a connection, since that would check it out with no ability to ensure it is returned to the pool. ==== +count_by_sql+ You can call +with_sql+ to set the SQL to use, and then +single_value+ to retrieve the result. Album.with_sql("SELECT COUNT(*) ...").single_value ==== +delete+, +delete_all+ You probably want to filter first, then call +delete+: Album.where(:id=>id).delete Album.where("artist_id = ?", 5).delete If you really want to delete all rows in the table, call +delete+ on the Model's dataset: Album.dataset.delete ==== +destroy+, +destroy_all+ Similar to +delete+, you filter first, then +destroy+: Album.where(:id=>id).destroy Album.where("artist_id = ?", 5).destroy If you really want to destroy all rows in the table, call +destroy+ on the Model's dataset: Album.dataset.destroy ==== +establish_connection+ If you want to use a specific Sequel::Database object, you can use db=: BACKUP_DB = Sequel.connect(...) Album.db = BACKUP_DB If you want a specific dataset in that database, you can use +set_dataset+ or dataset=: Album.set_dataset BACKUP_DB[:albums] Album.dataset = BACKUP_DB[:albums] ==== exists? You need to filter the dataset first, then call empty? and invert the result: !Album.where(:id=>1).empty? ==== +find+ ActiveRecord's +find+ can be used for a lot of different things. If you are trying to find a single object given a primary key: Album[1] Note that Sequel returns nil if no record is found; it doesn't raise an exception. To raise an exception if no record is found: Album.with_pk!(1) If you want to find multiple objects using an array of primary keys: Album.where(:id=>[1, 2, 3]).all If you are using find(:first, ...), you use a method chain instead of passing the options, and end it with +first+: Album.where(:artist_id=>1).order(:name).first If you are using find(:last, ...), you need to specify an order in Sequel, but the same method chain approach is used, which you end with +last+: Album.where(:artist_id=>1).order(:name).last # You could also do: Album.where(:artist_id=>1).reverse_order(:name).first If you are using find(:all, ...), you use a method chain instead of passing the options, and end it with +all+: Album.where(:artist_id=>1).order(:name).all Here's a mapping of ActiveRecord +find+ options to Sequel::Dataset methods: :conditions :: filter, where :order :: order :group :: group :limit :: limit :offset :: limit # second entry in limit array :joins :: join, left_join, etc.
==== +find_by_sql+

Similar to +count_by_sql+, you use +with_sql+, followed by +all+:

  Album.with_sql("SELECT * FROM albums WHERE ...").all

==== +first+

Just like with find(:first, ...), you use a method chain instead of
passing the options, and end it with +first+:

  Album.where(:artist_id=>1).order(:name).first

==== +last+

Just like with find(:last, ...), you use a method chain instead of passing
the options, make sure it includes an order, and end it with +last+:

  Album.where(:artist_id=>1).order(:name).last

==== +named_scope+

For a pure filter, you can use +subset+:

  Album.subset(:debut, :position => 1)
  Album.subset(:gold){copies_sold > 500000}

For anything more complex, you can use +dataset_module+:

  Album.dataset_module do
    def by_artist(artist_id)
      where(:artist_id=>artist_id)
    end

    def by_release_date
      order(:release_date)
    end
  end

==== +reset_column_information+

If you want to completely reload the schema for the table:

  Album.instance_variable_set(:@db_schema, nil)
  Album.send(:get_db_schema, true)

==== +serialize+, +serialized_attributes+

Sequel ships with a +serialization+ plugin that you can use.

  class Album < Sequel::Model
    plugin :serialization, :json, :permissions
  end

For +serialized_attributes+, you can use +serialization_map+, which is also
a hash, but keys are column symbols and values are callable objects used to
serialize the values.

==== +set_inheritance_column+

This is something that must be specified when you are loading the
+single_table_inheritance+ plugin:

  class Album < Sequel::Model
    plugin :single_table_inheritance, :column
  end

==== +set_sequence_name+

Sequel will usually automatically discover the sequence to use.  However,
on Oracle this should be specified by making sure the model's dataset
includes a sequence:

  class Album < Sequel::Model(ORACLE_DB[:albums].sequence('albums_seq'))
  end

==== table_exists?

This is a Sequel::Database method:

  Album.db.table_exists?(Album.table_name)

With the +schema+ plugin, you can use it directly:

  Album.plugin :schema
  Album.table_exists?

==== +transaction+

As mentioned earlier, +transaction+ is a database method in Sequel, which
you can access via the +db+ method:

  Album.db.transaction{}

==== +update+, +update_all+

Just like +delete+ and +destroy+, you filter first, then +update+:

  Album.where(:id=>id).update(:name=>'RF')
  Album.where("artist_id = ?", 5).update(:copies_sold=>0)

Likewise, to update all rows in the model:

  Album.dataset.update(:name=>'RF')

Note that +update+ in that case will operate on a dataset, so it won't run
model validations or hooks.  If you want those run:

  Album[id].update(:name=>'RF')
  Album.where("artist_id = ?", 5).all{|a| a.update(:copies_sold=>0)}

==== +with_scope+

Sequel works a little differently than ActiveRecord's +with_scope+.
Instead of using nested blocks, you just use a cleaner method chain.
+with_scope+ is often used as an around_filter or similar construct, where
in Sequel, you would just need to assign to a dataset in a before filter,
and use that dataset in the action, as in the sketch below.
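This is only a rough sketch of that pattern; the controller action, filter,
and parameter names here are all hypothetical:

  # A before filter sets up the scoped dataset:
  def find_artist_albums
    @albums_ds = Album.where(:artist_id=>params[:artist_id])
  end

  # The action then builds on that dataset instead of using with_scope:
  def index
    @albums = @albums_ds.order(:name).all
  end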
=== Class Methods with Roughly the Same Behavior

Note that Sequel uses symbols almost everywhere to represent columns, while
ActiveRecord often returns columns as strings.

ActiveRecord Method :: Sequel Method
+attr_accessible+ :: +set_allowed_columns+
+attr_protected+ :: +set_restricted_columns+
+average+ :: +avg+
+belongs_to+ :: +many_to_one+
+columns_hash+ :: +db_schema+
+count+ :: +count+
+changed+ :: +changed_columns+
+create+ :: +create+
+has_and_belongs_to_many+ :: +many_to_many+
+has_one+ :: +one_to_one+
+has_many+ :: +one_to_many+
+inheritance_column+ :: +sti_key+
+inspect+ :: +inspect+
+maximum+ :: +max+
+minimum+ :: +min+
+new+ :: +new+
+primary_key+ :: +primary_key+
respond_to? :: respond_to?
+set_primary_key+ :: +set_primary_key+
+sum+ :: +sum+
+table_name+ :: +table_name+
+unscoped+ :: +unfiltered+

=== Class Methods without an Equivalent

ActiveRecord Method :: Notes, Workarounds
+accepts_nested_attributes_for+ :: Use the +nested_attributes+ plugin
+attr_readonly+ :: Don't update the columns (duh!), or add a before_update hook that deletes them from the values hash
+attribute_method_suffix+ :: No equivalent
+alias_attribute_with_dirty+ :: No equivalent
+base_class+ :: Not needed internally, you can probably use sti_dataset.model if you are using single table inheritance
+benchmark+ :: Just use the +benchmark+ library from ruby's stdlib
+calculate+ :: No direct equivalent, just build the query manually and execute it
+cache+ :: No equivalent
cache_attribute? :: No equivalent
+cache_attributes+ :: No equivalent
+cached_attributes+ :: No equivalent
changed? :: changed_columns.include?(column)
+changes+ :: No equivalent
clear_active_connections! :: Sequel doesn't leak connections like ActiveRecord, so you don't need to worry about this
clear_reloadable_connections! :: Sequel doesn't leak connections like ActiveRecord, so you don't need to worry about this
+content_columns+ :: Not needed internally, you can probably do Album.columns.map{|x| x.to_s}.delete_if{|x| x == Album.primary_key.to_s || x =~ /_(id|count)\z/}
+decrement_counter+ :: Album.where(:id=>id).update(:counter_name=>Sequel.-(:counter_name, 1))
+define_attribute_methods+, +define_read_methods+ :: def_column_accessor(*columns), a private method
descends_from_active_record? :: Not needed internally, if using single table inheritance, Album.sti_dataset.model == Album
+find_each+, +find_in_batches+ :: Use the +pagination+ extension
generated_methods? :: No equivalent
+increment_counter+ :: Album.where(:id=>id).update(:counter_name=>Sequel.+(:counter_name, 1))
instance_method_already_implemented? :: No equivalent, Sequel does not create column accessors that override other methods, it just skips them.
match_attribute_method? :: No equivalent
+readonly_attributes+ :: No equivalent
+remove_connection+ :: Not necessary in Sequel.  If you want to disconnect an existing connection: Album.db.disconnect
+require_mysql+ :: A public method, really?
+silence+ :: No equivalent.  Because the logger is handled at the Sequel::Database level, there is no thread-safe way to turn it off for specific blocks.
+scopes+ :: No equivalent
+sti_name+ :: No equivalent.
+update_counters+ :: Album.where(:id=>id).update(:counter_name=>Sequel.+(:counter_name, 1), :other_counter=>Sequel.-(:other_counter, 1))
+uncached+ :: No equivalent
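As a sketch of what those counter workarounds look like in practice
(assuming an albums table with a copies_sold column):

  # increment_counter(:copies_sold, 1) equivalent:
  Album.where(:id=>1).update(:copies_sold=>Sequel.+(:copies_sold, 1))
  # UPDATE albums SET copies_sold = (copies_sold + 1) WHERE (id = 1)

  # decrement_counter(:copies_sold, 1) equivalent:
  Album.where(:id=>1).update(:copies_sold=>Sequel.-(:copies_sold, 1))
  # UPDATE albums SET copies_sold = (copies_sold - 1) WHERE (id = 1)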
=== Instance Methods with Significantly Different Behavior

==== +attribute_names+

+keys+ returns the columns as unsorted symbols, so:

  album.keys.map{|x| x.to_s}.sort

==== +becomes+

Assuming the record already exists in the database:

  gold_album = GoldAlbum[1]
  album = Album.load(gold_album.values)

If it is a new record:

  gold_album = GoldAlbum.new(:name=>'a')
  album = Album.new
  album.send(:set_values, gold_album.values)

==== +column_for_attribute+

You can access this through the +db_schema+ hash:

  album.db_schema[:column]

==== +connection+

Just like in the class method, you have to access it through the database:

  album.db.synchronize do |connection|
  end

==== +decrement+, +increment+

You can just modify the values hash directly:

  album.values[:column] ||= 0
  album.values[:column] -= 1 # or += 1 for increment

==== decrement!, increment!

Assuming you want the full behavior of saving just one column without
validating:

  album.values[:column] ||= 0
  album.values[:column] -= 1 # or += 1 for increment!
  album.save(:columns=>[:column], :validate=>false)

==== has_attribute?

You have to check the values hash:

  album.values.has_key?(:column)

==== invalid?

You can use unless valid? or !valid?.

==== +save+, save!, +save_with_validation+, save_with_validation!

Sequel defaults to raising exceptions when +save+ fails, but this is
configurable behavior by setting the +raise_on_save_failure+ flag on the
class or instance:

  album.raise_on_save_failure = true
  album.save # raise exception if failure
  album.raise_on_save_failure = false
  album.save # return nil if failure

You can pass the :validate=>false option to not validate the object when
saving.

==== +toggle+, toggle!

No equivalent, but very easy to add:

  album.column = !album.column

If you want to save just that column:

  album.save(:columns=>[:column], :validate=>false)

==== +transaction+

Just like in the class, you can access the transaction method through the
+db+:

  album.db.transaction{}

==== +update_attribute+

To only set and save a specific column:

  album.set(:column => value)
  album.save(:columns=>[:column], :validate=>false)

==== +update_attributes+, update_attributes!

These would both use +update+, but see the notes on the
+raise_on_save_failure+ flag:

  album.update(:column1=>value1, :column2=>value2)

=== Instance Methods with Roughly the Same Behavior

Note that Sequel uses symbols almost everywhere to represent columns, while
ActiveRecord often returns columns as strings.

ActiveRecord Method :: Sequel Method
== :: ===, == compares by all values, not just id
[] :: []
[]= :: []=
+after_create+ :: +after_create+
+after_destroy+ :: +after_destroy+
+after_save+ :: +after_save+
+after_update+ :: +after_update+
+after_validation+ :: +after_validation+
+attributes+ :: +values+
attributes= :: +set+
+before_create+ :: +before_create+
+before_destroy+ :: +before_destroy+
+before_save+ :: +before_save+
+before_update+ :: +before_update+
+before_validation+ :: +before_validation+
+cache_key+ :: +cache_key+, if using the +caching+ plugin
+destroy+ :: +destroy+
eql? :: ===
+errors+ :: +errors+
+freeze+ :: +freeze+
frozen? :: frozen?
+hash+ :: +hash+
+id+ :: +pk+
+inspect+ :: +inspect+
lock! :: lock!
new_record? :: new?
+reload_with_autosave_associations+ :: +reload+
+to_param+ :: +to_param+, if using the +active_model+ plugin
+touch+ :: +touch+, if using the +touch+ plugin
valid? :: valid?
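To make the instance method mapping concrete, here's a short sketch using a
hypothetical Album instance:

  album = Album.new(:name=>'RF') # Album.new
  album.values                   # attributes
  album.set(:name=>'MO')         # attributes=
  album.pk                       # id
  album.new?                     # new_record?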
=== Instance Methods without an Equivalent

ActiveRecord Method :: Notes, Workarounds
+after_validation_on_create+, +after_validation_on_update+ :: Use +after_validation+ and if new? or unless new?
+as_json+, +from_json+, +to_json+ :: Use the +json_serializer+ plugin
+from_xml+, +to_xml+ :: Use the +xml_serializer+ plugin
+attribute_for_inspect+ :: album[:column].inspect
attribute_present? :: !album[:column].blank? if using the +blank+ extension
+attributes_before_type_cast+ :: Sequel typecasts at a low level, so model objects never see values before they are type cast
+before_validation_on_create+, +before_validation_on_update+ :: Use +before_validation+ and if new? or unless new?
id= :: Sequel doesn't have a special primary key setter method, but you can use: album.send("#{Album.primary_key}=", value)
+mark_for_destruction+, marked_for_destruction? :: Use a +before_save+ or +after_save+ hook or the +instance_hooks+ plugin
readonly! :: No equivalent
readonly? :: No equivalent
rollback_active_record_state! :: No equivalent
+with_transaction_returning_status+ :: No equivalent
ruby-sequel-4.1.1/doc/advanced_associations.rdoc000066400000000000000000000616021220156535500217450ustar00rootroot00000000000000= Advanced Associations

Sequel::Model has the most powerful and flexible associations of any ruby ORM.

== Background: Sequel::Model association options

There are a bunch of advanced association options that are available to
handle more complex cases.  First we'll go over some of the simpler ones:

All associations take a block that can be used to further filter/modify the
default dataset.  There's also an :eager_block option if you want to use a
different block when eager loading via Dataset#eager.  Association blocks
are useful for things like:

  Artist.one_to_many :gold_albums, :class=>:Album do |ds|
    ds.where{copies_sold > 500000}
  end

There are a whole bunch of options for changing how the association is
eagerly loaded via Dataset#eager_graph: :graph_block, :graph_conditions,
:graph_only_conditions, :graph_join_type (and :graph_join_table_* ones for
JOINing to the join table in a many_to_many association).

:graph_join_type :: The type of join to do (:inner, :left, :right)
:graph_conditions :: Additional conditions to put on join (needs to be a hash or an array of two element arrays).  Automatically assumes unqualified symbols or first element of the pair to be columns of the associated model, and unqualified symbols of the second element of the pair to be columns of the current model.
:graph_block :: A block passed to +join_table+, allowing you to specify conditions other than equality, or to use OR, or set up any arbitrary condition.  The block is passed the associated table alias, current table alias, and an array of previous joins clause objects.
:graph_only_conditions :: Use these conditions instead of the standard association conditions.  This is necessary when you don't want to have an equal condition between the foreign key and primary key of the tables.  You can also use this to have a JOIN USING (array of symbols), or a NATURAL or CROSS JOIN (nil, with the appropriate :graph_join_type).
These can be used like this:

  # Makes Artist.eager_graph(:required_albums).all not return artists that
  # don't have any albums
  Artist.one_to_many :required_albums, :class=>:Album, :graph_join_type=>:inner

  # Makes sure all returned albums have the active flag set
  Artist.one_to_many :active_albums, :class=>:Album, \
    :graph_conditions=>{:active=>true}

  # Only returns albums that have sold more than 500,000 copies
  Artist.one_to_many :gold_albums, :class=>:Album, \
    :graph_block=>proc{|j,lj,js| Sequel.qualify(j, :copies_sold) > 500000}

  # Handles the case where the tables are associated by a case insensitive name string
  Artist.one_to_many :albums, :key=>:artist_name, \
    :graph_only_conditions=>nil, \
    :graph_block=>proc{|j,lj,js| {Sequel.function(:lower, Sequel.qualify(j, :artist_name))=>Sequel.function(:lower, Sequel.qualify(lj, :name))}}

  # Handles the case where both key columns have the name artist_name, and you want to use
  # a JOIN USING
  Artist.one_to_many :albums, :key=>:artist_name, :graph_only_conditions=>[:artist_name]

Remember, using +eager_graph+ is generally only necessary when you need to
filter/order based on columns in an associated table; it is recommended to
use +eager+ for eager loading if possible.

One advantage of using +eager_graph+ is that you can easily filter/order on
columns in an associated table on a per-query basis, using regular Sequel
dataset methods.  For example, if you only want to retrieve artists who
have albums that start with A, and eager load just those albums, ordered by
the albums name, you can do:

  albums = Artist.
    eager_graph(:albums).
    where(Sequel.like(:albums__name, 'A%')).
    order(:albums__name).
    all

For lazy loading (e.g. Model[1].association), the :dataset option can be
used to specify an arbitrary dataset (one that uses different keys,
multiple keys, joins to other tables, etc.).

For eager loading via +eager+, the :eager_loader option can be used to
specify how to eagerly load a complex association.  This is an extremely
powerful option.  Though it can often be verbose (compared to other things
in Sequel), it allows you complete control over how to eagerly load
associations for a group of objects.

:eager_loader should be a proc that takes a single hash argument, which
will have at least the following keys:

:id_map :: A mapping of key values to arrays of current model instances, usage described below
:rows :: An array of model objects
:associations :: A hash of dependent associations to eagerly load
:self :: The dataset that is doing the eager loading
:eager_block :: A dynamic callback for this eager load.

Since you are given all of the records, you can do things like filter on
associations that are specified by multiple keys, or do multiple queries
depending on the content of the records (which would be necessary for
polymorphic associations).

Inside the :eager_loader proc, you should get the related objects and
populate the associations cache for all objects in the array of records.
The hash of dependent associations is available for you to cascade the
eager loading down multiple levels, but it is up to you to use it.

The id_map is a performance enhancement that is used by the default
association loaders and is also available to you.  It is a hash whose keys
are foreign/primary key values and whose values are arrays of current
model objects having the foreign/primary key value associated with the
key.  This may be hard to visualize, so I'll give an example.
Let's say you have the following associations

  Album.many_to_one :artist
  Album.one_to_many :tracks

and the following three albums in the database:

  album1 = Album.create(:artist_id=>3) # id: 1
  album2 = Album.create(:artist_id=>3) # id: 2
  album3 = Album.create(:artist_id=>2) # id: 3

If you try to eager load this dataset:

  Album.eager(:artist, :tracks).all

Then the id_map provided to the artist :eager_loader proc would be:

  {3=>[album1, album2], 2=>[album3]}

The artist id_map contains a mapping of artist_id values to arrays of album
objects.  Since both album1 and album2 have the same artist_id, they are
both in the array related to that key.  album3 has a different artist_id,
so it is in a different array.  Eager loading of artists is done by looking
for any artist having one of the keys in the hash:

  artists = Artist.where(:id=>id_map.keys).all

When the artists are retrieved, you can iterate over them, find entries
with matching keys, and manually associate them to the albums:

  artists.each do |artist|
    # Find related albums using the artist_id_map
    if albums = id_map[artist.id]
      # Iterate over the albums
      albums.each do |album|
        # Manually set the artist association for each album
        album.associations[:artist] = artist
      end
    end
  end

The id_map provided to the tracks :eager_loader proc would be:

  {1=>[album1], 2=>[album2], 3=>[album3]}

Now the id_map contains a mapping of id values to arrays of album objects
(in this case each array only has a single object, because id is the
primary key).  So when looking for tracks to eagerly load, you only need to
look for ones that have an album_id with one of the keys in the hash:

  tracks = Track.where(:album_id=>id_map.keys).all

When the tracks are retrieved, you can iterate over them, find entries with
matching keys, and manually associate them to the albums:

  tracks.each do |track|
    if albums = id_map[track.album_id]
      albums.each do |album|
        album.associations[:tracks] << track
      end
    end
  end

=== Two basic example eager loaders

Putting the code in the above examples together, you almost have enough for
a basic working eager loader.  The main thing that is missing is that you
need to set initial values for the eagerly loaded associations.  For the
artist association, you need to initialize the values to nil:

  # rows here is the :rows entry in the hash passed to the eager loader
  rows.each{|album| album.associations[:artist] = nil}

For the tracks association, you set the initial value to an empty array:

  rows.each{|album| album.associations[:tracks] = []}

These are done so that if an album currently being loaded doesn't have an
associated artist or any associated tracks, the lack of them will be
cached, so calling the artist or tracks method on the album will not do
another database lookup.

So putting everything together, the artist eager loader looks like:

  :eager_loader=>(proc do |eo_opts|
    eo_opts[:rows].each{|album| album.associations[:artist] = nil}
    id_map = eo_opts[:id_map]
    Artist.where(:id=>id_map.keys).all do |artist|
      if albums = id_map[artist.id]
        albums.each do |album|
          album.associations[:artist] = artist
        end
      end
    end
  end)

and the tracks eager loader looks like:

  :eager_loader=>(proc do |eo_opts|
    eo_opts[:rows].each{|album| album.associations[:tracks] = []}
    id_map = eo_opts[:id_map]
    Track.where(:album_id=>id_map.keys).all do |track|
      if albums = id_map[track.album_id]
        albums.each do |album|
          album.associations[:tracks] << track
        end
      end
    end
  end)

Now, these are both overly simplistic eager loaders that don't respect
cascaded associations or any of the association options.
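These procs would be passed via the :eager_loader option when defining each
association.  Schematically, that wiring looks like this (artist_loader and
tracks_loader are hypothetical local variables holding the procs above):

  Album.many_to_one :artist, :eager_loader=>artist_loader
  Album.one_to_many :tracks, :eager_loader=>tracks_loader

  # A single eager call then uses both custom loaders:
  Album.eager(:artist, :tracks).all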
Hopefully they both provide simple examples that you can more easily build
on and learn from, as the custom eager loaders described later in this page
are more complex.

Basically, the eager loading steps can be broken down into:

1. Set default association values (nil/[]) for each of the current objects.
2. Retrieve just the related associated objects by filtering the associated class to include only rows with keys present in the id_map.
3. Iterate over the returned associated objects, indexing into the id_map using the foreign/primary key value in the associated object to get the current values associated to that specific object.
4. For each of those current values, update the cached association value to include that specific object.

Using the :eager_loader proc, you should be able to eagerly load all
associations that can be eagerly loaded, even if Sequel doesn't natively
support such eager loading.

== ActiveRecord associations

Sequel supports all of the associations that ActiveRecord supports, though
some require different approaches or custom :eager_loader options.

=== Association callbacks

Sequel supports the same callbacks that ActiveRecord does for +one_to_many+
and +many_to_many+ associations: :before_add, :before_remove, :after_add,
and :after_remove.  For +many_to_one+ associations and +one_to_one+
associations, Sequel supports the :before_set and :after_set callbacks.  On
all associations, Sequel supports :after_load, which is called after the
association has been loaded.

Each of these options can be a symbol specifying an instance method that
takes one argument (the associated object), or a proc that takes two
arguments (the current object and the associated object), or an array of
symbols and procs.  For :after_load with a *_to_many association, the
associated object argument is an array of associated objects.

If any of the before callbacks return +false+, the adding/removing does not
happen and it either raises a Sequel::BeforeHookFailed (the default), or
returns false (if +raise_on_save_failure+ is false).

=== Association extensions

All associations come with an association_dataset method that can be
further filtered or otherwise modified:

  class Author < Sequel::Model
    one_to_many :authorships
  end
  Author.first.authorships_dataset.where{number < 10}.first

You can extend a dataset with a module using the :extend association
option.  You can reference the model object that created the association
dataset via the dataset's +model_object+ method, and the related
association reflection via the dataset's +association_reflection+ method:

  module FindOrCreate
    def find_or_create(vals)
      first(vals) || model.create(vals.merge(association_reflection[:key]=>model_object.id))
    end
  end
  class Author < Sequel::Model
    one_to_many :authorships, :extend=>FindOrCreate
  end
  Author.first.authorships_dataset.find_or_create(:name=>'Blah', :number=>10)

=== has_many :through associations

+many_to_many+ handles the usual case of a has_many :through with a
+belongs_to+ in the associated model.  It doesn't break on the case where
the join table is a model table, unlike ActiveRecord's
+has_and_belongs_to_many+.
ActiveRecord:

  class Author < ActiveRecord::Base
    has_many :authorships
    has_many :books, :through => :authorships
  end

  class Authorship < ActiveRecord::Base
    belongs_to :author
    belongs_to :book
  end

  @author = Author.find :first
  @author.books

Sequel::Model:

  class Author < Sequel::Model
    one_to_many :authorships
    many_to_many :books, :join_table=>:authorships
  end

  class Authorship < Sequel::Model
    many_to_one :author
    many_to_one :book
  end

  @author = Author.first
  @author.books

If you use an association other than +belongs_to+ in the associated model,
such as a +has_many+, you still use a +many_to_many+ association, but you
need to use some options:

ActiveRecord:

  class Firm < ActiveRecord::Base
    has_many :clients
    has_many :invoices, :through => :clients
  end

  class Client < ActiveRecord::Base
    belongs_to :firm
    has_many :invoices
  end

  class Invoice < ActiveRecord::Base
    belongs_to :client
    has_one :firm, :through => :client
  end

  Firm.find(:first).invoices

Sequel::Model:

  class Firm < Sequel::Model
    one_to_many :clients
    many_to_many :invoices, :join_table=>:clients, :right_key=>:id, :right_primary_key=>:client_id
  end

  class Client < Sequel::Model
    many_to_one :firm
    one_to_many :invoices
  end

  class Invoice < Sequel::Model
    many_to_one :client

    # has_one :through equivalent 1
    # eager load with :eager=>:firm option on :client association, and eager loading :client
    def firm
      client.firm if client
    end

    # has_one :through equivalent 2
    # eager load the usual way
    many_to_many :firms, :join_table=>:clients, :left_key=>:id, :left_primary_key=>:client_id, :right_key=>:firm_id
    def firm
      firms.first
    end

    # has_one :through equivalent 3
    # eager loading requires custom :eager_loader proc
    many_to_one :firm, :dataset=>proc{Firm.join(:clients, :firm_id=>:id, :id=>client_id).select_all(:firms)}
  end

  Firm.first.invoices

=== Polymorphic Associations

Sequel discourages the use of polymorphic associations, which is the reason
they are not supported by default.  All polymorphic associations can be
made non-polymorphic by using additional tables and/or columns instead of
having a column containing the associated class name as a string.

Polymorphic associations break referential integrity and are significantly
more complex than non-polymorphic associations, so their use is not
recommended unless you are stuck with an existing design that uses them.

If you must use them, look for the sequel_polymorphic external plugin, as
it makes using polymorphic associations in Sequel about as easy as it is in
ActiveRecord.
However, here's how they can be done using Sequel's custom associations
(the sequel_polymorphic plugin is just a generic version of this code):

ActiveRecord:

  class Asset < ActiveRecord::Base
    belongs_to :attachable, :polymorphic => true
  end

  class Post < ActiveRecord::Base
    has_many :assets, :as => :attachable
  end

  class Note < ActiveRecord::Base
    has_many :assets, :as => :attachable
  end

  @asset.attachable = @post
  @asset.attachable = @note

Sequel::Model:

  class Asset < Sequel::Model
    many_to_one :attachable, :reciprocal=>:assets,
      :setter=>(proc do |attachable|
        self[:attachable_id] = (attachable.pk if attachable)
        self[:attachable_type] = (attachable.class.name if attachable)
      end),
      :dataset=>(proc do
        klass = attachable_type.constantize
        klass.where(klass.primary_key=>attachable_id)
      end),
      :eager_loader=>(proc do |eo|
        id_map = {}
        eo[:rows].each do |asset|
          asset.associations[:attachable] = nil
          ((id_map[asset.attachable_type] ||= {})[asset.attachable_id] ||= []) << asset
        end
        id_map.each do |klass_name, id_map|
          klass = klass_name.constantize
          klass.where(klass.primary_key=>id_map.keys).all do |attach|
            id_map[attach.pk].each do |asset|
              asset.associations[:attachable] = attach
            end
          end
        end
      end)
  end

  class Post < Sequel::Model
    one_to_many :assets, :key=>:attachable_id, :reciprocal=>:attachable,
      :conditions=>{:attachable_type=>'Post'},
      :adder=>proc{|asset| asset.update(:attachable_id=>pk, :attachable_type=>'Post')},
      :remover=>proc{|asset| asset.update(:attachable_id=>nil, :attachable_type=>nil)},
      :clearer=>proc{assets_dataset.update(:attachable_id=>nil, :attachable_type=>nil)}
  end

  class Note < Sequel::Model
    one_to_many :assets, :key=>:attachable_id, :reciprocal=>:attachable,
      :conditions=>{:attachable_type=>'Note'},
      :adder=>proc{|asset| asset.update(:attachable_id=>pk, :attachable_type=>'Note')},
      :remover=>proc{|asset| asset.update(:attachable_id=>nil, :attachable_type=>nil)},
      :clearer=>proc{assets_dataset.update(:attachable_id=>nil, :attachable_type=>nil)}
  end

  @asset.attachable = @post
  @asset.attachable = @note

== Other advanced associations

=== Joining on multiple keys

Let's say you have two tables that are associated with each other with
multiple keys.  This can be handled using Sequel's built in composite key
support for associations:

  # Both of these models have album_id, number, and disc_number fields.
  # All FavoriteTracks have an associated track, but not all tracks have an
  # associated favorite track
  class Track < Sequel::Model
    many_to_one :favorite_track,
      :key=>[:disc_number, :number, :album_id],
      :primary_key=>[:disc_number, :number, :album_id]
  end
  class FavoriteTrack < Sequel::Model
    one_to_one :tracks,
      :key=>[:disc_number, :number, :album_id],
      :primary_key=>[:disc_number, :number, :album_id]
  end

=== Tree - All Ancestors and Descendants

Let's say you want to store a tree relationship in your database; it's
pretty simple:

  class Node < Sequel::Model
    many_to_one :parent, :class=>self
    one_to_many :children, :key=>:parent_id, :class=>self
  end

You can easily get a node's parent with node.parent, and a node's children
with node.children.  You can even eager load the relationship up to a
certain depth:

  # Eager load three generations of children for a given node
  Node.filter(:id=>1).eager(:children=>{:children=>:children}).all.first
  # Load parents and grandparents for a group of nodes
  Node.filter{id < 10}.eager(:parent=>:parent).all

What if you want to get all ancestors up to the root node, or all
descendants, without knowing the depth of the tree?
  class Node < Sequel::Model
    many_to_one :ancestors, :class=>self,
     :eager_loader=>(proc do |eo|
      # Handle cases where the root node has the same parent_id as primary_key
      # and also when it is NULL
      non_root_nodes = eo[:rows].reject do |n|
        if [nil, n.pk].include?(n.parent_id)
          # Make sure root nodes have their parent association set to nil
          n.associations[:parent] = nil
          true
        else
          false
        end
      end
      unless non_root_nodes.empty?
        id_map = {}
        # Create a map of parent_ids to nodes that have that parent id
        non_root_nodes.each{|n| (id_map[n.parent_id] ||= []) << n}
        # Doesn't cause an infinite loop, because when only the root node
        # is left, this is not called.
        Node.where(Node.primary_key=>id_map.keys).eager(:ancestors).all do |node|
          # Populate the parent association for each node
          id_map[node.pk].each{|n| n.associations[:parent] = node}
        end
      end
    end)
    many_to_one :descendants, :eager_loader=>(proc do |eo|
      id_map = {}
      eo[:rows].each do |n|
        # Initialize an empty array of child associations for each parent node
        n.associations[:children] = []
        # Populate identity map of nodes
        id_map[n.pk] = n
      end
      # Doesn't cause an infinite loop, because the :eager_loader is not called
      # if no records are returned.  Exclude id = parent_id to avoid an
      # infinite loop if the root node is one of the returned records and it
      # has parent_id = id instead of parent_id = NULL.
      Node.where(:parent_id=>id_map.keys).exclude(:id=>:parent_id).eager(:descendants).all do |node|
        # Get the parent from the identity map
        parent = id_map[node.parent_id]
        # Set the child's parent association to the parent
        node.associations[:parent] = parent
        # Add the child association to the array of children in the parent
        parent.associations[:children] << node
      end
    end)
  end

Note that unlike ActiveRecord, Sequel supports common table expressions,
which allows you to use recursive queries.  The results are not the same as
in the above case, as all descendants are stored in a single association,
but all descendants can be both lazy loaded or eager loaded in a single
query (assuming your database supports recursive common table expressions).
Sequel ships with an +rcte_tree+ plugin that makes this easy:

  class Node < Sequel::Model
    plugin :rcte_tree
  end

=== Joining multiple keys to a single key, through a third table

Let's say you have a database of songs, lyrics, and artists.  Each song may
or may not have a lyric (most songs are instrumental).  The lyric can be
associated to an artist in each of four ways: composer, arranger, vocalist,
or lyricist.  These may all be the same, or they could all be different,
and none of them are required.  The songs table has a lyric_id field to
associate it to the lyric, and the lyric table has four fields to associate
it to the artist (composer_id, arranger_id, vocalist_id, and lyricist_id).

What if you want to get all songs for a given artist, ordered by the song's
name, with no duplicates?

  class Artist < Sequel::Model
    one_to_many :songs, :order=>:songs__name, \
      :dataset=>proc{Song.select_all(:songs).join(Lyric, :id=>:lyric_id, id=>[:composer_id, :arranger_id, :vocalist_id, :lyricist_id])}, \
      :eager_loader=>(proc do |eo|
        h = eo[:id_map]
        ids = h.keys
        eo[:rows].each{|r| r.associations[:songs] = []}
        Song.select_all(:songs).
          select_append(:lyrics__composer_id, :lyrics__arranger_id, :lyrics__vocalist_id, :lyrics__lyricist_id).
          join(Lyric, :id=>:lyric_id){Sequel.or(:composer_id=>ids, :arranger_id=>ids, :vocalist_id=>ids, :lyricist_id=>ids)}.
          order(:songs__name).all do |song|
            [:composer_id, :arranger_id, :vocalist_id, :lyricist_id].each do |x|
              recs = h[song.values.delete(x)]
              recs.each{|r| r.associations[:songs] << song} if recs
            end
          end
        eo[:rows].each{|r| r.associations[:songs].uniq!}
      end)
  end

=== Statistics Associations (Sum of Associated Table Column)

In addition to getting associated records, you can use Sequel's association
support to get aggregate information for columns in associated tables
(sums, averages, etc.).

Let's say you have a database with projects and tickets.  A project can
have many tickets, and each ticket has a number of hours associated with
it.  You can use the association support to create a Project association
that gives the sum of hours for all associated tickets.

  class Project < Sequel::Model
    one_to_many :tickets
    many_to_one :ticket_hours, :read_only=>true, :key=>:id,
     :dataset=>proc{Ticket.where(:project_id=>id).select{sum(hours).as(hours)}},
     :eager_loader=>(proc do |eo|
      eo[:rows].each{|p| p.associations[:ticket_hours] = nil}
      Ticket.where(:project_id=>eo[:id_map].keys).
       select_group(:project_id).
       select_append{sum(hours).as(hours)}.
       all do |t|
        p = eo[:id_map][t.values.delete(:project_id)].first
        p.associations[:ticket_hours] = t
       end
     end)
    # The association method returns a Ticket object with a single aggregate
    # sum-of-hours value, but you want it to return an Integer/Float of just the
    # sum of hours, so you call super and return just the sum-of-hours value.
    # This works for both lazy loading and eager loading.
    def ticket_hours
      if s = super
        s[:hours]
      end
    end
  end
  class Ticket < Sequel::Model
    many_to_one :project
  end

Note that it is often better to use a sum cache instead of this approach.
You can implement a sum cache using +after_create+ and +after_delete+
hooks, or preferably using a database trigger.
ruby-sequel-4.1.1/doc/association_basics.rdoc000066400000000000000000001651631220156535500212660ustar00rootroot00000000000000= Association Basics

This guide is based on http://guides.rubyonrails.org/association_basics.html

== Why Associations?

Associations exist to simplify code that deals with related rows in
separate database tables.  Without associations, if you had classes such
as:

  class Artist < Sequel::Model
  end

  class Album < Sequel::Model
  end

And you wanted to get all of the albums for a given artist (assuming each
album was associated with only one artist):

  Album.filter(:artist_id=>@artist.id).all

Or maybe you want to add an album for a given artist:

  Album.create(:artist_id=>@artist.id, :name=>'RF')

With Associations, you can make the above code simpler, by setting up
associations between the two models:

  class Artist < Sequel::Model
    one_to_many :albums
  end

  class Album < Sequel::Model
    many_to_one :artist
  end

Then, the code to retrieve albums related to the artist is simpler:

  @artist.albums

As is the code to add a related album to an artist:

  @artist.add_album(:name=>'RF')

== The Types of Associations

Sequel has four different association types built in:

* many_to_one
* one_to_many
* one_to_one
* many_to_many

=== many_to_one

The many_to_one association is used when the table for the current class
contains a foreign key that references the primary key in the table for
the associated class.  It is named because there can be many rows in the
current table for each row in the associated table.
  # Database schema:
  #  albums             artists
  #   :id          /--> :id
  #   :artist_id --/    :name
  #   :name

  class Album
    # Uses singular form of associated model name
    many_to_one :artist
  end

=== one_to_many

The one_to_many association is used when the table for the associated class
contains a foreign key that references the primary key in the table for the
current class.  It is named because for each row in the current table there
can be many rows in the associated table:

  # Database schema:
  #  artists            albums
  #   :id <----\         :id
  #   :name     \------- :artist_id
  #                      :name

  class Artist
    # Uses plural form of associated model name
    one_to_many :albums
  end

=== one_to_one

The one_to_one association can be thought of as a subset of the one_to_many
association, but where there can only be either 0 or 1 records in the
associated table.  It is the least frequently used of the four
associations.  If you assume each artist cannot be associated with more
than one album:

  # Database schema:
  #  artists            albums
  #   :id <----\         :id
  #   :name     \------- :artist_id
  #                      :name

  class Artist
    # Uses singular form of associated model name
    one_to_one :album
  end

=== many_to_many

The many_to_many association allows each row in the current table to be
associated to many rows in the associated table, and each row in the
associated table to many rows in the current table, by using a join table
to associate the two tables.  If you assume each artist can have multiple
albums and each album can have multiple artists:

  # Database schema:
  #  albums
  #   :id <----\
  #   :name     \  albums_artists
  #              \---- :album_id
  #  artists     /---- :artist_id
  #   :id <-----/
  #   :name

  class Artist
    # Uses plural form of associated model name
    many_to_many :albums
  end

  class Album
    many_to_many :artists
  end

=== Differences Between many_to_one and one_to_one

If you want to setup a 1-1 relationship between two models, you have to use
many_to_one in one model, and one_to_one in the other model.  How do you
know which to use in which model?  The simplest way to remember is that the
model whose table has the foreign key uses many_to_one, and the other model
uses one_to_one:

  # Database schema:
  #  artists            albums
  #   :id <----\         :id
  #   :name     \------- :artist_id
  #                      :name

  class Artist
    one_to_one :album
  end

  class Album
    many_to_one :artist
  end

== Most Common Options

=== :key

The :key option must be used if the default column symbol that Sequel would
use is not the correct column.  For example:

  class Album
    # Assumes :key is :artist_id, based on association name of :artist
    many_to_one :artist
  end

  class Artist
    # Assumes :key is :artist_id, based on class name of Artist
    one_to_many :albums
  end

However, if your schema looks like:

  # Database schema:
  #  artists            albums
  #   :id <----\         :id
  #   :name     \------- :artistid # Note missing underscore
  #                      :name

Then the default :key option will not be correct.  To fix this, you need to
specify an explicit :key option:

  class Album
    many_to_one :artist, :key=>:artistid
  end

  class Artist
    one_to_many :albums, :key=>:artistid
  end

For many_to_many associations, the :left_key and :right_key options can be
used to specify the column names in the join table, and the :join_table
option can be used to specify the name of the join table:

  # Database schema:
  #  albums
  #   :id <----\
  #   :name     \  albumsartists
  #              \---- :albumid
  #  artists     /---- :artistid
  #   :id <-----/
  #   :name

  class Artist
    # Note that :left_key refers to the foreign key pointing to the
    # current table, and :right_key the foreign key pointing to the
    # associated table.
    many_to_many :albums, :left_key=>:artistid, :right_key=>:albumid,
      :join_table=>:albumsartists
  end

  class Album
    many_to_many :artists, :left_key=>:albumid, :right_key=>:artistid,
      :join_table=>:albumsartists
  end

=== :class

If the class of the association can not be guessed directly by looking at
the association name, you need to specify it via the :class option.  For
example, if you have two separate foreign keys in the albums table that
both point to the artists table, maybe to indicate one artist is the
vocalist and one is the composer, you'd have to use the :class option:

  # Database schema:
  #  artists            albums
  #   :id <----\         :id
  #   :name     \----- :vocalist_id
  #               \---- :composer_id
  #                     :name

  class Album
    many_to_one :vocalist, :class=>:Artist
    many_to_one :composer, :class=>:Artist
  end

  class Artist
    one_to_many :vocalist_albums, :class=>:Album, :key=>:vocalist_id
    one_to_many :composer_albums, :class=>:Album, :key=>:composer_id
  end

== Self-referential Associations

Self-referential associations are easy to handle in Sequel.  The simplest
example is a tree structure:

  # Database schema:
  #  nodes
  #   :id <--\
  #   :parent_id ---/
  #   :name

  class Node
    many_to_one :parent, :class=>self
    one_to_many :children, :key=>:parent_id, :class=>self
  end

For many_to_many self-referential associations, it's fairly similar.
Here's an example of a directed graph:

  # Database schema:
  #  nodes              edges
  #   :id <----------- :successor_id
  #   :name      \----- :predecessor_id

  class Node
    many_to_many :direct_successors, :left_key=>:successor_id,
      :right_key=>:predecessor_id, :join_table=>:edges, :class=>self
    many_to_many :direct_predecessors, :right_key=>:successor_id,
      :left_key=>:predecessor_id, :join_table=>:edges, :class=>self
  end

== Methods Added

When you create an association, it's going to add instance methods to the
class related to the association.

All associations are going to have an instance method added with the same
name as the association:

  @artist.albums
  @album.artists

many_to_one and one_to_one associations will also have a setter method
added to change the associated object:

  @album.artist = Artist.create(:name=>'YJM')

many_to_many and one_to_many associations will have three methods added:

- add_* to associate an object to the current object
- remove_* to disassociate an object from the current object
- remove_all_* to disassociate all currently associated objects

Examples:

  @artist.add_album(@album)
  @artist.remove_album(@album)
  @artist.remove_all_albums

== Caching

Associations are cached after being retrieved:

  @artist.album # Not cached - Database Query
  @artist.album # Cached - No Database Query

  @album.artists # Not cached - Database Query
  @album.artists # Cached - No Database Query

You can choose to ignore the cached versions and do a database query to
retrieve results by passing a true argument to the association method:

  @album.artists # Not cached - Database Query
  @album.artists # Cached - No Database Query
  @album.artists(true) # Ignore cache - Database Query

If you reload/refresh the object, it will automatically clear the
associations cache for the object:

  @album.artists # Not cached - Database Query
  @album.artists # Cached - No Database Query
  @album.reload
  @album.artists # Not Cached - Database Query

If you want direct access to the associations cache, use the associations
instance method:

  @album.associations # {}
  @album.associations[:artists] # nil
  @album.artists # [#<Artist ...>, ...]
  @album.associations[:artists] # [#<Artist ...>, ...]
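Since the associations cache is just a plain hash, you can also populate it
yourself to avoid a later query.  A small sketch (the artist ids here are
hypothetical):

  # Preset the cached value so the later call does no query
  @album.associations[:artists] = Artist.where(:id=>[1, 2]).all
  @album.artists # Cached - No Database Query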
== Dataset Method

In addition to the above methods, associations also add an instance method
ending in +_dataset+ that returns a dataset representing the objects in the
associated table:

  @album.artist_id # 10
  @album.artist_dataset # SELECT * FROM artists WHERE (id = 10)

  @artist.id # 20
  @artist.albums_dataset # SELECT * FROM albums WHERE (artist_id = 20)

The association dataset is just like any other Sequel dataset, in that it
can be further filtered, ordered, etc.:

  @artist.albums_dataset.
    where(Sequel.like(:name, 'A%')).
    order(:copies_sold).
    limit(10)
  # SELECT * FROM albums
  # WHERE ((artist_id = 20) AND (name LIKE 'A%'))
  # ORDER BY copies_sold LIMIT 10

Records retrieved using the +_dataset+ method are not cached in the
associations cache.

  @album.artists_dataset.all # [#<Artist ...>, ...]
  @album.associations[:artists] # nil

== Dynamic Association Modification

Similar to the +_dataset+ method, you can provide a block to the
association method to customize the dataset that will be used to retrieve
the records.  So you can apply a filter in either of these two ways:

  @artist.albums_dataset.where(Sequel.like(:name, 'A%'))
  @artist.albums{|ds| ds.where(Sequel.like(:name, 'A%'))}

While they both apply the same filter, using the +_dataset+ method does not
apply any of the association callbacks or handle association reciprocals
(see below for details about callbacks and reciprocals).  Using a block
instead handles all those things, and also caches its results in the
associations cache (ignoring any previously cached value).

== Filtering By Associations

In addition to using the association method to get associated objects, you
can also use associated objects in filters.  For example, to get all albums
for a given artist, you would usually do:

  @artist.albums
  # or @artist.albums_dataset for a dataset

You can also do the following:

  Album.where(:artist=>@artist).all
  # or leave off the .all for a dataset

For filtering by a single association, this isn't very useful.  However,
unlike using the association method, using a filter allows you to filter by
multiple associations:

  Album.where(:artist=>@artist, :publisher=>@publisher)

This will return all albums by that artist and published by that publisher.
This isn't possible using just the association method approach, though you
can combine the approaches:

  @artist.albums_dataset.where(:publisher=>@publisher)

This doesn't just work for +many_to_one+ associations, it also works for
+one_to_one+, +one_to_many+, and +many_to_many+ associations:

  Album.one_to_one :album_info
  # The album related to that AlbumInfo instance
  Album.where(:album_info=>AlbumInfo[2])

  Album.one_to_many :tracks
  # The album related to that Track instance
  Album.where(:tracks=>Track[3])

  Album.many_to_many :tags
  # All albums related to that Tag instance
  Album.where(:tags=>Tag[4])

Note that for +one_to_many+ and +many_to_many+ associations, you still use
the plural form even though only a single model object is given.

You can also exclude by associations:

  Album.exclude(:artist=>@artist).all

This will return all albums not by that artist.

You can also provide an array with multiple model objects:

  Album.where(:artist=>[@artist1, @artist2]).all

Similar to using an array of integers or strings, this will return all
albums whose artist is one of those two artists.
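For reference, these association filters expand to simple key comparisons
in SQL, roughly like this (assuming @artist1 and @artist2 have primary keys
1 and 2):

  Album.where(:artist=>@artist1)
  # SELECT * FROM albums WHERE (artist_id = 1)

  Album.where(:artist=>[@artist1, @artist2])
  # SELECT * FROM albums WHERE (artist_id IN (1, 2))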
You can also use +exclude+ if you want all albums not by either of those
artists:

  Album.exclude(:artist=>[@artist1, @artist2]).all

If you are using a +one_to_many+ or +many_to_many+ association, you may
want to return records that match all of multiple associated records,
instead of matching any of them.  For example:

  Album.where(:tags=>[@tag1, @tag2])

This matches albums that are associated with either @tag1 or @tag2 or both.
If you only want ones associated with both, you can use separate filter
calls:

  Album.where(:tags=>@tag1).where(:tags=>@tag2)

Or the array form of condition specifiers:

  Album.where([[:tags, @tag1], [:tags, @tag2]])

These will return albums associated with both @tag1 and @tag2.

You can also provide a dataset value when filtering by associations:

  Album.where(:artist=>Artist.where(Sequel.like(:name, 'A%'))).all

This will return all albums whose artist starts with 'A'.  Like the other
forms, this can be inverted:

  Album.exclude(:artist=>Artist.where(Sequel.like(:name, 'A%'))).all

This will return all albums whose artist does not start with 'A'.

Note that filtering by associations only works correctly for simple
associations (ones without conditions).

== Name Collisions

Because associations create instance methods, it's possible to override
existing instance methods if you name an association the same as an
existing method.  For example, values and associations would be bad
association names.

== Database Schema

Creating an association doesn't modify the database schema.  Sequel assumes
your associations reflect the existing database schema.  If not, you should
modify your schema before creating the associations.

=== many_to_one/one_to_many

For example, for the following model code:

  class Album
    many_to_one :artist
  end

  class Artist
    one_to_many :albums
  end

You probably want the following database schema:

  #  albums             artists
  #   :id          /--> :id
  #   :artist_id --/    :name
  #   :name

Which could be created using the following Sequel code:

  DB.create_table(:artists) do
    # Primary key must be set explicitly
    primary_key :id
    String :name
  end

  DB.create_table(:albums) do
    primary_key :id
    # Table that foreign key references needs to be set explicitly
    # for a database foreign key reference to be created.
    foreign_key :artist_id, :artists
    String :name
  end

If you already had a schema such as:

  # Database schema:
  #  albums             artists
  #   :id                :id
  #   :name              :name

Then you just need to add the column:

  DB.alter_table(:albums) do
    add_foreign_key :artist_id, :artists
  end

=== many_to_many

With many_to_many associations, the default join table for the association
uses the sorted underscored names of both model classes.
For example, with the following model code:

  class Album
    many_to_many :artists
  end

  class Artist
    many_to_many :albums
  end

The default join table name would be albums_artists, not artists_albums,
because:

  ["artists", "albums"].sort.join('_') # "albums_artists"

Assume you already had the albums and artists tables created, and you just
wanted to add an albums_artists join table to create the following schema:

  # Database schema:
  #  albums
  #   :id <----\
  #   :name     \  albums_artists
  #              \---- :album_id
  #  artists     /---- :artist_id
  #   :id <-----/
  #   :name

You could use the following Sequel code:

  DB.create_join_table(:album_id=>:albums, :artist_id=>:artists)
  # or
  DB.create_table(:albums_artists) do
    foreign_key :album_id, :albums
    foreign_key :artist_id, :artists
  end

== Association Scope

If you nest your Sequel::Model classes inside modules, then you should know
that Sequel will only look in the same module for associations by default.
So the following code will work fine:

  module App
    class Artist < Sequel::Model
      one_to_many :albums
    end

    class Album < Sequel::Model
      many_to_one :artist
    end
  end

However, if you enclose your model classes inside two different modules,
things will not work by default:

  module App1
    class Artist < Sequel::Model
      one_to_many :albums
    end
  end

  module App2
    class Album < Sequel::Model
      many_to_one :artist
    end
  end

To fix this, you need to specify the full model class name using the :class
option:

  module App1
    class Artist < Sequel::Model
      one_to_many :albums, :class=>"App2::Album"
    end
  end

  module App2
    class Album < Sequel::Model
      many_to_one :artist, :class=>"App1::Artist"
    end
  end

If both classes are in the same module, but the default class name used is
not correct, you need to specify the full class name with the :class
option:

  module App1
    class AlbumArtist < Sequel::Model
      one_to_many :albums
    end

    class Album < Sequel::Model
      many_to_one :artist, :class=>"App1::AlbumArtist"
    end
  end

== Method Details

In all of these methods, _association_ is replaced by the symbol you pass
to the association.

=== _association_(reload = false) (e.g. albums)

For +many_to_one+ and +one_to_one+ associations, the _association_ method
returns either the single object associated, or nil if no object is
associated.

  @artist = @album.artist

For +one_to_many+ and +many_to_many+ associations, the _association_ method
returns an array of associated objects, which may be empty if no objects
are currently associated.

  @albums = @artist.albums

=== _association_=(object_to_associate) (e.g. artist=) [+many_to_one+ and +one_to_one+]

The _association_= method sets up an association of the passed object to
the current object.  For +many_to_one+ associations, this sets the foreign
key for the current object to point to the associated object's primary key.

  @album.artist = @artist

For +one_to_one+ associations, this sets the foreign key of the associated
object to the primary key value of the current object.

For +many_to_one+ associations, this does not save the current object.  For
+one_to_one+ associations, this does save the associated object.

=== add_association(object_to_associate) (e.g. add_album) [+one_to_many+ and +many_to_many+]

The add_association method associates the passed object to the current
object.  For +one_to_many+ associations, it sets the foreign key of the
associated object to the primary key value of the current object, and saves
the associated object.  For +many_to_many+ associations, this inserts a row
into the join table with the foreign keys set to the primary key values of
the current and associated objects.
Note that the singular form of the association name is used in this method.

  @artist.add_album(@album)

In addition to passing an actual associated object, you can pass a hash,
and a new associated object will be created from them:

  @artist.add_album(:name=>'RF') # creates Album object

The add_association method returns the now associated object:

  @album = @artist.add_album(:name=>'RF')

=== remove_association(object_to_disassociate) (e.g. remove_album) [+one_to_many+ and +many_to_many+]

The remove_association method disassociates the passed object from the
current object.  For +one_to_many+ associations, it sets the foreign key of
the associated object to NULL, and saves the associated object.  For
+many_to_many+ associations, this deletes the matching row in the join
table.

Similar to the add_association method, the singular form of the association
name is used in this method.

  @artist.remove_album(@album)

Note that this does not delete @album from the database, it only
disassociates it from the @artist.  To delete @album from the database:

  @album.destroy

The add_association and remove_association methods should be thought of as
adding and removing from the association, not from the database.

In addition to passing the object directly to remove_association, you can
also pass the associated object's primary key:

  @artist.remove_album(10)

This will look up the associated object using the key, and remove that
album.

The remove_association method returns the now disassociated object:

  @album = @artist.remove_album(10)

=== remove_all_association (e.g. remove_all_albums) [+one_to_many+ and +many_to_many+]

The remove_all_association method disassociates all currently associated
objects.  For +one_to_many+ associations, it sets the foreign key of all
associated objects to NULL in a single query.  For +many_to_many+
associations, this deletes all matching rows in the join table.

Unlike the add_association and remove_association method, the plural form
of the association name is used in this method.

The remove_all_association method returns the number of rows updated for
+one_to_many+ associations and the number of rows deleted for
+many_to_many+ associations:

  @rows_modified = @artist.remove_all_albums

=== association_dataset (e.g. albums_dataset)

The association_dataset method returns a dataset that represents all
associated objects.  This dataset is like any other Sequel dataset, in that
it can be filtered, ordered, etc.:

  ds = @artist.albums_dataset.where(Sequel.like(:name, 'A%')).order(:copies_sold)

Unlike most other Sequel datasets, association datasets have a couple of
added methods:

  ds.model_object # @artist
  ds.association_reflection # same as Artist.association_reflection(:albums)

For more info on Sequel's reflection capabilities see the
{Reflection page}[link:files/doc/reflection_rdoc.html].

== Overriding Method Behavior

Sequel is designed to be very flexible.  If the default behavior of the
association modification methods isn't what you desire, you can override
the methods in your classes.  However, you should be aware that for each of
the association modification methods described, there is a private method
that is preceded by an underscore that does the actual modification.  The
public method without the underscore handles caching and callbacks, and
shouldn't be overridden by the user.

In addition to overriding the private method in your class, you can also
use association options to change which method Sequel defines.  The only
difference between the two is that if you use an association option to
change the method Sequel defines, you cannot call super to get the default
behavior.
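Schematically, the two approaches look like this.  This is only a sketch,
using roughly the default adder behavior as the example (the :adder option
is described below):

  class Artist < Sequel::Model
    # Option approach: Sequel defines the private _add_album method from
    # this proc, so there is no super to call
    one_to_many :albums, :adder=>proc{|album| album.update(:artist_id=>id)}
  end

  class Artist < Sequel::Model
    one_to_many :albums

    private

    # Override approach: super would provide the default behavior
    def _add_album(album)
      album.update(:artist_id=>id)
    end
  end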
=== _association= (:setter option)

Let's say you want to set a specific field whenever associating an object
using the association setter method.  For example, let's say you have a
file_under column for each album to tell you where to file it.  If the
album is associated with an artist, it should be filed under the artist's
name and the album's name, otherwise it should just use the album's name.

  class Album < Sequel::Model
    many_to_one :artist

    private

    def _artist=(artist)
      if artist
        self.artist_id = artist.id
        self.file_under = "#{artist.name}-#{name}"
      else
        self.artist_id = nil
        self.file_under = name
      end
    end
  end

The above example is contrived, as you would generally use a before_save
model hook to handle such a modification.  However, if you only modify the
album's artist using the artist= method, this approach may perform better.

=== \_add_association (:adder option)

Continuing with the same example, here's how you would handle the same case
if you also wanted to handle the Artist#add_album method:

  class Artist < Sequel::Model
    one_to_many :albums

    private

    def _add_album(album)
      album.update(:artist_id => id, :file_under=>"#{name}-#{album.name}")
    end
  end

=== \_remove_association (:remover option)

Continuing with the same example, here's how you would handle the same case
if you also wanted to handle the Artist#remove_album method:

  class Artist < Sequel::Model
    one_to_many :albums

    private

    def _remove_album(album)
      album.update(:artist_id => nil, :file_under=>album.name)
    end
  end

=== \_remove_all_association (:clearer option)

Continuing with the same example, here's how you would handle the same case
if you also wanted to handle the Artist#remove_all_albums method:

  class Artist < Sequel::Model
    one_to_many :albums

    private

    def _remove_all_albums
      # This is Dataset#update, not Model#update, so the :file_under=>:name
      # ends up being "SET file_under = name" in SQL.
      albums_dataset.update(:artist_id => nil, :file_under=>:name)
    end
  end

== Association Options

Sequel's associations mostly share the same options.  For ease of
understanding, they are grouped here by section.

=== Association Dataset Modification Options

==== block

All association defining methods take a block that is passed the default
dataset and should return a modified copy of the dataset to use for the
association.  For example, if you wanted an association that returns all
albums of an artist that went gold (sold at least 500,000 copies):

  Artist.one_to_many :gold_albums, :class=>:Album do |ds|
    ds.where{copies_sold > 500000}
  end

==== :class

This is the class of the associated objects that will be used.  It's one of
the most commonly used options.  If it is not given, it guesses based on
the name of the association.  If a *_to_many association is used, it uses
the singular form of the association name.
For example:

  Album.many_to_one :artist # guesses Artist
  Artist.one_to_many :albums # guesses Album

However, for more complex associations, especially ones that add additional filters beyond the foreign/primary key relationships, the default class guessed will be wrong:

  # guesses GoldAlbum
  Artist.one_to_many :gold_albums do |ds|
    ds.where{copies_sold > 500000}
  end

You can specify the :class option using the class itself, a Symbol, or a String:

  Album.many_to_one :artist, :class=>Artist   # Class
  Album.many_to_one :artist, :class=>:Artist  # Symbol
  Album.many_to_one :artist, :class=>"Artist" # String

==== :key

For +many_to_one+ associations, is the foreign_key in current model's table that references associated model's primary key, as a symbol. Defaults to :association_id. Can use an array of symbols for a composite key association.

  Album.many_to_one :artist # :key=>:artist_id

For +one_to_one+ and +one_to_many+ associations, is the foreign key in associated model's table that references current model's primary key, as a symbol. Defaults to :"#{self.name.underscore}_id".

  Artist.one_to_many :albums # :key=>:artist_id

In both cases an array of symbols can be used for a composite key association:

  Apartment.many_to_one :building # :key=>[:city, :address]

==== :conditions

The conditions to use to filter the association, can be any argument passed to +where+. If you use a hash or an array of two element arrays, this will also be used as a filter when using eager_graph to load the association.

  Artist.one_to_many :good_albums, :class=>:Album, :conditions=>{:good=>true}
  @artist.good_albums
  # SELECT * FROM albums WHERE ((artist_id = 1) AND (good IS TRUE))

==== :order

The column(s) by which to order the association dataset. Can be a singular column or an array.

  Artist.one_to_many :albums_by_name, :class=>:Album, :order=>:name
  Artist.one_to_many :albums_by_num_tracks, :class=>:Album, :order=>[:num_tracks, :name]

==== :select

The columns to SELECT when loading the association. For most associations, it defaults to nil, so * is used. For +many_to_many+ associations, it defaults to the associated class's table_name.*, which means it doesn't include the columns from the join table. This is to prevent the common issue where the join table includes columns with the same name as columns in the associated table, in which case the joined table's columns would usually end up clobbering the values in the associated table.

If you want to include the join table attributes, you can use this option, but beware that the join table columns can clash with columns from the associated table, so you should alias any columns that have the same name in both the join table and the associated table. Example:

  Artist.one_to_many :albums, :select=>[:id, :name]
  Album.many_to_many :tags, :select=>[Sequel.expr(:tags).*, :albums_tags__number]

==== :limit

Limit the number of records to the provided value:

  Artist.one_to_many :best_selling_albums, :class=>:Album, :order=>:copies_sold, :limit=>5 # LIMIT 5

Use an array with two arguments for the value to specify a limit and an offset.

  Artist.one_to_many :next_best_selling_albums, :class=>:Album, :order=>:copies_sold, :limit=>[10, 5] # LIMIT 10 OFFSET 5

This probably doesn't make a lot of sense for *_to_one associations, though you could use it to specify an offset.

==== :join_table [+many_to_many+]

Name of table that includes the foreign keys to both the current model and the associated model, as a symbol.
Defaults to the name of the current model and the name of the associated model, pluralized, underscored, sorted, and joined with '_'. Here's an example of the defaults:

  Artist.many_to_many :albums # :join_table=>:albums_artists
  Album.many_to_many :artists # :join_table=>:albums_artists
  Person.many_to_many :colleges # :join_table=>:colleges_people

==== :left_key [+many_to_many+]

Foreign key in join table that points to current model's primary key, as a symbol. Defaults to :"#{model_name.underscore}_id".

  Album.many_to_many :tags # :left_key=>:album_id

Can use an array of symbols for a composite key association.

==== :right_key [+many_to_many+]

Foreign key in join table that points to associated model's primary key, as a symbol. Defaults to :"#{association_name.singularize}_id".

  Album.many_to_many :tags # :right_key=>:tag_id

Can use an array of symbols for a composite key association.

==== :distinct

Use the DISTINCT clause when selecting the associated objects, both when lazy loading and eager loading via eager (but not when using eager_graph).

This is most useful for many_to_many associations that use join tables that contain more than just the foreign keys, where you are storing additional information. For example, if you have a database of people, degree types, and colleges, and you want to return all people from a given college, you may want to use :distinct so that if a person has two separate degrees from the same college, they won't show up twice.

==== :clone

The :clone option clones an existing association, taking the options you specified for that association, and making a copy of them for this association. Other options provided by this association are then merged into the cloned options.

This is commonly used if you have a bunch of similar associations that you want to DRY up:

  one_to_many :english_verses, :class=>:LyricVerse, :key=>:lyricsongid, :order=>:number, :conditions=>{:languageid=>1}
  one_to_many :romaji_verses, :clone=>:english_verses, :conditions=>{:languageid=>2}
  one_to_many :japanese_verses, :clone=>:english_verses, :conditions=>{:languageid=>3}

Note that for the final two associations, you didn't have to specify the :class, :key, or :order options, as they were copied by the :clone option. By specifying the :conditions option for the final two associations, it overrides the :conditions option of the first association; it doesn't attempt to merge them.

In addition to the options hash, the :clone option will copy a block argument from the existing association. If you want a cloned association to not have the same block as the association you are cloning from, specify the :block=>nil option in addition to the :clone option.

==== :dataset

This is generally only specified for custom associations that aren't based on primary/foreign key relationships. It should be a proc that is instance_execed to get the base dataset to use before the other options are applied. If the proc accepts an argument, it is passed the related association reflection.

For best performance, it's recommended that custom associations call the +associated_dataset+ method on the association reflection as the starting point for the dataset to return. The +associated_dataset+ method will return a dataset based on the associated class with most of the association options already applied, and the proc should return a modified copy of this dataset.
Here's an example of an association of songs to artists through lyrics, where the artist can perform any one of four tasks for the lyric:

  Artist.one_to_many :songs, :dataset=>(proc do |r|
    r.associated_dataset.select_all(:songs).
      join(Lyric, :id=>:lyricid, id=>[:composer_id, :arranger_id, :vocalist_id, :lyricist_id])
  end)

  Artist.first.songs_dataset
  # SELECT songs.* FROM songs
  # INNER JOIN lyrics ON
  #   lyrics.id = songs.lyricid AND
  #   1 IN (composer_id, arranger_id, vocalist_id, lyricist_id)

==== :extend

A module or array of modules to extend the dataset with. These are used to set up association extensions. For more information, please see the {Advanced Associations page}[link:files/doc/advanced_associations_rdoc.html].

==== :primary_key

The column that the :key option references, as a symbol. For +many_to_one+ associations, this column is in the associated table. For +one_to_one+ and +one_to_many+ associations, this column is in the current table. In both cases, it defaults to the primary key of the table. Can use an array of symbols for a composite key association.

  Artist.set_primary_key :arid
  Artist.one_to_many :albums # :primary_key=>:arid
  Album.many_to_one :artist # :primary_key=>:arid

==== :left_primary_key [+many_to_many+]

Column in current table that :left_key option points to, as a symbol. Defaults to primary key of current table.

  Album.set_primary_key :alid
  Album.many_to_many :tags # :left_primary_key=>:alid

Can use an array of symbols for a composite key association.

==== :right_primary_key [+many_to_many+]

Column in associated table that :right_key points to, as a symbol. Defaults to primary key of the associated table.

  Tag.set_primary_key :tid
  Album.many_to_many :tags # :right_primary_key=>:tid

Can use an array of symbols for a composite key association.

==== :join_table_block [+many_to_many+]

A proc that can be used to modify the dataset used in the add/remove/remove_all methods. It's separate from the association block, as that is called on a join of the join table and the associated table, whereas this option just applies to the join table. It can be used to make sure that filters are used when deleting.

  Artist.many_to_many :lead_guitar_albums,
    :join_table_block=>proc{|ds| ds.where(:instrument_id=>5)}

=== Callback Options

All callbacks can be specified as a Symbol, Proc, or array of both/either specifying a callback to call. Symbols are interpreted as instance methods that are called with the associated object. Procs are called with the receiver as the first argument and the associated object as the second argument. If an array is given, all of them are called in order.

Before callbacks are often used to check preconditions; they can return false to signal Sequel to abort the modification. If any before callback returns false, the remaining before callbacks are not called and the modification is aborted. Before callbacks are also commonly used to modify the current object or the associated object.

After callbacks are often used for notification (logging, email) after a successful modification has been made.
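As a quick illustration of those forms, here's a sketch combining a Symbol and a Proc in a single :before_add callback array. Note that the check_artwork method and the artwork column are hypothetical, not part of the examples below:

  class Artist < Sequel::Model
    # Both callbacks run in order; if either returns false, the add is aborted.
    one_to_many :albums, :before_add=>[:check_artwork, proc{|artist, album| false if album.name == ''}]

    private

    # Symbol callbacks are instance methods called with the associated object.
    def check_artwork(album)
      false if album.artwork.nil?
    end
  end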
==== :before_add [+one_to_many+, +many_to_many+]

Called before adding an object to the association:

  class Artist
    # Don't allow adding an album to an artist if it has no tracks
    one_to_many :albums, :before_add=>proc{|ar, al| false if al.num_tracks == 0}
  end

==== :after_add [+one_to_many+, +many_to_many+]

Called after adding an object to the association:

  class Artist
    # Log all associations of albums to an audit logging table
    one_to_many :albums, :after_add=>:log_add_album

    private

    def log_add_album(album)
      DB[:audit_logs].insert(:log=>"Album #{album.inspect} associated to #{inspect}")
    end
  end

==== :before_remove [+one_to_many+, +many_to_many+]

Called before removing an object from the association:

  class Artist
    # Don't allow removing a self-titled album
    one_to_many :albums, :before_remove=>proc{|ar, al| false if al.name == ar.name}
  end

==== :after_remove [+one_to_many+, +many_to_many+]

Called after removing an object from the association:

  class Artist
    # Log all disassociations of albums to an audit logging table
    one_to_many :albums, :after_remove=>:log_remove_album

    private

    def log_remove_album(album)
      DB[:audit_logs].insert(:log=>"Album #{album.inspect} disassociated from #{inspect}")
    end
  end

==== :before_set [+many_to_one+, +one_to_one+]

Called before the _association= method is called to modify the objects:

  class Album
    # Don't associate the album with an artist if the year the album was
    # released is less than the year the artist/band started.
    many_to_one :artist, :before_set=>proc{|al, ar| false if al.year < ar.year_started}
  end

==== :after_set [+many_to_one+, +one_to_one+]

Called after the _association= method is called to modify the objects:

  class Album
    # Log all artist changes for albums to an audit logging table
    many_to_one :artist, :after_set=>:log_artist_set

    private

    def log_artist_set(artist)
      DB[:audit_logs].insert(:log=>"Artist for album #{inspect} set to #{artist.inspect}")
    end
  end

==== :after_load

Called after retrieving the associated records from the database.

  class Artist
    # Cache all album names to a single string when retrieving the
    # albums.
    one_to_many :albums, :after_load=>:cache_album_names
    attr_reader :album_names

    private

    def cache_album_names(albums)
      @album_names = albums.map{|x| x.name}.join(", ")
    end
  end

Generally used if you know you will always want a certain action done when retrieving the association.

For +one_to_many+ and +many_to_many+ associations, both the argument to symbol callbacks and the second argument to proc callbacks will be an array of associated objects instead of a single object.

==== :uniq [+many_to_many+]

Adds an after_load callback that makes the array of objects unique. In many cases, using the :distinct option is a better approach.

=== Eager Loading via eager (query per association) Options

==== :eager

The associations to eagerly load via eager when loading the associated object(s). This is useful for example if you always want to eagerly load dependent associations when loading this association.
For example, if you know that any time that you want to load an artist's albums, you are also going to want access to the album's tracks as well:

  # Eager load tracks when loading the albums
  Artist.one_to_many :albums, :eager=>:tracks

You can also use a hash or array to specify multiple dependent associations to eagerly load:

  # Eager load the albums' tracks and the tracks' tags when loading the albums
  Artist.one_to_many :albums, :eager=>{:tracks=>:tags}

  # Eager load the albums' tags and tracks when loading the albums
  Artist.one_to_many :albums, :eager=>[:tags, :tracks]

  # Eager load the albums' tags, tracks, and tracks' tags when loading the albums
  Artist.one_to_many :albums, :eager=>[:tags, {:tracks=>:tags}]

==== :eager_loader

A custom loader to use when eagerly loading associated objects via eager. For more details and examples of custom eager loaders, please see the {Advanced Associations guide}[link:files/doc/advanced_associations_rdoc.html].

==== :eager_loader_key

A symbol for the key column to use to populate the key hash for the eager loader. Generally does not need to be set manually, defaults to the key method used. Can be set to nil to not populate the key hash (better for performance if a custom eager loader does not use the key_hash).

==== :eager_block

If given, should be a proc to use instead of the association method block when eagerly loading. To not use a block when eager loading when one is used normally, set it to nil. It's very uncommon to need this option.

=== Eager Loading via eager_graph (one query with joins) Options

==== :eager_graph

The associations to eagerly load via eager_graph when loading the associated object(s). This is useful for example if you always want to eagerly load dependent associations when loading this association, but you want to filter or order the association based on dependent associations:

  Artist.one_to_many :albums_with_short_tracks, :class=>:Album, :eager_graph=>:tracks do |ds|
    ds.where{tracks__seconds < 120}
  end

  Artist.one_to_many :albums_by_track_name, :class=>:Album, :eager_graph=>:tracks do |ds|
    ds.order(:tracks__name)
  end

You can also use a hash or array of arguments for :eager_graph, similar to what the :eager option accepts.

==== :graph_conditions

The additional conditions to use on the SQL join when eagerly loading the association via eager_graph. Should be a hash or an array of two element arrays. If not specified, the :conditions option is used if it is a hash or array of two element arrays.

  Artist.one_to_many :active_albums, :class=>:Album, :graph_conditions=>{:active=>true}

Note that these conditions on the association are in addition to the default conditions specified by the foreign/primary keys. If you want to replace the conditions specified by the foreign/primary keys, you need the :graph_only_conditions option.

==== :graph_block

The block to pass to Dataset#join_table when eagerly loading the association via eager_graph. This is useful to specify conditions that can't be specified in a hash or array of two element arrays.

  Artist.one_to_many :gold_albums, :class=>:Album,
    :graph_block=>proc{|j,lj,js| Sequel.qualify(j, :copies_sold) > 500000}

==== :graph_join_type

The type of SQL join to use when eagerly loading the association via eager_graph. Defaults to :left_outer.
This is useful if you want to ensure that only artists that have albums are returned:

  Artist.one_to_many :albums, :graph_join_type=>:inner
  # Will exclude artists without an album
  Artist.eager_graph(:albums).all

==== :graph_select

A column or array of columns to select from the associated table when eagerly loading the association via eager_graph. Defaults to all columns in the associated table.

==== :graph_only_conditions

The conditions to use on the SQL join when eagerly loading the association via eager_graph, instead of the default conditions specified by the foreign/primary keys. This option causes the :graph_conditions option to be ignored.

This can be useful if the keys you are using are strings and you want to do a case insensitive comparison. For example, let's say that instead of integer keys, you used string keys based on the album or artist name, and that the album was associated to the artist by name. However, you weren't enforcing case sensitivity between the keys, so you still want to return albums where the artist's name differs in case:

  Artist.one_to_many :albums, :key=>:artist_name,
    :graph_only_conditions=>nil,
    :graph_block=>proc{|j,lj,js| {Sequel.function(:lower, Sequel.qualify(j, :artist_name))=>
      Sequel.function(:lower, Sequel.qualify(lj, :name))}}

Note how :graph_only_conditions is set to nil to ignore any existing conditions, and :graph_block is used to set up the case insensitive comparison.

Another case where :graph_only_conditions may be used is if you want to use a JOIN USING or NATURAL JOIN for the graph:

  # JOIN USING
  Artist.one_to_many :albums, :key=>:artist_name, :graph_only_conditions=>[:artist_name]

  # NATURAL JOIN
  Artist.one_to_many :albums, :key=>:artist_name, :graph_only_conditions=>nil, :graph_join_type=>:natural

==== :graph_alias_base

The base name to use for the table alias when eager graphing. Defaults to the name of the association. If the alias name has already been used in the query, Sequel will create a unique alias by appending a numeric suffix (e.g. alias_0, alias_1, ...) until the alias is unique.

This is mostly useful if you have associations with the same name in many models, and you want to be able to easily tell which table alias corresponds to which association when eagerly graphing multiple associations with the same name. You can override this option on a per-graph basis by specifying the association as an SQL::AliasedExpression instead of a symbol:

  Album.eager_graph(Sequel.as(:artist, :a))

==== :eager_grapher

Sets up a custom grapher to use when eager loading the objects via eager_graph. This is the eager_graph analogue to the :eager_loader option. This isn't generally needed, as one of the other eager_graph related association options is usually sufficient.

If specified, should be a proc that accepts a single hash argument, which will contain at least the following keys:

:self :: The dataset that is doing the eager loading
:table_alias :: An alias to use for the table to graph for this association.
:implicit_qualifier :: The alias that was used for the current table (since you can cascade associations).
:callback :: A callback proc used to dynamically modify the dataset to graph into the current dataset, before such graphing is done. This is nil if no callback proc is used.
Example:

  Artist.one_to_many :self_title_albums, :class=>:Album,
    :eager_grapher=>(proc do |eo|
      eo[:self].graph(Album, {:artist_id=>:id, :name=>:name},
        :table_alias=>eo[:table_alias], :implicit_qualifier=>eo[:implicit_qualifier])
    end)

==== :order_eager_graph

Whether to add the order to the dataset's order when graphing via eager_graph. Defaults to true, so set to false to disable.

Sequel has to do some guess work when attempting to add the association's order to an eager_graphed dataset. In most cases it does so correctly, but if it has problems, you'll probably want to set this option to false.

==== :graph_join_table_conditions [+many_to_many+]

The additional conditions to use on the SQL join for the join table when eagerly loading the association via eager_graph. Should be a hash or an array of two element arrays.

Let's say you have a database of people, colleges, and a table called degrees_received that includes a string field specifying the name of the degree, and you want to eager load all colleges for people where the person has received a specific degree:

  Person.many_to_many :bs_degree_colleges, :class=>:College,
    :join_table=>:degrees_received,
    :graph_join_table_conditions=>{:degree=>'BS'}

==== :graph_join_table_block [+many_to_many+]

The block to pass to join_table for the join table when eagerly loading the association via eager_graph. This is used for similar reasons as :graph_block, but is only used for +many_to_many+ associations when graphing the join table into the dataset. It's used in the same place as :graph_join_table_conditions but like :graph_block, is needed for situations where the conditions can't be specified as a hash or array of two element arrays.

Let's say you have a database of people, colleges, and a table called degrees_received that includes a string field specifying the name of the degree, and you want to eager load all colleges for people where the person has received a bachelor's degree (degree starting with B):

  Person.many_to_many :bachelor_degree_colleges, :class=>:College,
    :join_table=>:degrees_received,
    :graph_join_table_block=>proc{|j,lj,js| Sequel.qualify(j, :degree).like('B%')}

This should be done when graphing the join table, instead of when graphing the final table, as :degree is a column of the join table.

==== :graph_join_table_join_type [+many_to_many+]

The type of SQL join to use for the join table when eagerly loading the association via eager_graph. Defaults to the :graph_join_type option or :left_outer. This exists mainly for consistency in the unlikely case that you want to use a different join type when JOINing to the join table than you want to use for JOINing to the final table.

==== :graph_join_table_only_conditions [+many_to_many+]

The conditions to use on the SQL join for the join table when eagerly loading the association via eager_graph, instead of the default conditions specified by the foreign/primary keys. This option causes the :graph_join_table_conditions option to be ignored. This is only useful if you want to replace the default foreign/primary key conditions that Sequel would use when eagerly graphing.

=== Column Naming Conflict Options

Sequel's association support historically called methods on model objects to get primary key or foreign key values instead of accessing the column values directly, in order to allow advanced features such as associations based on virtual column keys.
Unfortunately, that causes issues if columns are used with names that clash with existing method names, which can happen if you want to name the association the same name as an existing column, or if the column has the same name as an already defined method such as object_id.

Sequel has added the following options that allow you to work around the issue by either specifying the column name symbol or the method name symbol to use. In most cases, these options are designed to be used with column aliases defined with Model.def_column_alias:

  # Example schema:
  #   albums           artists
  #     :id       /--->   :id
  #     :artist --/       :name
  #     :name
  class Album < Sequel::Model
    def_column_alias(:artist_id, :artist)
    many_to_one :artist, :key=>:artist_id, :key_column=>:artist
  end

  # Example schema:
  #   things             objs
  #     :id         /--->   :id
  #     :object_id -/       :name
  #     :name
  class Thing < Sequel::Model
    def_column_alias(:obj_id, :object_id)
  end
  class Obj < Sequel::Model
    one_to_many :things, :key=>:object_id, :key_method=>:obj_id
  end

==== :key_column [+many_to_one+]

Like the :key option, but :key references the method name, while :key_column references the underlying column.

==== :primary_key_method [+many_to_one+]

Like the :primary_key option, but :primary_key references the column name, while :primary_key_method references the method name.

==== :primary_key_column [+one_to_many+, +one_to_one+]

Like the :primary_key option, but :primary_key references the method name, while :primary_key_column references the underlying column.

==== :key_method [+one_to_many+, +one_to_one+]

Like the :key option, but :key references the column name, while :key_method references the method name.

==== :left_primary_key_column [+many_to_many+]

Like the :left_primary_key option, but :left_primary_key references the method name, while :left_primary_key_column references the underlying column.

==== :right_primary_key_method [+many_to_many+]

Like the :right_primary_key option, but :right_primary_key references the column name, while :right_primary_key_method references the method name.

=== Private Method Overriding Options

These options override the private methods that Sequel defines to do the actual work of associating and disassociating objects.

==== :setter [*_to_one associations]

This overrides the default behavior when you call an association setter method. Let's say you want to set a specific field whenever associating an object using the association setter method.
  class Album < Sequel::Model
    many_to_one :artist, :setter=>(proc do |artist|
      if artist
        self.artist_id = artist.id
        self.file_under = "#{artist.name}-#{name}"
      else
        self.artist_id = nil
        self.file_under = name
      end
    end)
  end

==== :adder [*_to_many associations]

Continuing with the same example, here's how you would handle the same case if you also wanted to handle the Artist#add_album method:

  class Artist < Sequel::Model
    one_to_many :albums, :adder=>(proc do |album|
      album.update(:artist_id => id, :file_under=>"#{name}-#{album.name}")
    end)
  end

==== :remover [*_to_many associations]

Continuing with the same example, here's how you would handle the same case if you also wanted to handle the Artist#remove_album method:

  class Artist < Sequel::Model
    one_to_many :albums, :remover=>(proc do |album|
      album.update(:artist_id => nil, :file_under=>album.name)
    end)
  end

==== :clearer [*_to_many associations]

Continuing with the same example, here's how you would handle the same case if you also wanted to handle the Artist#remove_all_albums method:

  class Artist < Sequel::Model
    one_to_many :albums, :clearer=>(proc do
      albums_dataset.update(:artist_id => nil, :file_under=>:name)
    end)
  end

=== Advanced Options

==== :reciprocal

The symbol name of the reciprocal association, if it exists. By default, Sequel will try to determine it by looking at the associated model's associations for an association that matches the current association's key(s). Set to nil to not use a reciprocal.

Reciprocals are used in Sequel to modify the matching cached associations in associated objects when calling association methods on the current object. For example, when you retrieve objects in a one_to_many association, it'll automatically set the matching many_to_one association in the associated objects. The result of this is that code that does this:

  @artist.albums.each{|album| album.artist.name}

only does one database query, because when the @artist's albums are retrieved, the cached artist association for each album is set to @artist.

In addition to the one_to_many retrieval case, the association modification methods affect the reciprocals as well:

  # Sets the cached artist association for @album to @artist
  @artist.add_album(@album)

  # Sets the cached artist association for @album to nil
  @artist.remove_album(@album)

  # Sets the cached artist association to nil for the @artist's
  # cached albums association
  @artist.remove_all_albums

  # Remove @album from @artist1's cached albums association, and add @album
  # to @artist2's cached albums association.
  @album.artist # @artist1
  @album.artist = @artist2

Sequel can usually guess the correct reciprocal, but if you have multiple associations to the same associated class that use the same keys, you may want to specify the :reciprocal option manually to ensure the correct one is used.

==== :read_only

For +many_to_one+ and +one_to_one+ associations, do not add a setter method. For +one_to_many+ and +many_to_many+, do not add the add_association, remove_association, or remove_all_association methods.

If the default modification methods would not do what you want, and you don't plan on overriding the internal modification methods to do what you want, it may be best to set this option to true.

==== :validate

Set to false to not validate when implicitly saving any associated object. When using the +one_to_many+ association modification methods, the +one_to_one+ setter method, or creating a new object by passing a hash to the add_association method, Sequel will automatically save the object.
If you don't want to validate objects when these implicit saves are done, the validate option should be set to false.

==== :allow_eager

If set to false, you cannot load the association eagerly via eager or eager_graph.

  Artist.one_to_many :albums, :allow_eager=>false
  Artist.eager(:albums) # Raises Sequel::Error

This is usually used if the association dataset depends on specific values in the model instance that would not be valid when eager loading for multiple instances.

==== :cartesian_product_number

The number of joins completed by this association that could cause more than one row for each row in the current table (default: 0 for *_to_one associations, 1 for *_to_many associations).

This should only be modified in specific cases. For example, if you have a one_to_one association that can actually return more than one row (where the default association method will just return the first), or a many_to_many association where there is a unique index in the join table so that you know only one object will ever be associated through the association.

==== :methods_module

The module that the methods created by the association will be placed into. Defaults to the module containing the model's columns. This is not included in the model's class, so you are responsible for doing that manually.

This is only useful in rare cases, such as when a plugin that adds associations depends on another plugin that defines instance methods of the same name. In that case, the instance methods of the dependent plugin would override the association methods created by the main plugin.

==== :eager_limit_strategy

This setting determines what strategy to use for loading the associations that use the :limit setting to limit the number of returned records. You can't use LIMIT directly, since you want a limit for each group of associated records, not a LIMIT on the total number of records returned by the dataset.

By default, if a *_to_many association uses a limit or offset, or a one_to_one association uses an offset, Sequel will choose to use an eager limit strategy. The default strategy depends on the database being used. For databases which support window functions, a window function will be used. Other databases will just have a ruby array slice done on the entire record set.

For one_to_one associations without offsets, no strategy is used by default because none is needed for a true one_to_one association (since there is only one associated record per current record). However, if you are using a one_to_one association where the relationship is really one_to_many, and using an order to pick the first matching row, then if you don't specify an :eager_limit_strategy option, you'll be loading all related rows just to have Sequel ignore all rows after the first. By using a strategy to change the query to only return one associated record per current record, you can get much better database performance.

In general, Sequel picks an appropriate strategy, so it is not usually necessary to specify a specific strategy. The exception is for one_to_one associations where there is more than one associated record per current record. For those, you should probably specify true for this option to have Sequel pick an appropriate strategy.

You can also specify a symbol to manually choose a strategy. The available strategies are:

:distinct_on :: Uses DISTINCT ON to ensure only the first matching record is loaded (only used for one_to_one associations without offsets on PostgreSQL).
:window_function :: Uses a ROW_NUMBER window function to ensure the correctly limited/offset records are returned.
:ruby :: Uses ruby array slicing to emulate database limiting/offsetting.

= bin/sequel

bin/sequel is the name used to refer to the "sequel" command line tool that ships with the sequel gem. By default, bin/sequel provides an IRB shell with the +DB+ constant set to a Sequel::Database object created using the database connection string provided on the command line. For example, to connect to a new in-memory SQLite database using the sqlite adapter, you can use the following:

  sequel sqlite:/

This is very useful for quick testing of ideas, and does not affect the environment, since the in-memory SQLite database is destroyed when the program exits.

== Running from a git checkout

If you've installed the sequel gem, then just running "sequel" should load the program, since rubygems should place the sequel binary in your load path. However, if you want to run bin/sequel from the root of a repository checkout, you should probably do:

  ruby -I lib bin/sequel

The -I lib makes sure that you are using the repository checkout's code.

== Choosing the Database to Connect to

=== Connection String

In general, you probably want to provide a connection string argument to bin/sequel, indicating the adapter and database connection information you want to use. For example:

  sequel sqlite:/
  sequel postgres://user:pass@host/database_name
  sequel mysql2://user:pass@host/database_name

See the {Connecting to a database guide}[link:files/doc/opening_databases_rdoc.html] for more details about and examples of connection strings.

=== YAML Connection File

Instead of specifying the database connection using a connection string, you can provide the path to a YAML configuration file containing the connection information. This YAML file can contain a single options hash, or it can contain a nested hash, where the top-level hash uses environment keys with hash values for each environment. Using the -e option with a yaml connection file, you can choose which environment to use if using a nested hash.

  sequel -e production config/database.yml

Note that bin/sequel does not directly support ActiveRecord YAML configuration files, as they use different names for some options.

=== Mock Connection

If you don't provide a connection string or YAML connection file, Sequel will start with a mock database. The mock database allows you to play around with Sequel without any database at all, and can be useful if you just want to test things out and generate SQL without actually getting results from a database.

  sequel

Sequel also has the ability to use the mock adapter with database-specific syntax, allowing you to pretend you are connecting to a specific type of database without actually connecting to one. To do that, you need to use a connection string:

  sequel mock://postgres

== Not Just an IRB shell

bin/sequel is not just an IRB shell; it can also do far more.
=== Execute Code

bin/sequel can also be used to execute other ruby files with +DB+ preset to the database given on the command line:

  sequel postgres://host/database_name path/to/some_file.rb

On modern versions of Linux, this means that you can use bin/sequel in a shebang line:

  #!/path/to/bin/sequel postgres://host/database_name

If you want to quickly execute a small piece of ruby code, you can use the -c option:

  sequel -c "p DB.tables" postgres://host/database_name

Similarly, if data is piped into bin/sequel, it will be executed:

  echo "p DB.tables" | sequel postgres://host/database_name

=== Migrate Databases

With the -m option, Sequel will migrate the given database using the provided migration directory:

  sequel -m /path/to/migrations/dir postgres://host/database

You can use the -M option to set the version to migrate to:

  sequel -m /path/to/migrations/dir -M 3 postgres://host/database

See the {migration guide}[link:files/doc/migration_rdoc.html] for more details about migrations.

=== Dump Schemas

Using the -d or -D options, Sequel will dump the database's schema in Sequel migration format to the standard output:

  sequel -d postgres://host/database

To save this information to a file, use a standard shell redirection:

  sequel -d postgres://host/database > /path/to/migrations/dir/001_base_schema.rb

The -d option dumps the migration in database-independent format, the -D option dumps it in database-specific format.

The -S option dumps the schema cache for all tables in the database, which can speed up the usage of Sequel with models when using the schema_caching extension. You should provide this option with the path to which to dump the schema:

  sequel -S /path/to/schema_cache.db postgres://host/database

=== Copy Databases

Using the -C option, Sequel can copy the contents of one database to another, even between different database types. Using this option, you provide two connection strings on the command line:

  sequel -C mysql://host1/database postgres://host2/database2

This copies the table structure, table data, indexes, and foreign keys from the MySQL database to the PostgreSQL database.

== Other Options

Other options not mentioned above are explained briefly here.

=== -E

-E logs all SQL queries to the standard output, so you can see all SQL that Sequel is sending the database.

=== -I include_directory

-I is similar to ruby -I, and specifies an additional $LOAD_PATH directory.

=== -l log_file

-l is similar to -E, but logs all SQL queries to the given file.

=== -L load_directory

-L loads all *.rb files under the given directory. This is usually used to load Sequel::Model classes into bin/sequel.

=== -N

-N skips testing the connection when creating the Database object. This is rarely needed.

=== -r require_lib

-r is similar to ruby -r, requiring the given library.

=== -t

-t tells bin/sequel to output full backtraces in the case of an error, which can aid in debugging.

=== -h

-h prints the usage information for bin/sequel.

=== -v

-v prints the Sequel version in use.

= Cheat Sheet

== Open a database

  require 'sequel'

  DB = Sequel.sqlite('my_blog.db')
  DB = Sequel.connect('postgres://user:password@localhost/my_db')
  DB = Sequel.postgres('my_db', :user => 'user', :password => 'password', :host => 'localhost')
  DB = Sequel.ado('mydb')

== Open an SQLite memory database

Without a filename argument, the sqlite adapter will set up a new sqlite database in memory.
  DB = Sequel.sqlite

== Logging SQL statements

  require 'logger'
  DB = Sequel.sqlite '', :loggers => [Logger.new($stdout)]
  # or
  DB.loggers << Logger.new(...)

== Using raw SQL

  DB.run "CREATE TABLE users (name VARCHAR(255) NOT NULL, age INT(3) NOT NULL)"
  dataset = DB["SELECT age FROM users WHERE name = ?", name]
  dataset.map(:age)
  DB.fetch("SELECT name FROM users") do |row|
    p row[:name]
  end

== Create a dataset

  dataset = DB[:items]
  dataset = DB.from(:items)

== Most dataset methods are chainable

  dataset = DB[:managers].where(:salary => 5000..10000).order(:name, :department)

== Insert rows

  dataset.insert(:name => 'Sharon', :grade => 50)

== Retrieve rows

  dataset.each{|r| p r}
  dataset.all # => [{...}, {...}, ...]
  dataset.first # => {...}

== Update/Delete rows

  dataset.exclude(:active).delete
  dataset.where('price < ?', 100).update(:active => true)

== Datasets are Enumerable

  dataset.map{|r| r[:name]}
  dataset.map(:name) # same as above

  dataset.inject(0){|sum, r| sum + r[:value]}
  dataset.sum(:value) # better

== Filtering (see also {Dataset Filtering}[link:files/doc/dataset_filtering_rdoc.html])

=== Equality

  dataset.where(:name => 'abc')
  dataset.where('name = ?', 'abc')

=== Inequality

  dataset.where{value > 100}
  dataset.exclude{value <= 100}

=== Inclusion

  dataset.where(:value => 50..100)
  dataset.where{(value >= 50) & (value <= 100)}

  dataset.where('value IN ?', [50,75,100])
  dataset.where(:value=>[50,75,100])

  dataset.where(:id=>other_dataset.select(:other_id))

=== Subselects as scalar values

  dataset.where('price > (SELECT avg(price) + 100 FROM table)')
  dataset.where{price > dataset.select(avg(price) + 100)}

=== LIKE/Regexp

  DB[:items].where(Sequel.like(:name, 'AL%'))
  DB[:items].where(:name => /^AL/)

=== AND/OR/NOT

  DB[:items].where{(x > 5) & (y > 10)}.sql
  # SELECT * FROM items WHERE ((x > 5) AND (y > 10))

  DB[:items].where(Sequel.or(:x => 1, :y => 2) & Sequel.~(:z => 3)).sql
  # SELECT * FROM items WHERE (((x = 1) OR (y = 2)) AND (z != 3))

=== Mathematical operators

  DB[:items].where{x + y > z}.sql
  # SELECT * FROM items WHERE ((x + y) > z)

  DB[:items].where{price - 100 < avg(price)}.sql
  # SELECT * FROM items WHERE ((price - 100) < avg(price))

== Ordering

  dataset.order(:kind)
  dataset.reverse_order(:kind)
  dataset.order(Sequel.desc(:kind), :name)

== Limit/Offset

  dataset.limit(30) # LIMIT 30
  dataset.limit(30, 10) # LIMIT 30 OFFSET 10

== Joins

  DB[:items].left_outer_join(:categories, :id => :category_id).sql
  # SELECT * FROM items
  # LEFT OUTER JOIN categories ON categories.id = items.category_id

  DB[:items].join(:categories, :id => :category_id).join(:groups, :id => :items__group_id)
  # SELECT * FROM items
  # INNER JOIN categories ON categories.id = items.category_id
  # INNER JOIN groups ON groups.id = items.group_id

== Aggregate function methods

  dataset.count #=> record count
  dataset.max(:price)
  dataset.min(:price)
  dataset.avg(:price)
  dataset.sum(:stock)

  dataset.group_and_count(:category)
  dataset.select_group(:category).select_append{avg(:price)}

== SQL Functions / Literals

  dataset.update(:updated_at => Sequel.function(:NOW))
  dataset.update(:updated_at => Sequel.lit('NOW()'))

  dataset.update(:updated_at => Sequel.lit("DateValue('1/1/2001')"))
  dataset.update(:updated_at => Sequel.function(:DateValue, '1/1/2001'))

== Schema Manipulation

  DB.create_table :items do
    primary_key :id
    String :name, :unique => true, :null => false
    TrueClass :active, :default => true
    foreign_key :category_id, :categories
    DateTime :created_at

    index :created_at
  end

  DB.drop_table :items

== Aliasing

  DB[:items].select(Sequel.as(:name, :item_name))
  DB[:items].select(:name___item_name)
  DB[:items___items_table].select(:items_table__name___item_name)
  # SELECT items_table.name AS item_name FROM items AS items_table

== Transactions

  DB.transaction do
    dataset.insert(:first_name => 'Inigo', :last_name => 'Montoya')
    dataset.insert(:first_name => 'Farm', :last_name => 'Boy')
  end # Either both are inserted or neither are inserted

Database#transaction is re-entrant:

  DB.transaction do # BEGIN issued only here
    DB.transaction do
      dataset << {:first_name => 'Inigo', :last_name => 'Montoya'}
    end
  end # COMMIT issued only here

Transactions are aborted if an error is raised:

  DB.transaction do
    raise "some error occurred"
  end # ROLLBACK issued and the error is re-raised

Transactions can also be aborted by raising Sequel::Rollback:

  DB.transaction do
    raise(Sequel::Rollback) if something_bad_happened
  end # ROLLBACK issued and no error raised

Savepoints can be used if the database supports it:

  DB.transaction do
    dataset << {:first_name => 'Farm', :last_name => 'Boy'} # Inserted
    DB.transaction(:savepoint=>true) do # This savepoint is rolled back
      dataset << {:first_name => 'Inigo', :last_name => 'Montoya'} # Not inserted
      raise(Sequel::Rollback) if something_bad_happened
    end
    dataset << {:first_name => 'Prince', :last_name => 'Humperdink'} # Inserted
  end

== Miscellaneous:

  dataset.sql # "SELECT * FROM items"
  dataset.delete_sql # "DELETE FROM items"
  dataset.where(:name => 'sequel').exists # "EXISTS ( SELECT * FROM items WHERE name = 'sequel' )"
  dataset.columns #=> array of columns in the result set, does a SELECT
  DB.schema(:items) #=> [[:id, {:type=>:integer, ...}], [:name, {:type=>:string, ...}], ...]

= Code Order

In Sequel, the order in which code is executed is important. This guide provides the recommended way to order your Sequel code. Some of these guidelines are not strictly necessary, but others are, and this guide will be specific about which are strictly necessary.

== Require Sequel

This is sort of a no brainer, but you need to require the library first. This is a strict requirement; none of the other code can be executed unless the library has been required first. Example:

  require 'sequel'

== Add Global Extensions

Global extensions are loaded with Sequel.extension, and affect other parts of Sequel or the general ruby environment. It's not necessary to load them first, but it is a recommended practice. The exception to this is global extensions that integrate with Database-specific extensions, where the Database-specific extension should be loaded first (such as some of the pg_* extensions). In those cases, the global extensions should be loaded after the Database-specific extensions. Example:

  Sequel.extension :blank

== Add Extensions Applied to All Databases/Datasets

If you want database or dataset extensions applied to all databases and datasets, you must use Sequel::Database.extension to load the extension before connecting to a database. If you connect to a database before using Sequel::Database.extension, it will not have that extension loaded. Example:

  Sequel::Database.extension :columns_introspection

== Connect to Databases

Connecting to a database is required before running any queries against that database, or creating any datasets or models. You cannot create model classes without having a database object created first. The convention for an application with a single Database instance is to store that instance in a constant named DB.
Example:

  DB = Sequel.connect('postgres://user:pass@host/database')

== Add Extensions Specific to a Database or All Datasets in that Database

If you want specific databases to use specific extensions, or have all datasets in that database use a specific extension, you need to load that extension into the database after creating it using Sequel::Database#extension. Example:

  DB.extension :pg_array

== Configure Global Model Behavior

If you want to change the configuration for all model classes, you must do so before loading your model classes, as configuration is copied into the subclass when model subclasses are created. Example:

  Sequel::Model.raise_on_save_failure = false

== Add Global Model Plugins

If you want to load a plugin into all model classes, you must do so before loading your model classes, as plugin specific data may need to be copied into the subclass when model subclasses are created. Example:

  Sequel::Model.plugin :prepared_statements

== Load Model Classes

After you have established a database connection, and configured your global model behavior and global plugins, you can load your model classes. It's recommended to have a separate file for each model class, unless the model classes are very simple. Example:

  Dir['./models/*.rb'].each{|f| require f}

== Disconnect If Using Forking Webserver with Code Preloading

If you are using a forking webserver such as unicorn or passenger, with a feature that loads your Sequel code before forking connections (code preloading), then you must disconnect your database connections before forking. If you don't do this, you can end up with child processes sharing database connections and all sorts of weird behavior. Sequel will automatically reconnect on an as needed basis in the child processes, so you only need to do the following in the parent process:

  DB.disconnect

= Sequel's Core Extensions

== Background

Historically, Sequel added methods to many of the core classes, and usage of those methods was the primary and recommended way to use Sequel. For example:

  DB[:table].select(:column.cast(Integer)).  # Symbol#cast
    where(:column.like('A%')).               # Symbol#like
    order({1=>2}.case(0, :a))                # Hash#case

While Sequel never overrode any methods defined by ruby, it is possible that other libraries could define the same methods that Sequel defines, which could cause problems. Also, some rubyists do not like using libraries that add methods to the core classes.

Alternatives for the core extension methods were added to Sequel, so the query above could be written as:

  DB[:table].select(Sequel.cast(:column, Integer)).
    where(Sequel.like(:column, 'A%')).
    order(Sequel.case({1=>2}, 0, :a))

Almost all of the core extension methods have a replacement on the Sequel module. So it is now up to the user which style to use. Using the methods on the Sequel module results in slightly more verbose code, but allows the code to work without modifications to the core classes.

== Issues

There is no recommendation on whether the core_extensions should be used or not. It is very rare that any of the methods added by core_extensions actually causes a problem, but some of them can make it more difficult to find other problems.
For example, if you type:

  do_something if value | other_value

while meaning to type:

  do_something if value || other_value

and value is a Symbol, then instead of a NoMethodError being raised because Symbol#| is not implemented by default, value | other_value will return a Sequel expression object, which the if will treat as true, and do_something will be called.

== Usage

All of Sequel's extensions to the core classes are stored in Sequel's core_extensions extension, which you can load via:

  Sequel.extension :core_extensions

== No Internal Dependency

Sequel has no internal dependency on the core extensions. This includes Sequel's core, Sequel::Model, and all plugins and extensions that ship with Sequel. However, it is possible that external plugins and extensions will depend on the core extensions. Such plugins and extensions should be updated so that they no longer depend on the core extensions.

== Core Extension Methods

This section will briefly describe all of the methods added to the core classes, and what the alternative method is that doesn't require the core extensions.

=== Symbol & String

==== as

Symbol#as and String#as return Sequel aliased expressions using the provided alias:

  :a.as(:b) # SQL: a AS b
  'a'.as(:b) # SQL: 'a' AS b

Alternative: Sequel.as:

  Sequel.as(:a, :b)

==== cast

Symbol#cast and String#cast return Sequel cast expressions for typecasting in the database:

  :a.cast(Integer) # SQL: CAST(a AS integer)
  'a'.cast(Integer) # SQL: CAST('a' AS integer)

Alternative: Sequel.cast:

  Sequel.cast(:a, Integer)

==== cast_numeric

Symbol#cast_numeric and String#cast_numeric return Sequel cast expressions for typecasting in the database, defaulting to integers, where the returned expression is treated as a numeric value:

  :a.cast_numeric # SQL: CAST(a AS integer)
  'a'.cast_numeric(Float) # SQL: CAST('a' AS double precision)

Alternative: Sequel.cast_numeric:

  Sequel.cast_numeric(:a)

==== cast_string

Symbol#cast_string and String#cast_string return Sequel cast expressions for typecasting in the database, defaulting to strings, where the returned expression is treated as a string value:

  :a.cast_string # SQL: CAST(a AS varchar(255))
  'a'.cast_string(:text) # SQL: CAST('a' AS text)

Alternative: Sequel.cast_string:

  Sequel.cast_string(:a)

=== Symbol

==== identifier

Symbol#identifier wraps the symbol in a single identifier that will not be split. By default, Sequel will split symbols with double or triple underscores to do qualifying and aliasing.

  :table__column.identifier # SQL: table__column

Alternative: Sequel.identifier:

  Sequel.identifier(:table__column)

==== asc

Symbol#asc is used to define an ascending order on a column. It exists mostly for consistency with #desc, since ascending is the default order:

  :a.asc # SQL: a ASC

Alternative: Sequel.asc:

  Sequel.asc(:a)

==== desc

Symbol#desc is used to define a descending order on a column. The returned value is usually passed to one of the dataset order methods.
  :a.desc # SQL: a DESC

Alternative: Sequel.desc:

  Sequel.desc(:a)

==== +, -, *, /

The standard mathematical operators are defined on Symbol, and return a Sequel numeric expression object representing the operation:

  :a + :b # SQL: a + b
  :a - :b # SQL: a - b
  :a * :b # SQL: a * b
  :a / :b # SQL: a / b

Alternatives:

  Sequel.+(:a, :b)
  Sequel.-(:a, :b)
  Sequel.*(:a, :b)
  Sequel./(:a, :b)

==== *

The * operator is overloaded on Symbol such that if it is called with no arguments, it represents a selection of all columns in the table:

  :a.* # SQL: a.*

Alternative: Sequel.expr.*:

  Sequel.expr(:a).*

==== qualify

Symbol#qualify qualifies the identifier (e.g. a column) with another identifier (e.g. a table):

  :column.qualify(:table) # SQL: table.column

Alternative: Sequel.qualify:

  Sequel.qualify(:table, :column)

Note the reversed order of the arguments. For the Symbol#qualify method, the argument is the qualifier, while for Sequel.qualify, the qualifier is the first argument.

==== like

Symbol#like returns a case sensitive LIKE expression between the identifier and the given argument:

  :a.like('b%') # SQL: a LIKE 'b%'

Alternative: Sequel.like:

  Sequel.like(:a, 'b%')

==== ilike

Symbol#ilike returns a case insensitive LIKE expression between the identifier and the given argument:

  :a.ilike('b%') # SQL: a ILIKE 'b%'

Alternative: Sequel.ilike:

  Sequel.ilike(:a, 'b%')

==== sql_subscript

Symbol#sql_subscript returns a Sequel expression representing an SQL array access:

  :a.sql_subscript(1) # SQL: a[1]

Alternative: Sequel.subscript:

  Sequel.subscript(:a, 1)

==== extract

Symbol#extract does a datetime part extraction from the receiver:

  :a.extract(:year) # SQL: extract(year FROM a)

Alternative: Sequel.extract:

  Sequel.extract(:year, :a)

Note the reversed order of the arguments. In Symbol#extract, the datetime part is the argument, while in Sequel.extract, the datetime part is the first argument.

==== sql_boolean, sql_number, sql_string

These Symbol methods are used to force treating the object as a specific SQL type, instead of as a general SQL type. For example:

  :a.sql_boolean + 1 # NoMethodError
  :a.sql_number << 1 # SQL: a << 1
  :a.sql_string + 'a' # SQL: a || 'a'

Alternative: Sequel.expr:

  Sequel.expr(:a).sql_boolean
  Sequel.expr(:a).sql_number
  Sequel.expr(:a).sql_string

==== sql_function

Symbol#sql_function returns an SQL function call expression object:

  :now.sql_function # SQL: now()
  :sum.sql_function(:a) # SQL: sum(a)
  :concat.sql_function(:a, :b) # SQL: concat(a, b)

Alternative: Sequel.function:

  Sequel.function(:sum, :a)

=== String

==== lit

String#lit creates a literal string, using placeholders if any arguments are given. Literal strings are not escaped, they are treated as SQL code, not as an SQL string:

  'a'.lit # SQL: a
  '"a" = ?'.lit(1) # SQL: "a" = 1

Alternative: Sequel.lit:

  Sequel.lit('a')

==== to_sequel_blob

String#to_sequel_blob returns the string wrapped in a Sequel blob object. Often blobs need to be handled differently than regular strings by the database adapters.
"a\0".to_sequel_blob # SQL: X'6100' Alternative: Sequel.blob: Sequel.blob("a\0") === Hash, Array, & Symbol ==== ~ Array#~, Hash#~, and Symbol#~ treat the receiver as a conditions specifier, not matching all of the conditions: ~{:a=>1, :b=>[2, 3]} # SQL: a != 1 OR b NOT IN (2, 3) ~[[:a, 1], [:b, [1, 2]]] # SQL: a != 1 OR b NOT IN (1, 2) Alternative: Sequel.~: Sequel.~(:a=>1, :b=>[2, 3]) === Hash & Array ==== case Array#case and Hash#case return an SQL CASE expression, where the keys are conditions and the values are results: {{:a=>[2,3]}=>1}.case(0) # SQL: CASE WHEN a IN (2, 3) THEN 1 ELSE 0 END [[{:a=>[2,3]}, 1]].case(0) # SQL: CASE WHEN a IN (2, 3) THEN 1 ELSE 0 END Alternative: Sequel.case: Sequel.case({:a=>[2,3]}=>1}, 0) ==== sql_expr Array#sql_expr and Hash#sql_expr treat the receiver as a conditions specifier, matching all of the conditions in the array. {:a=>1, :b=>[2, 3]}.sql_expr # SQL: a = 1 AND b IN (2, 3) [[:a, 1], [:b, [2, 3]]].sql_expr # SQL: a = 1 AND b IN (2, 3) Alternative: Sequel.expr: Sequel.expr(:a=>1, :b=>[2, 3]) ==== sql_negate Array#sql_negate and Hash#sql_negate treat the receiver as a conditions specifier, matching none of the conditions in the array: {:a=>1, :b=>[2, 3]}.sql_negate # SQL: a != 1 AND b NOT IN (2, 3) [[:a, 1], [:b, [2, 3]]].sql_negate # SQL: a != 1 AND b NOT IN (2, 3) Alternative: Sequel.negate: Sequel.negate(:a=>1, :b=>[2, 3]) ==== sql_or Array#sql_or nd Hash#sql_or treat the receiver as a conditions specifier, matching any of the conditions in the array: {:a=>1, :b=>[2, 3]}.sql_or # SQL: a = 1 OR b IN (2, 3) [[:a, 1], [:b, [2, 3]]].sql_or # SQL: a = 1 OR b IN (2, 3) Alternative: Sequel.or: Sequel.or(:a=>1, :b=>[2, 3]) === Array ==== sql_value_list Array#sql_value_list wraps the array in an array subclass, which Sequel will always treat as a value list and not a conditions specifier. By default, Sequel treats arrays of two element arrays as a conditions specifier. DB[:a].filter('(a, b) IN ?', [[1, 2], [3, 4]]) # SQL: (a, b) IN ((1 = 2) AND (3 = 4)) DB[:a].filter('(a, b) IN ?', [[1, 2], [3, 4]].sql_value_list) # SQL: (a, b) IN ((1, 2), (3, 4)) Alternative: Sequel.value_list: Sequel.value_list([[1, 2], [3, 4]]) ==== sql_string_join Array#sql_string_join joins all of the elements in the array in an SQL string concatentation expression: [:a].sql_string_join # SQL: a [:a, :b].sql_string_join # SQL: a || b [:a, 'b'].sql_string_join # SQL: a || 'b' ['a', :b].sql_string_join(' ') # SQL: 'a' || ' ' || b Alternative: Sequel.join: Sequel.join(['a', :b], ' ') === Hash & Symbol ==== & Hash#& and Symbol#& return a Sequel boolean expression, matching the condition specified by the receiver and the condition specified by the given argument: :a & :b # SQL: a AND b {:a=>1} & :b # SQL: a = 1 AND b {:a=>true} & :b # SQL: a IS TRUE AND b Alternative: Sequel.&: Sequel.&({:a=>1}, :b) ==== | Hash#| returns a Sequel boolean expression, matching the condition specified by the receiver or the condition specified by the given argument: :a | :b # SQL: a OR b {:a=>1} | :b # SQL: a = 1 OR b {:a=>true} | :b # SQL: a IS TRUE OR b Alternative: Sequel.|: Sequel.|({:a=>1}, :b) ruby-sequel-4.1.1/doc/dataset_basics.rdoc000066400000000000000000000110371220156535500203670ustar00rootroot00000000000000= Dataset Basics == Introduction Datasets are the primary way Sequel uses to access the database. 
While most database libraries have specific support for updating all records or only a single record, Sequel's ability to represent SQL queries themselves as objects is what gives Sequel most of its power. However, if you haven't been exposed to the dataset concept before, it can be a little disorienting. This document aims to give a basic introduction to datasets and how to use them. == What a Dataset Represents A Dataset can be thought of as representing one of two concepts: * An SQL query * An abstract set of rows and some related behavior The first concept is more easily understood, so you should probably start with that assumption. == Basics The most basic dataset is the simple selection of all columns in a table: ds = DB[:posts] # SELECT * FROM posts Here, DB represents your Sequel::Database object, and ds is your dataset, with the SQL query it represents below it. One of the core dataset ideas that should be understood is that datasets use a functional style of modification, in which methods called on the dataset return modified copies of the dataset; they don't modify the dataset themselves: ds2 = ds.where(:id=>1) ds2 # SELECT * FROM posts WHERE id = 1 ds # SELECT * FROM posts Note how ds itself is not modified. This is because ds.where returns a modified copy of ds, instead of modifying ds itself. This makes using datasets both thread safe and easy to chain: # Thread safe: 100.times do |i| Thread.new do ds.where(:id=>i).first end end # Easy to chain: ds3 = ds.select(:id, :name).order(:name).where{id < 100} # SELECT id, name FROM posts WHERE id < 100 ORDER BY name You don't really need to worry about thread safety, but chainability is core to how Sequel is generally used. Almost all dataset methods that affect the SQL produced return modified copies of the receiving dataset. Another important thing to realize is that dataset methods that return modified datasets do not execute the dataset's code on the database. Only dataset methods that return or yield results will execute the code on the database: # No SQL queries sent: ds3 = ds.select(:id, :name).order(:name).filter{id < 100} # Until you call a method that returns results results = ds3.all One important consequence of this API style is that if you use a method chain that includes both methods that return modified copies and a method that executes the SQL, the method that executes the SQL should generally be the last method in the chain: # Good ds.select(:id, :name).order(:name).filter{id < 100}.all # Bad ds.all.select(:id, :name).order(:name).filter{id < 100} This is because all will return an array of hashes, and select, order, and filter are dataset methods, not array methods.
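To make the functional style concrete, here is a short sketch (the posts table and column names are hypothetical) showing one base dataset serving as the starting point for two independent queries:

  ds = DB[:posts]
  # Each chain returns its own modified copy of ds:
  recent = ds.order(Sequel.desc(:id)).limit(10)
  by_name = ds.where(:name=>'Sequel')
  # No SQL has been sent yet; these action methods execute the queries:
  recent.all    # SELECT * FROM posts ORDER BY id DESC LIMIT 10
  by_name.first # SELECT * FROM posts WHERE (name = 'Sequel') LIMIT 1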
== Methods Most Dataset methods that users will use can be broken down into two types: * Methods that return modified datasets * Methods that execute code on the database === Methods that return modified datasets Most dataset methods fall into this category, which can be further broken down by the clause they affect: SELECT:: select, select_all, select_append, select_group, select_more FROM:: from, from_self JOIN:: join, left_join, right_join, full_join, natural_join, natural_left_join, natural_right_join, natural_full_join, cross_join, inner_join, left_outer_join, right_outer_join, full_outer_join, join_table WHERE:: where, filter, exclude, exclude_where, and, or, grep, invert, unfiltered GROUP:: group, group_by, group_and_count, select_group, ungrouped HAVING:: having, exclude_having, invert, unfiltered ORDER:: order, order_by, order_append, order_prepend, order_more, reverse, reverse_order, unordered LIMIT:: limit, unlimited compounds:: union, intersect, except locking:: for_update, lock_style common table expressions:: with, with_recursive other:: clone, distinct, naked, qualify, server, with_sql === Methods that execute code on the database Most other dataset methods commonly used will execute the dataset's SQL on the database: SELECT (All Records):: all, each, map, to_hash, to_hash_groups, select_map, select_order_map, select_hash, select_hash_groups SELECT (First Record):: first, last, [], single_record SELECT (Single Value):: get, single_value SELECT (Aggregates):: count, avg, max, min, sum, range, interval INSERT:: insert, <<, import, multi_insert UPDATE:: update DELETE:: delete other:: columns, columns!, truncate === Other methods See the Sequel::Dataset RDoc for other methods that are less commonly used. ruby-sequel-4.1.1/doc/dataset_filtering.rdoc000066400000000000000000000153271220156535500211140ustar00rootroot00000000000000= Dataset Filtering Sequel is very flexible when it comes to filtering records. You can specify your conditions as a custom string, as a string with parameters, as a hash of values to compare against, or as ruby code that Sequel translates into SQL expressions.
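As a quick preview of the styles covered in the sections below, here is the same kind of filter written each way (the items dataset and columns are hypothetical):

  items.where('price < 100')        # custom string
  items.where('price < ?', 100)     # string with parameters
  items.where(:category => 'ruby')  # hash of values
  items.where{price < 100}          # ruby code (virtual row block)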
== Filtering using a custom filter string If you wish to write your SQL by hand, you can just supply it to the dataset's #where method: items.where('x < 10').sql #=> "SELECT * FROM items WHERE x < 10" In order to prevent SQL injection, you can replace literal values with question marks and supply the values as additional arguments: items.where('category = ?', 'ruby').sql #=> "SELECT * FROM items WHERE category = 'ruby'" You can also use placeholders with :placeholder and a hash of placeholder values: items.where('category = :category', :category=>'ruby').sql #=> "SELECT * FROM items WHERE category = 'ruby'" === Specifying SQL functions Sequel also allows you to specify functions by using the Sequel.function method: items.literal(Sequel.function(:avg, :price)) #=> "avg(price)" If you are specifying a filter/selection/order, you can use a virtual row block: items.select{avg(price)} You can also use the {core_extensions extension}[link:files/doc/core_extensions_rdoc.html] and the +sql_function+ method: :avg.sql_function(:price) == Filtering using a hash If you just need to compare records against values, you can supply a hash: items.where(:category => 'ruby').sql #=> "SELECT * FROM items WHERE (category = 'ruby')" Sequel can check for null values: items.where(:category => nil).sql #=> "SELECT * FROM items WHERE (category IS NULL)" Or compare two columns: items.where(:x => :some_table__y).sql #=> "SELECT * FROM items WHERE (x = some_table.y)" And also compare against multiple values: items.where(:category => ['ruby', 'perl']).sql #=> "SELECT * FROM items WHERE (category IN ('ruby', 'perl'))" Ranges (both inclusive and exclusive) can also be used: items.where(:price => 100..200).sql #=> "SELECT * FROM items WHERE (price >= 100 AND price <= 200)" items.where(:price => 100...200).sql #=> "SELECT * FROM items WHERE (price >= 100 AND price < 200)" == Filtering using an array If you need to select multiple items from a dataset, you can supply an array: item_array = [1, 38, 47, 99] items.where(:id => item_array).sql #=> "SELECT * FROM items WHERE (id IN (1, 38, 47, 99))" == Filtering using expressions You can pass a block to where, which is evaluated in a special context: items.where{price * 2 < 50}.sql #=> "SELECT * FROM items WHERE ((price * 2) < 50)" This works for the standard inequality and arithmetic operators: items.where{price + 100 < 200}.sql #=> "SELECT * FROM items WHERE ((price + 100) < 200)" items.where{price - 100 > 200}.sql #=> "SELECT * FROM items WHERE ((price - 100) > 200)" items.where{price * 100 <= 200}.sql #=> "SELECT * FROM items WHERE ((price * 100) <= 200)" items.where{price / 100 >= 200}.sql #=> "SELECT * FROM items WHERE ((price / 100) >= 200)" You use the overloaded bitwise and (&) and or (|) operators to combine expressions: items.where{(price + 100 < 200) & (price * 100 <= 200)}.sql #=> "SELECT * FROM items WHERE (((price + 100) < 200) AND ((price * 100) <= 200))" items.where{(price - 100 > 200) | (price / 100 >= 200)}.sql #=> "SELECT * FROM items WHERE (((price - 100) > 200) OR ((price / 100) >= 200))" To filter by equality, you use the standard hash, which can be combined with other expressions using Sequel.& and Sequel.|: items.where{Sequel.&({:category => 'ruby'}, (price + 100 < 200))}.sql #=> "SELECT * FROM items WHERE ((category = 'ruby') AND ((price + 100) < 200))" This works with other hash values, such as arrays and ranges: items.where{Sequel.|({:category => ['ruby', 'other']}, (price - 100 > 200))}.sql #=> "SELECT * FROM items WHERE ((category IN ('ruby', 'other')) OR
((price - 100) <= 200))" items.where{Sequel.&({:price => (100..200)}, :active)).sql #=> "SELECT * FROM items WHERE ((price >= 100 AND price <= 200) AND active)" === Negating conditions You can use the exclude method to exclude conditions: items.exclude(:category => 'ruby').sql #=> "SELECT * FROM items WHERE (category != 'ruby')" items.exclude(:active).sql #=> "SELECT * FROM items WHERE NOT active" items.exclude{price / 100 >= 200}.sql #=> "SELECT * FROM items WHERE ((price / 100) < 200) === Comparing against column references You can also compare against other columns: items.where{credit > debit}.sql #=> "SELECT * FROM items WHERE (credit > debit) Or against SQL functions: items.where{price - 100 < max(price)}.sql #=> "SELECT * FROM items WHERE ((price - 100) < max(price))" == String search functions You can search SQL strings in a case sensitive manner using the Sequel.like method: items.where(Sequel.like(:name, 'Acme%')).sql #=> "SELECT * FROM items WHERE (name LIKE 'Acme%')" You can search SQL strings in a case insensitive manner using the Sequel.ilike method: items.where(Sequel.ilike(:name, 'Acme%')).sql #=> "SELECT * FROM items WHERE (name ILIKE 'Acme%')" You can specify a Regexp as a like argument, but this will probably only work on PostgreSQL and MySQL: items.where(Sequel.like(:name, /Acme.*/)).sql #=> "SELECT * FROM items WHERE (name ~ 'Acme.*')" Like can also take more than one argument: items.where(Sequel.like(:name, 'Acme%', /Beta.*/)).sql #=> "SELECT * FROM items WHERE ((name LIKE 'Acme%') OR (name ~ 'Beta.*'))" == String concatenation You can concatenate SQL strings using Sequel.join: items.where(Sequel.join([:name, :comment]).like('%acme%')).sql #=> "SELECT * FROM items WHERE ((name || comment) LIKE 'Acme%')" Sequel.join also takes a join argument: items.filter(Sequel.join([:name, :comment], ' ').like('%acme%')).sql #=> "SELECT * FROM items WHERE ((name || ' ' || comment) LIKE 'Acme%')" == Filtering using sub-queries One of the best features of Sequel is the ability to use datasets as sub-queries. Sub-queries can be very useful for filtering records, and many times provide a simpler alternative to table joins. Sub-queries can be used in all forms of filters: refs = consumer_refs.where(:logged_in).select(:consumer_id) consumers.where(:id => refs).sql #=> "SELECT * FROM consumers WHERE (id IN (SELECT consumer_id FROM consumer_refs WHERE logged_in))" Note that if you are checking for the inclusion of a single column in a subselect, the subselect should only select a single column. == Using OR instead of AND By default, if you chain calls to +where+, the conditions get ANDed together. If you want to use an OR for a condition, you can use the +or+ method: items.where(:name=>'Food').or(:vendor=>1).sql #=> "SELECT * FROM items WHERE ((name = 'Food') OR (vendor = 1))" ruby-sequel-4.1.1/doc/mass_assignment.rdoc000066400000000000000000000106501220156535500206110ustar00rootroot00000000000000= Sequel::Model Mass Assignment Most Model methods that take a hash of attribute keys and values, including Model.new, Model.create, Model#set and Model#update are subject to Sequel's mass assignment rules. When you pass a hash to these methods, each key has an = appended to it (the setter method), and if the setter method exists and access to it is not restricted, Sequel will call the setter method with the hash value. By default, there are two types of setter methods that are restricted. The first is methods like typecast_on_assignment= and ==, which don't affect columns. 
These methods cannot be enabled for mass assignment. The second is primary key setters. To enable use of primary key setters, you need to call +unrestrict_primary_key+ for that model: Post.unrestrict_primary_key Since mass assignment by default allows modification of all column values except for primary key columns, it can be a security risk in some cases. Sequel has multiple ways of securing mass assignment. The first way is using +set_allowed_columns+: Post.set_allowed_columns :title, :body, :category This explicitly sets which methods are allowed (title=, body=, and category=); all other methods will not be allowed. This method is useful in simple applications where the same columns are allowed in all cases, but not appropriate when different columns are allowed in different scenarios (e.g. admin access vs. user access). To handle cases where different columns are allowed in different cases, you can use +set_only+ or +update_only+: # user case post.set_only(params[:post], :title, :body) # admin case post.set_only(params[:post], :title, :body, :deleted) In this case, only the title= and body= methods will be allowed in the mass assignment in the user case, and only title=, body=, and deleted= will be allowed for mass assignment in the admin case. By default, if an invalid setter method call is attempted, Sequel raises a Sequel::Error exception. You can have Sequel silently ignore invalid calls by doing: # Global default Sequel::Model.strict_param_setting = false # Class level Post.strict_param_setting = false # Instance level post.strict_param_setting = false In addition to +set_only+ and +update_only+, Sequel also has +set_fields+ and +update_fields+ methods, and these may be a better mass assignment choice for most users. These methods take two arguments, the attributes hash as the first argument, and a single array of valid field names as the second argument: post.set_fields(params[:post], [:title, :body]) +set_fields+ and +update_fields+ differ in implementation from +set_only+ and +update_only+. With +set_only+ and +update_only+, the hash is iterated over and it checks each method call attempt to see if it is valid. With +set_fields+ and +update_fields+, the array is iterated over, and it just looks up the value in the hash and calls the appropriate setter method. +set_fields+ and +update_fields+ are designed for the case where you are expecting specific fields in the input, and want to ignore the other fields. They work great for things like HTML forms where the form fields are static. +set_only+ and +update_only+ are designed for cases where you are not sure what fields are going to be present in the input, but still want to make sure only certain setter methods can be called. They work great for flexible APIs. +set_fields+ and +update_fields+ take an optional argument hash, and currently handle the :missing option. With :missing=>:skip, +set_fields+ and +update_fields+ will just skip missing entries in the hash, allowing them to be used in flexible APIs. With :missing=>:raise, +set_fields+ and +update_fields+ will raise an error if one of the entries in the hash is missing, instead of just assigning the value to nil or whatever the hash's default value is. That allows stricter checks, similar to the +strict_param_setting+ setting for the default mass assignment methods. You can use the Model.default_set_fields_options= method to set the default options to use for +set_fields+ and +update_fields+ on a global or per-model basis.
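As a brief sketch of the :missing option and the global default (the post object and the hash contents are hypothetical):

  # Skip fields not present in the hash, instead of assigning nil
  post.set_fields({:title=>'T'}, [:title, :body], :missing=>:skip)
  # Raise an error, because :body is not present in the hash
  post.update_fields({:title=>'T'}, [:title, :body], :missing=>:raise)
  # Set the default options for all models
  Sequel::Model.default_set_fields_options = {:missing=>:skip}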
In all of the mass assignment cases, methods starting with +set+ will set the attributes without saving the object, while methods starting with +update+ will set the attributes and then save the changes to the object. ruby-sequel-4.1.1/doc/migration.rdoc000066400000000000000000000572441220156535500174210ustar00rootroot00000000000000= Migrations This guide is based on http://guides.rubyonrails.org/migrations.html == Overview Migrations make it easy to alter your database's schema in a systematic manner. They make it easier to coordinate with other developers and make sure that all developers are using the same database schema. Migrations are optional, you don't have to use them. You can always just create the necessary database structure manually using Sequel's schema modification methods or another database tool. However, if you are dealing with other developers, you'll have to send them all of the changes you are making. Even if you aren't dealing with other developers, you generally have to make the schema changes in 3 places (development, testing, and production), and it's probably easier to use the migrations system to apply the schema changes than it is to keep track of the changes manually and execute them manually at the appropriate time. Sequel tracks which migrations you have already run, so to apply migrations you generally need to run Sequel's migrator with bin/sequel -m: sequel -m path/to/migrations postgres://host/database Migrations in Sequel use a very simple DSL via the Sequel.migration method, and inside the DSL, use the Sequel::Database schema modification methods such as +create_table+ and +alter_table+. See the {schema modification guide}[link:files/doc/schema_modification_rdoc.html] for details on the schema modification methods you can use. == A Basic Migration Here is a fairly basic Sequel migration: Sequel.migration do up do create_table(:artists) do primary_key :id String :name, :null=>false end end down do drop_table(:artists) end end This migration has an +up+ block which adds an artist table with an integer primary key named id, and a varchar or text column (depending on the database) named +name+ that doesn't accept +NULL+ values. Migrations should include both up and +down+ blocks, with the +down+ block reversing the change made by up. However, if you never need to be able to migrate down (i.e. you are one of the people that doesn't make mistakes), you can leave out the +down+ block. In this case, the +down+ block just reverses the changes made by up, dropping the table. You can simplify the migration given above by using a reversible migration with a +change+ block: Sequel.migration do change do create_table(:artists) do primary_key :id String :name, :null=>false end end end The +change+ block acts exactly like an +up+ block. The only difference is that it will attempt to create a +down+ block for you, assuming that it knows how to reverse the given migration. The +change+ block can usually correctly reverse the following methods: * +create_table+ * +add_column+ * +add_index+ * +rename_column+ * +rename_table+ * +alter_table+ (supporting the following methods in the +alter_table+ block): * +add_column+ * +add_constraint+ * +add_foreign_key+ (with a symbol, not an array) * +add_primary_key+ (with a symbol, not an array) * +add_index+ * +add_full_text_index+ * +add_spatial_index+ * +rename_column+ If you use any other methods, you should create your own +down+ block. 
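For example, since the +change+ block does not know how to reverse a +drop_column+ call, a migration that drops a column needs explicit +up+ and +down+ blocks. A minimal sketch, reusing the +artists+ table from above:

  Sequel.migration do
    up do
      alter_table(:artists){drop_column :location}
    end

    down do
      alter_table(:artists){add_column :location, String}
    end
  end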
In normal usage, when Sequel's migrator runs, it runs the +up+ blocks for all migrations that have not yet been applied. However, you can use the -M switch to specify the version to which to migrate, and if it is lower than the current version, Sequel will run the +down+ block on the appropriate migrations. You are not limited to creating tables inside a migration, you can alter existing tables as well as modify data. Let's say your artist database originally only included artists from Sacramento, CA, USA, but now you want to branch out and include artists in any city: Sequel.migration do up do add_column :artists, :location, String from(:artists).update(:location=>'Sacramento') end down do drop_column :artists, :location end end This migration adds a +location+ column to the +artists+ table, and sets the +location+ column to 'Sacramento' for all existing artists. It doesn't use a default on the column, because future artists should not be assumed to come from Sacramento. In the +down+ block, it just drops the +location+ column from the +artists+ table, reversing the actions of the up block. Note that when updating the +artists+ table in the update, a plain dataset is used, from(:artists). This looks a little weird, but you need to be aware that inside an up or +down+ block in a migration, self always refers to the Sequel::Database object that the migration is being applied to. Since Database#from creates datasets, using from(:artists) inside the +up+ block creates a dataset on the database representing all columns in the +artists+ table, and updates it to set the +location+ column to 'Sacramento'. You should avoid referencing the Sequel::Database object directly in your migration, and always use self to reference it, otherwise you may run into problems. It is possible to use model classes inside migrations, as long as they are loaded into the ruby interpreter, but it's a bad habit as changes to your model classes can then break old migrations, and this breakage is often not caught until much later, such as when a new developer joins the team and wants to run all migrations to create their development database. == The +migration+ extension The migration code is not technically part of the core of Sequel. It's not loaded by default as it is only useful in specific cases. It is one of the extensions that ship with Sequel, which receive the same level of support as Sequel's core. If you want to play with Sequel's migration tools without using the bin/sequel tool, you need to load the migration extension manually: Sequel.extension :migration == Schema methods Migrations themselves do not contain any schema modification methods, but they make it easy to call any of the Sequel::Database modification methods, of which there are many. The main ones are +create_table+ and +alter_table+, but Sequel also comes with numerous other schema modification methods, most of which are shortcuts for +alter_table+ (all of these methods are described in more detail in the {schema modification guide}[link:files/doc/schema_modification_rdoc.html]): * add_column * add_index * create_view * drop_column * drop_index * drop_table * drop_view * rename_table * rename_column * set_column_default * set_column_type These methods handle the vast majority of cross database schema modification SQL. 
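For example, here is a sketch of a migration using two of these shortcut methods directly, without an +alter_table+ block (the column and index are hypothetical):

  Sequel.migration do
    change do
      add_column :artists, :genre, String
      add_index :artists, :genre
    end
  end

Since +add_column+ and +add_index+ can both be reversed, a +change+ block works here.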
If you need to drop down to SQL to execute some database specific code, you can use the +run+ method: Sequel.migration do up{run 'CREATE TRIGGER ...'} down{run 'DROP TRIGGER ...'} end In this case, we are using { and } instead of do and end to define the blocks. Just as before, the +run+ methods inside the blocks are called on the +Database+ object, which just executes the code on the underlying database. == Errors when running migrations Sequel attempts to run migrations inside of a transaction if the database supports transactional DDL statements. On the databases that don't support transactional DDL statements, if there is an error while running a migration, it will not rollback the previous schema changes made by the migration. In that case, you will need to update the database by hand. It's recommended to always run migrations on a test database and ensure they work before running them on any production database. == Transactions You can manually specify to use transactions on a per migration basis. For example, if you want to force transaction use for a particular migration, call the transaction method in the Sequel.migration block: Sequel.migration do transaction change do # ... end end Likewise, you can disable transaction use via no_transaction: Sequel.migration do no_transaction change do # ... end end This is necessary in some cases, such as when attempting to use CREATE INDEX CONCURRENTLY on PostgreSQL (which supports transactional schema modifications, but not that statement inside a transaction). You can also override the transactions setting at the migrator level, either by forcing transactions even if no_transaction is set, or by disabling transactions altogether: # Force transaction use Sequel::Migrator.run(DB, '/path/to/migrations/dir', :use_transactions=>true) # Disable use of transactions Sequel::Migrator.run(DB, '/path/to/migrations/dir', :use_transactions=>false) == Migration files While you can create migration objects yourself and apply them manually, most of the benefit of using migrations comes from using Sequel's +Migrator+, which is what the bin/sequel -m switch does. Sequel's +Migrator+ expects that each migration will be in a separate file in a specific directory. The -m switch requires an argument be specified that is the path to the directory containing the migration files. For example: sequel -m db/migrations postgres://localhost/sequel_test will look in the db/migrations folder relative to the current directory, and run unapplied migrations on the PostgreSQL database sequel_test running on localhost. == Two separate migrators Sequel actually ships with two separate migrators. One is the +IntegerMigrator+, the other is the +TimestampMigrator+.
They both have plusses and minuses: === +IntegerMigrator+ * Simpler, uses migration versions starting with 1 * Doesn't allow duplicate migrations * Doesn't allow missing migrations by default * Just stores the version of the last migration run * Good for single developer or small teams with close communication * Lower risk of undetected conflicting migrations * Requires manual merging of simultaneous migrations === +TimestampMigrator+ * More complex, uses migration versions where the version should represent a timestamp * Allows duplicate migrations (since you could have multiple in a given second) * Allows missing migrations (since you obviously don't have one every second) * Stores the file names of all applied migrations * Good for large teams without close communication * Higher risk of undetected conflicting migrations * Does not require manual merging of simultaneous migrations === Filenames In order for migration files to work with Sequel, they must be specified as follows: version_name.rb where version is an integer and name is a string which should be a very brief description of what the migration does. Each migration class should contain 1 and only 1 call to Sequel.migration. === +IntegerMigrator+ Filenames These are valid migration names for the +IntegerMigrator+: 1_create_artists.rb 2_add_artist_location.rb The only problem with this naming format is that if you have more than 9 migrations, the 10th one will look a bit odd: 1_create_artists.rb 2_add_artist_location.rb ... 9_do_something.rb 10_do_something_else.rb For this reason, it's often best to start with 001 instead of 1, as that means you don't need to worry about that issue until the 1000th migration: 001_create_artists.rb 002_add_artist_location.rb ... 009_do_something.rb 010_do_something_else.rb Migrations start at 1, not 0. The migration version number 0 is important though, as it is used to mean that all migrations should be unapplied (i.e. all +down+ blocks run). In Sequel, you can do that with: sequel -m db/migrations -M 0 postgres://localhost/sequel_test === +TimestampMigrator+ Filenames With the +TimestampMigrator+, the version integer should represent a timestamp, though this isn't strictly required. For example, for 5/10/2010 12:00:00pm, you could use any of the following formats: # Date 20100510_create_artists.rb # Date and Time 20100510120000_create_artists.rb # Unix Epoch Time Integer 1273518000_create_artists.rb The important thing is that all migration files should be in the same format, otherwise when you update, it'll be difficult to make sure migrations are applied in the correct order, as well as be difficult to unapply some of the affected migrations correctly. The +TimestampMigrator+ will be used if any filename in the migrations directory has a version greater than 20000101. Otherwise, the +IntegerMigrator+ will be used. You can force the use of the +TimestampMigrator+ in the API by calling TimestampMigrator.apply instead of Migrator.apply. === How to choose Basically, unless you need the features provided by the +TimestampMigrator+, stick with the +IntegerMigrator+, as it is simpler and makes it easier to detect possible errors. For a single developer, the +TimestampMigrator+ has no real benefits, so I would always recommend the +IntegerMigrator+. When dealing with multiple developers, it depends on the size of the development team, the team's communication level, and the level of overlap between developers.
Let's say Alice works on a new feature that requires a migration at the same time Bob works on a separate feature that requires an unrelated migration. If both developers are committing to their own private repositories, when it comes time to merge, the +TimestampMigrator+ will not require any manual changes. That's because Alice will have a migration such as 20100512_do_this.rb and Bob will have one such as 20100512_do_that.rb. If the +IntegerMigrator+ was used, Alice would have 34_do_this.rb and Bob would have 34_do_that.rb. In that case, the +IntegerMigrator+ would raise an exception due to the duplicate migration version. The only way to fix it would be to renumber one of the two migrations, and have the affected developer manually modify their database. So for unrelated migrations, the +TimestampMigrator+ works fine. However, let's say that the migrations are related, in such a way that if Bob's is run first, Alice's will fail. In this case, the +TimestampMigrator+ would not raise an error when Bob merges Alice's changes, since Bob ran his migration first. However, it would raise an error when Alice runs Bob's migration, and could leave the database in an inconsistent state if the database doesn't support transactional schema changes. With the +TimestampMigrator+, you are trading reliability for convenience. That's possibly a valid trade, especially if simultaneous related schema changes by separate developers are unlikely, but you should give it some thought before using it. == Ignoring missing migrations In some cases, you may want to allow a migration in the database that does not exist in the filesystem (deploying to an older version of code without running a down migration when deploy auto-migrates, for example). If required, you can pass :allow_missing_migration_files => true as an option. This will stop errors from being raised if there are migrations in the database that do not exist in the filesystem. == Modifying existing migrations Just don't do it. In general, you should not modify any migration that has been run on the database and been committed to the source control repository, unless the migration contains an error that causes data loss. As long as it is possible to undo the migration without losing data, you should just add another migration that undoes the actions of the previous bad migration, and maybe does the correct action afterward. The main problem with modifying existing migrations is that you will have to manually modify any databases that ran the migration before it was modified. If you are a single developer, that may be an option, but certainly if you have multiple developers, it's a lot more work. == Creating a migration Sequel doesn't come with generators that create migrations for you. However, creating a migration is as simple as creating a file with the appropriate filename in your migrations directory that contains a Sequel.migration call. The minimal do-nothing migration is: Sequel.migration{} However, the migrations you write should contain an +up+ block that does something, and a +down+ block that reverses the changes made by the +up+ block: Sequel.migration do up{...} down{...} end or they should use the reversible migrations feature with a +change+ block: Sequel.migration do change{...} end == What to put in your migration's +down+ block It's usually easy to determine what you should put in your migration's +up+ block, as it's whatever change you want to make to the database. The +down+ block is less obvious.
In general, it should reverse the changes made by the +up+ block, which means it should execute the opposite of what the +up+ block does in the reverse order in which the +up+ block does it. Here's an example where you are switching from having a single artist per album to multiple artists per album: Sequel.migration do up do # Create albums_artists table create_table(:albums_artists) do foreign_key :album_id, :albums foreign_key :artist_id, :artists index [:album_id, :artist_id], :unique=>true end # Insert one row in the albums_artists table # for each row in the albums table where there # is an associated artist self[:albums_artists].insert([:album_id, :artist_id], self[:albums].select(:id, :artist_id).exclude(:artist_id=>nil)) # Drop the now unnecessary column from the albums table drop_column :albums, :artist_id end down do # Add the foreign key column back to the albums table alter_table(:albums){add_foreign_key :artist_id, :artists} # If possible, associate each album with one of the artists # it was associated with. This loses information, but # there's no way around that. self[:albums_artists]. group(:album_id). select{[album_id, max(artist_id).as(artist_id)]}. having{artist_id > 0}. all do |r| self[:albums]. filter(:id=>r[:album_id]). update(:artist_id=>r[:artist_id]) end # Drop the albums_artists table drop_table(:albums_artists) end end Note that the operations performed in the +down+ block are performed in the reverse order of how they are performed in the +up+ block. Also note how it isn't always possible to reverse exactly what was done in the +up+ block. You should try to do so as much as possible, but if you can't, you may want to have your +down+ block raise a Sequel::Error exception saying why the migration cannot be reverted. == Running migrations You can run migrations using the +sequel+ command line program that comes with Sequel. If you use the -m switch, +sequel+ will run the migrator instead of giving you an IRB session. The -m switch requires an argument that should be a path to a directory of migration files: sequel -m relative/path/to/migrations postgres://host/database sequel -m /absolute/path/to/migrations postgres://host/database If you do not provide a -M switch, +sequel+ will migrate to the latest version in the directory. If you provide a -M switch, it should specify an integer version to which to migrate. # Migrate all the way down sequel -m db/migrations -M 0 postgres://host/database # Migrate to version 10 (IntegerMigrator style migrations) sequel -m db/migrations -M 10 postgres://host/database # Migrate to version 20100510 (TimestampMigrator migrations using YYYYMMDD) sequel -m db/migrations -M 20100510 postgres://host/database Whether or not migrations use the +up+ or +down+ block depends on the version to which you are migrating. If you don't provide a -M switch, all unapplied migrations will be migrated up. If you provide a -M, it will depend on which migrations have been applied. Applied migrations greater than that version will be migrated down, while unapplied migrations less than or equal to that version will be migrated up. == Verbose migrations By default, sequel -m operates as a well behaved command line utility should, printing out nothing if there is no error. If you want to see the SQL being executed during a migration, as well as the amount of time that each migration takes, you can use the -E option to +sequel+ to set up a +Database+ logger that logs to +STDOUT+.
You can also log that same output to a file using the -l option with a log file name. == Using models in your migrations Just don't do it. It can be tempting to use models in your migrations, especially since it's easy to load them at the same time using the -L option to +sequel+. However, this ties your migrations to your models, and makes it so that changes in your models can break old migrations. With Sequel, it should be easy to use plain datasets to accomplish pretty much anything you would want to accomplish in a migration. Even if you have to copy some code from a model method into a migration itself, it's better than having your migration use models and call model methods. == Dumping the current schema as a migration Sequel comes with a +schema_dumper+ extension that dumps the current schema of the database as a migration to +STDOUT+ (which you can redirect to a file using >). This is exposed in the +sequel+ command line tool with the -d and -D switches. -d dumps the schema in database independent format, while -D dumps the schema using a non-portable format, useful if you are using nonportable columns such as +inet+ in your database. Let's say you have an existing database and want to create a migration that would recreate the database's schema: sequel -d postgres://host/database > db/migrations/001_start.rb or using a nonportable format: sequel -D postgres://host/database > db/migrations/001_start.rb The main difference between the two is that -d will use the type methods with the database independent ruby class types, while -D will use the +column+ method with string types. Note that Sequel cannot dump constraints other than primary key and possibly foreign key constraints. If you are using database features such as constraints or triggers, you should use your database's dump and restore programs instead of Sequel's schema dumper. You can take the migration created by the schema dumper to another computer with an empty database, and attempt to recreate the schema using: sequel -m db/migrations postgres://host/database == Checking for Current Migrations In your application code, you may want to check that you are up to date in regards to migrations (i.e. you don't have any unapplied migrations). Sequel offers two separate methods to do that. The first is Sequel::Migrator.check_current. This method raises an exception if there are outstanding migrations that need to be run. The second is Sequel::Migrator.is_current?, which returns true if there are no outstanding migrations, and false if there are outstanding migrations. If you want to ensure that your application code is up to date, you may want to add the following code after connecting to your database: Sequel.extension :migration Sequel::Migrator.check_current(DB, '/path/to/migrations') This will cause your application to raise an error when you start it if you have any outstanding migrations. == Old-style migration classes Before the Sequel.migration DSL was introduced, Sequel used classes for Migrations: Class.new(Sequel::Migration) do def up end def down end end or: class DoSomething < Sequel::Migration def up end def down end end This usage is discouraged in new code, but will continue to be supported indefinitely. It is not recommended to convert old-style migration classes to the Sequel.migration DSL, but it is recommended to use the Sequel.migration DSL for all new migrations. 
ruby-sequel-4.1.1/doc/model_hooks.rdoc000066400000000000000000000332361220156535500177260ustar00rootroot00000000000000= Model Hooks This guide is based on http://guides.rubyonrails.org/activerecord_validations_callbacks.html == Overview Model hooks, also known as model callbacks, are used to specify actions that occur at a given point in a model instance's lifecycle, such as before or after the model object is saved, created, updated, destroyed, or validated. There are also around hooks for all types, which wrap the before hooks, the behavior, and the after hooks. == Basic Usage Sequel::Model uses instance methods for hooks. To define a hook on a model, you just add an instance method to the model class: class Album < Sequel::Model def before_create self.created_at ||= Time.now super end end The one important thing to note here is the call to +super+ inside the hook. Whenever you override one of Sequel::Model's methods, you should be calling +super+ to get the default behavior. Many of the plugins that ship with Sequel work by overriding the hook methods and calling +super+. If you use these plugins and override the hook methods but do not call +super+, it's likely the plugins will not work correctly. == Available Hooks Sequel calls hooks in the following order when saving/creating a new object (one that does not already exist in the database): * +around_validation+ * +before_validation+ * +validate+ method called * +after_validation+ * +around_save+ * +before_save+ * +around_create+ * +before_create+ * INSERT QUERY * +after_create+ * +after_save+ Sequel calls hooks in the following order when saving an existing object: * +around_validation+ * +before_validation+ * +validate+ method called * +after_validation+ * +around_save+ * +before_save+ * +around_update+ * +before_update+ * UPDATE QUERY * +after_update+ * +after_save+ Note that all of the hook calls are the same, except that +around_create+, +before_create+ and +after_create+ are used for a new object, and +around_update+, +before_update+ and +after_update+ are used for an existing object. Note that +around_save+, +before_save+, and +after_save+ are called in both cases. Also note that the validation hooks are not called if the :validate => false option is passed to save. However, the validation hooks are called if you call Model#valid? manually: * +around_validation+ * +before_validation+ * +validate+ method called * +after_validation+ Sequel calls hooks in the following order when destroying an existing object: * +around_destroy+ * +before_destroy+ * DELETE QUERY * +after_destroy+ Note that these hooks are only called when using Model#destroy, they are not called if you use Model#delete. == Special Hook-Related Instance Variables For after_save hooks, a @was_new instance variable is present that indicates whether the record was a new record that was just inserted, or an existing record that was updated. Sequel marks a record as existing as soon as it inserts the record, so in an after_save or after_create hook, the instance is no longer considered new. You have to check @was_new to see if the record was inserted. This exists so that you don't have to have separate after_create and after_update hooks that are mostly the same and only differ slightly depending on whether the record was a new record. For after_update hooks, a @columns_updated instance variable is present that is a hash of the values used to update the row (keys are column symbols, values are column values). 
This should be used by any code that wants to check what columns and values were used during the update. You can't just check the current values of the instance, since Sequel offers ways to manually specify which columns to use during the save. == Transaction-related Hooks There are four other model hooks that Sequel::Model supports, all related to transactions. These are +after_commit+, +after_rollback+, +after_destroy_commit+, and +after_destroy_rollback+. +after_commit+ is called after the transaction in which you saved the object commits, only if it commits. +after_rollback+ is called after the transaction in which you saved the object rolls back, if it rolls back. +after_destroy_commit+ is called after the transaction in which you destroyed the object commits, if it commits. +after_destroy_rollback+ is called after the transaction in which you destroyed the object rolls back, if it rolls back. If you aren't using transactions when saving or destroying model objects, and there isn't a currently open transaction, +after_commit+ and +after_destroy_commit+ will be called after +after_save+ and +after_destroy+, respectively, and +after_rollback+ and +after_destroy_rollback+ won't be called at all. The purpose of these hooks is dealing with external systems that are interacting with the same database. For example, let's say you have a model that stores a picture, and you have a background job library that makes thumbnails of all of the pictures. So when a model object is created, you want to add a background job that will create the thumbnail for the picture. If you used after_save for this and transactions are being used, you are subject to a race condition where the background job library will check the database table for the record before the transaction that saved the record commits, and it won't be able to see the record's data. Using after_commit, you are guaranteed that the background job library will not get notified of the record until after the transaction commits and the data is viewable. == Running Hooks Sequel does not provide a simple way to turn off the running of save/create/update hooks. If you attempt to save a model object, the save hooks are always called. All model instance methods that modify the database call save in some manner, so you can be sure that if you define the hooks, they will be called when you save the object. However, you should note that there are plenty of ways to modify the database without saving a model object. One example is by using plain datasets, or one of the model's dataset methods: Album.where(:name=>'RF').update(:copies_sold=>:copies_sold + 1) # UPDATE albums SET copies_sold = copies_sold + 1 WHERE name = 'RF' In this case, the +update+ method is called on the dataset returned by Album.where. Even if there is only a single object with the name RF, this will not call any hooks. If you want model hooks to be called, you need to make sure to operate on a model object: album = Album.first(:name=>'RF') album.update(:copies_sold=>album.copies_sold + 1) # UPDATE albums SET copies_sold = 2 WHERE id = 1 For the destroy hooks, you need to make sure you call +destroy+ on the object: album.destroy # runs destroy hooks == Skipping Hooks Sequel makes it easy to skip destroy hooks by calling +delete+ instead of +destroy+: album.delete # does not run destroy hooks However, skipping hooks is a bad idea in general and should be avoided. As mentioned above, Sequel doesn't allow you to turn off the running of save hooks. 
If you know what you are doing and really want to skip them, you need to drop down to the dataset level to do so. This can be done for a specific model object by using the +this+ method for a dataset that represents a single object: album.this # dataset The +this+ dataset works just like any other dataset, so you can call +update+ on it to modify it: album.this.update(:copies_sold=>album.copies_sold + 1) If you want to insert a row into the model's table without running the creation hooks, you can use Model.insert instead of Model.create: Album.insert(:name=>'RF') # does not run hooks == Halting Hook Processing Sequel uses a convention that if any before_* hook method returns false (but not nil), that the action will be canceled and a Sequel::HookFailed exception raised (or +nil+ to be returned by +save+ if +raise_on_save_failure+ is +false+). You can use this to implement validation-like behavior that will run even if validations are skipped. For example: class Album < Sequel::Model def before_save return false if name == '' super end end While returning false is not really recommended, you should be aware of this behavior so that you do not inadvertently return false. For around hooks, neglecting to call +super+ halts hook processing in the same way as returning +false+ in a before hook. You can't halt hook processing in after hooks, since by then the main processing has already taken place. By default, Sequel runs hooks other than validation hooks inside a transaction, so if you abort the hook by returning false in a before hook or by raising an exception in any hook, Sequel will rollback the transaction. However, note that the implicit use of transactions when saving and destroying model objects is conditional (it depends on the model instance's +use_transactions+ setting and the :transaction option passed to save). == Conditional Hooks Sometimes you only want to take a certain action in a hook if the object meets a certain condition. For example, let's say you only want to make sure a timestamp is set when updating if the object is at a certain status level: class Album < Sequel::Model def before_update self.timestamp ||= Time.now if status_id > 3 super end end Note how this hook action is made conditional just by using the standard ruby +if+ conditional. Sequel makes it easy to handle conditional hook actions by using standard ruby conditionals inside the instance methods. == Using Hooks in Multiple Classes If you want all your model classes to use the same hook, you can just define that hook in Sequel::Model: class Sequel::Model def before_create self.created_at ||= Time.now super end end Just remember to call +super+ whenever you override the method in a subclass. Note that +super+ is also used when overriding the hook in Sequel::Model itself. This is important because if you add any plugins to Sequel::Model itself, then override a hook in Sequel::Model and do not call +super+, the plugin may not work correctly.
If you don't want all classes to use the same hook, but want to reuse hooks in multiple classes, you should use a plugin or a simple module: === Plugin module SetCreatedAt module InstanceMethods def before_create self.created_at ||= Time.now super end end end Album.plugin(SetCreatedAt) Artist.plugin(SetCreatedAt) === Simple Module module SetCreatedAt def before_create self.created_at ||= Time.now super end end Album.send(:include, SetCreatedAt) Artist.send(:include, SetCreatedAt) == +super+ Ordering While it's not enforced anywhere, it's a good idea to make +super+ the last expression when you override a before hook, and the first expression when you override an after hook: class Album < Sequel::Model def before_save self.updated_at ||= Time.now super end def after_save super AuditLog.create(:log=>"Album #{name} created") end end This allows the following general principles to be true: * before hooks are run in reverse order of inclusion * after hooks are run in order of inclusion * returning false in any before hook will pass the false value down the hook method chain, halting the hook processing. So if you define the same before hook in both a model and a plugin that the model uses, the hooks will be called in this order: * model before hook * plugin before hook * plugin after hook * model after hook Again, Sequel does not enforce that, and you are free to call +super+ in an order other than the recommended one (just make sure that you call it). == Around Hooks Around hooks should only be used if you cannot accomplish the same results with before and after hooks. For example, if you want to catch database errors caused by the +INSERT+ or +UPDATE+ query when saving a model object and raise them as validation errors, you cannot use a before or after hook. You have to use an +around_save+ hook: class Album < Sequel::Model def around_save super rescue Sequel::DatabaseError => e # parse database error, set error on self, and reraise a Sequel::ValidationFailed end end Likewise, let's say that upon retrieval, you associate an object with a file descriptor, and you want to ensure that the file descriptor is closed after the object is saved to the database. Let's assume you are always saving the object and you are not using validations. You could not use an +after_save+ hook safely, since if the database raises an error, the +after_save+ method will not be called. In this case, an +around_save+ hook is also the correct choice: class Album < Sequel::Model def around_save super ensure @file_descriptor.close end end == Hook related plugins === +instance_hooks+ Sequel also ships with an +instance_hooks+ plugin that allows you to define before and after hooks on a per instance basis. It's very useful as it allows you to delay action on an instance until before or after saving. This can be important if you want to modify a group of related objects together (which is how the +nested_attributes+ plugin uses +instance_hooks+). === +hook_class_methods+ While it's recommended to write your hooks as instance methods, Sequel ships with a +hook_class_methods+ plugin that allows you to define hooks via class methods. It exists mostly for legacy compatibility, but is still supported. However, it does not implement around hooks. === +after_initialize+ The after_initialize plugin adds an after_initialize hook that is called for all model instances on creation (both new instances and instances retrieved from the database). It exists mostly for legacy compatibility, but it is still supported.
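As a brief sketch of how the +instance_hooks+ plugin can be used (the AuditLog model is hypothetical):

  Album.plugin :instance_hooks

  album = Album.new(:name=>'RF')
  # This hook runs only for this particular instance, after it is saved:
  album.after_save_hook{AuditLog.create(:log=>"Album #{album.name} saved")}
  album.save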
ruby-sequel-4.1.1/doc/object_model.rdoc000066400000000000000000000464621220156535500200540ustar00rootroot00000000000000= The Sequel Object Model Sequel's dataset layer is mostly structured as a DSL, so it often obscures what actual objects are being used. For example, you don't usually create Sequel objects by calling #new on the object's class (other than Sequel::Model instances). However, just as almost everything in ruby is an object, all the methods you call in Sequel deal with objects behind the scenes. There are five main types of Sequel-specific objects that you deal with in Sequel: * Sequel::Database * Sequel::Dataset * Sequel::Model * Standard Ruby Types * Sequel::SQL::Expression (and subclasses) == Sequel::Database Sequel::Database is the main Sequel object that you deal with. It's usually created by the Sequel.connect method: DB = Sequel.connect('postgres://host/database') A Sequel::Database object represents the database you are connecting to. Sequel::Database handles things like Sequel::Dataset creation, dataset = DB[:table] schema modification, DB.create_table(:table) do primary_key :id String :name end and transactions: DB.transaction do DB[:table].insert(:column=>value) end Sequel::Database#literal can be used to take any object that Sequel handles and literalize the object to an SQL string fragment: DB.literal(DB[:table]) # (SELECT * FROM "table") == Sequel::Dataset Sequel::Dataset objects represent SQL queries, or more generally, they represent abstract collections of rows in the database. They are usually created from a Sequel::Database object: dataset = DB[:table] # SELECT * FROM "table" dataset = DB.from(:table) # SELECT * FROM "table" dataset = DB.select(:column) # SELECT "column" Most Sequel::Dataset methods return modified copies of the receiver, and the general way to build queries in Sequel is via a method chain: dataset = DB[:test]. select(:column1, :column2). where(:column3 => 4). order(:column5) Such a method chain is a more direct way of doing: dataset = DB[:test] dataset = dataset.select(:column1, :column2) dataset = dataset.where(:column3 => 4) dataset = dataset.order(:column5) When you are ready to execute your query, you call one of the Sequel::Dataset action methods. For returning rows, you can do: dataset.first dataset.all dataset.each{|row| row} For inserting, updating, or deleting rows, you can do: dataset.insert(:column=>value) dataset.update(:column=>value) dataset.delete All datasets are related to their database object, which you can access via the Sequel::Dataset#db method: dataset.db # => DB == Sequel::Model Sequel::Model objects are wrappers around a particular Sequel::Dataset object that add custom behavior: custom behavior for the entire set of rows in the dataset (the model's class methods), custom behavior for a subset of rows in the dataset (the model's dataset methods), and custom behavior for single rows in the dataset (the model's instance methods).
Unlike most other Sequel objects, Sequel::Model classes and instances are generally created by the user using standard ruby syntax: class Album < Sequel::Model end album = Album.new All model classes are related to their Sequel::Dataset object, which you can access via the Sequel::Model.dataset method: Album.dataset # SELECT * FROM "albums" Additionally, all model classes are related to their dataset's Sequel::Database object, which you can access via the Sequel::Model.db method: Album.db # => DB == Standard Ruby Types Where possible, Sequel uses ruby's standard types to represent SQL concepts. In the examples here, the text to the right side of the # sign is the output if you pass the left side to Sequel::Database#literal. === Symbol For example, ruby symbols represent SQL identifiers (tables, columns, schemas): :table # "table" :column # "column" However, they can also represent qualified identifiers by including a double underscore inside a symbol: :table__column # "table"."column" They can also represent an aliased identifier by including a triple underscore inside a symbol: :column___alias # "column" AS "alias" You can combine both qualification and aliasing by using a double underscore and a triple underscore: :table__column___alias # "table"."column" AS "alias" === Integer, Float, BigDecimal, String, Date, Time, DateTime Ruby's Integer, Float, BigDecimal, String, Date, Time, and DateTime classes represent similar types in SQL: 1 # 1 1.0 # 1.0 BigDecimal.new('1.0') # 1.0 "string" # 'string' Date.new(2012, 5, 6) # '2012-05-06' Time.now # '2012-05-06 10:20:30' DateTime.now # '2012-05-06 10:20:30' === Hash Sequel generally uses hash objects to represent equality: {:column => 1} # ("column" = 1) However, if you use an array as the hash value, it will usually be used to represent inclusion: {:column => [1, 2, 3]} # ("column" IN (1, 2, 3)) You can also use a Sequel::Dataset instance as the hash value, which will be used to represent inclusion in the subselect: {:column => DB[:table].select(:column)} # ("column" IN (SELECT "column" FROM "table")) If you pass true, false, or nil as the hash value, it will be used to represent identity: {:column => nil} # ("column" IS NULL) If you pass a Range object, it will be used as the bounds for a greater than and less than operation: {:column => 1..2} # (("column" >= 1) AND ("column" <= 2)) {:column => 1...3} # (("column" >= 1) AND ("column" < 3)) If you pass a Regexp object as the value, it will be used as a regular expression operation (only supported on PostgreSQL and MySQL currently): {:column => /a.*b/} # ("column" ~ 'a.*b') === Array Sequel generally treats arrays as an SQL value list: [1, 2, 3] # (1, 2, 3) However, if all members of the array are arrays with two members, then the array is treated like a hash: [[:column, 1]] # ("column" = 1) The advantage of using an array over a hash for such a case is that a hash cannot include multiple objects with the same key, while the array can. == Sequel::SQL::Expression (and subclasses) If Sequel needs to represent an SQL concept that does not map directly to an existing ruby class, it will generally use a Sequel::SQL::Expression subclass to represent that concept. Some of the examples below require the {core_extensions extension}[link:files/doc/core_extensions_rdoc.html]. === Sequel::LiteralString Sequel::LiteralString is not actually a Sequel::SQL::Expression subclass.
It is a subclass of String, but it is treated specially by Sequel, in that it is treated as literal SQL code, instead of as an SQL string that needs to be escaped:

  Sequel::LiteralString.new("co'de") # co'de

The following shortcuts exist for creating Sequel::LiteralString objects:

  Sequel.lit("co'de")
  "co'de".lit # core_extensions extension

=== Sequel::SQL::Blob

Sequel::SQL::Blob is also a String subclass, but it is treated as an SQL blob instead of an SQL string, as SQL blobs often have different literalization rules than SQL strings do:

  Sequel::SQL::Blob.new("blob")

The following shortcuts exist for creating Sequel::SQL::Blob objects:

  Sequel.blob("blob")
  "blob".to_sequel_blob # core_extensions extension

=== Sequel::SQLTime

Sequel::SQLTime is a Time subclass. However, it is treated specially by Sequel in that only the time component is literalized, not the date part. This type is used to represent SQL time types, which do not contain date information.

  Sequel::SQLTime.create(10, 20, 30) # "10:20:30"

=== Sequel::SQL::ValueList

Sequel::SQL::ValueList objects always represent SQL value lists. Most ruby arrays represent value lists in SQL, except that arrays of two-element arrays are treated similarly to hashes. Such arrays can be wrapped in this class to ensure they are treated as value lists. This is important when doing a composite key IN lookup, which some databases support.

Sequel::SQL::ValueList is an ::Array subclass with no additional behavior, so it can be instantiated like a normal array:

  Sequel::SQL::ValueList.new([[1, 2], [3, 4]]) # ((1, 2), (3, 4))

In old versions of Sequel, these objects often needed to be created manually, but in newer versions of Sequel, they are created automatically in most cases where they are required.

The following shortcuts exist for creating Sequel::SQL::ValueList objects:

  Sequel.value_list([[1, 2], [3, 4]])
  [[1, 2], [3, 4]].sql_value_list # core_extensions extension

=== Sequel::SQL::Identifier

Sequel::SQL::Identifier objects represent single identifiers. The main reason for their existence is that they are not checked for double or triple underscores, so no automatic qualification or aliasing happens for them:

  Sequel::SQL::Identifier.new(:col__umn) # "col__umn"

The following shortcuts exist for creating Sequel::SQL::Identifier objects:

  Sequel.expr(:column)
  Sequel.identifier(:col__umn)
  :col__umn.identifier # core_extensions extension

=== Sequel::SQL::QualifiedIdentifier

Sequel::SQL::QualifiedIdentifier objects represent qualified identifiers:

  Sequel::SQL::QualifiedIdentifier.new(:table, :column) # "table"."column"

The following shortcuts exist for creating Sequel::SQL::QualifiedIdentifier objects:

  Sequel.expr(:table__column)
  Sequel.qualify(:table, :column)
  :column.qualify(:table) # core_extensions extension

=== Sequel::SQL::AliasedExpression

Sequel::SQL::AliasedExpression objects represent aliased expressions in SQL. The alias is treated as an identifier, but the expression can be an arbitrary Sequel expression:

  Sequel::SQL::AliasedExpression.new(:column, :alias) # "column" AS "alias"

The following shortcuts exist for creating Sequel::SQL::AliasedExpression objects:

  Sequel.expr(:column___alias)
  Sequel.as(:column, :alias)
  :column.as(:alias) # core_extensions extension
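Because the shortcut methods return expression objects, they compose. For instance, a minimal sketch (using the same hypothetical table/column pair as above) that combines qualification and aliasing without the double/triple underscore symbol syntax:

  Sequel.qualify(:table, :column).as(:alias) # "table"."column" AS "alias"

This produces the same SQL as the :table__column___alias symbol shown earlier.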
=== Sequel::SQL::ComplexExpression

Sequel::SQL::ComplexExpression objects mostly represent SQL operations with arguments. There are separate subclasses for representing boolean operations such as AND and OR (Sequel::SQL::BooleanExpression), mathematical operations such as + and - (Sequel::SQL::NumericExpression), and string operations such as || and LIKE (Sequel::SQL::StringExpression).

  Sequel::SQL::BooleanExpression.new(:OR, :col1, :col2) # ("col1" OR "col2")
  Sequel::SQL::NumericExpression.new(:+, :column, 2) # ("column" + 2)
  Sequel::SQL::StringExpression.new(:"||", :column, "b") # ("column" || 'b')

There are many shortcuts for creating Sequel::SQL::ComplexExpression objects:

  Sequel.or(:col1, :col2)
  :col1 | :col2 # core_extensions extension

  Sequel.+(:column, 2)
  :column + 2 # core_extensions extension

  Sequel.join([:column, 'b'])
  :column + 'b' # core_extensions extension

=== Sequel::SQL::CaseExpression

Sequel::SQL::CaseExpression objects represent SQL CASE expressions, which represent branches in the database, similar to ruby case expressions.

Like ruby's case expressions, these case expressions can have an implicit value you are comparing against:

  Sequel::SQL::CaseExpression.new({2=>1}, 0, :a) # CASE "a" WHEN 2 THEN 1 ELSE 0 END

Or they can treat each condition separately:

  Sequel::SQL::CaseExpression.new({{:a=>2}=>1}, 0) # CASE WHEN ("a" = 2) THEN 1 ELSE 0 END

In addition to providing a hash, you can also provide an array of two-element arrays:

  Sequel::SQL::CaseExpression.new([[2, 1]], 0, :a) # CASE "a" WHEN 2 THEN 1 ELSE 0 END

The following shortcuts exist for creating Sequel::SQL::CaseExpression objects:

  Sequel.case({2=>1}, 0, :a)
  Sequel.case({{:a=>2}=>1}, 0)

  {2=>1}.case(0, :a) # core_extensions extension
  {{:a=>2}=>1}.case(0) # core_extensions extension

=== Sequel::SQL::Cast

Sequel::SQL::Cast objects represent CAST expressions in SQL, which perform explicit typecasting in the database. With Sequel, you provide the expression to typecast as well as the type to cast to. The type can either be a generic type, given as a ruby class:

  Sequel::SQL::Cast.new(:a, String) # CAST("a" AS text)

or a specific type, given as a symbol or string:

  Sequel::SQL::Cast.new(:a, :int4) # CAST("a" AS int4)

The following shortcuts exist for creating Sequel::SQL::Cast objects:

  Sequel.cast(:a, String)
  Sequel.cast(:a, :int4)

  :a.cast(String) # core_extensions extension
  :a.cast(:int4) # core_extensions extension

=== Sequel::SQL::ColumnAll

Sequel::SQL::ColumnAll objects represent the selection of all columns from a table. They are pretty much only used as arguments to one of the Dataset select methods, and are not used much anymore since Dataset#select_all was expanded to take arguments. Still, it's possible they are still useful in some code:

  Sequel::SQL::ColumnAll.new(:table) # "table".*

The following shortcuts exist for creating Sequel::SQL::ColumnAll objects:

  Sequel.expr(:table).*
  :table.* # core_extensions extension

=== Sequel::SQL::Constant

Sequel::SQL::Constant objects represent constants or pseudo-constants in SQL, such as TRUE, NULL, and CURRENT_TIMESTAMP. These are not designed to be created or used by the end user, but some existing values are predefined under the Sequel namespace:

  Sequel::CURRENT_TIMESTAMP # CURRENT_TIMESTAMP

These objects are usually used as values in queries:

  DB[:table].insert(:time=>Sequel::CURRENT_TIMESTAMP)
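Since all of these are full expression objects, they can be combined and embedded in queries like any other value. A minimal sketch (the table and columns here are hypothetical) that selects a CASE expression under an alias and filters on a cast:

  DB[:table].select(Sequel.case({{:a=>2}=>1}, 0).as(:flag)).
    where(Sequel.cast(:b, Integer) > 10)
  # SELECT (CASE WHEN ("a" = 2) THEN 1 ELSE 0 END) AS "flag"
  # FROM "table" WHERE (CAST("b" AS integer) > 10)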
=== Sequel::SQL::DelayedEvaluation

Sequel::SQL::DelayedEvaluation objects represent an evaluation that is delayed until query literalization.

  Sequel::SQL::DelayedEvaluation.new(proc{some_model.updated_at})

The following shortcut exists for creating Sequel::SQL::DelayedEvaluation objects:

  Sequel.delay{some_model.updated_at}

Note how Sequel.delay requires a block, while Sequel::SQL::DelayedEvaluation.new accepts a generic callable object.

Let's say you wanted a dataset for the rows whose updated_at value is greater than the updated_at of some other object:

  ds = DB[:table].where{updated_at > some_model.updated_at}

The problem with the above query is that it evaluates "some_model.updated_at" statically, so if you change some_model.updated_at later, it won't affect this dataset. You can use Sequel.delay to fix this:

  ds = DB[:table].where{updated_at > Sequel.delay{some_model.updated_at}}

This will evaluate "some_model.updated_at" every time you literalize the dataset (usually every time it is executed).

=== Sequel::SQL::Function

Sequel::SQL::Function objects represent database function calls, which take a function name and any arguments:

  Sequel::SQL::Function.new(:func, :a, 2) # func("a", 2)

The following shortcuts exist for creating Sequel::SQL::Function objects:

  Sequel.function(:func, :a, 2)
  :func.sql_function(:a, 2) # core_extensions extension

=== Sequel::SQL::JoinClause

Sequel::SQL::JoinClause objects represent SQL JOIN clauses. They are usually not created manually, as the Dataset join methods create them automatically.

=== Sequel::SQL::PlaceholderLiteralString

Sequel::SQL::PlaceholderLiteralString objects represent a literal SQL string with placeholders for variables. There are three types of these objects. The first type uses question marks with multiple placeholder value objects:

  Sequel::SQL::PlaceholderLiteralString.new('? = ?', [:a, 1]) # "a" = 1

The second uses named placeholders with colons and a hash of placeholder value objects:

  Sequel::SQL::PlaceholderLiteralString.new(':b = :v', [{:b=>:a, :v=>1}]) # "a" = 1

The third uses an array instead of a string, with multiple placeholder objects, each one going in between the members of the array:

  Sequel::SQL::PlaceholderLiteralString.new(['', ' = '], [:a, 1]) # "a" = 1

For any of these three forms, you can also include a third argument for whether to include parentheses around the string:

  Sequel::SQL::PlaceholderLiteralString.new('? = ?', [:a, 1], true) # ("a" = 1)

The following shortcuts exist for creating Sequel::SQL::PlaceholderLiteralString objects:

  Sequel.lit('? = ?', :a, 1)
  Sequel.lit(':b = :v', :b=>:a, :v=>1)
  Sequel.lit(['', ' = '], :a, 1)

  '? = ?'.lit(:a, 1) # core_extensions extension
  ':b = :v'.lit(:b=>:a, :v=>1) # core_extensions extension
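These placeholder strings are typically passed to dataset methods rather than used standalone. A minimal sketch (hypothetical table and column):

  DB[:table].where(Sequel.lit('num > ?', 10))
  # SELECT * FROM "table" WHERE num > 10

Note that the raw SQL fragment is inserted as-is; only the placeholder values are escaped.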
=== Sequel::SQL::OrderedExpression

Sequel::SQL::OrderedExpression objects represent ascending or descending sorts, used by the Dataset order methods. They take an expression, and whether to sort it ascending or descending:

  Sequel::SQL::OrderedExpression.new(:a) # "a" DESC
  Sequel::SQL::OrderedExpression.new(:a, false) # "a" ASC

Additionally, they take an options hash, which can be used to specify how nulls should be sorted:

  Sequel::SQL::OrderedExpression.new(:a, true, :nulls=>:first) # "a" DESC NULLS FIRST
  Sequel::SQL::OrderedExpression.new(:a, false, :nulls=>:last) # "a" ASC NULLS LAST

The following shortcuts exist for creating Sequel::SQL::OrderedExpression objects:

  Sequel.asc(:a)
  Sequel.desc(:a)
  Sequel.asc(:a, :nulls=>:first)
  Sequel.desc(:a, :nulls=>:last)

  :a.asc # core_extensions extension
  :a.desc # core_extensions extension
  :a.asc(:nulls=>:first) # core_extensions extension
  :a.desc(:nulls=>:last) # core_extensions extension

=== Sequel::SQL::Subscript

Sequel::SQL::Subscript objects represent SQL database array access. They take an expression and an array of indexes (or a range for an SQL array slice):

  Sequel::SQL::Subscript.new(:a, [1]) # "a"[1]
  Sequel::SQL::Subscript.new(:a, [1, 2]) # "a"[1, 2]
  Sequel::SQL::Subscript.new(:a, [1..2]) # "a"[1:2]

The following shortcuts exist for creating Sequel::SQL::Subscript objects:

  Sequel.subscript(:a, 1)
  Sequel.subscript(:a, 1, 2)
  Sequel.subscript(:a, 1..2)

  :a.sql_subscript(1) # core_extensions extension
  :a.sql_subscript(1, 2) # core_extensions extension
  :a.sql_subscript(1..2) # core_extensions extension

=== Sequel::SQL::VirtualRow

Sequel::SQL::VirtualRow is a BasicObject subclass that is the backbone behind the block expression support:

  DB[:table].where{a < 1}

In the above code, the block is instance-evaled inside a VirtualRow instance.

These objects are usually not instantiated manually. See the {Virtual Row Guide}[link:files/doc/virtual_rows_rdoc.html] for details.

=== Sequel::SQL::Window

Sequel::SQL::Window objects represent the windows used by Sequel::SQL::WindowFunction. They use a hash-based API, supporting the :frame, :order, :partition, and :window options:

  Sequel::SQL::Window.new(:order=>:a) # (ORDER BY "a")
  Sequel::SQL::Window.new(:partition=>:a) # (PARTITION BY "a")

  Sequel::SQL::Window.new(:partition=>:a, :frame=>:all)
  # (PARTITION BY "a" ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)

=== Sequel::SQL::WindowFunction

Sequel::SQL::WindowFunction objects represent SQL window function calls. These just combine a Sequel::SQL::Function with a Sequel::SQL::Window:

  function = Sequel::SQL::Function.new(:f, 1)
  window = Sequel::SQL::Window.new(:order=>:a)
  Sequel::SQL::WindowFunction.new(function, window) # f(1) OVER (ORDER BY "a")

Virtual rows offer a shortcut for creating Sequel::SQL::WindowFunction objects.

=== Sequel::SQL::Wrapper

Sequel::SQL::Wrapper objects wrap arbitrary objects so that they can be used in Sequel expressions:

  o = Object.new
  def o.sql_literal(ds) "foo" end
  Sequel::SQL::Wrapper.new(o) # foo

The advantage of wrapping the object is that you can then call Sequel methods on the wrapper that would not be defined on the object itself:

  Sequel::SQL::Wrapper.new(o) + 1 # (foo + 1)

You can use the Sequel.expr method to wrap any object:

  Sequel.expr(o)

However, note that this does not necessarily return a Sequel::SQL::Wrapper object; it may return a different class of object, such as a Sequel::SQL::ComplexExpression subclass object.

= Connecting to a database

All Sequel activity begins with connecting to a database, which creates a Sequel::Database object.
The Database object is used to create datasets and execute queries. Sequel provides a powerful and flexible mechanism for connecting to databases. There are two main ways to establish database connections:

1. Using the Sequel.connect method
2. Using the specialized adapter method (Sequel.sqlite, Sequel.postgres, etc.)

The connection options needed depend on the adapter being used, though most adapters share the same basic connection options.

If you are only connecting to a single database, it is recommended that you store the database object in a constant named DB. This is not required, but it is the convention that most Sequel code uses.

== Using the Sequel.connect method

The connect method usually takes a well-formed URI, which is parsed into connection options needed to open the database connection. The scheme/protocol part of the URI is used to determine the adapter to use:

  DB = Sequel.connect('postgres://user:password@localhost/blog') # Uses the postgres adapter

You can use URI query parameters to specify options:

  DB = Sequel.connect('postgres://localhost/blog?user=user&password=password')

You can also pass an additional option hash with the connection string:

  DB = Sequel.connect('postgres://localhost/blog', :user=>'user', :password=>'password')

You can also just use an options hash without a connection string. If you do this, you must provide the adapter to use:

  DB = Sequel.connect(:adapter=>'postgres', :host=>'localhost', :database=>'blog', :user=>'user', :password=>'password')

All of the above statements are equivalent.

== Using the specialized adapter method

The specialized adapter method is similar to Sequel.connect with an options hash, except that it automatically populates the :adapter option and assumes the first argument is the :database option, unless the first argument is a hash. So the following statements are equivalent to the previous statements.

  DB = Sequel.postgres('blog', :host=>'localhost', :user=>'user', :password=>'password')
  DB = Sequel.postgres(:host=>'localhost', :user=>'user', :password=>'password', :database=>'blog')

Note that using an adapter method forces the use of the specified adapter, not a database type, even though some adapters have the same name as the database type. So if you want to connect to SQLite, for example, you can do so using the sqlite, do, jdbc, and swift adapters. If you want to connect to SQLite on JRuby using the jdbc adapter, you should not use Sequel.sqlite, as that uses the C-based sqlite3 gem. Instead, Sequel.jdbc would be appropriate (though as mentioned below, using Sequel.connect is recommended instead of Sequel.jdbc).

== Passing a block to either method

Both the Sequel.connect method and the specialized adapter methods take a block. If you provide a block to the method, Sequel will create a Database object and pass it as an argument to the block. When the block returns, Sequel will disconnect the database connection. For example:

  Sequel.connect('sqlite://blog.db'){|db| puts db[:users].count}
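One detail worth knowing: when a block is given, Sequel.connect returns the block's return value rather than the Database object, which makes the block form convenient for short scripts. A small sketch (the blog.db file and users table are hypothetical):

  user_count = Sequel.connect('sqlite://blog.db') do |db|
    db[:users].count
  end
  # The connection is closed here; user_count holds the result.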
== General connection options

These options are shared by all adapters unless otherwise noted.

:adapter :: The adapter to use
:database :: The name of the database to which to connect
:host :: The hostname of the database server to which to connect
:loggers :: An array of SQL loggers to log to
:password :: The password for the user account
:servers :: A hash with symbol keys and hash or proc values, used with master/slave/partitioned database configurations
:single_threaded :: Whether to use a single-threaded (non-thread safe) connection pool
:test :: Whether to test that a valid database connection can be made (false by default)
:user :: The user account name to use when logging in

The following options can be specified and are passed to the database's internal connection pool.

:after_connect :: A proc called after a new connection is made, with the connection object (default: nil)
:max_connections :: The maximum size of the connection pool (default: 4 connections on most databases)
:pool_sleep_time :: The number of seconds to sleep before trying to acquire a connection again (default: 0.001 seconds)
:pool_timeout :: The number of seconds to wait if a connection cannot be acquired before raising an error (default: 5 seconds)

== Adapter specific connection options

The following sections explain the options and behavior specific to each adapter. If the library the adapter requires is different from the name of the adapter scheme, it is listed specifically, otherwise you can assume that it requires the library with the same name.

=== ado

Requires: win32ole

The ADO adapter provides connectivity to ADO databases in Windows. It relies on the WIN32OLE library, so it isn't usable on other operating systems (except possibly through WINE, but that's unlikely).

The following options are supported:

:command_timeout :: Sets the time in seconds to wait while attempting to execute a command before cancelling the attempt and generating an error. Specifically, it sets the ADO CommandTimeout property. If this property is not set, the default of 30 seconds is used.
:driver :: The driver to use in the ADO connection string. If not provided, a default of "SQL Server" is used.
:conn_string :: The full ADO connection string. If this is provided, the usual options are ignored.
:provider :: Sets the Provider of this ADO connection (for example, "SQLOLEDB"). If you don't specify a provider, the default one used by WIN32OLE has major problems, such as creating a new native database connection for every query, which breaks things such as transactions and temporary tables.

Pay special attention to the :provider option, as without specifying a provider, many things will be broken. The SQLNCLI10 provider appears to work well if you are connecting to Microsoft SQL Server, but it is not the default as that would break backwards compatibility.

Example connections:

  # SQL Server
  Sequel.connect('ado:///sequel_test?host=server%5cdb_instance')
  Sequel.connect('ado://user:password@server/database?host=server%5cdb_instance&provider=SQLNCLI10')

  # Access 2007
  Sequel.ado(:conn_string=>'Provider=Microsoft.ACE.OLEDB.12.0;Data Source=drive:\\path\\filename.accdb')

  # Access 2000
  Sequel.ado(:conn_string=>'Provider=Microsoft.Jet.OLEDB.4.0;Data Source=drive:\\path\\filename.mdb')

  # Excel 2000 (for table names, use a dollar after the sheet name, e.g. Sheet1$)
  Sequel.ado(:conn_string=>'Provider=Microsoft.Jet.OLEDB.4.0;Data Source=drive:\\path\\filename.xls;Extended Properties=Excel 8.0;')
=== amalgalite

Amalgalite is a ruby extension that provides self-contained access to SQLite, so you don't need to install SQLite separately. As amalgalite is a file backed database, the :host, :user, and :password options are not used.

:database :: The name of the database file
:timeout :: The busy timeout period given in milliseconds

Without a database argument, assumes a memory database, so you can do:

  Sequel.amalgalite

Handles paths in the connection string similar to the SQLite adapter, so see the sqlite section below for details.

=== cubrid

cubrid is a ruby extension for accessing a CUBRID database. Currently, the ruby cubrid gem is in a fairly rough state, with broken transaction support and some other issues, but most things work.

=== db2

Requires: db2/db2cli

This is the older DB2 adapter. It's recommended you try the ibmdb adapter instead for new DB2 work, as it is better supported.

=== dbi

Allows access to a multitude of databases via ruby-dbi. Additional options:

:db_type :: Specifying 'mssql' allows Microsoft SQL Server specific syntax to be used. Otherwise has no effect.

DBI connection strings are preprocessed a bit, and are specified with a dbi- in front of the protocol. Examples:

  dbi-ado://...
  dbi-db2://...
  dbi-frontbase://...
  dbi-interbase://...
  dbi-msql://...
  dbi-mysql://...
  dbi-odbc://...
  dbi-oracle://...
  dbi-pg://...
  dbi-proxy://...
  dbi-sqlite://...
  dbi-sqlrelay://...

While the DBI adapter does work, it is recommended that you use another adapter if your database supports it.

=== do

Requires: data_objects

The DataObjects adapter supports PostgreSQL, MySQL, and SQLite. One possible advantage of using DataObjects is that it does the typecasting in C, which may be faster than the other adapters.

Similar to the JDBC adapter, the DO adapter only cares about connection strings, which can either be the String argument given to Sequel.connect directly or contained in a :uri or :url option. The DO adapter passes through the connection string directly to DataObjects, it does no processing of it (other than removing the do: prefix). Connection string examples:

  do:sqlite3::memory:
  do:postgres://user:password@host/database
  do:mysql://user:password@host/database

=== firebird

Requires: fb (using code at http://github.com/wishdev/fb)

Does not support the :port option.

=== ibmdb

Requires: ibm_db

This connects to DB2 using IBM_DB. This is the recommended adapter if you are using a C-based ruby to connect to DB2.

=== informix

Does not support the :host or :port options. Depending on the configuration of your server, it may be necessary to either set DB.quote_identifiers = false or export DELIMIDENT=y in the script's environment.

=== jdbc

Requires: java

Houses Sequel's JDBC support when running on JRuby. Support for individual database types is done using sub adapters. There are currently subadapters for PostgreSQL, MySQL, SQLite, H2, HSQLDB, Derby, Oracle, MSSQL, JTDS, AS400, Progress, Firebird, Informix, and DB2. For PostgreSQL, MySQL, SQLite, H2, HSQLDB, Derby, and JTDS, this can use the jdbc-* gem; for the others you need to have the .jar in your CLASSPATH or load the Java class manually before calling Sequel.connect.

You just use the JDBC connection string directly, which can be specified via the string given to Sequel.connect or via the :uri, :url, or :database options. Sequel does no preprocessing of the string, it passes it directly to JDBC. So if you have problems getting a connection string to work, look up the JDBC documentation.

Note that when using a JDBC adapter, the best way to use Sequel is via Sequel.connect, NOT Sequel.jdbc.
Use the JDBC connection string when connecting, which will be in a different format than the native connection string. The connection string should start with 'jdbc:'. For PostgreSQL, use 'jdbc:postgresql:', and for SQLite you do not need 2 preceding slashes for the database name (use no preceding slashes for a relative path, and one preceding slash for an absolute path).

Example connection strings:

  jdbc:sqlite::memory:
  jdbc:postgresql://localhost/database?user=username
  jdbc:mysql://localhost/test?user=root&password=root
  jdbc:h2:mem:
  jdbc:hsqldb:mem:mymemdb
  jdbc:derby:memory:myDb;create=true
  jdbc:sqlserver://localhost;database=sequel_test;integratedSecurity=true
  jdbc:jtds:sqlserver://localhost/sequel_test;user=sequel_test;password=sequel_test
  jdbc:oracle:thin:user/password@localhost:1521:database
  jdbc:db2://localhost:3700/database:user=user;password=password;
  jdbc:firebirdsql:localhost/3050:/path/to/database.fdb
  jdbc:jdbcprogress:T:hostname:port:database
  jdbc:cubrid:hostname:port:database:::

You can also use JNDI connection strings:

  jdbc:jndi:java:comp/env/jndi_resource_name

The following additional options are supported:

:convert_types :: If set to false, does not attempt to convert some Java types to ruby types. Setting to false roughly doubles performance when selecting large numbers of rows. Note that you can't provide this option inside the connection string (as that is passed directly to JDBC), you have to pass it as a separate option.
:login_timeout :: Set the login timeout on the JDBC connection (in seconds).

=== mysql

Requires: mysqlplus (or mysql if mysqlplus is not available)

The MySQL adapter does not support the pure-ruby MySQL adapter that used to ship with ActiveRecord, it requires the native adapter.

The following additional options are supported:

:auto_is_null :: If set to true, makes "WHERE primary_key IS NULL" select the last inserted id.
:charset :: Same as :encoding, :encoding takes precedence.
:compress :: Whether to compress data sent/received via the socket connection.
:config_default_group :: The default group to read from in the MySQL config file.
:config_local_infile :: If provided, sets the Mysql::OPT_LOCAL_INFILE option on the connection with the given value.
:encoding :: Specify the encoding/character set to use for the connection.
:socket :: Can be used to specify a Unix socket file to connect to instead of a TCP host and port.
:sql_mode :: Set the sql_mode(s) for a given connection. Can be single symbol or string, or an array of symbols or strings (e.g. :sql_mode=>[:no_zero_date, :pipes_as_concat]).
:timeout :: Sets the wait_timeout for the connection, defaults to 1 month.
:read_timeout :: Set the timeout in seconds for reading back results to a query.
:connect_timeout :: Set the timeout in seconds before a connection attempt is abandoned.

=== mysql2

This is a newer MySQL adapter that does typecasting in C, so it is often faster than the mysql adapter. Supports the same additional options as the mysql adapter, except for :compress, and uses :timeout instead of :read_timeout and :connect_timeout.
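As a concrete illustration, a hedged sketch of a mysql2 connection that sets a few of the options above (the credentials and database name are placeholders):

  DB = Sequel.connect('mysql2://user:password@localhost/blog',
    :encoding=>'utf8',
    :sql_mode=>[:no_zero_date, :pipes_as_concat],
    :timeout=>3600) # wait_timeout of one hour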
=== odbc

The ODBC adapter allows you to connect to any database with the appropriate ODBC drivers installed. The :database option given to the ODBC adapter should be the DSN (Data Source Name) from the ODBC configuration.

  Sequel.odbc('mydb', :user => "user", :password => "password")

The :host and :port options are not respected. The following additional options are supported:

:db_type :: Can be specified as 'mssql', 'progress', or 'db2' to use SQL syntax specific to those databases.
:drvconnect :: Can be given an ODBC connection string, and will use ODBC::Database#drvconnect to do the connection. Typical usage would be: Sequel.odbc(:drvconnect=>'driver={...};...')

=== openbase

The :port option is ignored.

=== oracle

Requires: oci8

The following additional options are supported:

:autosequence :: Set to true to use Sequel's conventions to guess the sequence to use for datasets. False by default.
:prefetch_rows :: The number of rows to prefetch. Defaults to 100, a larger number can be specified and may improve performance when retrieving a large number of rows.
:privilege :: The Oracle privilege level.

=== postgres

Requires: pg (or postgres if pg is not available)

The Sequel postgres adapter works with the pg, postgres, and postgres-pr ruby libraries. The pg library is the best supported, as it supports real bound variables and prepared statements. If the pg library is being used, Sequel will also attempt to load the sequel_pg library, which is a C extension that optimizes performance when Sequel is used with pg. All users of Sequel who use pg are encouraged to install sequel_pg.

The following additional options are supported:

:charset :: Same as :encoding, :encoding takes precedence
:convert_infinite_timestamps :: Whether infinite timestamps/dates should be converted on retrieval. By default, no conversion is done, so an error is raised if you attempt to retrieve an infinite timestamp/date. You can set this to :nil to convert to nil, :string to leave as a string, or :float to convert to an infinite float.
:encoding :: Set the client_encoding to the given string
:connect_timeout :: Set the number of seconds to wait for a connection (default 20, only respected if using the pg library).
:sslmode :: Set to 'disable', 'allow', 'prefer', 'require' to choose how to treat SSL (only respected if using the pg library)
:use_iso_date_format :: This can be set to false to not force the ISO date format. Sequel forces it by default to allow for an optimization.

=== sqlite

Requires: sqlite3

As SQLite is a file-based database, the :host and :port options are ignored, and the :database option should be a path to the file. Examples:

  # In Memory databases:
  Sequel.sqlite
  Sequel.connect('sqlite:/')
  Sequel.sqlite(':memory:')

  # Relative Path
  Sequel.sqlite('blog.db')
  Sequel.sqlite('./blog.db')
  Sequel.connect('sqlite://blog.db')

  # Absolute Path
  Sequel.sqlite('/var/sqlite/blog.db')
  Sequel.connect('sqlite:///var/sqlite/blog.db')

The following additional options are supported:

:timeout :: the busy timeout to use in milliseconds (default: 5000).

=== swift

swift is a ruby 1.9+ library, so you'll need to be running ruby 1.9+. It can connect to SQLite, MySQL, and PostgreSQL, and you must specify which database type you are connecting to via the :db_type option.

You need to install one of the swift db adapters:

* swift-db-sqlite3
* swift-db-mysql
* swift-db-postgres

Examples:

  swift:///?database=:memory:&db_type=sqlite
  swift://root:root@localhost/test?db_type=mysql
  swift://root:root@localhost/test?db_type=postgres

=== tinytds

Requires: tiny_tds

The connection options are passed directly to tiny_tds, except that the tiny_tds :username option is set to the Sequel :user option. If you want to use an entry in the freetds.conf file, you should specify the :dataserver option with that name as the value.
Some other options that you may want to set are :login_timeout, :timeout, :tds_version, :azure, :appname, and :encoding, see the tiny_tds README for details.

Other Sequel specific options:

:textsize :: Override the default TEXTSIZE setting for this connection. The FreeTDS default is small (around 64000 bytes), but can be set up to around 2GB. This should be specified as an integer. If you plan on setting large text or blob values via tinytds, you should use this option or modify your freetds.conf file.

The Sequel tinytds adapter requires tiny_tds >= 0.4.5, and if you are using FreeTDS 0.91, you must at least be using 0.91rc2 (0.91rc1 does not work).

= PostgreSQL-specific Support in Sequel

Sequel's core database and dataset functions are designed to support the features shared by most common SQL database implementations. However, Sequel's database adapters extend the core support to include support for database-specific features.

By far the most extensive database-specific support in Sequel is for PostgreSQL. This support is roughly broken into the following areas:

* Database Types
* DDL Support
* DML Support
* sequel_pg

Note that while this guide is extensive, it is not exhaustive. There are additional rarely used PostgreSQL features that Sequel supports which are not mentioned here.

== Adapter/Driver Specific Support

Some of this support depends on the specific adapter or underlying driver in use. postgres only will denote support specific to the postgres adapter (i.e. not available when connecting to PostgreSQL via the jdbc, do, or swift adapters). postgres/pg only will denote support specific to the postgres adapter when pg is used as the underlying driver (i.e. not available when using the postgres-pr or postgres drivers).

== PostgreSQL-specific Database Type Support

Sequel's default support on PostgreSQL only includes common database types. However, Sequel ships with support for many PostgreSQL-specific types via extensions. In general, you load these extensions via Database#extension. For example, to load support for arrays, you would do:

  DB.extension :pg_array

The following PostgreSQL-specific type extensions are available:

pg_array :: arrays (single and multidimensional, for any scalar type), as a ruby Array-like object
pg_hstore :: hstore, as a ruby Hash-like object
pg_inet :: inet/cidr, as ruby IPAddr objects
pg_interval :: interval, as ActiveSupport::Duration objects
pg_json :: json, as either ruby Array-like or Hash-like objects
pg_range :: ranges (for any scalar type), as a ruby Range-like object
pg_row :: row-valued/composite types, as a ruby Hash-like or Sequel::Model object

In general, these extensions just add support for Database objects to return retrieved column values as the appropriate type (postgres only), and support for literalizing the objects correctly for use in an SQL string, or using them as bound variable values (postgres/pg only).
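For example, once the pg_array extension is loaded, array values round-trip transparently. A hedged sketch (the posts table and tags column are hypothetical, and retrieval as an array-like object requires the postgres adapter):

  DB.extension :pg_array
  DB[:posts].insert(:tags=>Sequel.pg_array(%w'ruby sequel'))
  DB[:posts].get(:tags) # => array-like object wrapping ["ruby", "sequel"]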
There are also type-specific extensions that make it easy to use database functions and operators related to the type. These extensions are:

pg_array_ops :: array-related functions and operators
pg_hstore_ops :: hstore-related functions and operators
pg_range_ops :: range-related functions and operators
pg_row_ops :: row-valued/composite type syntax support

== PostgreSQL-specific DDL Support

=== Exclusion Constraints

In +create_table+ blocks, you can use the +exclude+ method to set up exclusion constraints:

  DB.create_table(:table) do
    daterange :during
    exclude([[:during, '&&']], :name=>:table_during_excl)
  end
  # CREATE TABLE "table" ("during" daterange,
  #   CONSTRAINT "table_during_excl" EXCLUDE USING gist ("during" WITH &&))

You can also add exclusion constraints in +alter_table+ blocks using add_exclusion_constraint:

  DB.alter_table(:table) do
    add_exclusion_constraint([[:during, '&&']], :name=>:table_during_excl)
  end
  # ALTER TABLE "table" ADD CONSTRAINT "table_during_excl" EXCLUDE USING gist ("during" WITH &&)

=== Adding Foreign Key Constraints Without Initial Validation

You can add a :not_valid=>true option when adding constraints to existing tables so that it doesn't check if all current rows are valid:

  DB.alter_table(:table) do
    # Assumes t_id column already exists
    add_foreign_key([:t_id], :table, :not_valid=>true, :name=>:table_fk)
  end
  # ALTER TABLE "table" ADD CONSTRAINT "table_fk" FOREIGN KEY ("t_id") REFERENCES "table" NOT VALID

Such constraints will be enforced for newly inserted and updated rows, but not for existing rows. After all existing rows have been fixed, you can validate the constraint:

  DB.alter_table(:table) do
    validate_constraint(:table_fk)
  end
  # ALTER TABLE "table" VALIDATE CONSTRAINT "table_fk"

=== Creating Indexes Concurrently

You can create indexes concurrently using the :concurrently=>true option:

  DB.add_index(:table, :t_id, :concurrently=>true)
  # CREATE INDEX CONCURRENTLY "table_t_id_index" ON "table" ("t_id")

Similarly, you can drop indexes concurrently as well:

  DB.drop_index(:table, :t_id, :concurrently=>true)
  # DROP INDEX CONCURRENTLY "table_t_id_index"

=== Specific Conversions When Altering Column Types

When altering a column type, PostgreSQL allows the user to specify how to do the conversion via a USING clause, and Sequel supports this using the :using option:

  DB.alter_table(:table) do
    # Assume unix_time column is stored as an integer, and you want to change it to timestamp
    set_column_type :unix_time, Time, :using=>(Sequel.cast('epoch', Time) + Sequel.cast('1 second', :interval) * :unix_time)
  end
  # ALTER TABLE "table" ALTER COLUMN "unix_time" TYPE timestamp
  #   USING (CAST('epoch' AS timestamp) + (CAST('1 second' AS interval) * "unix_time"))

=== Creating Unlogged Tables

PostgreSQL allows users to create unlogged tables, which are faster but not crash safe.
Sequel allows you to create an unlogged table by specifying the :unlogged=>true option to +create_table+:

  DB.create_table(:table, :unlogged=>true){Integer :i}
  # CREATE UNLOGGED TABLE "table" ("i" integer)

=== Creating/Dropping Schemas, Languages, Functions, and Triggers

Sequel has built-in support for creating and dropping PostgreSQL schemas, procedural languages, functions, and triggers:

  DB.create_schema(:s)
  # CREATE SCHEMA "s"
  DB.drop_schema(:s)
  # DROP SCHEMA "s"

  DB.create_language(:plperl)
  # CREATE LANGUAGE plperl
  DB.drop_language(:plperl)
  # DROP LANGUAGE plperl

  DB.create_function(:set_updated_at, <<-SQL, :language=>:plpgsql, :returns=>:trigger)
    BEGIN
      NEW.updated_at := CURRENT_TIMESTAMP;
      RETURN NEW;
    END;
  SQL
  # CREATE FUNCTION set_updated_at() RETURNS trigger LANGUAGE plpgsql AS '
  #   BEGIN
  #     NEW.updated_at := CURRENT_TIMESTAMP;
  #     RETURN NEW;
  #   END;'
  DB.drop_function(:set_updated_at)
  # DROP FUNCTION set_updated_at()

  DB.create_trigger(:table, :trg_updated_at, :set_updated_at, :events=>[:insert, :update], :each_row=>true)
  # CREATE TRIGGER trg_updated_at BEFORE INSERT OR UPDATE ON "table" FOR EACH ROW EXECUTE PROCEDURE set_updated_at()
  DB.drop_trigger(:table, :trg_updated_at)
  # DROP TRIGGER trg_updated_at ON "table"

== PostgreSQL-specific DML Support

=== Returning Rows From Insert, Update, and Delete Statements

Sequel supports the ability to return rows from insert, update, and delete statements, via Dataset#returning:

  DB[:table].returning.insert
  # INSERT INTO "table" DEFAULT VALUES RETURNING *

  DB[:table].returning(:id).delete
  # DELETE FROM "table" RETURNING "id"

  DB[:table].returning(:id, Sequel.*(:id, :id).as(:idsq)).update(:id=>2)
  # UPDATE "table" SET "id" = 2 RETURNING "id", ("id" * "id") AS "idsq"

When returning is used, instead of returning the number of rows affected (for update/delete) or the serial primary key value (for insert), it will return an array of hashes with the returned results.

=== Distinct On Specific Columns

Sequel allows passing columns to Dataset#distinct, which will make the dataset return rows that are distinct on just those columns:

  DB[:table].distinct(:id).all
  # SELECT DISTINCT ON ("id") * FROM "table"

=== Using a Cursor to Process Large Datasets

postgres only

The postgres adapter offers a Dataset#use_cursor method to process large result sets without keeping all rows in memory:

  DB[:table].use_cursor.each{|row| }
  # BEGIN;
  # DECLARE sequel_cursor NO SCROLL CURSOR WITHOUT HOLD FOR SELECT * FROM "table";
  # FETCH FORWARD 1000 FROM sequel_cursor
  # FETCH FORWARD 1000 FROM sequel_cursor
  # ...
  # FETCH FORWARD 1000 FROM sequel_cursor
  # CLOSE sequel_cursor
  # COMMIT
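As the FETCH statements above suggest, rows are fetched 1000 at a time by default. The fetch size can be tuned with the :rows_per_fetch option; a hedged sketch:

  DB[:table].use_cursor(:rows_per_fetch=>100).each{|row| }
  # FETCH FORWARD 100 FROM sequel_cursor, repeated until the result set is exhausted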
=== Truncate Modifiers

Sequel supports PostgreSQL-specific truncate options:

  DB[:table].truncate(:cascade => true, :only=>true, :restart=>true)
  # TRUNCATE TABLE ONLY "table" RESTART IDENTITY CASCADE

=== COPY Support

postgres/pg and jdbc/postgres only

PostgreSQL's COPY feature is pretty much the fastest way to get data in or out of the database. Sequel supports getting data out of the database via Database#copy_table, either for a specific table or for an arbitrary dataset:

  DB.copy_table(:table, :format=>:csv)
  # COPY "table" TO STDOUT (FORMAT csv)
  DB.copy_table(DB[:table], :format=>:csv)
  # COPY (SELECT * FROM "table") TO STDOUT (FORMAT csv)

It supports putting data into the database via Database#copy_into:

  DB.copy_into(:table, :format=>:csv, :columns=>[:column1, :column2], :data=>"1,2\n2,3\n")
  # COPY "table"("column1", "column2") FROM STDIN (FORMAT csv)

=== Anonymous Function Execution

You can execute anonymous functions using a database procedural language via Database#do (the plpgsql language is the default):

  DB.do <<-SQL
    DECLARE r record;
    BEGIN
      FOR r IN SELECT table_schema, table_name FROM information_schema.tables
        WHERE table_type = 'VIEW' AND table_schema = 'public'
      LOOP
        EXECUTE 'GRANT ALL ON ' || quote_ident(r.table_schema) || '.' || quote_ident(r.table_name) || ' TO webuser';
      END LOOP;
    END;
  SQL

=== Listening On and Notifying Channels

You can use Database#notify to send notification to channels:

  DB.notify(:channel)
  # NOTIFY "channel"

postgres/pg only

You can listen on channels via Database#listen. Note that this blocks until the listening thread is notified:

  DB.listen(:channel)
  # LISTEN "channel"
  # after notification received:
  # UNLISTEN *

Note that +listen+ by default only listens for a single notification. If you want to loop and process notifications:

  DB.listen(:channel, :loop=>true){|channel| p channel}

=== Locking Tables

Sequel makes it easy to lock tables, though it is generally better to let the database handle locking:

  DB[:table].lock('EXCLUSIVE') do
    DB[:table].insert(:id=>DB[:table].max(:id)+1)
  end
  # BEGIN;
  # LOCK TABLE "table" IN EXCLUSIVE MODE;
  # SELECT max("id") FROM "table" LIMIT 1;
  # INSERT INTO "table" ("id") VALUES (2) RETURNING NULL;
  # COMMIT;

== sequel_pg (postgres/pg only)

When the postgres adapter is used with the pg driver, Sequel automatically checks for sequel_pg, and loads it if it is available. sequel_pg is a C extension that optimizes the fetching of rows, generally resulting in a 2-6x speedup. It is highly recommended to install sequel_pg if you are using the postgres adapter with pg.

sequel_pg has additional optimizations when using the Dataset +map+, +to_hash+, +to_hash_groups+, +select_hash+, +select_hash_groups+, +select_map+, and +select_order_map+ methods, which avoid creating intermediate hashes and can add further speedups.

In addition to optimization, sequel_pg also adds streaming support if used on PostgreSQL 9.2. Streaming support is similar to using a cursor, but it is faster and more transparent.

You can enable the streaming support:

  DB.extension(:pg_streaming)

Then you can stream individual datasets:

  DB[:table].stream.each{|row| }

Or stream all datasets by default:

  DB.stream_all_queries = true

= Prepared Statements and Bound Variables

Sequel has support for prepared statements and bound variables. No matter which database you are using, the Sequel prepared statement/bound variable API remains the same.
There is native support for prepared statements/bound variables on the following adapters:

* ibmdb (prepared statements only)
* jdbc
* mysql (prepared statements only)
* mysql2 (prepared statements only)
* oracle (requires type specifiers for nil/NULL values)
* postgres (when using the pg driver)
* sqlite
* tinytds

Support on other adapters is emulated via string interpolation.

You can use the prepared_statements model plugin to automatically use prepared statements for some common model actions such as saving or deleting a model instance, or looking up a model based on a primary key.

== Placeholders

Generally, when using prepared statements (and certainly when using bound variables), you need to put placeholders in your SQL to indicate where you want your bound arguments to appear. Database support and syntax vary significantly for placeholders (e.g. :name, $1, ?). Sequel abstracts all of that and allows you to specify placeholders by using the :$name format for placeholders, e.g.:

  ds = DB[:items].where(:name=>:$n)

You can use these placeholders in most places where you can use the value directly. For example, if you want to use placeholders while also using raw SQL, you can do:

  ds = DB["SELECT * FROM items WHERE name = ?", :$n]

== Bound Variables

Using bound variables for this query is simple:

  ds.call(:select, :n=>'Jim')

This will do the equivalent of selecting records that have the name 'Jim'. It returns all records, and can take a block that is passed to Dataset#all.

Deleting or returning the first record works similarly:

  ds.call(:first, :n=>'Jim')  # First record with name 'Jim'
  ds.call(:delete, :n=>'Jim') # Delete records with name 'Jim'

For inserting/updating records, you should also specify a value hash, which may itself contain placeholders:

  # Insert record with 'Jim', note that the previous filter is ignored
  ds.call(:insert, {:n=>'Jim'}, :name=>:$n)

  # Change name to 'Bob' for all records with name of 'Jim'
  ds.call(:update, {:n=>'Jim', :new_n=>'Bob'}, :name=>:$new_n)

== Prepared Statements

Prepared statement support is similar to bound variable support, but you use Dataset#prepare with a name, and Dataset#call or Database#call later with the values:

  ds = DB[:items].filter(:name=>:$n)
  ps = ds.prepare(:select, :select_by_name)
  ps.call(:n=>'Jim')
  DB.call(:select_by_name, :n=>'Jim') # same as above

The Dataset#prepare method returns a prepared statement, and also stores a copy of the prepared statement in the database for later use. For insert and update queries, the hash to insert/update is passed to +prepare+:

  ps1 = DB[:items].prepare(:insert, :insert_with_name, :name=>:$n)
  ps1.call(:n=>'Jim')
  DB.call(:insert_with_name, :n=>'Jim') # same as above

  ds = DB[:items].filter(:name=>:$n)
  ps2 = ds.prepare(:update, :update_name, :name=>:$new_n)
  ps2.call(:n=>'Jim', :new_n=>'Bob')
  DB.call(:update_name, :n=>'Jim', :new_n=>'Bob') # same as above

== Implementation Issues

Currently, creating a prepared statement uses Object#extend, which can hurt performance. For high performance applications, it's recommended to create all of your prepared statements upon application initialization, and not to create prepared statements dynamically at runtime.

== Database support

=== PostgreSQL

If you are using the ruby-postgres or postgres-pr driver, PostgreSQL uses the default emulated support. If you are using ruby-pg, there is native support for both prepared statements and bound variables. Prepared statements are always server side.

=== SQLite

SQLite supports both prepared statements and bound variables.
=== MySQL/Mysql2

The MySQL/Mysql2 ruby drivers do not support bound variables, so the bound variable methods fall back to string interpolation. Prepared statements are executed server side.

=== JDBC

JDBC supports both prepared statements and bound variables. Whether these are server side or client side depends on the JDBC driver. For PostgreSQL over JDBC, you can add the prepareThreshold=N parameter to the connection string, which will use a server side prepared statement after N calls to the prepared statement.

=== TinyTDS

Uses the sp_executesql stored procedure with bound variables, since Microsoft SQL Server doesn't support true prepared statements.

=== IBM_DB

DB2 supports both prepared statements and bound variables.

=== Oracle

Oracle supports both prepared statements and bound variables. Prepared statements (OCI8::Cursor objects) are cached per connection. If you ever plan to use a nil/NULL value as a bound variable/prepared statement value, you must specify the type in the placeholder using a __* suffix. You can use any of the schema types that Sequel supports, such as :$name__string or :$num__integer. Using blobs as bound variables is not currently supported.

=== All Others

Support is emulated using interpolation.

= Querying in Sequel

This guide is based on http://guides.rubyonrails.org/active_record_querying.html

== Purpose of this Guide

Sequel is a simple to use, very flexible, and powerful database library that supports a wide variety of different querying methods. This guide aims to be a gentle introduction to Sequel's querying support.

While you can easily use raw SQL with Sequel, a large part of the advantage you get from using Sequel is Sequel's ability to abstract SQL from you and give you a much nicer interface. Sequel also ships with a {core_extensions extension}[link:files/doc/core_extensions_rdoc.html], which better integrates Sequel's DSL into the ruby language.

== Retrieving Objects

Sequel provides a few separate methods for retrieving objects from the database. The underlying method is Sequel::Dataset#each, which yields each row as the Sequel::Database provides it. However, while Dataset#each can and often is used directly, in many cases there is a more convenient retrieval method you can use.

=== Sequel::Dataset

If you are new to Sequel and aren't familiar with Sequel, you should probably read the {"Dataset Basics" guide}[link:files/doc/dataset_basics_rdoc.html], then come back here.

=== Retrieving a Single Object

Sequel offers quite a few ways to retrieve a single object.

==== Using a Primary Key [Sequel::Model]

The Sequel::Model.[] is the easiest method to use to find a model instance by its primary key value:

  # Find artist with primary key (id) 1
  artist = Artist[1]
  # SELECT * FROM artists WHERE id = 1
  => #<Artist @values={:name=>"YJM", :id=>1}>

If there is no record with the given primary key, nil will be returned.
If you want to raise an exception if no record is found, you can use Sequel::Model.with_pk!:

  artist = Artist.with_pk!(1)

==== Using +first+

If you just want the first record in the dataset, Sequel::Dataset#first is probably the most obvious method to use:

  artist = Artist.first
  # SELECT * FROM artists LIMIT 1
  => #<Artist @values={:name=>"YJM", :id=>1}>

Any options you pass to +first+ will be used as a filter:

  artist = Artist.first(:name => 'YJM')
  # SELECT * FROM artists WHERE (name = 'YJM') LIMIT 1
  => #<Artist @values={:name=>"YJM", :id=>1}>

  artist = Artist.first(Sequel.like(:name, 'Y%'))
  # SELECT * FROM artists WHERE (name LIKE 'Y%') LIMIT 1
  => #<Artist @values={:name=>"YJM", :id=>1}>

If there is no matching row, +first+ will return nil. If you want to raise an exception instead, use first!.

Sequel::Dataset#[] is basically an alias for +first+, except it requires an argument:

  DB[:artists][:name => 'YJM']
  # SELECT * FROM artists WHERE (name = 'YJM') LIMIT 1
  => {:name=>"YJM", :id=>1}

Note that while Model.[] allows you to pass a primary key directly, Dataset#[] does not (unless it is a model dataset).

==== Using +last+

If you want the last record in the dataset, Sequel::Dataset#last is an obvious method to use. Note that last requires that the dataset be ordered, unless the dataset is a model dataset. For a model dataset, +last+ will do a reverse order by the primary key field:

  artist = Artist.last
  # SELECT * FROM artists ORDER BY id DESC LIMIT 1
  => #<Artist @values={:name=>"YJM", :id=>1}>

Note that what +last+ does is reverse the order of the dataset and then call +first+. This is why +last+ raises a Sequel::Error if there is no order on a plain dataset, because otherwise it would provide the same record as +first+, and most users would find that confusing.

Note that +last+ is not necessarily going to give you the last record in the dataset unless you give the dataset an unambiguous order.

==== Retrieving a Single Column Value

Sometimes, instead of wanting an entire row, you only want the value of a specific column. For this Sequel::Dataset#get is the method you want:

  artist_name = Artist.get(:name)
  # SELECT name FROM artists LIMIT 1
  => "YJM"

=== Retrieving Multiple Objects

==== As an Array of Hashes or Model Objects

In many cases, you want an array of all of the rows associated with the dataset, in which case Sequel::Dataset#all is the method you want to use:

  artists = Artist.all
  # SELECT * FROM artists
  => [#<Artist @values={:name=>"YJM", :id=>1}>,
      #<Artist @values={:name=>"AS", :id=>2}>]

==== Using an Enumerable Interface

Sequel::Dataset uses an Enumerable Interface, so it provides a method named each that yields hashes or model objects as they are retrieved from the database:

  Artist.each{|x| p x.name}
  # SELECT * FROM artists
  "YJM"
  "AS"

This means that all of the methods in the Enumerable module are available, such as +map+:

  artist_names = Artist.map{|x| x.name}
  # SELECT * FROM artists
  => ["YJM", "AS"]

==== As an Array of Column Values

Sequel also has an extended +map+ method that takes an argument. If you provide an argument to +map+, it will return an array of values for the given column. So the previous example can be handled more easily with:

  artist_names = Artist.map(:name)
  # SELECT * FROM artists
  => ["YJM", "AS"]

One difference between these two ways of returning an array of values is that providing +map+ with an argument is really doing:

  artist_names = Artist.map{|x| x[:name]} # not x.name

Note that regardless of whether you provide +map+ with an argument, it does not modify the columns selected.
If you only want to select a single column and return an array of the columns values, you can use +select_map+:

  artist_names = Artist.select_map(:name)
  # SELECT name FROM artists
  => ["YJM", "AS"]

It's also common to want to order such a map, so Sequel provides a +select_order_map+ method as well:

  artist_names = Artist.select_order_map(:name)
  # SELECT name FROM artists ORDER BY name
  => ["AS", "YJM"]

In all of these cases, you can provide an array of column symbols and an array of arrays of values will be returned:

  artist_names = Artist.select_map([:id, :name])
  # SELECT id, name FROM artists
  => [[1, "YJM"], [2, "AS"]]

==== As a Hash

Sequel makes it easy to take an SQL query and return it as a ruby hash, using the +to_hash+ method:

  artist_names = Artist.to_hash(:id, :name)
  # SELECT * FROM artists
  => {1=>"YJM", 2=>"AS"}

As you can see, the +to_hash+ method uses the first symbol as the key and the second symbol as the value. So if you swap the two arguments the hash will have its keys and values transposed:

  artist_names = Artist.to_hash(:name, :id)
  # SELECT * FROM artists
  => {"YJM"=>1, "AS"=>2}

Now what if you have multiple values for the same key? By default, +to_hash+ will just have the last matching value. If you care about all matching values, use +to_hash_groups+, which makes the values of the array an array of matching values, in the order they were received:

  artist_names = Artist.to_hash_groups(:name, :id)
  # SELECT * FROM artists
  => {"YJM"=>[1, 10, ...], "AS"=>[2, 20, ...]}

If you only provide one argument to +to_hash+, it uses the entire hash or model object as the value:

  artist_names = DB[:artists].to_hash(:name)
  # SELECT * FROM artists
  => {"YJM"=>{:id=>1, :name=>"YJM"}, "AS"=>{:id=>2, :name=>"AS"}}

and +to_hash_groups+ works similarly:

  artist_names = DB[:artists].to_hash_groups(:name)
  # SELECT * FROM artists
  => {"YJM"=>[{:id=>1, :name=>"YJM"}, {:id=>10, :name=>"YJM"}], ...}

Model datasets have a +to_hash+ method that can be called without any arguments, in which case it will use the primary key as the key and the model object as the value. This can be used to easily create an identity map:

  artist_names = Artist.to_hash
  # SELECT * FROM artists
  => {1=>#<Artist @values={:id=>1, :name=>"YJM"}>,
      2=>#<Artist @values={:id=>2, :name=>"AS"}>}

There is no equivalent handling to +to_hash_groups+, since there would only be one matching record, as the primary key must be unique.

Note that +to_hash+ never modifies the columns selected. However, just like Sequel has a +select_map+ method to modify the columns selected and return an array, Sequel also has a +select_hash+ method to modify the columns selected and return a hash:

  artist_names = Artist.select_hash(:name, :id)
  # SELECT name, id FROM artists
  => {"YJM"=>1, "AS"=>2}

Likewise, +select_hash_groups+ also exists:

  artist_names = Artist.select_hash_groups(:name, :id)
  # SELECT name, id FROM artists
  => {"YJM"=>[1, 10, ...], "AS"=>[2, 20, ...]}
== Modifying datasets

Note that the retrieval methods discussed above just return the row(s) included in the existing dataset. In most cases, you aren't interested in every row in a table, but in a subset of the rows, based on some criteria. In Sequel, filtering the dataset is generally done separately from retrieving the records.

There are really two types of dataset methods that you will be using:

1. Methods that return row(s), discussed above
2. Methods that return modified datasets, discussed below

Sequel uses a method chaining, functional style API to modify datasets. Let's start with a simple example. This is a basic dataset that includes all records in the table +artists+:

  ds1 = DB[:artists]
  # SELECT * FROM artists

Let's say we are only interested in the artists whose names start with "A":

  ds2 = ds1.where(Sequel.like(:name, 'A%'))
  # SELECT * FROM artists WHERE name LIKE 'A%'

Here we see that +where+ returns a dataset that adds a +WHERE+ clause to the query. It's important to note that +where+ does not modify the receiver:

  ds1
  # SELECT * FROM artists
  ds2
  # SELECT * FROM artists WHERE name LIKE 'A%'

In Sequel, most dataset methods that you will be using will not modify the dataset itself, so you can freely use the dataset in multiple places without worrying that its usage in one place will affect its usage in another place. This is what is meant by a functional style API.

Let's say we only want to select the id and name columns, and that we want to order by name:

  ds3 = ds2.order(:name).select(:id, :name)
  # SELECT id, name FROM artists WHERE name LIKE 'A%' ORDER BY name

Note how you don't need to assign the returned value of order to a variable, and then call select on that. Because order just returns a dataset, you can call select directly on the returned dataset. This is what is meant by a method chaining API.

Also note how you can call methods that modify different clauses in any order. In this case, the WHERE clause was added first, then the ORDER clause, then the SELECT clause was modified. This makes for a flexible API, where you can modify any part of the query at any time.

== Filters

Filtering is probably the most common dataset modifying action done in Sequel. Both the +where+ and +filter+ methods filter the dataset by modifying the dataset's WHERE clause. Both accept a wide variety of input formats, discussed below.

=== Hashes

The most common format for providing filters is via a hash. In general, Sequel treats conditions specified with a hash as equality or inclusion. What type of condition is used depends on the values in the hash.

Unless Sequel has special support for the value's class, it uses a simple equality statement:

  Artist.where(:id=>1)
  # SELECT * FROM artists WHERE id = 1
  Artist.where(:name=>'YJM')
  # SELECT * FROM artists WHERE name = 'YJM'

For arrays, Sequel uses the IN operator:

  Artist.where(:id=>[1, 2])
  # SELECT * FROM artists WHERE id IN (1, 2)

For datasets, Sequel uses the IN operator with a subselect:

  Artist.where(:id=>Album.select(:artist_id))
  # SELECT * FROM artists WHERE id IN (
  #   SELECT artist_id FROM albums)

For boolean values such as nil, true, and false, Sequel uses the IS operator:

  Artist.where(:id=>nil)
  # SELECT * FROM artists WHERE id IS NULL

For ranges, Sequel uses a pair of inequality statements:

  Artist.where(:id=>1..5)
  # SELECT * FROM artists WHERE id >= 1 AND id <= 5

Finally, for regexps, Sequel uses an SQL regular expression. Note that this is probably only supported on PostgreSQL and MySQL currently:

  Artist.where(:name=>/JM$/)
  # SELECT * FROM artists WHERE name ~ 'JM$'

If there are multiple arguments in the hash, the filters are ANDed together:

  Artist.where(:id=>1, :name=>/JM$/)
  # SELECT * FROM artists WHERE id = 1 AND name ~ 'JM$'

This works the same as if you used two separate +where+ calls:

  Artist.where(:id=>1).where(:name=>/JM$/)
  # SELECT * FROM artists WHERE id = 1 AND name ~ 'JM$'
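Because each hash entry is just another ANDed condition, the different value classes can be freely mixed in one hash. A small sketch (hypothetical data):

  Artist.where(:id=>1..100, :name=>/JM$/)
  # SELECT * FROM artists WHERE id >= 1 AND id <= 100 AND name ~ 'JM$'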
=== Array of Two Element Arrays

If you use an array of two element arrays, it is treated as a hash. The only advantage to using an array of two element arrays is that it allows you to duplicate keys, so you can do:

  Artist.where([[:name, /JM$/], [:name, /^YJ/]])
  # SELECT * FROM artists WHERE name ~ 'JM$' AND name ~ '^YJ'

=== Virtual Row Blocks

If a block is passed to a filter, it is treated as a virtual row block:

  Artist.where{id > 5}
  # SELECT * FROM artists WHERE id > 5

You can learn more about virtual row blocks in the {"Virtual Rows" guide}[link:files/doc/virtual_rows_rdoc.html].

You can provide both regular arguments and a block, in which case the results will be ANDed together:

  Artist.where(:name=>'A'...'M'){id > 5}
  # SELECT * FROM artists WHERE name >= 'A' AND name < 'M' AND id > 5

=== Symbols

If you have a boolean column in the database, and you want only true values, you can just provide the column symbol to the filter method:

  Artist.where(:retired)
  # SELECT * FROM artists WHERE retired

=== SQL::Expression

Sequel has a DSL that allows you to easily create SQL expressions. These SQL expressions are instances of subclasses of Sequel::SQL::Expression. You've already seen an example earlier:

  Artist.where(Sequel.like(:name, 'Y%'))
  # SELECT * FROM artists WHERE name LIKE 'Y%'

In this case Sequel.like returns a Sequel::SQL::BooleanExpression object, which is used directly in the filter.

You can use the DSL to create arbitrarily complex expressions. SQL::Expression objects can be created via singleton methods on the Sequel module. The most common method is Sequel.expr, which takes any object and wraps it in a SQL::Expression object. In most cases, the SQL::Expression returned supports the & operator for +AND+, the | operator for +OR+, and the ~ operator for inversion:

  Artist.where(Sequel.like(:name, 'Y%') & (Sequel.expr(:b=>1) | Sequel.~(:c=>3)))
  # SELECT * FROM artists WHERE name LIKE 'Y%' AND (b = 1 OR c != 3)

You can combine these expression operators with the virtual row support:

  Artist.where{(a > 1) & ~((b(c) < 1) | d)}
  # SELECT * FROM artists WHERE a > 1 AND b(c) >= 1 AND NOT d

Note the use of parentheses when using the & and | operators: in ruby they bind more tightly than the comparison operators, so the following will not work:

  Artist.where{a > 1 & ~(b(c) < 1 | d)}
  # Raises a TypeError, as it calls Integer#| with a Sequel::SQL::Identifier

=== Strings with Placeholders

Assuming you want to get your hands dirty and write some SQL, Sequel allows you to use strings with placeholders for the values:

  Artist.where("name LIKE ?", 'Y%')
  # SELECT * FROM artists WHERE name LIKE 'Y%'

This is the most common type of placeholder, where each question mark is substituted with the next argument:

  Artist.where("name LIKE ? AND id = ?", 'Y%', 5)
  # SELECT * FROM artists WHERE name LIKE 'Y%' AND id = 5

You can also use named placeholders with a hash, where the named placeholders use colons before the placeholder names:

  Artist.where("name LIKE :name AND id = :id", :name=>'Y%', :id=>5)
  # SELECT * FROM artists WHERE name LIKE 'Y%' AND id = 5

You don't have to provide any placeholders if you don't want to:

  Artist.where("id = 2")
  # SELECT * FROM artists WHERE id = 2

However, if you are using any untrusted input, you should definitely be using placeholders. In general, unless you are hardcoding values in the strings, you should use placeholders. You should never pass a string that has been built using interpolation, unless you are sure of what you are doing:

  Artist.where("id = #{params[:id]}") # Don't do this!
  Artist.where("id = ?", params[:id]) # Do this instead
  Artist.where(:id=>params[:id].to_i) # Even better

=== Inverting

You may be wondering how to specify a not equals condition in Sequel, or the NOT IN operator. Sequel has generic support for inverting conditions, so to write a not equals condition, you write an equals condition, and invert it:

  Artist.where(:id=>5).invert
  # SELECT * FROM artists WHERE id != 5

Note that +invert+ inverts the entire filter:

  Artist.where(:id=>5).where{name > 'A'}.invert
  # SELECT * FROM artists WHERE id != 5 OR name <= 'A'

In general, +invert+ is used rarely, since +exclude+ allows you to invert only specific filters:

  Artist.exclude(:id=>5)
  # SELECT * FROM artists WHERE id != 5

  Artist.where(:id=>5).exclude{name > 'A'}
  # SELECT * FROM artists WHERE id = 5 AND name <= 'A'

So to do a NOT IN with an array:

  Artist.exclude(:id=>[1, 2])
  # SELECT * FROM artists WHERE id NOT IN (1, 2)

Or to use the NOT LIKE operator:

  Artist.exclude(Sequel.like(:name, '%J%'))
  # SELECT * FROM artists WHERE name NOT LIKE '%J%'

=== Removing

To remove all existing filters, use +unfiltered+:

  Artist.where(:id=>1).unfiltered
  # SELECT * FROM artists

== Ordering

Sequel offers quite a few methods to manipulate the SQL ORDER BY clause. The most basic of these is +order+:

  Artist.order(:id)
  # SELECT * FROM artists ORDER BY id

You can specify multiple arguments to order by more than one column:

  Album.order(:artist_id, :id)
  # SELECT * FROM albums ORDER BY artist_id, id

Note that unlike +where+, +order+ replaces an existing order; it does not append to an existing order:

  Artist.order(:id).order(:name)
  # SELECT * FROM artists ORDER BY name

If you want to add a column to the end of the existing order:

  Artist.order(:id).order_append(:name)
  # SELECT * FROM artists ORDER BY id, name

If you want to add a column to the beginning of the existing order:

  Artist.order(:id).order_prepend(:name)
  # SELECT * FROM artists ORDER BY name, id

=== Reversing

Just like you can invert an existing filter, you can reverse an existing order, using +reverse+:

  Artist.order(:id).reverse
  # SELECT * FROM artists ORDER BY id DESC

As you might expect, +reverse+ is not used all that much. In general, Sequel.desc is used more commonly to specify a descending order for columns:

  Artist.order(Sequel.desc(:id))
  # SELECT * FROM artists ORDER BY id DESC

This allows you to easily use both ascending and descending orders:

  Artist.order(:name, Sequel.desc(:id))
  # SELECT * FROM artists ORDER BY name, id DESC

=== Removing

Just like you can remove filters with +unfiltered+, you can remove orders with +unordered+:

  Artist.order(:name).unordered
  # SELECT * FROM artists

== Selected Columns

Sequel offers a few methods to manipulate the columns selected. As you may be able to guess, the main method used is +select+:

  Artist.select(:id, :name)
  # SELECT id, name FROM artists

You just specify all of the columns that you are selecting as arguments to the method.

If you are dealing with model objects, you'll want to include the primary key if you want to update or destroy the object. You'll also want to include any keys (primary or foreign) related to associations you plan to use.

If a column is not selected, and you attempt to access it, you will get nil:

  artist = Artist.select(:name).first
  # SELECT name FROM artists LIMIT 1

  artist[:id]
  # => nil
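As a quick sketch of why including the primary key matters (same artists table as above): without it, the instance has no way to identify its row, so there is nothing to put in the WHERE clause of an UPDATE or DELETE. With it, updates work as expected:

  artist = Artist.select(:id, :name).first
  # SELECT id, name FROM artists LIMIT 1

  artist.update(:name=>'AS2')
  # UPDATE artists SET name = 'AS2' WHERE id = 1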
Like +order+, +select+ replaces the existing selected columns:

  Artist.select(:id).select(:name)
  # SELECT name FROM artists

To add to the existing selected columns, use +select_append+:

  Artist.select(:id).select_append(:name)
  # SELECT id, name FROM artists

To remove specifically selected columns, and default back to all columns, use +select_all+:

  Artist.select(:id).select_all
  # SELECT * FROM artists

To select all columns from a given table, provide an argument to +select_all+:

  Artist.select_all(:artists)
  # SELECT artists.* FROM artists

=== Distinct

To treat duplicate rows as a single row when retrieving the records, use +distinct+:

  Artist.distinct.select(:name)
  # SELECT DISTINCT name FROM artists

Note that DISTINCT is a separate SQL clause; it's not a function that you pass to select.

== Limit and Offset

You can limit the dataset to a given number of rows using +limit+:

  Artist.limit(5)
  # SELECT * FROM artists LIMIT 5

You can provide a second argument to +limit+ to specify an offset:

  Artist.limit(5, 10)
  # SELECT * FROM artists LIMIT 5 OFFSET 10

This would return the 11th through 15th records in the original dataset.

To remove a limit from a dataset, use +unlimited+:

  Artist.limit(5, 10).unlimited
  # SELECT * FROM artists

== Grouping

The SQL GROUP BY clause is used to combine multiple rows based on the values of a given group of columns. To modify the GROUP BY clause of the SQL statement, you use +group+:

  Album.group(:artist_id)
  # SELECT * FROM albums GROUP BY artist_id

You can remove an existing grouping using +ungrouped+:

  Album.group(:artist_id).ungrouped
  # SELECT * FROM albums

A common use of grouping is to count based on the number of grouped rows, and Sequel provides a +group_and_count+ method to make this easier:

  Album.group_and_count(:artist_id)
  # SELECT artist_id, count(*) AS count FROM albums GROUP BY artist_id

This will return the number of albums for each artist_id.

If you want to select and group on the same columns, you can use +select_group+:

  Album.select_group(:artist_id)
  # SELECT artist_id FROM albums GROUP BY artist_id

Usually you would add a +select_append+ call after that, to add some sort of aggregation:

  Album.select_group(:artist_id).select_append{sum(num_tracks).as(tracks)}
  # SELECT artist_id, sum(num_tracks) AS tracks FROM albums GROUP BY artist_id

== Having

The SQL HAVING clause is similar to the WHERE clause, except that it filters the results after the grouping has been applied, instead of before. One possible use is if you only wanted to return artists who had at least 10 albums:

  Album.group_and_count(:artist_id).having{count(:*){} >= 10}
  # SELECT artist_id, count(*) AS count FROM albums
  # GROUP BY artist_id HAVING count(*) >= 10

Both the WHERE clause and the HAVING clause are removed by +unfiltered+:

  Album.group_and_count(:artist_id).having{count(:*){} >= 10}.
    where(Sequel.like(:name, 'A%')).unfiltered
  # SELECT artist_id, count(*) AS count FROM albums GROUP BY artist_id

== Joins

Sequel makes it very easy to join a dataset to another table or dataset. The underlying method used is +join_table+:

  Album.join_table(:inner, :artists, :id=>:artist_id)
  # SELECT * FROM albums
  # INNER JOIN artists ON artists.id = albums.artist_id
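The first argument picks the join type, so other joins can be written the same way. For example, this sketch produces the same query as the +left_join+ shortcut shown below:

  Album.join_table(:left_outer, :artists, :id=>:artist_id)
  # SELECT * FROM albums
  # LEFT OUTER JOIN artists ON artists.id = albums.artist_id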
In most cases, you won't call +join_table+ directly, as Sequel provides shortcuts for all common (and most uncommon) join types. For example, +join+ does an inner join:

  Album.join(:artists, :id=>:artist_id)
  # SELECT * FROM albums
  # INNER JOIN artists ON artists.id = albums.artist_id

And +left_join+ does a LEFT JOIN:

  Album.left_join(:artists, :id=>:artist_id)
  # SELECT * FROM albums
  # LEFT JOIN artists ON artists.id = albums.artist_id

=== Table/Dataset to Join

For all of these specialized join methods, the first argument is generally the name of the table to which you are joining. However, you can also provide a model class:

  Album.join(Artist, :id=>:artist_id)

Or a dataset, in which case a subselect is used:

  Album.join(Artist.where{name < 'A'}, :id=>:artist_id)
  # SELECT * FROM albums
  # INNER JOIN (SELECT * FROM artists WHERE (name < 'A')) AS t1
  # ON (t1.id = albums.artist_id)

=== Join Conditions

The second argument to the specialized join methods is the conditions to use when joining, which is similar to a filter expression, with a few minor exceptions.

==== Implicit Qualification

A hash used as the join conditions operates similarly to a filter, except that unqualified symbol keys are automatically qualified with the table from the first argument, and unqualified symbol values are automatically qualified with the first table or the last table joined. This implicit qualification is one of the reasons that joins in Sequel are easy to specify:

  Album.join(:artists, :id=>:artist_id)
  # SELECT * FROM albums
  # INNER JOIN artists ON artists.id = albums.artist_id

Note how the :id symbol is automatically qualified with +artists+, while the :artist_id symbol is automatically qualified with +albums+.

Because Sequel uses the last joined table for implicit qualifications of values, you can do things like:

  Album.join(:artists, :id=>:artist_id).
    join(:members, :artist_id=>:id)
  # SELECT * FROM albums
  # INNER JOIN artists ON artists.id = albums.artist_id
  # INNER JOIN members ON members.artist_id = artists.id

Note that when joining to the +members+ table, +artist_id+ is qualified with +members+ and +id+ is qualified with +artists+.

While a good default, implicit qualification is not always correct:

  Album.join(:artists, :id=>:artist_id).
    join(:tracks, :album_id=>:id)
  # SELECT * FROM albums
  # INNER JOIN artists ON artists.id = albums.artist_id
  # INNER JOIN tracks ON tracks.album_id = artists.id

Note here how +id+ is qualified with +artists+ instead of +albums+. This is wrong, as the foreign key tracks.album_id refers to albums.id, not artists.id. To fix this, you need to explicitly qualify when joining:

  Album.join(:artists, :id=>:artist_id).
    join(:tracks, :album_id=>:albums__id)
  # SELECT * FROM albums
  # INNER JOIN artists ON artists.id = albums.artist_id
  # INNER JOIN tracks ON tracks.album_id = albums.id

Just like in filters, an array of two element arrays is treated the same as a hash, but allows for duplicate keys:

  Album.join(:artists, [[:id, :artist_id], [:id, 1..5]])
  # SELECT * FROM albums INNER JOIN artists
  # ON artists.id = albums.artist_id
  # AND artists.id >= 1 AND artists.id <= 5

And just like in the hash case, unqualified symbol elements in the array are implicitly qualified.

By default, Sequel only qualifies unqualified symbols in the conditions. However, you can provide an options hash with a :qualify=>:deep option to do a deep qualification, which can qualify subexpressions.
For example, let's say you are doing a JOIN using case insensitive string comparison:

  Album.join(:artists, {Sequel.function(:lower, :name) =>
                        Sequel.function(:lower, :artist_name)},
             :qualify => :deep)
  # SELECT * FROM albums INNER JOIN artists
  # ON (lower(artists.name) = lower(albums.artist_name))

Note how the arguments to lower were qualified correctly in both cases.

Starting in Sequel 4, the :qualify=>:deep option is going to become the default.

==== USING Joins

The most common type of join condition is JOIN ON, as displayed above. However, the SQL standard allows for join conditions to be specified with JOIN USING, which Sequel makes easy to use.

JOIN USING is useful when the columns you are using have the same names in both tables. For example, if instead of having a primary key column named +id+ in all of your tables, you use +artist_id+ in your +artists+ table and +album_id+ in your +albums+ table, you could do:

  Album.join(:artists, [:artist_id])
  # SELECT * FROM albums INNER JOIN artists USING (artist_id)

See here how you specify the USING columns as an array of symbols.

==== NATURAL Joins

NATURAL Joins take it one step further than USING joins, by assuming that all columns with the same names in both tables should be used for joining:

  Album.natural_join(:artists)
  # SELECT * FROM albums NATURAL JOIN artists

In this case, you don't even need to specify any conditions.

==== Join Blocks

You can provide a block to any of the join methods that accept conditions. This block should accept 3 arguments: the table alias for the table currently being joined, the table alias for the last table joined (or first table), and an array of previous Sequel::SQL::JoinClauses.

This allows you to qualify columns similar to how the implicit qualification works, without worrying about the specific aliases being used. For example, let's say you wanted to join the albums and artists tables, but only want albums where the artist's name comes before the album's name:

  Album.join(:artists, :id=>:artist_id) do |j, lj, js|
    Sequel.qualify(j, :name) < Sequel.qualify(lj, :name)
  end
  # SELECT * FROM albums INNER JOIN artists
  # ON artists.id = albums.artist_id
  # AND artists.name < albums.name

Because an inequality rather than an equality is needed here, it can't be expressed with a plain hash, so you use a block and qualify the tables manually.

== From

In general, the FROM table is the first clause populated when creating a dataset. For a standard Sequel::Model, the dataset already has the FROM clause populated, and the most common way to create datasets is with the Database#[] method, which populates the FROM clause. However, you can modify the tables you are selecting FROM using +from+:

  Album.from(:albums, :old_albums)
  # SELECT * FROM albums, old_albums

Be careful with this, as multiple tables in the FROM clause use a cross join by default, so the number of rows will be the number of albums times the number of old albums.

Using multiple FROM tables and setting conditions in the WHERE clause is an old-school way of joining tables:

  DB.from(:albums, :artists).where(:artists__id=>:albums__artist_id)
  # SELECT * FROM albums, artists WHERE artists.id = albums.artist_id
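For comparison, this sketch shows the same query written with an explicit join, which is generally clearer:

  DB[:albums].join(:artists, :id=>:artist_id)
  # SELECT * FROM albums
  # INNER JOIN artists ON artists.id = albums.artist_id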
=== Using the current dataset in a subselect

In some cases, you may want to wrap the current dataset in a subselect. Here's an example using +from_self+:

  Album.order(:artist_id).limit(100).from_self.group(:artist_id)
  # SELECT * FROM (SELECT * FROM albums ORDER BY artist_id LIMIT 100)
  # AS t1 GROUP BY artist_id

This is slightly different from the same query without +from_self+:

  Album.order(:artist_id).limit(100).group(:artist_id)
  # SELECT * FROM albums GROUP BY artist_id ORDER BY artist_id LIMIT 100

Without +from_self+, you are doing the grouping, and limiting the number of grouped records returned to 100. So assuming you have albums by more than 100 artists, you'll end up with 100 results.

With +from_self+, you are limiting the number of records before grouping. So if the artist with the lowest id had 100 albums, you'd get 1 result, not 100.

== Locking for Update

Sequel allows you to easily add a FOR UPDATE clause to your queries so that the records returned can't be modified by another query until the current transaction commits. You just use the +for_update+ dataset method when returning the rows:

  DB.transaction do
    album = Album.for_update.first(:id=>1)
    # SELECT * FROM albums WHERE id = 1 FOR UPDATE
    album.num_tracks += 1
    album.save
  end

This will ensure that no other connection modifies the row between when you select it and when the transaction ends.

=== Optimistic Locking

One of the model plugins that ships with Sequel is an optimistic locking plugin, which provides a database independent way to detect and raise an error if two different connections modify the same row. It's useful for things like web forms where you cannot keep a transaction open while the user is looking at the form, because of the web's stateless nature.

== Custom SQL

Sequel makes it easy to use custom SQL by providing it to the Database#[] method as a string:

  DB["SELECT * FROM artists"]
  # SELECT * FROM artists

You can also use the +with_sql+ dataset method to return a dataset that uses that exact SQL:

  DB[:albums].with_sql("SELECT * FROM artists")
  # SELECT * FROM artists

With either of these methods, you can use placeholders:

  DB["SELECT * FROM artists WHERE id = ?", 5]
  # SELECT * FROM artists WHERE id = 5

  DB[:albums].with_sql("SELECT * FROM artists WHERE id = :id", :id=>5)
  # SELECT * FROM artists WHERE id = 5

Note that if you specify the dataset using custom SQL, you can still call the dataset modification methods, but in many cases they will appear to have no effect:

  DB["SELECT * FROM artists"].select(:name).order(:id)
  # SELECT * FROM artists

If you must drop down to using custom SQL, it's recommended that you only do so for specific parts of a query. For example, if the reason you are using custom SQL is to use a custom operator in the database in the SELECT clause:

  DB["SELECT name, (foo !@# ?) AS baz FROM artists", 'bar']

it's better to use Sequel's DSL, and use a literal string for the custom operator:

  DB[:artists].select(:name, Sequel.lit("(foo !@# ?)", 'bar').as(:baz))

That way Sequel's method chaining still works, and it increases Sequel's ability to introspect the code.

== Checking for Records

If you just want to know whether the current dataset would return any rows, use empty?:

  Album.empty?
  # SELECT 1 FROM albums LIMIT 1
  # => false

  Album.where(:id=>0).empty?
  # SELECT 1 FROM albums WHERE id = 0 LIMIT 1
  # => true

  Album.where(Sequel.like(:name, 'R%')).empty?
  # SELECT 1 FROM albums WHERE name LIKE 'R%' LIMIT 1
  # => false

== Aggregate Calculations

The SQL standard defines a few helpful functions to get aggregate information about datasets, such as +count+, +sum+, +avg+, +min+, and +max+. There are dataset methods for each of these aggregate functions.
+count+ just returns the number of records in the dataset:

  Album.count
  # SELECT count(*) AS count FROM albums LIMIT 1
  # => 2

The other methods take a column argument and call the aggregate function with the argument:

  Album.sum(:id)
  # SELECT sum(id) FROM albums LIMIT 1
  # => 3

  Album.avg(:id)
  # SELECT avg(id) FROM albums LIMIT 1
  # => 1.5

  Album.min(:id)
  # SELECT min(id) FROM albums LIMIT 1
  # => 1

  Album.max(:id)
  # SELECT max(id) FROM albums LIMIT 1
  # => 2
ruby-sequel-4.1.1/doc/reflection.rdoc000066400000000000000000000133101220156535500175450ustar00rootroot00000000000000
= Reflection

Sequel supports reflection information in multiple ways.

== Adapter in Use

You can get the adapter in use using Database#adapter_scheme:

  DB.adapter_scheme # e.g. :postgres, :jdbc, :odbc

== Database Connected To

In some cases, the adapter scheme will be the same as the database to which you are connecting. However, many adapters support multiple databases. You can use the Database#database_type method to get the type of database to which you are connecting:

  DB.database_type # :postgres, :h2, :mssql

== Tables in the Database

Database#tables gives an array of table name symbols:

  DB.tables # [:table1, :table2, :table3, ...]

== Views in the Database

Database#views gives an array of view name symbols:

  DB.views # [:view1, :view2, :view3, ...]

== Indexes on a table

Database#indexes takes a table name and gives a hash of index information. Keys are index names, values are subhashes with the keys :columns and :unique:

  DB.indexes(:table1)
  # {:index1=>{:columns=>[:column1], :unique=>false},
  #  :index2=>{:columns=>[:column2, :column3], :unique=>true}}

Index information generally does not include partial indexes, functional indexes, or indexes on the primary key of the table.

== Foreign Key Information for a Table

Database#foreign_key_list takes a table name and gives an array of hashes of foreign key information:

  DB.foreign_key_list(:table1)
  # [{:columns=>[:column1], :table=>:referenced_table, :key=>[:referenced_column1]}]

At least the following entries will be present in the hash:

:columns :: An array of columns in the given table
:table :: The table referenced by the columns
:key :: An array of columns referenced (in the table specified by :table), but can be nil on certain adapters if the primary key is referenced.

The hash may also contain entries for:

:deferrable :: Whether the constraint is deferrable
:name :: The name of the constraint
:on_delete :: The action to take ON DELETE
:on_update :: The action to take ON UPDATE

== Column Information for a Table

Database#schema takes a table symbol and returns column information in an array with each element being an array with two elements. The first element of each subarray is a column symbol, and the second element is a hash of information about that column. The hash should include the following keys:

:allow_null :: Whether NULL/nil is an allowed value for this column. Used by the Sequel::Model typecasting code.
:db_type :: The type of column the database provided, as a string. Used by the schema_dumper plugin for a more specific type translation.
:default :: The default value of the column, as either a string or nil. Uses a database specific format. Used by the schema_dumper plugin for converting to a ruby value.
:primary_key :: Whether this column is one of the primary key columns for the table. Used by the Sequel::Model code to determine primary key columns.
:ruby_default :: The default value of the column as a ruby object, or nil if there is no default or the default could not be successfully parsed into a ruby object.
:type :: The type of column, as a symbol (e.g. :string). Used by the Sequel::Model typecasting code.

Example:

  DB.schema(:table)
  # [[:column1, {:allow_null=>true, :db_type=>'varchar(255)', :default=>'blah',
  #   :primary_key=>false, :type=>:string}], ...]

== Column Information for a Model

Model#db_schema returns pretty much the same information, except it returns it as a hash with column keys instead of an array of two element arrays.

  Model.db_schema
  # {:column1=>{:allow_null=>true, :db_type=>'varchar(255)', :default=>'blah',
  #  :primary_key=>false, :type=>:string}, ...}

== Columns used by a dataset/model

Dataset#columns returns the columns of the current dataset as an array of symbols:

  DB[:table].columns # [:column1, :column2, :column3, ...]

Dataset#columns! does the same thing, except it ignores any cached value. In general, the cached value should never be incorrect, unless the database schema is changed after the dataset is created.

  DB[:table].columns! # [:column1, :column2, :column3, ...]

Model.columns does the same thing as Dataset#columns, using the model's dataset:

  Model.columns # [:column1, :column2, :column3, ...]

== Associations Defined

Sequel::Model offers complete introspection capability for all associations.

You can get an array of association symbols with Model.associations:

  Model.associations # [:association1, :association2, ...]

You can get the association reflection for a single association via Model.association_reflection. Association reflections are subclasses of Hash, for ease of use and introspection (and backwards compatibility):

  Model.association_reflection(:association1)
  # {:name=>:association1, :type=>:many_to_one, :model=>Model, ...}

You can get an array of all association reflections via Model.all_association_reflections:

  Model.all_association_reflections
  # [{:name=>:association1, :type=>:many_to_one, :model=>Model, ...}, ...]

Finally, you can get a hash of association reflections via Model.association_reflections:

  Model.association_reflections
  # {:association1=>{:name=>:association1, :type=>:many_to_one, :model=>Model, ...}, ...}

== Validations Defined

When using the validation_class_methods plugin, you can use the validation_reflections class method to get a hash with validation reflection information. This returns a hash keyed on the column name symbol:

  Model.validation_reflections[:column]
  # => [[:presence, {}], [:length, {:maximum=>255, :message=>'is just too long'}]]

Similarly, when using the constraint_validations plugin, you can use the constraint_validation_reflections class method:

  Model.constraint_validation_reflections[:column]
  # => [[:presence, {}], [:max_length, {:argument=>255, :message=>'is just too long'}]]
ruby-sequel-4.1.1/doc/release_notes/000077500000000000000000000000001220156535500173735ustar00rootroot00000000000000ruby-sequel-4.1.1/doc/release_notes/1.0.txt000066400000000000000000000026131220156535500204340ustar00rootroot00000000000000
=== New code organization

Sequel is now divided into two parts: sequel_core and sequel_model. These two parts are distributed as two separate gems. The sequel gem bundles sequel_core and sequel_model together.

If you don't use Sequel::Model in your code, you can just install and use sequel_core.
=== New model hooks implementation

The hooks implementation has been rewritten from scratch, is much more robust, and offers a few new features:

* More ways to define hooks: hooks can now be defined by supplying a block or a method name, or by overriding the hook instance method.

* Inheritable hooks: Hooks can now be inherited, which means that you can define general hooks in a model superclass, and use them in subclasses. You can also define global hooks on Sequel::Model that will be invoked for all model classes.

* Hook chains can be broken by returning false from within the hook.

* New after_initialize hook, invoked after instance initialization.

* The hook invocation order can no longer be changed. Hooks are invoked in order of definition, from the top of the class hierarchy (that is, from Sequel::Model) down to the specific class.

=== Miscellanea

* Removed deprecated adapter stubs, and all other deprecations in both sequel_core and sequel_model.

* Fixed String#to_time to correctly raise an error for invalid time stamps.

* Fixed error behavior when parse_tree or ruby2ruby are not available.
ruby-sequel-4.1.1/doc/release_notes/1.1.txt000066400000000000000000000105641220156535500204410ustar00rootroot00000000000000
=== DRY Sequel models

With the new Sequel release you no longer need to explicitly specify the table name for each model class, assuming your model name is the singular of the table name (just like in ActiveRecord or DataMapper):

  class UglyBug < Sequel::Model
  end

  UglyBug.table_name #=> :ugly_bugs

=== New model validations and support for virtual attributes

Sequel models now include validation functionality which largely follows the validations offered in ActiveRecord. Validations can be checked anytime by calling Model#valid?, with validation errors accessible through Model#errors:

  class Item < Sequel::Model
    validates_presence_of :name
  end

  my_item = Item.new
  my_item.valid? #=> false
  my_item.errors.full_messages #=> ["name is not present"]

The Model#save method has been changed to check for validity before saving. If the model instance is not valid, the #save method returns false without saving the instance. You can also bypass the validity test by calling Model#save! instead.

Model classes also now support virtual attributes, letting you assign values to any attribute (virtual or persistent) at initialization time:

  class User < Sequel::Model
    attr_accessor :password
  end

  u = User.new(:password => 'blah', ...)
  u.password #=> 'blah'

Also, virtual attributes can be validated just like persistent attributes.

=== Other changes (long list!)

* Added Model#reload as alias to Model#refresh.

* Changed Model.create to accept a block (#126).

* Fixed Model#initialize to accept nil values (#115).

* Added Model#update_with_params method with support for virtual attributes and auto-filtering of unrelated parameters, and changed Model.create_with_params to support virtual attributes (#128).

* Fixed Model.dataset to correctly set the dataset if using implicit naming or inheriting the superclass dataset (thanks celldee).

* Finalized support for virtual attributes.

* Fixed Model#set to work with string keys (#143).

* Fixed Model.create to correctly initialize instances marked as new (#135).

* Fixed Model#initialize to convert string keys into symbol keys. This also fixes a problem with validating objects initialized with string keys (#136).

* Added Dataset#table_exists? convenience method.

* Changed Dataset#group_and_count to accept multiple columns (#134).

* Added Dataset#select_all method.
* Added Dataset#select_more, Dataset#order_more methods (#129).

* Fixed Dataset#count to work correctly for grouped datasets (#144).

* Fixed joining datasets using aliased tables (#140).

* Added support for UNSIGNED constraint, used in MySQL (#127).

* Implemented constraint definitions inside Database#create_table.

* Enhanced Database.connect to accept options with string keys, so it can now accept options loaded from YAML files. Database.connect also automatically converts :username option into :user for compatibility with existing YAML configuration files for AR and DataMapper.

* Changed ODBC::Database to support connection using driver and database name, also added support for untitled columns in ODBC::Dataset (thanks Leonid Borisenko).

* Changed MySQL adapter to support specifying socket option.

* Fixed MySQL adapter to correctly format foreign key definitions (#123).

* Changed MySQL::Dataset to allow HAVING clause on ungrouped datasets, and put HAVING clause before ORDER BY clause (#133).

* Changed mysql adapter to default to localhost if :host option is not specified (#114).

* Added String#to_date. Updated mysql adapter to use String#to_date for mysql date types (thanks drfreeze).

* Fixed postgres adapter to define PGconn#async_exec as alias to #exec if not defined (for pure-ruby postgres driver).

* Changed postgres adapter to quote column references using double quotes.

* Applied patch for oracle adapter: fix behavior of limit and offset, transactions, #table_exists?, #tables and additional specs (thanks Liming Lian #122).

* Added support for additional field types in postgresql adapter (#146).

* Added support for date field types in postgresql adapter (#145).

* Added support for limiting and paginating datasets with fixed SQL, e.g. using Database#fetch.

* Added new Dataset#from_self method that returns a dataset selecting from the original dataset.

* Allow for additional filters on a grouped dataset (#119 and #120)

* Refactored Sequelizer to use Proc#to_sexp (method provided by r2r).

* Fixed bin/sequel to require sequel_model if available.
ruby-sequel-4.1.1/doc/release_notes/1.3.txt000066400000000000000000000060671220156535500204460ustar00rootroot00000000000000
=== Better model associations

The latest release of sequel_model includes new associations functionality written by Jeremy Evans which replaces the old relations code in previous versions. Please note that this version is not completely backward-compatible and you should therefore upgrade with caution.

The new implementation supports three kinds of relations: one_to_many, many_to_one and many_to_many, which correspond to has_many, belongs_to and has_and_belongs_to_many relations in ActiveRecord. In fact, the new implementation includes aliases for ActiveRecord association macros and is basically compatible with ActiveRecord conventions. It also supports DRY implicit class name references.
Here's a simple example:

  class Author < Sequel::Model
    has_many :books # equivalent to one_to_many
  end

  class Book < Sequel::Model
    belongs_to :author # equivalent to many_to_one
    has_and_belongs_to_many :categories # equivalent to many_to_many
  end

  class Category < Sequel::Model
    has_and_belongs_to_many :books
  end

These macros will create the following methods:

* Author#books, Author#add_book, Author#remove_book
* Book#author, Book#categories, Book#add_category, Book#remove_category
* Category#books, Category#add_book, Category#remove_book

Unlike ActiveRecord, one_to_many and many_to_many association methods return a dataset:

  a = Author[1234]
  a.books.sql #=> 'SELECT * FROM books WHERE (author_id = 1234)'

You can also tell Sequel to cache the association result set and return it as an array:

  class Author < Sequel::Model
    has_many :books, :cache => true
  end

  Author[1234].books.class #=> Array

You can of course bypass the defaults and specify class names and key names:

  class Node < Sequel::Model
    belongs_to :parent, :class => Node
    belongs_to :session, :key => :producer_id
  end

Another useful option is :order, which sets the order for the association dataset:

  class Author < Sequel::Model
    has_many :books, :order => :title
  end

  Author[1234].books.sql #=> 'SELECT * FROM books WHERE (author_id = 1234) ORDER BY title'

More information about associations can be found in the Sequel documentation.

=== Other changes

* Added configuration file for running specs (#186).

* Changed Database#drop_index to accept fixed arity (#173).

* Changed column definition SQL to put UNSIGNED constraint before unique in order to satisfy MySQL (#171).

* Enhanced MySQL adapter to support LOAD DATA LOCAL INFILE, added compress option for mysql connection by default (#172).

* Fixed bug when inserting hashes in array tuples mode.

* Changed SQLite adapter to catch RuntimeError raised when executing a statement and raise Error::InvalidStatement with the offending SQL and error message (#188).

* Fixed Dataset#reverse to not raise for unordered dataset (#189).

* Added Dataset#unordered method and changed #order to remove order if nil is specified (#190).

* Fixed reversing order of ASC expression (#164).

* Added support for :null => true option when defining table columns (#192).

* Fixed Symbol#method_missing to accept variable arity (#185).
ruby-sequel-4.1.1/doc/release_notes/1.4.0.txt000066400000000000000000000046161220156535500206030ustar00rootroot00000000000000
Eager loading for all types of associations:

  Artist.eager(:albums).all
  Album.eager(:artist, :genre, :tracks).all
  Album.eager(:artist).eager(:genre).eager(:tracks).all
  Album.filter(:year=>2008).eager(:artist).all

Eager loading supports cascading to an unlimited depth, and doesn't have any aliasing issues:

  Artist.eager(:albums=>:tracks).all
  Artist.eager(:albums=>{:tracks=>:genre}).all

Unfortunately, eager loading comes at the expense of a small amount of backward compatibility. If you were using uncached associations (the default in sequel_model 0.5), they no longer work the same way. Now, all associations act as if :cache=>true (which is now set for all associations, so if you wrote a tool that worked with both cached and uncached associations, it should still work).
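In practice, this means association access is memoized on the model instance. A small sketch (using the Author class from the 1.3 notes above):

  author = Author[1234]
  author.books # first access runs the query and caches the result
  author.books # returns the cached array, no additional query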
One to many associations now populate the corresponding many to one instance variable (even when eagerly loaded):

  # Assuming: Album.one_to_many :tracks
  album = Album.first

  # The following code is only one query,
  # not a query for the album and one for each track
  album.tracks.each{|t| puts t.album.name}

ActiveRecord style has_many :through associations are now supported via many_to_many. many_to_many will no longer select the entire result set, just the columns of the associated table (and not the join table), so it works for both has_and_belongs_to_many (simple join table) and has_many :through (join table model) scenarios. If you want to include all or part of the join table attributes, see the :select option for many_to_many associations.

We reduced the number of gems from three (sequel, sequel_core, sequel_model) to two (sequel, sequel_core). Basically, sequel_model is now just sequel, and the old sequel gem metapackage no longer exists. There isn't a reason to have a gem metapackage for two gems when one (sequel_model) depends on the other (sequel_core). This required a version bump for the model part of sequel from 0.5.0.2 to 1.4.0 (since the previous sequel gem version was 1.3).

Sequel 1.4.0 has fixes for 11 tracker issues, including fixes to the MySQL and PostgreSQL adapters.

We have switched the source control repository for Sequel from Google Code (which uses subversion) to github (which uses git). If you would like to contribute to Sequel, please fork the github repository, make your changes, and send a pull request. As before, posting patches on the Google Code issue tracker is fine as well.
ruby-sequel-4.1.1/doc/release_notes/1.5.0.txt000066400000000000000000000117101220156535500205750ustar00rootroot00000000000000
You can now graph a dataset and have the result split into component tables:

  DB[:artists].graph(:albums, :artist_id=>:id).first
  # => {:artists=>{:id=>artists.id, :name=>artists.name}, \
  #     :albums=>{:id=>albums.id, :name=>albums.name, :artist_id=>albums.artist_id}}

This aliases columns if necessary so they don't stomp on each other, which is what usually happens if you just join the tables:

  DB[:artists].left_outer_join(:albums, :artist_id=>:id).first
  # => {:id=>(albums.id||artists.id), :name=>(albums.name||artists.name), \
  #     :artist_id=>albums.artist_id}

Models can use graph as well, in which case the values will be model objects:

  Artist.graph(Album, :artist_id=>:id)
  # => {:artists=>#<Artist ...>, :albums=>#<Album ...>}

Models can now eager load via .eager_graph, which will load all the results and all associations in a single query. This is necessary if you want to filter on columns in associated tables. It works exactly the same way as .eager, and supports cascading of associations as well:

  # Artist.one_to_many :albums
  # Album.one_to_many :tracks
  # Track.many_to_one :genre

  Artist.eager_graph(:albums=>{:tracks=>:genre}).filter( \
    :tracks_name=>"Firewire").all

This will give you all artists that have an album with a track named "Firewire", and calling .albums on one of those artists will only return albums that have a track named "Firewire", and calling .tracks on one of those albums will return only the track(s) named "Firewire".

You can use set_graph_aliases to select specific columns:

  DB[:artists].graph(:albums, :artist_id=>:id).set_graph_aliases( \
    :artist_name=>[:artists, :name], :album_name=>[:albums, :name]).first
  # => {:artists=>{:name=>artists.name}, :albums=>{:name=>albums.name}}

You can use eager_graph with set_graph_aliases to have eager loading with control over the SELECT clause.
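For example, a minimal sketch of combining the two (the column choices are just for illustration):

  Artist.eager_graph(:albums).set_graph_aliases( \
    :artist_name=>[:artists, :name], :album_name=>[:albums, :name]).all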
All associations now update their reciprocal associations whenever the association methods are used, so you don't need to refresh the association or model to have the reciprocal association updated:

  Album.many_to_one :band
  Band.one_to_many :albums

  # Note that all of these associations are cached,
  # so after the first access there are no additional
  # database queries to fetch associated records.

  # many_to_one setter adds to reciprocal association
  band1.albums # => []
  album1.band = band1
  band1.albums # => [album1]
  band2.albums # => []
  album1.band = band2
  band1.albums # => []
  band2.albums # => [album1]
  album1.band = band2
  band2.albums # => [album1]
  album1.band = nil
  band2.albums # => []

  # one_to_many add_* method sets reciprocal association
  # one_to_many remove_* method removes reciprocal association
  album1.band # => nil
  band1.add_album(album1)
  album1.band # => band1
  band2.add_album(album1)
  album1.band # => band2
  band2.remove_album(album1)
  album1.band # => nil

  Post.many_to_many :tags
  Tag.many_to_many :posts

  # many_to_many add_* method adds to reciprocal association
  # many_to_many remove_* method removes from reciprocal association
  post1.tags # => []
  tag1.posts # => []
  tag1.add_post(post1)
  post1.tags # => [tag1]
  tag1.posts # => [post1]
  tag1.remove_post(post1)
  post1.tags # => []
  tag1.posts # => []
  post1.add_tag(tag1)
  post1.tags # => [tag1]
  tag1.posts # => [post1]
  post1.remove_tag(tag1)
  post1.tags # => []
  tag1.posts # => []

The MySQL and PostgreSQL adapters now support index types:

  index :some_column, :type => :hash # or :spatial, :full_text, :rtree, etc.

Starting in Sequel 1.5.0, some methods are deprecated. These methods will be removed in Sequel 2.0.0. The deprecation framework is fairly flexible. You can choose where the messages get sent:

  Sequel::Deprecation.deprecation_message_stream = STDERR # the default

  Sequel::Deprecation.deprecation_message_stream = \
    File.new('deprecation.txt', 'wb') # A file

  Sequel::Deprecation.deprecation_message_stream = nil # ignore the messages

You can even have all deprecation messages accompanied by a traceback, so you can see exactly where in your code you are using a deprecated method:

  Sequel::Deprecation.print_tracebacks = true

All deprecation methods come with a message telling you what alternative code will work.

In addition to deprecating some methods, we removed the ability to have arrays returned instead of hashes. The array code still had debugging messages left in it, and we are not aware of anyone using it. Hashes have been returned by default since Sequel 0.3.

We have also removed the Numeric date/time extensions (e.g. 3.days.ago). The existing extensions were incomplete, better ones are provided elsewhere, and the extensions were not really related to Sequel's purpose.

Sequel no longer depends on ParseTree, RubyInline, or ruby2ruby. They are still required to use the block filters. Sequel's only gem dependency is on the tiny metaid.

Sequel 1.5.0 has fixes for 12 tracker issues, including fixes to the Informix, MySQL, ODBC, ADO, JDBC, Postgres, and SQLite adapters.
ruby-sequel-4.1.1/doc/release_notes/2.0.0.txt000066400000000000000000000266541220156535500206040ustar00rootroot00000000000000
Blockless Filter Expressions
----------------------------

Before 2.0.0, in order to specify complex SQL expressions, you either had to resort to writing the SQL yourself in a string or using an expression inside a block that was parsed by ParseTree.
Because ParseTree was required, only ruby 1.8.* was supported, and supporting other ruby versions (ruby 1.9, JRuby, Rubinius) would never be possible. With 2.0.0, you no longer need to use a block to write complex SQL expressions.

The basics of the blockless filters are the usual arithmetic, inequality, and binary operators:

  +  = addition
  -  = subtraction
  *  = multiplication
  /  = division
  >  = greater than
  <  = less than
  >= = greater than or equal to
  <= = less than or equal to
  ~  = negation
  &  = AND
  |  = OR

You can use these operators on Symbols, LiteralStrings, and other Sequel::SQL::Expressions. Note that there is no equal or not equal operator; to specify those, you use a Hash.

Here are some examples:

  # Ruby code => SQL WHERE clause
  :active => active
  ~:active => NOT active
  ~~:active => active
  ~~~:active => NOT active
  :is_true[] => is_true()
  ~:is_true[] => NOT is_true()
  :x > 100 => (x > 100)
  :x < 100.01 => (x < 100.01)
  :x <= 0 => (x <= 0)
  :x >= 1 => (x >= 1)
  ~(:x > 100) => (x <= 100)
  {:x => 100} => (x = 100)
  {:x => 'a'} => (x = 'a')
  {:x => nil} => (x IS NULL)
  ~{:x => 100} => (x != 100)
  ~{:x => 'a'} => (x != 'a')
  ~{:x => nil} => (x IS NOT NULL)
  {:x => /a/} => (x ~ 'a') # Default, MySQL different
  ~{:x => /a/} => (x !~ 'a') # Default, MySQL different
  :x.like('a') => (x LIKE 'a')
  ~:x.like('a') => (x NOT LIKE 'a')
  :x.like(/a/) => (x ~ 'a') # Default, MySQL different
  ~:x.like('a', /b/) => ((x NOT LIKE 'a') AND (x !~ 'b')) # Default
  ~{:x => 1..5} => ((x < 1) OR (x > 5))
  ~{:x => DB[:items].select(:i)} => (x NOT IN (SELECT i FROM items))
  ~{:x => [1,2,3]} => (x NOT IN (1, 2, 3))
  :x + 1 > 100 => ((x + 1) > 100)
  (:x * :y) < 100.01 => ((x * y) < 100.01)
  (:x - :y/2) >= 100 => ((x - (y / 2)) >= 100)
  (((:x - :y)/(:x + :y))*:z) <= 100 => ((((x - y) / (x + y)) * z) <= 100)
  ~((((:x - :y)/(:x + :y))*:z) <= 100) => ((((x - y) / (x + y)) * z) > 100)
  :x & :y => (x AND y)
  :x & :y & :z => ((x AND y) AND z)
  :x & {:y => :z} => (x AND (y = z))
  {:y => :z} & :x => ((y = z) AND x)
  {:x => :a} & {:y => :z} => ((x = a) AND (y = z))
  (:x > 200) & (:y < 200) => ((x > 200) AND (y < 200))
  :x | :y => (x OR y)
  :x | :y | :z => ((x OR y) OR z)
  :x | {:y => :z} => (x OR (y = z))
  {:y => :z} | :x => ((y = z) OR x)
  {:x => :a} | {:y => :z} => ((x = a) OR (y = z))
  (:x > 200) | (:y < 200) => ((x > 200) OR (y < 200))
  (:x | :y) & :z => ((x OR y) AND z)
  :x | (:y & :z) => (x OR (y AND z))
  (:x & :w) | (:y & :z) => ((x AND w) OR (y AND z))
  ~((:x | :y) & :z) => ((NOT x AND NOT y) OR NOT z)
  ~((:x & :w) | (:y & :z)) => ((NOT x OR NOT w) AND (NOT y OR NOT z))
  ~((:x > 200) | (:y & :z)) => ((x <= 200) AND (NOT y OR NOT z))
  ~('x'.lit + 1 > 100) => ((x + 1) <= 100)
  'x'.lit.like(/a/) => (x ~ 'a')

None of these require blocks, you can use any directly in a call to filter:

  DB[:items].filter((:price * :tax) - :discount > 100)
  # => SELECT * FROM items WHERE (((price * tax) - discount) > 100)

  DB[:items].filter(:active & ~:archived)
  # => SELECT * FROM items WHERE (active AND NOT archived)

SQL String Concatenation
------------------------

Sequel now has support for expressing SQL string concatenation in an easy way:

  [:name, :title].sql_string_join(" - ") # SQL: name || ' - ' || title

You can use this in selecting columns, creating filters, ordering datasets, and possibly elsewhere.
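For instance, a rough sketch of using it in a SELECT and an ORDER BY (reusing the columns from the example above; the exact parenthesization in the generated SQL may differ):

  DB[:artists].select([:name, :title].sql_string_join(" - "))
  # SELECT name || ' - ' || title FROM artists

  DB[:artists].order([:name, :title].sql_string_join(" - "))
  # SELECT * FROM artists ORDER BY name || ' - ' || title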
Schema Reflection Support/Typecasting on Assignment
---------------------------------------------------

When used with PostgreSQL, MySQL, or SQLite, Sequel now has the ability to get information from the database's schema in regards to column types:

  DB.schema(:artist)
  => [[:id, {:type=>:integer, :db_type=>"integer", :max_chars=>0,
       :numeric_precision=>32, :allow_null=>false,
       :default=>"nextval('artist_id_seq'::regclass)"}],
      [:name, {:type=>:string, :default=>nil, :db_type=>"text",
       :numeric_precision=>0, :allow_null=>true, :max_chars=>0}]]

Models now use this information to typecast values on attribute assignment. For example, if you have an integer column named number and a text (e.g. varchar) column named title:

  1.5.1:
    model.number = '1'
    model.number # => '1'
    model.title = 1
    model.title # => 1

  2.0.0:
    model.number = '1'
    model.number # => 1
    model.title = 1
    model.title # => '1'

Typecasting can be turned off on a global, per class, and per object basis:

  Sequel::Model.typecast_on_assignment = false # Global
  Album.typecast_on_assignment = false # Per Class
  Album.new.typecast_on_assignment = false # Per Object

Typecasting is somewhat strict; it does not allow obviously bogus data to be used:

  model.number = 'a' # Raises error

This is in contrast to how some other ORMs handle the situation:

  model.number = 'a'
  model.number # => 0

If Sequel is being used with a web framework and you want to display friendly error messages to the user, you should probably turn typecasting off and set up the necessary validations in your models.

Model Association Improvements
------------------------------

Associations can now be eagerly loaded even if they have a block, though the block should not rely on being evaluated in the context of an instance. This allows you to filter on associations when eagerly loading:

  Artist.one_to_many :albums_with_10_tracks, :class=>:Album do |ds|
    ds.filter(:num_tracks => 10)
  end

  Artist.filter(:name.like('A%')).eager(:albums_with_10_tracks).all
  # SELECT * FROM artists WHERE (name LIKE 'A%')
  # SELECT albums.* FROM albums WHERE ((artist_id IN (...)) AND
  #   (num_tracks = 10))

Associations now have a remove_all_ method for removing all associated objects in a single query:

  Artist.many_to_many :albums
  Artist[1].remove_all_albums
  # DELETE FROM albums_artists WHERE artist_id = 1

  Artist.one_to_many :albums
  Artist[1].remove_all_albums
  # UPDATE albums SET artist_id = NULL WHERE artist_id = 1

All associations can specify a :select option to change which columns are selected. Previously only many to many associations supported this.

The SQL used when eagerly loading through eager_graph can be modified via the :graph_join_type, :graph_conditions, and :graph_join_table_conditions options.

:graph_join_type changes the join type from the default of :left_outer. This can be useful if you do not want any albums that don't have an artist in the result set:

  Album.many_to_one :artist, :graph_join_type=>:inner
  Album.eager_graph(:artist).sql
  # SELECT ... FROM albums INNER JOIN artists ...

:graph_conditions adds conditions on the join to the table you are joining, the eager_graph equivalent of an association block argument in eager. It takes either a hash or an array where all elements are arrays of length two, similar to join_table, where key symbols specify columns in the joined table and value symbols specify columns in the last joined or primary table:

  Album.many_to_one :artist, :graph_conditions=>{:active=>true}
  Album.eager_graph(:artist).sql
  # SELECT ... FROM albums LEFT OUTER JOIN artists ON ((artists.id =
  #   albums.artist_id) AND (artists.active = 't'))
:graph_join_table_conditions exists for many to many associations only, and operates the same as :graph_conditions, except it specifies a condition on the many to many join table instead of the associated model's table. This is necessary if the join table is also a model table with other columns on which you may want to filter:

  Album.many_to_many :genres, :join_table=>:ag, \
    :graph_join_table_conditions=>{:active=>true}
  Album.eager_graph(:genres).sql
  # SELECT ... FROM albums
  # LEFT OUTER JOIN ag ON ((ag.album_id = albums.id) AND (ag.active = 't'))
  # LEFT OUTER JOIN genres ON (genres.id = ag.genre_id)

Other Small Improvements
------------------------

* Dataset#invert returns a dataset that matches all records not matching the current filter.
* Dataset#unfiltered returns a dataset that has any filters removed.
* Dataset#last_page? and Dataset#first_page? for paginated datasets.
* The sequel command line tool now supports an -E or --echo argument that logs all SQL to the standard output. It also can take a path to a yaml file with database connection options, in addition to a database URL.
* Databases can now have multiple SQL loggers, so you can log to the standard output as well as a file.
* SQL identifiers (columns and tables) are now quoted by default (you can turn this off via Sequel.quote_identifiers = false if need be).
* Sequel.connect now takes an optional block that will disconnect the database when the block finishes.
* AlterTableGenerator now has add_primary_key and add_foreign_key methods.
* Running the specs without ParseTree installed skips the specs that require ParseTree.
* You can use an array of arrays instead of a hash when specifying conditions, which may be necessary in certain situations where you would be using the same hash key more than once.
* Almost all documentation for Sequel was updated for 2.0.0, so if you found Sequel documentation lacking before, check out the new RDoc pages.
* There have been many minor refactoring improvements, the code should now be easier to read and follow.
* Sequel now has no external dependencies.
* Sequel::Models now have before_validation and after_validation hooks.
* Sequel::Model hooks that return false cause the methods that call them (such as save) to return false.
* Sequel::Models can now load their schema on first instantiation, instead of when they are created, via Sequel::Model.lazy_load_schema=. This is helpful for certain web frameworks that reload all models on every request.
* Hook methods that use blocks can now include an optional tag, which allows them to work well with web frameworks that load source files every time they are modified.

The PostgreSQL adapter has been rewritten and now supports ruby-pg. There have also been improvements in the following adapters: DBI, MySQL, SQLite, Oracle, and MSSQL.

All of the methods that have been deprecated in 1.5.0 have now been removed. If you want to upgrade to Sequel 2.0.0 from version 1.4.0 or previous, upgrade to 1.5.1 first, fix all of the deprecation warnings that show up, and then upgrade to 2.0.0.

There were some backwards incompatible changes made in 2.0.0 beyond the removal of deprecated methods. These are:

* Inflector is no longer used, the inflection methods were moved directly into String (where they belong because inflections only make sense for strings). So to override singularization or pluralization rules, use String.inflections instead of Inflector.inflections.
* MySQL tinyints are now returned as boolean values instead of integers. MySQL doesn't have a boolean type, and usually it is recommended to use tinyint for a boolean column.

* You can no longer pass an array to Dataset#order or Dataset#select, you need to pass each argument separately (the * operator is your friend).

* You must use '?' instead of '(?)' when interpolating an array argument into a string (e.g. filter('x IN ?', [1,2,3]))

* You must pass an explicit table alias argument to join_table and related methods, you can no longer include the table alias inside the table argument.

* sqlite:// URLs now operate the same as file:// URLs (2 slashes for a relative path, 3 for an absolute path).
ruby-sequel-4.1.1/doc/release_notes/2.1.0.txt000066400000000000000000000230751220156535500205770ustar00rootroot00000000000000
Model Improvements
------------------

* one_to_many/many_to_many associations now support a :limit option, adding a limit/offset to the records returned. This was possible before using a block, so it is just added for convenience.

* Associations now support a :read_only option, which doesn't create methods that modify the database.

* Associations now support a :graph_select option, which allows specifying the columns of associated models to include when using eager_graph.

* one_to_many associations now have a :one_to_one option. When used it creates a getter and setter method similar to many_to_one. This fills the same role as ActiveRecord's has_one, but it is implemented as a couple of convenience methods over one_to_many, so it still requires that you specify the association name as a plural.

* Model datasets now have to_hash augmented so that it can be called without any arguments, in which case it yields an identity map (a hash with keys being primary key values and values being model instances).

* The Model.set_sti_key method was added, for easily setting up single table inheritance. It should be called only in the parent class.

* Calls to def_dataset_method with a block are now cached and reapplied to the new dataset if set_dataset is called afterward, or in a subclass.

* All validation methods can now be made conditional via an :if option, which takes either a symbol (which specifies an instance method) or a proc (which is instance_evaled).

* Model#set and Model#update have been added back, they are now aliases of #set_with_params and #update_with_params.

* Models now have set_only/set_except/update_only/update_except instance methods that take a hash (like you would provide to set or update) and additional arguments specifying which columns to allow or disallow.

* Models now have set_allowed_columns and set_restricted_columns methods, which operate similarly to ActiveRecord's attr_accessible and attr_protected. It is recommended that you use set_only or update_only instead of these methods, though. You can ignore the allowed or restricted columns by using #set_all or #update_all.

* The primary key column(s) is restricted by default. To allow it to be set via new/set/update, use:

    Sequel::Model.unrestrict_primary_key # Global
    Artist.unrestrict_primary_key # Per Class

* It is now easy to override the one_to_many/many_to_many association methods that modify the database (add_/remove_/remove_all_), as they have been broken into two methods, one that handles the caching features and a private one (prepended with an _) that handles the database changes (and which you can easily override without worrying about the caching).
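For example, a minimal sketch of overriding just the database portion of add_album (the association and the extra column here are assumptions for illustration):

  class Artist < Sequel::Model
    one_to_many :albums

    private

    # add_album still handles the caching; this only changes the
    # database side of the operation
    def _add_album(album)
      album.artist_id = pk
      album.added_by = 'admin' # hypothetical extra column
      album.save
    end
  end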
Table Joining
-------------

Dataset#join_table got a nice overhaul.  You can now use any join
type your database allows:

  DB[:artist].join_table(:natural, :albums)
  DB[:numbers].join_table(:cross, :numbers)

You can now specify the conditions as:

* String: "a.b = c.d"             # ON a.b = c.d
* Expression: :x < :y             # ON x < y
* Array of Symbols: [:x, :y, :z]  # USING (x, y, z)
* nil  # no conditions, used for NATURAL or CROSS joins

Dataset#join_table also takes a block that yields three arguments:

* join_table_alias - The alias/name of the table currently being
  joined
* last_join_table_alias - The alias name of the last table joined
  (if there was one) or the first FROM table (if not).
* joins - An array of JoinClause objects for all previous joins in
  the query.

Using the block you can specify conditions for complex joins without
needing to know in advance what table aliases will be used.

Expanded SQL Syntax Support
---------------------------

SQL Case statements are now supported directly using hashes or
arrays:

  {:x > 1 => 1}.case(0)
  # CASE WHEN x > 1 THEN 1 ELSE 0 END
  [[{:x=>1}, 0], [:x < 1, 1], [:x > 1, 2]].case(-1)
  # CASE WHEN x = 1 THEN 0 WHEN x < 1 THEN 1 WHEN x > 1 THEN 2
  # ELSE -1 END

You should use an array instead of a hash for multiple conditions
unless all conditions are orthogonal.

The SQL extract function has special syntax:

  EXTRACT(day FROM date)

This syntax is now supported via the following ruby code:

  :date.extract(:day)

Other Notable Changes
---------------------

* The sequel command line tool can now run migrations.  The -m option
  specifies the directory holding the migration files, and the -M
  option specifies the version to which to migrate.
* The PostgreSQL adapter supports nested transactions/savepoints.
* The schema parser now understands decimal fields, and will typecast
  to BigDecimal.
* PostgreSQL's numeric type is now recognized and returned as
  BigDecimal.
* HAVING now comes before ORDER BY, which most databases seem to
  prefer.  If your database wants HAVING after ORDER BY, please let
  us know.
* Symbol#qualify now exists, to specify the table name for a given
  symbol, similar to the use of #as to specify an alias.  This is
  mainly helpful in conjunction with the #join_table block, as that
  provides the table aliases to use to qualify the columns inside the
  block.
* BitwiseMethods (&, |, ^, ~, <<, >>) have been added to the
  NumericExpression class, so you can do the following:

    (:x + 1) ^ 10  # SQL: (x + 1) ^ 10
    ~(:x + 1)      # SQL: ~(x + 1)

  Usually, &, |, and ~ operate in a logical manner, but for
  NumericExpressions, they take on their usual bitwise meaning, since
  logical operations only make sense for booleans.
* #cast_numeric and #cast_string exist for Symbols, Strings, and
  other Sequel Expressions, which return the results casted and
  wrapped in either NumericExpression or StringExpression, so you can
  use the BitwiseMethods (&, |, ^, ~, <<, >>) or
  StringConcatenationMethods (+) directly (see the sketch after this
  list).
* Dataset#to_hash can take only one argument, in which case it uses
  that argument to specify the key, and uses the entire hash for the
  value.
* Dataset#graph can now take an array of columns to select from the
  joined table via the :select option.
* Dataset#filter and similar methods now combine the block and
  regular argument conditions if both are given, instead of ignoring
  the regular argument conditions.
* Dataset#filter(false) can now be used to make sure that no records
  are returned.  Dataset#filter(true) also works, but it's a no-op.
  Before, these raised errors.
* Dataset#count does a subquery for a dataset using DISTINCT, since
  otherwise it would yield a count for the query without DISTINCT.
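As a hedged illustration of the cast methods mentioned above, here is
a small sketch; the flags and code columns are hypothetical:

  # Cast a string column to a number, then use a bitwise operator
  dataset.filter((:flags.cast_numeric & 4) > 0)

  # Cast a numeric column to a string, then concatenate
  dataset.select(:code.cast_string + '-suffix')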
ParseTree Support Officially Deprecated
---------------------------------------

The support for ParseTree-based block filters has officially been
deprecated and will be removed in Sequel 2.2.  To use the expression
filters (which don't require ParseTree) inside blocks, use:

  SEQUEL_NO_PARSE_TREE = true
  require 'sequel'

  # OR

  require 'sequel'
  Sequel.use_parse_tree = false

This is the default if ParseTree cannot be loaded.  If ParseTree can
be loaded, it remains the default, in order not to immediately break
existing code.

With this set, you can use the expression filters inside of blocks:

  dataset.filter{((:x + 1) & 10 < :y) & :z}

That doesn't gain you all that much, but there are some methods that
feed block arguments into filter, such as the following:

  dataset.first(5){((:x + 1) & 10 < :y) & :z}

Which will get you the first 5 records matching the condition.

Backwards Incompatible Changes
------------------------------

* To change the datetime class used from Time to DateTime, you now
  use:

    Sequel.datetime_class = DateTime # instead of Sequel.time_class

* Models now raise errors if you try to access a missing or
  restricted method via new/set/update, instead of just silently
  skipping that parameter.  To get the old behavior:

    Sequel::Model.strict_param_setting = false

* The association_dataset method now takes into account the :eager
  option and the block argument, where it didn't before.  It also
  takes into account the new :limit option.
* Association methods now raise errors in most cases if the model
  doesn't have a valid primary key.
* Dataset#join_table used to allow a symbol as a conditions argument
  as a shortcut for a hash:

    DB[:artist].join(:albums, :artist_id)
    # ON albums.artist_id = artists.id

  With the changes to #join_table, this no longer works.  It would
  now be interpreted as a boolean column:

    DB[:artist].join(:albums, :artist_id)
    # ON artists.id

  Use the following slightly longer version for the old behavior:

    DB[:artist].join(:albums, :artist_id=>:id)
    # ON albums.artist_id = artists.id

* MySQL users need to be careful when upgrading, the following code
  will once again cause an error:

    DB[:artists].each{|artist| DB[:albums].each{|album| ...}}

  To fix it, change the code to:

    DB[:artists].all{|artist| DB[:albums].each{|album| ...}}

  The issue is the MySQL adapter doesn't release the database
  connection while running each, and the second call to each gets the
  same database connection (while the other query is still running),
  because it is in the same thread.  Using #all for the outside query
  ensures that the database connection is released before the block
  is called.

  The reason for this change was that the workaround provided for
  MySQL could potentially cause issues with transactions for all
  adapters.
* String#asc and String#desc are no longer defined, as ordering on a
  plain string column should be a no-op.  They are still defined on
  LiteralStrings.
* You can no longer abuse the SQL::Function syntax to use a table
  alias with specified columns (e.g. :table[:col1, :col2, :col3]) or
  to cast to types (e.g. :x.cast_as(:varchar[20])).  Use a
  LiteralString in both cases.

ruby-sequel-4.1.1/doc/release_notes/2.10.0.txt

New Supported Adapters and Databases
------------------------------------

* A DataObjects adapter was added that supports PostgreSQL, MySQL,
  and SQLite.
  DataObjects is the underlying database library used by DataMapper,
  and has potential performance advantages by doing all typecasting
  in C.
* A Firebird Adapter was added; it requires the modified Ruby Fb
  adapter found at http://github.com/wishdev/fb.
* An H2 JDBC subadapter was added, based on the code used in JotBot.
  H2 is an embeddable Java database, and may be preferable to using
  SQLite on JDBC because SQLite requires native code.

New Core Features
-----------------

* Sequel now has database independent migrations.  Before, column
  types in migrations were not translated per database, so it was
  difficult to set up a migration that worked on multiple databases.
  Sequel now accepts ruby classes as database types, in addition to
  symbols and strings.  If a ruby class is used, it is translated to
  the most appropriate database type.  Here is an example using all
  supported classes (with Sequel's default database type):

    DB.create_table(:cats) do
      primary_key :id, :type=>Integer              # integer
      String :a                                    # varchar(255)
      column :b, File                              # blob
      Fixnum :c                                    # integer
      foreign_key :d, :other_table, :type=>Bignum  # bigint
      Float :e                                     # double precision
      BigDecimal :f                                # numeric
      Date :g                                      # date
      DateTime :h                                  # timestamp
      Time :i                                      # timestamp
      Numeric :j                                   # numeric
      TrueClass :k                                 # boolean
      FalseClass :l                                # boolean
    end

  Type translations were tested on the PostgreSQL, MySQL, SQLite, and
  H2 databases.  The default translations should work OK for most
  databases, but there will probably be a type or two that doesn't
  work.  Please send in a patch if Sequel uses a column type that
  doesn't work on your database.

  Note that existing migrations still work fine, in most cases.  If
  you were using strings or symbols for types before, they should
  still work.  See the Backwards Compatibility section below for
  details.

  Also note that this doesn't relate solely to migrations, as any
  database schema modification method that accepts types will accept
  one of the above classes.
* A ton of internal work was done to better support databases that
  fold unquoted identifiers to uppercase (which is the SQL standard).
  Sequel now allows you to set a method to call on identifiers going
  both into and out of the database.  The default is to downcase
  identifiers coming out, and upcase identifiers going in, though
  this is overridden by the PostgreSQL, MySQL, and SQLite adapters to
  not do anything (since they fold to lowercase by default).

  The settings are called identifier_input_method and
  identifier_output_method, and like most Sequel settings, they can
  be set globally, per database, or per dataset:

    # Global (use uppercase in ruby and lowercase in the database)
    Sequel.identifier_input_method = :downcase
    Sequel.identifier_output_method = :upcase

    # Per Database (use camelized names in the database, and
    # underscored names in ruby)
    DB.identifier_input_method = :camelize
    DB.identifier_output_method = :underscore

    # Per Dataset (obfuscate your database columns!)
    class String; def rot_13; tr('A-Za-z', 'N-ZA-Mn-za-m') end end
    ds = DB[:table]
    ds.identifier_input_method = :rot_13
    ds.identifier_output_method = :rot_13

* Schema parsing support was added to the JDBC adapter, using the
  JDBC metadata methods.  This means that models that use the JDBC
  adapter will typecast data in their column setters and
  automatically select the correct primary key column(s).  This is
  currently the only adapter that supports schema parsing when using
  an MSSQL or Oracle database.
* Database#create_table now takes options, which you can use to
  specify a MySQL engine, charset, and/or collation (see the sketch
  at the end of this list).
  You can also set a default engine, charset, and collation for MySQL
  to use:

    Sequel::MySQL.default_engine = 'InnoDB'
    Sequel::MySQL.default_charset = 'utf8'
    Sequel::MySQL.default_collate = 'utf8'

  The defaults will be used if the options are not provided.  If a
  default engine is set, you can specify :engine=>nil to not use it
  (same goes for charset and collate).
* The Sequel::DatabaseConnectionError exception class was added.  It
  is raised by the connection pool if there is an error attempting to
  instantiate a database connection.  Also, if the adapter returns
  nil instead of raising an error for faulty connection parameters,
  DatabaseConnectionError will be raised immediately, instead of the
  connection pool busy waiting until it gives up with a
  PoolTimeoutError.
* Database#tables is now supported on the JDBC adapter, returning an
  Array of table name symbols.
* Sequel now converts the following Java types returned by the JDBC
  adapter into ruby types: Java::JavaSQL::Timestamp,
  Java::JavaSQL::Time, Java::JavaSQL::Date,
  Java::JavaMath::BigDecimal, and Java::JavaIo::BufferedReader.
* When using the PostgreSQL adapter with the postgres-pr driver,
  Sequel will use a custom string escaping routine unless
  force_standard_strings = false.  This means that using Sequel's
  defaults, postgres-pr will correctly escape strings now.
* The SQLite adapter now returns float, real, and double precision
  columns as Floats.
* The SQLite adapter logs beginning, committing, and rolling back
  transactions.
* Sequel now has an internal version (before, the only way to tell
  the version was to look at the gem being used).  It is accessible
  at Sequel.version.
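Here is a brief, hedged sketch of the Database#create_table options
described above; the table and columns are hypothetical:

  # Per-table MySQL storage options (illustrative values)
  DB.create_table(:posts, :engine=>'MyISAM', :charset=>'latin1') do
    primary_key :id
    String :title
  end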
New Model Features
------------------

* A new validates_not_string validation was added for Sequel Models.
  It is intended to be used with the raise_on_typecast_failure =
  false setting.  In this case, for a non-string database column, if
  there is a string value when the record is going to be saved, it is
  due to the fact that Sequel was not able to typecast the given data
  correctly (so it is almost certainly not valid).  This should make
  Sequel easier to use with web applications.
* An :allow_missing validation option was added to all standard
  validations.  This option skips the validation if the attribute is
  not in the object's values.  It is different from :allow_nil, which
  will skip the value if it is present but nil in the values.  The
  intended use case for this option is when the database provides a
  good default.  If the attribute is not present in values, the
  database will use its default.  If the attribute is present in the
  values but equals nil, Sequel will attempt to insert it into the
  database as a NULL value, instead of using the database's default.
  If you don't want Sequel to insert a NULL value in the database,
  but you want the database to provide the default, this is the
  option to use.
* validates_each now accepts :allow_nil and :allow_blank options, so
  it is easier to create custom validations with the same options as
  the standard validations.
* Before_* hooks now run in the reverse order that they were added.
  The purpose of hooks is to wrap existing functionality, and making
  later before_* hooks run before previous before_* hooks is the
  correct behavior.
* You can now add your own hook types, via Model.add_hook_type.  This
  is intended for plugin use.  All of the standard hooks are now
  implemented using this method.
* The value of new? in an after_save hook now reflects the previous
  state of the model (so true for a create and false for an update),
  instead of always being false.  This makes it easier to have a
  complex after_save hook that still needs to differentiate between a
  newly created record and an updated record, without having to add
  separate after_create and after_update hooks.
* The value of changed_columns in an after_update hook now reflects
  the value before the update occurred, instead of usually being
  empty.  Previously, to have this functionality, you generally had
  to save the value to an instance variable in a before_update hook
  so you could reference it in the after_update hook.

Other Improvements
------------------

* Sequel no longer overwrites the following Symbol instance methods
  when running on ruby 1.9: [], <, <=, >, and >=.  One of Sequel's
  principles is that it does not override methods defined by ruby,
  and now that ruby 1.9 defines the above methods on Symbol, Sequel
  shouldn't be overwriting them.

  Sequel already provides a way to work around this issue when
  another library adds the same methods to Symbol that Sequel does.
  For example, you need to change the following:

    dataset.filter(:number > 1)
    dataset.filter(:number >= 2)
    dataset.filter(:name < 'M')
    dataset.filter(:name <= 'I')
    dataset.filter(:is_bool[:x])

  To:

    dataset.filter{|o| o.number > 1}
    dataset.filter{|o| o.number >= 2}
    dataset.filter{|o| o.name < 'M'}
    dataset.filter{|o| o.name <= 'I'}
    dataset.filter{|o| o.is_bool(:x)}

  The argument provided to the block is a Sequel::SQL::VirtualRow.
  This class uses method_missing so that any methods called on it
  return Sequel::SQL::Identifiers (if no arguments are provided) or
  Sequel::SQL::Function (if arguments are provided).

  If you were using one of the above symbol methods outside of a
  filter, you can call sql_string, sql_number, or sql_function on the
  symbol.  So the following would also work:

    dataset.filter(:number.sql_number > 1)
    dataset.filter(:number.sql_number >= 2)
    dataset.filter(:name.sql_string < 'M')
    dataset.filter(:name.sql_string <= 'I')
    dataset.filter(:is_bool.sql_function(:x))

  Using the block argument makes for a nicer API, though, so I
  recommend using it when possible.

  Note that if you are running ruby 1.8 or jruby without the --1.9
  flag, you don't need to worry.  If you are running ruby 1.9 or
  jruby --1.9, or you plan to at some point in the future, you should
  inspect your code for existing uses of these methods.  Here are a
  couple command lines that should find most uses:

    # Find :symbol[]
    egrep -nr ':['\''"]?[a-zA-Z_0-9]*['\''"]?\[' *

    # Find :symbol (<|>|<=|>=)
    egrep -nr '[^:]:['\''"]?[a-zA-Z_0-9]*['\''"]? *[<>]=?' *

* Database#quote_identifiers now affects future schema modifications
  when using the database.  Previously, it only affected future
  schema modifications if a schema modification method had not yet
  been called.
* Literalization of Times and DateTimes is now correct when using the
  MySQL JDBC subadapter.
* Literalization of Blobs is now correct when using the PostgreSQL
  JDBC subadapter.
* Index and table names are quoted when creating indices in the
  PostgreSQL adapter.
* Dataset#delete was changed in the SQLite adapter to add a where
  clause that is always true, instead of doing an explicit count
  first and then deleting.  This is simpler, though it could
  potentially have performance implications.
* The sequel command line tool now supports symbol keys and unnested
  hashes in YAML files, so it should work with Merb's database.yml.
  It also includes the error class in the case of an error.
* The integration type tests were greatly expanded.  Generally,
  running the integration tests is a good way to determine how well
  your database is supported.
* Dataset#quote_identifier now returns LiteralStrings as-is, instead
  of treating them as regular strings.
* Sequel no longer modifies the MySQL::Result class when using the
  MySQL adapter.

Backwards Compatibility
-----------------------

* If you were previously using a database that returned uppercase
  identifiers, it will probably return lowercase identifiers by
  default now.  To get back the old behavior:

    DB.identifier_output_method = nil

* The module hierarchy under Sequel::SQL has changed.  Now, modules
  do not include other modules, and the following modules were
  removed since they would have been empty after removing the modules
  they included: Sequel::SQL::SpecificExpressionMethods and
  Sequel::SQL::GenericExpressionMethods.
* Sequel no longer assumes the public schema by default when
  connecting to PostgreSQL.  You can still set the default schema to
  use (even to public).
* The ability to load schema information for all tables at once was
  removed from the PostgreSQL adapter.  While it worked, it had some
  issues, and it was difficult to keep it working when some new
  features were used.  This ability wasn't exposed to the user, and
  was purely an optimization.  If you have any code like:

    DB.schema

  by itself after the Database object was instantiated, you should
  remove it.
* The Database#primary_key API changed in the PostgreSQL shared
  adapter; it now accepts an options hash with :server and :conn keys
  instead of a server symbol.  Also, quite a few private Database
  instance methods changed, as well as some constants in the
  AdapterMethods.
* It is possible that some migrations will break, though it is
  unlikely.  If you were using any of the classes mentioned above as
  a method inside a migration, it might be broken.  However, since
  String, Float, and Integer wouldn't have worked as methods before,
  it is unlikely that anyone used this.
* The meaning of #String, #Integer, and #Float inside
  Sequel::Schema::Generator (i.e. inside a Database#create_table
  block) has changed.  Before, these used to call private Kernel
  methods, now, they set up columns with the appropriate database
  type.
* The Database#lowercase method in the DBI adapter was removed, as
  its use case is now met by the identifier_output_method support.
* Database#uri is now aliased explicitly via a real method, to allow
  for easier subclassing.
* You can no longer pass nil as the second argument to
  Database#create_table.

ruby-sequel-4.1.1/doc/release_notes/2.11.0.txt

Optimizations
-------------

* Model.[] was optimized to use static SQL in cases where doing so
  should result in the same output.  This should result in a 30-40%
  performance increase.  Since this can be the most significant or
  only method call in a web application action, this has potential to
  significantly enhance the performance of web application code.
  In order for this optimization to have an effect, you need to make
  sure that you are calling set_dataset with a Symbol and not a
  Dataset object:

    # Optimized:
    class Foo < Sequel::Model; end
    class Foo < Sequel::Model(:foos); end
    class Foo < Sequel::Model
      set_dataset :foos
    end

    # Not Optimized, but otherwise equivalent:
    class Foo < Sequel::Model(Model.db[:foos]); end
    class Foo < Sequel::Model
      set_dataset db[:foos]
    end

* Dataset#literal was refactored for performance reasons to make
  overriding it in subclasses unnecessary.  The changes made result
  in a 20-25% performance increase.  Sequel can spend about 10% of
  its time in Dataset#literal, so this may be only a 2% overall
  performance improvement.

New Features
------------

* Association datasets now know about the model objects that created
  them, as well as the related association reflection.  This makes
  association extensions much more powerful.  For example, you can
  now create generic association extensions such as:

    module FindOrCreate
      def find_or_create(vals)
        first(vals) || association_reflection.associated_class. \
          create(vals.merge(association_reflection[:key]=> \
          model_object.id))
      end
    end

  The above will work for any standard one_to_many association:

    Artist.one_to_many :albums, :extend=>FindOrCreate

    # Create an album named Foo related to this artist,
    # unless such an album already exists
    Artist.first.albums_dataset.find_or_create(:name=>'Foo')

  Before, the only way to do the above was to use a closure inside
  the :dataset option proc, which couldn't be done generically for
  multiple associations.
* A :conditions association option was added, which allows simple
  filters to be set up without defining :graph_conditions and an
  association block:

    # 2.10.0
    one_to_many(:japanese_verses, :class=>:Verse, \
      :graph_conditions=>{:languageid=>3}) do |ds|
      ds.filter(:languageid=>3)
    end

    # 2.11.0
    one_to_many(:japanese_verses, :class=>:Verse, \
      :conditions=>{:languageid=>3})

* A :clone association option was added, which allows you to clone an
  existing association.  This is most useful when you are dealing
  with a legacy schema and had to define the same options redundantly
  for each type of association.  You can now do:

    many_to_many :deputies, :class=>:Employee, \
      :join_table=>:employeecurrentaudits, :left_key=>:currentauditid, \
      :right_key=>:employeeid, :order=>[:firstname, :lastname] do |ds|
      ds.filter(:active).filter(:capacity=>1)
    end
    many_to_many :project_managers, :clone=>:deputies do |ds|
      ds.filter(:active).filter(:capacity=>2)
    end
    many_to_many :team_leaders, :clone=>:deputies do |ds|
      ds.filter(:active).filter(:capacity=>3)
    end

  All of the above would use the same :class, :join_table, :left_key,
  :right_key, and :order options.  If you don't provide an
  association block, but you are cloning an association that has one,
  the cloned association's block is used.  You can use the :block=>nil
  option to not use a block even if the cloned association has a
  block.
* Dataset#select, #select_more, #order, #order_more, and #get all
  take a block that yields a Sequel::SQL::VirtualRow instance,
  similar to the behavior of filter.  This allows for the easier use
  of SQL functions on Ruby 1.9:

    # 2.10.0
    dataset.select(:prettify.sql_function(:name))

    # 2.11.0
    dataset.select{|o| o.prettify(:name)}

* String#lit can now accept arguments and return an SQL literal
  string.  This allows you to do things that were previously hard or
  at least unnecessarily verbose.
  For example, you can now easily use the SQL standard SUBSTRING
  function:

    column = :user
    pattern = params[:pattern]
    dataset.select{|o| o.substring('? from ?'.lit(column, pattern))}

* A validates_inclusion_of validation method was added to Model.  You
  can provide a Range or an Array in the :in option to specify the
  allowed values:

    validates_inclusion_of :value, :in=>1..5
    validates_inclusion_of :weekday, :in=>%w'Monday Tuesday ...'

* Dataset#with_sql was added, which returns a copy of the dataset
  with static SQL.  This is useful if you want to keep the same
  row_proc/graph/transform/etc., but want to use your own custom SQL.

Other Improvements
------------------

* You can now use Sequel's database independent types when casting:

    dataset.select(:number.cast(String))

  Among other things, the default cast types for cast_string and
  cast_numeric now work in the MySQL adapter.
* Model#set_associated_object was added.  The many_to_one association
  setter method calls it.  This allows you to easily override the
  association setters for all many_to_one associations of a class by
  modifying a single method.
* Typecasting invalid date strings now raises a
  Sequel::Error::InvalidValue instead of an argument error, which
  means that you can use raise_on_typecast_failure = false and not
  have an error raised when an invalid date format is used.
* String#to_sequel_blob was added and should now be used instead of
  String#to_blob.  sqlite3-ruby defines String#to_blob differently,
  which could cause problems.
* Blob columns are now fully supported in the SQLite adapter, with
  the hex escape syntax being used for input, and returning columns
  of type Sequel::SQL::Blob on output.
* The SQLite adapter drop_column support is now significantly more
  robust.
* The SQLite adapter now supports rename_column.
* The MySQL adapter now supports stored procedures with multiple
  arguments.
* The MySQL adapter can now disable compressed connections to the
  server via the :compress=>false option.
* The MySQL adapter now sets a default timeout of 30 days on the
  database connection; you can change it via the :timeout option,
  which accepts a number of seconds.
* The MySQL adapter now sets SQL_AUTO_IS_NULL to false by default;
  you can use the :auto_is_null=>true option to not do this.
* The MySQL adapter now sets the encoding option on the database
  connection itself, so it works across reconnects.
* Sequel itself no longer uses String#lit or Symbol#* internally, so
  it shouldn't break if another library defines them.
* The default index name is now generated correctly if a non-String
  or Symbol column is used.
* Some ruby -w warnings have been fixed.
* INSERTs are now sent to the master database instead of the slave
  database(s) if using a master/slave database configuration and
  PostgreSQL 8.2+ or Firebird.
* DateTime literalization has been fixed in the Firebird adapter.
* Date literalization has been fixed in the H2 JDBC subadapter.
* Release notes for versions from 1.0 to the present are now included
  in the Sequel repository and the RDoc documentation, see
  http://sequel.rubyforge.org/rdoc/files/doc/release_notes/

Backwards Compatibility
-----------------------

* The optimization of Model.[] may break if you modify the model's
  dataset behind its back.  Always use Model.set_dataset if you want
  to change a Model's dataset.
* Sequel::Dataset::UnsupportedExceptIntersect and
  Sequel::Dataset::UnsupportedExceptIntersectAll will now only be
  defined if you are using an adapter that requires them.
* The private Model#cache_delete_unless_new method has been removed.
* Sequel::SQL::IrregularFunction was removed, as it was a bad hack
  that is not used by Sequel anymore.  Unless you were instantiating
  it directly or using a plugin/extension that did, this shouldn't
  affect you.  Using a Sequel::SQL::Function with a
  Sequel::SQL::PlaceholderLiteralString is recommended instead; see
  the substring example above.

ruby-sequel-4.1.1/doc/release_notes/2.12.0.txt

Overview
--------

Sequel 2.12 is really just a stepping stone to Sequel 3.0, which will
be released next month.  All major changes currently planned for 3.0
have been made in 2.12, but 2.12 contains many features that have
been deprecated and will be removed or moved into extensions or
plugins in 3.0.

Deprecation Logging
-------------------

If you use a deprecated method or feature, Sequel will by default
print a deprecation message and 10 lines of backtrace to standard
error to easily allow you to figure out which code needs to be
updated.  You can change where the deprecation messages go and how
many lines of backtrace are given using the following:

  # Log deprecation information to a file
  Sequel::Deprecation.output = File.open('deprecated.txt', 'wb')

  # Use 5 lines of backtrace when logging deprecation messages
  Sequel::Deprecation.backtraces = 5

  # Use all backtrace lines when logging deprecation messages
  Sequel::Deprecation.backtraces = true

  # Don't include backtraces in the deprecation logging
  Sequel::Deprecation.backtraces = false

  # Turn off all deprecation logging
  Sequel::Deprecation.output = nil

Deprecated Features Moving to Extensions
----------------------------------------

* Migrations are being moved into sequel/extensions/migration.  There
  isn't any reason that they should be loaded in normal use since
  they are used so rarely.  The sequel command line tool uses this
  extension to run the migrations.
* Adding the blank? method to all objects has been moved into
  sequel/extensions/blank.
* Dataset#print and Sequel::PrettyTable have been moved into
  sequel/extensions/pretty_table.
* Dataset#query and related methods have been moved into
  sequel/extensions/query.
* Dataset#paginate and related methods have been moved into
  sequel/extensions/pagination.
* String inflection methods (e.g. "people".singularize) have been
  moved into sequel/extensions/inflector.
* String date/time conversion methods (e.g. '2000-01-01'.to_date)
  have been moved into sequel/extensions/string_date_time.

Deprecated Model Features Moving to Plugins
-------------------------------------------

* Model validation class methods have been moved to a plugin.  Sequel
  users are encouraged to write their own validate instance method
  instead.  A new validation_helpers plugin has been added to make
  this easier; it's explained in the New Features section.  If you
  want to continue using the validation class methods:

    Sequel::Model.plugin :validation_class_methods

* Model hook class methods have been moved to a plugin.  Sequel users
  are encouraged to write their own hook instance methods, and call
  super to get hooks specified in superclasses or plugins.  If you
  want to continue using the hook class methods:

    Sequel::Model.plugin :hook_class_methods

* Model schema methods (e.g. Model.set_schema, Model.create_table,
  Model.drop_table) have been moved to a plugin.  The use of these
  methods has been discouraged for a long time.
  If you want to use them:

    Sequel::Model.plugin :schema

* Model.set_sti_key has been moved to a plugin.  So you should
  change:

    MyModel.set_sti_key :key_column

  to:

    MyModel.plugin :single_table_inheritance, :key_column

* Model.set_cache has been moved to a plugin.  So you should change:

    MyModel.set_cache cache_store, opts

  to:

    MyModel.plugin :caching, cache_store, opts

* Model.serialize has been moved to a plugin.  So you should change:

    MyModel.serialize :column, :format=>:yaml

  to:

    MyModel.plugin :serialization, :yaml, :column

  Because the previous serialization support depended on dataset
  transforms, the new serialization support is implemented
  differently, and behavior may not be identical in all cases.
  However, this should be a drop-in replacement for most users.

Deprecated Features To Be Removed in Sequel 3.0
-----------------------------------------------

* Dataset#transform is deprecated without any replacement planned.
  It was announced on the Sequel mailing list that transforms would
  be removed unless someone said they needed them, and nobody said
  that they did.
* Dataset#multi_insert and Dataset#import are no longer aliases of
  each other.  Dataset#multi_insert now takes an array of hashes, and
  Dataset#import now takes an array of columns and an array of arrays
  of values.  Using multi_insert with import's API or vice-versa is
  deprecated.
* Calling Dataset#[] with no arguments or an integer argument is
  deprecated.
* Calling Dataset#map with both an argument and a block is
  deprecated.
* Database#multi_threaded? and Database#logger are both deprecated.
* Calling Database#transaction with a symbol to specify which server
  to use is deprecated.  You should now call it with an option hash
  with a :server key.
* Array#extract_options! and Object#is_one_of? are both deprecated.
* The metaprogramming methods taken from metaid are deprecated and
  have been moved into Sequel::Metaprogramming.  If you want them
  available to specific objects/classes, just include or extend with
  Sequel::Metaprogramming.  If you want all objects to have access to
  the metaprogramming methods, install metaid.  Note that the
  class_def method from metaid doesn't exist in
  Sequel::Metaprogramming, since it really isn't different from
  define_method (except it is public instead of private).
* Module#class_attr_overridable, #class_attr_reader, and #metaalias
  are deprecated.
* Using Model#set or #update when the columns for the model are not
  set and you provide a hash with symbol keys is deprecated.
  Basically, you must have setter methods now for any columns used in
  #set or #update.
* Model#set_with_params and #update_with_params are deprecated, use
  #set and #update instead.
* Model#save! is deprecated, use #save(:validate=>false).
* Model.is and Model.is_a are deprecated, use Model.plugin.
* Model.str_columns, Model#str_columns, #set_values, and
  #update_values are deprecated.  You should use #set and #update
  instead of #set_values and #update_values, though they operate
  differently.
* Model.delete_all, Model.destroy_all, Model.size, and Model.uniq are
  deprecated, use .delete, .destroy, .count, and .distinct.
* Model.belongs_to, Model.has_many, and
  Model.has_and_belongs_to_many are deprecated, use .many_to_one,
  .one_to_many, and .many_to_many.
* Model#dataset is deprecated, use Model.dataset.
* SQL::CastMethods#cast_as is deprecated, use #cast.
* Calling Database#schema without a table argument is deprecated.
* Dataset#uniq is deprecated, use Dataset#distinct.
* Dataset#symbol_to_column_ref is deprecated, use #literal.
* Dataset#quote_column_ref is deprecated, use #quote_identifier.
* Dataset#size is deprecated, use #count.
* Passing options to Dataset#each, #all, #single_record,
  #single_value, #sql, #select_sql, #update, #update_sql, #delete,
  #delete_sql, and #exists is deprecated.  Modify the options first
  using clone or a related method, then call one of the above
  methods.
* Dataset#create_view and #create_or_replace_view are deprecated, use
  the database methods instead.
* Dataset.dataset_classes, #model_classes, #polymorphic_key, and
  #set_model are deprecated.
* Database#>> is deprecated.
* String#to_blob and SQL::Blob#to_blob are deprecated, use
  #to_sequel_blob.
* The use of Symbol#| to create array subscripts is deprecated, use
  Symbol#sql_subscript.
* Symbol#to_column_ref is deprecated, use Dataset#literal.
* String#expr is deprecated, use String#lit.
* Array#to_sql, String#to_sql, and String#split_sql are deprecated.
* Passing an array to Database#<< is deprecated.
* Range#interval is deprecated.
* Enumerable#send_each is deprecated.
* When using ruby 1.8, Hash#key is deprecated.
* Sequel.open is deprecated, use Sequel.connect.
* Sequel.use_parse_tree and Sequel.use_parse_tree= are deprecated.
* All upcase_identifier methods and the :upcase_identifiers database
  option are deprecated, use identifier_input_method = :upcase
  instead.
* Using a virtual row block without an argument is deprecated, see
  Sequel.virtual_row_instance_eval= under New Features.
* When using the JDBC adapter, Java::JavaSQL::Timestamp#usec is
  deprecated.  Sequel has returned Java::JavaSQL::Timestamp as
  DateTime or Time for a few versions, so this shouldn't affect most
  people.
* Sequel will no longer require bigdecimal/util, enumerator, or yaml
  in 3.0.  If you need them in your code, make sure you require them
  yourself.  Using features added by requiring these standard
  libraries will not bring up a deprecation warning, for obvious
  reasons.
* Sequel::Error::InvalidTransform, Sequel::Error::NoExistingFilter,
  and Sequel::Error::InvalidStatement exceptions will be removed in
  Sequel 3.0.  You will not get a deprecation message if you
  reference them in 2.12.
* Sequel::Model::Validation::Errors is deprecated, use
  Sequel::Model::Errors instead.  Referencing the old name will not
  bring up a deprecation message.

New Features
------------

* Sequel.virtual_row_instance_eval= was added, which lets you give
  Sequel 2.12 the behavior that will be the standard in 3.0.  It
  changes blocks passed to Dataset#filter, #select, or #order that
  don't accept arguments (or accept any number of arguments) to
  instance eval the block in the context of a new VirtualRow instance
  instead of passing a new VirtualRow instance to the block.  It
  allows you to change code that looks like this:

    dataset.filter{|o| (o.number > 10) & (o.name > 'M')}

  to:

    dataset.filter{(number > 10) & (name > 'M')}

  When instance_eval is used, only local variables are available to
  the block.  Any calls to instance methods will be interpreted as
  calling VirtualRow#method_missing, which generates identifiers or
  functions.  When virtual_row_instance_eval is enabled, the
  following type of code will break:

    # amount is an instance method
    dataset.filter{:number + amount > 0}

  Just like this example, the only type of code that should break is
  when a virtual row block was used when it wasn't necessary (since
  it doesn't use the VirtualRow argument).

  When Sequel.virtual_row_instance_eval = false, using a virtual row
  block that doesn't accept an argument will cause a deprecation
  message.
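  As a short, hedged sketch, opting into the 3.0 behavior ahead of
  time looks like this (the table and columns are hypothetical):

    Sequel.virtual_row_instance_eval = true
    DB[:items].filter{(number > 10) & (name > 'M')}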
  Here's a regular expression that should catch most places where you
  are using a virtual row block without an argument:

    egrep -nr '[^A-Za-z0-9_](filter|select|select_more|order|order_more|get|where|having|from|first|and|or|exclude|find|subset|constraint|check)( *(\([^)]*\) *)?){ *[^|]' *

  An RDoc page explaining virtual row blocks was added as well.
* A validation_helpers model plugin was added that allows you to do
  validations similar to the old class level validations inside the
  Model#validate instance method.  The API has changed, but it's
  capable of most of the same validations.  It doesn't handle
  acceptance_of or confirmation_of validations, as those shouldn't be
  handled in the model.

    # Old class level validations
    validates_format_of :col, :with=>/.../
    validates_length_of :col, :maximum=>5
    validates_length_of :col, :minimum=>3
    validates_length_of :col, :is=>4
    validates_length_of :col, :within=>3..5
    validates_not_string :col
    validates_numericality_of :col
    validates_numericality_of :col, :only_integer=>true
    validates_presence_of :col
    validates_inclusion_of :col, :in=>[3, 4, 5]
    validates_uniqueness_of :col, :col2
    validates_uniqueness_of([:col, :col2])

    # New instance level validations
    def validate
      validates_format /.../, :col
      validates_max_length 5, :col
      validates_min_length 3, :col
      validates_exact_length 4, :col
      validates_length_range 3..5, :col
      validates_not_string :col
      validates_numeric :col
      validates_integer :col
      validates_presence :col
      validates_includes([3,4,5], :col)
      validates_unique :col, :col2
      validates_unique([:col, :col2])
    end

  Another change: to specify the same type of validation on multiple
  attributes, you must now use an array:

    # Old
    validates_length_of :name, :password, :within=>3..5

    # New
    def validate
      validates_length_range 3..5, [:name, :password]
    end

  The :message, :allow_blank, :allow_missing, and :allow_nil options
  are still respected.  The :tag option is not needed as instance
  level validations work with code reloading without workarounds.
  The :if option is also not needed for instance level validations:

    # Old
    validates_presence_of :name, :if=>:new?
    validates_presence_of :pass, :if=>proc{flag > 3}

    # New
    def validate
      validates_presence(:name) if new?
      validates_presence(:pass) if flag > 3
    end

  validates_each also doesn't have an equivalent instance method,
  since it is much easier to just write your own validation:

    # Old
    validates_each(:date) do |o,a,v|
      o.errors.add(a, '...') unless v > Date.today
    end

    # New
    def validate
      errors.add(:date, '...') unless date > Date.today
    end

* MySQL adapter datasets now have on_duplicate_key_update and
  insert_ignore methods which modify the SQL used to support ON
  DUPLICATE KEY UPDATE and INSERT IGNORE syntax in multi_insert and
  import.
* If you use the MySQL native adapter, you can set:

    Sequel::MySQL.convert_invalid_date_time = nil

  to return dates like "0000-00-00" and times like "25:00:00" as nil
  values instead of raising an error.  You can also set it to :string
  to return the values as strings.
* You can now use Sequel without modifying any core classes, by
  defining a SEQUEL_NO_CORE_EXTENSIONS constant or environment
  variable (see the sketch after this list).  In 2.12, this may still
  add some deprecated methods to the core classes, but in 3.0 no
  methods will be added to the core classes if you use this.
* You can now use Sequel::Model without the associations
  implementation by defining a SEQUEL_NO_ASSOCIATIONS constant or
  environment variable.
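A minimal sketch of the constant-based opt-out mentioned above;
defining the constant before Sequel is loaded is the part that
matters (the environment-variable form works the same way):

  # Must be defined before Sequel is required
  SEQUEL_NO_CORE_EXTENSIONS = true
  require 'sequel'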
Other Improvements
------------------

* Model column accessors have been made faster and the overhead of
  creating them has been reduced significantly.
* ~{:bool_col=>true} now generates a bool_col IS NOT TRUE filter
  instead of bool_col != TRUE.  This makes it return records with
  NULL values.  If you only want to have false records, you should
  use {:bool_col=>false}.  This works better with SQL's 3 valued
  boolean logic.

  It is slightly inconsistent with ~{:col=>1}, since that won't
  return values where col is NULL, but it gives the user the ability
  to create an IS [NOT] (TRUE|FALSE) filter, which Sequel previously
  did not support.

  If you really want the old behavior, you can change it to
  ~{true=>:bool_col}.
* Model.use_transactions was added for setting whether model objects
  should use transactions when destroying or saving records.  Like
  most Sequel options, it's settable on a global, per model, and per
  object basis:

    Sequel::Model.use_transactions = false
    MyModel.use_transactions = true
    my_model.use_transactions = false

  You can also turn it on or off for specific save calls:

    my_model.save(:transaction=>true)

* The Oracle adapter now supports schema parsing.
* When using Model.db=, all current dataset options are copied to a
  new dataset created with the new db.
* Model::Errors#count was refactored to improve performance.
* Most exception classes that were located under Sequel::Error are
  now located directly under Sequel.  The old names are not
  deprecated (unless mentioned above), but their use is now
  discouraged.  The exceptions have the same name except that
  Sequel::Error::PoolTimeoutError changed to Sequel::PoolTimeout.
* Dataset#where now always affects the WHERE clause.  Before, it was
  just an alias of filter, so it modified the HAVING clause if the
  dataset already had a HAVING clause.
* The optimization of Model.[] introduced in 2.11.0 broke on
  databases that didn't support LIMIT.  The optimization now works on
  those databases.
* All of the RDoc documentation was reviewed and many updates were
  made, resulting in significantly improved documentation quality.
* Model.def_dataset_method now works when the model doesn't have an
  associated dataset, as it will add the method to a dataset given to
  set_dataset in the future.
* Database#get and #select now take a block that is passed to the
  dataset they create.
* You can disable the use of INSERT RETURNING in the shared
  PostgreSQL adapter using disable_insert_returning.  This is mostly
  useful if you are inserting a large number of records.
* A bug relating to aliasing columns in the JDBC adapter has been
  fixed.
* Sequel can now create and drop schema-qualified views.
* Performance of Dataset#destroy for model datasets was improved.
* The specs now run on Rspec 1.2.
* Internal dependence on the methods that Sequel adds to core classes
  has been eliminated; any internal use of methods that Sequel adds
  to the core classes is now considered a bug.
* A possible bug where Database#rename_table would not remove a
  cached schema entry has been fixed.
* The Oracle and MySQL adapters now raise an error as soon as you
  call distinct on a dataset, instead of waiting until the SQL is
  generated.

Backwards Compatibility
-----------------------

* Saving a newly inserted record in an after_create or after_save
  hook is no longer broken.  It broke in 2.10 as a side effect of
  allowing the hook to check whether or not the record was a new
  record.  The code has been changed so that a @was_new instance
  variable will be defined to true if the record was just created.
  Similarly, instead of not modifying changed_columns until after the
  after hooks run, a @columns_updated instance variable will be
  available in the after hooks that is a hash of exactly what
  attribute keys and values were used in the update.

  These changes break compatibility with 2.11.0 and 2.10.0, but
  restore compatibility with 2.9.0 and previous versions.
* PostgreSQL no longer uses savepoints for nested transactions by
  default.  If you want to use a savepoint, you have to pass the
  :savepoint option to the transaction method.  Using savepoints by
  default broke expectations when a method raised Rollback to roll
  back the transaction, and it only rolled back to the last
  savepoint.
* The anonymous model classes created by Sequel::Model() are now
  stored in Model::ANONYMOUS_MODEL_CLASSES instead of the @models
  class instance variable of the main module.
* The mappings of adapter schemes to classes are now stored in
  Sequel::ADAPTER_MAP instead of the Database @@adapters class
  variable.
* Model instances no longer contain a reference to their class's
  @db_schema.
* Database schema sql methods (e.g. alter_table_sql) are now private.
* Database#[] no longer accepts a block.  It's not possible to call
  it with a block in general usage, anyway.
* The Sequel::Schema::SQL module no longer exists; the methods it
  included were placed directly in the Sequel::Database class.
* The Sequel::SQL::SpecificExpression class has been removed;
  subclasses now inherit from Sequel::SQL::Expression.
* Sequel now requires its own files with an absolute path.
* The file hierarchy of the sequel library changed significantly.

ruby-sequel-4.1.1/doc/release_notes/2.2.0.txt

The Most Powerful and Flexible Associations of Any Ruby ORM
-----------------------------------------------------------

Sequel can now support any association type supported by
ActiveRecord, and many association types ActiveRecord doesn't
support.

Association callbacks (:before_add, :after_add, :before_remove,
:after_remove) have been added, and work for all association types.
Each of the callback options can be a Symbol specifying an instance
method that takes one argument (the associated object), or a Proc
that takes two arguments (the current object and the associated
object), or an array of Symbols and Procs.  Additionally, an
:after_load callback is available, which is run after loading the
associated record(s) from the database.

Association extensions are now supported:

  class FindOrCreate
    def find_or_create(vals)
      first(vals) || create(vals)
    end
  end
  class Author < Sequel::Model
    one_to_many :authorships, :extend=>FindOrCreate
  end
  Author.first.authorships_dataset.find_or_create(:name=>'Bob')

Sequel has been able to support most has_many :through style
associations since 1.3, via many_to_many (since it doesn't break on
join tables that are also model tables, unlike ActiveRecord's
has_and_belongs_to_many).  Now it can also support has_many :through
style associations where it goes through a has_many association.

Sequel can now support polymorphic associations.  Polymorphic
associations are really a design flaw, so Sequel doesn't support them
directly, but the tools that Sequel gives you make them pretty easy
to implement.

Sequel can also support associations that ActiveRecord does not.
For example, a belongs_to association where the column referenced in
the associated table is not the primary key, an association that
depends on multiple columns in each table, or even situations where
the association has a column in the primary table that can be
referenced by any of multiple columns in a second table that has a
has_one style association with the table you want to associate with.

Some of those associations can be supported for a single object using
custom SQL in ActiveRecord, but none are supported when eager loading
or allow further filtering.

Not only can all of these cases be supported with Sequel::Model, all
can be supported with eager loading, and can allow for further
filtering.  See
http://sequel.rubyforge.org/files/sequel/doc/advanced_associations_rdoc.html
for details and example code for all association types covered above.

There have also been many additional options added for controlling
eager loading via eager_graph.  Every part of the SQL JOINs can now
be controlled via one of the options, so you can use JOIN USING,
NATURAL JOIN, or arbitrary JOIN ON conditions.

Finally, just to show off the power that Sequel gives you when eager
loading, here is example code that will eagerly load all descendants
and ancestors in a tree structure, without knowing the depth of the
tree:

  class Node < Sequel::Model
    set_schema do
      primary_key :id
      foreign_key :parent_id, :nodes
    end
    create_table

    many_to_one :parent
    one_to_many :children, :key=>:parent_id

    # Only useful when eager loading
    many_to_one :ancestors, :eager_loader=>(proc do |key_hash, nodes, associations|
      # Handle cases where the root node has the same parent_id as primary_key
      # and also when it is NULL
      non_root_nodes = nodes.reject do |n|
        if [nil, n.pk].include?(n.parent_id)
          # Make sure root nodes have their parent association set to nil
          n.associations[:parent] = nil
          true
        else
          false
        end
      end
      unless non_root_nodes.empty?
        id_map = {}
        # Create a map of parent_ids to nodes that have that parent id
        non_root_nodes.each{|n| (id_map[n.parent_id] ||= []) << n}
        # Doesn't cause an infinite loop, because when only the root node
        # is left, this is not called.
        Node.filter(Node.primary_key=>id_map.keys).eager(:ancestors).all do |node|
          # Populate the parent association for each node
          id_map[node.pk].each{|n| n.associations[:parent] = node}
        end
      end
    end)
    many_to_one :descendants, :eager_loader=>(proc do |key_hash, nodes, associations|
      id_map = {}
      nodes.each do |n|
        # Initialize an empty array of child associations for each parent node
        n.associations[:children] = []
        # Populate identity map of nodes
        id_map[n.pk] = n
      end
      # Doesn't cause an infinite loop, because the :eager_loader is not called
      # if no records are returned.  Exclude id = parent_id to avoid infinite
      # loop if the root node is one of the returned records and it has
      # parent_id = id instead of parent_id = NULL.
      Node.filter(:parent_id=>id_map.keys).exclude(:id=>:parent_id).eager(:descendants).all do |node|
        # Get the parent from the identity map
        parent = id_map[node.parent_id]
        # Set the child's parent association to the parent
        node.associations[:parent] = parent
        # Add the child association to the array of children in the parent
        parent.associations[:children] << node
      end
    end)
  end

  nodes = Node.filter(:id < 10).eager(:ancestors, :descendants).all

New Adapter Features
--------------------

* PostgreSQL bytea fields are now fully supported.
* The PostgreSQL adapter now uses the safer connection-specific
  string escaping if you are using ruby-pg.
* The SQLite adapter supports drop_column and add_index.
* You can now use URL parameters in the connection string, enabling
  you to connect to PostgreSQL via a socket using
  postgres://user:password@blah/database?host=/tmp

Other New Features
------------------

* Dataset#graph now takes a block which it passes to join_table.
* Symbol#identifier has been added, which can be used if another
  library defines the same operator(s) on Symbol that Sequel defines.
* Filter blocks now yield a VirtualRow instance, which can yield
  Identifiers, QualifiedIdentifiers, or Functions.  Like
  Symbol#identifier, this is useful if another library defines the
  same operator(s) on Symbol that Sequel defines.
* You can now call Model.to_hash to get an identity map for all rows
  (before this required Model.dataset.to_hash).
* A model that can get its column information from the schema will
  set it in the dataset, potentially saving many queries.
* Model.validates_presence_of now works correctly for boolean
  columns.

Notable Bug Fixes
-----------------

* Caching now works with Model subclasses.
* Model validation methods now work with source reloading.
* The PostgreSQL adapter no longer raises an Error if you try to
  insert a record with the primary key already specified.
* Sequel no longer messes with the native MySQL adapter, so you can
  use Sequel and ActiveRecord with MySQL in the same process.
* Dataset#count now works correctly for limited datasets.
* PostgreSQL Database#transaction method yields a connection, similar
  to the other adapters.
* Using a hash argument in #distinct, #order, or #group is treated as
  an expression instead of a column alias.
* Cloned datasets no longer ignore the existing columns unless it is
  necessary.
* The :quote_identifiers and :single_threaded Database options now
  work correctly.

Backwards Incompatible Changes
------------------------------

* ParseTree support, deprecated in 2.1.0, has been removed in 2.2.0.
  You should use the expression filter syntax instead, preferably
  without the block (though it can be used inside a block as well).
  This usually involves the following types of changes:

    filter{:x == :y}           => filter(:x => :y)
    filter{:x << :y}           => filter(:x => :y)
    filter{:x && :y}           => filter(:x & :y)  # Don't forget about change
    filter{:x || :y}           => filter(:x | :y)  # in operator precedence
    filter{:x.like?('%blah%')} => filter(:x.like('%blah%'))
    filter do                  => filter((:x > 1) & (:y < 2))
      :x > 1
      :y < 2
    end

* Attempts to save an invalid Model instance will raise an error by
  default.  To revert to returning a nil value, use:

    Sequel::Model.raise_on_save_failure = false # Global
    Album.raise_on_save_failure = false         # Class
    album = Album.new
    album.raise_on_save_failure = false         # Instance

  Note that before, save would return false where now it returns nil
  if you disable raising on save failure.
* Dataset#update no longer takes a block, as its use of the block
  depended on ParseTree.  With the introduction of the expression
  syntax in 2.0.0, it's no longer necessary.  You should use a hash
  with an expression as the value instead:

    DB[:table].update(:column=>:column + 1)

* validates_presence_of now considers false as present instead of
  absent.  This is so it works with boolean columns.
* Dataset#graph ignores any previously selected columns when it is
  called for the first time.
* Dataset#columns ignores any filtering, ordering, or distinct
  clauses.  This shouldn't cause issues unless you were using SQL
  functions with side effects and expecting them to be called when
  columns was called (unlikely at best).
One significant point of note is that the 2.2.0 release will be the
last release with both a sequel_core and sequel gem.  Starting with
2.3.0 they will be combined into one sequel gem.  You will still be
able to get just the sequel_core part by requiring 'sequel_core', but
they will be packaged together.

ruby-sequel-4.1.1/doc/release_notes/2.3.0.txt

JRuby and Ruby 1.9 Officially Supported
---------------------------------------

Sequel now officially supports JRuby 1.1.3 and Ruby 1.9 (svn revision
18194 at least).  When using JRuby with the JDBC adapter, PostgreSQL,
MySQL, and SQLite now enjoy almost full support, though not
everything works the same as using the native adapter.  Depending on
what you are doing, it may make sense to use postgres-pr on JRuby
instead of PostgreSQL-JDBC.

To use the new JDBC support, the database connection string you give
Sequel is now passed directly to JDBC; here are a few examples:

  Sequel.connect('jdbc:postgresql://host/database?user=*&password=*')
  Sequel.connect('jdbc:mysql://host/database?user=*&password=*')
  Sequel.connect('jdbc:sqlite::memory:')
  Sequel.connect('jdbc:sqlite:relative/path.db')
  Sequel.connect('jdbc:sqlite:/absolute/path.db')

Single Gem
----------

Sequel is now distributed as a single gem named sequel, by combining
the previous sequel_core and sequel gems.  You can still just
"require 'sequel_core'" if you don't want the model functionality.

Database Adapter Improvements
-----------------------------

* Dataset#empty? now works using the MySQL adapter.
* The Oracle adapter now works with a nonstandard database port.
* The JDBC adapter should load JDBC drivers automatically for
  PostgreSQL, MySQL, SQLite, Oracle, and MSSQL.  For PostgreSQL,
  MySQL, and SQLite, the jdbc-* gem can be used; for the others, you
  must have the correct .jar in your CLASSPATH.
* The PostgreSQL adapter no longer raises an error when inserting
  records into a table without a primary key.
* Database#disconnect now works for the ADO adapter.
* The ADO adapter no longer raises an error if the dataset contains
  no records.
* The ODBC adapter no longer errors when converting ::ODBC::Time
  values.

Backwards Incompatible Changes
------------------------------

* Sequel::Worker has been removed.  There are no known users, and the
  specs caused problems on JRuby.
* Assigning an empty string to a non-string, non-blob model attribute
  converts it to nil by default.  You can use
  "Model.typecast_empty_string_to_nil = false" to get the old
  behavior.  This should make web development with Sequel
  significantly easier, hopefully at no expense to other uses.
* Database.uri_to_options is now a private class method.
* Model.create_table! now acts the same as Database.create_table!,
  dropping the table unconditionally and then creating it.  This was
  done for consistency.  If you are using Model.create_table! in
  production code, you should change it to "Model.create_table unless
  Model.table_exists?", otherwise you risk wiping out your production
  data.  I recommend you use the migration feature instead of
  Model.set_schema, as that handles altering existing tables.

Other Notable Changes
---------------------

* Using validates_length_of more than once on the same attribute with
  different options without a tag no longer causes the first use to
  be ignored.  This was a side effect of the validation tags added in
  2.2.0.
* Other than the adapters, Sequel now has 100% code coverage (line
  coverage).
* Model#set* methods now return self.
* An integration test suite was added, testing Sequel against a live database with nothing mocked, which helped greatly when testing the new support for JDBC adapters. ruby-sequel-4.1.1/doc/release_notes/2.4.0.txt000066400000000000000000000072431220156535500206030ustar00rootroot00000000000000Prepared Statements/Bound Variables =================================== Sequel now supports prepared statements and bound variables. No matter which database you are using, Sequel uses exactly the same API. To specify placeholders, you use the :$placeholder syntax: ds = DB[:items].filter(:name=>:$n) To use a bound variable: ds.call(:select, :n=>'Jim') This will do the equivalent of selecting records that have the name 'Jim'. In addition to :select, you can use :first or :delete. There is also support for bound variables when inserting or updating records: ds.call(:update, {:n=>'Jim', :new_n=>'Bob'}, :name=>:$new_n) Which will update all records that have the name 'Jim' to have the name 'Bob'. Prepared statement support is very similar to bound variable support, except that the statement is first prepared with a name: ps = ds.prepare(:select, :select_by_name) It is then called later with the bound arguments to use: ps.call(:n=>'Jim') DB.call(:select_by_name, :n=>'Jim') # same as above For inserting or updating, the hash to use when inserting or updating is given to prepare: ps2 = ds.prepare(:update, :update_name, :name=>:$new_n) ps2.call(:n=>'Jim', :new_n=>'Bob') There is some level of native support for these features in the PostgreSQL, MySQL, SQLite, and JDBC adapters. For other adapters, support is emulated, but it shouldn't be too difficult to add native support for them. For more details see: http://sequel.rubyforge.org/rdoc/files/doc/prepared_statements_rdoc.html Read-Only Slave/Writable Master and Database Sharding ===================================================== Sequel now has built in support for master/slave database configurations, just by setting an option in Sequel.connect: DB=Sequel.connect('postgres://master_server/database', \ :servers=>{:read_only=>{:host=>'slave_server'}}) That will use slave_server for SELECT queries and master_server for other queries. It's fairly easy to use multiple slaves or even multiple masters, examples are included in the link below. Sharding support requires some code other than the database configuration, but is still fairly simple. For example, to set up a 16 shard configuration based on a hex character: servers = {} (('0'..'9').to_a + ('a'..'f').to_a).each do |hex| servers[hex.to_sym] = {:host=>"hash_host_#{hex}"} end DB=Sequel.connect('postgres://hash_host/hashes', :servers=>servers) To set which shard to use for a query, use the Dataset#server method: DB[:hashes].server(:a).filter(:hash=>/31337/) For more details see: http://sequel.rubyforge.org/rdoc/files/doc/sharding_rdoc.html Other Changes ============= * The sequel.rubyforge.org website has a new design thanks to boof. The online RDoc is now located at http://sequel.rubyforge.org/rdoc. * Support was added for anonymous column names in the ADO adapter. * Better MSSQL support in the ADO, ODBC, and JDBC adapters. The odbc_mssql adapter has been removed. If you use MSSQL with ODBC, please use the odbc adapter with a :db_type=>'mssql' option. * The following Sequel::Error exception subclasses were removed: InvalidExpression, InvalidFilter, InvalidJoinType, and WorkerStop. * Documentation was added for the PostgreSQL, MySQL, SQLite, and JDBC adapters. * Various internal interfaces were refactored. 
For example, if you use an adapter not included with Sequel, it probably won't work until you update it to the new internal API. * Many low level methods (such as Database#transaction), now take an optional server argument to indicate which server to use. * Model plugins that have a DatasetMethods module with non-public methods no longer have Model methods created that call those methods. ruby-sequel-4.1.1/doc/release_notes/2.5.0.txt000066400000000000000000000110021220156535500205700ustar00rootroot00000000000000New Features ------------ * The values that are used to insert/update records can now be scoped similar to how filter expressions can be scoped. set_defaults is used to set defaults which can be overridden, and set_overrides is used to set defaults which cannot be overridden: DB[:t].set_defaults(:x=>1).insert_sql # => INSERT INTO t (x) VALUES (1) DB[:t].set_defaults(:x=>1).insert_sql(:x=>2) # => INSERT INTO t (x) VALUES (2) DB[:t].set_defaults(:x=>1).insert_sql(:y=>2) # => INSERT INTO t (x, y) VALUES (1, 2) DB[:t].set_overrides(:x=>1).insert_sql(:x=>2) # => INSERT INTO t (x) VALUES (1) The difference between set_defaults and set_overrides is that with set_defaults, the last value takes precedence, while with set_overrides, the first value takes precedence. * The schema generators now support creating and altering tables with composite primary and/or foreign keys: DB.create_table(:items) do integer :id text :name primary_key [:id, :name] foreign_key [:id, :name], :other_table, \ :key=>[:item_id, :item_name] end DB.alter_table(:items) do add_primary_key [:id, :name] add_foreign_key [:id, :name], :other_table, \ :key=>[:item_id, :item_name] end * The AlterTableGenerator now supports unique constraints: DB.alter_table(:items) do add_unique_constraint [:aaa, :bbb, :ccc], :name => :con3 end * The schema generators now support ON UPDATE (previously, they only supported ON DELETE): DB.create_table(:items) do foreign_key :project_id, :projects, :on_update => :cascade end * When connecting to a PostgreSQL server version 8.2 and higher, Sequel now uses the INSERT ... RETURNING ... syntax, which should speed up row inserts on PostgreSQL. In addition, Sequel Models use RETURNING * to speed up model object creation. * You can now validate multiple attributes at once. This is useful if the combination of two or more attribute values is important, such as checking the uniqueness of multiple columns. validates_uniqueness_of now supports this directly: validates_uniqueness_of [:column1, :column2] This protects against the database having multiple rows with the same values for both :column1 and :column2. This is different from: validates_uniqueness_of :column1, :column2 Which checks that the value of column1 is unique in the table, and that the value of column2 is unique in the table (which is much more restrictive). Other Improvements ------------------ * Dataset methods insert_sql, delete_sql, and update_sql respect the :sql option, allowing you to do things such as: ds = DB['INSERT INTO t (time) VALUES (CURRENT_TIMESTAMP)'] ds.insert ds.insert * The database adapters (at least MySQL, PostgreSQL, SQLite, and JDBC) generally raise Sequel::DatabaseError for database problems, making it easier to tell what is a true database error versus an error raised by Sequel itself. * Sequel uses the async features of ruby-pg so that the entire interpreter is not blocked while waiting for the results of queries. * Sequel now supports the 2008.08.17 version of ruby-pg. 
* MSSQL support has been improved when using the ODBC and ADO adapters. * Index names are quoted and creating or dropping indexes. * Automatically generated column accessor methods no longer override instance methods specified by plugins. * Inserting a row with an already specified primary key inside a transaction now works correctly when using PostgreSQL. * before_save and before_update hooks now work as expected when using save_changes. * count and paginate now work correctly on graphed datasets. Backwards Compatibility ----------------------- * The SQLite adapter now raises Sequel::DatabaseError instead of Sequel::Error::InvalidStatement whenever an SQLite3::Exception is raised by the SQLite3 driver. * Date and DateTime conversions now convert 2 digit years. To revert to the previous behavior: Sequel.convert_two_digit_years = false Note that Ruby 1.8 and 1.9 handle Date parsing differently, so there is no backwards compatibility change for Ruby 1.9. However, this also means that the MM/DD/YY date syntax commonly used in the United States is not always parsed correctly on Ruby 1.9, greatly limiting the use of 2 digit year conversion. * You can no longer abuse the SQL function syntax for specifying database types. For example, you must change: :type=>:varchar[255] to: :type=>:varchar, :size=>255 ruby-sequel-4.1.1/doc/release_notes/2.6.0.txt000066400000000000000000000140541220156535500206030ustar00rootroot00000000000000New Features ------------ * Schema parsing was refactored, resulting in a huge speedup when using MySQL. MySQL now uses the DESCRIBE statement instead of the INFORMATION_SCHEMA. PostgreSQL now uses the pg_* system catalogs instead of the INFORMATION schema. * The schema information now includes the :primary_key field. Models now use this field to automatically determine the primary key for a table, so it no longer needs to be specified explicitly. Models even handle the composite primary key case. * The raise_on_typecast_failure switch was added, with it being true by default (so no change in behavior). This allows the user to silently ignore errors when typecasting fails, at the global, class, and instance levels. Sequel::Model.raise_on_typecast_failure = false # Global Artist.raise_on_typecast_failure = true # Class artist = Artist.new artist.raise_on_typecast_failure = false # Instance Album.raise_on_typecast_failure = true Album.new(:numtracks=>'a') # => raises Sequel::Error::InvalidValue Album.raise_on_typecast_failure = false Album.new(:numtracks=>'a') # => #"a"}> * Associations' orders are now respected when eager loading via eager_graph. Sequel will qualify the columns in the order with the alias being used, so you can have overlapping columns when eager loading multiple associations. Artist.one_to_many :albums, :order=>:name Album.one_to_many :tracks, :order=>:number Artist.order(:artists__name).eager_graph(:albums=>:tracks).sql # => ... 
ORDER BY artists.name, albums.name, tracks.number * The support for CASE expressions has been enhanced by allowing the use of an optional expression: {1=>2}.case(0, :x) # => CASE x WHEN 1 THEN 2 ELSE 0 END [[:a, 1], [:b, 2], [:c, 3]].case(4, :y) # => CASE y WHEN a THEN 1 WHEN b THEN 2 WHEN c THEN 3 ELSE 4 END Previously, to get something equivalent to this, you had to do: {{:x=>1}=>2}.case(0) # => CASE WHEN (x = 1) THEN 2 ELSE 0 END [[{:y=>:a}, 1], [{:y=>:b}, 2], [{:y=>:c}, 3]].case(4) # => CASE WHEN (y = a) THEN 1 WHEN (y = b) THEN 2 WHEN (y = c) THEN 3 ELSE 4 END * You can now change the NULL/NOT NULL value of an existing column using the set_column_allow_null method. # Set NOT NULL DB.alter_table(:artists){set_column_allow_null :name, false} # Set NULL DB.alter_table(:artists){set_column_allow_null :name, true} * You can now get the schema information for a table in a non-public schema in PostgreSQL using the implicit :schema__table syntax. Before, the :schema option had to be given explicitly to Database#schema. This allows models to get schema information for tables outside the public schema. * Transactions are now supported on MSSQL. * Dataset#tables now returns all tables in the database for MySQL databases accessed via JDBC. * Database#drop_view can now drop multiple views at once. Other Improvements ------------------ * The SQLite adapter now respects the Sequel.datetime_class option for timestamp and datetime columns. * Adding a unique constraint no longer explicity creates a unique index. If you want a unique index, use index :unique=>true. * If no language is specified when creating a full text index on PostgreSQL, the simple language is assumed. * Errors when typecasting fails are now Sequel::Error::InvalidValue instead of the more generic Sequel::Error. * Specifying constraints now works correctly for all types of arguments. Previously, it did not work unless a block or interpolated string were used. * Loading an association with the same name as a table in the FROM clause no longer causes an error. * When eagerly loading many_to_one associations where no objects have an associated object, the negative lookup is now cached. * String keys can now be used with Dataset#multi_insert, just like they can be used for Dataset#insert. * Dataset#join_table now generates the correct SQL when doing the first join to a dataset where the first source is a dataset, when an unqualified column is used in the conditions. * Cascading associations after *_to_many associations can now be eagerly loaded via eager_graph. * Eagerly loading *_to_many associations that are cascaded behind a many_to_one association now have their duplicates removed if a cartesian product join is done. * The SQLite adapter now uses string literals in all of the AS clauses. While the SQL standard specifies that identifiers should be used, SQLite documentation explicitly states that string literals are expected (though it generally works with identifiers by converting them implicitly). * Database methods that modify the schema now remove the cached schema entry. * The hash keys that Database#schema returns when no table is requested are now always supposed to be symbols. * The generation of SQL for composite foreign keys on MySQL has been fixed. * A schema.rdoc file was added to the documentation explaining the various parts of Sequel related to schema generation and modification and how they interact (http://sequel.rubyforge.org/rdoc/files/doc/schema_rdoc.html). 
* The RDoc template for the website was changed from the default template to the hanna template. Backwards Compatibility ----------------------- * The :numeric_precision and :max_chars schema entries have been removed. Use the :db_type entry to determine this information, if available. * The SQLite adapter used to always return Time instances for timestamp types, even if Sequel.datetime_class was DateTime. For datetime types it always returned a DateTime instance. It now returns an instance of Sequel.datetime_class in both cases. * It's possible that the including of associations' orders when eager loading via eager_graph could cause problems. You can use the :order_eager_graph=>false option to not use the :order option when eager loading via :eager_graph. * There were small changes in SQL creation where the AS keyword is now used explicitly. These should have no effect, but could break tests for explicit SQL. ruby-sequel-4.1.1/doc/release_notes/2.7.0.txt000066400000000000000000000145431220156535500206070ustar00rootroot00000000000000Performance Optimizations ------------------------- * Fetching a large number of records with the PostgreSQL adapter is significantly faster (up to 3-4 times faster than before). * Instantiating model objects has been made much faster, as many options (such as raise_on_save_failure) are now lazily loaded, and hook methods are now much faster if no hooks have been defined for that type of hook. New Association Options ----------------------- * The :eager_grapher option has been added allowing you to supply your own block to implement eager loading via eager_graph. * many_to_one and one_to_many associations now have a :primary_key option, specifying the name of the column that the :key option references. * many_to_many associations now have :left_primary_key and :right_primary_key options, specifying the columns that :left_key and :right_key reference, respectively. * many_to_many associations now have a :uniq option, that adds an :after_load callback that makes the returned array of objects unique. Other New Features ------------------ * Dataset#set_graph_aliases now allows you to supply a third argument for each column you want graph into the dataset, allowing you to use arbitrary SQL expressions that are graphed into the correct table: ds.set_graph_aliases!(:a=>[:b, :c], :d=>[:e, :f, 42]) # SELECT b.c AS a, 42 AS d FROM ... ds.first # => {:b=>{:c=>?}, :e=>{:f=>42}} * Dataset#add_graph_aliases was added, that adds additional graph aliases instead of replacing the existing ones (as #set_graph_aliases does). It's basically the equivalent of select_more for graphs. * Dataset#join_table changed it's final argument from a symbol specifying a table name to an option hash (with backwards compatibility kept), and adds support for a :implicit_qualifier option, which it uses instead of the last joined table to qualify columns. * Association's :after_load callbacks are now called when eager loading via eager (but not when eager loading via eager_graph). * Any expression can now be used as the argument to Symbol#like, which means that you can pattern match columns to other columns. Before, it always transformed the argument to a string. :a.like(:b) # 2.6.0: a LIKE 'b' # 2.7.0: a LIKE b * Array#sql_array was added, allowing you to specify that an array in ruby be treated like an array in SQL. 
This is true anyway, except for arrays of all two pairs, which are
treated like hashes, for specifying multiple conditions with the
same key:

    DB[:foo].filter([:a,:b] => [[1,2],[3,4]].sql_array)
    # => SELECT * FROM foo WHERE ((a, b) IN ((1, 2), (3, 4)))

* ComplexExpression#== and #sql? were added, allowing for easier
  testing.

* Full text searching on PostgreSQL now joins multiple columns with
  a space, to prevent joining border words, and it works when there
  is a match in one column but the other column is NULL.

Other Improvements
------------------

* Instance methods added by creating associations are added to an
  anonymous module included by the class, so they can be overridden
  in the class while still allowing the use of super to get the
  default behavior (this is similar to column accessor methods).

* Many improvements were added to support using multiple schemas in
  PostgreSQL.

* Model::Validation::Errors objects are now more compatible with
  Rails, by adding a #count method and making #on return nil if
  there are no error messages for that attribute.

* Serialized columns in models are no longer typecast.

* Associations are now inherited when a model class is subclassed.

* Many improvements were made that should make adding custom
  association types easier.

* A corner case in eager_graph where the wrong table name would be
  used to qualify a column name has been fixed.

* Dataset's cached column information is no longer modified if
  #each is called with an option that modifies the columns.

* You should now be able to connect to Oracle via the JDBC adapter,
  and with the same support it has when using the oracle adapter.

* Model.association_reflections is now a public method, so you can
  grab a hash of all association reflections at once (keyed by
  association name symbol).

* The :encoding/:charset option now works in the PostgreSQL adapter
  if the postgres-pr driver is used.

* The numeric(x,y) type is now interpreted as decimal.

Backwards Compatibility
-----------------------

* The first argument to Model#initialize must be a hash, you can no
  longer use nil. For example, the following code will break if
  :album is not in params:

    Album.new(params[:album])

  Additionally, Model#initialize does not call the block if the
  second argument is true.

* The Sequel::Model.lazy_load_schema setting was removed. It should
  no longer be necessary now that schema loading is relatively
  speedy, and schemas can be loaded at startup and cached.

* The PostgreSQL adapter will default to using a unix socket in
  /tmp if no host is specified. Before, a TCP/IP socket to
  localhost was used if no host was specified. This change makes
  Sequel operate similarly to the PostgreSQL command line tools.

* The ASSOCIATION_TYPES constant has changed from an array to a
  hash and it has been moved. The RECIPROCAL_ASSOCIATIONS constant
  has been removed. This is unlikely to matter unless you were
  using custom association types.

* The PostgreSQL adapter now sets the PostgreSQL DateStyle, in
  order to implement an optimization. To turn this off, set
  Sequel::Postgres.use_iso_date_format = false.

* When using the PostgreSQL adapter, in many places the schema is
  specified explicitly. If you do not specify a schema, a default
  one is used (public by default). If you use a schema other than
  public for your work, use Database#default_schema= to set it. For
  any table outside of the default schema, you should specify the
  schema explicitly, even if it is in the PostgreSQL search_path.
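A short sketch of the schema-related settings above (the app and
reports schema names are hypothetical):

  DB.default_schema = :app
  # Tables outside the default schema should be qualified
  # explicitly, e.g. with the implicit :schema__table syntax:
  DB[:reports__summaries]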
* Model::Validation::Errors#on now returns nil instead of [] if
  there are no errors for an attribute.

* Hooks added to a superclass after a subclass has been created no
  longer have an effect on the subclass.

* The Postgres.string_to_bool method has been removed.

* PostgreSQL full text searching now always defaults to using the
  simple dictionary. If you want to use another dictionary, it must
  be specified explicitly, both when searching and when creating a
  full text index.

ruby-sequel-4.1.1/doc/release_notes/2.8.0.txt:

New Features
------------

* Sequel now supports database stored procedures similar to its
  support for prepared statements. The API is as follows:

    DB[:table].call_sproc(:select, :mysp, 'param1', 'param2')
    # or
    sp = DB[:table].prepare_sproc(:select, :mysp)
    sp.call('param1', 'param2')
    sp.call('param3', 'param4')

  This works with Model datasets as well, allowing them to return
  model objects:

    Album.call_sproc(:select, :new_albums)
    #=> [#<Album ...>, #<Album ...>]

  You can call a stored procedure directly on the Database object
  if you want to, but the results and API are adapter dependent,
  and you definitely shouldn't do it if the stored procedure
  returns rows:

    DB.call_sproc(:mysp, :args=>['param1', 'param2'])

  Currently, the MySQL and JDBC adapters support stored procedures.
  Other adapters may support them in a future version.

* The connection pool code can now remove connections if the
  adapter raises a Sequel::DatabaseDisconnectError indicating that
  the connection has been lost. When a query is attempted and the
  adapter raises this error, the connection pool removes the
  connection from the pool, and reraises the error. The Oracle and
  PostgreSQL adapters currently support this, and other adapters
  may support it in a future version.

* Whether to upcase or quote identifiers can now be set separately.
  Previously, upcasing was done when quoting except when using
  SQLite, PostgreSQL, or MySQL. Now, you can turn upcasing off
  while still quoting. This may be necessary if you are using a
  MSSQL database that has lower case table names that conflict with
  reserved words. It also allows you to uppercase identifiers when
  using SQLite, PostgreSQL, or MySQL, which may be beneficial in
  certain cases. To turn upcasing on or off:

    # Global
    Sequel.upcase_identifiers = true
    # Database
    DB = Sequel.connect("postgres://...", :upcase_identifiers=>true)
    DB.upcase_identifiers = false
    # Dataset
    ds = DB[:items]
    ds.upcase_identifiers = true

* Options are now supported when altering a column's type:

    DB.alter_table(:items) do
      set_column_type :score, :integer, :unsigned=>true
      set_column_type :score, :varchar, :size=>30
      set_column_type :score, :enum, :elements=>['a', 'b']
    end

* Standard conforming strings are now turned on by default in the
  PostgreSQL adapter. This makes PostgreSQL not interpret backslash
  escapes. This is the PostgreSQL recommended setting, which will
  be the default setting in a future version of PostgreSQL. If you
  don't want to force the use of standard strings, use:

    Sequel::Postgres.force_standard_strings = false

  You need to do that after you call Sequel.connect but before you
  use the database for anything, since that setting is set on
  initial connection.

* Sequel now raises an error if you attempt to use EXCEPT [ALL] or
  INTERSECT [ALL] on a database that doesn't support it.

* Sequel now raises an error if you attempt to use DISTINCT ON with
  MySQL or Oracle, which don't support it.
* A subadapter for the Progress RDBMS was added to the ODBC
  adapter. To connect to a Progress database, use the
  :db_type=>'progress' option. This adapter targets Progress 9.

* The ODBC adapter now supports transactions.

* The MSSQL shared adapter now supports multi_insert (for inserting
  multiple rows at once), and unicode string literals.

Other Improvements
------------------

* There were many improvements related to using schemas in
  databases. Using schema-qualified tables should work in most if
  not all cases now. Model associations, getting the schema, joins,
  and many other parts of Sequel were modified to allow the use of
  schema-qualified tables.

* You can now use literal strings with placeholders as well as
  subselects when using prepared statements. For example, the
  following all work now:

    DB[:items].filter("id = ?", :$i).call(:select, :i=>1)
    DB[:items].filter(:id=>DB[:items].select(:id)\
      .filter(:id=>:$i)).call(:select, :i=>1)
    DB["SELECT * FROM items WHERE id = ?", :$i].call(:select, :i=>1)

* Model#initialize received a few more micro-optimizations.

* Model#refresh now clears the changed columns as well as the
  associations.

* You can now drop columns inside a transaction when using SQLite.

* You can now submit multiple SQL queries at once in the MySQL
  adapter:

    DB['SELECT 1; SELECT 2'].all
    #=> [{:"1"=>1, :"2"=>2}]

  This may fix issues if you've seen a MySQL "commands out of sync"
  message. Note that this doesn't work if you are connecting to
  MySQL via JDBC.

* You can now use AliasedExpressions directly in table names given
  to join_table:

    DB.from(:i.as(:j)).join(:k.as(:l), :a=>:b)
    #=> ... FROM i AS j INNER JOIN k AS l ON (l.a = j.b)

* Database#rename_table once again works on PostgreSQL. It was
  broken in 2.7.0.

* The interval type is now treated as its own type. It was
  previously treated as an integer type.

* Subselects are now aliased correctly when using Oracle.

* UNION, INTERSECT, and EXCEPT statements now appear before ORDER
  and LIMIT on most databases. If you use these constructs, please
  test and make sure that they work correctly with your database.

* SQL EXCEPT clause now works on Oracle, which uses MINUS instead.

* Dataset#exists now returns a LiteralString, to make it easier to
  use.

* The Sequel.odbc_mssql method was removed, as the odbc_mssql
  adapter was removed in a previous version. Instead, use:

    Sequel.odbc(..., :db_type=>'mssql')

Backwards Compatibility
-----------------------

* The hash returned by Database#schema when no table name is
  provided uses quoted strings instead of symbols as keys. The hash
  has a default proc, so using the symbol will return the same
  value as before, but if you use each to iterate through the hash,
  the keys will be different. This was necessary to handle
  schema-qualified tables.

* Database#table_exists? no longer checks the output of
  Database#tables. If the table exists in the schema, it returns
  true, otherwise, it does a query. This was necessary because
  table_exists? accepts multiple formats for table names and
  Database#tables is an array of symbols.

* When getting the schema on PostgreSQL, the default schema is now
  used even if the :schema=>nil option is used.

ruby-sequel-4.1.1/doc/release_notes/2.9.0.txt:

New Features
------------

* Compound SQL statement (i.e. UNION, EXCEPT, and INTERSECT)
  support is much improved. Chaining compound statement calls no
  longer wipes out previous compound statement calls of the same
  type.
Also, the ordering of the compound statements is no longer fixed
per adapter, it now reflects the order they were called on the
object. For example, the following now work as expected:

    ds1.union(ds2).union(ds3)
    ds1.except(ds2).except(ds3)
    ds1.intersect(ds2).intersect(ds3)
    ds1.union(ds2).except(ds3)
    ds1.except(ds2).intersect(ds3)
    ds1.intersect(ds2).union(ds3)

* Exception classes ValidationFailure and BeforeHookFailure were
  added so it is easier to catch a failed validation. These are
  both subclasses of Sequel::Error, so there shouldn't be any
  backwards compatibility issues. Error messages are also improved,
  as the ValidationFailure message is a string containing all
  validation failures and the BeforeHookFailure message contains
  which hook type caused the failure (i.e. before_save,
  before_create, or before_validate).

* The sequel command line tool now has a -L option to load all
  files in the given directory. This is mainly useful for loading a
  directory of model files. The files are loaded after the database
  connection is set up.

* Methods to create and drop database functions, triggers, and
  procedural languages were added to the PostgreSQL adapter.

Other Improvements
------------------

* Database#schema now raises an error if you pass a table that
  doesn't exist. Before, some adapters would return an empty
  schema. The bigger problem with this is that it made
  table_exists? return the wrong value, since it looks at the
  Database's schema. Generally, this bug would show up in the
  following code:

    class Blah < Sequel::Model
    end
    Blah.table_exists? # True even if blahs is not a table

* AlterTableGenerator#add_foreign_key now works for MySQL.

* Error messages in model association methods that add/remove an
  associated object are now more descriptive.

* Dataset#destroy for model datasets now works with databases that
  can't handle nested queries. However, it now loads all model
  objects being destroyed before attempting to destroy any of them.

* Dataset#count now works correctly for compound SQL statements
  (i.e. UNION, EXCEPT, and INTERSECT).

* BigDecimal NaN and (+/-)Infinity values are now literalized
  correctly. Database support for this is hit or miss. SQLite will
  work correctly, PostgreSQL raises an error if you try to store an
  infinite value in a numeric column (though it works for float
  columns), and MySQL converts all three to 0.

* The SQLite adapter no longer loses primary key information when
  dropping columns.

* The SQLite adapter now supports dropping indexes.

* A bug in the MSSQL adapter's literalization of LiteralStrings has
  been fixed.

* The literalization of blobs on PostgreSQL (bytea columns) has
  been fixed.

* Sequel now raises an error if you attempt to subclass
  Sequel::Model before setting up a database connection.

* The native postgresql adapter has been changed to only log client
  messages of level WARNING by default. You can modify this via:

    Sequel::Postgres.client_min_messages = nil # Use Server Default
    Sequel::Postgres.client_min_messages = :notice # Use NOTICE level

* Model#inspect now calls Model#inspect_values for easier
  overloading.

Backwards Compatibility
-----------------------

* The API to Model#save_failure (a private method) was changed to
  remove the second argument.

* SQLite columns with type numeric, decimal, or money are now
  returned as BigDecimal values. Before, they were probably
  returned as strings.
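A sketch of the new SQLite behavior (the items table and price
column are hypothetical):

  require 'bigdecimal'
  DB[:items].insert(:price=>BigDecimal('19.99'))
  DB[:items].first[:price] # => a BigDecimal (previously a String)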
ruby-sequel-4.1.1/doc/release_notes/3.0.0.txt000066400000000000000000000203601220156535500205730ustar00rootroot00000000000000Deprecated Methods/Features Removed ----------------------------------- Methods and features that were deprecated in 2.12.0 have been removed in 3.0.0. Many features were moved into plugins or extensions, so in many cases you just need to require an extension or use Model.plugin and not make any changes to your code. See the 2.12.0 release notes for the list of methods/features deprecated in 2.12.0. If you are upgrading from a previous 2.x release, please upgrade to 2.12.0 first, fix your code to remove all deprecation warnings, and then upgrade to 3.0.0. New Adapter ----------- * Sequel now has an Amalgalite adapter. Amalgalite is a ruby extension that embeds SQLite without requiring a separate SQLite installation. The adapter is functionality complete but significantly slower than the native SQLite adapter. New Features ------------ * The JDBC, PostgreSQL, MySQL, and SQLite adapters all now have a Database#indexes method that returns indexes for a given table: DB.indexes(:songs) => {:songs_name_index=>{:unique=>true, :columns=>[:name]}, :songs_lyricid_index=>{:unique=>false, :columns=>[:lyricid]}} * A schema_dumper extension was added to Sequel. It supports dumping the schema of a table (including indexes) as a string that can be evaluated in the context of a Database object to create the table. It also supports dumping all tables in the database as a string containing a Migration subclass that will rebuild the database. require 'sequel/extensions/schema_dumper' DB.dump_table_schema(:table) DB.dump_schema_migration DB.dump_schema_migration(:same_db=>true) DB.dump_schema_migration(:indexes=>false) DB.dump_indexes_migration The :same_db option causes Sequel to not translate column types to generic column types. By default, the migration created will use generic types so it will run on other databases. However, if you only want to support a single database, using the :same_db option will make the migration use the exact database type parsed from the database. The :indexes=>false option causes indexes not be included in the migration. The dump_indexes_migration can be used to create a separate migration with the indexes. This can be useful if you plan on loading a lot of data right after creating the tables, since it is faster to add indexes after the data has been added. * Using options with the generic database types is now supported to a limited extent. For example, the following code now works: DB.create_table(:table) do String :a, :size=>50 # varchar(50) String :b, :text=>true # text String :c, :fixed=>true, :size=>30 # char(30) Time :ts # timestamp Time :t, :only_time=>true # time end * Using Dataset#filter and related methods with multiple arguments now works much more intuitively: # 2.12.0 dataset.filter(:a, :b=>1) # a IS NULL AND (b = 1) IS NULL # 3.0.0 dataset.filter(:a, :b=>1) # a AND b = 1 * You can now create temporary tables by passing the :temp=>true option to Database#create_table. * The Oracle shared adapter now supports emulation of autoincrementing primary keys by creating a sequence and a trigger, similar to how the Firebird adapter works. * The Database#database_type method was added that returns a symbol specifying the database type being used. This can be different than Database.adapter_scheme if you are using an adapter like JDBC that allows connecting to multiple different types of databases. 
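For example (a sketch), when connecting to PostgreSQL through the
JDBC adapter:

  DB = Sequel.connect('jdbc:postgresql://host/database')
  DB.adapter_scheme # => :jdbc
  DB.database_type  # => :postgres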
* Database#drop_index and related methods now support an options hash that respects the :name option, so they can now be used to drop an index that doesn't use the default index name. * The PostgreSQL shared adapter now supports a Database#reset_primary_key_sequence method to reset the primary key sequence for a given table, based on code from ActiveRecord. * SQL::QualifiedIdentifiers can now be qualified, allowing you to do: :column.qualify(:table).qualify(:schema) * Using the :db_type=>'mssql' option with the DBI adapter will now load the MSSQL support. * The MySQL shared adapter now supports Dataset#full_text_sql, which you can use in queries like the following: ds.select(:table.*, ds.full_text_sql(:column, 'value').as(:ft)) Other Improvements ------------------ * Sequel will now release connections from the connection pool automatically if they are held by a dead thread. This can happen if you are using MRI 1.8 and you are heavily multithreaded or you call Thread#exit! or similar method explicitly. Those methods skip the execution of ensure blocks which normally release the connections when the threads exit. * Model#save will now always use the same server when refreshing data after an insert. This fixes an issue when Sequel's master/slave database support is used with models. * SQL Array references are now quoted correctly, so code like this now works: :table__column.sql_subscript(1) * The PostgreSQL shared adapter now handles sequences that need to be quoted correctly (previously these were quoted twice). * String quoting on Oracle no longer doubles backslashes. * Database#count now works correctly when used on MSSQL when using an adapter that doesn't handle unnamed columns. * Full text searching in the MySQL adapter now works correctly when multiple search terms are used. * Altering a column's name, type, default, or NULL/NOT NULL status on MySQL now keeps other relevent column information. For example, if you alter a column's type, it'll keep an existing default. This functionality isn't complete, there may be other column information that is lost. * Fix creation of an index with a given type on MySQL, since MySQL's documentation lies. * The schema parser now handles decimal types with size specifiers, fixing use on MySQL. * Dataset#quote_identifier now works correctly when given an SQL::Identifier. This allows you to do: dataset.select{sum(hours).as(hours)} Backwards Compatibility ----------------------- * Sequel will now use instance_eval on all virtual row blocks without an argument. This can lead to much nicer code: dataset.filter{(number > 10) & (name > 'M')} # WHERE number > 10 AND name > 'M' 2.12.0 raised a deprecation warning if you used a virtual row block without an argument and you hadn't set Sequel.virtual_row_instance_eval = true. * Dataset#exclude now inverts the given argument, instead of negating it. This only changes its behavior if it is called with a hash or array of all two pairs that have more than one element. # 2.12.0 dataset.exclude(:a=>1, :b=>1) # a != 1 AND b != 1 # 3.0.0 dataset.exclude(:a=>1, :b=>1) # a != 1 OR b != 1 This was done for consistency, since exclude would only negate a hash if it was given an argument, it would invert the same hash if you used a block: # 2.12.0 dataset.exclude{{:a=>1, :b=>1}} # a != 1 OR b != 1 If you want the previous behavior, change the code to the following: dataset.filter({:a=>1, :b=>1}.sql_negate) * As noted above, the methods/features deprecated in 2.12.0 were removed. 
* The private Dataset#select_*_sql methods now only take a single
  argument, the SQL string being built.

* Dataset#from when called without arguments would previously cause
  an error to be raised when the SQL string is generated. Now it
  causes no FROM clause to be used, similar to how Dataset#select
  with no arguments causes SELECT * to be used.

* The internals of the generic type support and the schema
  generators were changed significantly, which could have some
  fallout in terms of old migrations breaking if they used the
  generic types and were relying on some undocumented behavior
  (such as using Integer as a type with the :unsigned option).

* The Firebird adapter no longer translates the text database
  specific type. Use the following instead:

    String :column, :text=>true

* The MySQL shared adapter used to use the timestamp type for Time,
  now it uses datetime. This is because the timestamp type cannot
  represent everything that the ruby Time class can represent.

* Metaprogramming#metaattr_accessor and #metaattr_reader methods
  were removed.

* Dataset#irregular_function_sql was removed.

ruby-sequel-4.1.1/doc/release_notes/3.1.0.txt:

New Plugins
-----------

3 new plugins were added that implement features supported by
DataMapper: identity_map, tactical_eager_loading, and
lazy_attributes. These plugins don't add any real new features,
since you can do most of what they allow before simply by being a
little more explicit in your Sequel code. However, some people
prefer a less explicit approach that uses a bit more magic, and now
Sequel can accommodate them.

* The identity_map plugin allows you to create a 1-1 correspondence
  of model objects to database rows via a temporary thread-local
  identity map. It makes the following statement true:

    Sequel::Model.with_identity_map do
      Album.filter{(id > 0) & (id < 2)}.first.object_id == \
      Album.first(:id=>1).object_id
    end

  As the code above implies, you need to use the with_identity_map
  method with a block to use the identity mapping feature.

  By itself, identity maps don't offer much, but Sequel uses them
  as a cache when looking up objects by primary key or looking up
  many_to_one associated objects. Basically, it can be used as a
  performance enhancer, and it also allows the support of the
  lazy_attributes plugin.

  The identity_map plugin is expected to be most useful in web
  applications. With that in mind, here's a Rack middleware that
  wraps each request in a with_identity_map call, so the
  identity_map features are available inside the web app:

    Sequel::Model.plugin :identity_map
    class SequelIdentityMap
      def initialize(app)
        @app = app
      end

      def call(env)
        Sequel::Model.with_identity_map{@app.call(env)}
      end
    end

* The tactical_eager_loading plugin allows you to eagerly load an
  association for all models retrieved in the same group whenever
  one of the models accesses the association:

    # 2 queries total
    Album.filter{id<100}.all do |a|
      a.artists
    end

  In order for this to work correctly, you must use Dataset#all to
  load the records, you cannot iterate over them via Dataset#each.
  This is because eager loading requires that you have all records
  in advance, and when using Dataset#each you cannot know about
  later records in the dataset.

  Before, you could just be explicit about the associations you
  needed and make sure to eagerly load them using eager before
  calling Dataset#all.
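For example, a sketch of that explicit approach, using the same
hypothetical Album model as above:

    # 2 queries total, without the plugin
    Album.filter{id<100}.eager(:artists).all do |a|
      a.artists
    end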
* The lazy_attributes plugin builds on the identity_map and
  tactical_eager_loading plugins and allows you to create
  attributes that are lazily loaded from the database:

    Album.plugin :lazy_attributes, :review

  This will remove the :review attribute from being selected by
  default. If you try to access the attribute after it is selected,
  it'll retrieve the value from the database. If the object was
  retrieved with a group of other objects and an identity map is in
  use, it'll retrieve the lazy attribute for the entire group of
  objects at once, similar to the tactical_eager_loading plugin:

    # 2 queries total
    Sequel::Model.with_identity_map do
      Album.filter{id<100}.all do |a|
        a.review
      end
    end

  Before, you could just set the default selected columns for a
  model to not include the lazy attributes, and just use
  select_more to add them to any query where the resulting model
  objects will need the attributes.

* A many_through_many plugin was also added. This very powerful
  plugin allows you to create associations to multiple objects
  through multiple join tables. Here are some examples:

    # Assume the following many to many associations:
    Artist.many_to_many :albums
    Album.many_to_many :tags

    # Same as Artist.many_to_many :albums
    Artist.many_through_many :albums,
     [[:albums_artists, :artist_id, :album_id]]

    # All tags associated to any album this artist is associated to
    Artist.many_through_many :tags,
     [[:albums_artists, :artist_id, :album_id],
      [:albums, :id, :id],
      [:albums_tags, :album_id, :tag_id]]

    # All artists associated to any album this artist is associated to
    Artist.many_through_many :artists,
     [[:albums_artists, :artist_id, :album_id],
      [:albums, :id, :id],
      [:albums_artists, :album_id, :artist_id]]

    # All albums by artists that are associated to any album this
    # artist is associated to
    Artist.many_through_many :artist_albums,
     [[:albums_artists, :artist_id, :album_id],
      [:albums, :id, :id],
      [:albums_artists, :album_id, :artist_id],
      [:artists, :id, :id],
      [:albums_artists, :artist_id, :album_id]]

  Basically, for each join table between this model and the
  associated model, you use an array with a join table name, left
  key name (key closer to this model), and right key name (key
  closer to the associated model).

  In usual Sequel fashion, this association type works not just for
  single objects, but it can also be eagerly loaded via eager or
  eager_graph. There are numerous additional configuration options,
  please see the RDoc for details.

New bin/sequel Features
-----------------------

The bin/sequel command line tool now supports the following
options:

* -C: Copies one database to another. You must specify two database
  arguments. Works similarly to Taps, copying the table schema,
  then the table data, then creating the indexes.

* -d: Dump the schema of the database in the database-independent
  migration format.

* -D: Dump the schema of the database in the database-specific
  migration format.

* -h: Display the help

* -t: Output the full backtrace if an exception is raised

The bin/sequel tool is now better about checking which options can
be used together. It also now supports using the -L option multiple
times and having it load model files from multiple directory trees.

New Features
------------

* Dataset#qualify_to and #qualify_to_first_source were added. They
  allow you to qualify unqualified columns in the current dataset
  to the given table or the first source. This can be used to join
  a dataset that has unqualified columns to a new table which has
  columns with the same name.
For example, take this dataset:

    ds = DB[:albums].select(:name).order(:name).filter(:id=>1)
    # SELECT name FROM albums WHERE (id = 1) ORDER BY name

  Let's say you want to join it to the artists table:

    ds2 = ds.join(:artists, :id=>:artist_id)
    # SELECT name FROM albums
    # INNER JOIN artists ON (artists.id = albums.artist_id)
    # WHERE (id = 1) ORDER BY name

  That's going to give you an error, as the artists table already
  has columns named id and name. This new feature allows you to do
  the following:

    ds2 = ds.qualify_to_first_source.join(:artists, :id=>:artist_id)
    # SELECT albums.name FROM albums
    # INNER JOIN artists ON (artists.id = albums.artist_id)
    # WHERE (albums.id = 1) ORDER BY albums.name

  By doing this, all unqualified columns are qualified, so you get
  a usable query. This is expected to be most useful for users that
  have a default order or filter on their models and want to join
  the model to another table. Before you had to replace the
  filters, selection, etc. manually, or use qualified columns by
  default even though they weren't needed in most cases.

* Savepoints are now supported using SQLite and MySQL, assuming you
  are using a database version that supports them. You need to pass
  the :savepoint option to Database#transaction to use a savepoint.

* Model plugins can now depend on other plugins, simply by calling
  the Model.plugin method inside the plugin's apply method:

    module LazyAttributes
      def self.apply(model)
        model.plugin :tactical_eager_loading
      end
    end

* Model.plugin now takes a block which is passed to the plugin's
  apply and configure method (see Backwards Compatibility section
  for more information on the configure method).

* You can see which plugins are loaded for a model by using
  Model.plugins.

* You can use Sequel.extension method to load extensions:

    Sequel.extension :pagination, :query

  This will only load extensions that ship with Sequel, unlike the
  Model.plugin method which will also load external plugins.

* You can now use Database#create_table? to create the table if it
  doesn't already exist (a very common need, it seems). The schema
  plugin now supports Model.create_table? as well.

* #sql_subscript is now an allowed method on most SQL expression
  objects that Sequel generates. Also, arguments to #sql_subscript
  can now be other expressions instead of just integers.

* Associations can now take a :cartesian_product_number option,
  which can be used to tell Sequel whether to turn on duplicate
  object detection when eagerly loading objects through
  eager_graph. This number should be 0 if the association can never
  create multiple rows for each row in the current table, 1 if it
  can create multiple rows for each row in the current table, and 2
  if the association itself causes a cartesian product.

* On MySQL, Dataset#insert_ignore now affects #insert as well as
  multi_insert and import.

* Database#create_table now supports an :ignore_index_errors
  option, and Database#add_index now supports an :ignore_errors
  option. These are used by the schema_dumper when dumping a
  database schema to be restored on another database type, since
  indexes aren't usually required for proper operation and some
  indexes can't be transferred.

* The ADO adapter now takes a :provider option, which can be used
  to set the provider.

* The ADO adapter now takes a :command_timeout option, which tells
  the connection how long to wait before giving up and raising an
  exception.

* The Sequel.amalgalite adapter method was added. Like the
  Sequel.sqlite method, you can call it with no arguments to get an
  in memory database.
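A small sketch of the new adapter method (the file name is
hypothetical):

    DB = Sequel.amalgalite            # in-memory database
    DB = Sequel.amalgalite('blog.db') # file-backed database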
Other Improvements
------------------

* MySQL "commands out of sync" errors should no longer occur unless
  you are nesting queries (calling Dataset#each inside
  Dataset#each). A bug dating at least to 2007 and possibly since
  the initial creation of the Sequel MySQL adapter was the cause.
  Before, SQL that caused a result set that was sent using a method
  where Sequel doesn't yield a result set would cause the "commands
  out of sync" error on the following query. For example, the
  following code would cause the error:

    DB << "SHOW DATABASES"

  If for some reason a "commands out of sync" error does occur,
  Sequel will disconnect the connection from the connection pool,
  so it won't continually stay in the pool and raise errors every
  time it is used.

* The schema_dumper extension is much better about parsing defaults
  from the database. It can now correctly parse most defaults on
  MySQL, SQLite, and PostgreSQL databases. It no longer includes
  defaults that it can't parse to a ruby object unless a
  database-specific dump is requested.

* The schema_dumper extension now dumps tables in alphabetical
  order.

* Ordered and limited datasets are now handled correctly when using
  union, intersect, and except. Also, union, intersect, and except
  now always return a from_self dataset, so further limiting,
  filtering, and ordering of them now works as expected.

* Dataset#graph now works correctly with a complex dataset without
  having to use from_self. Before, code like the following didn't
  do what was expected:

    DB[:albums].
     graph(DB[:artists].filter{name > 'M'}, :id=>:artist_id)

  Before, the filter on DB[:artists] would be dropped. Now, Sequel
  correctly uses a subselect.

* You can now specify serialization formats per column in the
  serialization plugin, either by calling the plugin multiple times
  or by using the new serialize_attributes method:

    Album.plugin :serialization
    Album.serialize_attributes :marshal, :review
    Album.serialize_attributes :yaml, :name
    Album.serialization_map # => {:name=>:yaml, :review=>:marshal}

  The public API for the serialization plugin is still backwards
  compatible, but the internals have changed slightly to support
  this new feature.

* You can now use eager_graph to eagerly load associations for
  models that lack primary keys.

* The :eager_graph association option now works when lazily-loading
  many_to_many associations.

* Dataset#add_graph_aliases now works correctly even if
  set_graph_aliases hasn't been used.

* The PostgreSQL Database#tables method now assumes the public
  schema if a schema is not given and there is no default_schema.

* The PostgreSQL Database#indexes method no longer returns partial
  indexes or functional indexes.

* The MySQL Database#indexes method no longer returns indexes on
  partial columns (prefix indexes).

* Default values for String :text=>true and File columns on MySQL
  are ignored, since MySQL doesn't support them. They are not
  ignored if you use text and blob, since then you are using the
  database-specific syntax and Sequel doesn't do translation when
  the database-specific syntax is used.

* On PostgreSQL, attempting to reset the primary key sequence for a
  table without a primary key no longer causes an error.

* Using a placeholder string in an association's :conditions option
  now works correctly (e.g. :conditions=>['n = ?', 1])

* An error is no longer raised if you attempt to load a plugin that
  has a DatasetMethods module but no public dataset methods.

* The check for dataset[n] where n is an integer was fixed. It now
  raises an error instead of returning a limited dataset.
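A sketch of the fixed dataset[n] check (the items table is
hypothetical):

    DB[:items][1]      # now raises an error
    DB[:items][:id=>1] # still returns the first matching row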
* On PostgreSQL, Dataset#insert with static SQL now works
  correctly.

* A reflection.rdoc file was added giving an overview of Sequel's
  reflection support.

* The Migrator now works correctly with file names like
  001_12312412_file_name.rb.

* The association code now requires the classes match when looking
  for a reciprocal association.

* An unlikely threading bug (race condition) was possible when
  using the validation_class_methods plugin. The plugin was
  refactored and now uses a mutex to avoid the issue. One of the
  refactoring changes makes it so that you can no longer use a
  class level validation inside a Class.new block (since inherited
  isn't called until the block finishes).

* The exception messages when Sequel.string_to_* fail have been
  fixed.

* The String :text=>true generic database type has been fixed when
  using the Firebird adapter.

Backwards Compatibility
-----------------------

* A plugin's apply method is now only called the first time a
  plugin is loaded. Plugins can now have a configure method that is
  called every time the plugin is loaded, and is always called
  after the instance methods, class methods, and dataset method
  submodules have been added to the model. This is different from
  apply, which is called before the submodules are loaded.

  If you are a plugin author, please check your implementation to
  make sure this doesn't cause problems for you. If you have
  questions, please post on the Sequel mailing list.

  This new plugin feature will make certain things a lot easier,
  and it should be mostly backwards compatible. However, if a
  plugin was previously expected to be loaded multiple times with
  the apply method called each time, it will no longer work
  correctly.

* The plugin_opts methods defined now include multiple args in an
  array if multiple args are given. Before, the plugin_opts methods
  just returned the first argument.

* Database#table_exists? no longer checks the cached schema
  information. By default, it will always do a database query
  (unless overridden in an adapter). This shouldn't affect the
  results, but if you were using the method a lot and expecting it
  to use cached information, it doesn't have the same performance
  characteristics.

* The internal storage of the :select option for datasets has
  changed. You can no longer use a hash as a way of aliasing
  columns. Dataset#select now does the translation from the hash to
  SQL::AliasedExpression instances. Basically, if you were using
  Dataset#clone directly with a :select option with hashes for
  aliasing, you should switch to using Dataset#select or changing
  the hashes to AliasedExpressions yourself.

ruby-sequel-4.1.1/doc/release_notes/3.10.0.txt:

New Features
------------

* A real one_to_one association was added to Sequel, replacing the
  previous :one_to_one option of the one_to_many association.

  This is a fully backwards incompatible change: any code that uses
  the :one_to_one option of one_to_many will be broken in Sequel
  3.10.0, as that option now raises an exception. Keeping backwards
  compatibility was not possible, as even the name of the
  association needs to be changed.
Here are the code changes you need to make: * The association definition needs to change from one_to_many to one_to_one, with no :one_to_one option, and with the association name changed from the plural form to the singular form: # Before Lyric.one_to_many :songs, :one_to_one=>true # After Lyric.one_to_one :song * All usage of the association when eager loading or when getting reflections need to use the new singular association name: # Before Lyric.eager(:songs).all Lyric.eager_graph(:songs).all Lyric.association_reflection(:songs) # After Lyric.eager(:song).all Lyric.eager_graph(:song).all Lyric.association_reflection(:song) Any Sequel plugins or extensions that deal with the internals of associations need to be made aware of the one_to_one association, and how it is different than one_to_many's previous :one_to_one option. Here are some internal changes that may affect you: * one_to_one associations are now cached like many_to_one associations instead of like one_to_many associations. So the cache includes the associated object or nil, instead of an array. Note that this change means that all custom :eager_loader options for one_to_one associations need to change to use this new caching scheme. * The one_to_one association setter method is now handled similarly to the many_to_one setter method, instead of using the internal one_to_many association add method. * Instead of raising an error when multiple rows are returned, one_to_one associations now use limit(1) to only return a single row. There were some other fixes made during these changes: * The one_to_one setter now accepts nil to disassociate the record. Previously, this raised an error. * If the one_to_one association already had a separate object associated, and you assigned a different object in the setter method, Sequel now disassociates the old object before associating the new object, fixing some potential issues if there is a UNIQUE constraint on the foreign key column. * Using the many_to_one association setter where the reciprocal association is a one_to_one association with a currently different cached associated object no longer raises an exception. * The nested_attributes and association_dependencies plugins both now correctly handle one_to_one associations. If you need any help migrating, please post on the Sequel Google Group or ask in the #sequel IRC channel. * Both many_to_one and one_to_one associations now use before_set and after_set callbacks instead of trying to make the one_to_many and many_to_many associations' (before|after)_(add|remove) callbacks work. This change makes the code simpler, makes writing callbacks easier, and no longer requires Sequel to send a query to the database to get the currently associated object in the many_to_one association setter method (you can still do so manually in a before_set callback if you want to). * Dataset#for_update was added as a default dataset method. Previously, it was only supported on PostgreSQL. It has been tested to work on PostgreSQL, MySQL, SQLite (where it is ignored), H2, and MSSQL. * Dataset#lock_style was added as a backbone for Dataset#for_update, but allowing you to specify custom lock styles. These can either be symbols recognized by the adapters, or strings which are treated as literal SQL. * Model#lock! was added, which uses Dataset#for_update to lock model rows for specific instances. Combined with the Dataset#for_update, Sequel now has an equivalent to ActiveRecord's pessimistic locking support. 
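For example (a sketch, assuming an Album model with a plays
column):

    DB.transaction do
      album = Album[1]
      album.lock! # reloads the row using FOR UPDATE
      album.update(:plays=>album.plays + 1)
    end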
* A composition plugin was added, giving functionality similar to
  ActiveRecord's composed_of.

  The composition plugin allows you to easily define getter and
  setter instance methods for a class where the backing data is
  composed of other getters and decomposed to other setters.

  A simple example of this is when you have a database table with
  separate columns for year, month, and day, but where you want to
  deal with Date objects in your ruby code.  This can be handled
  with:

    Model.composition :date, :mapping=>[:year, :month, :day]

  The :mapping option is optional, but if not used, you need to
  define custom composition and decomposition procs via the
  :composer and :decomposer options.

  Note that when using the composition object, you should not modify
  the underlying columns if you are also instantiating the
  composition, as otherwise the composition object values will
  override any underlying columns when the object is saved.

* An rcte_tree plugin was added, which uses recursive common table
  expressions to load all ancestors and descendants in a single
  query.  If your database supports recursive common table
  expressions (PostgreSQL 8.4+, MSSQL 2005+, newer versions of
  Firebird), using recursive common table expressions to load all
  ancestors and descendants is significantly faster than storing
  trees as nested sets and using nested set queries.  Usage:

    Model.plugin :rcte_tree

    # Lazy loading
    model = Model.first
    model.parent
    model.children
    model.ancestors # Populates :parent association as well
    model.descendants # Populates :children association as well

    # Eager loading - also populates the :parent and children
    # associations for all ancestors and descendants
    Model.filter(:id=>[1, 2]).eager(:ancestors, :descendants).all

    # Eager loading children and grandchildren
    Model.filter(:id=>[1, 2]).eager(:descendants=>2).all
    # Eager loading children, grandchildren, and great grandchildren
    Model.filter(:id=>[1, 2]).eager(:descendants=>3).all

* Dataset#first_source_table was added, giving you the unaliased
  version of the table for the first source.

* Sequel::BasicObject.remove_methods! was added, useful on ruby 1.8
  if you require other libraries after Sequel that add methods to
  Object.  For example, if YAML is required after sequel, then the
  following will raise an error:

    DB[:a].filter{x > y}

  because YAML adds the y method to all objects.  Now, you can call
  Sequel::BasicObject.remove_methods!, which will remove those
  methods from Sequel::BasicObject, allowing them to be used as
  intended in the above DSL.

* Sequel associations now accept an :eager_loader_key option, which
  can be useful for associations to specify the column to use for
  the key_hash for custom :eager_loaders.

* A JDBC subadapter for the AS400 database was added.

Other Improvements
------------------

* The one_to_one setter method and the one_to_many and many_to_many
  remove_all methods now apply the association options (such as
  filters) on the appropriate dataset:

    Artist.one_to_many :good_albums, :class=>:Album,
      :conditions=>{:good=>true}
    a = Artist[10]
    a.remove_all_good_albums
    # Before: WHERE artist_id = 10
    # After:  WHERE artist_id = 10 AND good IS TRUE

* Plugin loading now works correctly when the plugin module name is
  the same name as an already defined top level constant.  This
  means that the active_model plugin should now work correctly if
  you require active_model before loading the Sequel plugin.

* The nested_attributes plugin now preserves nested attributes for
  *_to_one associations on validation failures.
* Transactions now work correctly on Oracle when using the JDBC
  adapter.

* Dataset#limit once again works correctly on MSSQL 2000.  It was
  broken in Sequel 3.9.0.

* many_to_one associations now use limit(1) to ensure only one
  record is returned.  If you don't want this (for example, because
  you are using the :eager_graph association option), you need to
  set the :key option to nil and use a custom :dataset option.

* many_to_one and one_to_many associations now work correctly with
  the association :eager option to eagerly load associations
  specified by :eager when lazy loading the association.

* The typecast_on_load plugin now correctly handles
  reloading/refreshing the object, both explicitly and implicitly on
  object creation.

* The schema parser and dumper now return tinyint columns as
  booleans when connecting to mysql using the do adapter, since
  DataObjects now returns the columns as booleans.

* The schema dumper now deals better with unusual or database
  specific primary key types when using the :same_db option.

* On ruby 1.8, Sequel::BasicObject now undefs private methods in
  addition to public and protected methods.  So the following code
  now works as expected:

    DB[:a].filter{x > p} # WHERE x > p

* Sequel.connect with a block now returns the value of the block:

    max_price = Sequel.connect('sqlite://items.db') do |db|
      db[:items].max(:price)
    end

* MSSQL emulated offset support now works correctly when Sequel's
  core extensions are not loaded.

* Sequel::BasicObject now works correctly on rubinius, and almost
  all Sequel specs now pass on rubinius.

* The nested_attributes plugin now uses a better exception message
  when no matching associated object is found.

* Sequel now raises a more informative error if you attempt to use
  the native sqlite adapter with the sqlite3 gem instead of the
  sqlite3-ruby gem.

* Multiple complex expressions with the same operator are now
  combined for simpler SQL:

    DB[:a].filter(:a=>1, :b=>2).filter(:c=>3)
    # Before: (((a = 1) AND (b = 2)) AND (c = 3))
    # After:  ((a = 1) AND (b = 2) AND (c = 3))

* The Sequel::Model dataset methods (class methods proxied to the
  model's dataset) and the Sequel::Dataset mutation methods (methods
  that have a ! counterpart to modify the object in place) have both
  been updated to use new dataset methods added in recent versions.

Backwards Compatibility
-----------------------

* The :one_to_one option of the one_to_many associations now raises
  an exception.  Please see the section above about the new real
  one_to_one association.

* The change to apply the association options to the one_to_many and
  many_to_many remove_all methods has the potential to break some
  code that uses the remove_all method on associations that use
  association options.  This is especially true for many_to_many
  associations, as filters in many_to_many associations will often
  reference columns in the associated table, while the dataset used
  in the remove_all method only contains the join table.  Such cases
  should be handled by manually overriding the _remove_all
  association instance method in the class.  It was determined that
  it was better to issue possibly invalid queries than to issue
  queries that make unexpected modifications.

* Dataset#group_and_count no longer orders the dataset by the count.
  Since it returns a modified dataset, if you want to order the
  dataset, just call order on the returned dataset.

* many_to_one associations now require a working :class option.
  Previously, if you provided a custom :dataset option, a working
  :class option was not required in some cases.
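  As a hedged sketch of what such an association might now look like
  (the models and the custom dataset proc here are illustrative
  assumptions, not code from this release):

    Album.many_to_one :artist, :class=>:Artist,
      :dataset=>proc{Artist.filter(:id=>artist_id)}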
* The MSSQL shared adapter dataset methods switched from using the :table_options internal option key to using the :lock internal option key. ruby-sequel-4.1.1/doc/release_notes/3.11.0.txt000066400000000000000000000236011220156535500206560ustar00rootroot00000000000000= New Features * A few new features were added to query logging. Sequel now includes execution time when logging queries. Queries that raise exceptions are now logged at ERROR level. You can now set the log_warn_duration attribute on the Database instance and queries that take longer than that will be logged at WARN level. By using different log levels, you can now only log queries that raise errors, or only log queries that take a long time. # The default - Log all successful queries at INFO level DB.log_warn_duration = nil # Log all successful queries at WARN level DB.log_warn_duration = 0 # Log successful queries that take the database more than half a # second at WARN level, other successful queries at INFO level DB.log_warn_duration = 0.5 All adapters included with Sequel have been modified to support the new logging API. The previous API is still available, so any external adapters should still work, though switching to the new logging API is encouraged. * Sequel::Model now has a require_modification flag. If not set explicitly, it is enabled by default if the dataset provides an accurate number of rows matched by an update or delete statement. When this setting is enabled, Sequel will raise an exception if you attempt to update or delete a model object and it doesn't end up affecting exactly one row. For example: DB.create_table(:as){primary_key :id} class A < Sequel::Model; end a = A.create # delete object from database a.delete a.require_modification = false a.save # no error! a.delete # no error! a.require_modification = true a.save # Sequel::NoExistingObject exception raised a.delete # Sequel::NoExistingObject exception raised Like many other Sequel::Model settings, this can be set on a global, per class, and per instance level: Sequel::Model.require_modification = false # global Album.require_modification = true # class album.require_modification = false # instance * An instance_filters plugin was added to the list of built in plugins, allowing you to add arbitrary filters when updating or destroying an instance. This allows you to continue using models when previously you would have had to drop down to using datasets to get the desired behavior: class Item < Sequel::Model plugin :instance_filters end # These are two separate objects that represent the same # database row. i1 = Item.first(:id=>1, :delete_allowed=>false) i2 = Item.first(:id=>1, :delete_allowed=>false) # Add an instance filter to the object. This filter is in effect # until the object is successfully updated or deleted. i1.instance_filter(:delete_allowed=>true) # Attempting to delete the object where the filter doesn't # match any rows raises an error. i1.delete # raises Sequel::Error # The other object that represents the same row has no # instance filters, and can be updated normally. i2.update(:delete_allowed=>true) # Even though the filter is now still in effect, since the # database row has been updated to allow deleting, # delete now works. i1.delete * An :after_connect database option is now supported. If provided, the option value should be a proc that takes a single argument. 
It will be called with the underlying connection object before the
connection object is added to the connection pool, allowing you to
set per connection options in a thread-safe manner.

  This is useful for customizations you want set on every connection
  that Sequel doesn't already support.  For example, on PostgreSQL,
  if you wanted to set the schema search_path on every connection:

    DB = Sequel.postgres('dbname', :after_connect=>(proc do |conn|
      conn.execute('SET search_path TO schema1,schema2')
    end))

* A :test database option is now supported.  If set to true, it
  automatically calls test_connection to make sure a connection can
  be made before returning a Database instance.  For backwards
  compatibility reasons, this is not set to true by default, but it
  is possible that the default will change in a future version of
  Sequel.

* The Dataset#select_append method was added, which always appends
  to the existing selected columns.  It operates identically to
  select_more, except in the case that no columns are currently
  selected:

    ds = DB[:a]
    # SELECT * FROM a
    ds.select_more({:id=>DB[:b].select(:a_id)}.as(:in_b))
    # SELECT id IN (SELECT a_id FROM b) AS in_b FROM a
    ds.select_append({:id=>DB[:b].select(:a_id)}.as(:in_b))
    # SELECT *, id IN (SELECT a_id FROM b) AS in_b FROM a

* The Dataset#provides_accurate_rows_matched? method was added which
  allows you to see if the dataset will return the actual number of
  rows matched/affected by an update or delete call.

* Sequel will now emulate DISTINCT ON support using GROUP BY on
  MySQL.  On MySQL, GROUP BY is similar to DISTINCT ON, except that
  the order of returned rows is not deterministic.

* Support for connecting to Microsoft SQL Server using the JTDS JDBC
  driver was added to the jdbc adapter.

* JNDI connection strings are now supported in the JDBC adapter.

* The JDBC adapter should now work in situations where driver
  auto-loading has problems, such as when using Tomcat or Trinidad.

* Sequel's JDBC adapter schema parsing now supports a :scale option,
  useful for numeric/decimal columns.

* Sequel's schema parsing on Microsoft SQL Server now supports
  :column_size and :scale options.

* When connecting to SQLite, a Database#sqlite_version method is
  available that gives you the SQLite version as an integer (e.g.
  30613 for 3.6.13).

= Other Improvements

* Sequel no longer raises an error if you give Dataset#filter or a
  related method an empty argument such as {}, [], or ''.

  This allows code such as the following to work:

    h = {}
    h[:name] = name if name
    h[:number] = number if number
    ds = ds.filter(h)

  Before, this would raise an error if both name and number were
  nil.

* Numeric and decimal columns with a 0 scale are now treated as
  integer columns by the model typecasting code, since such columns
  cannot store non-integer values.

* Calling Database#disconnect when using the single threaded
  connection pool no longer raises an error if there is no current
  connection.

* When using the :ignore_index_errors option to
  Database#create_table, correctly swallow errors raised by Sequel
  due to the adapter not supporting the given index type.

* The JDBC adapter no longer leaks ResultSets when retrieving
  metadata.

* You can now connect to PostgreSQL when using ruby 1.9 with the -Ku
  switch.

* When using the native MySQL adapter, only tinyint(1) columns are
  now returned as booleans when using the convert_tinyint_to_bool
  setting (the default).  Previously, all tinyint columns would be
  converted to booleans if the setting was enabled.
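  As a sketch of disabling the conversion entirely (this assumes the
  module-level convert_tinyint_to_bool accessor the native MySQL
  adapter provided in this era):

    # Treat all tinyint columns as integers instead of booleans
    Sequel::MySQL.convert_tinyint_to_bool = false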
* Correctly handle inserts returning the autogenerated keys when
  using MySQL JDBC Driver version 5.1.12 with the jdbc adapter.

* The native MySQL adapter now supports :config_default_group and
  :config_local_infile options.

* When connecting to SQLite, you can provide the :auto_vacuum,
  :foreign_keys, :synchronous, and :temp_store options for making
  the appropriate PRAGMA setting on the database in a thread-safe
  manner.  The previous thread-unsafe PRAGMA setting methods are
  available, but their use is discouraged.

* Sequel will not enable savepoints when connecting to SQLite unless
  the version is 3.6.8 or greater.

* Using limit with distinct now works correctly on Microsoft SQL
  Server.

* Database#rename_table now works correctly on Microsoft SQL Server.

* If you specify an explicit :provider when using the ADO adapter,
  transactions will now work correctly.  The default :provider uses
  a new native connection for each query, so it cannot work with
  transactions, or things like temporary tables.

* If you specify an explicit :provider when connecting to Microsoft
  SQL Server using the ADO adapter (e.g. SQLNCLI10 or SQLNCLI),
  Sequel is now able to provide an accurate number of rows modified
  and deleted.

* Using set_column_allow_null with a decimal column with a precision
  and scale now works correctly when connecting to Microsoft SQL
  Server.

* You can now connect to Microsoft SQL Server using the dbi adapter.

* Sequel now recognizes the NUMBER database type as a synonym for
  NUMERIC and DECIMAL, which may help some Oracle users.

* Transactions can now be rolled back correctly when connecting to
  Oracle via JDBC.

* The active_model plugin now supports ActiveModel 3.0.0beta2.

* Many documentation improvements were made, including the addition
  of a dataset basics guide, an association basics guide, an
  expanded virtual row guide, and the separation of the
  Sequel::Dataset RDoc page into sections.  Additionally, the RDoc
  class/method documentation now contains links to the appropriate
  guides.

= Backwards Compatibility

* When connecting to SQLite, Sequel now automatically sets the
  foreign_keys PRAGMA to true, which will make SQLite 3.6.19+ use
  database enforced foreign key constraints.  If you do not want the
  database to enforce the foreign key constraints, you should use
  the :foreign_keys=>false option when connecting to the database.

* Sequel no longer creates #{plugin_name}_opts class, instance, and
  dataset methods for each plugin loaded.  No built-in plugin used
  them, and I couldn't find an external plugin that did either.

* The Model#associations method is no longer available if the
  default Associations plugin is not loaded due to the
  SEQUEL_NO_ASSOCIATIONS constant or environment variable being set.

* DISTINCT ON support is turned off by default, and only enabled
  when using PostgreSQL, since that appears to be the only database
  that supports it.  Previously, it was enabled by default and most
  common adapters turned it off.
ruby-sequel-4.1.1/doc/release_notes/3.12.0.txt000066400000000000000000000312101220156535500206520ustar00rootroot00000000000000
= Migration Changes

* A TimestampMigrator has been added to Sequel, and is automatically
  used if any migration has a version greater than 20000100.  This
  migrator operates similarly to the default ActiveRecord migrator,
  in that it allows missing migrations.
It differs from the ActiveRecord migrator in that it supports
migrations with the same timestamp/version as well as a wide variety
of timestamp formats (though the ActiveRecord default of
YYYYMMDDHHMMSS is recommended and should be used in portable code).

  Sequel still defaults to the old migrator, but you can use the new
  one without making changes to your old migrations.  Just make sure
  your new migration starts with a version greater than 20000100,
  and Sequel will automatically convert the previous schema table to
  the new format.

* A new migration DSL was added:

    Sequel.migration do
      up do
      end

      down do
      end
    end

  The old style of using a Sequel::Migration subclass is still
  supported, but it is recommended that new code use the new DSL.

* The default migrator also had significant issues fixed.  First, it
  now saves the migration version after each migration, instead of
  after all migrations, which means Sequel won't attempt to apply
  already applied migrations if there was previously an error when
  applying multiple migrations at once on a database that didn't
  support transactional schema modification.

  Second, duplicate migration versions in the default migrator now
  raise an exception, as do missing migration versions.  Neither
  should happen when using the default migrator, which requires
  consecutive integer versions, similar to the old ActiveRecord
  migrator.

* Execution times for migrations are now logged to the database's
  loggers.

= New Plugins

* A sharding plugin has been added that allows model objects to work
  well with sharded databases.  When using it, model objects know
  which shard they were retrieved from, so when you save the object,
  it is saved back to that shard.  The sharding plugin also works
  with associations, so associated records are retrieved from the
  same shard the main object was retrieved from.  The sharding
  plugin also works with both methods of eager loading, and provides
  methods that you can use to create objects on specific shards.

* An update_primary_key plugin has been added that allows Sequel to
  work correctly if you modify the primary key of a model object.
  This should not be necessary if you are using surrogate keys, but
  if your database uses natural primary keys which can change, this
  should be helpful.

* An association_pks plugin has been added that adds association_pks
  and association_pks= methods to model objects for both one_to_many
  and many_to_many associations.  The association_pks method returns
  an array of primary key values for the associated objects, and the
  association_pks= method modifies the database to ensure that the
  object is only associated to the objects specified by the array of
  primary keys provided to it.

* A string_stripper plugin has been added that strips all strings
  that are assigned to attribute values.  This is useful for web
  applications where you want to easily remove leading and trailing
  whitespace in form entries before storing them in the database.

* A skip_create_refresh plugin has been added that skips the refresh
  after you save a new model object.  On most databases, Sequel
  refreshes the model object after inserting it in order to get
  values for all of the columns.  For performance reasons, you can
  use this plugin to skip the refresh if it isn't necessary for you.

= Other New Features

* Sequel::Model#set_fields and update_fields were added.  These
  methods have a similar API to set_only and update_only, but they
  operate differently.
While set_only and update_only operate over the hash, these methods
operate over the array of fields, so they don't raise errors if the
hash contains fields not in the array:

    params = {:a=>1, :b=>2, :c=>3}
    album = Album[1]
    # raises Error because :a is not in the fields
    album.set_only(params, [:b, :c])
    # Just sets the value of album.b and album.c
    album.set_fields(params, [:b, :c])

  Other than handling entries in the hash that aren't in the array,
  set_fields and update_fields also handle entries not in the hash
  differently:

    # Doesn't modify the object, since the hash is empty
    album.set_only({}, [:b, :c])
    # Sets album.b and album.c to nil, since they aren't in the hash
    album.set_fields({}, [:b, :c])

* The :eager_loader association option has a new API, though the
  previous API still works.  Instead of accepting three arguments,
  it can now accept a single hash argument, which will use the
  :key_hash, :rows, and :association keys for the previous three
  arguments.  The hash will also contain a :self key whose value is
  the dataset doing the eager load, which was not possible to
  determine using the old API.

* Sequel::SQL::Expression#hash has been added so that the objects
  are now safe to use as hash keys.

* A Dataset#order_prepend method has been added allowing you to
  prepend to an existing order.  This is useful if you want to
  modify a dataset's order such that it first orders by the columns
  you provide, but for any rows where the columns you provide are
  equal, uses the existing order to further order the dataset:

    ds.order(:albums__name).order_prepend(:artists__name)
    # ORDER BY artists.name, albums.name

* When creating foreign key columns, you can now use a :deferrable
  option to set up a foreign key constraint that is not checked
  until the end of the transaction:

    DB.create_table(:albums) do
      primary_key :id
      String :name
      foreign_key :artist_id, :artists, :deferrable=>true
    end

* many_to_many associations now support a :join_table_block option
  that is used by the add/remove/remove_all methods.  It can modify
  the dataset to ensure that certain columns are included when
  inserting or to add a filter so that only certain records are
  deleted.  It's useful if you have a many_to_many association that
  is filtered to only a subset of the matching rows in the join
  table (a sketch appears below).

* The single_table_inheritance plugin now supports :model_map and
  :key_map options to set up a custom mapping of column values to
  model classes.  For simple situations such as when you are mapping
  integer values to certain classes, a :model_map hash is
  sufficient:

    Employee.plugin :single_table_inheritance, :type_id,
      :model_map=>{1=>:Staff, 2=>:Manager}

  Here the :model_map keys are type_id column values, and the
  :model_map values are symbols or strings specifying class names.

  For more complex conditions, you can use a pair of procs:

    Employee.plugin :single_table_inheritance, :type_name,
      :model_map=>proc{|v| v.reverse},
      :key_map=>proc{|klass| klass.name.reverse}

  Here the type_name column is a string column holding the reverse
  of the class's name.

* The single_table_inheritance plugin now correctly sets up subclass
  filters for middle tables in a class hierarchy with more than 2
  levels.  For example, with this code:

    class Employee < Sequel::Model; end
    Employee.plugin :single_table_inheritance, :kind
    class Manager < Employee; end
    class Executive < Manager; end

  Sequel previously would not return Executives if you used
  Manager.all.  It now correctly recognizes subclasses so that it
  will return both Managers and Executives.
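Here is the sketch promised above for the :join_table_block option
(the models, tables, and the block signature shown are illustrative
assumptions, not code from this release):

    # Only join table rows flagged as active count as part of the
    # association, so remove_all should only delete those rows
    Album.many_to_many :active_tags, :class=>:Tag,
      :join_table=>:albums_tags,
      :join_table_block=>proc{|ds| ds.filter(:active=>true)}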
* Sequel::Model.qualified_primary_key_hash has been added, giving
  you a hash that can be used for filtering.  It is similar to
  primary_key_hash, but it qualifies the keys with the model's
  table.  It's useful if you have joined the table to another table
  that has columns with the same name, but you want to only look for
  a single model object in that dataset.

* For consistency, you can now use Dataset#order_append as an alias
  for order_more.

= Other Improvements

* Sequel now correctly removes schema entries when altering tables.
  Previously, some adapters that had to query the existing schema
  when altering tables resulted in the previous schema being cached.

* Sequel::Model::Errors#on now always returns nil if there are no
  errors on the attribute.  Previously, it would return an empty
  array in certain cases.  Additionally, Sequel::Model::Errors#empty?
  now returns true if there are no errors, whereas previously in
  certain cases it would return false even when there were no
  errors.

* The schema_dumper extension now works with tables specified as
  Sequel::SQL::Identifiers.

* Sequel now recognizes the timestamp(N) with(out) time zone column
  type.

* The lazy_attributes plugin no longer requires the core extensions
  to work correctly.

* DatabaseDisconnectError support has been added to the ODBC
  adapter, allowing Sequel to detect disconnects and remove the
  connection from the connection pool.

* A leak of JDBC statement objects when using transactions was fixed
  in the jdbc adapter.

* The jdbc adapter now gives a nicer error message if you use a
  connection string that it doesn't recognize and there is an error
  when connecting.

* Temporary table creation was fixed on Microsoft SQL Server, but it
  is not recommended as it changes the name of the table.  If you
  use Microsoft SQL Server, you should prefix your temporary table
  names with # and use the regular create table method.

* A large number of guides were added to Sequel to make it easier
  for new and existing users to learn more about Sequel.  The
  following guides were added:

  * Querying in Sequel
  * Migration and Schema Modification
  * Model Hooks
  * Model Validations
  * Sequel for SQL Users
  * Sequel for ActiveRecord Users

* RDoc section support was added to Sequel::Database, making the
  method documentation easier to read.

= Backwards Compatibility

* Sequel::Database now defines the indexes and tables methods, even
  if the adapter does not implement them, similar to how connect and
  execute are defined.  Previously, you could use respond_to? to
  check if the adapter supported them; now they raise
  Sequel::NotImplemented if the database adapter does not implement
  them.

* Sequel used to raise NotImplementedError in certain default
  definitions of methods inside Sequel::Database and
  Sequel::Dataset, when the methods were supposed to be overridden
  in subclasses.  Sequel now uses a Sequel::NotImplemented exception
  class for these exceptions, which is a subclass of Sequel::Error.

* Sequel no longer applies all association options to the dataset
  used to remove all many_to_many associated objects.  You should
  use the new :join_table_block option to get similar behavior if
  you were filtering the many_to_many association based on columns
  in the join table and you wanted remove_all to only remove the
  related columns.

* Sequel now calls certain before and after hook actions in plugins
  in a different order than before.  This should not have an effect
  unless you were relying on them being called in the previous
  order.
Now, when overriding before hooks in plugins, Sequel always does
actions before calling super, and when overriding after hooks in
plugins, Sequel always does actions after calling super.

* The hook_class_methods plugin no longer skips later after hooks if
  a previous after hook returns false.  That behavior now only
  occurs for before hooks.

* Sequel now only removes primary key values when updating objects
  if you are saving the entire object and you have not modified the
  values of the primary keys.  Previously, Sequel would remove
  primary key values when updating even if you specified the primary
  key column specifically or the primary key column was modified and
  you used save_changes/update.

* Sequel now uses explicit methods instead of aliases for certain
  methods.  This should only affect you if for example you overrode
  Dataset#group to do one thing and wanted Dataset#group_by to do
  the default action.  Now, Dataset#group_by, and methods like it,
  are explicit methods that just call the methods they previously
  aliased.  This also means that if you were overriding
  Dataset#group and explicitly aliasing group_by to it, you no
  longer need the alias.

* The single_table_inheritance plugin now uses IN instead of = for
  subclass filters.  This could lead to poor performance if the
  database has a very bad query planner.

* The private transaction_statement_object method was removed from
  the JDBC adapter, and Sequel will no longer check for the presence
  of the method in the transaction code.

* The Sequel::Migrator object is now a class instead of a module,
  and has been pretty much rewritten.  If you were using any methods
  of it besides apply and run, they no longer work.
ruby-sequel-4.1.1/doc/release_notes/3.13.0.txt000066400000000000000000000200551220156535500206600ustar00rootroot00000000000000
= New Plugins

* A json_serializer plugin was added that allows you to serialize
  model instances or datasets to JSON using to_json.  It requires the
  json library.  The API was modeled on ActiveRecord's JSON
  serialization support.  You can use :only and :except options to
  specify the columns included, :include to specify associations to
  include, as well as pass options to nested associations using a
  hash.  In addition to serializing to JSON, it also adds support
  for parsing JSON to model objects via JSON.parse or #from_json.

* An xml_serializer plugin was added that allows you to serialize
  model instances or datasets to XML.  It requires the nokogiri
  library.  It has a similar API to the json_serializer plugin,
  using to_xml instead of to_json, and the from_xml class method
  instead of JSON.parse.

* A tree plugin was added that allows you to treat Sequel::Model
  objects as being part of a tree.  It provides similar features to
  rcte_tree, but works on databases that don't support recursive
  common table expressions.  In addition to the standard parent and
  children associations, it provides instance methods to get the
  ancestors, descendants, and siblings of the given tree node, and
  class methods to get the roots of the tree.

* A list plugin was added that allows you to treat Sequel::Model
  objects as being part of a list.  This adds instance methods to
  get the next and prev items in the list, or to move the item to a
  specific place in the list (a short sketch appears below).  You
  can specify that all rows in the table belong to the same list, or
  specify arbitrary scopes so that the same table can contain many
  separate lists.

= Other New Features

* Sequel is now compatible with Ruby 1.9.2pre3.
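Here is the short sketch promised above for the list plugin's
instance API (the Item model is hypothetical, and the snippet is an
illustration of the plugin's style rather than verified output):

    Item.plugin :list
    item = Item[1]
    item.next           # the next item in the list
    item.prev           # the previous item in the list
    item.move_to(3)     # move this item to position 3
    item.move_to_top
    item.move_to_bottom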
* Sequel now supports prepared transactions/two-phase commit on
  PostgreSQL, MySQL, and H2.  You can specify that you want to use
  prepared transactions using the :prepare option which should be
  some transaction id string:

    DB.transaction(:prepare=>'some string') do ... end

  Assuming that no exceptions are raised in the transaction block,
  Sequel will prepare the transaction.  You can then commit the
  transaction later:

    DB.commit_prepared_transaction('some string')

  If you need to roll back the prepared transaction, you can do so
  as well:

    DB.rollback_prepared_transaction('some string')

* Sequel now supports customizable transaction isolation levels on
  PostgreSQL, MySQL, and Microsoft SQL Server.  You can specify the
  transaction isolation level to use for any transaction using the
  :isolation option with an :uncommitted, :committed, :repeatable,
  or :serializable value:

    DB.transaction(:isolation=>:serializable) do ... end

  You can also set the default isolation level for transactions via
  the transaction_isolation_level Database attribute:

    DB.transaction_isolation_level = :committed

  If you are connecting to Microsoft SQL Server, it is recommended
  that you set a default transaction isolation level if you plan on
  using this feature.

* You can specify a NULLS FIRST/LAST ordering by using the
  :nulls=>:first/:last option to asc and desc:

    Album.filter(:release_date.desc(:nulls=>:first),
      :name.asc(:nulls=>:last))
    # ORDER BY release_date DESC NULLS FIRST,
    #          name ASC NULLS LAST

  This syntax is supported by PostgreSQL 8.3+, Firebird 1.5+,
  Oracle, and probably some other databases as well, and makes it
  possible for the user to specify whether NULL values should sort
  before or after other values.

* Sequel::Model.find_or_create now accepts a block that is yielded a
  new model object to be created if an existing model object is not
  found.

    Node.find_or_create(:name=>'A'){|i| i.parent_id = 4}

* The :frame option for windows and window functions can now be a
  string that is used literally in the SQL.  This is necessary if
  you want to specify a custom frame, such as one that uses a
  specific number of rows preceding or following.

* Savepoints are now supported on H2.

* A :methods_module association option was added, allowing you to
  specify the module into which association instance methods are
  placed.  By default, it uses the module containing the column
  accessor methods.

= Other Improvements

* The :encoding option for the native MySQL adapter should now work
  correctly in all cases.  This fix was included in 3.12.1.

* Sequel now handles arrays of two element arrays automatically when
  using them as the value of a filter hash:

    DB[:a].filter([:a, :b]=>[[1, 2], [3, 4]])

  Previously, you had to call .sql_array on the array in order to
  tell Sequel that it was a value list and not a conditions
  specifier.

* Sequel no longer attempts to use class polymorphism in the
  class_table_inheritance plugin if you don't specify a cti_key.

* When using the native SQLite adapter, prepared statements are now
  cached per connection for increased performance.  Previously,
  Sequel prepared a new statement for every query.

* tinyint(1) columns are now handled as booleans when connecting to
  MySQL via JDBC.

* On PostgreSQL, if no :schema option is provided for
  Database#tables, #table_exists?, or #schema, and no default_schema
  is used, assume all schemas except the default non-public ones.
  Previously, it assumed the public schema for tables and
  table_exists?, but did not assume any schema for #schema.
This fixes issues if you use table names that overlap with table names in the information_schema, such as domains. It's still recommended that you specify a default_schema if you are using a schema other than public. * Unsigned integers are now handled correctly in the schema dumper. * Sequel::SQL::PlaceholderLiteralString is now a GenericExpression subclass, allowing you to treat it like most other Sequel expression objects: '(a || ?)'.lit(:b).like('Test%') # ((a || b) LIKE 'Test%') * Sequel now supports the bitwise shift operators (<< and >>) on Microsoft SQL Server by emulating them. * Sequel now supports most bitwise operators (&, |, ^, <<, >>) on H2 by emulating them. The bitwise complement operator is not yet supported. * Sequel now logs the SQL queries that are sent when connecting to MySQL. * If a plugin cannot be loaded, Sequel now gives a more detailed error message. = Backwards Compatibility * Array#sql_array and the Sequel::SQL::SQLArray class are now considered deprecated. Use the Array#sql_value_list and the Sequel::SQL::ValueList class instead. SQLArray is now just an alias for ValueList, but it now is an Array subclass instead of a Sequel::SQL::Expression subclass. * Using the ruby bitwise xor operator (^) on PostgreSQL now uses PostgreSQL's bitwise xor operator (#) instead of PostgreSQL's exponentiation operator (^). If you want exponentiation, use the power function. * Using the ruby bitwise complement operator (~) on MySQL now returns a signed integer instead of an unsigned integer, for better compatibility with other databases. * Using nil as a case expression value (the 2nd argument to Hash#case and Array#case) will now use NULL as the case expression value, instead of omitting the case expression value: # 3.12.0 {1=>2}.case(0, nil) # CASE WHEN 1 THEN 2 ELSE 0 END # 3.13.0 {1=>2}.case(0, nil) # CASE NULL WHEN 1 THEN 2 ELSE 0 END In general, you would never use nil explicitly, but the new behavior makes more sense if you have a variable that might be nil: parent_id = Node[1].parent_id {1=>2}.case(0, parent_id) If parent_id IS NULL/nil, then previously Sequel would have generated unexpected SQL. If you don't want a case expression value to be used, do not pass a second argument to #case. * Some internal transaction methods now take an optional options hash, so if you have a custom adapter, you will need to make changes. * Some internal association methods now take an optional options hash. * Some Rakefile task names were modified in the name of consistency: spec_coverage -> spec_cov integration -> spec_integration integration_cov -> spec_integration_cov ruby-sequel-4.1.1/doc/release_notes/3.14.0.txt000066400000000000000000000105011220156535500206540ustar00rootroot00000000000000= New Features * Dataset#grep now accepts :all_patterns, :all_columns, and :case_insensitive options. Previously, grep would use a case sensitive search where it would match if any pattern matched any column. 
These three options give you more control over how the pattern
matching will work:

    dataset.grep([:a, :b], %w'%test% foo')
    # WHERE ((a LIKE '%test%') OR (a LIKE 'foo')
    #   OR (b LIKE '%test%') OR (b LIKE 'foo'))

    dataset.grep([:a, :b], %w'%foo% %bar%', :all_patterns=>true)
    # WHERE (((a LIKE '%foo%') OR (b LIKE '%foo%'))
    #   AND ((a LIKE '%bar%') OR (b LIKE '%bar%')))

    dataset.grep([:a, :b], %w'%foo% %bar%', :all_columns=>true)
    # WHERE (((a LIKE '%foo%') OR (a LIKE '%bar%'))
    #   AND ((b LIKE '%foo%') OR (b LIKE '%bar%')))

    dataset.grep([:a, :b], %w'%foo% %bar%',
      :all_patterns=>true, :all_columns=>true)
    # WHERE ((a LIKE '%foo%') AND (b LIKE '%foo%')
    #   AND (a LIKE '%bar%') AND (b LIKE '%bar%'))

    dataset.grep([:a, :b], %w'%test% foo', :case_insensitive=>true)
    # WHERE ((a ILIKE '%test%') OR (a ILIKE 'foo')
    #   OR (b ILIKE '%test%') OR (b ILIKE 'foo'))

* When using the schema plugin, you can now provide a block to the
  create_table methods to set the schema and create the table in the
  same call:

    class Artist < Sequel::Model
      create_table do
        primary_key :id
        String :name
      end
    end

* The tree plugin now accepts a :single_root option, which uses a
  before_save hook to attempt to ensure that there is only a single
  root in the tree.  It also adds a Model.root method to get the
  single root of the tree.

* The tree plugin now adds a Model#root? instance method to check if
  the current node is a root of the tree.

* Model#save now takes a :raise_on_failure option which will
  override the object's raise_on_save_failure setting.  This makes
  it easier to get the desired behavior (raise or just return false)
  in library code without using a begin/ensure block (a short sketch
  appears below).

* The Database#adapter_scheme instance method was added, which
  operates the same as the class method.

* Sequel now handles the literalization of OCI8::CLOB objects in the
  Oracle adapter.

= Other Improvements

* When using the timezone support, Sequel will now correctly load
  times and datetimes in standard time when the current timezone is
  in daylight time, or vice versa.  Previously, if you tried to load
  a time or datetime in December when in July in a timezone that
  used daylight time, it would be off by an hour.

* The rcte_tree plugin now works correctly when a :conditions option
  is used.

* The single_table_inheritance plugin now works correctly when the
  class discriminator column has the same name as an existing ruby
  method (such as type).

* Database#each_server now works correctly when a connection string
  is used to connect, instead of an options hash.

* Model#destroy now respects the object's use_transactions setting,
  instead of always using a transaction.

* Model#exists? now uses a simpler and faster query.

* Sequel now handles the aggregate methods such as count and sum
  correctly on Microsoft SQL Server when using an ordered dataset
  with a clause such as DISTINCT or GROUP and without a limit.

* Sequel now handles rename_table correctly on Microsoft SQL Server
  when using a case sensitive collation, or when qualifying the
  table with a schema.

* Sequel now parses the schema correctly on Oracle when the same
  table name is used in multiple schemas.

* Sequel now handles OCIInvalidHandle errors when disconnecting in
  the Oracle adapter.

* Sequel now raises a Sequel::Error instead of an ArgumentError if
  the current or target migration version does not exist.

* When a mismatched number of composite keys are used in
  associations, Sequel now uses a more detailed error message.

* Significant improvements were made to the Dataset and Model RDoc
  documentation.
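Here is the short sketch promised above for the :raise_on_failure
save option (the Album model and the failing record are
hypothetical):

    album = Album.new(:name=>'')
    # Return nil on failure instead of raising, regardless of the
    # model's raise_on_save_failure setting
    album.save(:raise_on_failure=>false)
    # Always raise on failure for this save only
    album.save(:raise_on_failure=>true)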
= Backwards Compatibility

* Model#valid? now must accept an optional options hash.

* The Model#save_failure private method was renamed to
  raise_hook_failure.

* The LOCAL_DATETIME_OFFSET_SECS and LOCAL_DATETIME_OFFSET constants
  have been removed from the Sequel module.

* Sequel now uses obj.to_json instead of JSON.generate(obj).  This
  shouldn't affect backwards compatibility, but did fix a bug in
  certain cases.
ruby-sequel-4.1.1/doc/release_notes/3.15.0.txt000066400000000000000000000055401220156535500206640ustar00rootroot00000000000000
= Performance Enhancements

* A mysql2 adapter was added to Sequel.  It offers a large (2-6x)
  performance increase over the standard mysql adapter.  In order to
  use it, you need to install mysql2, and change your connection
  strings to use mysql2:// instead of mysql://.

* Support for sequel_pg was added to the native postgres adapter,
  when pg is being used as the backend.  sequel_pg also offers a
  large (2-6x) performance increase over the default row fetching
  code that the Sequel postgres adapter uses.  In order to use it,
  you just need to install sequel_pg, and the postgres adapter will
  pick it up automatically.

* Mass assignment has been made about 10x faster by caching the
  allowed setter methods in the model.

= Other Improvements

* The following construct is now safe to use in environments that
  reload code without unloading existing constants:

    class MyModel < Sequel::Model(DB[:table])
    end

  Previously, this would raise a superclass mismatch TypeError.

* Sequel now handles the case where both an implicit and an explicit
  table alias are given to join_table, preferring the explicit
  alias.  This can happen if you are using models with aliased table
  names and eager graphing them.  Previously, this would result in
  invalid SQL, with both aliases being used.

* You can now use an aliased table for the :join_table option of a
  many_to_many association.

* The active_model plugin now supports the final release of
  ActiveModel 3.0.0.

* Typecasting now works correctly for attributes loaded lazily when
  using the lazy_attributes plugin.

* The class_table_inheritance plugin now works with non-integer
  primary keys on SQLite.

* Temporary tables are now ignored when parsing the schema on
  PostgreSQL.

* On MySQL, an :auto_increment key with a true value is added to the
  Database#schema output hash if the related column is auto
  incrementing.

* The mysql adapter now handles Mysql::Error exceptions raised when
  disconnecting.

* On SQLite, emulated alter_table commands that require dropping the
  table now preserve the foreign key information, if SQLite foreign
  key support is enabled (it is by default).

* DSN-less connections now work correctly in more cases in the ODBC
  adapter.

* A workaround has been added for a bug in the Microsoft SQL Server
  JDBC Driver 3.0, involving it incorrectly returning a smallint
  instead of a char type for the IS_AUTOINCREMENT metadata value.

* A bug in the error handling when connecting to PostgreSQL using
  the do (DataObjects) adapter has been fixed.

= Backwards Compatibility

* The caching of allowed mass assignment methods can result in the
  incorrect exception class being raised if you manually undefine
  instance setter methods in the model class.  If you do this, you
  need to clear the setter methods cache manually:

    MyModel.clear_setter_methods_cache
ruby-sequel-4.1.1/doc/release_notes/3.16.0.txt000066400000000000000000000033461220156535500206670ustar00rootroot00000000000000
= New Adapter

* A swift adapter was added to Sequel.
Swift is a relatively new ruby database library, built on top of a
relatively new backend called dbic++.  While not yet considered
production ready, it is very fast.

  The swift adapter is about 33% faster and 40% more memory efficient
  for selects than the postgres adapter using pg with sequel_pg,
  though it is slower and less memory efficient for inserts and
  updates.

  Sequel's swift adapter currently supports only PostgreSQL and
  MySQL, but support for other databases will probably be added in
  the future.

= Other Improvements

* Sequel now correctly literalizes DateTime objects on ruby 1.9 for
  databases that support fractional seconds.

* The identity_map plugin now handles composite keys in many_to_one
  associations.

* The rcte_tree plugin now works when the model's dataset does not
  select all columns.  This can happen when using the
  lazy_attributes plugin on the same model.

* Sequel now supports INTERSECT and EXCEPT on Microsoft SQL Server
  2005+.

* The Database#create_language method in the shared PostgreSQL
  adapter now accepts a :replace option to replace the currently
  loaded procedural language if it already exists.  This option is
  ignored for PostgreSQL versions before 9.0.

* The identity_map plugin now handles cases where the plugin is
  loaded separately by two different models.

= Backwards Compatibility

* While not technically backwards compatibility related, it was
  discovered that the identity_map plugin is incompatible with the
  standard eager loading of many_to_many and many_through_many
  associations.  If you want to eagerly load those associations and
  use the identity_map plugin, you should use eager_graph instead of
  eager.
ruby-sequel-4.1.1/doc/release_notes/3.17.0.txt000066400000000000000000000044451220156535500206710ustar00rootroot00000000000000
= New Features

* You can now change the level at which Sequel logs SQL statements,
  by calling Database#sql_log_level= with the method name symbol.
  The default is still :info for backwards compatibility.
  Previously, you had to use a proxy logger to get similar
  capability (a one-line sketch appears below).

* You can now specify graph aliases where the alias would be the
  same as the table column name using just the table name symbol,
  instead of having to repeat the alias as the second element of an
  array.  More clearly:

    # < 3.17.0:
    DB[:a].graph(:b, :a_id=>:id).
      set_graph_aliases(:c=>[:a, :c], :d=>[:b, :d])
    # >= 3.17.0:
    DB[:a].graph(:b, :a_id=>:id).set_graph_aliases(:c=>:a, :d=>:b)

  Both of these now yield the SQL:

    SELECT a.c, b.d FROM a LEFT OUTER JOIN b ON (b.a_id = a.id)

* You should now be able to connect to MySQL over SSL in the native
  MySQL adapter using the :sslca, :sslkey, and related options.

* Database#views and Database#view_exists? methods were added to the
  Oracle adapter, allowing you to get an array of view name symbols
  and to check whether a given view exists.

= Other Improvements

* The nested_attributes plugin now avoids unnecessary update calls
  when deleting associated objects, resulting in better performance.

* The optimistic_locking plugin now increments the lock column if no
  other columns were modified but Model#modified! was called.  This
  means it now works correctly with the nested_attributes plugin
  when no changes to the main model object are made.

* The xml_serializer plugin can now round-trip nil values correctly.
  Previously, nil values would be converted into empty strings.
  This is accomplished by including a nil attribute in the xml tag.
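The sketch promised above for the new sql_log_level setting is a
one-liner:

    DB.sql_log_level = :debug # log SQL statements at DEBUG level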
* Database#each_server now works correctly when using the jdbc and
  do adapters and a connection string without a separate :adapter
  option.

* You can now clone many_through_many associations.

* The default wait_timeout used by the mysql and mysql2 adapters was
  decreased slightly so that it works correctly with MySQL database
  servers that run on Windows.

* Many improvements were made to the AS400 jdbc subadapter.

* Many improvements were made to the swift adapter and subadapters.

* Dataset#ungraphed now removes any cached graph aliases set with
  set_graph_aliases or add_graph_aliases.
ruby-sequel-4.1.1/doc/release_notes/3.18.0.txt000066400000000000000000000101221220156535500206570ustar00rootroot00000000000000
= New Features

* Reversible migration support has been added:

    Sequel.migration do
      change do
        create_table(:artists) do
          primary_key :id
          String :name, :null=>false
        end
      end
    end

  The change block acts the same way as an up block, except that it
  automatically creates a down block that reverses the changes.  So
  the above is equivalent to:

    Sequel.migration do
      up do
        create_table(:artists) do
          primary_key :id
          String :name, :null=>false
        end
      end
      down do
        drop_table :artists
      end
    end

  The following methods are supported in a change block:

  * create_table
  * add_column
  * add_index
  * rename_column
  * rename_table
  * alter_table (supporting the following methods):
    * add_column
    * add_constraint
    * add_foreign_key (with a symbol, not an array)
    * add_primary_key (with a symbol, not an array)
    * add_index
    * add_full_text_index
    * add_spatial_index
    * rename_column

  Use of any other method in a change block will result in the
  creation of a down block that raises an exception.

* A to_dot extension has been added that adds a Dataset#to_dot
  method, which returns a string that can be used as input to the
  graphviz dot program in order to create visualizations of the
  dataset's abstract syntax tree.  Examples:

  * http://sequel.heroku.com/images/to_dot_simple.gif
  * http://sequel.heroku.com/images/to_dot_complex.gif
  * http://imgpaste.com/i/lxngy.gif

  Both the to_dot extension and reversible migrations support were
  inspired by Aaron Patterson's recent work on ActiveRecord and
  ARel.

* The user can now control how the connection pool handles attempts
  to access shards that haven't been configured.  The default is
  still to assume the :default shard.  However, you can specify a
  different shard using the :servers_hash option when connecting to
  the database:

    DB = Sequel.connect(..., :servers_hash=>Hash.new(:some_shard))

  You can also use this feature to raise an exception if an
  unconfigured shard is used:

    DB = Sequel.connect(..., :servers_hash=>Hash.new{raise ...})

* The mysql and mysql2 adapters now both support the :read_timeout
  and :connect_timeout options.  read_timeout is the timeout in
  seconds for reading back results of a query, and connect_timeout
  is the timeout in seconds before a connection attempt is
  abandoned.

= Other Improvements

* The json_serializer plugin will now typecast column values for
  columns with unrestricted setter methods when parsing JSON into
  model objects.  It now also calls the getter method when creating
  the JSON, instead of directly taking the values from the
  underlying hash.

* When parsing the schema for a model with an aliased table name,
  the unaliased table name is now used.

* The SQLite adapter has been updated to not rely on the native
  type_translation support, since that will be removed in the next
  major version of sqlite3-ruby.
Sequel now implements its own type translation in the sqlite
adapter, similarly to how the mysql and postgres adapters handle
type translation.

* On SQLite, when emulating natively unsupported schema methods such
  as drop_column, Sequel will now attempt to recreate applicable
  indexes on the underlying table.

* A more informative error message is now used when connecting fails
  when using the jdbc adapter.

* method_missing is no longer removed from Sequel::BasicObject on
  ruby 1.8.  This should improve compatibility in some cases with
  Rubinius.

= Backwards Compatibility

* On SQLite, Sequel no longer assumes that a plain integer in a
  datetime or timestamp field represents a unix epoch time.

* Previously, saving a model object that used the instance_hooks
  plugin removed all instance hooks.  Now, only the applicable hooks
  are removed.  So if you save a new object, the update instance
  hooks won't be removed.  And if you save an existing object,
  delete instance hooks won't be removed.

* The private Dataset#identifier_list method has been moved into the
  SQLite adapter, since that is the only place it was used.
ruby-sequel-4.1.1/doc/release_notes/3.19.0.txt000066400000000000000000000052351220156535500206710ustar00rootroot00000000000000
= New Features

* The add_* association methods now accept a primary key, and
  associate the receiver to the associated model object with that
  primary key:

    artist.add_album(42)
    # equivalent to: artist.add_album(Album[42])

* The validation_class_methods plugin now has the ability to reflect
  on validations:

    Album.plugin :validation_class_methods
    Album.validates_acceptance_of(:a)
    Album.validation_reflections
    # => {:a=>[[:acceptance, {:tag=>:acceptance,
    #     :allow_nil=>true, :message=>"is not accepted",
    #     :accept=>"1"}]]}

= Other Improvements

* In the postgres, mysql, and sqlite adapters, typecasting now uses
  methods instead of procs.  Since methods aren't closures (while
  procs are), this makes typecasting faster (up to 15%).

* When typecasting model column values, the classes of the new and
  existing values are checked in addition to the values themselves.
  Previously, if the new and existing values were equal (e.g. 1.0
  and 1), it wouldn't update the value.  Now, if the classes are
  different, it always updates the value.

* Date and DateTime objects are now handled correctly when using
  prepared statements/bound variables in the jdbc adapter.

* Date, DateTime, Time, true, false, and SQL::Blob objects are now
  handled correctly when using prepared statements/bound variables
  in the sqlite adapter.

* Sequel now uses varbinary(max) instead of image for the generic
  File (blob) type on Microsoft SQL Server.  This makes it possible
  to use an SQL::Blob object as a prepared statement argument.

* Sequel now handles blobs better in the Amalgalite adapter.

* When disconnecting a connection using the sqlite adapter, all open
  prepared statements are now closed first.  Previously, attempting
  to disconnect a connection with open prepared statements resulted
  in an error.

* The license file has been renamed from COPYING to MIT-LICENSE, to
  make it easier to determine at a glance which license is used.

= Backwards Compatibility

* Because Sequel switched the generic File type from image to
  varbinary(max) on Microsoft SQL Server, any migrations/schema
  modification methods that used the File type will now result in a
  different column type than before.

* The MYSQL_TYPE_PROCS, PG_TYPE_PROCS, and SQLITE_TYPE_PROCS
  constants have been removed from the mysql, postgres, and sqlite
  adapters, respectively.
The UNIX_EPOCH_TIME_FORMAT and FALSE_VALUES constants have also been
removed from the sqlite adapter.

* Typecasting in the sqlite adapters now uses to_i and to_f instead
  of Integer() and Float() with rescues.  If you put non-numeric
  data in numeric columns on SQLite, this could cause problems.
ruby-sequel-4.1.1/doc/release_notes/3.2.0.txt000066400000000000000000000250451220156535500206020ustar00rootroot00000000000000
New Features
------------

* Common table expressions (CTEs) are now supported.  CTEs use the
  SQL WITH clause, and specify inline views that queries can use.
  They also support a recursive mode, where the CTE can recursively
  query its own output, allowing you to do things like load all
  branches for a given node in a plain tree structure.

  The standard with takes an alias and a dataset:

    DB[:vw].with(:vw, DB[:table].filter{col < 1})
    # WITH vw AS (SELECT * FROM table WHERE col < 1)
    # SELECT * FROM vw

  The recursive with takes an alias, a nonrecursive dataset, and a
  recursive dataset:

    DB[:vw].with_recursive(:vw,
      DB[:tree].filter(:id=>1),
      DB[:tree].join(:vw, :id=>:parent_id).
        select(:vw__id, :vw__parent_id))
    # WITH RECURSIVE vw AS (SELECT * FROM tree
    #     WHERE (id = 1)
    #     UNION ALL
    #     SELECT vw.id, vw.parent_id
    #     FROM tree
    #     INNER JOIN vw ON (vw.id = tree.parent_id))
    # SELECT * FROM vw

  CTEs are supported by Microsoft SQL Server 2005+, DB2 7+, Firebird
  2.1+, Oracle 9+, and PostgreSQL 8.4+.

* SQL window functions are now supported, and a DSL has been added
  to ease their creation.  Window functions act similarly to
  aggregate functions but operate on sliding ranges of rows.

  In virtual row blocks (blocks passed to filter, select, order,
  etc.) you can now provide a block to method calls to change the
  default behavior to create functions that weren't possible
  previously.  The blocks aren't called, but their presence serves
  as a flag.

  What function is created depends on the arguments to the method:

  * If there are no arguments, an SQL::Function is created with the
    name of the method used, and no arguments.  Previously, it was
    not possible to create functions without arguments using the
    virtual row block DSL.  Example:

      DB.dataset.select{version{}} # SELECT version()

  * If the first argument is :*, an SQL::Function is created with a
    single wildcard argument (*).  This is mostly useful for count:

      DB[:t].select{count(:*){}} # SELECT count(*) FROM t

  * If the first argument is :distinct, an SQL::Function is created
    with the keyword DISTINCT prefacing all remaining arguments.
    This is useful for aggregate functions such as count:

      DB[:t].select{count(:distinct, col1){}}
      # SELECT count(DISTINCT col1) FROM t

  * If the first argument is :over, the second argument, if
    provided, should be a hash of options to pass to SQL::Window.
    The options hash can also contain :*=>true to use a wildcard
    argument as the function argument, or :args=>... to specify an
    array of arguments to use as the function arguments.

      DB[:t].select{rank(:over){}} # SELECT rank() OVER ()
      DB[:t].select{count(:over, :*=>true){}} # SELECT count(*) OVER ()
      DB[:t].select{sum(:over, :args=>col1,
        :partition=>col2, :order=>col3){}}
      # SELECT sum(col1) OVER (PARTITION BY col2 ORDER BY col3)

  PostgreSQL also supports named windows.  Named windows can be
  specified by Dataset#window, and window functions can reference
  them using the :window option.

* Schema information for columns now includes a :ruby_default entry
  which contains a ruby object that represents the default given by
  the database (which is stored in :default).
Not all :default entries can be parsed into a :ruby_default, but if the schema_dumper extension previously supported it, it should work. * Methods to create compound datasets (union, intersect, except) now take an options hash instead of a true/false flag. The previous API is still supported, but switching to specifying the ALL setting using :all=>true is recommended. Additionally, you can now set :from_self=>false to not wrap the returned dataset in a "SELECT * FROM (...)". * Dataset#ungraphed was added, which removes the graphing information from the dataset. This allows you to use Dataset#graph for the automatic aliasing, or #eager_graph for the automatic aliasing and joining, and then remove the graphing information so that the resulting objects will not be split into subhashes or associations. * There were some introspection methods added to Dataset to describe which capabilities that dataset does or does not support: supports_cte? supports_distinct_on? supports_intersect_except? supports_intersect_except_all? supports_window_functions? In addition to being available for the user to use, these are also used internally, so attempting to use a CTE on a dataset that doesn't support it will raise an Error. * Dataset#qualify was added, which is like qualify_to with a default of first_source. Additionally, qualify now affects PlaceholderLiteralStrings. It doesn't scan the string (as Sequel never attempts to parse SQL), but if you provide the column as a symbol placeholder argument, it will qualify it. * You can now specify the table and column Sequel::Migrator will use to record the current schema version. The new Migrator.run method must be used to use these new options. * The JDBC adapter now accepts :user and :password options, instead of requiring them to be specified in the connection string and handled by the JDBC driver. This should allow connections to Oracle using the Thin JDBC driver. * You can now specify the max_connections, pool_timeout, and single_threaded settings directly in the connection string: postgres:///database?single_threaded=t postgres:///database?max_connections=10&pool_timeout=20 * Dataset#on_duplicate_key_update now affects Dataset#insert when using MySQL. * You can now specify the :opclass option when creating PostgreSQL indexes. Currently, this only supports a single operator class for all columns. If you need different operator classes per column, please post on sequel-talk. * Model#autoincrementing_primary_key was added and can be used if the autoincrementing key isn't the same as the primary key. The only likely use for this is on MySQL MyISAM tables with composite primary keys where only one of the composite parts is autoincrementing. * You can now use database column values as search patterns and specify the text to search as a String or Regexp: String.send(:include, Sequel::SQL::StringMethods) Regexp.send(:include, Sequel::SQL::StringMethods) 'a'.like(:x) # ('a' LIKE x) /a/.like(:x) # ('a' ~ x) /a/i.like(:x) # ('a' ~* x) /a/.like(:x, 'b') # (('a' ~ x) OR ('a' ~ 'b')) * The Dataset#dataset_alias private method was added. It can be overridden if you have tables named t0, t1, etc. and want to make sure the default dataset aliases that Sequel uses do not clash with existing table names. * Sequel now raises an Error if you call Sequel.connect with something that is not a Hash or String. * bin/sequel now accepts a -N option to not test the database connection.
* An opening_databases.rdoc file was added to the documentation directory, which should be a good introduction for new users on how to set up a Database connection. Other Improvements ------------------ * MySQL native adapter SELECT is much faster than before, up to 75% faster. * JDBC SELECT is about 10% faster than before. It's still much slower than the native adapters, due to conversion issues. * bin/sequel now works with a YAML file on ruby 1.9. * MySQL foreign key table constraints have been fixed. * Database#indexes now works on PostgreSQL if the schema used is a Symbol. It also works on PostgreSQL versions all the way back to 7.4. * Graphing of datasets with dataset sources has been fixed. * Changing a column's name, type, or NULL status on MySQL now supports a much wider selection of column defaults. * The stored procedure code is now thread-safe. Sequel is thread-safe in general, but due to a bug the previous stored procedure code was not thread-safe. * The ODBC adapter now drops statements automatically instead of requiring the user to do so manually, making it more similar to other adapters. * The single_table_inheritance plugin no longer overwrites the STI field if the field already has a value. This allows you to use create in the generic class to insert a value that will be returned as a subclass: Person.create(:kind => "Manager") * When altering columns on MySQL, :unsigned, :elements, :size and other options given are no longer ignored. * The PostgreSQL shared adapter's explain and analyze methods have been fixed; they had been broken in 3.0. * Parsing of the server's version is more robust on PostgreSQL. It should now work correctly for 8.4 and 8.4rc1 type versions. Backwards Compatibility ----------------------- * Dataset#table_exists? has been removed, since it never worked perfectly. Use Database#table_exists? instead. * Model.grep now calls Dataset#grep instead of Enumerable#grep. If you are using Model.grep, you need to modify your application. * The MSSQL shared adapter previously used the :with option for storing the NOLOCK setting of the query. That option has been renamed to :table_options, since :with is now used for CTEs. This should not have an effect unless you were using the option manually. * Previously, providing a block to method calls in virtual row blocks did not change behavior, whereas now it causes a different code path to be used. In both cases, the block is not evaluated, but that may change in a future version. * Dataset#to_table_reference protected method was removed, as it was no longer used. * The pool_timeout setting is now converted to an Integer, so if you used to pass in a Float, it no longer works the same way. * Most files in adapters/utils have been removed, in favor of integrating the code directly into Database and Dataset. If you were previously checking for the UnsupportedIntersectExcept or related modules, use the Dataset introspection methods instead (e.g. supports_intersect_except?). * If you were using the ODBC adapter and manually dropping returned statements, you should note that now statements are dropped automatically, and the execute method doesn't return a statement object. * The MySQL adapter on_duplicate_key_update_sql is now a private method. * If you were modifying the :from dataset option directly, note that Sequel now expects this option to be preprocessed. See the new implementation of Dataset#from for an idea of the changes required. * Dataset#simple_select_all?
now returns false instead of true for a dataset that selects from another dataset. ruby-sequel-4.1.1/doc/release_notes/3.20.0.txt000066400000000000000000000027701220156535500206620ustar00rootroot00000000000000= New Features * The swift adapter now supports an SQLite subadapter. Use the :db_type => 'sqlite' option when connecting. You can use an in-memory database with the following connection string: swift:///?database=:memory:&db_type=sqlite * Arbitrary JDBC properties can now be set in the JDBC adapter using the :jdbc_properties option when connecting. The value of this option should be a hash where keys and values are JDBC property keys and values. * Basic Microsoft Access support was added to the ado adapter. The creation of autoincrementing primary key integers now works, and identifiers are now quoted with []. * The Database#indexes method now supports a :partial option when connecting to MySQL, which makes it include partial indexes (which are usually excluded). = Other Improvements * The class_table_inheritance plugin now handles subclass associations better. Previously, the implicit eager loading code had issues when you called an association method that only existed in the subclass. * The error message used when a validates_max_length validation is applied to a nil column value has been improved. You can override the message yourself using the :nil_message option. * The read_timeout and connect_timeout options now work correctly in the mysql adapter. * Another MySQL disconnect error message is now recognized. = Backwards Compatibility * The swift adapter was upgraded to support swift 0.8.1. Older versions of swift are no longer supported. ruby-sequel-4.1.1/doc/release_notes/3.21.0.txt000066400000000000000000000066401220156535500206630ustar00rootroot00000000000000= New Features * A tinytds adapter was added, giving Sequel users on a C-based ruby running on *nix easy access to Microsoft SQL Server. Previously, the best way to connect to Microsoft SQL Server from a C-based ruby on *nix was to use the ODBC adapter with unixodbc and freetds. However, setting that up is nontrivial, while setting up tinytds is very easy. Note that the tinytds adapter currently requires the git master branch of tiny_tds, but tiny_tds 0.4.0 should include the related changes. * An association_autoreloading plugin has been added to Sequel, which removes stale many_to_one associations from the cache when the associated foreign key setter is used to change the value of the foreign key. * bin/sequel now operates more like a standard *nix utility. If given a file on the command line after the connection string, it will assume that file has ruby code and load it. If stdin is not a tty, it will read from stdin and execute it as ruby code. For recent Linux users, this means you can have a shebang line such as: #!/usr/bin/sequel postgres://user:pass@host/db to create a self-contained script. * bin/sequel now supports -r and -I options similar to ruby's -r and -I options. * MySQL datasets now have a calc_found_rows method that uses SQL_CALC_FOUND_ROWS, which provides a fast way to limit the number of results returned by a dataset while having an easy way to determine how many rows would have been returned if no limit was applied. = Other Improvements * The Sequel::Model.plugin method can now be overridden just like any other method. Previously, it was the only method that was defined directly on the class. This allows the creation of plugins that can modify the plugin system itself.
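For example, a minimal sketch of a plugin method override (the logging behavior here is hypothetical):

  class Sequel::Model
    def self.plugin(plugin, *args, &block)
      # Log each plugin as it is loaded, then fall back to the
      # default plugin loading behavior via super.
      puts "Loading plugin: #{plugin}"
      super
    end
  end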
* Symbol splitting (:table__column___alias) now works correctly for identifiers that include characters that aren't in [\w ]. Among other things, this means that identifiers with accented characters or even kanji characters can be used with symbol splitting. * If cover? is defined, it is now used in preference to include? for the validates_includes/validates_inclusion_of validations. ruby 1.9 defines include? differently for some ranges and can be very slow, while cover? is similar to the 1.8 behavior of just checking the beginning and end of the range. * The bin/sequel -L option now takes effect even if the -m, -C, -d, or -D options are used. * The schema_dumper extension now recognizes the "bigint unsigned" type. * On Microsoft SQL Server, if joining to a subselect that uses a common table expression, that common table expression is promoted to the main dataset. This allows most queries to work correctly, but is vulnerable to issues if both the current dataset and the joined dataset use common table expressions with the same name. Unfortunately, unlike PostgreSQL, Microsoft SQL Server does not allow common table expressions to occur in subselects. * The NULL/NOT NULL, DEFAULT, and UNIQUE column options now use the proper order on H2 and Oracle, so they can now be used in conjunction with each other. * Row locks are now enabled on Oracle. * The join_table method on MySQL no longer ignores the block it was given. * The informix adapter now supports ruby-informix version >= 0.7.3, while still being backwards compatible with older versions. * The specs now run under both RSpec 2 and RSpec 1. ruby-sequel-4.1.1/doc/release_notes/3.22.0.txt000066400000000000000000000027501220156535500206620ustar00rootroot00000000000000= New Features * Support COLLATE in column definitions. At least MySQL and Microsoft SQL Server support it, and PostgreSQL 9.1 should as well. * When connecting to Microsoft SQL Server, you can use the mssql_unicode_strings accessor to turn off the default usage of unicode strings (N'') and use regular strings (''). This can improve performance, but changes the behavior. It's set to true by default for backwards compatibility. You can change it at both the dataset and database level: DB.mssql_unicode_strings = false # default for datasets dataset.mssql_unicode_strings = false # just this dataset * In the oracle adapter, if Sequel.application_timezone is :utc, set the timezone for the connection to use the 00:00 timezone. = Other Improvements * In the single_table_inheritance plugin, correctly handle a multi-level class hierarchy so that loading instances from a middle level of the hierarchy can return instances of subclasses. * Don't use a schema when creating a temporary table, even if default_schema is set. * Fix the migrator when a default_schema is used. * In the ado adapter, assume a connection to SQL Server if the :conn_string is given and doesn't indicate Access/Jet. * Fix fetching rows in the tinytds adapter when the identifier_output_method is nil. * The tinytds adapter now checks for disconnect errors, but it might not be reliable until the next release of tiny_tds. * The odbc adapter now handles ODBC::Time instances correctly. ruby-sequel-4.1.1/doc/release_notes/3.23.0.txt000066400000000000000000000154611220156535500206660ustar00rootroot00000000000000= New Features * Sequel now allows dynamic customization for eager loading. Previously, the parameters for eager loading were fixed at association creation time. Now, they can be modified at query time.
To dynamically modify an eager load, you use a hash with the proc as the value. For example, if you have this code: Artist.eager(:albums) And you only want to eagerly load albums where the id is greater than or equal to some number provided by the user, you do: min = params[:min].to_i Artist.eager(:albums=>proc{|ds| ds.where{id > min}}) This also works when eager loading via eager_graph: Artist.eager_graph(:albums=>proc{|ds| ds.where{id > min}}) For eager_graph, the dataset is the dataset to graph into the current dataset, and filtering it will result in an SQL query that joins to a subquery. You can also use dynamic customization while cascading to also eagerly load dependent associations, by making the hash value a single entry hash with a proc key and the value being the dependent associations to eagerly load. For example, if you want to eagerly load tracks for those albums: Artist.eager(:albums=>{proc{|ds| ds.where{id > min}}=>:tracks}) * Sequel also now allows dynamic customization for regular association loading. Previously, this was possible by using the association's dataset: albums = artist.albums_dataset.filter{id > min} However, then there was no handling of caching, callbacks, or reciprocals. For example: albums.each{|album| album.artist} Would issue one query per album to get the artist, because the reciprocal association was not set. Now you can provide a block to the association method: albums = artist.albums{|ds| ds.filter{id > min}} This block is called with the dataset used to retrieve the associated objects, and should return a modified version of that dataset. Note that ruby 1.8.6 doesn't allow blocks to take block arguments, so you have to pass the block as a separate proc argument to the association method if you are still using 1.8.6. * Sequel now supports filtering by associations. This wasn't previously supported as filtering is a dataset level feature and associations are a model level feature, and datasets do not depend on models. Now, model datasets have the ability to filter by associations. For example, to get all albums for a given artist, you could do: artist = Artist[1] Album.filter(:artist=>artist) Since the above can also be accomplished with: artist.albums this may not seem like a big improvement, but it allows you to filter on multiple associations simultaneously: Album.filter(:artist=>artist, :publisher=>publisher) For simple many_to_one associations, the above is just a simpler way to do: Album.filter(:artist_id=>artist.id, :publisher_id=>publisher.id) Sequel supports this for all association types, including many_to_many and many_through_many, where a subquery is used, and it also works when composite key associations are used: Album.filter(:artist=>artist, :tags=>tag) This will give you the albums for that artist that are also tagged with that tag. To provide multiple values for the same association, mostly useful for many_to_many associations, you can either use separate filter calls or specify the conditions as an array: Album.filter(:tags=>tag1).filter(:tags=>tag2) Album.filter([[:tags, tag1], [:tags, tag2]]) * A columns_introspection extension has been added that makes datasets attempt to guess their columns in some cases instead of issuing a database query. This can improve performance in cases where the columns are needed implicitly, such as graphing. After loading the extension, you can enable the support for specific datasets by extending them with Sequel::ColumnsIntrospection.
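For example, to enable the support for a single dataset (a minimal sketch; the table and columns are hypothetical):

  Sequel.extension :columns_introspection
  ds = DB[:albums].select(:id, :name)
  ds.extend(Sequel::ColumnsIntrospection)
  ds.columns # => [:id, :name], guessed without a database query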
To enable introspection for all datasets, use: Sequel::Dataset.introspect_all_columns * A serialization_modification_detection plugin has been added. Previously, Sequel could not detect modifications made to serialized objects. It could detect modification if you assigned a new value: model.hash_column = model.hash_column.merge(:foo=>:bar) but not if you just modified the object directly: model.hash_column[:foo] = :bar With this plugin, such modifications can be detected, at a potentially significant performance cost. = Other Improvements * When using a migration directory containing both older integer migrations and newer timestamp migrations, where some integer migrations have not been applied, make sure to apply the remaining integer migrations before the timestamp migrations. Previously, they could be applied out of order due to a lexicographic sort being used instead of a numeric sort. * If a model does not select all columns from its table, the insert_select optimization is no longer used. Previously, creating a new model object for such a model could result in the object containing columns that the model does not select. * You can now use :select=>[] as an option for many_to_many associations to select all columns from both the associated table and the join table. Previously, this raised an error and required you to use :select=>'*'.lit as a workaround. The default remains to select all columns in the associated table and none from the join table. * The xml_serializer plugin now handles namespaced models by using __ instead of / as the namespace separator. Previously, / was used and caused problems as it is not valid XML. * The :eager_grapher association option can now accept a proc that takes a single hash of options instead of 3 fixed arguments. Going forward, this is the recommended way to write custom :eager_graphers, and all of the internal ones have been converted. The previous way of using 3 arguments is still supported. * A bug in the identity_map plugin for many_to_one associations without full association reflection information has been fixed. * Sequel is now using GitHub Issues for issue tracking. Old issues have been migrated from Google Code. = Backwards Compatibility * The filter by associations support breaks backward compatibility for users who previously added an sql_literal instance method to Sequel::Model. Usually, that was done for reasons similar to, but inferior to, the filter by association support. The following code can be used as a temporary workaround until you can modify your program to use the new filter by associations support: Sequel::Model::Associations::DatasetMethods. send(:remove_method, :complex_expression_sql) * The private Sequel::Model#_load_associated_objects method now takes an additional, optional options hash. Plugins that override that method need to be modified. ruby-sequel-4.1.1/doc/release_notes/3.24.0.txt000066400000000000000000000377651220156535500207020ustar00rootroot00000000000000= Prepared Statement Plugins * The prepared_statements plugin makes Sequel::Model classes use prepared statements for creating, updating, and destroying model instances, as well as looking up model objects by primary key. With this plugin, all of the following will use prepared statements: Artist.plugin :prepared_statements Artist.create(:name=>'Foo') a = Artist[1] a.update(:name=>'Bar') a.destroy * The prepared_statements_safe plugin reduces the number of prepared statements that can be created by doing two things.
First, it makes the INSERT statements used when creating instances use as many columns as possible, setting specific values for all columns with parseable default values. Second, it changes save_changes to just use save, saving all columns instead of just the changed ones. The reason for this plugin is that Sequel's default behavior of using only the values specifically set when creating instances and having update only set changed columns by default can lead to a large number of prepared statements being created. For prepared statements to be used, each set of columns in the insert and update statements needs to have its own prepared statement. If you have a table with 1 primary key column and 4 other columns, you can have up to 2^4 = 16 prepared statements created, one for each subset of the 4 columns. If you have 1 primary key column and 20 other columns, there are over a million subsets, and you could hit your database limit for prepared statements (a denial of service attack). Using the prepared_statements_safe plugin mitigates this issue by reducing the number of columns that may or may not be present in the query, in many cases making sure that each model will only have a single INSERT and a single UPDATE prepared statement. * The prepared_statements_associations plugin allows normal association method calls to use prepared statements if possible. For example: Artist.plugin :prepared_statements_associations Artist.one_to_many :albums Artist[1].albums Will use a prepared statement to return the albums for that artist. This plugin works for all supported association types. There are some associations (filtered and custom associations) for which Sequel cannot currently use a prepared statement reliably; for those, Sequel will use a regular query. * The prepared_statements_with_pk plugin allows the new Dataset#with_pk method (explained below) to use prepared statements. For example: Artist.plugin :prepared_statements_with_pk Artist.filter(...).with_pk(1) Will use a prepared statement for this query. The most benefit from prepared statements comes from queries that are expensive to parse and plan but quick to execute, so using this plugin with a complex filter can in certain cases yield significant performance improvements. However, this plugin should be considered unsafe as it is possible that it will create an unbounded number of prepared statements. It extracts parameters from the dataset using Dataset#unbind (explained below), so if your code has conditions that vary per query but that Dataset#unbind does not handle, an unbounded number of prepared statements can be created. For example: Artist.filter(:a=>params[:b].to_i).with_pk(1) Artist.exclude{a > params[:b].to_i}.with_pk(1) are safe, but: Artist.filter(:a=>[1, params[:b].to_i]).with_pk(1) Artist.exclude{a > params[:b].to_i + 2}.with_pk(1) are not. For queries that are not safe, Dataset#with_pk should not be used with this plugin; you should switch to looking up by primary key manually (for a regular query): Artist.filter(:a=>[1, params[:b].to_i])[:id=>1] or using the prepared statement API to create a custom prepared statement: # PS = {} PS[:name] ||= Artist.filter(:a=>[1, :$b], :id=>:$id). prepare(:select, :name) PS[:name].call(:b=>params[:b].to_i, :id=>1) = Other New Features * Filtering by associations got a lot more powerful.
Sequel 3.23.0 introduced filtering by associations: Album.filter(:artist=>artist) This capability is much expanded in 3.24.0, allowing you to exclude by associations: Album.exclude(:artist=>artist) This will match all albums not by that artist. You can also filter or exclude by multiple associated objects: Album.filter(:artist=>[artist1, artist2]) Album.exclude(:artist=>[artist1, artist2]) The filtered dataset will match all albums by either of those two artists, and the excluded dataset will match all albums not by either of those two artists. You can also filter or exclude by using a model dataset: Album.filter(:artist=>Artist.filter(:name.like('A%'))).all Album.exclude(:artist=>Artist.filter(:name.like('A%'))).all Here the filtered dataset will match all albums where the associated artist has a name that begins with A, and the excluded dataset will match all albums where the associated artist does not have a name that begins with A. All of these types of filtering and excluding work with all association types that ship with Sequel, even the many_through_many plugin. * Sequel now supports around hooks, which wrap the related before hook, behavior, and after hook. Like other Sequel hooks, these are implemented as instance methods. For example, if you wanted to log DatabaseErrors raised during save: class Artist < Sequel::Model def around_save super rescue Sequel::DatabaseError => e # log the error raise end end All around hooks should call super, not yield. If an around hook doesn't call super or yield, it is treated as a hook failure, similar to before hooks returning false. For around_validation, the return value of super should be whether the object is valid. For other around hooks, the return value of super is currently true, but it's possible that will change in the future. * Dataset#with_pk has been added to model datasets, allowing you to find the object with the matching primary key: Artist.filter(:name.like('A%')).with_pk(1) This should make it easier to handle the common case where you want to find a particular object that is associated with another object: Artist[1].albums_dataset.with_pk(2) Before, there was no way to do that without manually specifying the primary key: Artist[1].albums_dataset[:id=>2] To use a composite primary key with with_pk, you have to provide an array: Artist[1].albums_dataset.with_pk([1, 2]) * Dataset#[] for model datasets will now call with_pk if given a single Integer argument. This makes the above case even easier: Artist[1].albums_dataset[2] Note that for backwards compatibility, this only works for single integer primary keys. If you have a composite primary key or a string/varchar primary key, you have to use with_pk. * Dataset#unbind has been added, which allows you to take a dataset that uses static bound values and convert them to placeholders. Currently, the only cases handled are SQL::ComplexExpression objects that use a =, !=, <, >, <=, or >= operator where the first argument is a Symbol, SQL::Identifier, or SQL::QualifiedIdentifier, and the second argument is a Numeric, String, Date, or Time.
Dataset#unbind returns a two-element array, where the first element is a modified copy of the receiver, and the second element is a bound variable hash: ds, bv = DB[:table].filter(:a=>1).unbind ds # DB[:table].filter(:a=>:$a) bv # {:a=>1} The purpose of doing this is that you can then use prepare or call on the returned dataset with the returned bound variables: ds.call(:select, bv) # SELECT * FROM table WHERE (a = ?); [1] ps = ds.prepare(:select, :ps_name) # PREPARE ps_name AS SELECT * FROM table WHERE (a = ?) ps.call(bv) # EXECUTE ps_name(1) Basically, Dataset#unbind takes a specific statement and attempts to turn it into a generic statement, along with the placeholder values it extracted. Unfortunately, Dataset#unbind cannot handle all cases. For example: DB[:table].filter{a + 1 > 10}.unbind will not unbind any values. Also, if you have a query with multiple different values for a variable, it will raise an UnbindDuplicate exception: DB[:table].filter(:a=>1).or(:a=>2).unbind * A defaults_setter plugin has been added that makes it easy to automatically set default values when creating new objects. This plugin makes Sequel::Model behave more like ActiveRecord in that new model instances (before saving) will have default values parsed from the database. Unlike ActiveRecord, only values with non-NULL defaults are set. Also, Sequel allows you to easily modify the default values used: Album.plugin :defaults_setter Album.new.values # {:copies_sold => 0} Album.default_values[:copies_sold] = 42 Album.new.values # {:copies_sold => 42} Before, this was commonly done in an after_initialize hook, but that's slower as it is also called for model instances loaded from the database. * A Database#views method has been added that returns an array of symbols representing view names in the database. This works just like Database#tables except it returns views. * A Sequel::ASTTransformer class was added that makes it easy to write custom transformers of Sequel's internal abstract syntax trees. Dataset#qualify now uses a subclass of ASTTransformer to do its transformations, as does the new Dataset#unbind. = Other Improvements * Database#create_table? now uses a single query with IF NOT EXISTS if the database supports such syntax. Previously, it issued a SELECT query to determine table existence. Sequel currently supports this syntax on MySQL, H2, and SQLite 3.3.0+. The Database#supports_create_table_if_not_exists? method was added to allow users to determine whether this syntax is supported. * Multiple column IN/NOT IN emulation now works correctly with model datasets (or other datasets that use a row_proc). * You can now correctly invert SQL::Constant instances: Sequel::NULL # NULL ~Sequel::NULL # NOT NULL Sequel::TRUE # TRUE ~Sequel::TRUE # FALSE * A bug in the association_pks plugin has been fixed in the case where the associated table had a different primary key column name than the current table. * The emulated prepared statement support now supports nil and false as bound values. * The to_dot extension was refactored for greater readability. The only change was a small fix in the display for SQL::Subscript instances. * The Dataset#supports_insert_select? method is now available to let you know if the dataset supports insert_select. You should use this method instead of respond_to? for checking for insert_select support.
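For example, a minimal sketch of checking the method before using insert_select (the table and column are hypothetical):

  ds = DB[:albums]
  if ds.supports_insert_select?
    # One round trip: insert the row and return it.
    row = ds.insert_select(:name=>'RF')
  else
    # Fall back to a regular insert plus a primary key lookup.
    row = ds[:id=>ds.insert(:name=>'RF')]
  end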
* Prepared statements/bound variables can now use a new :insert_select type for preparing a statement that will insert a row and return the row inserted, if the dataset supports insert_select. * The Model#initialize_set private method now exists for easier plugin writing. It is only called for new model objects, with the hash given to initialize. By default, it just calls set. * A small bug when creating anonymous subclasses of Sequel::Model on ruby 1.9 has been fixed. * If Thread#kill is used inside a transaction on ruby 1.8 or rubinius, the transaction is rolled back. This situation is not handled correctly on JRuby or ruby 1.9, and I'm not sure it's possible to handle correctly on those implementations. * The postgres adapter now supports the Sequel::Postgres::PG_NAMED_TYPES hash for associating conversion procs for custom types that don't necessarily have the same type oid on different databases. This hash uses symbol keys and proc values: Sequel::Postgres::PG_NAMED_TYPES[:interval] = proc{|v| ...} The conversion procs now use a separate hash per Database object instead of a hash shared across all Database objects. You can now modify the types for a particular Database object, but you have to use the type oid: DB.conversion_procs[42] = proc{|v| ...} * On SQLite and MSSQL, literalization of true and false values given directly to Dataset#filter has been fixed. So the following now works correctly on those databases: DB[:table].filter(true) DB[:table].filter(false) Unfortunately, because SQLite and MSSQL don't have a real boolean type, these will not work: DB[:table].filter{a & true} DB[:table].filter{a & false} You currently have to work around the issue by doing: DB[:table].filter{a & Sequel::TRUE} DB[:table].filter{a & Sequel::FALSE} It is possible that a future version of Sequel will remove the need for this workaround, but that requires having a separate literalization method specific to filters. * The MySQL bit type is no longer treated as a boolean. On MySQL, the bit type is a bitfield, which is very different from the MSSQL bit type, which is the closest thing to a boolean on MSSQL. * The bool database type is now recognized as a boolean. Some SQLite databases use bool, such as the ones used in Firefox. * SQL_AUTO_IS_NULL=0 is now set by default when connecting to MySQL using the swift or jdbc adapters. Previously, it was only set by default when using the mysql or mysql2 adapters. * Dataset#limit now works correctly on Access, using the TOP syntax. * Dataset#limit now works correctly on DB2, using the FETCH FIRST syntax. * The jdbc mssql subadapter was split into separate subadapters for sqlserver (using Microsoft's driver) and jtds (using the open source JTDS driver). * The jdbc jtds subadapter now supports converting Java CLOB objects to ruby strings. * Tables from the INFORMATION_SCHEMA are now ignored when parsing schema on JDBC. * The informix adapter has been split into shared/specific parts, and a jdbc informix subadapter has been added. * Dataset#insert_select now works correctly on MSSQL when the core extensions are disabled. * The sqlite adapter now logs when preparing a statement. * You no longer need to be a PostgreSQL superuser to run the postgres adapter specs. * The connection pool specs are now about 10 times faster and not subject to race conditions due to using Queues instead of sleeping. = Backwards Compatibility * Model#save no longer calls Model#valid?. It now calls the Model#_valid? private method that Model#valid? also calls.
To mark a model instance invalid, you should override the Model#validate method and add validation errors to the object. * The BeforeHookFailure exception class has been renamed to HookFailure since hook failures can now be raised by around hooks that don't call super. BeforeHookFailure is now an alias to HookFailure, so no code should break, but you should update your code to reflect the new name. * Any custom argument mappers used for prepared statements now need to implement the prepared_arg? private instance method and have it return true. * If your database uses bit as a boolean type and isn't MSSQL, it's possible that those columns will no longer be treated as booleans. Please report such an issue on the bugtracker. * It is possible that the filtering and excluding by association datasets will break backwards compatibility in some apps. This can only occur if you are using a symbol with the same name as an association with a model dataset whose model is the same as the associated class. As associations almost never have the same names as columns, this would require either aliasing or joining to another table. If for some reason this does break your app, you can work around it by changing the symbol to an SQL::Identifier or a literal string. * The Sequel::Postgres.use_iso_date_format= method now only affects future Database objects. * On MySQL, Database#tables no longer returns view names; it only returns table names. You have to use Database#views to get view names now. ruby-sequel-4.1.1/doc/release_notes/3.25.0.txt000066400000000000000000000061711220156535500206660ustar00rootroot00000000000000= New Features * drop_table, drop_view, drop_column, and drop_constraint all now support a :cascade option for using CASCADE. DB.drop_table(:tab, :cascade=>true) # DROP TABLE tab CASCADE DB.drop_column(:tab, :col, :cascade=>true) # ALTER TABLE tab DROP COLUMN col CASCADE A few databases support CASCADE for dropping tables and views, but only PostgreSQL appears to support it for columns and constraints. Using the :cascade option when the underlying database doesn't support it will probably result in a DatabaseError being raised. * You can now use datasets as expressions, allowing things such as: DB[:table1].select(:column1) > DB[:table2].select(:column2) # (SELECT column1 FROM table1) > (SELECT column2 FROM table2) DB[:table1].select(:column1).cast(Integer) # CAST((SELECT column1 FROM table1) AS integer) * Dataset#select_group has been added for grouping and selecting on the same columns. DB[:a].select_group(:b, :c) # SELECT b, c FROM a GROUP BY b, c * Dataset#exclude_where and #exclude_having methods have been added, allowing you to specify which clause to affect. #exclude's behavior is still to add to the HAVING clause if one is present, and use the WHERE clause otherwise. * Dataset#select_all now accepts optional arguments and will select all columns from those arguments if present: DB[:a].select_all(:a) # SELECT a.* FROM a DB.from(:a, :b).select_all(:a, :b) # SELECT a.*, b.* FROM a, b * Dataset#group and #group_and_count now both accept virtual row blocks: DB[:a].select(:b).group{c(d)} # SELECT b FROM a GROUP BY c(d) * If you use a LiteralString as a validation error message, Errors#full_messages will now not add the related column name to the start of the error message. * Model.set_dataset now accepts SQL::Identifier, SQL::QualifiedIdentifier, and SQL::AliasedExpression instances, treating them like Symbols.
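For example, each of these now points the model at a table (a minimal sketch; the names are hypothetical, and the last form uses a schema-qualified table):

  Album.set_dataset(:albums)
  Album.set_dataset(Sequel::SQL::Identifier.new(:albums))
  Album.set_dataset(Sequel::SQL::QualifiedIdentifier.new(:music, :albums))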
= Other Improvements * The association_pks plugin's setter method will now automatically convert a given array of strings to an array of integers if the primary key field is an integer field, which should make it easier to use in web applications. * nil bound variable, prepared statement, and stored procedure arguments are now handled correctly in the JDBC adapter. * On 1.9, you can now load plugins even when ::ClassMethods, ::InstanceMethods, or ::DatasetMethods is defined. = Backwards Compatibility * The tinytds adapter now only works with tiny_tds 0.4.5 and greater. Also, if you were using the tinytds adapter with FreeTDS 0.91rc1, you need to upgrade to FreeTDS 0.91rc2 for it to work. Also, if you were referencing an entry in the freetds.conf file, you now need to specify it directly using the :dataserver option when connecting; the adapter no longer copies the :host option to the :dataserver option. * On postgresql, Sequel no longer drops tables with CASCADE by default. You now have to use the :cascade option to drop_table if you want to use CASCADE. * The Database#drop_table_sql private method now takes an additional options hash argument. ruby-sequel-4.1.1/doc/release_notes/3.26.0.txt000066400000000000000000000074621220156535500206710ustar00rootroot00000000000000= Performance Enhancements * The internal implementation of eager_graph has been made 75% to 225% faster than before, with greater benefits to more complex graphs. * Dataset creation has been made much faster (2.5x on 1.8 and 4.4x on 1.9), and dataset cloning has been made significantly faster (40% on 1.8 and 20% on 1.9). = Other Improvements * Strings passed to setter methods for integer columns are no longer considered to be in octal format if they include leading zeroes. The previous behavior was never intended, but was a side effect of using Kernel#Integer. Strings with leading zeroes are now treated as decimal, and you can still use the 0x prefix to treat them as hexadecimal. If anyone was relying on the old octal behavior, let me know and I'll add an extension that restores the octal behavior. * The identity_map plugin now works with the standard eager loading of many_to_many and many_through_many associations. * Database#create_table! now only attempts to drop the table if it already exists. Previously, it attempted to drop the table unconditionally, ignoring any errors, which resulted in misleading error messages if dropping the table raised an error caused by permissions or referential integrity issues. * The default connection pool now correctly handles the case where a disconnect error is raised and an exception is raised while running the disconnection proc. * Disconnection errors are now detected when issuing transaction statements such as BEGIN/ROLLBACK/COMMIT. Previously, these statements did not handle disconnect errors on most adapters. * More disconnection errors are now detected. Specifically, the ado adapter and do postgres subadapter now handle disconnect errors, and the postgres adapter handles more types of disconnect errors. * Database#table_exists? now always issues a query to select from the table; it no longer attempts to parse the schema to determine the information on PostgreSQL and Oracle. * Date, DateTime, and Time values are now literalized correctly on Microsoft Access. * Connecting with the mysql adapter with an options hash now works if the :port option is a string, which makes it easier to use when the connection information is stored in YAML.
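For example (a minimal sketch; the file layout and keys are hypothetical):

  require 'yaml'
  opts = YAML.load_file('database.yml')
  # opts['port'] may be a string such as "3306"; that now works.
  DB = Sequel.connect(:adapter=>'mysql', :host=>opts['host'],
    :user=>opts['user'], :password=>opts['password'],
    :database=>opts['database'], :port=>opts['port'])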
* The xml_serializer plugin now works around a bug in pure-Java nokogiri regarding the handling of nil values. * Nicer error messages are now used if there is an attempt to call an invalid or restricted setter method. * The RDocs are now formatted with hanna-nouveau, which allows for section ordering, so the Database and Dataset RDoc pages are more friendly. = Backwards Compatibility * If you call a Dataset method such as #each on an eager_graphed dataset, you now get plain hashes that have column alias symbol keys and their values. Previously, you got a graphed response with table alias keys and model values. It's not wise to depend on this behavior; the only supported way of returning records when eager loading is to use #all. * An error is now raised if you attempt to eager load via Dataset#eager a many_to_many association that includes an :eager_graph option. Previously, incorrect SQL would have been generated and an error raised by the database. * Datasets are no longer guaranteed to have @row_proc, @identifier_input_method, and @identifier_output_method defined as instance variables. You should be using methods to access them anyway. * Database#table_exists? on PostgreSQL no longer accepts an options hash. Previously, you could use a :schema option. You must now provide the schema inside the table argument (e.g. :schema__table). * If you want to use the rdoc tasks in Sequel's Rakefile, and you are still using the hanna RDoc template with RDoc 2.3, you need to upgrade to using hanna-nouveau with RDoc 3.8+. ruby-sequel-4.1.1/doc/release_notes/3.27.0.txt000066400000000000000000000055211220156535500206660ustar00rootroot00000000000000= New Features * Model.dataset_module has been added for easily adding methods to a model's dataset: Album.dataset_module do def with_name_like(x) filter(:name.like(x)) end def selling_at_least(x) filter{copies_sold > x} end end Album.with_name_like('Foo%').selling_at_least(100000).all Previously, you could use def_dataset_method to accomplish the same thing. dataset_module is generally cleaner, plus you are using actual methods instead of blocks, so calling the methods is faster on some ruby implementations. * Sequel now uses a Sequel::SQLTime class (a subclass of Time) when dealing with values for SQL time columns (which don't have a date component). These values are handled correctly when used in filters or insert/update statements (using only the time component), so Sequel can now successfully round trip values for time columns. Not all adapters support returning time column values as SQLTime instances, but the most common ones do. * You can now drop foreign key, primary key, and unique constraints on MySQL by passing the :type=>(:foreign_key|:primary_key|:unique) option to Database#drop_constraint. * The ODBC adapter now has initial support for the DB2 database; use the :db_type=>'db2' option to load the support. = Other Improvements * The mysql2 adapter now uses native prepared statements. * The tinytds adapter now uses sp_executesql for prepared statements. * DateTime and Time objects are now converted to Date objects when they are assigned to a date column in a Model instance. * When converting a Date object to a DateTime object, the resulting DateTime object now has no fractional day components. Previously, depending on your timezone settings, it could have had fractional day components. * The mysql2 adapter now supports stored procedures, as long as they don't return results.
* Mass assignment protection now handles including modules in model classes and extending model instances with modules. Previously, if you defined a setter method in a module, access to it may have been restricted. * The prepared_statements_safe plugin now works on classes without datasets, so you can now do the following to load it for all models: Sequel::Model.plugin :prepared_statements_safe * Dataset#hash now works correctly when handling SQL::Expression instances. * Model#hash now correctly handles classes with no primary key or with a composite primary key. * Model#exists? now always returns false for new model objects. = Backwards Compatibility * If you were previously setting primary key values manually for new model objects and then calling exists? to see if the instance is already in the database, you need to change your code from: model.exists? to: !model.this.get(1).nil? ruby-sequel-4.1.1/doc/release_notes/3.28.0.txt000066400000000000000000000302551220156535500206710ustar00rootroot00000000000000= New Adapter Support * Sequel now has much better support for the DB2 database. * An ibmdb adapter has been added, and is the recommended adapter to use if you want to connect to DB2 from MRI. * A jdbc db2 subadapter has been added, allowing good DB2 support on JRuby. * The db2 adapter has been cleaned up substantially, and now works well, but it is still recommended that you switch to ibmdb if you are using the db2 adapter. * The firebird adapter has been split into shared and specific parts, and quite a few fixes were made to it. * A jdbc firebird subadapter has been added, allowing connection to firebird databases from JRuby. = New PostgreSQL 9.1 Features * Dataset#returning has been added for using the RETURNING clause on INSERT/UPDATE/DELETE queries. RETURNING allows such queries to return results in much the same way as a SELECT query works. When Dataset#returning is used, Dataset#insert, #update, and #delete now accept a block that is passed to Dataset#fetch_rows, which yields a plain ruby hash to the block for each row inserted, updated, or deleted. If Dataset#returning is used and a block is not given to those methods, those methods will return an array of plain hashes for all rows inserted, updated, and deleted. * Dataset#with_sql now treats a symbol given as the first argument as a method name to call to get the SQL. The expected use case for this is with Dataset#returning and insert/update/delete: DB[:items]. returning(:id). with_sql(:update_sql, :b => :b + 1). map(:id) Basically, it makes it easier to statically set the insert/update/delete SQL, and then be able to use the full dataset API for returning results. As mentioned above, using Dataset#returning with #insert, #update, and #delete yields plain hashes, so if you want to have the row_proc applied (e.g. you are using models), you need to use this method instead, since you can then call #each or #all to make sure the row_proc is called on all returned rows. * Dataset#with (common table expressions) now affects INSERT/UPDATE/DELETE queries. * Database#create_table? now uses CREATE TABLE IF NOT EXISTS on PostgreSQL 9.1. = Other New Features * The :limit option is now respected when eager loading via either eager or eager_graph. By default, Sequel will just do an array slice of the resulting ruby array, which gets the correct answer, but does not offer any performance improvements. Sequel also offers a new :eager_limit_strategy option for using more advanced query types that only load the related records from the database.
The available values for the :eager_limit_strategy option are: :window_function - This uses the row_number window function partitioned by the related key fields. It can only be used on databases that support window functions (PostgreSQL 8.4+, Microsoft SQL Server 2005+, DB2). :correlated_subquery - This uses a correlated subquery that is limited. It works on most databases except MySQL and DB2. You can provide a value of true as the option to have Sequel pick a strategy to use. Sequel will never use a correlated subquery for true, since in some cases it can perform worse than loading all related records and doing the array slice in ruby. If you want to enable an eager_limit_strategy globally, you can set Sequel::Model.default_eager_limit_strategy to a value, and all associations that use :limit will default to using that strategy. * one_to_one associations that do not represent true one-to-one database relationships, but represent one-to-many relationships where you are only returning the first object based on a given order, are also now handled correctly when eager loading. Previously, eager loading such associations resulted in the last matching object being associated instead of the first matching object being associated. You can also use an :eager_limit_strategy for one_to_one associations. In addition to the :window_function and :correlated_subquery values, there is also a :distinct_on value that is available on PostgreSQL for using DISTINCT ON, which is the fastest strategy if you are using PostgreSQL. * Dataset#map, #to_hash, #select_map, #select_order_map, and #select_hash now accept arrays of symbols, and if given arrays of symbols, use arrays of results. For example: DB[:items].map([:id, :name]) # => [[1, 'foo'], [2, 'bar'], ...] DB[:items].to_hash([:id, :foo_id], [:name, :bar_id]) # => {[1, 3]=>['foo', 5], [2, 4]=>['bar', 6], ...} * For SQL expression objects where Sequel cannot deduce the type of the object, it will now consider the type of the argument when a &, |, or + operator is used. For example: :x & 1 Previously, this did "x AND 1"; now it does "x & 1". Using a logical operator on an integer doesn't make sense, but it's possible people did so if the database uses 1/0 for true/false. Likewise: :x + 'foo' Previously, this did "x + 'foo'" (addition); now it does "x || 'foo'" (string concatenation). * The sql_string, sql_number, and sql_boolean methods are now available on SQL::ComplexExpressions, so you can do: (:x + 1).sql_string + ' foos' # (x + 1) || ' foos' Previously, there was not an easy way to generate such SQL expressions. * :after_load association hooks are now applied when using eager_graph. Previously, they were only applied when using eager, not when using eager_graph. * Database#copy_table has been added to the postgres adapter if pg is used as the underlying driver. It allows you to get very fast exports of table data in text or CSV format. It also accepts datasets, allowing fast exports of arbitrary queries in text or CSV format. * SQL extract support (:timestamp.extract(:year)) is now emulated on the databases that don't natively support it, such as SQLite, Microsoft SQL Server, and DB2. At least the following values are supported for extraction: :year, :month, :day, :hour, :minute, and :second. * The bitwise XOR operator is now emulated on SQLite. Previously, attempting to use it would cause the database to raise an error. * A Database#use_timestamp_timezones accessor has been added on SQLite.
This allows you to turn off the use of timezones in timestamps by setting the value to false. This is necessary if you want to use the SQLite datetime functions, or the new ability to emulate extract. Note that this setting does not affect the current database content. To convert old databases to the new format, you'll have to resave all rows that have timestamps. At some point in the future, Sequel may default to not using timezones in timestamps on SQLite, so if you would like to rely on the current behavior, you should set this accessor to true now. * Sequel now works around bugs in MySQL when using a subselect with a LIMIT by using a nested subselect. * Sequel now works around issues in Microsoft SQL Server and DB2 when using a subselect with IN/NOT IN that uses the emulated offset support. * The jdbc adapter now returns java.sql.Clob objects as Sequel::SQL::Blobs. * Sequel now considers database clob types as the :blob schema type. * Sequel::SQLTime.create has been added for more easily creating instances: Sequel::SQLTime.create(hour, minute, second, usec) * Dataset#select_all now accepts SQL::AliasedExpression and SQL::JoinClause arguments and returns the appropriate SQL::ColumnAll value that selects all columns from the related table. * Model.set_dataset now accepts Sequel::LiteralString objects that represent table names. This usage is not encouraged except in rare cases such as using a set returning function in PostgreSQL. * Dataset#supports_cte? now takes an optional argument specifying the type of query (:insert, :update, :delete, :select). It defaults to :select. * Dataset#supports_returning? has been added. It requires an argument specifying the type of query (:insert, :update, or :delete). * Dataset#supports_cte_in_subqueries? has been added for checking for support for this ability. Apparently, only PostgreSQL currently supports this. For other adapters that support CTEs but not in subqueries, if a subquery with a CTE is used in a JOIN, the CTE is moved from the subquery to the main query. * Dataset#supports_select_all_and_column has been added for seeing if "SELECT *, foo ..." style queries are supported. This is false on DB2, which doesn't allow such queries. When it is false, using select_append on a dataset that doesn't specifically select columns will now change the query to do "SELECT table.*, foo ..." instead, working around the limitation on DB2. * Dataset#supports_ordered_distinct_on? has been added. Currently, this is only true on PostgreSQL. MySQL can emulate DISTINCT ON using GROUP BY, but it doesn't respect ORDER BY, so in some cases it cannot be used equivalently. * Dataset#supports_where_true? has been added for checking for support of WHERE TRUE (or WHERE 1 if 1 is true). Not all databases support using such a construct, and on the databases that do not, you have to use WHERE (1 = 1) or something similar. = Other Improvements * Sequel 3.27.0 was negatively affected by a serious bug in ActiveSupport's Time.=== that has still not been fixed, which broke the literalization of Time objects. In spite of the bad precedent it sets, Sequel now avoids using Time.=== on a subclass of Time to work around this ActiveSupport bug. * Dataset#with_pk now uses a qualified primary key instead of an unqualified primary key, which means it can now be used correctly after joining to a separate table. * Association after_load hooks when lazy loading are now called after the association has been loaded, which allows them to change which records are cached.
This makes the lazy load case more similar to the eager load case. * The metaprogrammatically created methods that implement Sequel's DSL support have been made significantly faster by using module_eval instead of define_method. * The type translation in the postgres, mysql, and sqlite adapters has been made faster by using Method objects that result in more direct processing. * Typecasting values for time columns from Time values to Sequel::SQLTime values now correctly handles fractional seconds on ruby 1.9. = Backwards Compatibility * Dataset#insert_returning_sql has been changed to a private method in the PostgreSQL and Firebird adapters, and it operates differently than it did previously. The private #insert_returning_pk_sql and #insert_returning_select_sql methods have been removed. * Dataset#with_pk no longer does some defensive checking for misuse of primary keys (e.g. providing a composite key when the model uses a single key). Previously, Sequel would raise an Error immediately; now such behavior is undefined, with the most likely behavior being the database raising an Error. * The :alias_association_type_map and :alias_association_name_map settings have been removed from the :eager_graph dataset option, in favor of just storing the related association reflection. * The internals of the db2 adapter have changed substantially; if you were relying on some of the private methods defined in it, you will probably have to modify your code. * The firebird adapter was substantially modified, specifically parts related to insert returning autogenerated primary key values, so if you were previously using the adapter you should probably take more care than usual when testing your upgrade. * The Dataset::WITH_SUPPORTED constant has been removed. * The Dataset#supports_cte? method now accepts an optional argument. If you overrode this method, your overridden method now must accept an optional argument. * If you were previously doing: :x & 1 and wanting "x AND 1", you have to switch to: :x.sql_boolean & 1 Likewise, if you were previously doing: :x + 'foo' and wanting "x + 'foo'", you need to switch to: :x.sql_number + 'foo' * Sequel no longer does defensive type checking in the SQL expression support, as it was often more strict than the database and would not allow the creation of expressions that were valid for the database. ruby-sequel-4.1.1/doc/release_notes/3.29.0.txt000066400000000000000000000444301220156535500206720ustar00rootroot00000000000000= New Adapter Support * Sequel now has much better support for Oracle, both in the ruby-oci8-based oracle adapter and in the jdbc/oracle adapter. * Sequel now has much better support for connecting to HSQLDB using the jdbc adapter. This support does not work correctly with the jdbc-hsqldb gem, since the version it uses is too old. You'll need to load the .jar file manually until the gem is updated. * Sequel now has much better support for connecting to Apache Derby databases using the jdbc adapter. This works with the jdbc-derby gem, but it's recommended you grab an updated .jar file as the jdbc-derby gem doesn't currently support truncate or booleans. * The db2 adapter has had most of the remaining issues fixed, and can now run Sequel's test suite cleanly. It's still recommended that users switch to the ibmdb adapter if they are connecting to DB2.
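For example, connecting to HSQLDB from JRuby with a manually loaded driver might look like this (a minimal sketch; the .jar path and connection string are hypothetical):

  require 'java'
  require '/path/to/hsqldb.jar'
  DB = Sequel.connect('jdbc:hsqldb:mem:testdb')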
* A mock adapter has been added which provides a mock Database object
  that allows you to easily set the returned rows, the number of rows
  modified by update/delete, and the autogenerated primary key
  integer for insert. It also allows you to set specific columns in
  the dataset when retrieving rows. The specs were full of partial
  implementations of mock adapters; this mock adapter is much more
  complete and offers full support for mocking transactions and
  database sharding. Example:

    DB = Sequel.mock(:fetch=>{:id=>1}, :numrows=>2, :autoid=>3)
    DB[:items].all # => [{:id => 1}]
    DB[:items].insert # => 3
    DB[:items].insert # => 4
    DB[:items].delete # => 2
    DB[:items].update(:id=>2) # => 2
    DB.sqls # => ['SELECT ...', 'INSERT ...', ...]

  In addition to being useful in the specs, the mock adapter is also
  used if you use bin/sequel without a database argument, which makes
  it much easier to play around with Sequel on the command line
  without being tied to a real database.

= New Transaction Features

* Database after_commit and after_rollback hooks have been added,
  allowing you to set procs that are called after the
  currently-in-effect transaction commits or rolls back. If the
  Database is not currently in a transaction, the after_commit proc
  is called immediately and the after_rollback proc is ignored.

* Model after_commit, after_rollback, after_destroy_commit, and
  after_destroy_rollback hooks have been added that use the new
  Database after_commit/after_rollback hook to execute code after
  commit or rollback.

* Database#transaction now supports a :rollback => :reraise option to
  reraise any Sequel::Rollback exceptions raised by the block.

* Database#transaction now supports a :rollback => :always option to
  always rollback the transaction, which is mostly useful when using
  transaction-based testing.

* Sequel.transaction has been added, allowing you to run simultaneous
  transactions on multiple Database objects:

    Sequel.transaction([DB1, DB2]){...}
    # similar to:
    DB1.transaction{DB2.transaction{...}}

  You can combine this with the :rollback => :always option to easily
  use multiple databases in the same test suite and make sure that
  changes are rolled back on all of them.

* Database#in_transaction? has been added so that users can detect
  whether the code is currently inside a transaction.

* The generic JDBC transaction support, used by 6 of Sequel's jdbc
  subadapters, now supports savepoints if the underlying JDBC driver
  supports savepoints.

= Other New Features

* A dataset_associations plugin has been added, allowing datasets to
  call association methods, which return datasets of rows in the
  associated table that are associated to rows in the current
  dataset.

    # Dataset of tracks from albums with name < 'M'
    # by artists with name > 'M'
    Artist.filter(:name > 'M').albums.filter(:name < 'M').tracks
    # SELECT * FROM tracks
    # WHERE (tracks.album_id IN (
    #   SELECT albums.id FROM albums
    #   WHERE ((albums.artist_id IN (
    #     SELECT artists.id FROM artists
    #     WHERE (name > 'M')))
    #   AND (name < 'M'))))

* Database#extend_datasets has been added, allowing you to do the
  equivalent of extending all of the database's future datasets with
  a module. For performance, it creates an anonymous subclass of the
  current dataset class and includes a module in it, and uses the
  subclass to create future datasets.

  Using this feature allows you to override any dataset method and
  call super, similar to how Sequel::Model plugins work.
  The method takes either a module:

    Sequel.extension :columns_introspection
    DB.extend_datasets(Sequel::ColumnsIntrospection)

  or a block that it uses to create an anonymous module:

    DB.extend_datasets do
      # Always select from table.* instead of *
      def from(*tables)
        ds = super
        if !@opts[:select] || @opts[:select].empty?
          ds = ds.select_all(*tables)
        end
        ds
      end
    end

* Database#<< and Dataset#<< now return self, which allows them to be
  used in chaining:

    DB << "UPDATE foo SET bar_id = NULL" << "DROP TABLE bars"
    DB[:foo] << {:bar_id=>0} << DB[:bars].select(:id)

* A Database#timezone accessor has been added, allowing you to
  override Sequel.database_timezone on a per-Database basis, which
  allows you to use two separate Database objects in the same process
  that have different timezones.

* You can now modify the type conversion procs on a per-Database
  basis when using the mysql, sqlite, and ibmdb adapters, by
  modifying the hash returned by Database#conversion_procs.

* Model.dataset_module now accepts a Module instance as an argument,
  and extends the model's dataset with that module.

* When using the postgres adapter with the pg driver, you can now use
  Database#listen to wait for notifications. All adapters that
  connect to postgres now support Database#notify to send
  notifications:

    # process 1
    DB.listen('foo') do |ev, pid, payload|
      ev # => 'foo'
      pid # => some Integer
      payload # => 'bar'
    end

    # process 2
    DB.notify('foo', :payload=>'bar')

* many_to_one associations now have a :qualify option that can be set
  to false to not qualify the primary key when loading the
  association. This shouldn't be necessary to use in most cases, but
  in some cases qualifying a primary key breaks certain queries (e.g.
  using JOIN USING on the same column on Oracle).

* Database#schema can now take a dataset as an argument if it just
  selects from a single table. If a dataset is provided, the schema
  parsing will use that dataset's identifier_input_method and
  identifier_output_method for the parsing, instead of the database's
  default. This makes it possible for Model classes to correctly get
  the table schema if they use a dataset whose
  identifier_(input|output)_method differs from the database default.

* On databases that support common table expressions (CTEs) but do
  not support CTE usage in subselects, Sequel now emulates support by
  moving CTEs from the subselect to the main select when using the
  Dataset from, from_self, with, with_recursive, union, intersect,
  and except methods.

* The bitwise complement operator is now emulated on H2.

* You can now set the convert_tinyint_to_bool setting on a
  per-Database basis in the mysql and mysql2 adapters.

* You can now set the convert_invalid_date_time setting on a
  per-Database basis in the mysql adapter.

* Database instances now have a dataset_class accessor that allows
  you to set which class is used when creating datasets. This is
  mostly used to implement the extend_datasets support, but it could
  be useful for other purposes.

* Dataset#unused_table_alias now accepts an optional 2nd argument,
  which should be an array of additional symbols that should be
  considered as already used.

* Dataset#requires_placeholder_type_specifiers? has been added to
  check if the dataset requires you use type specifiers for bound
  variable placeholders. The prepared_statements plugin now checks
  this setting and works correctly on adapters that set it to true,
  such as oracle.

* Dataset#recursive_cte_requires_column_aliases? has been added to
  check if you must provide a column list for a recursive CTE.
  The rcte_tree plugin now checks this setting and works correctly on
  databases that set it to true, such as Oracle and HSQLDB.

= Performance Improvements

* Numerous optimizations were made to loading model objects from the
  database, resulting in a 7-16% speedup.

  Model.call was added, and now .load is just an alias for .call.
  This allows you to make the model dataset's row_proc the model
  itself, instead of needing a separate block, which improves
  performance.

  While Model.load used to call .new (and therefore #initialize),
  Model.call uses .allocate/#set_values/#after_initialize for speed.
  This saves a method call or two, and skips setting the @new
  instance variable.

* Dataset#map, #to_hash, #select_map, #select_order_map, and
  #select_hash are now faster if any of the provided arguments are an
  array of symbols.

* The Model.[] optimization is now applied in more cases.

= Other Improvements

* Sequel now creates accessor methods for all columns in a model's
  table, even if the dataset doesn't select the columns. This has
  been the specified behavior for a while, but the spec was broken.
  This allows you to do:

    Model.dataset = DB[:table].select(:column1, :column2)
    Model.select_more(:column3).first.column3

* Model.def_dataset_method now correctly handles method names that
  can't be used directly (such as method names with spaces). This
  isn't so the method can be used with arbitrary user input, but it
  will allow safe creation of dataset methods that are derived from
  column names, which could contain spaces.

* Model.def_dataset_method no longer overrides private model methods.

* The optimization that Model.[] uses now works correctly if the
  model's dataset uses a different identifier_input_method than the
  database.

* Sharding is supported correctly by default for the transactions
  used by model objects. Previously, you had to use the sharding
  plugin to make sure the same shard was used for transactions as for
  the insert/update/delete statements.

* Sequel now fully supports using an aliased table for the
  :join_table option of a many_to_many association. The only real use
  case for an aliased :join_table option is when the join table is
  the same as the associated model table.

* A bug when eagerly loading a many_through_many association with
  composite keys where one of the join tables requires an alias has
  been fixed.

* Sequel's transaction internals have had substantial improvements.
  You can now open up simultaneous transactions on two separate
  shards of the same Database object in the same thread. The new
  design allows for future support of connection pools that aren't
  based on threads. Sequel no longer abuses thread-local variables to
  store savepoint state.

* Dataset#select_map and #select_order_map now return an array of
  single element arrays if given an array with a single entry as an
  argument. Previously, they returned an array of values, which
  wasn't consistent.

* Sequel's emulation of bitwise operators with more than 2 arguments
  now works on all adapters that use the emulation. The emulation was
  broken in 3.28.0 when more than 2 arguments were used on H2, DB2,
  Microsoft SQL Server, PostgreSQL, and SQLite.

* Dataset#columns now correctly handles the emulated offset support
  used on DB2, Oracle, and Microsoft SQL Server when using the jdbc,
  odbc, ado, and dbi adapters. Previously, Dataset#columns could
  contain the row number column, which wasn't in the hashes yielded
  by Dataset#each.

* Sequel can now parse primary key information on Microsoft SQL
  Server.
  Previously, the only adapter that supported this was the jdbc
  adapter, which uses the generic JDBC support. The shared mssql
  adapter now supports parsing the information directly from the
  database system tables. This means that if you are using Model
  objects with a Microsoft SQL Server database using the tinytds,
  odbc, or ado adapters, the model primary key information will be
  set automatically.

* Sequel's prepared statement support no longer defines singleton
  methods on the prepared statement objects.

* StringMethods#like is now case sensitive on SQLite and Microsoft
  SQL Server, making it more similar to other databases.

* Sequel now works around an SQLite column naming bug if you select
  columns qualified with the alias of a subselect without providing
  an alias for the column itself.

* Sequel now handles more bound variable types when using bound
  variables outside of prepared statements on SQLite.

* Sequel now works around a bug in certain versions of the
  JDBC/SQLite driver when emulating alter table support for
  operations such as drop_column.

* Sequel now emulates the add_constraint and drop_constraint alter
  table operations on SQLite, though the emulation has issues.

* Sequel now correctly handles composite primary keys when emulating
  alter_table operations on SQLite.

* Sequel now applies the correct PRAGMA statements by default when
  connecting to SQLite via the amalgalite and swift adapters.

* Sequel now supports using savepoints inside prepared transactions
  on MySQL.

* Sequel now closes JDBC ResultSet objects as soon as it is done
  using them, leading to potentially lower memory usage in the JDBC
  adapter, and fixes issues if you try to drop a table before GC has
  collected a related ResultSet.

* Sequel can now correctly insert all default values into a table on
  DB2. Before, this didn't work correctly if the table had more than
  one column.

* Another type of disconnection error is now recognized in the mysql2
  adapter.

* Sequel now uses better error messages if you attempt to execute a
  prepared statement without a name using the postgres, mysql, and
  mysql2 adapters.

* Some small fixes have been made that allow Sequel to run better
  when $SAFE=1. However, Sequel is not officially supported using
  $SAFE > 0, so there could be many issues remaining.

* Sequel's core and model specs were cleaned up by using the mock
  adapter to eliminate a lot of redundant code.

* Sequel's integration tests were sped up considerably, halving the
  execution time on some adapters.

= Backwards Compatibility

* Because Model.load is now an alias for .call, plugins should no
  longer override load. Instead, they should override .call.

* Loading model objects from the database no longer calls
  Model#initialize. Instead, it calls Model.allocate,
  Model#set_values, and Model#after_initialize. So if you were
  overriding #initialize and expecting the changes to affect model
  objects loaded from the database, you need to change your code.

  Additionally, @new is no longer set to false for objects retrieved
  from the database, since setting it to false hurts performance.
  Model#new? still returns true or false, so this only affects you if
  you are checking the instance variables directly.

* Dataset#<< no longer returns the autogenerated primary key for the
  inserted row. As mentioned above, it now returns self to allow for
  chaining. If you were previously relying on the return value,
  switch from #<< to #insert.
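A short before/after sketch of that #<< change (the table and values
are hypothetical):

    # 3.28.0 and earlier: << returned the autogenerated primary key
    id = DB[:items] << {:name=>'a'}

    # 3.29.0: use insert to get the primary key; << returns self,
    # allowing chaining
    id = DB[:items].insert(:name=>'a')
    DB[:items] << {:name=>'a'} << {:name=>'b'}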
* Dataset#map no longer calls the row_proc if given an argument, and
  Dataset#to_hash no longer calls the row_proc if given two
  arguments. This should only affect your code if you were using a
  row_proc that modified the content of the hash (e.g.
  Model#after_initialize). If you were relying on the old behavior,
  switch:

    dataset.map(:foo)
    # to
    dataset.map{|r| r[:foo]}

    dataset.to_hash(:foo, :bar)
    # to
    h = {}
    dataset.each{|r| h[r[:foo]] = r[:bar]}
    h

* Model classes now need to have a dataset before you can define
  associations on them.

* Model classes now pass their dataset to Database#schema, instead of
  their table name.

* The :eager_block association option (which defaults to the
  association's block argument) is now called before the :eager_graph
  association option has been applied, instead of after.

* The many_to_many association reflection :qualified_right_key entry
  is now a method named qualified_right_key. Switch any code using
  association_reflection[:qualified_right_key] to use
  association_reflection.qualified_right_key.

* If you are using like on SQLite and Microsoft SQL Server and want
  it to be case insensitive, switch to using ilike:

    # Case sensitive
    DB[:foos].where(:name.like('bar%'))
    # Case insensitive
    DB[:foos].where(:name.ilike('bar%'))

  Sequel now sets the case_sensitive_like PRAGMA to true by default
  on SQLite. To set it to false instead, pass the
  :case_sensitive_like=>false option to the database when creating
  it.

* Sequel's alter table emulation on SQLite now renames the current
  table then populates the replacement table, instead of populating
  the replacement table at a temporary name, dropping the current
  table, and then renaming the replacement table.

* The strings 'n' and 'no' (case insensitive) when typecasted to
  boolean are now considered false values instead of true.

* The transaction internals had extensive changes; if you have any
  code that depended on the transaction internals, it will probably
  require changes.

* Using the Sequel::MySQL module settings for convert_tinyint_to_bool
  and convert_invalid_date_time now only affects future Database
  objects. You should switch to using the per-Database methods if you
  are currently using the Sequel::MySQL module methods.

* The customized transaction support in the do (DataObjects) adapter
  was removed. All three subadapters (postgres, mysql, sqlite) of the
  do adapter implement their own transaction support, so this should
  have no effect unless you were using the do adapter with a
  different database type.

* The oracle support changed dramatically, so if you were relying on
  the internals of the oracle support, you should take extra care
  when upgrading.

= Advance Notice

* The next release of Sequel will contain significant changes to how
  a dataset is literalized into an SQL string. If you have a custom
  plugin, extension, or adapter that overrides a method containing
  "literal", "sql", or "quote", or you make other modifications or
  extensions to how Sequel currently literalizes datasets to SQL,
  your code will likely need to be modified to support the next
  release.

ruby-sequel-4.1.1/doc/release_notes/3.3.0.txt

New Features
------------

* An association_proxies plugin has been added.
  This is not a full-blown proxy implementation, but it allows you to
  write code such as:

    artist.albums.filter{num_tracks > 10}

  Without the plugin, you have to call filter specifically on the
  association's dataset:

    artist.albums_dataset.filter{num_tracks > 10}

  The plugin works by proxying array methods to the array of
  associated objects, and all other methods to the association's
  dataset. This results in the following behavior:

    # Will load the associated objects (unless they are already
    # cached), and return the length of the array
    artist.albums.length

    # Will issue an SQL query with COUNT (even if the association
    # is already cached), and return the result
    artist.albums.count

* The add_*/remove_*/remove_all_* association methods now take
  additional arguments that are passed down to the
  _add_*/_remove_*/_remove_all_* methods. One of the things this
  allows you to do is update additional columns in join tables for
  many_to_many associations:

    class Album
      many_to_many :artists
      def _add_artist(artist, values={})
        DB[:albums_artists].
          insert(values.merge(:album_id=>id, :artist_id=>artist.id))
      end
    end

    album = Album[1]
    artist1 = Artist[2]
    artist2 = Artist[3]
    album.add_artist(artist1, :relationship=>'composer')
    album.add_artist(artist2, :relationship=>'arranger')

* The JDBC adapter now accepts a :convert_types option to turn off
  Java type conversion. The option is true by default for backwards
  compatibility and correctness, but can be set to false to double
  performance. The option can be set at the database and dataset
  levels:

    DB = Sequel.jdbc('jdbc:postgresql://host/database',
                     :convert_types=>false)
    DB.convert_types = true
    ds = DB[:table]
    ds.convert_types = false

* Dataset#from_self now takes an option hash and respects an :alias
  option, giving the table alias to use.

* Dataset#unlimited was added, similar to unfiltered and unordered.

* SQL::WindowFunction is now a subclass of SQL::GenericExpression, so
  you can alias it and treat it like any other SQL::Function.

Other Improvements
------------------

* Microsoft SQL Server support is much, much better in Sequel 3.3.0
  than in previous versions. Support is pretty good with the ODBC,
  ADO, and JDBC adapters, close to the level of support for
  PostgreSQL, MySQL, SQLite, and H2. Improvements are too numerous to
  list, but here are some highlights:

  * Dataset#insert now returns the primary key (identity field), so
    it can be used easier with models.

  * Transactions can now use savepoints (except on ADO).

  * Offsets are supported when using SQL Server 2005 or 2008, using a
    ROW_NUMBER window function. However, you must specify an order
    for your dataset (which you probably are already doing if you are
    using offsets).

  * Schema parsing has been implemented, though it doesn't support
    primary key parsing (except on JDBC, since the JDBC support is
    used there).

  * The SQL syntax Sequel uses is now much more compatible, and most
    schema modification methods and database types now work
    correctly.

  * The ADO and ODBC adapters both work much better now. The ADO
    adapter no longer attempts to use transactions, since I've found
    that ADO does not give a stable native connection (and hence
    transactions weren't possible). I strongly recommend against
    using the ADO adapter in production.

* The H2 JDBC subadapter now supports rename_column, set_column_null,
  set_column_type, and add_foreign_key.

* Altering a column's type, null status, or default is now supported
  on SQLite. You can also add primary keys and unique columns.
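A hedged sketch of the newly supported alterations (the table and
column names are hypothetical):

    DB.rename_column(:items, :name, :title)        # now works on H2
    DB.set_column_type(:items, :price, BigDecimal) # SQLite and H2
    DB.set_column_null(:items, :title, false)      # SQLite and H2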
* Both the ADO and ODBC adapters now catch the native exception
  classes and raise Sequel::DatabaseErrors.

* Model classes now default to associating to other classes in the
  same scope. This makes it easier to use namespaced models.

* The schema parser and schema dumper now support the following
  types: nchar, nvarchar, ntext, smalldatetime, smallmoney, binary,
  and varbinary.

* You can now specify the null status for a column using :allow_null
  in addition to :null. This is to make it easier to use the table
  creation methods with the results of the schema parser.

* Renaming a NOT NULL column without a default now works on MySQL.

* Model class initialization now raises an exception if there is a
  problem connecting to the database.

* Connection pool performance has been increased slightly.

* The literal_time method in the ODBC adapter has been fixed.

* An unlikely but potential bug in the MySQL adapter has been fixed.

Backwards Compatibility
-----------------------

* The convert_tinyint_to_bool setting moved from the main Sequel
  module to the Sequel::MySQL module. The native MySQL adapter is the
  only adapter that converted tinyint columns to booleans when the
  rows are returned, so you can only use the setting with the native
  MySQL adapter.

  Additionally, the setting's behavior has changed. When parsing the
  schema, only tinyint(1) columns are now considered as boolean,
  instead of all tinyint columns. This allows you to use tinyint(4)
  columns for storing small integers and tinyint(1) columns as
  booleans, and not have the schema parsing support consider the
  tinyint(4) columns as booleans. Unfortunately, due to limitations
  in the native MySQL driver, all tinyint column values are converted
  to booleans upon retrieval, not just tinyint(1) column values.

  Unfortunately, the previous Sequel behavior was to use the default
  tinyint size (tinyint(4)) when creating boolean columns (using the
  TrueClass or FalseClass generic types). If you were using the
  generic type support to create the columns, you should modify your
  database to change the column type from tinyint(4) to tinyint(1).

  If you use MySQL with tinyint columns, these changes have the
  potential to break applications. Care should be taken when
  upgrading if these changes apply to you.

* Model classes now default to associating to other classes in the
  same scope. It's highly unlikely anyone was relying on the previous
  behavior, but if you have a model inside a module that you are
  associating to a model outside of a module, you now need to specify
  the associated class using the :class option.

* Model#save no longer includes the primary key fields in the SET
  clause of the UPDATE query, only in the WHERE clause. I'm not sure
  if this affects backwards compatibility of production code, but it
  can break tests that expect specific SQL.

* Behavior to handle empty identifiers has now been standardized. If
  any database adapter returns an empty identifier, Sequel will use
  'untitled' as the identifier. This can break backwards
  compatibility if the adapter previously used another default and
  you were relying on that default. This was necessary to fix any
  possible "interning empty string" exceptions.

* On MSSQL, Sequel now uses the datetime type instead of the
  timestamp type for generic DateTimes. It now uses bit for the
  TrueClass and FalseClass generic types, and image for the File
  generic type.
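A sketch of how those generic types now map when creating a table on
MSSQL (the table and column names are hypothetical):

    DB.create_table(:items) do
      DateTime :sold_at    # datetime (previously timestamp)
      TrueClass :in_stock  # bit
      File :image_data     # image
    end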
* Sequel now unescapes URL parts:

    Sequel.connect('ado:///db?host=server%5cinstance')

  However, this can break backward compatibility if you previously
  expected it not to be unescaped.

* The columns_for private SQLite Database method has been removed.

ruby-sequel-4.1.1/doc/release_notes/3.30.0.txt

= Dataset Literalization Refactoring

* As warned about in the 3.29.0 release notes, dataset literalization
  has been completely refactored. It now uses an append-only design
  which is faster in all cases, about twice as fast for large objects
  and deeply nested structures, and over two orders of magnitude
  faster in some pathological cases.

  This change should not affect applications, but may affect custom
  extensions or adapters that dealt with literalization of objects.
  Most literalization methods now have a method with an _append
  suffix that does the actual literalization, which takes the sql
  string to append to as the first argument. If you were overriding a
  literalization method, you now probably need to override the
  _append version instead. If you have this literalization method:

    def foo_sql(bar)
      "BAR #{literal(bar.baz)}"
    end

  You need to change the code to:

    def foo_sql_append(sql, bar)
      sql << "BAR "
      literal_append(sql, bar.baz)
    end

    def foo_sql(bar)
      sql = ""
      foo_sql_append(sql, bar)
      sql
    end

  If you have questions about modifying your custom adapter or
  extension, please ask on the Google Group or the IRC channel.

= New Features

* Model#set_server has been added to the base support (it was
  previously only in the sharding plugin), which allows you to set
  the shard on which to save/delete the model instance:

    foo1.set_server(:server_a).save
    foo2.set_server(:server_a).destroy

* Model#save now accepts a :server option that uses set_server to set
  the shard to use. Unlike most other #save options, this option
  persists past the end of the save. Previously, the :server option
  only affected the transaction code; it now affects the
  INSERT/UPDATE statement as well.

* When initiating a new dataset graph, any existing selected columns
  are assumed to be the columns to select for the graph from the
  current/master table. Before, there was not a way to specify the
  columns to select from the current/master table.

* A :graph_alias_base association option has been added, which is
  used to set the base alias name to use when eager graphing. This is
  mostly useful when cascading eager graphs to dependent
  associations, where multiple associations with the same name in
  different models are being graphed simultaneously.

* You can now specify nanoseconds and a timezone offset when
  converting a hash or array to a timestamp. The nanoseconds and
  offset are the 7th and 8th entries in the array, and the :nanos and
  :offset entries in the hash.

* The postgres adapter now respects a :connect_timeout option if you
  are using the pg driver.

= Other Improvements

* Type conversion of Java to Ruby types in the JDBC adapter has been
  made much faster, as conversion method lookup is now
  O(number of columns) instead of
  O(number of columns*number of rows).

* Sequel::SQL::Blob literalization is now much faster on adapters
  that use hex encoding, by switching to String#unpack('H*').

* Database#after_commit and after_rollback now respect the :server
  option to set the server/shard to use.

* Symbol splitting (e.g. for table__column) is now slightly faster.

* All adapters now pass the dataset :limit/:offset value through
  Dataset#literal instead of using it verbatim.
  Note that Dataset#limit already called to_i on input strings, so
  this isn't a security issue. However, the previous code broke if
  you provided a Sequel-specific object (e.g. Sequel::SQL::Function)
  as the :limit/:offset value.

* Calling graph and eager_graph on an already graphed dataset no
  longer modifies the receiver.

* Model#set_server now correctly handles the case where @this is
  already loaded.

* Dataset#destroy for model datasets now uses the dataset's shard for
  transactions.

* When emulating offset support using ROW_NUMBER (on Microsoft SQL
  Server, DB2, and Oracle), explicitly order by the ROW_NUMBER
  result, as otherwise the results are not guaranteed to be ordered.

* Explicitly force a case insensitive collation when emulating ILIKE
  on Microsoft SQL Server. Previously, ILIKE could be case sensitive
  on Microsoft SQL Server if case sensitive collation was the
  database default.

* Using on_duplicate_key_update with prepared statements on MySQL now
  works correctly.

* The tinytds adapter now works correctly if the
  identifier_output_method is nil.

* The plugin/extension specs were cleaned up using the mock adapter.

= Backwards Compatibility

* In addition to the previously mentioned dataset literalization
  changes, any custom adapters that overrode *_clause_methods methods
  need to be modified to add a method that adds the
  SELECT/UPDATE/INSERT/DELETE. Previously, this was done by default,
  but due to common table expressions and the dataset literalization
  changes, a separate method is now needed.

* Dataset#on_duplicate_key_update_sql has been removed from the
  shared mysql adapter.

* The :columns dataset option used when inserting is no longer
  literalized in advance.

* Dataset#as_sql no longer takes an expression, it just takes the
  alias, and only adds the alias part.

ruby-sequel-4.1.1/doc/release_notes/3.31.0.txt

= New Features

* The serialization plugin can now support custom serialization
  formats, by supplying a serializer/deserializer pair of callable
  objects. You can also register custom deserializers via
  Sequel::Plugins::Serialization.register_format, so that they can be
  referenced by name. Example:

    Sequel::Plugins::Serialization.register_format(:reverse,
      lambda{|v| v.reverse}, lambda{|v| v.reverse})
    class User < Sequel::Model
      serialize_attributes :reverse, :password
    end

* Dataset#import and #multi_insert now support a
  :return=>:primary_key option. When this option is used, the methods
  return an array of primary key values, one for each inserted row.
  Usage of this option on MySQL requires that a separate query be
  issued per row (instead of the single query for all rows that MySQL
  would usually use).

* PostgreSQL can now use Dataset#returning in conjunction with
  import/multi_insert to set a custom column to return.

* Microsoft SQL Server can now use Dataset#output in conjunction with
  import/multi_insert to set a custom column to return.

* Dataset#import and #multi_insert now respect a :server option to
  set the server/shard on which to execute the queries. Additionally,
  options given to this method are also passed to
  Dataset#transaction.

* Dataset#insert_multiple now returns an array of inserted primary
  keys.

* Model.def_column_alias has been added to make it easy to create
  alias methods for columns. This is useful if you have a legacy
  database and want to create friendly method names for the
  underlying columns. Note that this alias only affects the setter
  and getter methods.
  This does not affect the dataset level, so you still need to use
  the actual column names in dataset filters.

* many_to_one associations can now have the same name as the related
  foreign key column, using the :key_column option. Use of this
  feature is not recommended, as it is much better to either rename
  the column or rename the association. Here's an example of usage:

    # Example schema:
    # albums        artists
    #   :id    /-->   :id
    #   :artist --/   :name
    #   :name
    class Album < Sequel::Model
      def_column_alias(:artist_id, :artist)
      many_to_one :artist, :key_column=>:artist
    end

* The mock adapter can now mock out database types, by providing a
  shared adapter name as the host (e.g. mock://postgres). This
  emulation is not perfect, but in most cases it allows you to see
  what SQL Sequel would generate on a given database without needing
  to install the required database driver.

* Sequel now supports creating full text indexes on Microsoft SQL
  Server. Before using it, you must have previously setup a default
  full text search catalog, and you need to provide a :key_index
  option with an index name symbol.

* Dataset#group_rollup and #group_cube methods have been added for
  GROUP BY ROLLUP and GROUP BY CUBE support. These features are in a
  recent SQL standard, and they are supported to various degrees on
  Microsoft SQL Server, DB2, Oracle, MySQL, and Derby.

* Dataset#full_text_search on Microsoft SQL Server now supports
  multiple search terms.

* The jdbc adapter now supports a :login_timeout option, giving the
  timeout in seconds.

= Other Improvements

* Dataset#exists can now be used with prepared statement
  placeholders.

* Dataset#full_text_search can now be used with prepared statement
  placeholders on PostgreSQL, MySQL, and Microsoft SQL Server.

* If tables from two separate schemas are detected when parsing the
  schema for a table on PostgreSQL, an error is now raised.
  Previously, no error was raised, which led to weird errors later,
  such as duplicate columns in a model's primary_key.

* RETURNING is now supported with UPDATE/DELETE on PostgreSQL 8.2+.
  Previously, Sequel only supported it on 9.1+, but PostgreSQL
  introduced support for it in 8.2.

* The shared postgres adapter now correctly handles the return value
  for Dataset#insert if you provide a separate column array and value
  array on PostgreSQL < 8.2.

* Handle case in the PostgreSQL adapter where the server version
  cannot be determined via a query.

* H2 clob types are now treated as string instead of as blob.
  Treating clob as blob breaks on H2, as it doesn't automatically
  hex-unescape the input for clobs as it does for blobs.

* Dataset#empty? now works correctly when the dataset has an offset
  and offset support is being emulated.

* The mock adapter no longer defaults to downcasing identifiers on
  output.

= Backwards Compatibility

* Dataset#exists now returns a PlaceholderLiteralString instead of a
  LiteralString, which could potentially break some code. If you
  would like a String returned, you can pass the returned object to
  Dataset#literal:

    dataset.literal(dataset.exists)

* Dataset#from no longer handles :a__b__c___d as "a.b.c AS d". This
  was not the intended behavior, and nowhere else in Sequel is a
  symbol treated that way. Now, Dataset#from is consistent with the
  rest of Sequel, using "a.b__c AS d". This should only affect people
  in very rare cases, as most databases don't use three level
  qualified tables. One exception is Microsoft SQL Server, which can
  use three level qualified tables for cross-database access.
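A before/after sketch of that Dataset#from change:

    # 3.30.0 and earlier
    DB.from(:a__b__c___d) # SELECT * FROM a.b.c AS d

    # 3.31.0
    DB.from(:a__b__c___d) # SELECT * FROM a.b__c AS d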
* Previously, Dataset#insert_multiple returned an array of hashes,
  now it returns an array of primary key values.

* Dataset#EXRACT_CLOSE in the shared sqlite adapter has been renamed
  to Dataset#EXTRACT_CLOSE.

* Dataset::StoredProcedureMethods::SQL_QUERY_TYPE and
  Dataset::ArgumentMapper::SQL_QUERY_TYPE constants have been
  removed, as have related sql_query_type private methods.

* The serialization plugin was significantly refactored.
  Model.serialization_map now contains a callable object instead of a
  Symbol, and Model.serialization_format has been removed. The
  Model.define_serialized_attribute_accessors private method now
  takes two callable objects before the columns, instead of a single
  symbol.

ruby-sequel-4.1.1/doc/release_notes/3.32.0.txt

= New Features

* Prepared statements now support :map and :to_hash prepared
  statement types. The main reason for this is that certain
  extensions (e.g. sequel_pg) optimize map/to_hash calls, and there
  previously was not a way to use prepared statements with the
  map/to_hash optimizations.

* Sequel.empty_array_handle_nulls has been added to change how IN/NOT
  IN operations with an empty array are handled. See the Backwards
  Compatibility section for details.

* 5 new association options have been added that allow you to define
  associations where the underlying columns clash with standard ruby
  method names:

    many_to_one :primary_key_method
    one_to_many :key_method
    one_to_many :primary_key_column
    many_to_many :left_primary_key_column
    many_to_many :right_primary_key_method

  Using these new options, you can now define associations that work
  correctly when the underlying primary/foreign key columns clash
  with existing ruby method names. See the RDoc for details.

* A use_after_commit_rollback setting has been added to models. This
  defaults to true, but can be set to false for performance or to
  allow models to be used in prepared transactions (which don't
  support after_commit/after_rollback).

* Dataset#update_ignore has been added when connecting to MySQL,
  enabling use of the UPDATE IGNORE syntax to skip updating a row if
  the update would cause a unique constraint to be violated.

* Database#indexes is now supported when connecting to Microsoft SQL
  Server.

* On Microsoft SQL Server, the :include option is now supported when
  creating indexes, for storing column values in the index, which can
  be used by the query optimizer.

= Other Improvements

* The filtering/excluding by associations code now uses qualified
  identifiers instead of unqualified identifiers, which allows it to
  avoid ambiguous column names if you are doing your own joins.

* Virtual row blocks that return arrays are now handled correctly in
  Dataset#select_map/select_order_map (see the sketch below).

* Dataset#select_map/select_order_map can now take both a block
  argument as well as a regular argument.

* Dataset#select_order_map now handles virtual row blocks that return
  ordered expressions.

* Database#table_exists? should no longer generate false negatives if
  you only have permission to retrieve some column values but not
  all. Note that if you lack permission to SELECT from the table
  itself, table_exists? can still generate false negatives.

* The active_model plugin now supports ActiveModel 3.2, by adding
  support for to_partial_path.

* The serialization_modification_detection plugin now handles
  changed_columns correctly both for new objects and after saving
  objects.
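A sketch of the select_map/select_order_map virtual row handling
mentioned above (the table and column names are hypothetical):

    # Array-returning virtual row block:
    DB[:table].select_map{[a, b]}
    # SELECT a, b FROM table => e.g. [[1, 2], [3, 4]]

    # Ordered expression in select_order_map:
    DB[:table].select_order_map{c.desc}
    # SELECT c FROM table ORDER BY c DESC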
* The serialization plugin now clears the deserialized values when it
  does the automatic refresh after saving a new object, mostly for
  consistency. You can use the skip_create_refresh plugin to skip
  refreshing when creating a new model object.

* Column default values are now wrapped in parentheses on SQLite,
  which fixes some cases such as when the default is an SQL function
  call.

* Alter table emulation now works correctly on SQLite when foreign
  keys reference the table being altered. The emulation requires
  renaming/deleting the existing table and creating a new table,
  which can break foreign key references. Sequel now disables the
  foreign key PRAGMA when altering tables, so SQLite won't track the
  table renames and break the foreign key relationships.

* The set_column_type table alteration method no longer modifies
  default values and NULL/NOT NULL settings on Microsoft SQL Server,
  H2, and SQLite.

* On MySQL, Time/DateTime columns now use the timestamp type if the
  default value is Sequel::CURRENT_TIMESTAMP, since it is currently
  impossible for MySQL to have a non-constant default for a datetime
  column (without using a workaround like a trigger).

* Metadata methods such as tables, views, and view_exists? are now
  handled correctly on Oracle if custom identifier input methods are
  used.

* Sequel now ignores errors that occur when attempting to get
  information on column defaults in Oracle (which can happen if you
  lack permission to the appropriate table). Previously, such errors
  would cause the schema parser to raise an error; now the schema
  information is just returned without default information.

* Database#indexes now skips the primary key index when connecting to
  DB2, Derby, HSQLDB, and Oracle via the jdbc adapter.

* Database#indexes now works correctly on DB2.

* The progress adapter has been fixed; it had been broken since the
  dataset literalization refactoring.

* Dataset#naked! now works correctly. Previously, it just returned
  the receiver unmodified.

* Dataset#paginate! has been removed, as it was broken.

* The query extension no longer breaks Dataset#clone if an argument
  is not given.

* Transaction related queries are no longer logged twice in the mock
  adapter.

= Backwards Compatibility

* Sequel's default handling of NOT IN operators with an empty array
  of values has changed, which can change which rows are returned for
  such queries.

  Previously, Sequel was inconsistent in that it tried to handle NULL
  values correctly in the IN case, but not in the NOT IN case. Now,
  it defaults to handling NULL values correctly in both cases:

    # 3.31.0
    DB[:a].where(:b=>[])
    # SELECT * FROM a WHERE (b != b)
    DB[:a].exclude(:b=>[])
    # SELECT * FROM a WHERE (1 = 1)

    # 3.32.0
    DB[:a].where(:b=>[])
    # SELECT * FROM a WHERE (b != b)
    DB[:a].exclude(:b=>[])
    # SELECT * FROM a WHERE (b = b)

  The important change in behavior is that in the NOT IN case, if the
  left hand argument is NULL, the filter returns NULL instead of
  true. This has the potential to change query results.

  "Correct" here is really an opinion and not a fact, as there are
  valid arguments for the alternative behavior:

    DB[:a].where(:b=>[])
    # SELECT * FROM a WHERE (1 = 0)
    DB[:a].exclude(:b=>[])
    # SELECT * FROM a WHERE (1 = 1)

  The difference is that the "correct" NULL behavior is more
  consistent with the non-empty array cases.
  For example, if b is NULL:

    # "Correct" NULL handling
    # Empty array: where(:b=>[])
    WHERE (b != b) # NULL
    WHERE (b = b) # NULL
    # Non-empty array: where(:b=>[1, 2])
    WHERE (b IN (1, 2)) # NULL
    WHERE (b NOT IN (1, 2)) # NULL

    # Static boolean handling
    # Empty array: where(:b=>[])
    WHERE (1 = 0) # false
    WHERE (1 = 1) # true
    # Non-empty array: where(:b=>[1, 2])
    WHERE (b IN (1, 2)) # NULL
    WHERE (b NOT IN (1, 2)) # NULL

  Sequel chooses to default to behavior consistent with the non-empty
  array cases (similar to SQLAlchemy). However, there are two
  downsides to this handling. The first is that some databases with
  poor optimizers (e.g. MySQL) might do a full table scan with the
  default syntax. The second is that the static boolean handling may
  be generally preferable, if you believe that IN/NOT IN with an
  empty array should always be true or false and never NULL even if
  the left hand argument is NULL.

  As there really isn't a truly correct answer in this case, Sequel
  defaults to the "correct" NULL handling, and allows you to switch
  to the static boolean handling via:

    Sequel.empty_array_handle_nulls = false

  This is currently a global setting; it may be made Database or
  Dataset specific later if requested. Also, it is possible the
  default will switch in the future, so if you care about a specific
  handling, you should set your own default.

* Database#table_exists? now only rescues Sequel::DatabaseErrors
  instead of StandardErrors, so it's possible it will raise errors
  instead of returning false on custom adapters that don't wrap their
  errors correctly.

ruby-sequel-4.1.1/doc/release_notes/3.33.0.txt

= New Features

* A server_block extension has been added that makes Sequel's
  sharding support easier to use by scoping database access inside
  the block to a given server/shard:

    Sequel.extension :server_block
    DB.extend Sequel::ServerBlock
    DB.with_server(:shard_1) do
      # All of these will execute against shard_1
      DB.tables
      DB[:table].all
      DB.run 'SOME SQL'
    end

* An arbitrary_servers extension has been added that extends Sequel's
  sharding support so that you can use arbitrary connection options
  instead of referencing an existing, predefined server/shard:

    Sequel.extension :arbitrary_servers
    DB.pool.extend Sequel::ArbitraryServers
    DB[:table].server(:host=>'foo', :database=>'bar').all

  You can use this extension in conjunction with the server_block
  extension:

    DB.with_server(:host=>'foo', :database=>'bar') do
      DB.synchronize do
        # All of these will execute on host foo, database bar
        DB.tables
        DB[:table].all
        DB.run 'SOME SQL'
      end
    end

  The combination of these two extensions makes it pretty easy to
  write a thread-safe Rack middleware that scopes each request to an
  arbitrary database.

* The sqlite adapter now supports an integer_booleans setting for
  using 1/0 for true/false values, instead of the 't'/'f' values used
  by default. As SQLite recommends using integers to store booleans,
  converting your existing database and enabling this setting is
  recommended, but for backwards compatibility it is set to false.
  You can convert your existing database by doing the following for
  each table/column that has booleans:

    DB[:table].update(:boolean_column=>{'t'=>1}.
                      case(0, :boolean_column))

  The integer_booleans default setting may change in a future version
  of Sequel, so you should set it manually to false if you prefer the
  current default.

* You can now disable transaction use in migrations, in one of two
  ways.
  You generally only need to do this if you are using an SQL query
  inside a migration that is specifically not supported inside a
  transaction, such as CREATE INDEX CONCURRENTLY on PostgreSQL.

  The first way to disable transactions is on a per-migration basis
  by calling the no_transaction method inside the Sequel.migration
  block:

    Sequel.migration do
      no_transaction
      change do
        # ...
      end
    end

  That will make it so that a transaction is not used for that
  particular migration. The second way is passing the
  :use_transactions=>false option when calling Migrator.run (using
  the API), which will completely disable transactions for all
  migrations during the migrator run.

* The postgres adapter now respects an :sslmode option when using pg
  as the underlying driver; you can set the value of this option to
  disable, allow, prefer, or require.

* Database#create_schema and #drop_schema are now defined when
  connecting to PostgreSQL.

* Database#supports_savepoints_in_prepared_transactions? has been
  added for checking if savepoints are supported inside prepared
  transactions. This is true if both savepoints and prepared
  transactions are supported, except on MySQL > 5.5.12 (due to MySQL
  bug 64374).

= Other Improvements

* The mysql and mysql2 adapters now both provide an accurate number
  of rows matched, so Sequel::Model usage on those adapters will now
  raise a NoExistingObject exception by default if you attempt to
  delete or update an instance that no longer exists in the database.

* Foreign key creation now works correctly without specifying the
  :key option when using MySQL with the InnoDB table engine. InnoDB
  requires that you list the column explicitly, even if you are
  referencing the primary key of the table, so if the :key option is
  not given, the database schema is introspected to find the primary
  key for the table. If you are attempting to create a table with a
  self-referential foreign key, it introspects the generator to get
  the primary key for the table.

* The sqlite adapter will now return 1/0 stored in boolean columns as
  true/false. It will convert dates stored as Integers/Floats to Date
  objects by assuming they represent the julian date. It will convert
  times stored as Integers/Floats to Sequel::SQLTime objects by
  assuming they represent a number of seconds. It will convert
  datetimes stored as Integers by assuming they represent a unix
  epoch time integer, and datetimes stored as Floats by assuming they
  represent the julian date (with fractional part representing the
  time of day). These changes make Sequel handle SQLite's
  recommendations for boolean/date/time storage.

* The instance_hooks plugin's (before|after)_*_hook methods now
  return self so they can be used in a method chain.

* The list plugin now automatically adds new entries to the end of
  the list when creating the entries, if the position field is not
  specifically set.

* An identifier_output_method is now respected in the mysql2 adapter.

* NaN/Infinity Float values are now quoted correctly for input on
  PostgreSQL, and the postgres adapter correctly handles them on
  retrieval from the database.

* The :collate column option is now respected when creating tables or
  altering columns on MySQL.

* You can now force use of the TimestampMigrator when the
  IntegerMigrator would be used by default by calling
  TimestampMigrator.apply or .run (see the sketch below).

* Mock adapter usage with a specific SQL dialect now uses the
  appropriate defaults for quoting identifiers.

* You can now disable the use of sudo in the rake install/uninstall
  tasks using the SUDO='' environment variable.
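A sketch of forcing the timestamp migrator as described above (the
migration directory path is hypothetical):

    Sequel.extension :migration
    # Use the TimestampMigrator even when the migration filenames
    # would normally cause the IntegerMigrator to be selected:
    Sequel::TimestampMigrator.run(DB, 'db/migrations')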
* A very misleading error message has been fixed when attempting to
  constantize an invalid string in the model inflector.

= Backwards Compatibility

* The sqlite adapter now typecasts columns that SQLite stores as
  INTEGER/REAL. Previously, it only typecasted columns that SQLite
  stored as TEXT/BLOB. For details about SQLite storage, see
  http://www.sqlite.org/datatype3.html.

  Any custom type conversion procs used with the sqlite adapter
  should be modified to work with Integer/Float objects in addition
  to String objects.

ruby-sequel-4.1.1/doc/release_notes/3.34.0.txt

= New PostgreSQL Extensions

* A pg_array extension has been added, supporting PostgreSQL's
  numeric and string array types. Both single dimensional and
  multi-dimensional array types are supported. Array values are
  returned as instances of Sequel::Postgres::PGArray, which is a
  delegate class of Array. You can turn an existing array into a
  PGArray using Array#pg_array.

  If you are using arrays in model objects, you need to load support
  for that:

    DB.extend Sequel::Postgres::PGArray::DatabaseMethods

  This makes schema parsing and typecasting of array columns work
  correctly.

  This extension also allows you to use PGArray objects and arrays in
  bound variables when using the postgres adapter with pg.

* A pg_hstore extension has been added, supporting PostgreSQL's
  hstore type, which is a simple hash with string keys and string or
  NULL values. hstore values are retrieved as instances of
  Sequel::Postgres::HStore, which is a delegate class of Hash. You
  can turn an existing hash into an hstore using Hash#hstore.

  If you are using hstores in model objects, you need to load support
  for that:

    DB.extend Sequel::Postgres::HStore::DatabaseMethods

  This makes schema parsing and typecasting of hstore columns work
  correctly.

  This extension also allows you to use HStore objects and hashes in
  bound variables when using the postgres adapter with pg.

* A pg_array_ops extension has been added, making it easier to call
  PostgreSQL array operators and functions using plain ruby code.
  Examples:

    a = :array_column.pg_array
    a[1] # array_column[1]
    a[1][2] # array_column[1][2]
    a.push(1) # array_column || 1
    a.unshift(1) # 1 || array_column
    a.any # ANY(array_column)
    a.join # array_to_string(array_column, '', NULL)

  If you are also using the pg_array extension, you can turn a
  PGArray object into a query object, which allows you to run
  operations on array literals:

    a = [1, 2].pg_array.op
    a.push(3) # ARRAY[1,2] || 3

* A pg_hstore_ops extension has been added, making it easier to call
  PostgreSQL hstore operators and functions using plain ruby code.
  Examples:

    h = :hstore_column.hstore
    h['a'] # hstore_column -> 'a'
    h.has_key?('a') # hstore_column ? 'a'
    h.keys # akeys(hstore_column)
    h.to_array # hstore_to_array(hstore_column)

  If you are also using the pg_hstore extension, you can turn an
  HStore object into a query object, which allows you to run
  operations on hstore literals:

    h = {'a' => 'b'}.hstore.op
    h['a'] # '"a"=>"b"'::hstore -> 'a'

* A pg_auto_parameterize extension has been added for automatically
  using bound variables for all queries. For example, it can take
  code such as:

    DB[:table].where(:column=>1)

  and do:

    SELECT * FROM table WHERE column = $1; -- [1]

  Note that automatically parameterizing queries is not generally
  faster unless the bound variables are large (i.e. long text/bytea
  values).
  Also, there are multiple corner cases when automatically
  parameterizing queries, though most can be worked around by adding
  explicit casts.

* A pg_statement_cache extension has been added that works with the
  pg_auto_parameterize extension for automatically caching prepared
  statements and reusing them when using the postgres adapter with
  pg. The combination of these two extensions makes it possible to
  take an entire Sequel application and turn most or all of the
  queries into prepared statements.

  Note that these two extensions do not necessarily improve
  performance. For simple queries, they actually hurt performance.
  They do help for complex queries, but in all cases, it's faster to
  use Sequel's prepared statements API manually.

= Other New Extensions

* A query_literals extension has been added that makes the select,
  group, and order methods operate similar to the filter methods in
  that if they are given a regular string as their first argument,
  they treat it as a literal string, with additional arguments, if
  any, used as placeholder values. This extension allows you to write
  code such as:

    DB[:table].select('a, b, ?', 2).group('a, b').order('c')
    # Without query_literals:
    # SELECT 'a, b, ?', 2 FROM table GROUP BY 'a, b' ORDER BY 'c'
    # With query_literals:
    # SELECT a, b, 2 FROM table GROUP BY a, b ORDER BY c

  Sequel's default handling in this case is to use literal strings,
  which is generally not desired and on some databases not even valid
  syntax. In general, you'll probably want to use this extension for
  all of a database's datasets, which you can do via:

    Sequel.extension :query_literals
    DB.extend_datasets(Sequel::QueryLiterals)

  The next major version of Sequel (4.0.0) will probably integrate
  this extension into the core library.

* A select_remove extension has been added that adds
  Dataset#select_remove, for removing selected columns/expressions
  from a dataset:

    ds = DB[:table] # Assume table has columns a, b, and c
    ds.select_remove(:c)
    # SELECT a, b FROM table

    # Removal by column alias
    ds.select(:a, :b___c, :c___b).select_remove(:c)
    # SELECT a, c AS b FROM table

    # Removal by expression
    ds.select(:a, :b___c, :c___b).select_remove(:c___b)
    # SELECT a, b AS c FROM table

  This method makes it easier to select all columns except for the
  columns given. This is common in cases where a table has a few
  large columns that are expensive to retrieve. This method does have
  some corner cases, so read the documentation before using it.

* A schema_caching extension has been added that makes it possible
  for Database instances to dump the cached schema metadata to a
  marshalled file, and load the cached schema metadata from the file.
  This can be significantly faster than reparsing the schema from the
  database, especially for databases with high latency.

  bin/sequel -S has been added to dump the schema for the given
  database to a file, and DB.load_schema_cache(filename) can be used
  to populate the schema cache inside your application. This should
  be done after creating the Database object but before loading your
  model files.

  Note that Sequel does no checking to ensure that the cached schema
  currently reflects the state of the database. That is up to the
  application.

* A null_dataset extension has been added, which adds Dataset#nullify
  for creating a dataset that will not issue a database query. It
  implements the null object pattern for datasets, and is probably
  most useful in methods that must return a dataset, but can
  determine that such a dataset will never return a row.
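A brief sketch of the null_dataset extension in use (the table name
is hypothetical):

    Sequel.extension :null_dataset
    ds = DB[:items].nullify
    ds.each{|r| p r} # no query is sent to the database
    ds.all           # => []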
= New Plugins

* A static_cache plugin has been added, allowing you to cache a model
  statically. This plugin is useful for models whose tables do not
  change while the application is running, such as lookup tables.
  When using this plugin, the following methods will no longer
  require queries:

  * Primary key lookups (e.g. Model[1])
  * Model.all calls
  * Model.each calls
  * Model.map calls without an argument
  * Model.to_hash calls without an argument

  The statically cached model instances are frozen so they are not
  accidentally modified.

* A many_to_one_pk_lookup plugin has been added that changes the
  many_to_one association retrieval code to do a simple primary key
  lookup on the associated class in most cases. This results in
  significantly better performance, especially if the associated
  model is using a caching plugin (either caching or static_cache).

= Core Extension Replacements

* Most of Sequel's core extensions now have equivalent methods
  defined on the Sequel module:

    :column.as(:alias) -> Sequel.as(:column, :alias)
    :column.asc -> Sequel.asc(:column)
    :column.desc -> Sequel.desc(:column)
    :column.cast(Integer) -> Sequel.cast(:column, Integer)
    :column.cast_numeric -> Sequel.cast_numeric(:column)
    :column.cast_string -> Sequel.cast_string(:column)
    :column.extract(:year) -> Sequel.extract(:year, :column)
    :column.identifier -> Sequel.identifier(:column)
    :column.ilike('A%') -> Sequel.ilike(:column, 'A%')
    :column.like('A%') -> Sequel.like(:column, 'A%')
    :column.qualify(:table) -> Sequel.qualify(:table, :column)
    :column.sql_subscript(1) -> Sequel.subscript(:column, 1)
    :function.sql_function(1) -> Sequel.function(:function, 1)
    'some SQL'.lit -> Sequel.lit('some SQL')
    'string'.to_sequel_blob -> Sequel.blob('string')
    {:a=>1}.case(0) -> Sequel.case({:a=>1}, 0)
    {:a=>1}.sql_negate -> Sequel.negate(:a=>1)
    {:a=>1}.sql_or -> Sequel.or(:a=>1)
    [[1, 2]].sql_value_list -> Sequel.value_list([[1, 2]])
    [:a, :b].sql_string_join -> Sequel.join([:a, :b])
    ~{:a=>1} -> Sequel.~(:a=>1)
    :a + 1 -> Sequel.+(:a, 1)
    :a - 1 -> Sequel.-(:a, 1)
    :a * 1 -> Sequel.*(:a, 1)
    :a / 1 -> Sequel./(:a, 1)
    :a & 1 -> Sequel.&(:a, 1)
    :a | 1 -> Sequel.|(:a, 1)

* You can now wrap any object in a Sequel expression using
  Sequel.expr. This is similar to the sql_expr extension, but without
  defining the sql_expr method on all objects:

    1.sql_expr -> Sequel.expr(1)

  The sql_expr extension now just has Object#sql_expr call
  Sequel.expr.

* Virtual Rows now have methods defined that handle the standard
  mathematical operators:

    select{|o| o.+(1, :a)} # SELECT (1 + a)

  the standard inequality operators:

    where{|o| o.>(2, :a)} # WHERE (2 > a)

  and the standard boolean operators:

    where{|o| o.&({:a=>1}, o.~(:b=>1))} # WHERE ((a = 1) AND (b != 1))

  Additionally, there is now direct support for creating literal
  strings in instance_evaled virtual row blocks using `:

    where{a > `some crazy SQL`} # WHERE (a > some crazy SQL)

  This doesn't override Kernel.`, since virtual rows use a
  BasicObject subclass. Previously, using ` would result in calling
  the SQL function named ` with the given string, which probably
  isn't valid syntax on most databases.

* You can now require 'sequel/no_core_ext' to load Sequel without the
  core extensions. The previous way of setting the
  SEQUEL_NO_CORE_EXTENSIONS constant or environment variable before
  loading Sequel still works.

* The core extensions have been moved from Sequel's core library into
  an extension that is loadable with Sequel.extension. This extension
  is still loaded by default for backwards compatibility.
However, the next major version of Sequel will no longer load this extension by default (though it will still be available to load manually). * You can now check if the core extensions have been loaded by using Sequel.core_extensions?. = Foreign Keys in the Schema Dumper * Database#foreign_key_list has been added that gives an array of foreign key constraints on the table. It is currently implemented on MySQL, PostgreSQL, and SQLite, and may be implemented on other database types in the future. Each entry in the return array is a hash, with at least the following keys present: :columns :: An array of columns in the given table :table :: The table referenced by the columns :key :: An array of columns referenced (in the table specified by :table), but can be nil on certain adapters if the primary key is referenced. The hash may also contain entries for: :deferrable :: Whether the constraint is deferrable :name :: The name of the constraint :on_delete :: The action to take ON DELETE :on_update :: The action to take ON UPDATE * The schema_dumper extension now dumps foreign key constraints on databases that support Database#foreign_key_list. On such databases, dumping a schema migration will dump the tables in topological order, such that referenced tables always come before referencing tables. In case there is a circular dependency, Sequel breaks the dependency and adds separate foreign key constraints at the end of the migration. However, when a circular dependency is broken, the migration can probably not be migrated down. Foreign key constraints can also be dumped as a separate migration using Database#dump_foreign_key_migration, similar to how Database#dump_indexes_migration works. * When using bin/sequel -C to copy databases, foreign key constraints are now copied if the source database supports Database#foreign_key_list. = Other New Features * Dataset#to_hash_groups and #select_hash_groups have been added. These methods are similar to #to_hash and #select_hash in that they return a hash, but hashes returned by *_hash_groups methods have arrays of all matching values, unlike the *_hash methods which just use the last matching value. Example: DB[:table].all # => [{:a=>1, :b=>2}, {:a=>1, :b=>3}, {:a=>2, :b=>4}] DB[:table].to_hash(:a, :b) # => {1=>3, 2=>4} DB[:table].to_hash_groups(:a, :b) # => {1=>[2, 3], 2=>[4]} * Model#set_fields and #update_fields now accept :missing=>:skip and :missing=>:raise options, allowing them to be used in more cases. :missing=>:skip skips missing entries in the hash, instead of setting the field to the default hash value. :missing=>:raise raises an error for missing fields, similar to strict_param_setting = true. It's recommended that these options be used in new code in preference to #set_only and #update_only. * Database#drop_table? has been added, for dropping tables if they already exist. This uses DROP TABLE IF EXISTS on the databases that support it. Database#supports_drop_table_if_exists? has been added for checking whether the database supports that syntax. * Database#create_join_table has been added that allows easy creation of many_to_many join tables: DB.create_join_table(:album_id=>:albums, :artist_id=>:artists) This uses real foreign keys for both of the columns, uses a composite primary key of both of the columns, and adds an additional composite index of the columns in reverse order. The primary key and additional index should ensure that almost all operations on the join table can benefit from an index. 
In terms of customization, the values in the hash can be hashes
themselves for column specific options, and an additional
options hash can also be given to override some of the default
settings.

Database#drop_join_table also exists and takes the same options
as create_join_table. It mostly exists to make it easy to
reverse migrations that use create_join_table.

* Model#freeze has been added that freezes a model such that it
  works correctly in a read-only state. Before, it used the
  standard Object#freeze, which broke some things that should
  work, and allowed changes that shouldn't be allowed (like
  modifying the instance's values).

* ConnectionPool#all_connections has been added, which yields
  each available connection in the pool to the block. For
  threaded pools, it does not yield connections that are
  currently being used by other threads. When using this method,
  it is important to only operate on the yielded connection
  objects, and not make any modifications to the pool itself.
  The pool is also locked until the method returns.

* ConnectionPool#after_connect= has been added, allowing you to
  change a connection pool's after_connect proc after
  instantiating the pool.

* ConnectionPool#disconnection_proc= has been added, allowing
  you to change a connection pool's disconnection_proc after
  instantiating the pool.

* A Model.cache_anonymous_models accessor has been added, and
  can be set to false to disable the caching of classes created
  by Sequel::Model(). This caching is only useful if you want to
  reload the model's file without getting a superclass mismatch.
  This setting is true by default for backwards compatibility,
  but may be changed to false in a later version, so you should
  manually set it to true if you are using code reloading.

* Model.instance_dataset has been added for getting the dataset
  used for model instances (a naked dataset restricted to a
  single row).

* Dataset#with_sql_delete has been added for running the given
  SQL string as a delete and returning the number of rows
  modified. It's designed as a replacement for
  with_sql(sql).delete, which is slower as it requires cloning
  the dataset.

* The :on_update and :on_delete entries for foreign_key now
  accept string arguments which are used literally.

* Prepared statement objects now have a log_sql accessor that
  can be turned on to log the entire SQL statement instead of
  just the prepared statement name (a short sketch appears
  below).

* Dataset#multi_replace has been added on MySQL. This is similar
  to multi_insert, but uses REPLACE instead of INSERT.

* Dataset#explain has been added to MySQL. You can use an
  :extended=>true option to use EXPLAIN EXTENDED.

* A Database#type_supported? method has been added on PostgreSQL
  to check if the database supports the given type:

    DB.type_supported?(:hstore)

* Database#reset_conversion_procs has been added to the postgres
  adapter, for use by extensions that modify the default
  conversion procs and want to have the database use the updated
  defaults.

* A Database#convert_infinite_timestamps accessor has been added
  to the postgres adapter, allowing you to return infinite
  timestamps as nil, a string, or a float.

* SQL::PlaceholderLiteralString objects can now use a
  placeholder array, where placeholder values are inserted
  between array elements. This is about 2.5-3x faster than using
  a string with ? placeholders, and allows usage of ? inside the
  array:

    Sequel.lit(["(", " ? ", ")"], 1, 2) # (1 ? 2)
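  A minimal sketch of the log_sql accessor mentioned above (the
  dataset, statement name, and placeholder are hypothetical):

    ps = DB[:items].where(:id=>:$i).prepare(:select, :items_by_id)
    ps.log_sql = true
    ps.call(:i=>1) # logs the full SELECT, not just :items_by_id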
* SQL::Subscript#[] has been added for accessing members of a
  multi-dimensional array:

    Sequel.subscript(:column, 1)[2][3] # column[1][2][3]

* SQL::Wrapper has been added for wrapping arbitrary objects in
  a Sequel expression object.

* SQL::QualifiedIdentifier objects can now contain arbitrary
  Sequel expressions. Before, they could only contain a few
  expression types. This makes it easier to add extensions to
  support PostgreSQL row-valued types.

= Performance Improvements

* Model.[] when called with a primary key has been made about
  110% faster for most models by avoiding cloning datasets.

* Model.[] when called without arguments or with a single nil
  argument is much faster as it now returns nil immediately
  instead of issuing a database query.

* Model#delete and Model#destroy have been made about 75% faster
  for most models by using a static SQL string.

* Model.new is now twice as fast when passed an empty hash.

* Model#set is now four times as fast when passed an empty hash.

* Model#this has been made about 85% faster by reducing the
  number of dataset clones needed from 3 to 1.

* Some proc activations have been removed, giving minor speedups
  when running on MRI.

= Other Improvements

* Database#uri and #url now return the connection string given
  to Sequel.connect. Previously, they tried to reconstruct the
  url using the database's options, but that didn't work well in
  corner cases.

* Database#inspect now shows the URL and/or options given when
  connecting to the database. Previously, it showed the URL, or
  all of the database's options if constructing the URL raised
  an error.

* Sequel no longer checks for prepared transactions support when
  using transactions unless a prepared transaction is
  specifically requested.

* The schema utility dataset cached in the Database object is
  now reset if you use Database#extend_datasets, ensuring that
  the new value will use the given extension.

* The prepared_statements* plugins now log the full SQL by
  default. Since the user doesn't choose the name of the
  prepared statements, it was often difficult to determine what
  SQL was actually run if you were only looking at a subsection
  of the SQL log.

* The nested_attributes plugin's delete/remove support now works
  correctly when a false value is given for _delete/_remove and
  strict_param_setting is true.

* The hook_class_methods and validation_class_methods plugins
  now work correctly when subclassing if the subclass attempts
  to create instances inside Model.inherited.

* The caching plugin has been refactored. Model.cache_get_pk and
  cache_delete_pk have been added for retrieving/deleting from
  the cache by primary key. Model.cache_key is now a public
  method.

* The typecast_on_load plugin now works correctly when saving
  new model objects when insert_select is supported.

* In the sql_expr extension, nil.sql_expr is no longer treated
  as a boolean value. It is now treated as a value with generic
  type.

* The postgres adapter no longer issues a query to map type
  names to type oids if no named conversion procs have been
  registered.

* The postgres adapter now works around issues in ruby-pg by
  supporting fractional seconds for Time/DateTime values, and
  supporting SQL::Blob (bytea) values with embedded "\0"
  characters.

* The postgres adapter now supports pre-defining the
  PG_NAMED_TYPES and PG_TYPES constants. This is so extensions
  can define them, so they don't have to load the postgres
  adapter file first.
  If extensions need to use these constants, they should do:

    PG_NAMED_TYPES = {} unless defined?(PG_NAMED_TYPES)
    PG_TYPES = {} unless defined?(PG_TYPES)

  That way they work whether they are loaded before or after the
  postgres adapter.

* Sequel now correctly adds the RETURNING clause when building
  queries on PostgreSQL 8.2-9.0. Sequel 3.31.0 added support for
  returning values from delete/update queries in PostgreSQL
  8.2-9.0, but didn't change the literalization code to use the
  RETURNING clause on those versions.

* The jdbc/postgres adapter now converts Java arrays
  (Java::OrgPostgresqlJdbc4::Jdbc4Array) to ruby arrays.

* Tables and schemas with embedded ' characters are now handled
  correctly when parsing primary keys and sequences on
  PostgreSQL.

* Identifiers are now escaped on MySQL and SQLite. Previously
  they were quoted, but internal ` characters were not doubled.

* Fractional seconds for the time type are now returned
  correctly on jdbc (assuming they are returned as java.sql.Time
  values by JDBC).

* Multiple changes were made to ensure that Sequel works
  correctly when the core extensions are not loaded.

* Composite foreign key constraints are now retained when
  emulating alter_table operations on SQLite. Previously, only
  single foreign key constraints were retained.

* An error is no longer raised when no indexes exist when
  calling Database#indexes on jdbc/sqlite.

* A possible SystemStackError has been fixed in the SQLite
  adapter, when trying to delete a dataset that uses a having
  clause and no where clause.

* ROLLUP/CUBE support now works correctly on Microsoft SQL
  Server 2005.

* Unsigned tinyint types are now recognized in the schema
  dumper.

* Using primary_key :column, :type=>Bignum now works correctly
  on H2. Previously, the column created was not
  autoincrementing.

* Using a bound variable for a limit is now supported in the
  ibmdb adapter on ruby 1.9.

* Connecting to PostgreSQL via the swift adapter has been fixed
  when using newer versions of swift.

* The mock adapter now handles calling the Database#execute
  methods directly (instead of via a dataset).

* The mock adapter now has the ability to have per-shared
  adapter specific initialization code executed. This has been
  used to fix some bugs when using the shared postgres adapter.

* The pretty_table extension has been split into two extensions,
  one that adds a method to Dataset and one that just adds the
  PrettyTable class. Also, PrettyTable.string has been added to
  get a string copy of the table.

* A spec_model_no_assoc task has been added for running model
  specs without the association plugin loaded. This is to check
  that the SEQUEL_NO_ASSOCIATIONS setting works correctly.

= Deprecated Features to be Removed in Sequel 3.35.0

* Ruby <1.8.7 support is now deprecated.

* PostgreSQL <8.2 support is now deprecated.

* Dataset#disable_insert_returning on PostgreSQL is now
  deprecated. Starting in 3.35.0, RETURNING will now always be
  used to get the primary key value when inserting.

* Array#all_two_pairs? is now deprecated. It was part of the
  core extensions, but the core extensions have been refactored
  to no longer require it. As it doesn't specifically relate to
  creating Sequel expression objects, it is being removed. The
  private Array#sql_expr_if_all_two_pairs method is deprecated
  as well.

= Other Backwards Compatibility Issues

* The generic Bignum type now uses bigint on SQLite, similar to
  other databases. The integer type was previously used. The
  only exception is for auto incrementing primary keys, which
  still use integer for Bignum as SQLite doesn't support
  autoincrementing columns other than integer.
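  For example, the generic Bignum type can be used in a table
  definition like this (a sketch; the table and column names are
  hypothetical):

    DB.create_table(:events) do
      primary_key :id, :type=>Bignum # integer on SQLite, bigint elsewhere
      Bignum :external_id            # bigint, even on SQLite as of this release
    end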
* On SQLite, Dataset#explain now returns a string, similar to
  PostgreSQL (and now MySQL).

* When using the JDBC adapter, Java::OrgPostgresqlUtil::PGobject
  objects are converted to ruby strings if the dataset is set to
  convert types (the default setting). This is to support the
  hstore extension, but it could have unforeseen effects if
  custom types were used.

* For PostgreSQL connection objects, #primary_key and #sequence
  now require that their arguments be provided as already
  literalized strings. Note that these methods are being removed
  in the next version because they will not be needed after
  PostgreSQL <8.2 support is dropped.

* Database#uri and #url now return a string or nil, but never
  raise an exception. Previously, they would either return a
  string or raise an exception.

* The Model @simple_pk and @simple_table instance variables
  should no longer be modified directly. Instead, the setter
  methods should be used.

* Model.primary_key_lookup should no longer be called with a nil
  value.

* Logging of prepared statements on some adapters has been
  changed slightly, so log parsers might need to be updated.

* Dataset#identifier_append and #table_ref_append no longer
  treat literal strings and blobs specially. Previously, they
  were treated as identifiers.

* Dataset#qualified_identifier_sql_append now takes 3 arguments,
  so any extensions that override it should be modified
  accordingly.

* Some internally used constants and private methods have been
  deleted:

    Database::CASCADE
    Database::NO_ACTION
    Database::SET_DEFAULTS
    Database::SET_NULL
    Database::RESTRICT
    Dataset::COLUMN_ALL

  or moved:

    MySQL::Dataset::AFFECTED_ROWS_RE -> MySQL::Database
    MySQL::Dataset#affected_rows -> MySQL::Database

* The sql_expr extension no longer creates the
  Sequel::SQL::GenericComplexExpression class.

ruby-sequel-4.1.1/doc/release_notes/3.35.0.txt

= New Features

* A dirty plugin has been added, which saves the initial value
  of the column when the column is changed, similar to
  ActiveModel::Dirty:

    artist.name # => 'Foo'
    artist.name = 'Bar'
    artist.initial_value(:name) # 'Foo'
    artist.column_change(:name) # ['Foo', 'Bar']
    artist.column_changes # {:name => ['Foo', 'Bar']}
    artist.column_changed?(:name) # true
    artist.reset_column(:name)
    artist.name # => 'Foo'
    artist.column_changed?(:name) # false
    artist.update(:name=>'Bar')
    artist.column_changes # => {}
    artist.previous_changes # => {:name=>['Foo', 'Bar']}

* Database#create_table now respects an :as option to create a
  table based on the results of a query. The :as option value
  should either be an SQL string or a dataset.

    DB.create_table(:new_foos, :as=>DB[:foos].where(:new=>true))

* The json_serializer and xml_serializer plugins can now
  serialize arbitrary arrays of model objects by passing an
  :array option to the to_json class method. This works around
  an issue in ruby's JSON library where Array#to_json does not
  pass arguments given to it to the members of the array.

    Artist.to_json(:array=>[Artist[1]], :include=>:albums)

* You can now use the % (modulus) operator in the same way you
  can use the bitwise operators in Sequel:

    :column.sql_number % 1 # (column % 1)

* On PostgreSQL, you can now provide :only, :cascade, and
  :restart options to Dataset#truncate to use ONLY, CASCADE, and
  RESTART IDENTITY. Additionally, you can now truncate multiple
  tables at the same time:

    DB.from(:table1, :table2).truncate(:cascade=>true)
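  A short sketch of the new truncate options in isolation (the
  table name is a placeholder):

    DB[:logs].truncate(:only=>true, :restart=>true)
    # uses ONLY and RESTART IDENTITY in the generated TRUNCATE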
* The :index option when creating columns in the schema
  generator can now take a hash of index options:

    DB.create_table(:foo){Integer :bar, :index=>{:unique=>true}}

* A Database#cache_schema accessor has been added; it can be set
  to false to have the Database never cache schema results. This
  can be useful in Rails development mode, so that you don't
  need to restart a running server to have models pick up the
  new schema.

* Database#log_exception has been added for easier
  instrumentation. It is called with the exception and SQL query
  string for all queries that raise an exception.

* The Sequel.migration DSL now has a transaction method that
  forces transaction use for the given migration.

= Other Improvements

* Many theoretical thread-safety issues have been fixed for ruby
  implementations that don't use a global interpreter lock.
  Previously, Sequel relied on MRI's global interpreter lock for
  part of its thread safety; now it does manual locking in more
  places to avoid thread-safety issues on JRuby (and other ruby
  implementations without a global interpreter lock).

  No Sequel user ever reported a production error related to the
  previous thread-safety issues, and most of the issues fixed
  were so difficult to hit that even tests specifically designed
  to raise errors were unable to do so.

* Sequel.single_threaded = true now disables the mutex
  synchronization that enforces thread safety for additional
  performance in single threaded mode.

* Sequel's migrators now only attempt to use transactions by
  default if the underlying database supports transactional DDL.
  SQLite does support transactional DDL, but Sequel will not use
  transactions for SQLite migrations as it causes issues when
  emulating alter_table operations for tables with foreign keys.

* Errors that occur when rolling back database transactions are
  now handled correctly. Previously, the underlying exception
  was raised; it wasn't correctly wrapped in a
  Sequel::DatabaseError, and if it was due to a database
  disconnection, the connection wasn't removed from the pool.

* Sequel no longer sets ruby instance variables on java objects,
  fixing warnings on JRuby 1.7 and attempting to be forward
  compatible with JRuby 2.0.

* Sequel now uses date and timestamp formats that are
  multilanguage and not DATEFORMAT dependent on Microsoft SQL
  Server.

* Sequel now correctly escapes backslash-carriage return-line
  feed on Microsoft SQL Server.

* Parsing the column default values in the oracle adapter no
  longer requires database superuser privileges.

* Sequel now correctly handles parsing schema for tables in
  other databases on MySQL. Previously, it would always look in
  the current database.

* Sequel no longer doubles backslashes in strings by default. It
  now only does so on MySQL, since that is the only database
  that appears to use backslashes for escaping. This fixes
  issues with backslashes being doubled on some of the less
  commonly used adapters.

* The pg_auto_parameterize extension now works correctly when
  using cursors.

* Dataset#truncate now raises an Error if you attempt to do so
  on a dataset that uses HAVING. Previously, it only checked for
  WHERE.

* The schema dumper now recognizes the identity type.

= Backwards Compatibility

* Association reflections now store cached information in a
  separate subhash due to the thread-safety changes.
  Any code accessing an association reflection should always
  call the related method to get the cached data instead of
  checking for a specific location in the hash.

* Association reflection internals for many_through_many
  associations changed significantly; any code that accesses the
  edge information in the reflection will need to be changed to
  use the new methods instead of accessing the old values
  directly.

* The features deprecated in 3.34.0 have now been removed:

  * Ruby <1.8.7 support
  * PostgreSQL <8.2 support
  * Dataset#disable_insert_returning on PostgreSQL
  * Array#all_two_pairs? and #sql_expr_if_all_two_pairs

ruby-sequel-4.1.1/doc/release_notes/3.36.0.txt

= New Features

* An eager_each plugin has been added, which automatically makes
  eagerly loaded datasets do eager loading if you call #each (or
  another Enumerable method) instead of #all. By default, if you
  call #each on an eager dataset, it will not do eager loading,
  and if you call #each on an eager_graph dataset, you will get
  plain hashes with columns from all joined tables instead of
  model objects. With this plugin, #each on both eager and
  eager_graph datasets will do eager loading.

* The nested attributes plugin now supports composite primary
  keys in associated records. Additionally, it now deals better
  with natural primary keys in associated records. There is a
  new :unmatched_pk option that can be set to :create if you
  want to create new associated records when the input hash
  contains primary key information that doesn't match one of the
  existing associated objects.

  The nested attributes plugin now also supports a :transform
  option. If given, this option is called with the parent object
  and the input hash given for each associated record passed
  into the nested attributes setter. The callable should return
  the hash of attributes to use.

* Model#from_json in the json_serializer plugin now takes an
  options hash and recognizes the :fields option. If the :fields
  option is given, it should be an array of field names, and
  set_fields is called with the array instead of using set. This
  allows you to easily filter which fields in the hash are set
  in the model instance. The entire options hash is also passed
  to set_fields if :fields is present, so you can additionally
  use the :missing => :raise or :missing => :skip options that
  set_fields supports.

* The Dataset#to_json method in the json_serializer plugin now
  respects :root=>:collection and :root=>:instance options. If
  :root=>:collection is given, only the collection is wrapped in
  a hash, and if :root=>:instance is given, only the instances
  are wrapped in a hash. For backwards compatibility, both the
  instances and collection are wrapped in a hash:

    Model.to_json(:root=>true)
    # {"models":[{"model":{"id":1}}]}
    Model.to_json(:root=>:collection)
    # {"models":[{"id":1}]}
    Model.to_json(:root=>:instance)
    # [{"model":{"id":1}}]

  Wrapping both the collection and instance in a root by default
  is probably an undesired behavior, so the default for
  :root=>true may change in the next major version of Sequel.
  Users who want the current behavior should switch to using
  :root=>:both.

* The schema_dumper extension now respects an :index_names
  option when dumping. This option can be set to false to never
  dump the index names. It can also be set to :namespace, in
  which case if the database does not have a global index
  namespace, it will automatically prefix the name of the index
  with the name of the table.
Database#global_index_namespace? was added to check if the database uses a global index namespace. If false, index names are probably namespaced per table (MySQL, MSSQL, Oracle). * :each is now a valid prepared statement type. This prepared statement type requires a block when you call the statement, and iterates over the records of the statement a row at a time. Previously, there wasn't a way to iterate over the records of a prepared statement a row at a time, since the :select and :all types collect all rows into an array before iterating over them. * A :connection_handling=>:queue option is now respected for database objects, and changes the threaded connection pools to use a queue instead of a stack as the data structure for storing available connections. A queue does not perform as well as a stack, but reduces the likelihood of stale connections. It is possible that Sequel will change in the future from using a stack by default to using a queue by default, so any users who specifically desire a stack to be used should specify the :connection_handling=>:stack option. * Sequel::Migrator now supports is_current? class method to check if there are no outstanding migrations to apply. It also supports a check_current class method, which raises an exception if there are outstanding migrations to apply. * A pg_json extension has been added, supporting PostgreSQL's 9.2 json type, similarly to the pg_array and pg_hstore extensions. Note that with the current PostgreSQL json code, the root object can be a string or number, but ruby's json library requires the root json value to be an object or array. So you will probably get an exception if you attempt to retrieve a PostgreSQL json value that ruby's JSON library won't parse. * A pg_inet extension has been added, which automatically typecasts PostgreSQL inet and cidr types to ruby IPAddr objects on retrieval. * Database#transaction on PostgreSQL now recognizes :read_only and :deferrable options, and can use them to set the READ ONLY and DEFERRABLE transaction flags. A :synchronous option is also recognized, which can be set to true, false, :local, or :remote_write, and sets the value of synchronous_commit just for that transaction. * When adding and dropping indexes on PostgreSQL, a :concurrently option can be used to create or drop the index CONCURRENTLY, which doesn't require a full write table lock. * When dropping indexes on PostgreSQL, :if_exists and :cascade options are now recognized. * When using alter_table set_column_type on PostgreSQL, the :using option is respected, and can be used to force a specific conversion from the previous value to the new value with the USING syntax. * On MySQL, you can now set an :sql_mode option when connecting. This can be a string or symbol or an array of them, and each should match one of MySQL's sql_modes. MySQL's default SQL mode is fairly loose, and using one of the strict sql modes is recommended, but for backwards compatibility, Sequel will not set a specific SQL mode by default. However, that may change in the next major version of Sequel, so to be forwards compatible you should set :sql_mode=>nil if you do not desire a strict SQL mode to be set automatically. * Partial indexes are now supported on Microsoft SQL Server 2008 (SQL Server refers to them as filtered indexes). Attempting to use a partial index on an earlier version of SQL Server will result in the database raising an exception. * A jdbc/progress adapter has been added, supporting the Progress database via the jdbc adapter. 
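  As an illustration of the PostgreSQL transaction options
  described above, they can be combined in a single call (a
  sketch; the block body is a placeholder):

    DB.transaction(:read_only=>true, :deferrable=>true,
                   :synchronous=>:local) do
      DB[:reports].all
    end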
= Other Improvements * Dataset#get now works correctly if you pass it a nil or false argument. Previously, it ignored the argument and used the block instead. If you want to use the block argument, you should not pass in a regular argument. * Database#call now passes any blocks given to it to the underlying prepared statement object. Before, a passed block was ignored. * Sequel::Model.db is no longer set automatically when creating an anonymous class with an associated database object. This fixes cases where a library would create namespaced models, and the database used by the library would be set as the default for the user's application code. * Model *_to_one association setters are now no-ops if you pass a value that is the same as the cached value. This fixes issues with reciprocal associations getting reordered, and is better for performance. For cases where the old behavior is desired, the set_associated_object_if_same? method can be overridden to return true for object. If you are manually setting objects in the associations cache before calling the setter method, you may want to set that. * The dirty plugin no longer affects the return value of refresh and lock!. Internal changes should now help ensure that plugins don't affect the return values of these methods. * Sequel now supports JRuby 1.7's new exception handling, fixing exception handling when connecting in the jdbc adapter. * When dumping unsigned integer types in the schema dumper, if the unsigned values could overflow a 32-bit signed integer type, the generic Bignum class is used as the type. This should fix issues when copying a database containing an unsigned 32-bit integer column with values between 2^31 and 2^32-1. * In the optimistic_locking plugin, attempting to refresh and save after a failed save now works correctly. Before, the second save would never modify a row. * Time types on jdbc/postgres are now typecasted accurately on retrieval, before they could be off by up to a millisecond due to floating point issues. * Disconnect detection in the mysql2 adapter has been improved. * The jdbc/mysql, do/mysql, and swift/mysql adapters all now support the :timeout option to set the MySQL wait_timeout. * Savepoints in prepared transactions are now supported on MySQL 5.5.23+, since the bug that caused them to be unsupported starting in 5.5.13 has been fixed. * Parsing foreign key metadata for tables with an explicit schema now works correctly on PostgreSQL. * bin/sequel -C now namespaces indexes automatically when copying from a database without a global index namespace to a database with a global index namespace. * Indexes are now dropped in reverse order that they were added in the schema_dumper. * The Model typecasting code works around bugs in objects where object.==('') would raise an exception instead of returning false. * A better error message is used if an invalid JDBC URL is provided and the JDBC driver's new.connect method returns NULL. * A document describing Sequel's object model has been added, describing the objects Sequel uses to represent SQL concepts. * Most adapter specific options to Database methods are now mentioned in the main Database method RDoc. = Backwards Compatibility * The nested_attributes plugin internals changed significantly. If you were overriding one of the nested_attributes* private methods and calling super to get the default behavior, you may have to update your code. * Database#case_sensitive_like has been removed on SQLite. 
  This method never worked correctly, it always returned false
  even if the case_sensitive_like PRAGMA was set. That's because
  SQLite doesn't offer a getter for this PRAGMA, only a setter.

  Note that Database#case_sensitive_like= still exists and works
  correctly.

* Database#single_value has been removed from the native SQLite
  adapter. This method was designed for internal use, and hasn't
  been used for some time. Any current users of the method
  should switch to Dataset#single_value.

* The private Database#defined_columns_for method in the SQLite
  adapter no longer takes an options hash.

* A couple jdbc/postgres adapter methods are now private.
  Previously, the jdbc/postgres adapter overrode some private
  superclass methods but left the methods public.

* When using the optimistic_locking plugin, refreshing inside a
  before_update method after calling super will now result in
  the lock checking being skipped.

* The private Model#_refresh no longer returns self, so external
  plugins should no longer rely on that behavior.

ruby-sequel-4.1.1/doc/release_notes/3.37.0.txt

= New Features

* Database#extension and Dataset#extension have been added and
  make it much easier to use extensions that just define
  modules, where you previously had to manually extend a
  Database or Dataset object with the module to get the
  extension's behavior. These methods operate similarly to model
  plugins, where you just specify the extension symbol, except
  that you can specify multiple extensions at once:

    DB.extension(:pg_array, :pg_hstore)

  For databases, these modify the Database itself (and
  potentially all of its datasets). Dataset#extension operates
  like other dataset methods, returning a modified clone of the
  dataset with the extension added:

    dataset = dataset.extension(:columns_introspection)

  Dataset#extension! has also been added for modifying the
  receiver instead of returning a clone.

  Not all extensions are usable by Database#extension or
  Dataset#extension, the extension has to have specific support
  for it. The following extensions support both
  Database#extension and Dataset#extension:

  * columns_introspection
  * query_literals
  * split_array_nil

  The following extensions support just Database#extension:

  * arbitrary_servers
  * looser_typecasting
  * pg_array
  * pg_auto_parameterize
  * pg_hstore
  * pg_inet
  * pg_interval
  * pg_json
  * pg_range
  * pg_statement_cache
  * server_block

  Any user that was loading these extensions with
  Sequel.extension and then manually extending objects with the
  extension's module is encouraged to switch to
  Database#extension and/or Dataset#extension.

* Dataset join methods now respect a :qualify=>:deep option to
  do deep qualification of expressions, allowing qualification
  of subexpressions in the expression tree. This can allow you
  to do things like:

    DB[:a].join(:b, {:c.cast(Integer)=>:d.cast(Integer)},
                :qualify=>:deep)
    # SELECT * FROM a INNER JOIN b
    #   ON (CAST(b.c AS INTEGER) = CAST(a.d AS INTEGER))

  For backwards compatibility, by default Sequel will only do
  automatic qualification if the arguments are simple symbols.
  This may change in a future version, if automatic
  qualification of only symbols is desired, switch to using
  :qualify=>:symbol. You can also choose to do no automatic
  qualification using the :qualify=>false option.

* All of Sequel's model associations now work with key
  expressions that are not simple column references, without
  creating a fully custom association.
  So you can create associations where the primary/foreign key
  values are stored in PostgreSQL array or hstore columns, for
  example.

* The pg_array extension has now been made more generic, so that
  it is easy to support array types for any scalar type that is
  currently supported. All scalar types that Sequel's postgres
  adapter supports now have corresponding array types supported
  in the pg_array extension. So if you load the pg_array
  extension and return a date array column, the returned values
  will be arrays of ruby Date objects.

  Other pg_* extensions that add support for PostgreSQL-specific
  scalar types now support array versions of those types if the
  pg_array extension is loaded first.

* A pg_range extension has been added, making it easy to deal
  with PostgreSQL 9.2+'s range types. As ruby's Range class does
  not support all PostgreSQL range type values (such as empty
  ranges, unbounded ranges, or ranges with an exclusive
  beginning), range types are returned as instances of
  Sequel::Postgres::PGRange, which has an API similar to Range.
  You can turn a PGRange into a Range using PGRange#to_range,
  assuming that the range type value does not use features that
  are incompatible with ruby's Range class.

  The pg_range extension supports all range types supported by
  default in PostgreSQL 9.2, and makes it easy to support custom
  range types.

* A pg_range_ops extension has been added, which adds DSL
  support for PostgreSQL range operators and functions, similar
  to the pg_array_ops and pg_hstore_ops extensions.

* A pg_interval extension has been added, which makes Sequel
  return PostgreSQL interval types as instances of
  ActiveSupport::Duration. This is useful if you want to take
  the interval value and use it in calculations in ruby
  (assuming you load the appropriate parts of ActiveSupport).

* A split_array_nil extension has been added, which changes how
  Sequel compiles IN/NOT IN expressions with arrays with nil
  values.

    where(:col=>[1, nil])
    # Default:
    # WHERE (col IN (1, NULL))
    # with split_array_nil extension:
    # WHERE ((col IN (1)) OR (col IS NULL))

    exclude(:col=>[1, nil])
    # Default:
    # WHERE (col NOT IN (1, NULL))
    # with split_array_nil extension:
    # WHERE ((col NOT IN (1)) AND (col IS NOT NULL))

* The nested_attributes plugin now allows the :fields option to
  be a proc, which is called with the associated object and
  should return an array of allowable fields (a short sketch
  appears after these items).

* You can now specify the graph alias base when using
  eager_graph on a per-call basis. Previously, it could only be
  set on a per association basis. This is helpful if you have
  multiple associations to the same class, and are cascading the
  eager graph to dependent associations of that class for both
  of the associations.

  Previously, there was no way to manually give descriptive
  names to the tables in the cascaded associations, but you can
  now do so by passing the association as a
  Sequel::SQL::AliasedExpression instance instead of a plain
  Symbol. Here's a usage example:

    ds = Game.eager_graph(:winner=>:players.as(:winning_players),
                          :loser=>:players.as(:losing_players)).
      where(:winning_players__name=>'A', :losing_players__name=>'B')

* many_through_many associations now differentiate between
  column references and method references, by supporting the
  :left_primary_key_column and :right_primary_key_method options
  that many_to_many associations support.
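  A minimal sketch of the proc form of the :fields option
  mentioned above (the model, association, and column names are
  hypothetical):

    Artist.plugin :nested_attributes
    Artist.one_to_many :albums
    Artist.nested_attributes :albums,
      :fields=>proc{|album| album.new? ? [:name, :year] : [:name]}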
* Custom :eager_loader procs that accept a single hash argument
  now have an additional entry passed in the hash, :id_map,
  which is easier to use than the :key_hash entry (which is
  still present for backwards compatibility). Anyone with custom
  :eager_loader procs is encouraged to switch from using
  :key_hash to :id_map.

* You can now override the create_table/alter_table schema
  generators per database/adapter. This allows for database
  specific generator subclasses, which have methods for unique
  features for that database.

* You can now set up exclusion constraints on PostgreSQL using
  the create_table and alter_table schema generators:

    DB.create_table(:t) do
      ...
      exclude([[:col1, '&&'], [:col2, '=']])
      # EXCLUDE USING gist (col1 WITH &&, col2 WITH =)
    end

  One common use for exclusion constraints is to make sure that
  no two rows have overlapping values/ranges/circles.

* When adding foreign key constraints to an existing table on
  PostgreSQL, you can use the :not_valid option to mark the
  constraint as not yet valid. This will make it so that future
  changes to the table need to respect the foreign key
  constraint, but existing rows do not. After cleaning up the
  existing data, you can then use the alter_table
  validate_constraint method to mark the constraint as valid.

* An eval_inspect extension has been added that attempts to do
  the following for Sequel::SQL::Expression instances:

    eval(obj.inspect) == obj # => true

  There are a lot of cases that this extension does not handle,
  but it does a decent job in most cases. This is currently only
  used internally in a specific case in the schema_dumper
  extension.

= Other Improvements

* The filter by associations support now respects the method
  reference vs column reference distinction that other parts of
  the association code have respected since 3.32.0.

* In the nested_attributes plugin, new one_to_one associated
  values are saved once instead of twice. Previously it
  attempted to save them before they were associated to the
  current model object, which can violate some
  validations/constraints.

* When saving an associated object in the one_to_one association
  setter method, Sequel no longer adds an unnecessary filter
  condition when nullifying the foreign key for existing rows in
  the associated table.

* The list plugin's before_create method now calls super, which
  fixes usage when other plugins that define before_create are
  loaded before it.

* In the pg_array extension, when typecasting an Array to
  PGArray, a recursive map is done on the input array to convert
  each value in the input array to the expected type, using the
  typecasting method that would be used for the scalar value.
  For example, for model objects, where ids is an integer array
  column:

    model.set(:ids=>['1', '2']).ids.to_a # => [1, 2]

* The pg_array extension now correctly handles bytea arrays used
  in bound variables.

* The pg_array extension no longer uses the JSON-based parser
  for floating point types, since it doesn't handle NaN and
  Infinity values correctly.

* When typecasting in the pg_array extension, PGArray values are
  only returned verbatim if they have a matching database type.
  Otherwise, the underlying array is rewrapped in a new PGArray
  value with the correct database type.

* H2 clob types are now recognized as strings instead of blobs.
  Previously the code attempted to do this, but it didn't do so
  correctly.

* The jdbc/postgres adapter now converts scalar values of the
  array to the appropriate type.
Previously, if you retrieved a date array, you got back a ruby array of JavaSQL::SQL::Date instances. Now, you get back a ruby array of ruby Date instances. * The schema_dumper extension now dumps migrations as change migrations, instead of separate up/down migrations, resulting in simpler code. * When dumping non-integer foreign keys in the schema dumper, an explicit type is now used. Previously, the column would have been dumped as an integer column. * When dumping unsigned integer columns in the schema dumper, add a column > 0 constraint in the dumped migration. * On Microsoft SQL Server, when updating a dataset with a limit, the limit is now respected. * When emulating offset using the ROW_NUMBER window function, do not require that the dataset be ordered. If an order is not provided, default to ordering on all of the columns in the dataset. If you want to override the default order used in such a case, you need to override the default_offset_order method for the dataset. * On SQLite, casting to Date/Time/DateTime now calls an SQLite date/datetime function instead of using a cast, as SQLite treats such a cast as a cast to integer. * When using JRuby 1.6 in ruby 1.9 mode and typecasting a time column, workaround a bug where Time#nsec is 0 even though Time#usec is not. * The odbc/mssql adapter now correctly handles the case where SCOPE_IDENTITY returns NULL after an insert. * bin/sequel now accepts multiple -l options for logging to multiple output files. * In addition to Sequel's rigorous pre-push testing, Sequel now also uses TravisCI for continuous integration testing across a wider range of ruby implementations. = Backwards Compatibility * The keys in the :key_hash entry passed to the :eager_loader proc are now method references instead of column references. For most associations, they are the same thing, but for associations using the :key_column/:primary_key_column/:left_primary_key_column options, the values could be different. If you were using one of those options and had a custom eager_loader, you should switch from indexing into the :key_hash option to just using the :id_map option. * The :key_hash entry passed to the :eager_loader proc is now no longer guaranteed to contain key maps for associations other than the one currently being eagerly loaded. Previously, it contained key maps for all associations that were being eagerly loaded. If you have a custom :eager_loader proc that accessed a key map for a separate association that was being loaded concurrently, you'll now have to build the key map manually if it doesn't exist. * If you previously explicitly specified an :eager_loader_key option when defining an association, you may need to change it so that it is a method reference instead of a column reference, or possibly just omit the option. * If you have a custom :eager_loader proc for an association where the default :eager_loader_key option references a method that the model does not respond to (or raises an exception), you may need to specify the :eager_loader_key=>nil option. * In the pg_auto_parameterize extension, String values are no longer automatically casted to text. This is because the default type of a string literal in PostgreSQL is unknown, not text. This makes it much less likely to require manual casts, but has the potential to break existing code relying on the automatic cast to text. As a work around, any query that can no longer be automatically parameterized after this query just needs to add manual casting to text. 
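  A sketch of such a manual cast (the table, function, and value
  are placeholders):

    DB[:posts].select(Sequel.function(:some_function,
                                      Sequel.cast('param', :text)))
    # some_function(CAST('param' AS text))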
* Sequel now raises an exception if you attempt to clone
  associations with different types, except if one type is
  one_to_many and the other is one_to_one. Cloning from other
  types was usually a bug, and raising an exception early will
  make it much easier to track such bugs down.

* When running the plugin/extension and PostgreSQL adapter
  specs, a json library is now required.

* The json/postgres adapter array typecasting internals have
  been modified; if you were relying on the internals, you may
  need to update your code.

* The pg_array extension internals changed significantly.
  PGArray no longer has any subclasses by default, as parsing is
  now done in separate objects. Anyone relying on the pg_array
  internals will need to update their code.

* The postgres adapter no longer sets up type conversion of
  int2vector and money types, since in both cases the conversion
  was incorrect in most cases. These types will now be returned
  as strings. If you are relying on the conversion, you'll need
  to add your own custom type procs.

ruby-sequel-4.1.1/doc/release_notes/3.38.0.txt

= New Features

* A pg_row extension has been added that supports PostgreSQL's
  row-valued/composite types. You can register support for
  specific row types:

    DB.register_row_type(:address)

  Then you can create values of that row type:

    ad = DB.row_type(:address, ['555 Foo St.', 'Bar City', '98765'])
    # or
    ad = DB.row_type(:address, :street=>'555 Foo St.',
                     :city=>'Bar City', :zip=>'98765')

  Which you can use in your datasets:

    DB[:people].insert(:name=>'Me', :address=>ad)

  If you are using the native postgres adapter, when retrieving
  row type values, they will be returned as instances of the row
  type, which are hash-like objects:

    ad = DB[:people].get(:address)
    ad[:street] # => '555 Foo St.'
    ad[:city]   # => 'Bar City'
    ad[:zip]    # => '98765'

  If you are also using the pg_array extension, then arrays of
  composite types are supported automatically. Composite types
  can also include arrays of other types as well as other
  composite types, though recursive composite types are not
  allowed by PostgreSQL. Using arrays and composite types brings
  one of the benefits of document databases to PostgreSQL,
  allowing you to store nested structures inside a single row.

* A pg_row_ops extension has been added that adds DSL support
  for accessing members of row-valued/composite types. You first
  create a row op:

    r = Sequel.pg_row_op(:row_column)

  Then you can get DSL support for accessing members of that
  row_column via the #[] method:

    r[:a] # (row_column).a

  This works with composite types containing composite types:

    r[:a][:b] # ((row_column).a).b

  When used in conjunction with the pg_array_ops extension,
  there is support for composite types that include arrays, as
  well as arrays of composite types:

    r[1][:a] # (row_column[1]).a
    r[:a][1] # (row_column).a[1]

  The extension offers additional support for referencing a
  table's type when it contains a column with the same name, see
  the RDoc for details.

* A pg_row plugin has been added that works with the pg_row
  extension, and allows you to represent row-valued types as
  Sequel::Model objects (instead of the hash-like objects they
  use by default).
In your model class, you load the plugin: class Address < Sequel::Model(:address) plugin :pg_row end Then you can use Address instances in your datasets: ad = Address.new(:street=>'555 Foo St.', :city=>'Bar City', :zip=>'98765') DB[:people].insert(:name=>'Me', :address=>ad) And if you are using the native postgres adapter, the dataset will return the type as a model instance: ad = DB[:people].get(:address) ad.street # => '555 Foo St.' ad.city # => 'Bar City' ad.zip # => '98765' * A pg_typecast_on_load plugin has been added. This plugin is designed for use with the jdbc/postgres, do/postgres, and swift/postgres adapters, and it is similar to the typecast_on_load plugin. However, while the typecast_on_load plugin uses setter methods, the pg_typecast_on_load plugin uses the same code that the native postgres adapter uses for typecasting. * The tinytds adapter now supports a :textsize option to override the default TEXTSIZE setting. The FreeTDS default is fairly small (~64k), so if you want to use large blob or text columns, you should probably set this to a value larger than the largest text/blob you want to use. * Sequel.expr when called with a symbol now splits the symbol and returns an Identifier, QualifiedIdentifier, or AliasedExpression, depending on the content of the symbol. Previously, it only wrapped the symbol using a Wrapper. * Identifier#* and QualifiedIdentifier#* when called without any argument now represent a selection of all columns from the represented table: Sequel.expr(:table).* # table.* Sequel.expr(:schema__table).* # schema.table.* This makes it easier to represent the selection of all columns in a table without using the core extensions. * Model#values now has a Model#to_hash alias. * SQL::Blob values now have as, cast, and lit methods even if the core extensions are not loaded. = Other Improvements * When loading multiple pg_* extensions into a Database instance, the conversion procs are only reset once instead of once per extension. * All adapters that access PostgreSQL now store type conversion procs, similar to the native postgres adapter. This has been added to make it easier to write extensions that support advanced PostgreSQL types. * Database#schema output on PostgreSQL now includes the type oid for each column. * You can now register custom array types to specific Database instances, using the :type_procs and :typecast_methods_module options, so it is now possible to have custom array types without affecting global state. * Dropping of columns with defaults now works correctly on Microsoft SQL Server. Before, it would fail as the related constraint was not dropped first. * The MySQL type "double(x,y)" is now recognized as a float type. * The jdbc/jtds and jdbc/derby adapters now handle nil prepared statement values in more cases. * Blob prepared statement arguments are now handled correctly on jdbc/db2 and jdbc/oracle. * Sequel now works around a Time#nsec bug in JRuby 1.6 ruby 1.9 mode when using Time values in prepared statements in the jdbc adapter. * Java::JavaUtil::UUID types are now returned as ruby strings when converting types in the jdbc adapter. * Real boolean literals are now used on derby 10.7+. On derby <10.7 Sequel still uses (1 = 1) and (1 != 1) for true and false. This allows you to use boolean columns with a true/false default on derby 10.7+. * Clobs are now treated as string types instead of blobs on derby, since treating clob as blob doesn't work there. * The swift adapter now supports an output identifier method. 
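  Returning to the pg_typecast_on_load plugin described above, a
  minimal sketch of loading it (assuming it takes the list of
  columns to typecast, following the typecast_on_load plugin's
  convention; the model and column names are hypothetical):

    # in an application using the jdbc/postgres adapter
    class Person < Sequel::Model
      plugin :pg_typecast_on_load, :parent_ids, :active
    end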
* The swift adapter now returns blobs as SQL::Blob instances.

* The schema_dumper extension no longer produces code that
  requires the core extensions.

* All of Sequel's specs now run without the core extensions
  loaded, ensuring that none of the internals depend on the core
  extensions. The only exception is the specs for the core
  extensions themselves.

= Backwards Compatibility

* The pg_* extensions no longer modify core classes if the
  core_extensions extension is not loaded. All methods they
  added now have equivalent methods on the main Sequel module:

    Sequel.pg_array
    Sequel.pg_array_op
    Sequel.hstore
    Sequel.hstore_op
    Sequel.pg_json
    Sequel.pg_range
    Sequel.pg_range_op

* The Sequel::SQL::IdentifierMethods module has been removed.
  This module was only included in Symbol if the core_extensions
  were enabled. Since it only defined a single method, now the
  core extensions just define that method directly on Symbol.

* The swift adapter now requires swift-db-{postgres,mysql,sqlite3}
  gems instead of the swift gem. swift/postgres requires
  swift-db-postgres 0.2.0+, swift/sqlite requires swift-db-sqlite
  0.1.2+, and swift/mysql requires swift-db-mysql.

* Sequel will no longer typecast a string to a PostgreSQL array
  or hstore column in a model column setter. This is because the
  parsers that Sequel uses were designed to support only
  PostgreSQL's output format. It's unlikely that a user would
  provide that format for typecasting, and while there aren't
  known security issues with the parsers, they were not designed
  to handle arbitrary user input, so typecasting from string is
  no longer allowed and will now raise an error.

  The only reason such typecasting was allowed in the first
  place was to work around issues in the jdbc/postgres,
  do/postgres, and swift/postgres adapters, using the
  typecast_on_load plugin. If you were previously using the
  typecast_on_load plugin for hstore or array columns, you need
  to switch to using the new pg_typecast_on_load plugin.

* The private get_conversion_procs method in the postgres
  adapter no longer accepts an argument.

* The Sequel::Postgres::PGArray::DatabaseMethods singleton
  define_array_typecast_method method has been removed. This
  method was designed for internal use.

* The change to make Sequel.expr split symbols can cause the
  following type of code to break:

    Sequel.expr(:column___alias).desc

  This is because expr now returns an AliasedExpression, which
  doesn't support the desc method. However, as you can't apply
  an order to an aliased expression, nobody should be relying on
  this.

ruby-sequel-4.1.1/doc/release_notes/3.39.0.txt

= New Features

* A constraint_validations extension and plugin have been added,
  which allow you to define validations when creating tables,
  which are enforced by database constraints, and have those
  validations be automatically discovered and used by your
  Sequel::Model classes.

  The extension is designed to be used in your migrations/schema
  modification code:

    DB.extension(:constraint_validations)
    DB.create_constraint_validations_table
    DB.create_table(:foos) do
      primary_key :id
      String :name

      validate do
        min_length 5, :name
      end
    end

  This creates a database CHECK constraint that ensures that the
  minimum length for the column is 5 characters. It also adds
  metadata about the validation to the
  sequel_constraint_validations table.
To have the model class automatically create validations, just include the plugin in the model: class Foo < Sequel::Model plugin :constraint_validations end Note that MySQL does not enforce CHECK constraints (it parses but ignores them), so using the extension on MySQL does not actually enforce constraints at the database level, though it still does support the automatic model validations if the plugin is used. * Dataset#count now takes an argument or a virtual row block, allowing you to do: DB[:table].count(:column_name) DB[:table].count{function_name(column1, column2)} When count is given an argument, instead of returning the total number of rows, it returns the number of rows where the argument has a non-NULL value. * Database#copy_into has been added to the postgres adapter when the pg driver is being used, and can be used for very fast inserts into tables if you already have the input preformatted in PostgreSQL text or CSV format. * set_table_not_null has been added to the alter table generator, for a nicer API: alter_table(:t){set_column_not_null :col} # instead of alter_table(:t){set_column_allow_null :col, false} Additionally, set_column_allow_null now defaults the second argument to true for a nicer API: alter_table(:t){set_column_allow_null :col} # instead of alter_table(:t){set_column_allow_null :col, true} * Database#supports_regexp? has been added for checking if the database supports Regexp in filters. Currently, only MySQL and PostgreSQL support Regexps. Attempting to use a Regexp on a database that doesn't support it now raises an error when attempting to generate the SQL, instead of sending invalid SQL to the database. * Sequel.char_length has been added for a cross platform char_length function (emulated when char_length is not supported natively by the database). * Sequel.trim has been added for a cross platform trim function (emulated when trim is not supported natively by the database). * ValidationFailed and HookFailed exceptions now have a model method that returns the model instance related to the exception. This makes it possible to use Model.create inside a begin/rescue block and get access to the underlying instance if there is a validation or before/around hook error. * The subclasses plugin now accepts a block, which is called with each model class created. This is useful if you want to apply changes to classes created in the future instead of just existing classes. * The validates_unique validation in the validation_helpers plugin now accepts a :where option for a custom uniqueness filter. Among other things this makes it easy to implement a case insensitive uniqueness validation on a case sensitive column. * The threaded connection pools now support a :connection_handling=>:disconnect option, which makes them disconnect connections after use instead of returning them to the pool. This makes it possible to completely control connection lifetime using Database#synchronize. * The pg_row_op extension now has support for PGRowOp#*, for referencing the members of the composite type as separate columns. * MySQL's set type and default value are now recognized. * bin/sequel now accepts a -c argument for running an arbitrary code string instead of using an IRB prompt. = Other Improvements * Sequel now parses current date/timestamp column defaults when parsing the schema for a table. The values will be returned as Sequel::CURRENT_DATE for date columns and Sequel::CURRENT_TIMESTAMP for timestamp columns. 
The schema_dumper extension will work with these defaults, so if you dump the schema for a table with a column that uses a current timestamp default, the dumped schema will include the default. The defaults setter plugin also works with these changes, so that when new model objects are instantiated, they get the current Date/Time/DateTime values set. * On MySQL and PostgreSQL, Sequel will now by default attempt to combine multiple alter_table operations into a single query where it believes it can do so correctly. This can potentially improve performance ~N times, where N is the number of alter table operations. This can change the SQL used for old migrations (though it shouldn't change the result), and is a potentially risky change. This may be disabled by default in future versions if it causes problems. * The defaults_setter plugin now correctly sets false default values. * The schema_dumper plugin now preserves fractional seconds in timestamp column defaults when dumping. * Time->DateTime and DateTime->Time typecasts now retain fractional seconds on ruby 1.8. * Array arguments passed to most PGArrayOp methods are now automatically wrapped in a PGArray. If you want to use this support, you need to make sure to load both the pg_array and pg_array_op extensions. * Sequel now does a better job of finding the sequence for a given table on PostgreSQL, handling more corner cases. A small side effect of this is sometimes sequence names will be quoted. * Some potential thread-safety issues when using Sequel with PostgreSQL on a non-GVL ruby implementation have been fixed. * Sequel now correctly caches the server version query on MySQL. * Sets of alter_table operations on MySQL and Microsoft SQL Server that require parsing the current database schema, where later alter_table operations depend on earlier ones, should now work correctly. * You can now drop check constraints on tables on SQLite, though doing so drops all check constraints on the table, not only the specific check constraint given. * The identity_map plugin no longer breaks if used with a model without a primary key. * Sequel::SQL::NegativeBooleanConstant now inherits from Constant instead of BooleanConstant. This means that Sequel::NULL == Sequel::NOTNULL is now false instead of true. * You can now override the convert_tinyint_to_bool settings on a per-Dataset basis in the mysql and mysql2 adapters, though the overriding is different depending on the adapter. Check the commit log for details. * timestamp(N) types are now recognized as datetime, which should fix certain cases on Oracle. * Dataset#insert now handles a single model instance argument as a single value if the model uses the pg_row plugin. * When joining a model dataset using a model class as the table argument, a subselect is used unless the model is a simple select from the underlying table. * The specs now cleanup after themselves, dropping the tables that they create for testing. = Backwards Compatibility * The defaults_setter plugin's behavior changed due to the current date/timestamp support. Previously, it would not set a value for the column, since the default wasn't recognized. Therefore, the database would use the default value on insert, which would be the database's current timestamp. Now, the value is set to the current Date/Time/DateTime on model object instantiation, so the database wouldn't use the column default. Instead of the database's current timestamp on insert, the column value will be the application's current timestamp on model instantiation. 
Users who don't want this behavior can remove the default values in the model: Model.default_values.delete(:column_name) * Plain (non-model) datasets no longer allow insert to accept a single model instance argument. Also, they no longer call values on a single argument if the object responds to it. * Plain (non-model) datasets no longer accept model classes as tables in the join/graph methods. Also, they no longer call table_name on the argument if the object responds to it. * The schema_dumper extension now requires the eval_inspect extension, which changes inspect output for Sequel::SQL::Expression objects. * Custom adapters that override Database#alter_table_sql_list now need to make sure it returns an already flattened array. * The identity_map_key method in the identity_map plugin now returns nil instead of a random string if the given pk is nil. ruby-sequel-4.1.1/doc/release_notes/3.4.0.txt000066400000000000000000000307161220156535500206050ustar00rootroot00000000000000New Plugins ----------- * A nested_attributes plugin was added allowing you to modify associated objects directly through a model object, similar to ActiveRecord's Nested Attributes. Artist.plugin :nested_attributes Artist.one_to_many :albums Artist.nested_attributes :albums a = Artist.new(:name=>'YJM', :albums_attributes=>[{:name=>'RF'}, {:name=>'MO'}]) # No database activity yet a.save # Saves artist and both albums a.albums.map{|x| x.name} # ['RF', 'MO'] It takes most of the same options as ActiveRecord, as well as a few additional options: * :destroy - Allow destruction of nested records. * :limit - For *_to_many associations, a limit on the number of records that will be processed, to prevent denial of service attacks. * :remove - Allow disassociation of nested records (can remove the associated object from the parent object, but not destroy the associated object). * :strict - Set to false to not raise an error message if a primary key is provided in a record, but it doesn't match an existing associated object. If a block is provided, it is passed each nested attribute hash. If the hash should be ignored, the block should return anything except false or nil. * A timestamps plugin was added for automatically adding before_create and before_update hooks for setting values on timestamp columns. There are a couple of existing external plugins that handle timestamps, but the implementations are suboptimal. The new built-in plugin supports the following options (with the default in parentheses): * :create - The field to hold the create timestamp (:created_at) * :force - Whether to overwrite an existing create timestamp (false) * :update - The field to hold the update timestamp (:updated_at) * :update_on_create - Whether to set the update timestamp to the create timestamp when creating (false) * An instance_hooks plugin was added for adding hooks to specific model instances: obj = Model.new obj.after_save_hook{do_something} obj.save # calls do_something after the obj has been saved All of the standard hooks are supported, except for after_initialize. Instance level before hooks are executed in reverse order of addition before calling super. Instance level after hooks are executed in order of addition after calling super. If any of the instance level before hook blocks return false, no more instance level before hooks are called and false is returned. Instance level hooks are cleared when the object is saved successfully. * A boolean_readers plugin was added for creating attribute? methods for boolean columns.
This can provide a nicer API: obj = Model[1] obj.active # Sequel default column reader obj.active? # Using the boolean_readers plugin You can provide a block when loading the plugin to change the criteria used to determine if the column is boolean: Sequel::Model.plugin(:boolean_readers) do |c| db_schema[c][:db_type] =~ /\Atinyint/ end This may be useful if you are using MySQL and have some tinyint columns that represent booleans and others that represent integers. You can turn the convert_tinyint_to_bool setting off and use the attribute methods for the integer value and the attribute? methods for the boolean value. Other New Features ------------------ * Sequel now has support for converting Time/DateTime to local or UTC time upon storage, retrieval, or typecasting. There are three different timezone settings: * Sequel.database_timezone - The timezone that timestamps use in the database. If the database returns a time without an offset, it is assumed to be in this timezone. * Sequel.typecast_timezone - Similar to database_timezone, but used for typecasting data from a source other than the database. This is currently only used by the model typecasting code. * Sequel.application_timezone - The timezone that the application wants to deal with. All Time/DateTime objects are converted into this timezone upon retrieval from the database. Unlike most things in Sequel, these are only global settings; you cannot change them per database. There are only three valid timezone settings: * nil (the default) - Don't do any timezone conversion. This is the historical behavior. * :local - Convert to local time/Consider time to be in local time. * :utc - Convert to UTC/Consider time to be in UTC. So if you want to store times in the database as UTC, but deal with them in local time in the application: Sequel.application_timezone = :local Sequel.database_timezone = :utc If you want to set all three timezones to the same value: Sequel.default_timezone = :utc There are three conversion methods that are called: * Sequel.database_to_application_timestamp - Called on time objects coming out of the database. If the object coming out of the database (usually a string) does not have an offset, assume it is already in the database_timezone. Return a Time/DateTime object (depending on Sequel.datetime_class), in the application_timezone. * Sequel.application_to_database_timestamp - Used when literalizing Time/DateTime objects into an SQL string. Converts the object to the database_timezone before literalizing them. * Sequel.typecast_to_application_timestamp - Called when typecasting objects for model datetime columns. If the object being typecast does not already have an offset, assume it is already in the typecast_timezone. Return a Time/DateTime object (depending on Sequel.datetime_class), in the application_timezone. Sequel does not yet support named timezones or per thread modification of the timezone (for showing all timestamps in the current user's timezone). Extensions to support both features are planned for a future version. * Dataset#truncate was added for truncating tables. Truncate allows for fast removal of all rows in a table. * Sequel now supports typecasting a hash to date, time, and datetime types. This allows easy usage of Sequel with forms that split the entry of these database types into separate form fields.
With this code, you can just have field names like: date[year] date[month] date[day] Rack will parse that into: {'date'=>{'year'=>?, 'month'=>?, 'day'=>?}} So then you can do: obj.date = params['date'] # or obj.set(params) * validates_unique now takes a block that can be used to scope the uniqueness constraint. This allows you to easily set up uniqueness validations that are only necessary in a given scope. For example, a validation on username, but only for active users (as inactive users are soft deleted but remain in the table). You just pass a block to validates_unique: validates_unique(:name){|ds| ds.filter(:active)} * The serialization plugin now supports json. * Sequel now supports generic concepts of CURRENT_{DATE,TIME,TIMESTAMP}. Most databases support these SQL concepts, but not all, and some implementations act differently. The Sequel::SQL::Constants module holds the three constants, which are instances of SQL::Constant, an SQL::GenericExpression subclass. This module is included in Sequel, so you can reference the constants more easily (e.g. Sequel::CURRENT_TIMESTAMP). It's separated out into a separate module so that you can just include that module in the top level scope, allowing you to reference the constants directly (e.g. CURRENT_TIMESTAMP). DB[:events].filter{date < ::Sequel::CURRENT_DATE} # or: include Sequel::SQL::Constants DB[:events].filter{date < ::CURRENT_DATE} * Database#run was added for executing arbitrary SQL on a database. It's an alias for Database#<<, but it allows for a nicer API inside migrations, since you can now do: run 'SQL' instead of: self << 'SQL' You can also provide a :server option to run the SQL on the given server/shard: run 'SQL', :server=>:shard1 * Sequel::Model() can now take a database argument in addition to a symbol or dataset argument. If a database is given, it'll create an anonymous subclass attached to the given database. Other changes were made to allow the following code to work: class Item < Sequel::Model(DB2) end That will work correctly assuming a table named items in DB2. * Dataset#ungrouped was added for removing a grouping from an existing dataset. Also, Dataset#group when called with no arguments or with a nil argument also removes any existing grouping instead of resulting in invalid SQL. * Model#modified? was added, letting you know if the model has been modified. If the model hasn't been modified, calling Model#save_changes will do nothing. * SQL::OrderedExpression now supports #asc, #desc, and #invert. Other Improvements ------------------ * The serialization and lazy_attribute plugins now add accessor methods to a module included in the class, instead of to the model class itself. This allows the methods to be overridden in the class and work well with super, as well as for the plugins to work together on the same column. Make sure the lazy_attributes accessor is set up before the serialization accessor if you want to have a lazy serialized column. * Calling the add_* method for a many_to_many association now saves the record if the record is new. This makes it operate more similarly to one_to_many associations. Previously, it raised an Error. * Dataset#import now works correctly when called with a dataset. Previously, it generated incorrect SQL. * The JDBC adapter now converts byte arrays to/from SQL::Blob. * The JDBC adapter now attempts to bind unknown types using setObject instead of raising, so it can work with native Java objects. It also binds boolean parameters correctly.
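  For example, on JRuby a native Java object can now be used as a bound
  variable. The following is only a sketch (the table, column, and use of
  java.sql.Timestamp are illustrative assumptions, not from the release
  itself):

    # JRuby only: unknown argument types are bound via setObject, so a
    # native java.sql.Timestamp can be passed as a bind variable.
    ts = java.sql.Timestamp.new(Time.now.to_i * 1000) # millisecond epoch
    DB[:events].filter(:created_at=>:$t).call(:select, :t=>ts)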
* Using multiple emulated ALTER TABLE statements (such as drop_column) in a single alter_table block now works correctly on SQLite. * Database#indexes now works on JDBC for tables in a non-default schema. It also now properly detects unique indexes on MSSQL. * Database#schema on JDBC now accepts a :schema option. Also, returned schema hashes now include a :column_size entry specifying the maximum length/precision for the column, since the :db_type entry doesn't contain the information on JDBC. * Datasets without tables now work correctly on Oracle, so things like DB.get(...) now work. * A descriptive error message is given if you attempt to use Sequel with the mysql.rb driver (which Sequel doesn't support). * The postgres adapter now works correctly with a modified postgres-pr that raises PGErrors instead of RuntimeErrors (e.g. http://github.com/jeremyevans/postgres-pr). * You now get a Sequel::InvalidOperation instead of a NoMethodError if you attempt to update a dataset without a table. * The inflection support has been modified to reduce code duplication. Backwards Compatibility ----------------------- * Sequel now includes fractional seconds in timestamps for all adapters except MySQL. It's possible that this may break timestamp columns for databases that are not regularly tested. * Sequel now includes timezone values in timestamps on Microsoft SQL Server, Oracle, PostgreSQL and SQLite. The modification for SQLite is probably the biggest cause for concern, since SQLite stores times as text. If you have an SQLite database that uses timestamps and is accessed by something other than Sequel, you should make sure that it works with the timestamp format that Sequel now uses. * The default timestamp format used by Sequel now uses a space instead of 'T' between the date and time parts, which could possibly affect some databases that are not regularly tested. * Attempting to insert into a grouped dataset or a dataset that selects from multiple tables will now raise an Error. Previously, it would ignore any GROUP or JOIN settings and generate bad SQL if there were multiple FROM tables. * Database#<< now always returns nil. Before, the return value was adapter dependent. * ODBC::Time and ODBC::DateTime values are now converted to the Sequel.datetime_class. Before, ODBC::Time used Time and ODBC::DateTime used DateTime regardless of the Sequel.datetime_class setting. * The default inflections were modified, fixing some obvious errors and possibly changing some existing inflections. Further changes to the default inflections are unlikely. ruby-sequel-4.1.1/doc/release_notes/3.40.0.txt000066400000000000000000000053631220156535500206650ustar00rootroot00000000000000= New Features * Sequel now has vastly improved support for Microsoft Access. * Sequel now supports the CUBRID database, with a cubrid adapter that uses the cubrid gem, and a jdbc/cubrid adapter for accessing CUBRID via JDBC on JRuby. * The association_pks plugin now supports composite keys. * Database#transaction now accepts a :disconnect=>:retry option, in which case it will automatically retry the block if it detects a disconnection. This is potentially dangerous, and should only be used if the entire block is idempotent. There is also no checking against an infinite retry loop. * SQL::CaseExpression#with_merged_expression has been added, for converting a CaseExpression with an associated expression to one without an associated expression, by merging the expression into each condition.
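  For example, a sketch of the conversion (the expression is hypothetical,
  and the exact SQL output may differ slightly):

    ce = Sequel.case({1=>'a', 2=>'b'}, 'c', :x)
    # CASE x WHEN 1 THEN 'a' WHEN 2 THEN 'b' ELSE 'c' END
    ce.with_merged_expression
    # CASE WHEN (x = 1) THEN 'a' WHEN (x = 2) THEN 'b' ELSE 'c' END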
= Other Improvements * Sequel now quotes arguments/columns in common table expressions. * Sequel now handles nil values correctly in the pg_row extension. * Sequel::Postgres::HStore instances can now be marshalled. * Sequel now uses clob for String :text=>true types on databases that don't support a text type. * On PostgreSQL, Sequel now quotes channel identifier names when using LISTEN/NOTIFY. * On PostgreSQL, Sequel now correctly handles the case where named type conversion procs have been added before the Database object is instantiated. * On DB2, Sequel now explicitly sets NOT NULL for unique constraint columns instead of foreign key columns. DB2 does not allow columns in unique constraints to be NULL, but does allow foreign key columns to be NULL. * In the oracle adapter, clob values are now returned as ruby strings upon retrieval. * Sequel now detects more types of disconnections in the postgres, mysql, and mysql2 adapters. * If a database provides a default column value that isn't a ruby string, it is used directly as the ruby default, instead of causing the schema parsing to fail. = Backwards Compatibility * Code using Sequel's oracle adapter that expected clob values to be returned as OCI8::CLOB instances needs to be modified to work with ruby strings. * Because Sequel now quotes column names in common table expressions, those names are now case sensitive, which could break certain poorly coded queries. Similar issues exist with the quoting of channel identifier names in LISTEN/NOTIFY on PostgreSQL. * The private Database#requires_return_generated_keys? method has been removed from the jdbc adapter. Custom jdbc subadapters relying on this method should override the private Database#execute_statement_insert method instead to ensure that RETURN_GENERATED_KEYS is used for insert statements. * The private Dataset#argument_list and #argument_list_append methods have been removed. ruby-sequel-4.1.1/doc/release_notes/3.41.0.txt000066400000000000000000000140051220156535500206570ustar00rootroot00000000000000= New Features * A connection_validator extension has been added, which automatically determines if connections checked out from the pool are still valid. If they are not valid, the connection is disconnected and another connection is used automatically, transparent to user code. Checking if connections are valid requires a query, so this extension causes a performance hit. For that reason, connections are only checked by default if they have been inactive for more than a configured amount of time (1 hour by default). You can choose to validate connections on every checkout via: DB.pool.connection_validation_timeout = -1 However, this can cause a substantial performance hit unless you are purposely using coarse connection checkouts via manual calls to Database#synchronize (for example, in a Rack middleware). Using coarse checkouts can greatly reduce the amount of concurrency that Sequel supports (for example, limiting the number of concurrent requests to the number of database connections), so this method is not without its tradeoffs. * Sequel.delay has been added for a generic form of delayed evaluation. This method takes a block and delays evaluating it until query literalization. 
By default, Sequel evaluates most arguments immediately: foo = 1 ds = DB[:bar].where(:baz=>foo) # SELECT * FROM bar WHERE (baz = 1) foo = 2 ds # SELECT * FROM bar WHERE (baz = 1) Using Sequel.delay, you can delay the evaluation: foo = 1 ds = DB[:bar].where(:baz=>Sequel.delay{foo}) # SELECT * FROM bar WHERE (baz = 1) foo = 2 ds # SELECT * FROM bar WHERE (baz = 2) * Sequel now supports the :unlogged option when creating tables on PostgreSQL, to create an UNLOGGED table. * On SQLite, Database#transaction now supports a :mode option for setting up IMMEDIATE/EXCLUSIVE SQLite transactions. Sequel also supports a Database#transaction_mode accessor for setting the default transaction mode on SQLite. * Most pg_* extension objects (e.g. PGArray) now support the #as method for creating an SQL::AliasedExpression object. * The single_table_inheritance plugin now supports non-bijective mappings. In lay terms, this means that a one-to-one mapping of column values to classes is no longer required. You can now have multiple column values that map to a single class in the :model_map option, and specify a :key_chooser option to choose which column value to use for the given model class. * The touch plugin now handles the touching of many_to_many associations, and other associations that use joined datasets. * ConnectionPool#pool_type has been added. It returns a symbol representing the type of connection pool in use (similar to Database#database_type). * Database#valid_connection? has been added for checking if a given connection is still valid. * Database#disconnect_connection is now part of the public API, and can be used to disconnect a given connection. = Other Improvements * Uniqueness validation now correctly handles nil values. Previously, it checked the underlying table for other rows where the column IS NULL, but that is incorrect behavior. Sequel's new (correct) behavior is to skip the uniqueness check if the column is nil. * Foreign key parsing is now supported on Microsoft SQL Server. * Dataset#reverse and #reverse_order now accept virtual row blocks. * Changing the name of the primary key column, and possibly other schema changes on the primary key column, are now supported on MySQL. * Primary key columns are now specifically marked as NOT NULL on SQLite, as non-integer primary keys on SQLite are not considered NOT NULL by default. * Failure to create a native prepared statement is now handled better in the postgres, mysql, and mysql2 adapters. * Firebird now emulates selecting data without an underlying table (e.g. DB.get(1)). * Finding the name of the constraint that sets column defaults on Microsoft SQL Server now works correctly on JRuby 1.7. * An additional type of disconnect error is now recognized in the jdbc/sqlserver adapter. * Many adapters have been fixed so that they don't raise an exception if trying to disconnect an already disconnected connection. * Many adapters have been fixed so that Database#log_connection_execute logs and executes the given SQL on the connection. * Many adapters have been fixed so that Database#database_error_classes returns an array of database exception classes for that adapter. * Database#log_exception now handles a nil exception message. * Dataset#limit(nil, nil) now resets offset in addition to limit, but you should still use Dataset#unlimited instead. * A bin/sequel usage guide has been added to the documentation. = Backwards Compatibility * Sequel now treats clob columns as strings instead of blobs (except on DB2 when use_clob_as_blob = true).
This can make it so the values are returned as strings instead of SQL::Blob values. Since SQL::Blob is a String subclass, this generally will not affect user code unless you are passing the values as input to a separate blob column. * The Database <-> ConnectionPool interface was completely changed. Sequel no longer supports custom connection procs or disconnection procs in the connection pools. The :disconnection_proc Database option is no longer respected, and blocks passed to Database.new are now ignored. This change should not be user-visible, but if you had any code that was monkeying with the connection pool internals, you may need to modify it. * Code that was using the uniqueness check to also check for presence should add a separate check for presence. Such code was broken, as it only worked if there was already a NULL column value in the table. If you were relying on this broken behavior, you should clean up the NULL data in the column and then mark the database column as NOT NULL. * If you have code that specifically abuses the fact that non-integer primary keys on SQLite allow NULL values by default, it will no longer work. ruby-sequel-4.1.1/doc/release_notes/3.42.0.txt000066400000000000000000000054221220156535500206630ustar00rootroot00000000000000= New Features * Dataset#avg, #interval, #min, #max, #range, and #sum now accept virtual row blocks, allowing you to more easily get aggregate values of expressions based on the table: DB[:table].sum{some_function(column1, column2)} # => 134 # SELECT sum(some_function(column1, column2)) FROM table * Database#do has been added on PostgreSQL for using the DO anonymous code block execution statement. * Model.dataset_module now uses a Module subclass, which allows you to call subset inside a dataset_module block, making it easier to consolidate dataset method code: class Album < Sequel::Model dataset_module do subset(:gold){copies_sold > 500000} end end * Database#copy_table and #copy_into are now supported on jdbc/postgres. * Sequel now supports deferred constraints on constraint types other than foreign keys. The only databases that appear to implement this are Oracle and PostgreSQL. * Sequel now supports INITIALLY IMMEDIATE deferred constraints via the :deferrable=>:immediate constraint/column option. * Sequel now supports setting the default size of string columns, via the default_string_column_size option or accessor. In some cases, Sequel's default string column size of 255 is too large (e.g. MySQL with utf8mb4 character set), and this allows you to change it. = Other Improvements * Dataset#count and other methods now use a subselect in the case where the dataset has an offset but no limit. * If an error occurs while attempting to commit a transaction, Sequel now attempts to roll back the transaction. Some databases do this automatically, but not all. Among other things, this fixes issues with deferred foreign key constraint violations on SQLite. * When extending a model's dataset, the model's instance_dataset is reset, ensuring that it will also be extended with the module. * When passing an invalid argument to Dataset#filter, the exception message now includes the argument. * The force_encoding plugin now works with frozen string values. * Public methods added to a model dataset_module now have model class methods created for them even if the method was added outside of a dataset_module block.
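  For instance, in this sketch (the model, column, and method names are
  hypothetical), a method defined directly on the module returned by
  dataset_module still gets a matching class method:

    mod = Album.dataset_module # no block given, returns the module
    mod.module_eval do
      def gold
        where{copies_sold > 500000}
      end
    end
    Album.gold # works the same as Album.dataset.gold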
* On PostgreSQL, Database#indexes now includes a :deferrable entry for each index hash, which will be true for unique indexes where the underlying constraint is deferrable. * On Microsoft SQL Server 2000, Dataset#update no longer includes a limit (TOP), allowing it to work correctly. = Backwards Compatibility * Model.dataset_methods has been removed. This was used to store blocks for methods created via def_dataset_method and subset. The internals have been changed so that a dataset_module is always used in these cases, so there was no longer a reason for this method. ruby-sequel-4.1.1/doc/release_notes/3.43.0.txt000066400000000000000000000075661220156535500206730ustar00rootroot00000000000000= New Features * A core_refinements extension has been added, which offers refinement versions of Sequel's core extensions. This requires the new experimental refinement support added in ruby 2.0, and allows you to use the Sequel DSL methods in a file without actually modifying the Symbol, String, Array, and Hash classes. * A date_arithmetic extension has been added for performing database-independent date calculations (adding/subtracting an interval to/from a date): Sequel.extension :date_arithmetic e = Sequel.date_add(:date_column, :years=>1, :months=>2, :days=>3) DB[:table].where(e > Sequel::CURRENT_DATE) In addition to providing the interval as a hash, you can also provide it as an ActiveSupport::Duration object. This extension is supported on 11 database types. * Dataset#get can now take an array of multiple expressions to get an array of values, similar to map/select_map: value1, value2 = DB[:table].get([:column1, :column2]) * Sequel can now handle [host.]database.schema.table qualified tables on Microsoft SQL Server. To implement this support, the split_qualifiers method has been added to Database and Dataset for taking a possibly qualified identifier and splitting it into an array of identifier strings. * The string_stripper plugin now offers the ability to manually specify which columns to skip stripping for via Model.skip_string_stripping. = Other Improvements * The jdbc adapter now works with the new jdbc-* gems, which require a manual load_driver step that the older jdbc-* gems did not require. * The string_stripper plugin no longer strips blob columns or values. * Database#copy_into in both the postgres and jdbc/postgres adapters has been fixed to better handle exceptions. * Dataset#hash and Model#hash are now significantly faster. * Lambda procs with 0 arity can now be used as virtual row blocks on ruby 1.9. Previously, attempting to use a lambda proc with 0 arity as a virtual row block on ruby 1.9 would raise an exception. * Schema-qualified composite types are now handled correctly in the pg_row extension. * Database#reset_primary_key_sequence on PostgreSQL now works correctly when a default_schema is set. * tinyint(1) unsigned columns on MySQL are now parsed as booleans instead of integers if converting tinyint to boolean. * The jdbc adapter now supports the jdbc-hsqldb gem, so you can now install that instead of having to require the .jar manually. * Blobs are now cast correctly on DB2 when the use_clob_as_blob setting is false. * Oracle timestamptz types are now handled correctly in the jdbc/oracle adapter. * Sequel now defaults to :prefetch_rows = 100 in the oracle adapter, which can significantly improve performance. * Sequel now defines respond_to_missing? where method_missing is defined and the object also responds to respond_to?.
* Sequel::BasicObject now responds to instance_exec on ruby 1.8. = Backwards Compatibility * The meta_def method that was defined on Database, Dataset, and Model classes and instances has been moved to an extension named meta_def, and is no longer loaded by default. This method was previously used internally, and it wasn't designed for external use. If you have code that uses meta_def, you should now load the extension manually: Sequel.extension :meta_def * The private _*_dataset_helper model association methods are no longer defined. The AssociationReflection#dataset_helper_method public method is also no longer defined. * Dataset#schema_and_table now always returns strings (or nil). Before, in some cases it would return symbols. * Using a conditions specifier array with Dataset#get no longer works due to the new multiple values support in Dataset#get. So code such as: DB[:table].get([[:a, 1], [:b, 2]]) should be changed to: DB[:table].get(Sequel.expr([[:a, 1], [:b, 2]])) ruby-sequel-4.1.1/doc/release_notes/3.44.0.txt000066400000000000000000000133061220156535500206650ustar00rootroot00000000000000= New Features * Dataset#paged_each has been added, for processing entire datasets without keeping all rows in memory, even if the underlying driver keeps all query results in memory. This is implemented using limits and offsets, and requires an order (model datasets use a default order by primary key). It defaults to fetching 1000 rows at a time, but that can be changed via the :rows_per_fetch option. This method is drop-in compatible with each (a usage sketch appears below). Previously, the pagination extension's each_page method could be used for a similar purpose, but users of each_page are now encouraged to switch to paged_each. * Sequel now recognizes constraint violation exceptions on most databases, and will raise specific exceptions for different types of constraint violations, instead of the generic Sequel::DatabaseError: * Sequel::ConstraintViolation (generic superclass) * Sequel::CheckConstraintViolation * Sequel::NotNullConstraintViolation * Sequel::ForeignKeyConstraintViolation * Sequel::UniqueConstraintViolation * Sequel::Postgres::ExclusionConstraintViolation * The :dataset association option can now accept an optional association reflection argument. Instead of doing: Album.one_to_many :artists, :dataset=>{Artist...} you can now do: Album.one_to_many :artists, :dataset=>{|r| r.associated_dataset...} This second form will perform better. * Temporary views are now supported on PostgreSQL and SQLite using the :temp option to create_view. = Other Improvements * Row fetching speed in the tinytds adapter has been increased by up to 60%. * Row fetching speed in the mysql2 adapter when using an identifier output method has been increased by up to 50%. * On databases where offsets are emulated via the ROW_NUMBER window function (Oracle, DB2, Microsoft SQL Server), using an offset in a subselect is now supported. For example, the following code previously didn't work correctly with emulated offsets: # Second 5 rows ordered by column2 of the second 10 rows ordered # by column 1. DB[:table].order(:column1).limit(10, 10). from_self.order(:column2).limit(5, 5) Row processing speed has been increased slightly for all adapters that supported databases where offsets are emulated. * Association method performance has improved by caching an intermediate dataset. This can nearly triple the performance of the association_dataset method, and increase the performance of the association method by close to 30%.
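  As a usage sketch of the Dataset#paged_each method described above (the
  table, order, and block body are hypothetical):

    # Processes the entire table 500 rows at a time, using limits and
    # offsets behind the scenes; an order is required.
    DB[:huge_table].order(:id).paged_each(:rows_per_fetch=>500) do |row|
      puts row[:id]
    end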
* Virtual Row performance has increased about 30% in the typical case by using a shared VirtualRow instance. * Database#create_or_replace_view is now emulated on databases that don't support it directly by dropping the view before attempting to create it. * The columns_introspection extension can now introspect for simple select * queries from subselects, and it can now use the cached schema information in the database for simple select * queries from tables. * The identity_map plugin now works correctly with many-to-many right-side composite keys. * Dataset#last for Model datasets now works even if you don't specify an order explicitly, giving the last entry by primary key. Note that Dataset#first for model datasets still does not order by default. * The eager_each plugin no longer uses Object#extend at runtime. * Database#remove_cached_schema is now thread-safe on non-GVL ruby implementations. * Connection errors in the jdbc adapter now provide slightly more helpful messages. * Sequel now uses the standard offset emulation code in the jdbc/as400 adapter, instead of custom offset emulation code specific to that adapter. * Database#create_view with a dataset now works correctly when using the pg_auto_parameterize extension. * Database#columns no longer calls the row_proc. * Dataset#schema_and_table no longer turns a literal string into a non-literal string. * The oracle adapter now works with a :prefetch_rows=>nil option, which explicitly disables prefetching. * The mock mssql adapter now sets a server_version so that more parts of it work. = Backwards Compatibility * Offset emulation via ROW_NUMBER works by moving the query to a subselect that also selects from the ROW_NUMBER window function, and filtering on the ROW_NUMBER in the main query. Previously, the ROW_NUMBER was also present in the output columns, and some adapter code was needed to hide that fact. Now, the outer select selects all of the inner columns in the subselect except for the ROW_NUMBER, reducing the adapter code needed. This has the side effect of potentially requiring a query (or multiple queries for multiple subselects) to determine the columns to use. The columns_introspection extension may reduce the number of queries needed. * The correlated_subquery eager limit strategy is no longer supported on Microsoft SQL Server for many_*_many associations. As the window_function eager limit strategy is supported there, there is no reason to use the correlated_subquery strategy. * The public AssociationReflection#_dataset_method method has been removed. * The private _*_dataset methods for associations (e.g. _albums_dataset) have been removed. * The private Dataset#offset_returns_row_number_column? method has been removed. * :conditions options for associations are now added to the association dataset before the foreign key filters, instead of after. This should have no effect unless you were introspecting the dataset's opts or sql and acting on it. * The added abilities in the columns_introspection plugin to use cached schema for introspection can now cause it to return incorrect results if the table's schema has changed since it was cached by Sequel. ruby-sequel-4.1.1/doc/release_notes/3.45.0.txt000066400000000000000000000172061220156535500206710ustar00rootroot00000000000000= New Features * Database#transaction now recognizes a :retry_on option, which should contain an exception class or array of exception classes. If the transaction raises one of the given exceptions, Sequel will automatically retry the transaction block. 
It's a bad idea to use this option if the transaction block is not idempotent. By default, Sequel only retries the block 5 times, to protect against infinite looping. You can change the number of retries with the :num_retries option. Users of the :disconnect=>:retry option are encouraged to switch to :retry_on=>Sequel::DatabaseDisconnectError. * Dataset#escape_like has been added for escaping LIKE metacharacters. This is designed for the case where part of the LIKE pattern is based on user input that should not treat the metacharacters specially (a usage sketch appears below). * Serialization failures/deadlocks are now raised as Sequel::SerializationFailure exception instances. This exception class is a good candidate for the transaction :retry_on option. * On PostgreSQL, you can now provide the :force_standard_strings and :client_min_messages Database options to override the defaults on a per-instance basis. * On PostgreSQL, Database#tables and #views now recognize a :qualify option, which if true will return qualified identifiers instead of plain symbols. * Transaction isolation levels are now supported on Oracle, DB2, and all jdbc subadapters using the JDBC transaction support. * Dataset.def_mutation_method now accepts a :module option for the module in which to define the methods (defaulting to self). * An unlimited_update plugin has been added. Its sole purpose is to eliminate a MySQL warning in replicated environments, since by default Sequel::Model uses a LIMIT clause when updating on MySQL. * The named_timezones extension now adds a Sequel.tzinfo_disambiguator accessor to automatically handle TZInfo::AmbiguousTime exceptions. This should be a callable object that accepts two arguments, a DateTime instance and an array of timezone periods, and returns the timezone period to use. = Other Improvements * Sequel now handles JSON securely, specifying the :create_additions=>false option when using JSON.parse. If you really want to get the old vulnerable behavior back, override Sequel.parse_json. * The json_serializer and xml_serializer plugins are now secure by default. Before, the default behavior of these plugins allowed for round tripping, such that: Album.from_xml(album.to_xml) == album Unfortunately, that requires that the deserialization allow the setting of any column. Since the plugins also handle associations, you could also set any column in any associated object, even cascading to associated objects of those objects. The new default behavior only allows deserialization to set the same columns that mass-assignment would set, and not to handle associated objects at all by default. The following additional options are supported: :fields :: The specific fields to set (this was already supported by the json_serializer plugin). :associations :: The specific associations to handle. :all_columns :: The previous behavior of setting all columns. :all_associations :: The previous behavior of setting all associations. Since JSON parsing no longer deserializes into arbitrary ruby instances, from_json and array_from_json class methods have been added to the json_serializer plugin, for deserializing into model instances. These mirror the from_xml and array_from_xml class methods in the xml_serializer plugin. Note that the :all_columns and :all_associations options were only added to make backwards compatibility easier. It is likely they will be removed in Sequel 4, along with the json_create class method.
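  As a usage sketch of the Dataset#escape_like method described above (the
  table, column, and search term are hypothetical):

    # Escape the % and _ metacharacters in the user-supplied term, then
    # embed the result in a larger LIKE pattern.
    term = DB[:items].escape_like('50%_off')
    DB[:items].where(Sequel.like(:name, "%#{term}%")).all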
* Sequel now attempts to use database specific error codes or SQLState codes instead of regexp parsing to determine if a more specific DatabaseError subclass should be used. This should make error handling faster and more robust. * Sequel now uses ESCAPE '\' when using LIKE, for similar behavior across databases. Previously, no ESCAPE clause was used, so behavior differed across databases, with most not using escaping, and PostgreSQL, MySQL, and H2 defaulting to backslash as the escape character. * The query extension has been reimplemented and now uses a proxy object instead of Object#extend. * The :pool_timeout Database option now supports fractional seconds. * Database#quote_identifier is now a public method. * Metadata parsing (schema, indexes, foreign_key_list) on PostgreSQL now correctly handles the case where an unqualified table name is used and tables with that name exist in multiple schemas. It now picks the first matching table in the schema_search_path, instead of failing or returning results from all tables. * Sequel::Model instances no longer attempt to typecast the money type on PostgreSQL, since the previous typecast didn't work correctly, and correct typecasting is locale-dependent. * Sequel no longer picks up foreign keys for tables in other databases when using Database#foreign_key_list on MySQL. * A warning when using the mysql2 0.3.12 beta has been eliminated. * A warning has been eliminated when using the jdbc/oracle adapter on JRuby 1.7. * Sequel's ilike emulation should now work by default on databases without specific syntax support. * Dataset#from_self! no longer creates a self referential dataset. * Coverage testing now uses simplecov instead of rcov on ruby 1.9+. = Backwards Compatibility * The switch to using JSON.parse :create_additions=>false means that if your app expected JSON to deserialize into arbitrary ruby objects, it is probably broken. You should update your application code to manually convert the deserialized hashes into the ruby objects you want. Note that it's not just this new version of Sequel that will cause that; older versions of Sequel will break in the same way if you update your JSON library to a version that is not vulnerable by default. This potentially affects the pg_json extension and serialization plugin if you were expecting the JSON stored in the database to be deserialized into arbitrary ruby objects. See the json_serializer/xml_serializer changes mentioned in the Other Improvements section. * The reimplemented query extension is not completely backwards compatible. For example, inside a query block, self refers to the proxy object instead of a dataset, and calling methods that return rows no longer raises an exception. * The metadata parsing methods on PostgreSQL no longer work with unqualified tables where the table is not in the schema search path. This makes metadata parsing consistent with how datasets operate. For tables outside the schema search path, you must qualify them before use now. Additionally, using a nonexistent table name will raise an exception instead of returning empty results in some cases. * The Dataset#def_mutation_method instance method has been removed. This method added mutation methods directly on the dataset instance, which is generally not desired. Using the def_mutation_method class method with the :module option is now the recommended approach.
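  A sketch of the recommended form (the module, dataset, and method names
  are hypothetical):

    # Define a where! mutation method in a module instead of on
    # Sequel::Dataset itself, and extend only selected datasets with it.
    DatasetBang = Module.new
    Sequel::Dataset.def_mutation_method(:where, :module=>DatasetBang)
    ds = DB[:table].extend(DatasetBang)
    ds.where!(:a=>1) # modifies ds in place instead of returning a copy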
* The switch to using ESCAPE for LIKE characters is backwards incompatible on databases that don't use escaping by default, when backslash is used in a LIKE pattern as a regular character. Now you have to double the backslash in the pattern. * Database#database_error_regexps private method now can return any enumerable yielding regexp/exception class pairs; it is no longer specified to return a hash. ruby-sequel-4.1.1/doc/release_notes/3.46.0.txt000066400000000000000000000104301220156535500206620ustar00rootroot00000000000000= New Features * Dataset#first! has been added. This is identical to #first, except that where #first would return nil because no row matched, #first! raises a Sequel::NoMatchingRow exception. The main benefit here is that a standard exception class is now used, so external libraries can deal with these exceptions appropriately (such as web applications returning a 404 error). * Dataset#with_pk! has been added to model datasets. Similar to #first!, this raises a Sequel::NoMatchingRow exception instead of returning nil if there is no matching row. * A drop_foreign_key method has been added to the alter_table generator: alter_table(:tab){drop_foreign_key :col} This relies on foreign_key_list working and including the name of the foreign key. Previously, you'd have to drop the foreign key constraint before dropping the column in some cases. * Column constraints can now be named using :*_constraint_name options: create_table(:tab) do primary_key :id, :primary_key_constraint_name=>:pk_name foreign_key :t_id, :t, :foreign_key_constraint_name=>:fk_name, :unique=>true, :unique_constraint_name=>:uk_name end This makes it easier to name constraints, which has always been recommended as it makes it easier to drop such constraints in the future. * On Microsoft SQL Server, Dataset#cross_apply and #outer_apply have been added to use CROSS/OUTER APPLY. These are useful if you want to join a table to the output of a function that takes the table as an argument. = Other Improvements * The connection pools are now faster when using the :connection_handling=>:queue option. * External connection pool classes can now be loaded automatically by the :pool_class option. * Database#each_server now raises if not given a block. Previously, it just leaked Database references. * On Microsoft SQL Server, ] characters are now escaped correctly in identifiers. * On PostgreSQL, infinite dates are also handled when using Database#convert_infinite_timestamps. Previously, infinite dates were incorrectly converted to 0000-01-01. * The associations, composition, serialization, and dirty plugins now clear caches stored in the instance in some additional cases, such as when saving model instances when the dataset supports insert_select. * Model#validates_type in the validation_helpers plugin now handles false values correctly. * The string_stripper plugin has been fixed to not change the result of Model.set_dataset. * You can now drop primary key constraints on H2, using: alter_table(:tab){drop_constraint :foo, :type=>:primary_key} * The jdbc/as400 adapter has been fixed; it was broken starting in Sequel 3.44.0. * A Security guide has been added explaining various security issues to think about when using Sequel. = Backwards Compatibility * The change to make the associations, composition, serialization, and dirty plugins clear caches after saving when the dataset supports insert_select can break code that expected the previous behavior.
For example: artist = Artist[1] artist.has_albums # => false album = Album.new(:artist=>artist) def album.after_create super artist.update(:has_albums=>true) end album.save artist.has_albums # => false Such code should either refresh the artist after saving the album, or use album.artist.has_albums. You already had to do that if the dataset did not support insert_select; the impetus for this change was to make the behavior consistent. * Decimal/numeric columns are now strictly typecast by default, similar to integer and real/double precision columns. If you want the previous loose typecasting for decimal/numeric columns, use the looser_typecasting extension. * External adapters that called Database.set_adapter_scheme with a string should change to using a symbol. * Dataset#select_map, #select_order_map, and #get now raise an exception if they are passed a plain string inside an array. If you do want to use a plain string, you now need to alias it: dataset.get([Sequel.as('string', :some_alias)]) = Sequel 4 Implementation Planning * Sequel 4 implementation planning has begun. If you want to view and/or provide feedback on the implementation plan, see https://github.com/jeremyevans/sequel-4-plans ruby-sequel-4.1.1/doc/release_notes/3.47.0.txt000066400000000000000000000262571220156535500206770ustar00rootroot00000000000000= New Plugins * An auto_validations plugin has been added, which automatically adds not null, type, and unique validations based on information obtained from parsing the database schema. If you don't require customization of the validation error message per column, this can significantly DRY up validation code. Currently this plugin requires that the database support index parsing; that restriction will be removed in Sequel 4. * An input_transformer plugin has been added, for automatically running a transformation proc on all model column setter input before use. This is a generalization of the string_stripper plugin, allowing arbitrary modifications to the input. * An error_splitter plugin has been added, for splitting validation errors applying to multiple columns into a separate validation error per column. This is useful if you want to include such errors when using Errors#on to get all errors on the column. In general, only uniqueness errors apply to multiple columns, so those are the only errors likely to be affected. = Other New Features * Database.extension has been added, allowing you to load an extension into all future databases. This is similar to loading a plugin into Sequel::Model itself. For example, if you want all Database instances to use the query_literals extension, run the following before creating your Database instances: Sequel::Database.extension :query_literals * Database.after_initialize has been added for running a hook on all new databases created. * Model.default_set_fields_options has been added, allowing you to set the default options for the #set_fields and #update_fields methods. This is useful if you want to make :missing=>:raise or :missing=>:skip the default behavior. * The :setter, :adder, :remover, and :clearer association options have been added. These allow you to override the default implementation used to modify the association. :setter affects the *_to_one setter method, :adder the *_to_many add_* method, :remover the *_to_many remove_* method, and :clearer the *_to_many remove_all_* method. Previously, you had to override a private method to get the same behavior; this just offers a nicer API for that.
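  For example, in this sketch (the models and column are hypothetical, and
  the blocks roughly mirror the default behavior), the procs are
  instance_execed in the context of the current model instance, so id and
  albums_dataset refer to the artist:

    Artist.one_to_many :albums,
      :adder=>proc{|album| album.update(:artist_id=>id)},
      :remover=>proc{|album| album.update(:artist_id=>nil)},
      :clearer=>proc{albums_dataset.update(:artist_id=>nil)}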
* A :keep_reference Database option has been added. When set to false, a reference to the Database instance is not kept in Sequel::DATABASES. This is designed for Database instances created by libraries, so they don't accidentally get chosen as the default Sequel::Model database. * Model#modified! now accepts a column and marks that column as changed. This is useful if you plan on mutating the column value as opposed to reassigning it (a usage sketch appears below). * Model#modified? now accepts a column and returns whether the column has been changed. * The migrators now support an :allow_missing_migration_files option, which makes them silently ignore errors related to missing migration files. * validates_schema_types has been added to validation_helpers, which validates that the column values are instances of the expected ruby type for the given database schema type. This is a more robust version of the validates_not_string extension, and users of validates_not_string are encouraged to switch soon, as validates_not_string is going away in Sequel 4. validates_schema_type has been added to validation_class_methods, which performs the same validation, but it requires the columns be listed explicitly. validates_type in validation_helpers has been expanded to accept an array of allowable classes. Related to this is the addition of Database#schema_type_class for returning the type class(es) for the given schema type symbol. * validates_not_null has been added to the validation_helpers plugin. This is similar to the validates_presence validation, but only checks for nil values, allowing empty/blank strings. * In the caching plugin, when the :ignore_exceptions option is true, exceptions raised when deleting an object from the cache are now ignored correctly. * On PostgreSQL, Sequel now supports a :search_path Database option to automatically set the client connection search_path. This allows you to control which schemas do not require qualification, and in which order to check schemas when referencing unqualified objects. If you were using the default_schema setting, it is recommended that you switch to using :search_path instead. * The pg_array extension can now register array types on a per-Database basis via Database#register_array_type. Previously, only global registration of array types was allowed. Additionally, when registering array types on a per-Database basis, the oids can be looked up automatically, making it possible to register array types with just a type name: DB.register_array_type(:interval) * The pg_array extension now automatically creates conversion procs for array types of all named types used by the database. This means that if you use the pg_array and pg_hstore extensions, the hstore[] type is now handled correctly. * The postgres adapter now supports :use_iso_date_format and :convert_infinite_timestamps Database options. Previously, use_iso_date_format was only a global setting, and convert_infinite_timestamps could only be set after initialization. * Database#supports_schema_parsing? has been added to check if schema parsing via the Database#schema method is supported. = Other Improvements * A race condition related to prepared_sql for newly prepared statements has been fixed. * Dataset#get now works correctly if given an array with multiple columns if there were no returned rows. * The plugins that ship with Sequel now handle frozen model instances correctly. * Freezing of model instances now works correctly for models without primary keys.
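  As a usage sketch of Model#modified! and #modified? from the feature list
  above (the model and column are hypothetical):

    album = Album[1]
    album.modified?(:name)        # => false
    album.modified!(:name)        # mark name as changed before mutating it
    album.name << ' (Remastered)'
    album.save_changes            # the mutated name is now saved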
* Database constraints added with the constraint_validations plugin now handle NULL values correctly if the :allow_nil=>true setting is used. * The pagination, pretty_table, query, schema_caching, schema_dumper, and select_remove extensions can now be loaded by Database#extension. If you are loading them globally via Sequel.extension, switch to using Database#extension, since that will be required starting in Sequel 4. * The lazy_attributes plugin no longer uses the identity_map plugin internally, and eager loading lazy attributes now works correctly without an active identity map. * The many_to_one_pk_lookup plugin now handles many more corner cases, and should be safe to enable by default. * The static_cache plugin now has optimized implementations of Model.map, .to_hash, and .to_hash_groups which work without a database query. Model.count without arguments has also been optimized to not require a database query. * Fetching new records has been made faster when using the update_primary_key plugin, since it was changed to cache the primary key values lazily. * When using the update_primary_key plugin, if the primary key changes, the associations cache of all non-many_to_one associations is now cleared (since those will likely be based on the primary key). * The pg_typecast_on_load plugin no longer errors if given a column that doesn't have a matching oid conversion proc. * Handling of domain types on PostgreSQL has been significantly improved. Domain type columns now have correct model typecasting, and the pg_row extension correctly sets up conversion procs for domain types inside composite types. * Postgres::HStoreOp#- now automatically casts string input to text, so that PostgreSQL doesn't assume the string is an hstore. * Postgres::PGRangeOp#starts_before and #ends_after have been renamed to #ends_before and #starts_after. The previous names were misleading. The old names are still available for backwards compatibility, but they will be removed in Sequel 4. * The pg_row plugin now handles aliased tables correctly. * Model#validate in the validation_class_methods plugin no longer skips validate methods in superclasses or previously loaded plugins. * Loading the touch plugin into a model subclass after it has been loaded into a model superclass no longer ignores inherited touched associations. * Sequel no longer resets the conversion procs for the Database instance when using Database#extension to load a pg_* extension that adds global conversion procs. Instead, the global conversion procs are added to the instance-specific conversion procs. The result of this is that manually added conversion procs will not be lost if an extension is loaded afterward. * The jdbc adapter now references the driver class before loading subadapter specific code, which can fix issues if the database tries to connect on initialization (such as the jdbc/postgres adapter if the pg_hstore extension is loaded previously). * A guide describing Sequel's support for advanced PostgreSQL features has been added. = Backwards Compatibility * If you have already used the constraint_validations plugin to create validations with the :allow_nil=>true option, you should drop and regenerate those constraints to ensure they handle NULL values correctly. * The change to make PostgreSQL automatically handle domain types can break previous code that set up special conversions and typecasts per domain type.
  In the schema parsing, if you want to get the domain type information, it will be contained in the :db_domain_type and :domain_oid schema entries.

* Sequel::Postgres.use_iso_date_format is now only defined if you are using the postgres adapter. Previously, it could be defined when using other adapters with a pg_* extension, even though the setting had no effect in that case.

* The validation_class_methods plugin now copies validations into the subclass upon inheritance, instead of recursing into the superclass on validation. This makes it more similar to how all the other Sequel plugins work. However, it also means that if you add validations to a superclass after creating a subclass, the subclass won't have those validations. Additionally, if you skip superclass validations in a child class after creating a grandchild class, the grandchild class could still have the parent class's validations.

* The validates_unique validation in validation_helpers no longer attempts to do the uniqueness query if the underlying columns have validation errors. The reasoning behind this is that if the underlying columns are not valid, the uniqueness query can cause a DatabaseError.

* If you were passing strings in hstore format to Postgres::HStoreOp#-, you should manually cast them to hstore:

    hstore_op - Sequel.cast('a=>b', :hstore)

* The default validation error message for validates_type has been modified.

* Database#schema_column_type was made public accidentally by an adapter and a few extensions. That has been fixed, but if you were calling it with an explicit receiver and it happened to work by accident before, you'll need to update your code.

= Sequel 4 Implementation Planning

* Sequel 4 implementation work will begin shortly. All Sequel users are encouraged to read about the proposed changes and provide feedback on the implementation plan. For details, see https://github.com/jeremyevans/sequel-4-plans.

ruby-sequel-4.1.1/doc/release_notes/3.48.0.txt

= Deprecation Warnings

The main change in Sequel 3.48.0 is the deprecation of Sequel features that will be modified, moved, or removed in Sequel 4. For the reasoning behind these changes, please review the commit logs at https://github.com/jeremyevans/sequel-4-plans/commits/master

== Deprecation Logging

If you use a deprecated method or feature, Sequel will by default print a deprecation message and 10 lines of backtrace to stderr to easily allow you to figure out which code needs to be updated. You can change where the deprecation messages go and how many lines of backtrace are given using the following:

  # Log deprecation information to a file
  Sequel::Deprecation.output = File.open('deprecated.txt', 'wb')

  # Turn off all deprecation logging
  Sequel::Deprecation.output = nil

  # Use 5 lines of backtrace when logging deprecation messages
  Sequel::Deprecation.backtrace_filter = 5

  # Use all backtrace lines when logging deprecation messages
  Sequel::Deprecation.backtrace_filter = true

  # Don't include backtraces in the deprecation logging
  Sequel::Deprecation.backtrace_filter = false

  # Select which backtrace lines to output
  Sequel::Deprecation.backtrace_filter = \
    lambda{|line, line_no| line_no < 3 || line =~ /my_app/}

== Major Change

* The core extensions will no longer be loaded by default. You will have to use `Sequel.extension :core_extensions` to load the core extensions.

* The Symbol#[] and Symbol#{<,>,<=,>=} methods will no longer be provided by the core extensions on ruby 1.8.
  You will have to use `Sequel.extension :ruby18_symbol_extensions` to use them.

== Core Behavior Changes

* Dataset#filter becomes an alias for #where, and #exclude becomes an alias for #exclude_where. You will have to use `DB.extension :filter_having` to get the previous behavior. Dataset#and and #or will also only affect the WHERE clause.

* Dataset#and, #or, and #invert will not raise errors for no existing filter.

* Dataset#select_more becomes an alias for #select_append.

* Dataset#select and #from will no longer consider a hash argument as an alias specification. You will have to use `DB.extension :hash_aliases` to get the previous behavior.

* Database#dataset and Dataset.new will not take an options hash.

* Database#transaction :disconnect=>:retry option will be removed.

* Calling Dataset#add_graph_aliases before #graph or #set_graph_aliases will raise an Error.

* Datasets will have a frozen options hash by default.

* Dataset#set_overrides and #set_defaults will move to the set_overrides extension.

* Sequel.empty_array_handle_nulls will be removed. To get the empty_array_handle_nulls = false behavior, you will have to use `DB.extension :empty_array_ignore_nulls`.

* The second argument to Dataset #union, #intersect, and #except must be an options hash if it is given.

* The fourth argument to Dataset #join_table must be an options hash if it is given.

* Using a mismatched number of placeholders and arguments in a placeholder literal string will raise an error.

* Dataset#graph_each will move to the graph_each extension.

* Database#default_schema will be removed.

* Dataset#[]= will be moved to the sequel_3_dataset_methods extension.

* Dataset#insert_multiple will be moved to the sequel_3_dataset_methods extension.

* Dataset#set will be moved to the sequel_3_dataset_methods extension.

* Dataset#to_csv will be moved to the sequel_3_dataset_methods extension.

* Dataset#db= and #opts= setters will be moved to the sequel_3_dataset_methods extension.

* Dataset#qualify_to and #qualify_to_first_source will be moved to the sequel_3_dataset_methods extension.

* Remove default methods that raise Sequel::NotImplemented: Database#connect, #execute, #foreign_key_list, #indexes, #tables, and #views, and Dataset#fetch_rows.

* Sequel::SQL::Expression#to_s will be removed.

* All Dataset methods in Dataset::PUBLIC_APPEND_METHODS except for #literal, #quote_identifier, and #quote_schema_table will be removed.

* All Dataset methods in Dataset::PRIVATE_APPEND_METHODS will be removed.

* Sequel k_require, ts_require, tsk_require, and check_requiring_thread will be removed.

* Dataset.def_append_methods will be removed.

* Dataset#table_ref_append will be removed.

* Sequel.virtual_row_instance_eval accessor will be removed.

* Database#reset_schema_utility_dataset will be removed.

== Adapter Behavior Changes

* The Database#do method will be removed from the ado, db2, dbi, informix, odbc, openbase, and oracle adapters.

* The jdbc adapter will raise an error when parsing the schema for a table if it detects results for the same table name in multiple schemas.

* The Database#query method will be removed from the informix adapter.

* Dataset#lock on PostgreSQL will check the given lock mode.

* Sequel will check the client_min_messages setting before use on PostgreSQL.

* Prepared statement placeholders on PostgreSQL will no longer support implicit casting via :$x__type.
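  As a hedged sketch of the migration path (the dataset and placeholder names are illustrative), a placeholder that relied on the implicit cast can use an explicit cast instead:

    # Implicit casting (deprecated):
    ds = DB[:items].where(:price=>:$p__integer)
    # Explicit casting:
    ds = DB[:items].where(:price=>Sequel.cast(:$p, :integer))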
== Extension Behavior Changes

* The following extensions will no longer make global changes to the Database and Dataset classes: null_dataset, pagination, pretty_table, query, schema_caching, schema_dumper, select_remove, and to_dot. These will be changed to Database/Dataset specific extensions.

* The pg_auto_parameterize and pg_statement_cache extensions will be removed.

* Sequel::Dataset.introspect_all_columns will be removed from the columns_introspection extension.

* PGRangeOp#starts_before and #ends_after will be removed from the pg_range_ops extension.

== Model Behavior Changes

* Model#initialize will accept only one argument.

* The after_initialize hook will be moved to a plugin.

* Move blacklist-based security methods (#set_except, #update_except, .set_restricted_columns) to a plugin.

* The :eager_loader and :eager_grapher association option procs will always be passed a hash.

* Model string column setters will consider array and hash input to be invalid.

* Remove save taking multiple arguments for the columns to save. Add Model#save :columns option for saving specific columns.

* Don't automatically choose a reciprocal association with a condition or block.

* Don't automatically set up reciprocal associations if multiple ones match.

* Model::Errors#[] will no longer modify the receiver. If you want autovivification, use the active_model plugin.

* Model.set_primary_key will no longer accept composite keys as multiple arguments.

* The correlated_subquery eager limit strategy will be removed.

* The following Model class dataset methods will be removed: print, each_page, paginate, set, add_graph_aliases, insert_multiple, query, set_overrides, set_defaults, to_csv.

* The Model.{destroy,delete,update} class dataset methods will be moved to the scissors plugin.

* Model#pk_or_nil will be removed.

* Model#set_values will no longer be called directly by any Sequel code, and overriding it is deprecated. It will be removed in Sequel 4.1.

* Model.cache_anonymous_models accessor will move to the Sequel module.

* Model::InstanceMethods.class_attr_overridable and .class_attr_reader will be removed.

* The :one_to_one option check for one_to_many associations will be removed.

== Plugin Behavior Changes

* Public dataset methods will no longer have class methods automatically added.

* The validates_not_string validation will be removed from the validation_class_methods and validation_helpers plugins.

* In the json_serializer plugin, the to_json :root=>true option means :root=>:collection instead of :root=>:both.

* In the json_serializer plugin, the to_json :naked option will default to true, and there will be no way to add the JSON.create_id automatically.

* In the json_serializer plugin, from_json will no longer automatically delete the JSON.create_id key from the input hash.

* The #to_json and #to_xml :all_columns and :all_associations options in the json_serializer and xml_serializer plugins will be removed.

* The Model.json_create method will be removed from the json_serializer plugin.

* The validates_type validation will raise validation errors for nil if :allow_nil=>true is not used.

* auto_validate_presence_columns will be removed from the auto_validations plugin.

* The identity_map plugin will be removed.

== Internal Changes

* The sequel_core.rb and sequel_model.rb files will be removed.

* Dataset#{quote_identifiers,identifier_output_method,identifier_input_method} will assume Database implements the methods.

= Forwards Compatibility

Not all changes planned in Sequel 4 have deprecation warnings.
The following changes will be made in Sequel 4 but do not have deprecation warnings in 3.48.0:

* The threaded connection pools will default to :connection_handling=>:queue. You can manually set :connection_handling=>:stack to get the current behavior.

* Dataset#join_table will default to :qualify=>:deep. You can manually set :qualify=>:symbol to get the current behavior. This can be set at a global level by overriding Dataset#default_join_table_qualification.

* Model.raise_on_typecast_failure will default to false. Set this to true to get the current behavior of raising typecast errors in the setter methods.

* Model#save will no longer call Model#_refresh or Model#set_values internally after an insert. Manual refreshes will be treated differently than after creation refreshes in Sequel 4.

* On SQLite, integer_booleans will be true by default. Set this to false to get the current behavior of 't' for true and 'f' for false.

* On SQLite, use_timestamp_timezones will be false by default. Set this to true to get the current behavior with timezone information in timestamps.

* The default value for most option hash arguments will be an empty frozen hash. If you are overriding methods and modifying option hashes, fix your code.

* The defaults_setter plugin will work in a lazy manner instead of an eager manner. If you must have the values hash contain defaults for new objects (instead of just getting defaults from getter methods), you'll need to fork the current plugin.

* Model#set_all will allow setting the primary key columns.

* The many_to_one_pk_lookup plugin will be integrated into the default associations support.

* The association_autoreloading plugin will be integrated into the default associations support.

* Plugins will extend the class with ClassMethods before including InstanceMethods in the class.

* Dataset#get, #select_map, and #select_order_map will automatically add aliases for unaliased expressions if given a single expression.

* Database#tables and #views on PostgreSQL will check against the current schemas in the search path.

* Sequel::SQL::SQLArray alias for ValueList will be removed.

* Sequel::SQL::NoBooleanInputMethods will be removed.

* Sequel::NotImplemented will be removed.

* Sequel::Model::EMPTY_INSTANCE_VARIABLES will be removed.

* Sequel will no longer provide a default database for the adapter or integration specs.

= New Features

* You can now choose which Errors class to use on a per model basis by overriding Model#errors_class.

* The following Database methods have been added to check for support: supports_index_parsing?, supports_foreign_key_parsing?, supports_table_listing?, supports_view_listing?.

* The pg_hstore_ops extension now integrates with the pg_array, pg_hstore, and pg_array_ops extensions, allowing you to pass in arrays and hashes to be treated as PGArrays and HStores, and returning ArrayOps for PostgreSQL functions/operators that return arrays.

* Sequel.object_to_json and Sequel.json_parser_error_class have been added and all internal json usage uses them, so you can now override these methods if you want to use an alternative json library with Sequel.

* The association_proxies plugin now accepts a block allowing the user control over which methods are proxied to the dataset or the cached array of instances. You can base the decision on where to send the method using a variety of factors including the method name, the method arguments, the state of the current instance, or the related association.
  Here's an example of a simple case just depending on the name of the method:

    Model.plugin :association_proxies do |opts|
      [:find, :where, :create].include?(opts[:method])
    end

  If the block returns true, the method is sent to the dataset, otherwise it is sent to the array of associated objects.

* The auto_validations plugin now accepts a :not_null=>:presence option, for doing a presence validation instead of a not_null validation. This is useful for databases with NOT NULL constraints where you also want to disallow empty strings.

* The auto_validations plugin now validates against explicit nil values in NOT NULL columns that have defaults.

* The constraint_validations plugin now reflects validations, using Model.constraint_validation_reflections.

    Model.constraint_validation_reflections[:column]
    # => [[:presence, {}],
    #     [:max_length, {:argument=>255, :message=>'just too long'}]]

* The constraint_validations plugin can now be set to pass specific validations options to the validation_helpers plugin. This can be useful if using the auto_validations plugin with this plugin to avoid duplicate error messages for nil values:

    Model.plugin :constraint_validations,
      :validates_options=>{:presence=>{:allow_nil=>true}}

* The named_timezones extension can now be loaded as a database extension, which allows for automatic conversions of string timezones:

    DB.extension :named_timezones
    DB.timezone = 'America/Los_Angeles'

* Offsets are now emulated on Microsoft Access using a combination of reverse orders and total counts. This is slow, especially on large datasets, but probably better than having no support at all. It is also possible to use the same code to support Microsoft SQL Server 2000, but as Sequel does not support that (minimum supported version is 2005), you have to do it manually:

    Sequel.require 'adapters/utils/emulate_offset_with_reverse_and_count'
    DB.extend_datasets Sequel::EmulateOffsetWithReverseAndCount

= Other Improvements

* Dataset#clone is now faster.

* Database methods that create datasets (fetch, from, select, get) are now faster.

* Model.with_pk and .with_pk! are now faster.

* Dataset#or now just clones if given an empty argument, similar to Dataset#where.

* Sequel now correctly frees statements after using them in the ibmdb adapter. Previously, they weren't freed until GC, which could result in errors if all available handles are in use.

* Dataset creation is now faster on Microsoft SQL Server.

* The mediumint and mediumtext types are now recognized on MySQL.

* The ado adapter now handles disconnecting an already disconnected connection.

* The auto_validations plugin now works on databases that don't support index parsing. However, it will not set up automatic uniqueness validations on such databases.

* The validation_helpers plugin is now stricter in some cases when checking for nil values, using a specific nil check instead of a general falsy check.

* The table inheritance plugins now correctly handle usage of set_dataset in a subclass.

* The bin/sequel command line tool now has specs.

= Backwards Compatibility

* Sequel now uses aliases for many internal Dataset#get calls, such as those used by table_exists? and max.

* Sequel no longer uses class variables internally. Instead, instance variables of the Sequel::Database class are used.

* Sequel now sets up the identifier mangling methods on Database initialization instead of on first use.

* The private Database#adapter_initialize method has been added for per adapter configuration.
  All internal adapters have been switched to use this method instead of overriding initialize, and all external adapters should as well. This makes sure that Database instances are not added to Sequel::DATABASES until they have been completely initialized.

* Virtual row blocks no longer convert their return values to an array. Among other things, this means that having a virtual row block return a hash works as expected.

* The private Dataset#hash_key_symbol method now only takes a single argument.

* Database#constraint_validations in the constraint_validations plugin now returns raw hash rows, instead of arrays of validation method call arguments.

* Dataset#count now uses a lowercase count function in the SQL.

* Passing a non-String or Hash as the first argument to an adapter method (e.g. Sequel.postgres(1)) now raises an error. Before, this used to work on some adapters that implicitly converted the database name to a string.

* The stats and dcov rake tasks were removed.

ruby-sequel-4.1.1/doc/release_notes/3.5.0.txt

New Plugins
-----------

* A class_table_inheritance plugin has been added, supporting model inheritance in the database using a table-per-model-class approach. Each table stores only attributes unique to that model or subclass hierarchy. For example, with this hierarchy:

           Employee
          /        \
       Staff     Manager
                    |
                Executive

  the following database schema may be used (table - columns):

  * employees - id, name, kind
  * staff - id, manager_id
  * managers - id, num_staff
  * executives - id, num_managers

  The class_table_inheritance plugin assumes that the main table (e.g. employees) has a primary key field (usually autoincrementing), and all other tables have a foreign key of the same name that points to the same key in their superclass's table. For example:

  * employees.id - primary key, autoincrementing
  * staff.id - foreign key referencing employees(id)
  * managers.id - foreign key referencing employees(id)
  * executives.id - foreign key referencing managers(id)

  When using the class_table_inheritance plugin, subclasses use joined datasets:

    Employee.dataset.sql  # SELECT * FROM employees
    Manager.dataset.sql   # SELECT * FROM employees
                          # INNER JOIN managers USING (id)
    Executive.dataset.sql # SELECT * FROM employees
                          # INNER JOIN managers USING (id)
                          # INNER JOIN executives USING (id)

  This allows Executive.all to return instances with all attributes loaded. The plugin overrides deleting, inserting, and updating in the model to work with multiple tables, by handling each table individually.

  This plugin allows and encourages the use of a :key option to mark a column holding the class name. This allows methods on the superclass to return instances of specific subclasses.

    a = Employee.all # [<#Staff>, <#Manager>, <#Executive>]

  This plugin requires the lazy_attributes plugin and uses it to handle subclass specific attributes that would not be loaded when calling superclass methods (since those wouldn't join to the subclass tables). For example:

    a.first.values     # {:id=>1, :name=>'S', :kind=>'Staff'}
    a.first.manager_id # Loads the manager_id attribute from the
                       # database

  The class_table_inheritance plugin requires JOIN USING and therefore is not supported on H2 or Microsoft SQL Server, which do not support that SQL-92 feature.

* An association_dependencies plugin was added for deleting, destroying, or nullifying associated objects when destroying a model object.
  This just gives an easy way to add the necessary before and after destroy hooks. The following association types support the following dependency actions:

  * :many_to_many - :nullify (removes all related entries in join table)
  * :many_to_one - :delete, :destroy
  * :one_to_many - :delete, :destroy, :nullify (sets foreign key to NULL for all associated objects)

  This plugin works directly with the association datasets and does not use any cached association values. The :delete action will delete all associated objects from the database in a single SQL call. The :destroy action will load each associated object from the database and call the destroy method on it.

  The plugin call takes a hash of association symbol keys and dependency action symbol values. Alternatively, you can specify additional dependencies later using add_association_dependencies:

    Business.plugin :association_dependencies, :address=>:delete
    # or:
    Artist.plugin :association_dependencies
    Artist.add_association_dependencies :albums=>:destroy,
      :reviews=>:delete, :tags=>:nullify

* A force_encoding plugin was added that forces the encoding of strings used in model instances. When model instances are loaded from the database, all values in the hash that are strings are forced to the given encoding. Whenever you update a model column attribute, the resulting value is forced to a given encoding if the value is a string. There are two ways to specify the encoding. You can either do so in the plugin call itself, or via the forced_encoding class accessor:

    class Album < Sequel::Model
      plugin :force_encoding, 'UTF-8'
      # or
      plugin :force_encoding
      self.forced_encoding = 'UTF-8'
    end

  This plugin only works on ruby 1.9, since strings don't have encodings in 1.8.

* A typecast_on_load plugin was added, for fixing bad database typecasting when loading model objects. Most of Sequel's database adapters don't have complete control over typecasting, and may return columns that aren't typecast correctly (with correct being defined as how the model object would typecast the same column values). This plugin modifies Model.load to call the setter methods (which typecast by default) for all columns given. You can either specify the columns to typecast on load in the plugin call itself, or afterwards using add_typecast_on_load_columns:

    Album.plugin :typecast_on_load, :release_date, :record_date
    # or:
    Album.plugin :typecast_on_load
    Album.add_typecast_on_load_columns :release_date, :record_date

  If the database returns release_date and record_date columns as strings instead of dates, this will ensure that if you access those columns through the model object, you'll get Date objects instead of strings.

* A touch plugin was added, which adds Model#touch for updating an instance's timestamp, as well as touching associations when an instance is updated or destroyed. The Model#touch instance method saves the object with a modified timestamp. By default, it uses the :updated_at column, but you can set which column to use. It also supports touching of associations, so that when the current model object is updated or destroyed, the associated rows in the database can have their modified timestamp updated to the current timestamp. Example:

    class Album < Sequel::Model
      plugin :touch, :column=>:modified_on, :associations=>:artist
    end

* A subclasses plugin was added, for recording all of a model's subclasses and descendant classes.
  Direct subclasses are available via the subclasses method, and all descendant classes are available via the descendents method:

    c = Class.new(Sequel::Model)
    c.plugin :subclasses
    sc1 = Class.new(c)
    sc2 = Class.new(c)
    ssc1 = Class.new(sc1)

    c.subclasses    # [sc1, sc2]
    sc1.subclasses  # [ssc1]
    sc2.subclasses  # []
    ssc1.subclasses # []
    c.descendents   # [sc1, ssc1, sc2]

  The main use case for this is if you want to modify all models after the model subclasses have been created. Since mutable options are copied when subclassing, modifying parent classes does not affect current subclasses, only future ones. The subclasses plugin allows you to get all subclasses so that you can easily modify them. The plugin only records subclasses created after the plugin call, though.

* An active_model plugin was added, giving Sequel::Model an ActiveModel compliant API, in so much as it passes the ActiveModel::Lint tests.

New Extensions
--------------

* A named_timezones extension was added, allowing you to use named timezones such as "America/Los_Angeles" (the default Sequel timezone support only supports UTC or local time). This extension requires TZInfo. It also sets the Sequel.datetime_class to DateTime, so database timestamps will be returned as DateTime instances instead of Time instances. This is because ruby's Time class doesn't support timezones other than UTC and local time.

  This plugin allows you to pass either strings or TZInfo::Timezone instances to Sequel.database_timezone=, application_timezone=, and typecast_timezone=. If a string is passed, it is converted to a TZInfo::Timezone using TZInfo::Timezone.get.

  Let's say you have the database server in New York and the application server in Los Angeles. For historical reasons, data is stored in local New York time, but the application server only services clients in Los Angeles, so you want to use New York time in the database and Los Angeles time in the application. This is easily done via:

    Sequel.database_timezone = 'America/New_York'
    Sequel.application_timezone = 'America/Los_Angeles'

  Then, before timestamps are stored in the database, they are converted to New York time. When timestamps are retrieved from the database, they are converted to Los Angeles time.

* A thread_local_timezones extension was added. This allows you to set a per-thread timezone that will override the default global timezone while the thread is executing. The main use case is for web applications that execute each request in its own thread, and want to set the timezones based on the request. The most common example is having the database always store time in UTC, but have the application deal with the timezone of the current user. That can be done with:

    Sequel.database_timezone = :utc
    # In each thread:
    Sequel.thread_application_timezone = current_user.timezone

  This extension is designed to work with the named_timezones extension.

* An sql_expr extension was added that adds .sql_expr methods to all objects, giving them easy access to Sequel's DSL:

    1.sql_expr < :a     # 1 < a
    false.sql_expr & :a # FALSE AND a
    true.sql_expr | :a  # TRUE OR a
    ~nil.sql_expr       # NOT NULL
    "a".sql_expr + "b"  # 'a' || 'b'

  Proc#sql_expr uses a virtual row:

    proc{[[a, b], [a, c]]}.sql_expr | :x
    # (((a = b) AND (a = c)) OR x)

* A looser_typecasting extension was added, for using to_f and to_i instead of the more strict Kernel.Float and Kernel.Integer when typecasting floats and integers.
  To use it, you should extend the database with the Sequel::LooserTypecasting module after loading the extension:

    Sequel.extension :looser_typecasting
    DB.extend(Sequel::LooserTypecasting)

  This makes the behavior more like ActiveRecord:

    a = Artist.new(:num_albums=>'a')
    a.num_albums # => 0

Other New Features
------------------

* Associations now support composite keys. All of the :*key options now accept arrays of symbols instead of plain symbols. Example:

    Artist.primary_key # [:name, :city]
    Album.many_to_one :artist, :key=>[:artist_name, :artist_city]
    Artist.one_to_many :albums, :key=>[:artist_name, :artist_city]

  All association types are supported, including the built-in many_to_many association and the many_through_many plugin. Both methods of eager loading work with composite keys for all association types. Setter and add/remove/remove_all methods also now work with composite keys.

* Associations now respect a :validate option, which can be set to false to not validate when implicitly saving associated objects. There isn't a lot of implicit saving in Sequel's association methods, but this gives the user the control over validation when the association methods implicitly save an object.

* In addition to the regular association methods, the nested_attributes plugin was also updated to respect the :validate_association option. It was also modified to not validate associated objects twice, once when the parent object was validated and again when the associated object was saved. Additionally, if you pass :validate=>false to the save method when saving the parent object, it will no longer attempt to validate associated objects when saving them.

* Dataset#insert and #insert_sql were refactored and now support the following API:

  * No arguments - Treat as a single empty hash argument
  * Single argument:
    * Hash - Use keys as columns and values as values
    * Array - Use as values, without specifying columns
    * Dataset - Use a subselect, without specifying columns
    * LiteralString - Use as the values
  * 2 arguments:
    * Array, Array - Use first array as keys, second as values
    * Array, Dataset - Use a subselect, with the array as columns
    * Array, LiteralString - Use LiteralString as the values, with the array as the columns
  * Anything else: Treat all given values as an array of values

* Graphing now works with previously joined datasets. The main use case of this is when eagerly loading (via eager_graph) model associations for models backed by joined datasets, such as those created by the class_table_inheritance plugin.

* Sequel.virtual_row was added allowing you to easily use the VirtualRow support outside of select, order, and filter calls:

    net_benefit = Sequel.virtual_row{revenue > cost}
    good_employee = Sequel.virtual_row{num_commendations > 0}
    fire = ~net_benefit & ~good_employee
    demote = ~net_benefit & good_employee
    promote = net_benefit & good_employee
    DB[:employees].filter(fire).update(:employed=>false)
    DB[:employees].filter(demote).update(:rank=>:rank-1)
    DB[:employees].filter(promote).update(:rank=>:rank+1)

* When Sequel wraps exceptions in its own classes (to provide database independence), it now keeps the wrapped exception available in a wrapped_exception accessor. This allows you to more easily determine the wrapped exception class, without resorting to parsing the exception message.

    begin
      DB.run('...')
    rescue Sequel::DatabaseError => e
      case e.wrapped_exception
      when Mysql::Error
        ...
      when PGError
        ...
      end
    end

* The MySQL adapter now supports a Dataset#split_multiple_result_sets method that yields arrays of rows (one per result set), instead of rows. This allows you to submit multiple statements at the same time (or call a stored procedure that returns multiple result sets), and know which rows are related to which result sets. This violates a lot of Sequel's internal assumptions and should be used with care. Existing row_procs are modified to work correctly, but graphing will not work on these datasets.

* The ADO adapter now accepts a :conn_string option and uses that as the full ADO connection string. This can be used to connect to any datasource ADO supports, such as Microsoft Excel.

* The Microsoft SQL Server shared adapter now supports a Database#server_version method.

* The Microsoft SQL Server shared adapter now supports updating and deleting from joined datasets.

* The Microsoft SQL Server shared adapter now supports a Dataset#output method that uses the OUTPUT clause.

* Model#_save now calls either Model#_insert or Model#_update for inserting/updating the row in the database. This allows for easier overriding when you want to allow creating and updating model objects backed by a joined dataset.

* Dataset#graph now takes a :from_self_alias option specifying the alias to use for the subselect created if the receiver is a joined but not yet graphed dataset. It defaults to the first source table in the receiver.

Other Improvements
------------------

* Typecasting model attributes is now done before checking existing values, instead of after. Before, the code for the model attribute setters would compare the given value to the existing entry. If it didn't match, the value was typecasted and then assigned. That led to the following situation:

    a = Album[1]
    a.num_tracks # => 10
    params # => {'num_tracks'=>'10'}
    a.set(params)
    a.changed_columns # => [:num_tracks]

  The new behavior typecasts the value first, and only sets it and records the column as changed if it doesn't match the typecasted value.

* Model#modified? is now always true if the record is new. modified? indicates the instance's status relative to the database, and since a new object is not yet in the database, and saving the object would add it, the object is considered modified. A consequence of this is that Model#save_changes now always saves if the object is new. If you want to check if there were changes to columns since the object was first initialized, you should use !changed_columns.empty?, which was the historical way to handle the situation.

* The DataObjects (do) adapter now supports DataObjects 0.10.

* Dataset#select_more and Dataset#order_more no longer affect the receiver. They are supposed to just return a modified copy of the receiver instead of modifying the receiver itself. For a few versions they have been broken in that they modified the receiver in addition to returning a modified copy.

* Performance was increased for execution of prepared statements with multiple bound variables on MySQL.

* On MySQL, database errors raised when preparing statements or setting bound variable values are now caught and raised as Sequel::DatabaseErrors.

* On MySQL, more types of disconnection errors are detected.

* When altering columns in MySQL, options such as :unsigned, :elements, and :size that are given in the call are now respected.

* MySQL enum defaults are now handled correctly in the schema dumper.
* The schema dumper no longer attempts to use unparseable defaults as literals on MySQL, since MySQL does not provide defaults as valid literals.

* The emulated offset support in the shared Microsoft SQL Server adapter now works better with model classes (or any datasets with row_procs).

* Microsoft SQL Server now supports using the WITH clause in delete, update, and insert calls.

* Parsed indexes when connecting to Microsoft SQL Server via JDBC no longer include primary key indexes.

* Dataset#insert_select now returns nil if disable_insert_returning is used in the shared PostgreSQL adapter. This makes it work as expected with model object creation.

* Calling Model.set_primary_key with an array of symbols to set a composite primary key is now supported. You can also provide multiple symbol arguments to do the same thing. Before, specifying an array of symbols broke the Model.[] optimization.

* Literalization of timezones in timestamps now works correctly on Oracle.

* __FILE__ and __LINE__ are now used everywhere that eval is called with a string, which makes for better backtraces.

* The native MySQL adapter now correctly handles returning before yielding all result sets. Previously, this caused a commands out of sync error.

* Table names in common table expressions are now quoted.

* The Oracle adapter's Dataset#except now accepts a hash, giving it the same API as the default Dataset#except.

* When connecting to Microsoft SQL Server via ADO, allow Dataset#insert to take multiple arguments.

* Fractional timestamps are no longer used on ODBC.

* Schema parsing now works on MSSQL when the database is set to not quote identifiers.

* Timezone offsets are no longer used on Microsoft SQL Server, since they only work for the datetimeoffset type.

* Only 3 fractional digits in timestamps are used in Microsoft SQL Server, since an error is raised if the datetime type is used with more than that.

* The integration test suite now has guards for expected failures when run on known databases. Expected failures are marked as pending.

Backwards Compatibility
-----------------------

* Graphing to a previously joined (but not graphed) dataset now causes the receiver to be wrapped in a subselect, so if you graph a dataset to a previously joined dataset, and then filter the dataset referring to tables that were in the joined dataset (other than the first table), the SQL produced will probably no longer be valid. You should either filter the dataset before graphing or use the name of the first source of the joined dataset (which is what the subselect is aliased to) if filtering afterward.

  In certain cases, this change can cause tables to be aliased differently, so if you were graphing previously joined datasets and then filtering using the automatically generated aliases, you might need to modify your code.

* The DataObjects (do) adapter no longer supports DataObjects 0.9.x.

* The Dataset#virtual_row_block_call private instance method has been removed.

* Sequel's timezone support was significantly refactored, so if you had any custom modifications to the timezone support, they might need to be refactored as well.

* The SQL generation code was significantly refactored, so if you had any custom modifications in that area, you might need to refactor as well.
ruby-sequel-4.1.1/doc/release_notes/3.6.0.txt

New Features
------------

* Dataset#filter and related methods now accept a string with named placeholders, and a hash with placeholder values:

    ds.filter('copies_sold > :sales', :sales=>500000)

  Sequel's general support for this syntax is nicer:

    ds.filter{copies_sold > 500000}

  But named placeholder support can make it easier to port code from other database libraries. Also, it works much better than the ? placeholder support if you have a long SQL statement:

    DB['SELECT :n FROM t WHERE p > :q AND p < :r', :n=>1, :q=>2, :r=>3]

  Sequel doesn't substitute values that don't appear in the hash:

    ds.where('price < :p AND id in :ids', :p=>100)
    # WHERE (price < 100 AND id in :ids)

  This makes it easier to spot missed placeholders, and avoids issues with PostgreSQL's :: casting syntax or : inside string literals.

* The Model add_ association method now accepts a hash and creates a new associated model object associated to the receiver:

    Artist[:name=>'YJM'].add_album(:name=>'RF')

* The Model remove_ association method now accepts a primary key and removes the associated model object from the association. For models using composite primary keys, an array of primary key values can be used. Example:

    Artist[:name=>'YJM'].remove_album(1) # regular pk
    Artist[:name=>'YJM'].remove_album([2, 3]) # composite pk

* Dataset#bind was added, allowing you to bind values before calling Dataset#call. This is more consistent with Sequel's general approach where queries can be built in any order.

* The native postgres adapter now has Dataset#use_cursor, which allows you to process huge datasets without keeping all records in memory. The default number of rows per cursor fetch is 1000, but that can be modified:

    DB[:huge_table].use_cursor.each{|r| p r}
    DB[:huge_table].use_cursor(:rows_per_fetch=>10000).each{|r| p r}

  This probably won't work with prepared statements or bound variables.

* The nested_attributes plugin now adds newly created objects to the cached association array immediately, even though the changes are not persisted to the database until after the object is saved. The reasoning for this is that otherwise there is no way to access the newly created associated objects before the save, and no way to access them at all if validation fails.

  This makes the nested_attributes plugin much easier to use, since now you can just iterate over the cached association array when building the form. If validation fails, it will have the newly created failed objects in the array, so you can easily display the form as the user entered it for them to make changes.

  This change doesn't affect many_to_one associations, since those don't have a cached association array. This also does not affect updating existing records, since those are already in the cached array.

* You can now easily override the default options used in the validation_helpers plugin (the recommended validation plugin). Options can be overridden at a global level:

    Sequel::Plugins::ValidationHelpers::DEFAULT_OPTIONS[:format].
      merge!(:message=>"incorrect format", :allow_missing=>true)

  Options can also be overridden on a per-class level:

    class Album < Sequel::Model
      plugin :validation_helpers
      DEFAULT_VALIDATION_OPTIONS = {
        :format=>{:message=>"incorrect format", :allow_missing=>true}}

      private

      def default_validation_helpers_options(type)
        super.merge(DEFAULT_VALIDATION_OPTIONS[type] || {})
      end
    end

* You can now use a proc instead of a string for the validation_helpers :message option. This should allow much easier internationalization support. If a proc is given, Sequel calls it to get the format string to use. Whether the proc should take an argument depends on whether the associated validation method takes an argument before the array of columns to validate, and the argument provided is what is passed to the proc. The exception to this is the validates_not_string method, which doesn't take an argument, but does pass one to the proc (a symbol with the schema type of the column). Combined with the above default option support, full internationalization support for the validation_helpers plugin should be fairly easy.

* The nested_attributes plugin now accepts a :fields option that specifies the fields that are allowed. If specified, the plugin will use set_only instead of set when mass assigning attributes. Without this, the only way to control which fields are allowed is to set allowed/restricted attributes at a class level in the associated class.

* Associations now accept a :distinct option which uses the SQL DISTINCT clause. This can be used instead of :uniq for many_to_many and many_through_many associations to handle the uniqueness in the database instead of in ruby. It can also be useful for one_to_many associations to models that don't have primary keys.

* The caching plugin now accepts an :ignore_exceptions option that allows it to work with memcached (which raises exceptions instead of returning nil for missing records).

* Sequel now emulates JOIN USING poorly using JOIN ON for databases that don't support JOIN USING (MSSQL and H2). This isn't guaranteed to work for all queries, since USING and ON have different semantics, but should work in most cases.

* The MSSQL shared adapter now supports insert_select, for faster model object creation. If for some reason you need to disable it, you can use disable_insert_output.

* Model#modified! has been added which explicitly marks the object as modified. So even if no column values have been modified, calling save_changes/update will still run through the regular save process and call all before and after save/update hooks.

* Model#marshallable! has been added which removes unmarshallable attributes from the object. Previously, you couldn't marshal a saved model object because it contained a dataset with a singleton class. Custom _dump and _load methods could be used instead, but this approach is easier to implement.

* Dataset#literal_other now calls sql_literal on the object with the current dataset instance, if the object responds to it. This makes it easier to support the literalization of arbitrary objects. Note that if the object is a subclass of a class handled by an existing dataset literalization method, you cannot use this method. You have to override the specific Dataset#literal_* method in that case.
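  Here's a minimal sketch of that protocol (the IPAddress class is hypothetical):

    class IPAddress
      def initialize(string)
        @string = string
      end

      # Called by Dataset#literal_other with the current dataset:
      def sql_literal(dataset)
        dataset.literal(@string)
      end
    end

    DB[:hosts].where(:ip=>IPAddress.new('127.0.0.1')).sql
    # => "SELECT * FROM hosts WHERE (ip = '127.0.0.1')"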
* Model#save_changes now accepts an option hash that is passed to save:

    album.save_changes(:validate=>false)

* A bunch of Dataset#*_join methods have been added, for specific join types:

  * cross_join
  * natural_join
  * full_join
  * left_join
  * right_join
  * natural_full_join
  * natural_left_join
  * natural_right_join

  Previously, you had to use join_table(:cross, ...) to use a CROSS JOIN.

* You can now create clustered indexes on Microsoft SQL Server using the :clustered option.

* AssociationReflection#associated_object_keys has been added, specifying the keys in the associated model object that are related to this association.

* Sequel::SQL::SQLArray#to_a was added.

Other Improvements
------------------

* Constant lookup in virtual row blocks now works correctly in ruby 1.9. Virtual row blocks are based on BasicObject on ruby 1.9, which doesn't allow referencing objects in the top level scope. So the following code would cause an error on 1.9:

    DB[:bonds].filter{maturity_date > Time.now}

  Sequel now uses a Sequel::BasicObject class on 1.9 with a const_missing that looks up constants in Object, which allows the above code to work.

* Sequel no longer attempts to load associated objects when one of the key fields in the current table is NULL. This fixes the behavior when the :primary_key option for the association is used to point to a non-primary key.

  A consequence of this change is that attempting to load a *_to_many association for a new model object now returns an empty array instead of raising an exception. This has its own advantage of allowing the same association viewing code to work on both new and existing objects. Previously, you had to actively avoid calling the association method on new objects, or Sequel would raise an exception.

* Dataset aggregate methods (sum/avg/min/max/range/interval) now work correctly with limited, grouped, or compound datasets. Previously, count worked with them, but other aggregate methods did not. These methods now use a subquery if called on a limited, grouped or compound dataset.

* It is no longer required to have an existing GROUP BY clause to use a HAVING clause (except on SQLite, which doesn't permit it). Sequel has always had this limitation, but it's not required by the SQL standard, and there are valid reasons to use HAVING without GROUP BY.

* Sequel will now emulate support for databases that don't support multiple column IN/NOT IN syntax, such as MSSQL and SQLite:

    ds.filter([:col1, :col2]=>[[1, 2], [3, 4]].sql_array)
    # default: WHERE (col1, col2) IN ((1, 2), (3, 4))
    # emulated: WHERE (((col1 = 1) AND (col2 = 2)) OR
    #           ((col1 = 3) AND (col2 = 4)))

  This is necessary for eager loading associated objects for models with composite primary keys.

* Sequel now emulates :column.ilike('blah%') for case insensitive searches on MSSQL and H2. MSSQL is case insensitive by default, so it is the same as like. H2 is case sensitive, so Sequel uses a case insensitive cast there.

* The nested_attributes plugin no longer allows modification of keys related to the association. This fixes a possible security issue with the plugin, where a user could associate the nested record to a different record.
  For example:

    Artist.one_to_many :albums
    Artist.plugin :nested_attributes
    Artist.nested_attributes :albums
    artist = Artist.create
    artist2 = Artist.create
    album = Album.create
    artist.add_album(album)
    artist.albums_attributes = [{:id=>album.id, :artist_id=>artist2.id}]
    artist.save

* The one_to_many remove_* association method now makes sure that the object to be removed is currently associated to this object. Before, the method could be abused to disassociate the object from whatever object it was associated to.

* Model add_ and remove_ association methods now check that the passed object is of the correct class.

* Calling the add_* association method no longer adds the record to the cached association array if the object is already in the array. Previously, Sequel did this for reciprocal associations, but not for regular associations.

  This makes the most sense for one_to_many associations, since those can only be associated to the object once. For many_to_many associations, if you want an option to disable the behavior, please bring it up on the Sequel mailing list.

* An array with a string and placeholders that is passed to Dataset#filter is no longer modified. Previously:

    options = ["name like ?", "%dog%"]
    DB[:players].where(options)
    options # => ["%dog%"]

* Getting the most recently inserted autoincremented primary key is now optimized when connecting to MySQL via JDBC.

* Model.inherited now calls Class.inherited.

* The MSSQL shared adapter once again works on ruby 1.9. It was broken in 3.5.0 due to minor syntax issues.

* The force_encoding plugin now handles refreshing an existing object, either explicitly or implicitly when new objects are created. To use the force_encoding plugin with the identity_map plugin, the identity_map plugin should be loaded first.

* Using nil as a bound variable now works on PostgreSQL. Before, Sequel would incorrectly use "" instead of NULL, since it transformed all objects to strings before binding them. Sequel now binds the objects directly.

* The Amalgalite adapter is now significantly faster, especially for code that modifies the schema or submits arbitrary SQL statements using Database <<, run, or execute_ddl.

* Model#save_changes is now used when updating existing associated objects in the nested_attributes plugin. This should be significantly faster for the common case of submitting a complex form with nested objects without making modifications.

* You can now prepare insert statements that take multiple arguments, such as insert(1, 2, 3) and insert(columns, values).

* Dataset#group_and_count now supports aliased columns.

* Adding indexes to tables outside the default schema now works.

* Eager graphing now works better with models that use aliased tables.

* Sequel now correctly parses the column schema information for tables in a non-default schema on Microsoft SQL Server.

* changed_columns is now cleared when saving new model objects for adapters that support insert_select, such as PostgreSQL.

* Dataset#replace on MySQL now works correctly when default values are used.

* Dataset#lock on PostgreSQL now works correctly.

* Dataset#explain now works correctly on SQLite, and works using any adapter. It also works correctly on Amalgalite.

* The JDBC adapter now handles binding Time arguments correctly when using prepared statements.

* Model add_ and remove_ association methods now have more descriptive exception messages.

* Dataset#simple_select_all? now ignores options that don't affect the SQL, such as :server.
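  A hedged illustration of the new behavior (table and server names are illustrative):

    DB[:albums].simple_select_all?                    # => true
    DB[:albums].server(:read_only).simple_select_all? # => true, since
                                                      # :server doesn't change the SQL
    DB[:albums].where(:id=>1).simple_select_all?      # => false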
* Dataset#window in the PostgreSQL adapter now respects existing named windows.

* Sequel now better handles a failure to begin a new transaction.

* The dataset code was split into some additional files for improved readability.

* Many documentation improvements were made.

Backwards Compatibility
-----------------------

* Model::Errors no longer uses a default proc, but emulates one in the [] method. This is unlikely to have a negative effect unless you are calling a method on it that doesn't call [] (maybe using it in a C extension?).

* Model#table_name now only provides the alias if an aliased table is used.

* The Sequel::Dataset::STOCK_COUNT_OPTS constant has been removed.

* Dataset#lock on PostgreSQL now returns nil instead of a dataset.

ruby-sequel-4.1.1/doc/release_notes/3.7.0.txt

New Features
------------

* Sequel now has support for deleting and updating joined datasets on MySQL and PostgreSQL. Previously, Sequel only supported this to a limited extent on Microsoft SQL Server, and support there has been improved as well. This allows you to do:

    DB.create_table!(:a){Integer :a; Integer :d}
    DB.create_table!(:b){Integer :b; Integer :e}
    DB.create_table!(:c){Integer :c; Integer :f}
    # Insert some rows
    ds = DB.from(:a, :b).
      join(:c, :c=>:e.identifier).
      where(:d=>:b)
    ds.where(:f=>6).update(:a => 10)
    ds.where(:f=>5).delete

  Which will set the a column to 10 for all rows in table a, where an associated row in table c (through table b) has a value of 6 for column f. It will delete rows from table a where an associated row in table c (through table b) has a value of 5 for column f.

  Sequel assumes that the first FROM table is the table being updated/deleted. MySQL and Microsoft SQL Server do not require multiple FROM tables, but PostgreSQL does.
The other advantage of using the constants it that they handle operators and methods like other Sequel::SQL objects: NULL & SQLFALSE # BooleanExpression => "(NULL AND FALSE)" nil & false # false NULL + :a # NumericExpression => "(NULL + a)" nil + :a # raises NoMethodError NULL.sql_string + :a # StringExpression => "(NULL || a)" NULL.as(:b) # AliasedExpression => "NULL AS b" For complex systems that want to represent SQL boolean objects in ruby (where you don't know exactly how they'll be used), using the constants is recommended. In order not to be too verbose, including Sequel::SQL::Constants is recommended. It's not done by default, but you can still reference the constants under the main Sequel module by default (e.g. Sequel::NULL). * The validates_unique method in the validation_helpers plugin now supports an :only_if_modified option, which should speed up the common case where the unique attribute is not modified for an existing record. It's not on by default, since it's possible the database could be changed between retrieving the model object and updating it. * The Dataset #union, #intersect, and #except methods now accept an :alias option which is used as the alias for the returned dataset. DB[:table].union(DB[:old_table], :alias=>:table) * Model#destroy now supports a :transaction option, similar to Model#save. * The shared Oracle adapter now supports Dataset#sequence for returning autogenerated primary key values on insert from a related sequence. This makes Oracle work correctly when using models, with something like the following: class Album < Sequel::Model set_dataset dataset.sequence(:seq_albums_id) end You currently need to call Dataset#sequence in every model class where the underlying table uses a sequence to generate primary key values. Other Improvements ------------------ * In Model #save and #destroy when using transactions and when raise_on_save_failure is false, ensure that transactions are rolled back if a before hook returns false. * Dataset#group_and_count now handles arguments other than Symbols. A previous change to the method raised an exception if a Symbol was not provided. It also handles AliasedExpressions natively, so the following works correctly: DB[:table].group_and_count(:column.as(:alias)) * Sequel no longer uses native autoreconnection in the mysql adapter. Native autoreconnection has problems with prepared statements, where a new native connection is used behind Sequel's back, so Sequel thinks the prepared statement has already been defined on the connection, when it fact it hasn't. Any other changes that affect the state of the connection will be lost when native autoreconnection is used as well. Sequel's connection pool already handles reconnection if it detects a disconnection. This commit also adds an additional exception message to recognize as a disconnect. If there other exception messages related to disconnects, please post them on the Sequel mailing list. * The schema_dumper plugin now specifies the :type option for primary key if it isn't Integer. * On PostgreSQL, the bigserial type is used if :type=>Bignum is given as an option to primary key. This makes it operate more similarly to other adapters that support autoincrementing 64-bit integer primary keys. * The native mysql adapter will now attempt to load options in the [client] section of the my.cnf file. * The rake spec tasks for the project now work correctly with RSpec 1.2.9. 
Backwards Compatibility
-----------------------

* Dataset::GET_ERROR_MSG and Dataset::MAP_ERROR_MSG constants were removed. Both were replaced with Dataset::ARG_BLOCK_ERROR_MSG.

* The behavior of the Model#save_failure private instance method was modified. It now always raises an exception, and validation failures no longer call it.

* The internals of how autogenerated primary key metadata is stored when creating tables on PostgreSQL have been modified.

* The native MySQL adapter no longer sets the OPT_LOCAL_INFILE option to "client" on the native connection.
ruby-sequel-4.1.1/doc/release_notes/3.8.0.txt000066400000000000000000000134071220156535500206070ustar00rootroot00000000000000New Features
------------

* Dataset#each_server was added, allowing you to run the same query (most likely insert/update/delete) on all shards. This is useful if you have a sharded database but have lookup tables that should be identical on all shards. It works by yielding copies of the current dataset that are tied to each server/shard:

    DB[:table].filter(:id=>1).each_server do |ds|
      ds.update(:name=>'foo')
    end

* Database#each_server was added, allowing you to run schema modification methods on all shards. It works by yielding a new Sequel::Database object for each shard, that will connect to only that shard:

    DB.each_server do |db|
      db.create_table(:t){Integer :num}
    end

* You can now add and remove servers/shards from the connection pool while Sequel is running:

    DB.add_servers(:shard1=>{:host=>'s1'}, :shard2=>{:host=>'s2'})
    DB.remove_servers(:shard1, :shard2)

* When you attempt to disconnect from a server that has connections currently in use, Sequel will now schedule those connections to be disconnected when they are returned to the pool. Previously, Sequel disconnected available connections, but ignored connections currently in use, so it wasn't possible to guarantee complete disconnection from the server. Even with this new feature, you can only guarantee eventual disconnection, since disconnection of connections in use happens asynchronously.

* Database#disconnect now accepts a :servers option specifying the server(s) from which to disconnect. This should be a symbol or array of symbols representing servers/shards. Only those specified will be disconnected:

    DB.disconnect(:servers=>[:shard1, :shard2])

* A validates_type validation was added to the validation_helpers plugin. It allows you to check that a given column contains the correct type. It can be helpful if you are also using the serialization plugin to store serialized ruby objects, by making sure that the objects are of the correct type (e.g. Hash):

    def validate
      validates_type(Hash, :options)
    end

* Sequel::SQL::Expression#== is now supported for all expressions:

    :column.qualify(:table).cast(:type) == \
      :column.qualify(:table).cast(:type)
    # => true

    :column.qualify(:table).cast(:type) == \
      :other_column.qualify(:table).cast(:type)
    # => false

* When using the generic File type to create blob columns on MySQL, you can specify the specific database type by using the :size option (with :tiny, :medium, and :long values recognized):

    DB.create_table(:docs){File :body, :size=>:long} # longblob

* The mysql adapter will now default to using mysqlplus, falling back to use mysql. mysqlplus is significantly better for threaded code because queries do not block the entire interpreter.

* The JDBC adapter is now able to detect certain types of disconnect errors.
* ConnectionPool.servers and Database.servers were added, which return an array of symbols specifying the servers/shards in use.

Other Improvements
------------------

* The single-threaded connection pool now raises DatabaseConnectionErrors if unable to connect, so it now operates more similarly to the default connection pool.

* The single-threaded connection pool now operates more similarly to the default connection pool when given a nonexistent server.

* PGErrors are now correctly converted to DatabaseErrors in the postgres adapter when preparing statements or executing prepared statements.

* DatabaseDisconnectErrors are now raised correctly in the postgres adapter if the connection status is not OK after a query raises an error.

* In the mysql adapter, multiple statements in a single query should now be handled correctly in all cases, not just when using Dataset#each. So you can now submit multiple queries in a single string to Database#run.

* Model object creation on Microsoft SQL Server 2000 once again works correctly. Previously, an optimization was used that was only supported on 2005+.

* Backslashes are no longer doubled inside string literals when connecting to Microsoft SQL Server.

* The ORDER clause now correctly comes after the HAVING clause on Microsoft SQL Server.

* Sequel now checks that there is an active transaction before rolling back transactions on Microsoft SQL Server, since there are cases where Microsoft SQL Server will roll back transactions implicitly.

* Blobs are now handled correctly when connecting to H2.

* 64-bit integers are now handled correctly in JDBC prepared statements.

* In the boolean_readers plugin, correctly handle columns not in the db_schema, and don't raise an error if the model's columns can't be determined.

* In the identity_map plugin, remove instances from the cache if they are deleted or destroyed.

Backwards Compatibility
-----------------------

* Dataset::FROM_SELF_KEEP_OPTS was merged into Dataset::NON_SQL_OPTIONS. While used in different places, they were used for the same purpose, and entries missing from one should have been included in the other.

* The connection pool internals changed substantially. Now, ConnectionPool #allocated and #available_connections will return nil instead of an array or hash if they are called with a nonexistent server. These are generally only used internally, though they are part of the public API. #created_count and #size still return the size of the :default server when called with a nonexistent server, though.

* The meta_eval and metaclass private methods were removed from Sequel::MetaProgramming (only the meta_def public method remains). If you want these methods, use the metaid gem.

* The irregular ox->oxen pluralization rule was removed from the default inflections, as it screws up the more common box->boxes.
ruby-sequel-4.1.1/doc/release_notes/3.9.0.txt000066400000000000000000000217301220156535500206060ustar00rootroot00000000000000New Features
------------

* The ConnectionPool classes were refactored from 2 separate classes to a 5 class hierarchy, with one main class and 4 subclasses, one for each combination of sharding and threading.

  The primary reason for this refactoring is to make it so that the user doesn't have to pay a performance penalty for sharding if they aren't using it. A connection pool that supports sharding is automatically used if the :servers option is used when setting up the database connection.
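  For example, passing even an empty :servers hash selects a sharded pool (a minimal sketch, with a hypothetical connection URL):

    DB = Sequel.connect('postgres://host/database', :servers=>{})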
  In addition, the default connection pool no longer contains the code to schedule future disconnections of currently allocated connections. The sharded connection pool must be used if that feature is desired.

  The unsharded connection pools are about 25-30% faster than the sharded versions.

* An optimistic_locking plugin was added to Sequel::Model. This plugin implements a simple database-independent locking mechanism to ensure that concurrent updates do not override changes:

    class Person < Sequel::Model
      plugin :optimistic_locking
    end
    p1 = Person[1]
    p2 = Person[1]
    # works
    p1.update(:name=>'Jim')
    # raises Sequel::Plugins::OptimisticLocking::Error
    p2.update(:name=>'Bob')

  In order for this plugin to work, you need to make sure that the database table has a lock_version column (or other column you name via the lock_column class level accessor) that defaults to 0.

  The optimistic_locking plugin does not work with the class_table_inheritance plugin.

* Dataset#unused_table_alias was added, which takes a symbol and returns either that symbol or a new symbol which can be used as a table alias when joining a table to the dataset. The symbol returned is guaranteed to not already be used by the dataset:

    DB[:test].unused_table_alias(:blah) # => :blah
    DB[:test].unused_table_alias(:test) # => :test_0

  The use case is when you need to join a table to a dataset, where the table may already be used inside the dataset, and you want to generate a unique alias:

    ds.join(:table.as(ds.unused_table_alias(:table)), ...)

* The Sequel::ValidationFailed exception now has an errors accessor which returns the Sequel::Model::Errors instance with the validation errors. This can be helpful in situations where a generalized rescue is done where the model object reference is not available.

* bin/sequel now works without an argument, which is useful for testing SQL generation (and not much else).

* Support SELECT ... INTO in the MSSQL adapter, using Dataset#into, which takes a table argument.

* You can now provide your own connection pool class via the :pool_class option when instantiating the database.

Other Improvements
------------------

* IN/NOT IN constructs with an empty array are now handled properly.

    DB[:table].filter(:id=>[])  # IN
    DB[:table].exclude(:id=>[]) # NOT IN

  Before, the IN construct would mostly work, other than some minor differences in NULL semantics. However, the NOT IN construct would not work. Sequel now handles the NOT IN case using an expression that evaluates to true.

* If using an IN/NOT IN construct with multiple columns and a dataset argument, where multiple column IN/NOT IN support is emulated, a separate query is done to get the records, which is then handled like an array of values. This means that the following type of query now works on all tested databases:

    DB[:table1].filter([:id1, :id2]=>DB[:table2].select(:id1, :id2))

* Schemas and aliases are now handled correctly when eager graphing.

* Implicitly qualified symbols are now handled correctly in update statements, useful if you are updating a joined dataset and need to reference a column that appears in multiple tables.

* The active_model plugin has been brought up to date with activemodel 3.0 beta (though it doesn't work on edge). Additionally, the active_model plugin now requires active_model in order to use ActiveModel::Naming.

* In the schema_dumper extension, always include the varchar limit, even if it is 255 (the default).
  This makes it so that PostgreSQL will use a varchar(255) column instead of a text column when restoring a schema dump of a varchar(255) column from another database.

* You can now load adapters from outside the Sequel lib directory; they just need to be in a sequel/adapters directory somewhere in the LOAD_PATH.

* You can now load extensions from outside the Sequel lib directory using Sequel.extension. External extensions need to be in a sequel/extensions directory somewhere in the LOAD_PATH.

* Using bound variables for limit and offset in prepared statements now works correctly.

* Performance of prepared statements was improved in the native SQLite adapter.

* The schema_dumper extension now passes the options hash from dump_*_migration to Database#tables.

* In the single_table_inheritance plugin, qualify the sti_key column with the table name, so that subclass datasets can safely be joined to other tables having the same column name.

* In the single_table_inheritance plugin, handle the case where the sti_key value is nil or '' specially, so that those cases always return an instance of the main model class. This fixes issues if constantize(nil) returns Object instead of raising an exception.

* No longer use Date#to_s for literalization, always use ISO8601 format for dates.

* A couple lambdas which were instance_evaled were changed to procs for ruby 1.9.2 compatibility.

* MSSQL emulated offset support was simplified to only use one subquery, and made to work correctly on ruby 1.9.

* Emulate multiple column IN/NOT IN on H2, since it doesn't handle all cases correctly.

* ODBC timestamps are now handled correctly if the database_timezone is nil.

* ArgumentErrors raised when running queries in the ODBC adapter are now raised as DatabaseErrors.

* Attempting to use DISTINCT ON on SQLite now raises an error before sending the query to the database.

* The options hash passed to the database connection method is no longer modified. However, there may be additional options present in Database#opts that weren't specified by the options hash passed to the database connection method.

* Make Dataset#add_graph_aliases handle the case where the dataset has not yet been graphed.

* You can now provide an SQL::Identifier as a 4th argument to Dataset#join_table, and unsupported arguments are caught and an exception is raised.

* The gem specification has been moved out of the Rakefile, so that the gem can now be built without rake, and works well with gem build and bundler.

* The Rakefile no longer assumes the current directory is in the $LOAD_PATH, so it should work correctly on ruby 1.9.2.

* All internal uses of require are now thread safe.

* Empty query parameter keys in connection strings are now ignored instead of raising an exception.

* The specs were changed so that you can run them in parallel. Previously there was a race condition in the migration extension specs.

Backwards Compatibility
-----------------------

* If you plan on using sharding at any point, you now must pass a :servers option when connecting to the database, even if it is an empty hash. You can no longer just call Database#add_servers later.

* The connection_proc and disconnection_proc accessors were removed from the connection pools, so you can no longer modify the procs after the connection pool has been instantiated. You must now provide the connection_proc as the block argument when instantiating the pool, and the disconnection_proc via the :disconnection_proc option.
* In the hash passed to Dataset#update, symbol keys with a double embedded underscore are now considered as implicit qualifiers, instead of being used verbatim. If you have a column that includes a double underscore, you now need to wrap it in an SQL::Identifier or use a String instead.

* The connection pools no longer convert non-StandardError based exceptions to RuntimeErrors. Previously, all of the common adapters turned this feature off, so there is no change for most users.

* Sequel::ConnectionPool is now considered an abstract class and should not be instantiated directly. Use ConnectionPool.get_pool to return an instance of the appropriate subclass.

* The Sequel::SingleThreadedPool constant is no longer defined.

* The private Dataset#eager_unique_table_alias method was removed, use the new public Dataset#unused_table_alias method instead, which has a slightly different API.

* The private Dataset#eager_graph_qualify_order method was removed, use Dataset#qualified_expression instead.

* The private Sequel::Model class methods plugin_gem_location and plugin_gem_location_old have been removed.

* Gems built with the rake tasks now show up in the root directory instead of the pkg subdirectory, and no tarball package is created.

Other News
----------

* Sequel now has an official blog at http://sequel.heroku.com.
ruby-sequel-4.1.1/doc/release_notes/4.0.0.txt000066400000000000000000000236331220156535500206020ustar00rootroot00000000000000= Backwards Compatibility

* All behavior resulting in deprecation messages in 3.48.0 has been removed or modified. If you plan on upgrading to Sequel 4.0.0 and have not yet upgraded to 3.48.0, upgrade to 3.48.0 first, fix code that results in deprecation warnings, and then upgrade to 4.0.0.

* The threaded connection pools now default to :connection_handling=>:queue. You can manually set :connection_handling=>:stack to get the previous behavior.

* Model.raise_on_typecast_failure now defaults to false. Set this to true to get the previous behavior of raising typecast errors in the setter methods.

* Model#save no longer calls Model#_refresh or Model#set_values internally after an insert. Manual refreshes are now treated differently than after creation refreshes.

* On SQLite, integer_booleans now defaults to true. Set this to false to get the previous behavior of 't' for true and 'f' for false. Sequel will not automatically upgrade your data, users are responsible for doing that if they want to switch the integer_booleans setting. Note that regardless of the setting, Sequel will return the correct ruby values when retrieving the rows.

  Example Code to Migrate Existing Data:

    DB[:table].where(:column=>'t').update(:column=>1)
    DB[:table].where(:column=>'f').update(:column=>0)

* On SQLite, use_timestamp_timezones is now false by default. Set this to true to get the previous behavior with timezone information in timestamps. Sequel will not automatically upgrade your data, users are responsible for doing that if they want to switch the use_timestamp_timezones setting. Note that regardless of the setting, Sequel will return the correct ruby values when retrieving the rows.

* Using window functions when eagerly loading associations with limits or offsets is now done automatically if the database supports it. Previously, this had to be enabled manually. If you would like to disable this optimization and just do the slicing in ruby, set default_eager_limit_strategy = nil.

* The default value for most option hash arguments is now a shared empty frozen hash.
  If you are overriding methods and modifying option hashes, fix your code.

* The defaults_setter plugin now works in a lazy manner instead of an eager manner. So calling the related method returns the default value if there is no value stored, but Sequel does not add the default values to the internal values hash, and will not attempt to insert what it thinks is the default value when saving the new object.

* Model#set_all and #update_all now allow setting the primary key columns.

* The many_to_one_pk_lookup and association_autoreloading plugins are now integrated into the default associations support.

* Plugins now extend the class with ClassMethods before including InstanceMethods in the class.

* Dataset#get, #select_map, and #select_order_map now automatically add aliases for unaliased expressions if given a single expression.

* Database#tables and #views on PostgreSQL now check against the current schemas in the search path.

* Calling ungraphed on an eager_graph dataset will restore the row_proc for that dataset. This is not backwards compatible if your method chain does:

    dataset.eager_graph.naked.ungraphed

  Switch such code to:

    dataset.eager_graph.ungraphed.naked

* The Model#set_restricted and #update_restricted private methods have a slightly different API now.

* Sequel::SQL::SQLArray alias for ValueList has been removed.

* Sequel::SQL::NoBooleanInputMethods has been removed.

* Sequel::NotImplemented has been removed. Default implementations of methods that used to raise this exception have been removed.

* Sequel::Model::EMPTY_INSTANCE_VARIABLES has been removed.

* The Sequel::Postgres::DatabaseMethods::EXCLUDE_SCHEMAS and SYSTEM_TABLE_REGEXP constants have been removed.

* Dataset#columns_without_introspection has been removed from the columns_introspection extension.

* Sequel no longer provides a default database for the adapter or integration specs. Additionally, if you are using spec_config.rb to configure a database to use when adapter/integration testing, you may need to modify it, as Sequel now uses the DB constant for the database being tested.

* The SEQUEL_MSSQL_SPEC_REQUIRE and SEQUEL_DB2_SPEC_REQUIRE environment variables are no longer respected when adapter/integration testing those databases. Use RUBYOPT with the -r flag.

* In the 3.48.0 release notes, it was announced that Dataset#join_table would default to :qualify=>:deep in 4.0.0. This change was made but reverted before the release of 4.0.0 as it was determined too likely to break existing code, there was no deprecation warning (since it just changed a setting), and the benefit was minimal. You can make deep qualification the default by overriding Dataset#default_join_table_qualification.

= New Features

* A pg_array_associations plugin has been added, for creating an association based on a PostgreSQL array column containing foreign keys. Example:

    # Database schema:
    #   tags                albums
    #   :id (int4) <--\     :id
    #   :name          \--  :tag_ids (int4[])
    #                       :name

    class Album
      plugin :pg_array_associations
      pg_array_to_many :tags
    end
    class Tag
      plugin :pg_array_associations
      many_to_pg_array :albums
    end

  This operates similarly to a many_to_many association, but does not require a join table. All of the usual Sequel association features are supported, such as adding, removing, and clearing associations, eager loading via eager and eager_graph, filtering by associations, and dataset associations.
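  For example, given the Album and Tag classes above, the usual association methods apply (a sketch; the specific rows are hypothetical):

    album = Album[1]
    album.tags            # Tags whose ids are in album.tag_ids
    album.add_tag(Tag[2]) # appends 2 to album.tag_ids
    album.remove_all_tags # clears album.tag_ids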
  Note that until PostgreSQL gains the ability to enforce foreign key constraints in array columns, this plugin is not recommended for production use unless you plan on emulating referential integrity constraints via triggers.

* Dataset#from now accepts virtual_row blocks, making it easy to use with table returning functions:

    DB.from{table_returning_function(arg)}

* Sequel.deep_qualify has been added, for easily doing a deep qualification of objects:

    Sequel.deep_qualify(:table, Sequel.+(:column, 1))
    # ("table"."column" + 1)
    Sequel.deep_qualify(:table, Sequel.like(:a, 'b'))
    # ("table"."a" LIKE 'b' ESCAPE '\')

* The prepared_statements_associations plugin now handles one_to_one associations.

* SQL::Subscript objects now handle ruby range arguments, operating as an SQL array slice:

    Sequel.subscript(:a, 1..2) # a[1:2]

* Database#create_view now accepts a :columns option to provide explicit column names for the view.

* Postgres::ArrayOp#[] now returns an ArrayOp if given a range, since a PostgreSQL array slice can be treated as an array.

* Postgres::ArrayOp#hstore has been added for creating hstores from PostgreSQL arrays.

* When creating full text indexes on PostgreSQL, the :index_type=>:gist option can be used to use a gist index instead of the default gin index. This can be useful if insert/update speed is more important than lookup speed.

* You can now provide the :owner option to Database#create_schema on PostgreSQL to specify the owner of the schema.

* You can now provide the :if_exists option to Database#drop_view on PostgreSQL to not raise an error if the view doesn't exist.

* The pg_json extension now handles non-JSON plain strings, integers and floats in PostgreSQL JSON columns.

= Support for New Features in PostgreSQL 9.3

* A pg_json_ops extension has been added to support the new json operators and functions.

* Postgres::ArrayOp#replace and #remove have been added for using the array_replace and array_remove functions.

* You can now provide the :if_not_exists option when using Database#create_schema on PostgreSQL to not raise an error if the schema already exists.

* Database#create_view now supports a :recursive option on PostgreSQL for creating recursive views.

* Database#create_view and #drop_view now support a :materialized option on PostgreSQL for creating/dropping materialized views.

* Database#refresh_view has been added on PostgreSQL for refreshing materialized views.

= Other Improvements

* Check constraints are now always surrounded by parentheses, since that is required by the SQL standard. This fixes issues in the cases where parentheses were not used automatically, such as when a function call was used.

* Using an offset without a limit when eager loading now works correctly.

* The prepared_statements_associations plugin now works correctly when the associated class uses a filtered dataset.

* The prepared_statements_associations plugin can now use a prepared statement for cases where the association uses :conditions.

* Boolean prepared statement arguments now work correctly in the sqlite adapter when the integer_booleans setting is true.

* Dataset#inspect on prepared statements now handles anonymous dataset classes correctly.

* When dataset string/blob literalization depends on having a database connection and the dataset has an assigned server, a connection to the assigned server is used.

* More disconnect errors are now handled when using the postgres adapter with the postgres-pr driver, and in the jdbc/oracle adapter.
* Composite primary keys are now parsed correctly on SQLite 3.7.16+.

* Blobs are now hex escaped on MySQL, which can solve some encoding issues when blobs are used as literals in the same SQL query with UTF-8 strings.

* BigDecimal instances are now formatted more nicely in the pretty_table extension.

* Sequel now raises an exception when attempting to literalize infinite and NaN floats on MySQL. In general, this would result in MySQL raising an error, but in extreme cases it could have failed silently.

* You can now use a NO_SEQUEL_PG environment variable to not automatically require sequel_pg in the postgres adapter.

* Dataset#unbind now always uses symbol keys in the bind variable hash.
ruby-sequel-4.1.1/doc/release_notes/4.1.0.txt000066400000000000000000000061751220156535500206050ustar00rootroot00000000000000= New Features

* Database#run and #<< now accept SQL::PlaceholderLiteralString objects, allowing you to more easily run arbitrary DDL queries with placeholders:

    DB.run Sequel.lit("CREATE TABLE ? (? integer)", :table, :column)

* You can now provide options for check constraints by calling the constraint/add_constraint methods with a hash as the first argument. On PostgreSQL, you can now use the :not_valid option for check constraints, so they are enforced for inserts and updates, but not for existing rows.

    DB.create_table(:table) do
      ...
      constraint({:name=>:constraint_name, :not_valid=>true}) do
        column_name > 10
      end
    end

* Dataset#stream has been added to the mysql2 adapter, and will have the dataset stream results if used with mysql2 0.3.12+. This allows you to process large datasets without keeping the entire dataset in memory.

    DB[:large_table].stream.each{|r| ...}

* Database#error_info has been added to the postgres adapter. It is supported on PostgreSQL 9.3+ if pg-0.16.0+ is used as the underlying driver, and it gives you a hash of metadata related to the exception:

    DB[:table_name].insert(1) rescue DB.error_info($!)
    # => {:schema=>"public", :table=>"table_name", :column=>nil,
    #     :constraint=>"constraint_name", :type=>nil}

* The :deferrable option is now supported when adding exclusion constraints on PostgreSQL, to allow setting up deferred exclusion constraints.

* The :inherits option is now supported in Database#create_table on PostgreSQL, for table inheritance:

    DB.create_table(:t1, :inherits=>:t0){}
    # CREATE TABLE t1 () INHERITS (t0)

* Dataset#replace and #multi_replace are now supported on SQLite, just as they have been previously on MySQL.

* In the jdbc adapter, Java::JavaUtil::HashMap objects are now converted to ruby Hash objects. This is to make it easier to handle the PostgreSQL hstore type when using the jdbc/postgres adapter.

* The odbc adapter now supports a :drvconnect option that accepts an ODBC connection string that is passed to ruby-odbc verbatim.

= Other Improvements

* The prepared_statements plugin no longer breaks the instance_filters and update_primary_key plugins.

* Dropping indexes for tables in a specific schema is now supported on PostgreSQL. Sequel now explicitly specifies the same schema as the table when dropping such indexes.

* Calling Model#add_association methods with a primary key value now raises a Sequel::NoMatchingRow if there is no object in the associated table with that primary key. Previously, this situation was not handled and resulted in a NoMethodError being raised later.

* When an invalid virtual row block function call is detected, an error is now properly raised. Previously, the error was not raised until the SQL was produced for the query.
= Backwards Compatibility

* The :driver option to the odbc adapter is deprecated and will be removed in a future version. It is thought to be broken, and users wanting to use DSN-less connections should use the new :drvconnect option.

* The Postgres::ArrayOp#text_op private method has been removed.
ruby-sequel-4.1.1/doc/schema_modification.rdoc000066400000000000000000000523541220156535500214110ustar00rootroot00000000000000= Schema modification methods

Here's a brief description of the most common schema modification methods:

== +create_table+

+create_table+ is the most common schema modification method, and it's used for adding new tables to the database. You provide it with the name of the table as a symbol, as well as a block:

  create_table(:artists) do
    primary_key :id
    String :name
  end

Note that if you want a primary key for the table, you need to specify it; Sequel does not create one by default.

=== Column types

Most method calls inside the create_table block will create columns, since +method_missing+ calls +column+. Columns are generally created by specifying the column type as the method name, followed by the column name symbol to use, and after that any options that should be used. If the method is a ruby class name that Sequel recognizes, Sequel will transform it into the appropriate type for the given database. So while you specified +String+, Sequel will actually use +varchar+ or +text+ depending on the underlying database. Here's a list of all of the ruby classes that Sequel will convert to database types:

  create_table(:columns_types) do       # common database type used
    Integer :a0                         # integer
    String :a1                          # varchar(255)
    String :a2, :size=>50               # varchar(50)
    String :a3, :fixed=>true            # char(255)
    String :a4, :fixed=>true, :size=>50 # char(50)
    String :a5, :text=>true             # text
    File :b                             # blob
    Fixnum :c                           # integer
    Bignum :d                           # bigint
    Float :e                            # double precision
    BigDecimal :f                       # numeric
    BigDecimal :f2, :size=>10           # numeric(10)
    BigDecimal :f3, :size=>[10, 2]      # numeric(10, 2)
    Date :g                             # date
    DateTime :h                         # timestamp
    Time :i                             # timestamp
    Time :i2, :only_time=>true          # time
    Numeric :j                          # numeric
    TrueClass :k                        # boolean
    FalseClass :l                       # boolean
  end

Note that in addition to the ruby class name, Sequel also pays attention to the column options when determining which database type to use. Also note that for boolean columns, you can use either TrueClass or FalseClass, they are treated the same way (ruby doesn't have a Boolean class).

Also note that this conversion is only done if you use a supported ruby class name. In all other cases, Sequel uses the type specified verbatim:

  create_table(:columns_types) do # database type used
    string :a1                    # string
    datetime :a2                  # datetime
    blob :a3                      # blob
    inet :a4                      # inet
  end

In addition to specifying the types as methods, you can use the +column+ method and specify the types as the second argument, either as ruby classes, symbols, or strings:

  create_table(:columns_types) do # database type used
    column :a1, :string           # string
    column :a2, String            # varchar(255)
    column :a3, 'string'          # string
    column :a4, :datetime         # datetime
    column :a5, DateTime          # timestamp
    column :a6, 'timestamp(6)'    # timestamp(6)
  end

=== Column options

When using the type name as a method, the third argument is an options hash, and when using the +column+ method, the fourth argument is the options hash. The following options are supported:

:default :: The default value for the column.
:index :: Create an index on this column. If given a hash, use the hash as the options for the index.
:null :: Mark the column as allowing NULL values (if true), or not allowing NULL values (if false). If unspecified, will default to whatever the database default is.
:primary_key :: Mark this column as the primary key. This is used instead of the primary key method if you want a non-autoincrementing primary key.
:primary_key_constraint_name :: The name to give the primary key constraint.
:type :: Overrides the type given as the method name or a separate argument. Not usually used by +column+ itself, but often by other methods such as +primary_key+ or +foreign_key+.
:unique :: Mark the column as unique, generally has the same effect as creating a unique index on the column.
:unique_constraint_name :: The name to give the unique key constraint.

=== Other methods

In addition to the +column+ method and other methods that create columns, there are other methods that can be used:

==== +primary_key+

You've seen this one used already. It's used to create an autoincrementing integer primary key column.

  create_table(:a0){primary_key :id}

If you want an autoincrementing 64-bit integer:

  create_table(:a0){primary_key :id, :type=>Bignum}

If you want to create a primary key column that doesn't use an autoincrementing integer, you should not use this method. Instead, you should use the :primary_key option to the +column+ method or type method:

  create_table(:a1){Integer :id, :primary_key=>true} # Non autoincrementing integer primary key
  create_table(:a2){String :name, :primary_key=>true} # varchar(255) primary key

If you want to create a composite primary key, you should call the +primary_key+ method with an array of column symbols. You can provide a specific name to use for the primary key constraint via the :name option:

  create_table(:items) do
    Integer :group_id
    Integer :position
    primary_key [:group_id, :position], :name=>:items_pk
  end

If provided with an array, +primary_key+ does not create a column, it just sets up the primary key constraint.

==== +foreign_key+

+foreign_key+ is used to create a foreign key column that references a column in another table (or the same table). It takes the column name as the first argument, the table it references as the second argument, and an options hash as its third argument. A simple example is:

  create_table(:albums) do
    primary_key :id
    foreign_key :artist_id, :artists
    String :name
  end

+foreign_key+ accepts the same options as +column+. For example, to have a unique foreign key with varchar(16) type:

  foreign_key :column_name, :unique=>true, :type=>'varchar(16)'

+foreign_key+ also accepts some specific options:

:deferrable :: Makes the foreign key constraint checks deferrable, so they aren't checked until the end of the transaction.
:foreign_key_constraint_name :: The name to give the foreign key constraint.
:key :: The column in the associated table that this column references. Unnecessary if this column references the primary key of the associated table, at least on most databases.
:on_delete :: Specify the behavior of this foreign key column when the row with the primary key it references is deleted. Can be :restrict, :cascade, :set_null, or :set_default. You can also use a string, which is used literally.
:on_update :: Specify the behavior of this foreign key column when the row with the primary key it references modifies the value of the primary key. Takes the same options as :on_delete.
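For example, to create a foreign key column whose rows are removed when the referenced artist row is deleted (a sketch reusing the tables from the examples above):

  create_table(:albums) do
    primary_key :id
    foreign_key :artist_id, :artists, :on_delete=>:cascade
    String :name
  end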
Like +primary_key+, if you provide +foreign_key+ with an array of symbols, it will not create a column, but create a foreign key constraint:

  create_table(:artists) do
    String :name
    String :location
    primary_key [:name, :location]
  end
  create_table(:albums) do
    String :artist_name
    String :artist_location
    String :name
    foreign_key [:artist_name, :artist_location], :artists
  end

When using an array of symbols, you can also provide a :name option to name the constraint:

  create_table(:albums) do
    String :artist_name
    String :artist_location
    String :name
    foreign_key [:artist_name, :artist_location], :artists, :name=>'albums_artist_name_location_fkey'
  end

If you want to add a foreign key for a single column with a named constraint, you must use the array form with a single symbol:

  create_table(:albums) do
    primary_key :id
    Integer :artist_id
    String :name
    foreign_key [:artist_id], :artists, :name=>'albums_artist_id_fkey'
  end

==== +index+

+index+ creates indexes on the table. For single columns, calling index is the same as using the :index option when creating the column:

  create_table(:a){Integer :id, :index=>true}
  # Same as:
  create_table(:a) do
    Integer :id
    index :id
  end

  create_table(:a){Integer :id, :index=>{:unique=>true}}
  # Same as:
  create_table(:a) do
    Integer :id
    index :id, :unique=>true
  end

Similar to the +primary_key+ and +foreign_key+ methods, calling +index+ with an array of symbols will create a multiple column index:

  create_table(:albums) do
    primary_key :id
    foreign_key :artist_id, :artists
    Integer :position
    index [:artist_id, :position]
  end

The +index+ method also accepts some options:

:name :: The name of the index (generated based on the table and column names if not provided).
:type :: The type of index to use (only supported by some databases)
:unique :: Make the index unique, so duplicate values are not allowed.
:where :: Create a partial index (only supported by some databases)

==== +unique+

The +unique+ method creates a unique constraint on the table. A unique constraint generally operates identically to a unique index, so the following three +create_table+ blocks are pretty much identical:

  create_table(:a){Integer :a, :unique=>true}

  create_table(:a) do
    Integer :a
    index :a, :unique=>true
  end

  create_table(:a) do
    Integer :a
    unique :a
  end

Just like +index+, +unique+ can set up a multiple column unique constraint, where the combination of the columns must be unique:

  create_table(:a) do
    Integer :a
    Integer :b
    unique [:a, :b]
  end

==== +full_text_index+ and +spatial_index+

Both of these create specialized index types supported by some databases. They both take the same options as +index+.

==== +constraint+

+constraint+ creates a named table constraint:

  create_table(:artists) do
    primary_key :id
    String :name
    constraint(:name_min_length){char_length(name) > 2}
  end

Instead of using a block, you can use arguments that will be handled similarly to Dataset#where:

  create_table(:artists) do
    primary_key :id
    String :name
    constraint(:name_length_range, Sequel.function(:char_length, :name)=>3..50)
  end

==== +check+

+check+ operates just like +constraint+, except that it doesn't take a name and it creates an unnamed constraint:

  create_table(:artists) do
    primary_key :id
    String :name
    check{char_length(name) > 2}
  end

It's recommended that you use the +constraint+ method and provide a name for the constraint, as that makes it easier to drop the constraint later if necessary.
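For example, dropping the named constraint created above is a one-liner with +drop_constraint+ (covered under +alter_table+ below):

  alter_table(:artists) do
    drop_constraint(:name_min_length)
  end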
== +create_join_table+

+create_join_table+ is a shortcut that you can use to create simple many-to-many join tables:

  create_join_table(:artist_id=>:artists, :album_id=>:albums)

which expands to:

  create_table(:albums_artists) do
    foreign_key :album_id, :albums, :null=>false
    foreign_key :artist_id, :artists, :null=>false
    primary_key [:album_id, :artist_id]
    index [:artist_id, :album_id]
  end

== create_table :as=>

To create a table from the result of a SELECT query, instead of passing a block to +create_table+, provide a dataset to the :as option:

  create_table(:older_items, :as=>DB[:items].where{updated_at < Date.today << 6})

== +alter_table+

+alter_table+ is used to alter existing tables, changing their columns, indexes, or constraints. It is used just like +create_table+, accepting a block which is instance_evaled, and providing its own methods:

=== +add_column+

One of the most common methods, +add_column+ is used to add a column to the table. Its API is similar to that of +create_table+'s +column+ method, where the first argument is the column name, the second is the type, and the third is an options hash:

  alter_table(:albums) do
    add_column :copies_sold, Integer, :default=>0
  end

=== +drop_column+

As you may expect, +drop_column+ takes a column name and drops the column. It's often used in the +down+ block of a migration to drop a column added in an +up+ block:

  alter_table(:albums) do
    drop_column :copies_sold
  end

=== +rename_column+

+rename_column+ is used to rename a column. It takes the old column name as the first argument, and the new column name as the second argument:

  alter_table(:albums) do
    rename_column :copies_sold, :total_sales
  end

=== +add_primary_key+

If you forgot to include a primary key on the table, and want to add one later, you can use +add_primary_key+. A common use of this is to make many_to_many association join tables into real models:

  alter_table(:albums_artists) do
    add_primary_key :id
  end

Just like +create_table+'s +primary_key+ method, if you provide an array of symbols, Sequel will not add a column, but will add a composite primary key constraint:

  alter_table(:albums_artists) do
    add_primary_key [:album_id, :artist_id]
  end

If you just want to take an existing single column and make it a primary key, call +add_primary_key+ with an array with a single symbol:

  alter_table(:artists) do
    add_primary_key [:id]
  end

=== +add_foreign_key+

+add_foreign_key+ can be used to add a new foreign key column or constraint to a table. Like +add_primary_key+, if you provide it with a symbol as the first argument, it creates a new column:

  alter_table(:albums) do
    add_foreign_key :artist_id, :artists
  end

If you want to add a new foreign key constraint to an existing column, you provide an array with a single element. It's encouraged to provide a name when adding the constraint, via the :name option:

  alter_table(:albums) do
    add_foreign_key [:artist_id], :artists, :name=>:albums_artist_id_fkey
  end

To set up a multiple column foreign key constraint, use an array with multiple column symbols:

  alter_table(:albums) do
    add_foreign_key [:artist_name, :artist_location], :artists, :name=>:albums_artist_name_location_fkey
  end

=== +drop_foreign_key+

+drop_foreign_key+ is used to drop foreign keys from tables. If you provide a symbol as the first argument, it drops both the foreign key constraint and the column:

  alter_table(:albums) do
    drop_foreign_key :artist_id
  end

If you want to just drop the foreign key constraint without dropping the column, use an array.
It's encouraged to use the :name option to provide the constraint name to drop, though on some databases Sequel may be able to find the name through introspection:

  alter_table(:albums) do
    drop_foreign_key [:artist_id], :name=>:albums_artist_id_fkey
  end

An array is also used to drop a composite foreign key constraint:

  alter_table(:albums) do
    drop_foreign_key [:artist_name, :artist_location], :name=>:albums_artist_name_location_fkey
  end

If you do not provide a :name option and Sequel is not able to determine the name to use, it will probably raise a Sequel::Error exception.

=== +add_index+

+add_index+ works just like +create_table+'s +index+ method, creating a new index on the table:

  alter_table(:albums) do
    add_index :artist_id
  end

It accepts the same options as +create_table+'s +index+ method, and you can set up a multiple column index using an array:

  alter_table(:albums_artists) do
    add_index [:album_id, :artist_id], :unique=>true
  end

=== +drop_index+

As you may expect, +drop_index+ drops an existing index:

  alter_table(:albums) do
    drop_index :artist_id
  end

Just like +drop_column+, it is often used in the +down+ block of a migration.

To drop an index with a specific name, use the :name option:

  alter_table(:albums) do
    drop_index :artist_id, :name=>:artists_id_index
  end

=== +add_full_text_index+, +add_spatial_index+

Corresponding to +create_table+'s +full_text_index+ and +spatial_index+ methods, these two methods create new indexes on the table.

=== +add_constraint+

This adds a named constraint to the table, similar to +create_table+'s +constraint+ method:

  alter_table(:albums) do
    add_constraint(:name_min_length){char_length(name) > 2}
  end

There is no method to add an unnamed constraint, but you can pass nil as the first argument of +add_constraint+ to do so. However, it's not recommended to do that as it is difficult to drop such a constraint.

=== +add_unique_constraint+

This adds a unique constraint to the table, similar to +create_table+'s +unique+ method. This usually has the same effect as adding a unique index.

  alter_table(:albums) do
    add_unique_constraint [:artist_id, :name]
  end

=== +drop_constraint+

This method drops an existing named constraint:

  alter_table(:albums) do
    drop_constraint(:name_min_length)
  end

There is no database independent method to drop an unnamed constraint. Generally, the database will give it a name automatically, and you will have to figure out what it is. For that reason, you should not add unnamed constraints that you might ever need to remove.

On some databases, you must specify the type of constraint via a :type option:

  alter_table(:albums) do
    drop_constraint(:albums_pk, :type=>:primary_key)
    drop_constraint(:albums_fk, :type=>:foreign_key)
    drop_constraint(:albums_uk, :type=>:unique)
  end

=== +set_column_default+

This modifies the default value of a column:

  alter_table(:albums) do
    set_column_default :copies_sold, 0
  end

=== +set_column_type+

This modifies a column's type. Most databases will attempt to convert existing values in the columns to the new type:

  alter_table(:albums) do
    set_column_type :copies_sold, Bignum
  end

You can specify the type as a string or symbol, in which case it is used verbatim, or as a supported ruby class, in which case it gets converted to an appropriate database type.
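For example, a string type is passed through verbatim (a sketch; whether existing values can be converted is database-specific):

  alter_table(:albums) do
    set_column_type :copies_sold, 'numeric(10, 2)'
  end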
=== +set_column_allow_null+

This allows you to set the column as allowing NULL values:

  alter_table(:albums) do
    set_column_allow_null :artist_id
  end

=== +set_column_not_null+

This allows you to set the column as not allowing NULL values:

  alter_table(:albums) do
    set_column_not_null :artist_id
  end

== Other +Database+ schema modification methods

Sequel::Database has many schema modification instance methods, most of which are shortcuts to the same methods in +alter_table+. The following +Database+ instance methods just call +alter_table+ with a block that calls the method with the same name inside the +alter_table+ block with all arguments after the first argument (which is used as the table name):

* +add_column+
* +drop_column+
* +rename_column+
* +add_index+
* +drop_index+
* +set_column_default+
* +set_column_type+

For example, the following two method calls do the same thing:

  alter_table(:artists){add_column :copies_sold, Integer}
  add_column :artists, :copies_sold, Integer

There are some other schema modification methods that have no +alter_table+ counterpart:

=== +drop_table+

+drop_table+ takes multiple arguments and treats all arguments as a table name to drop:

  drop_table(:albums_artists, :albums, :artists)

Note that when dropping tables, you may need to drop them in a specific order if you are using foreign keys and the database is enforcing referential integrity. In general, you need to drop the tables containing the foreign keys before the tables containing the primary keys they reference.

=== drop_table?

drop_table? is similar to drop_table, except that it only drops the table if the table already exists. On some databases, it uses IF EXISTS, on others it does a separate query to check for existence. This should not be used inside migrations, as if the table does not exist, it may mess up the migration.

=== +rename_table+

You can rename an existing table using +rename_table+. Like +rename_column+, the first argument is the current name, and the second is the new name:

  rename_table(:artist, :artists)

=== create_table!

create_table! drops the table if it exists before attempting to create it, so:

  create_table!(:artists) do
    primary_key :id
  end

is the same as:

  drop_table?(:artists)
  create_table(:artists) do
    primary_key :id
  end

It should not be used inside migrations, as if the table does not exist, it may mess up the migration.

=== create_table?

create_table? only creates the table if it does not already exist, so:

  create_table?(:artists) do
    primary_key :id
  end

is the same as:

  unless table_exists?(:artists)
    create_table(:artists) do
      primary_key :id
    end
  end

Like create_table!, it should not be used inside migrations.

=== +create_view+ and +create_or_replace_view+

These can be used to create views. The difference between them is that +create_or_replace_view+ will unconditionally replace an existing view of the same name, while +create_view+ will probably raise an error. Both methods take the name as the first argument, and either a string or a dataset as the second argument:

  create_view(:gold_albums, DB[:albums].where{copies_sold > 500000})
  create_or_replace_view(:gold_albums, "SELECT * FROM albums WHERE copies_sold > 500000")

=== +drop_view+

+drop_view+ drops existing views.
Just like +drop_table+, it can accept multiple arguments:

  drop_view(:gold_albums, :platinum_albums)
ruby-sequel-4.1.1/doc/security.rdoc000066400000000000000000000331361220156535500172710ustar00rootroot00000000000000= Security Considerations with Sequel

When using Sequel, there are some security areas you should be aware of:

* Code Execution
* SQL Injection
* Denial of Service
* Mass Assignment
* General Parameter Handling

== Code Execution

The most serious security vulnerability you can have in any library is a code execution vulnerability. Sequel should not be vulnerable to this, as it never calls eval on a string that is derived from user input. However, some Sequel methods used for creating methods via metaprogramming could conceivably be abused to do so:

* Sequel::Schema::CreateTableGenerator.add_type_method
* Sequel::Dataset.def_mutation_method
* Sequel::Model::Plugins.def_dataset_methods
* Sequel.def_adapter_method (private)
* Sequel::SQL::Expression.to_s_method (private)
* Sequel::Plugins::HookClassMethods::ClassMethods#add_hook_type

As long as you don't call those with user input, you should not be vulnerable to code execution.

== SQL Injection

The primary security concern in SQL database libraries is SQL injection. Because Sequel promotes using ruby objects for SQL concepts instead of raw SQL, it is less likely to be vulnerable to SQL injection. However, because Sequel still makes it easy to use raw SQL, misuse of the library can result in SQL injection in your application.

There are basically two kinds of possible SQL injections in Sequel:

* SQL code injections
* SQL identifier injections

=== SQL Code Injections

==== Full SQL Strings

Some Sequel methods are designed to execute raw SQL, including:

* Sequel::Database#execute
* Sequel::Database#run
* Sequel::Database#<<
* Sequel::Database#[]
* Sequel::Database#fetch
* Sequel::Dataset#with_sql

Here are some examples of use:

  DB.run 'SQL'
  DB << 'SQL'
  DB.execute 'SQL'
  DB['SQL'].all
  DB.fetch('SQL').all
  DB.dataset.with_sql('SQL').all

If you pass a string to these methods that is derived from user input, you open yourself up to SQL injection. The Sequel::Database#run, Sequel::Database#<<, and Sequel::Database#execute methods are not designed to work at all with user input. If you must use them with user input, you should escape the user input manually via Sequel::Database#literal. Example:

  DB.run "SOME SQL #{DB.literal(params[:user].to_s)}"

With Sequel::Database#[], Sequel::Database#fetch and Sequel::Dataset#with_sql, you should use placeholders, in which case Sequel automatically literalizes the input:

  DB['SELECT * FROM foo WHERE bar = ?', params[:user].to_s]

==== Manually Created Literal Strings

Sequel generally treats ruby strings as SQL strings (escaping them correctly), and not as raw SQL. However, you can convert a ruby string to a literal string, and Sequel will then treat it as raw SQL. This is typically done through String#lit if the {core_extensions}[link:files/doc/core_extensions_rdoc.html] are in use, or Sequel.lit[rdoc-ref:Sequel::SQL::Builders#lit] if they are not in use.

  'a'.lit
  Sequel.lit('a')

Using String#lit or Sequel.lit[rdoc-ref:Sequel::SQL::Builders#lit] to turn a ruby string into a literal string results in SQL injection if the string is derived from user input.
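For example, assuming params contains user input, the following hypothetical filter is vulnerable:

  DB[:table].where(Sequel.lit("name = '#{params[:name]}'")) # SQL Injection!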
With both of these methods, the strings can contain placeholders, which you can use to safely include user input inside a literal string:

  'a = ?'.lit(params[:user_id].to_s)
  Sequel.lit('a = ?', params[:user_id].to_s)

Even though they have similar names, note that Sequel::Database#literal operates very differently from String#lit or Sequel.lit[rdoc-ref:Sequel::SQL::Builders#lit]. Sequel::Database#literal is for taking any supported object, and getting an SQL representation of that object, while String#lit or Sequel.lit[rdoc-ref:Sequel::SQL::Builders#lit] are for treating a ruby string as raw SQL. For example:

  DB.literal(Date.today) # "'2013-03-22'"
  DB.literal('a') # "'a'"
  DB.literal(Sequel.lit('a')) # "a"
  DB.literal(:a => 'a') # "(\"a\" = 'a')"
  DB.literal(:a => Sequel.lit('a')) # "(\"a\" = a)"

==== SQL Filter Fragments

The most common way to use raw SQL with Sequel is in filters:

  DB[:table].where("name > 'M'")

If a filter method is passed a string as the first argument, it treats the rest of the arguments (if any) as placeholders to the string. So you should never do:

  DB[:table].where("name > #{params[:id].to_s}") # SQL Injection!

Instead, you should use a placeholder:

  DB[:table].where("name > ?", params[:id].to_s) # Safe

Note that for that type of query, Sequel generally encourages the following form:

  DB[:table].where{|o| o.name > params[:id].to_s} # Safe

Sequel's DSL supports a wide variety of SQL concepts, so it's possible to code most applications without ever using raw SQL.

A large number of dataset methods ultimately pass down their arguments to a filter method, even some you may not expect, so you should be careful. At least the following methods pass their arguments to a filter method:

* Sequel::Dataset#where
* Sequel::Dataset#having
* Sequel::Dataset#filter
* Sequel::Dataset#exclude
* Sequel::Dataset#exclude_where
* Sequel::Dataset#exclude_having
* Sequel::Dataset#and
* Sequel::Dataset#or
* Sequel::Dataset#first
* Sequel::Dataset#last
* Sequel::Dataset#[]
* Sequel::Dataset#[]=

The Model.find[rdoc-ref:Sequel::Model::ClassMethods#find] and Model.find_or_create[rdoc-ref:Sequel::Model::ClassMethods#find_or_create] class methods also call down to the filter methods.

==== SQL Fragment passed to Dataset#update

Similar to the filter methods, Sequel::Dataset#update also treats a string argument as raw SQL:

  DB[:table].update("column = 1")

So you should not do:

  DB[:table].update("column = #{params[:value].to_s}") # SQL Injection!

Instead, you should do:

  DB[:table].update(:column => params[:value].to_s) # Safe

==== SQL Fragment passed to Dataset#lock_style

The Sequel::Dataset#lock_style method also treats an input string as SQL code. This method should not be called with user input.

=== SQL Identifier Injections

Usually, Sequel treats ruby symbols as SQL identifiers, and ruby strings as SQL strings. However, there are some parts of Sequel that treat ruby strings as SQL identifiers if an SQL string would not make sense in the same context. For example, Sequel::Database#from and Sequel::Dataset#from will treat a string as a table name:

  DB.from('t') # SELECT * FROM "t"

Other places where Sequel treats ruby strings as identifiers are the Sequel::Dataset#insert and Sequel::Dataset#update methods:

  DB[:t].update('b'=>1) # UPDATE "t" SET "b" = 1
  DB[:t].insert('b'=>1) # INSERT INTO "t" ("b") VALUES (1)

Note how the identifier is still quoted in these cases. Sequel quotes identifiers by default on most databases. However, it does not quote identifiers by default on DB2 and Informix.
On those databases, using an identifier derived from user input can lead to SQL injection. Similarly, if you turn off identifier quoting manually on other databases, you open yourself up to SQL injection if you use identifiers derived from user input.

When Sequel quotes identifiers, using an identifier derived from user input does not lead to SQL injection, since the identifiers are also escaped when quoting. Exceptions to this are Oracle (can't escape ") and Microsoft Access (can't escape ]).

In general, even if it doesn't lead to SQL Injection, you should avoid using identifiers derived from user input unless absolutely necessary.

Sequel also allows you to create identifiers using Sequel.identifier[rdoc-ref:Sequel::SQL::Builders#identifier] for plain identifiers, Sequel.qualify[rdoc-ref:Sequel::SQL::Builders#qualify] for qualified identifiers, and Sequel.as[rdoc-ref:Sequel::SQL::Builders#as] for aliased expressions. So if you pass any of those values derived from user input, you are dealing with the same scenario.

Note that the issues with SQL identifiers do not just apply to places where strings are used as identifiers, they also apply to all places where Sequel uses symbols as identifiers. However, if you are creating symbols from user input, you at least have a denial of service vulnerability, and possibly a more serious vulnerability.

== Denial of Service

Sequel converts some strings to symbols. Because symbols in ruby are not garbage collected, if the strings that are converted to symbols are derived from user input, you have a denial of service vulnerability due to memory exhaustion.

The strings that Sequel converts to symbols are generally not derived from user input, so Sequel in general is not vulnerable to this. However, users should be aware of the cases in which Sequel creates symbols, so they do not introduce a vulnerability into their application.

=== Column Names/Aliases

Sequel returns SQL result sets as an array of hashes with symbol keys. The keys are derived from the name that the database server gives the column. These names are generally static. For example:

  SELECT column FROM table

The database will generally use "column" as the name in the result set.

If you use an alias:

  SELECT column AS alias FROM table

The database will generally use "alias" as the name in the result set. So if you allow the user to control the alias name:

  DB[:table].select(:column.as(params[:alias]))

Then you have a denial of service vulnerability. In general, such a vulnerability is unlikely, because you are probably indexing into the returned hash(es) by name, and if an alias was used and you didn't expect it, your application wouldn't work.

=== Database Connection Options

All database connection options are converted to symbols. For a connection URL, the keys are generally fixed, but the scheme is turned into a symbol and the query option keys are used as connection option keys, so they are converted to symbols as well. For example:

  postgres://host/database?option1=foo&option2=bar

Will result in :postgres, :option1, and :option2 symbols being created.

Certain option values are also converted to symbols. In the general case, the sql_log_level option value is, but some adapters treat additional options similarly.

This is not generally a risk unless you are allowing the user to control the connection URLs or are connecting to arbitrary databases at runtime.
== Mass Assignment

Mass assignment is the practice of passing a hash of columns and values to a single method, and having multiple column values for a given object set based on the content of the hash. The security issue here is that mass assignment may allow the user to set columns that you didn't intend to allow.

The Model#set[rdoc-ref:Sequel::Model::InstanceMethods#set] and Model#update[rdoc-ref:Sequel::Model::InstanceMethods#update] methods do mass assignment. The default configuration of Sequel::Model allows all model columns except for the primary key column(s) to be set via mass assignment. Example:

  album = Album.new
  album.set(params[:album]) # Mass Assignment

Both Model.new[rdoc-ref:Sequel::Model::InstanceMethods::new] and Model.create[rdoc-ref:Sequel::Model::ClassMethods#create] call Model#set[rdoc-ref:Sequel::Model::InstanceMethods#set] internally, so they also allow mass assignment:

  Album.new(params[:album]) # Mass Assignment
  Album.create(params[:album]) # Mass Assignment

Instead of these methods, it is encouraged to use the Model#set_only[rdoc-ref:Sequel::Model::InstanceMethods#set_only], Model#update_only[rdoc-ref:Sequel::Model::InstanceMethods#update_only], Model#set_fields[rdoc-ref:Sequel::Model::InstanceMethods#set_fields], or Model#update_fields[rdoc-ref:Sequel::Model::InstanceMethods#update_fields] methods, which allow you to specify which fields to allow on a per-call basis. This pretty much eliminates the chance that the user will be able to set a column you did not intend to allow:

  album.set_only(params[:album], [:name, :copies_sold])
  album.set_fields(params[:album], [:name, :copies_sold])

You can override the columns to allow by default during mass assignment via the Model.set_allowed_columns[rdoc-ref:Sequel::Model::ClassMethods#set_allowed_columns] class method. This is a good practice, though being explicit on a per-call basis is still recommended:

  Album.set_allowed_columns(:name, :copies_sold)
  Album.create(params[:album]) # Only name and copies_sold set

For more details on the mass assignment methods, see the {Mass Assignment Guide}[link:files/doc/mass_assignment_rdoc.html].

== General Parameter Handling

This issue isn't necessarily specific to Sequel, but it is a good general practice. If you are using values derived from user input, it is best to be explicit about their type. For example:

  Album.where(:id=>params[:id])

is probably a bad idea. Assuming you are using a web framework, params\[:id\] could be a string, an array, a hash, or nil.

Assuming that +id+ is an integer field, you probably want to do:

  Album.where(:id=>params[:id].to_i)

If you are looking something up by name, you should try to enforce the value to be a string:

  Album.where(:name=>params[:name].to_s)

If you are trying to use an IN clause with a list of id values based on input provided on a web form:

  Album.where(:id=>params[:ids].to_a.map{|i| i.to_i})

Basically, be as explicit as possible. While there aren't any known security issues in Sequel when you do:

  Album.where(:id=>params[:id])

it still allows the attacker to choose to do any of the following queries:

  id IS NULL # nil
  id = '1' # '1'
  id IN ('1', '2', '3') # ['1', '2', '3']
  id = ('a' = 'b') # {'a'=>'b'}
  id = ('a' IN ('a', 'b') AND 'c' = '') # {'a'=>['a', 'b'], 'c'=>''}

While none of those allow for SQL injection, it's possible that they might cause an issue in your application. For example, a long array or deeply nested hash might cause the database to have to do a lot of work that could be avoided.
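One way to make that explicitness systematic is to funnel all parameter access through a small coercion helper. The sketch below is illustrative and not part of Sequel; +typed_param+ is a hypothetical name, and Kernel's Integer() is used so malformed input raises instead of silently becoming 0 (the two-argument form of Integer requires ruby 1.9+):

  # Hypothetical helper: coerce a request parameter to an expected type,
  # raising for input that doesn't conform.
  def typed_param(params, key, type)
    value = params[key]
    case type
    when :integer
      Integer(value.to_s, 10) # "abc" or "" raises ArgumentError
    when :string
      value.to_s
    when :integer_array
      Array(value).map{|v| Integer(v.to_s, 10)}
    else
      raise ArgumentError, "unknown type: #{type.inspect}"
    end
  end

  Album.where(:id=>typed_param(params, :id, :integer))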
In general, it's best to let the attacker control as little as possible, and explicitly specifying types helps a great deal there.

= Read-Only Slaves/Writable Master and Database Sharding

Sequel has support for read only slave databases with a writable master database, as well as database sharding (where you can pick a server to use for a given dataset). Support for both features is database independent, and should work for all database adapters that ship with Sequel.

== The :servers Database option

Sharding and read_only support are both enabled via the :servers database option. Using the :servers database option makes Sequel use a connection pool class that supports sharding, and the minimum required to enable sharding support is to use the empty hash:

  DB=Sequel.connect('postgres://master_server/database', :servers=>{})

In most cases, you are probably not going to want to use an empty hash. Keys in the server hash are not restricted to a particular type, but the general recommendation is to use a symbol unless you have special requirements. Values in the server hash should be either hashes or procs that return hashes. These hashes are merged into the Database object's default options hash to get the connection options for the shard, so you don't need to override all options, just the ones that need to be modified. For example, if you are using the same user, password, and database name and just the host is changing, you only need a :host entry in each shard's hash.

Note that all servers should have the same schema for all tables you are accessing, unless you really know what you are doing.

== Master and Slave Database Configurations

=== Single Read-Only Slave, Single Master

To use a single, read-only slave that handles SELECT queries, the following is the simplest configuration:

  DB=Sequel.connect('postgres://master_server/database', \
    :servers=>{:read_only=>{:host=>'slave_server'}})

This will use the slave_server for SELECT queries and master_server for other queries.

=== Multiple Read-Only Slaves, Single Master

Let's say you have 4 slave database servers with names slave_server0, slave_server1, slave_server2, and slave_server3.

  DB=Sequel.connect('postgres://master_server/database', \
    :servers=>{:read_only=>proc{|db| {:host=>db.get_slave_host}}})
  def DB.get_slave_host
    @current_host ||= -1
    "slave_server#{(@current_host+=1)%4}"
  end

This will use one of the slave servers for SELECT queries and use the master_server for other queries. It's also possible to pick a random host instead of using the round robin approach presented above, but that can result in less optimal resource usage.

=== Multiple Read-Only Slaves, Multiple Masters

This involves the same basic idea as the multiple slaves, single master setup, but it shows that the master database is named :default. So for 4 masters and 4 slaves:

  DB=Sequel.connect('postgres://master_server/database', \
    :servers=>{:read_only=>proc{|db| {:host=>db.get_slave_host}}, \
      :default=>proc{|db| {:host=>db.get_master_host}}})
  def DB.get_slave_host
    @current_slave_host ||= -1
    "slave_server#{(@current_slave_host+=1)%4}"
  end
  def DB.get_master_host
    @current_master_host ||= -1
    "master_server#{(@current_master_host+=1)%4}"
  end

== Sharding

There is specific support in Sequel for handling master/slave database combinations, with the only necessary setup being the database configuration.
However, since sharding is always going to be implementation dependent, Sequel supplies the basic infrastructure, but you have to tell it which server to use for each dataset.

Let's assume a simple scenario, a distributed rainbow table for SHA-1 hashes, sharding based on the first hex character (for a total of 16 shards). First, you need to configure the database:

  servers = {}
  (('0'..'9').to_a + ('a'..'f').to_a).each do |hex|
    servers[hex.to_sym] = {:host=>"hash_host_#{hex}"}
  end
  DB=Sequel.connect('postgres://hash_host/hashes', :servers=>servers)

This configures 17 servers, the 16 shard servers (hash_host_0 through hash_host_f), and 1 default server which will be used if no shard is specified ("hash_host"). If you want the default server to be one of the shard servers (e.g. hash_host_a), it's easiest to do:

  DB=Sequel.connect('postgres://hash_host_a/hashes', :servers=>servers)

That will still set up a second pool of connections for the default server, since it considers the default server and shard servers independent. Note that if you always set the shard on a dataset before using it in queries, it will not attempt to connect to the default server. Sequel may use the default server in queries it generates itself, such as to get column names or table schemas, so you should always have a default server that works.

To set the shard for a given query, you use the Dataset#server method:

  DB[:hashes].server(:a).where(:hash=>/31337/)

That will return all matching rows on the hash_host_a shard that have a hash column that contains 31337.

Rainbow tables are generally used to find specific hashes, so to save some work, you might want to add a method to the dataset that automatically sets the shard to use. This is fairly easy using a Sequel::Model:

  class Rainbow < Sequel::Model(:hashes)
    dataset_module do
      def plaintext_for_hash(hash)
        raise(ArgumentError, 'Invalid SHA-1 Hash') unless /\A[0-9a-f]{40}\z/.match(hash)
        server(hash[0...1].to_sym).where(:hash=>hash).get(:plaintext)
      end
    end
  end
  Rainbow.plaintext_for_hash("e580726d31f6e1ad216ffd87279e536d1f74e606")

The connection pool can be further controlled to change how it handles attempts to access shards that haven't been configured. The default is to assume the :default shard. However, you can specify a different shard using the :servers_hash option when connecting to the database:

  DB = Sequel.connect(..., :servers_hash=>Hash.new(:some_shard))

You can also use this feature to raise an exception if an unconfigured shard is used:

  DB = Sequel.connect(..., :servers_hash=>Hash.new{raise ...})

If you specify a :servers_hash option to raise an exception for unconfigured shards, you should also explicitly specify a :read_only entry in your :servers option for the case where a shard is not specified. In most cases it is sufficient to make the :read_only entry the same as the :default shard:

  servers = {:read_only => {}}
  (('0'..'9').to_a + ('a'..'f').to_a).each do |hex|
    servers[hex.to_sym] = {:host=>"hash_host_#{hex}"}
  end
  DB=Sequel.connect('postgres://hash_host/hashes', :servers=>servers,
    :servers_hash=>Hash.new{raise Exception.new("Invalid Server")})

=== Sharding Plugin

Sequel comes with a sharding plugin that makes it easy to use sharding with model objects. It makes sure that objects retrieved from a specific shard are always saved back to that shard, allows you to create objects on specific shards, and even makes sure associations work well with shards.
You just need to remember to set the model to use the plugin:

  class Rainbow < Sequel::Model(:hashes)
    plugin :sharding
  end

  Rainbow.server(:a).first(:id=>1).update(:plaintext=>'VGM')

If all of your models are sharded, you can set all models to use the plugin via:

  Sequel::Model.plugin :sharding

=== server_block Extension

By default, you must specify the server/shard you want to use for every dataset/action, or Sequel will use the default shard. If you have a group of queries that should use the same shard, it can get a bit redundant to specify the same shard for all of them.

The server_block extension adds a Database#with_server method that scopes all database access inside the block to the given shard by default:

  DB.extension :server_block
  DB.with_server(:a) do
    # this SELECT query uses the "a" shard
    if r = Rainbow.first(:hash=>/31337/)
      r.count += 1
      # this UPDATE query also uses the "a" shard
      r.save
    end
  end

The server_block extension doesn't currently integrate with the sharding plugin, as it ties into the Dataset#server method. This shouldn't present a problem in practice as long as you just access the models inside the with_server block, since they will use the shard set by with_server by default. However, you will probably have issues if you retrieve the models inside the block and save them outside of the block. If you need to do that, call the server method explicitly on the dataset used to retrieve the model objects.

=== arbitrary_servers Extension

By default, Sequel's sharding support is designed to work with predefined shards. It ships with Database#add_servers and Database#remove_servers methods to modify these predefined shards on the fly, but it is a bit cumbersome to work with truly arbitrary servers (requiring you to call add_servers before use, then remove_servers after use).

The arbitrary_servers extension allows you to pass a server/shard options hash as the server to use, and those options will be merged directly into the database's default options:

  DB.extension :arbitrary_servers
  DB[:rainbows].server(:host=>'hash_host_a').all
  # or
  DB[:rainbows].server(:host=>'hash_host_b', :database=>'backup').all

arbitrary_servers is designed to work well in conjunction with the server_block extension:

  DB.with_server(:host=>'hash_host_b', :database=>'backup') do
    DB.synchronize do
      # All queries here default to the backup database on hash_host_b
    end
  end

If you are using arbitrary_servers with server_block, you may want to define the following method (or something similar) so that you don't need to call synchronize separately:

  def DB.with_server(*)
    super{synchronize{yield}}
  end

The reason for the synchronize method is that it checks out a connection and makes the same connection available for the duration of the block. If you don't do that, Sequel will probably disconnect from the database and reconnect to the database on each request, since connections to arbitrary servers are not cached. A combined example is shown at the end of this section.

Note that this extension only works with the sharded threaded connection pool. If you are using the sharded single connection pool, you need to switch to the sharded threaded connection pool before using this extension. If you are passing the :single_threaded option to the Database, just remove that option. If you are setting:

  Sequel.single_threaded = true

just remove or comment out that code.
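Putting the pieces together, here's a sketch of the combined pattern, assuming the with_server override above has been defined and the host name is illustrative:

  DB.with_server(:host=>'hash_host_c') do
    # One connection to hash_host_c is checked out for the whole block,
    # so both queries below reuse it instead of reconnecting.
    count = DB[:rainbows].count
    rows = DB[:rainbows].where(:plaintext=>'VGM').all
  end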
= Sequel for SQL Users

One of the main benefits of Sequel is that it doesn't require the user to know SQL in order to use it, though SQL knowledge is certainly helpful. Unlike most other Sequel documentation, this guide assumes you know SQL, and provides an easy way to discover how to do something in Sequel given the knowledge of how to do so in SQL.

== You Can Just Use SQL

With Sequel, it's very easy to just use SQL for your queries. If learning Sequel's DSL seems like a waste of time, you are certainly free to write all your queries in SQL. Sequel uses a few different methods depending on the type of query you are doing.

=== SELECT

For SELECT queries, you should probably use Database#fetch with a string and a block:

  DB.fetch("SELECT * FROM albums") do |row|
    puts row[:name]
  end

Database#fetch will take the query you give it, execute it on the database, and yield a hash with column symbol keys for each row returned. If you want to use some placeholder variables, you can set the placeholders with ? and add the arguments to fetch:

  DB.fetch("SELECT * FROM albums WHERE name LIKE ?", 'A%') do |row|
    puts row[:name]
  end

You can also use named placeholders by starting the placeholder with a colon, and using a hash for the argument:

  DB.fetch("SELECT * FROM albums WHERE name LIKE :pattern", :pattern=>'A%') do |row|
    puts row[:name]
  end

This can be helpful for long queries where it is difficult to match the ? with the arguments.

What Sequel actually does internally is two separate things. It first creates a dataset representing the query, and then it executes the dataset's SQL code to retrieve the objects. Often, you want to define a dataset at some point, but not execute it till later. You can do this by leaving off the block, and storing the dataset in a variable:

  ds = DB.fetch("SELECT * FROM albums")

Then, when you want to retrieve the rows later, you can call +each+ on the dataset to retrieve the rows:

  ds.each{|r| puts r[:name]}

You should note that Database#[] calls Database#fetch if a string is provided, so you can also do:

  ds = DB["SELECT * FROM albums"]
  ds.each{|r| puts r[:name]}

However, note that Database#[] cannot take a block directly; you have to call +each+ on the returned dataset. There are plenty of other methods besides +each+; one is +all+, which returns all records as an array:

  DB["SELECT * FROM albums"].all # [{:id=>1, :name=>'RF', ...}, ...]

=== INSERT, UPDATE, DELETE

INSERT, UPDATE, and DELETE all work the same way. You first create the dataset with the SQL you want to execute using Database#[]:

  insert_ds = DB["INSERT INTO albums (name) VALUES (?)", 'RF']
  update_ds = DB["UPDATE albums SET name = ? WHERE name = ?", 'MO', 'RF']
  delete_ds = DB["DELETE FROM albums WHERE name = ?", 'MO']

Then, you call the +insert+, +update+, or +delete+ method on the returned dataset:

  insert_ds.insert
  update_ds.update
  delete_ds.delete

+update+ and +delete+ should return the number of rows affected, and +insert+ should return the autogenerated primary key integer for the row inserted (if any).
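To make those return values concrete, here's a sketch of the full round trip; the id values shown are illustrative and depend on the table's contents:

  id = DB["INSERT INTO albums (name) VALUES (?)", 'RF'].insert
  # => autogenerated primary key, e.g. 1
  DB["UPDATE albums SET name = ? WHERE id = ?", 'MO', id].update
  # => 1 (number of rows updated)
  DB["DELETE FROM albums WHERE id = ?", id].delete
  # => 1 (number of rows deleted)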
=== Other Queries

All other queries, such as TRUNCATE, CREATE TABLE, and ALTER TABLE, should be executed using Database#run:

  DB.run "CREATE TABLE albums (id integer primary key, name varchar(255))"

You can also use Database#<<:

  DB << "ALTER TABLE albums ADD COLUMN copies_sold INTEGER"

=== Other Places

Almost everywhere in Sequel, you can drop down to literal SQL by providing a literal string, which you can create with Sequel.lit:

  DB[:albums].select('name') # SELECT 'name' FROM albums
  DB[:albums].select(Sequel.lit('name')) # SELECT name FROM albums

For a simpler way of creating literal strings, you can also use the {core_extensions extension}[link:files/doc/core_extensions_rdoc.html], which adds the String#lit method, and other methods that integrate Sequel's DSL with the ruby language:

  DB[:albums].select('name'.lit)

So you can use Sequel's DSL everywhere you find it helpful, and fall back to literal SQL if the DSL can't do what you want or you just find literal SQL easier.

== Translating SQL Expressions into Sequel

The rest of this guide assumes you want to use Sequel's DSL to represent your query, that you know how to write the query in SQL, but you aren't sure how to write it in Sequel's DSL.

This section will describe how specific SQL expressions are handled in Sequel. The next section will discuss how to create queries by using method chaining on datasets.

=== Database#literal

It's important to get familiar with the Database#literal method, which will return the SQL that will be used for a given expression:

  DB.literal(1) # => "1"
  DB.literal(:column) # => "\"column\""
  DB.literal('string') # => "'string'"

Try playing around to see how different objects get literalized into SQL.

=== Database Loggers

Some Sequel methods handle literalization slightly differently than Database#literal. If you want to see all SQL queries that Sequel is sending to the database, you should add a database logger:

  DB.loggers << Logger.new($stdout)

Now that you know how to see what SQL is being used, let's jump in and see how to map SQL syntax to Sequel syntax:

=== Identifiers

In Sequel, SQL identifiers are usually specified as ruby symbols:

  :column # "column"

As you can see, Sequel quotes identifiers by default. Depending on your database, it may uppercase them by default as well:

  :column # "COLUMN" on some databases

A plain symbol is usually treated as an unqualified identifier. However, if you are using multiple tables in a query, and you want to reference a column in one of the tables that has the same name as a column in another one of the tables, you need to qualify that reference. There are two main ways in Sequel to do that.
The first is implicit qualification inside the symbol, using the double underscore:

  :table__column # "table"."column"

Note that you can't use a period to separate them:

  :table.column # calls the column method on the symbol

Also note that specifying the period inside the symbol doesn't work if you are quoting identifiers:

  :"table.column" # "table.column"

The other way to qualify an identifier is to use the Sequel.qualify method with the table and column symbols:

  Sequel.qualify(:table, :column) # "table"."column"

Another way to generate identifiers is to use Sequel's {virtual row support}[link:files/doc/virtual_rows_rdoc.html]:

  DB[:albums].select{name} # SELECT "name" FROM "albums"
  DB[:albums].select{albums__name} # SELECT "albums"."name" FROM "albums"

=== Numbers

In general, ruby numbers map directly to SQL numbers:

  # Integers
  1 # 1
  -1 # -1

  # Floats
  1.5 # 1.5

  # BigDecimals
  BigDecimal.new('1000000.123091029') # 1000000.123091029

=== Strings

In general, ruby strings map directly to SQL strings:

  'name' # 'name'
  "name" # 'name'

=== Aliasing

Sequel allows for implicit aliasing in column symbols using the triple underscore:

  :column___alias # "column" AS "alias"

You can combine this with implicit qualification:

  :table__column___alias # "table"."column" AS "alias"

You can also use the Sequel.as method to create an alias, and the +as+ method on most Sequel-specific expression objects:

  Sequel.as(:column, :alias) # "column" AS "alias"
  Sequel.qualify(:table, :column).as(:alias) # "table"."column" AS "alias"

=== Functions

The easiest way to use SQL functions is via a virtual row:

  DB[:albums].select{func{}} # SELECT func() FROM "albums"
  DB[:albums].select{func(col1, col2)} # SELECT func("col1", "col2") FROM "albums"

You can also use the Sequel.function method on the symbol that contains the function name:

  Sequel.function(:func) # func()
  Sequel.function(:func, :col1, :col2) # func("col1", "col2")

=== Aggregate Functions

Aggregate functions work the same way as normal functions, since they share the same syntax:

  Sequel.function(:sum, :column) # sum(column)

However, if you want to use the DISTINCT modifier to an aggregate function, you either have to use literal SQL or a virtual row block:

  Sequel.function(:sum, Sequel.lit('DISTINCT column')) # sum(DISTINCT column)
  DB[:albums].select{sum(:distinct, :column){}} # SELECT sum(DISTINCT column) FROM albums

If you want to use the wildcard as the sole argument of the aggregate function, you again have to use literal SQL or a virtual row block:

  Sequel.function(:count, Sequel.lit('*')) # count(*)
  DB[:albums].select{count(:*){}} # SELECT count(*) FROM albums

Note that Sequel provides helper methods for aggregate functions such as +count+, +sum+, +min+, +max+, +avg+, and +group_and_count+, which handle common uses of aggregate functions.
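As a quick illustration of those helpers, here is a hedged sketch; the exact SQL, such as the AS count alias or a LIMIT added by +count+, can vary by version and database:

  DB[:albums].count              # ~ SELECT count(*) AS count FROM albums
  DB[:albums].sum(:copies_sold)  # ~ SELECT sum(copies_sold) FROM albums
  DB[:albums].avg(:copies_sold)  # ~ SELECT avg(copies_sold) FROM albums
  DB[:albums].group_and_count(:artist_id)
  # ~ SELECT artist_id, count(*) AS count FROM albums GROUP BY artist_id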
=== Window Functions

If the database supports window functions, Sequel can handle them using a virtual row block:

  DB[:albums].select{function(:over){}}
  # SELECT function() OVER () FROM albums

  DB[:albums].select{count(:over, :*=>true){}}
  # SELECT count(*) OVER () FROM albums

  DB[:albums].select{function(:over, :args=>col1, :partition=>col2, :order=>col3){}}
  # SELECT function(col1) OVER (PARTITION BY col2 ORDER BY col3) FROM albums

  DB[:albums].select{function(:over, :args=>[c1, c2], :partition=>[c3, c4], :order=>[c5, c6]){}}
  # SELECT function(c1, c2) OVER (PARTITION BY c3, c4 ORDER BY c5, c6) FROM albums

=== Equality Operator (=)

Sequel uses hashes to specify equality:

  {:column=>1} # ("column" = 1)

You can also specify this as an array of two element arrays:

  [[:column, 1]] # ("column" = 1)

=== Not Equal Operator (!=)

You can specify a not equals condition by inverting the hash or array of two element arrays using Sequel.negate or Sequel.~:

  Sequel.negate(:column => 1) # ("column" != 1)
  Sequel.negate([[:column, 1]]) # ("column" != 1)
  Sequel.~(:column => 1) # ("column" != 1)
  Sequel.~([[:column, 1]]) # ("column" != 1)

The difference between the two is that negate only works on hashes and arrays of two element arrays, and it negates all entries in the hash or array, while ~ does a general inversion. This is best shown by an example with multiple entries:

  Sequel.negate(:column => 1, :foo => 2) # (("column" != 1) AND (foo != 2))
  Sequel.~(:column => 1, :foo => 2) # (("column" != 1) OR (foo != 2))

The most common need for not equals is in filters, in which case you can use the +exclude+ method:

  DB[:albums].exclude(:column=>1) # SELECT * FROM "albums" WHERE ("column" != 1)

Note that +exclude+ does a generalized inversion, similar to Sequel.~.

=== Inclusion and Exclusion Operators (IN, NOT IN)

Sequel also uses hashes to specify inclusion, and inversions of those hashes to specify exclusion:

  {:column=>[1, 2, 3]} # ("column" IN (1, 2, 3))
  Sequel.~(:column=>[1, 2, 3]) # ("column" NOT IN (1, 2, 3))

As you may have guessed, Sequel switches from an = to an IN when the hash value is an array. It also does this for datasets, which easily allows you to test for inclusion and exclusion in a subselect:

  {:column=>DB[:albums].select(:id)} # ("column" IN (SELECT "id" FROM "albums"))
  Sequel.~(:column=>DB[:albums].select(:id)) # ("column" NOT IN (SELECT "id" FROM "albums"))

Sequel also supports the SQL EXISTS operator using Dataset#exists:

  DB[:albums].exists # EXISTS (SELECT * FROM albums)

=== Identity Operators (IS, IS NOT)

Hashes in Sequel use IS if the value is true, false, or nil:

  {:column=>nil} # ("column" IS NULL)
  {:column=>true} # ("column" IS TRUE)
  {:column=>false} # ("column" IS FALSE)

Negation works the same way as it does for equality and inclusion:

  Sequel.~(:column=>nil) # ("column" IS NOT NULL)
  Sequel.~(:column=>true) # ("column" IS NOT TRUE)
  Sequel.~(:column=>false) # ("column" IS NOT FALSE)

=== Inversion Operator (NOT)

Sequel's general inversion operator is ~, which works on symbols and most Sequel-specific expression objects:

  Sequel.~(:column) # NOT "column"

Note that ~ will actually apply the inversion operation to the underlying object, which is why Sequel.~(:column=>1) produces (column != 1) instead of NOT (column = 1).
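To see the two inversion styles side by side in a full query, here is a sketch that contrasts Sequel.negate and Sequel.~ inside a filter (output shown for a quoting database):

  DB[:albums].where(Sequel.negate(:col1=>1, :col2=>2))
  # SELECT * FROM "albums" WHERE (("col1" != 1) AND ("col2" != 2))
  DB[:albums].where(Sequel.~(:col1=>1, :col2=>2))
  # SELECT * FROM "albums" WHERE (("col1" != 1) OR ("col2" != 2))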
=== Inequality Operators (< > <= >=)

Sequel defines the inequality operators directly on most Sequel-specific expression objects:

  Sequel.qualify(:table, :column) > 1 # ("table"."column" > 1)
  Sequel.qualify(:table, :column) < 1 # ("table"."column" < 1)
  Sequel.function(:func) >= 1 # (func() >= 1)
  Sequel.function(:func, :column) <= 1 # (func("column") <= 1)

If you want to use them on a symbol, you should call Sequel.expr with the symbol:

  Sequel.expr(:column) > 1 # ("column" > 1)

A common use of virtual rows is to handle inequality operators:

  DB[:albums].where{col1 > col2} # SELECT * FROM "albums" WHERE ("col1" > "col2")

=== Standard Mathematical Operators (+ - * /)

The standard mathematical operators are defined on most Sequel-specific expression objects:

  Sequel.expr(:column) + 1 # "column" + 1
  Sequel.expr(:table__column) - 1 # "table"."column" - 1
  Sequel.qualify(:table, :column) * 1 # "table"."column" * 1
  Sequel.expr(:column) / 1 # "column" / 1

You can also call the operator methods directly on the Sequel module:

  Sequel.+(:column, 1) # "column" + 1
  Sequel.-(:table__column, 1) # "table"."column" - 1
  Sequel.*(Sequel.qualify(:table, :column), 1) # "table"."column" * 1
  Sequel./(:column, 1) # "column" / 1

Note that the following does not work:

  1 + Sequel.expr(:column) # raises TypeError

For commutative operators such as + and *, this isn't a problem, as you can just reorder, but non-commutative operators such as - and / cannot be expressed directly. The solution is to use one of the methods on the Sequel module:

  Sequel.expr(1) / :column # (1 / "column")
  Sequel./(1, :column) # (1 / "column")

=== Boolean Operators (AND OR)

Sequel defines the & and | methods on most Sequel-specific expression objects to handle AND and OR:

  Sequel.expr(:column1) & :column2 # ("column1" AND "column2")
  Sequel.expr(:column1=>1) | {:column2=>2} # (("column1" = 1) OR ("column2" = 2))
  (Sequel.function(:func) > 1) & :column3 # ((func() > 1) AND "column3")

Note the use of parentheses in the last statement. If you omit them, you won't get what you expect, because & has higher precedence than >:

  Sequel.function(:func) > 1 & :column3

is parsed as:

  Sequel.function(:func) > (1 & :column3)

You can also use the Sequel.& and Sequel.| methods:

  Sequel.&(:column1, :column2) # ("column1" AND "column2")
  Sequel.|({:column1=>1}, {:column2=>2}) # (("column1" = 1) OR ("column2" = 2))

You can use hashes and arrays of two element arrays to specify AND and OR with equality conditions:

  {:column1=>1, :column2=>2} # (("column1" = 1) AND ("column2" = 2))
  [[:column1, 1], [:column2, 2]] # (("column1" = 1) AND ("column2" = 2))

As you can see, these literalize with ANDs by default.
You can use the Sequel.or method to use OR instead:

  Sequel.or(:column1=>1, :column2=>2) # (("column1" = 1) OR ("column2" = 2))

You've already seen the Sequel.negate method, which will use ANDs if multiple entries are used:

  Sequel.negate(:column1=>1, :column2=>2) # (("column1" != 1) AND ("column2" != 2))

To negate while using ORs, the Sequel.~ operator can be used:

  Sequel.~(:column1=>1, :column2=>2) # (("column1" != 1) OR ("column2" != 2))

Note again that Dataset#exclude uses ~, not +negate+:

  DB[:albums].exclude(:column1=>1, :column2=>2)
  # SELECT * FROM "albums" WHERE (("column1" != 1) OR ("column2" != 2))

=== Casts

Casting in Sequel is done with the +cast+ method, which is available on most of the Sequel-specific expression objects:

  Sequel.expr(:name).cast(:text) # CAST("name" AS text)
  Sequel.expr('1').cast(:integer) # CAST('1' AS integer)
  Sequel.qualify(:table, :column).cast(:date) # CAST("table"."column" AS date)

You can also use the Sequel.cast method:

  Sequel.cast(:name, :text) # CAST("name" AS text)

=== Bitwise Mathematical Operators (& | ^ << >> ~)

Sequel allows the use of bitwise mathematical operators on Sequel::SQL::NumericExpression objects:

  Sequel.expr(:number) + 1 # => #<Sequel::SQL::NumericExpression ...>
  (Sequel.expr(:number) + 1) & 5 # (("number" + 1) & 5)

As you can see, when you use the + operator on a symbol, you get a NumericExpression. You can turn an expression into a NumericExpression using +sql_number+:

  Sequel.expr(:number).sql_number | 5 # ("number" | 5)
  Sequel.function(:func).sql_number << 7 # (func() << 7)
  Sequel.cast(:name, :integer).sql_number >> 8 # (CAST("name" AS integer) >> 8)

Sequel allows you to do the cast and conversion at the same time via +cast_numeric+:

  Sequel.expr(:name).cast_numeric ^ 9 # (CAST("name" AS integer) ^ 9)

Note that &, |, and ~ are already defined to do AND, OR, and NOT on most expressions, so if you want to use the bitwise operators, you need to make sure that they are converted first:

  ~Sequel.expr(:name) # NOT "name"
  ~Sequel.expr(:name).sql_number # ~"name"

=== String Operators (||, LIKE, Regexp)

Sequel allows the use of the string concatenation operator on Sequel::SQL::StringExpression objects, which can be created using the +sql_string+ method on an expression:

  Sequel.expr(:name).sql_string + ' - Name' # ("name" || ' - Name')

Just like for the bitwise operators, Sequel allows you to do the cast and conversion at the same time via +cast_string+:

  Sequel.expr(:number).cast_string + ' - Number' # (CAST(number AS varchar(255)) || ' - Number')

Note that similar to the mathematical operators, you cannot switch the order of the expression and have it work:

  'Name - ' + Sequel.expr(:name).sql_string # raises TypeError

Just like for the mathematical operators, you can use Sequel.expr to wrap the object:

  Sequel.expr('Name - ') + :name # ('Name - ' || "name")

The Sequel.join method concatenates all of the elements in the array:

  Sequel.join(['Name', :name]) # ('Name' || "name")

Just like ruby's Array#join, you can provide an argument for a string used to join each element:

  Sequel.join(['Name', :name], ' - ') # ('Name' || ' - ' || "name")

For the LIKE operator, Sequel defines the +like+ and +ilike+ methods on most Sequel-specific expression objects:

  Sequel.expr(:name).like('A%') # ("name" LIKE 'A%')
  Sequel.expr(:name).ilike('A%') # ("name" ILIKE 'A%')

You can also use the Sequel.like and Sequel.ilike methods:

  Sequel.like(:name, 'A%') # ("name" LIKE 'A%')
  Sequel.ilike(:name, 'A%') # ("name" ILIKE 'A%')

Note that the above syntax for +ilike+, while Sequel's default, is specific to PostgreSQL.
However, most other adapters override the behavior. For example, on MySQL, Sequel uses LIKE BINARY for +like+, and LIKE for +ilike+. If the database supports both case sensitive and case insensitive LIKE, then +like+ will use a case sensitive LIKE, and +ilike+ will use a case insensitive LIKE.

Inverting the LIKE operator works like other inversions:

  ~Sequel.like(:name, 'A%') # ("name" NOT LIKE 'A%')

Sequel also supports SQL regular expressions on MySQL and PostgreSQL. You can use these by passing a ruby regular expression to +like+ or +ilike+, or by making the regular expression a hash value:

  Sequel.like(:name, /^A/) # ("name" ~ '^A')
  ~Sequel.ilike(:name, /^A/) # ("name" !~* '^A')
  {:name=>/^A/i} # ("name" ~* '^A')
  Sequel.~(:name=>/^A/) # ("name" !~ '^A')

Note that using +ilike+ with a regular expression will always make the regexp case insensitive. If you use +like+ or the hash with regexp value, it will only be case insensitive if the Regexp itself is case insensitive.

=== Order Specifications (ASC, DESC)

Sequel supports specifying ascending or descending order using the +asc+ and +desc+ methods on most Sequel-specific expression objects:

  Sequel.expr(:column).asc # "column" ASC
  Sequel.expr(:column).qualify(:table).desc # "table"."column" DESC

You can also use the Sequel.asc and Sequel.desc methods:

  Sequel.asc(:column) # "column" ASC
  Sequel.desc(Sequel.expr(:column).qualify(:table)) # "table"."column" DESC

On some databases, you can specify null ordering:

  Sequel.asc(:column, :nulls=>:first) # "column" ASC NULLS FIRST
  Sequel.desc(Sequel.expr(:column).qualify(:table), :nulls=>:last) # "table"."column" DESC NULLS LAST

=== All Columns (.*)

To select all columns in a table, Sequel supports the * method on identifiers without an argument:

  Sequel.expr(:table).* # "table".*

=== CASE statements

Sequel allows the easy production of SQL CASE statements using the Sequel.case method. The first argument is a hash or array of two element arrays representing the conditions, and the second argument is the default value (ELSE). The keys of the hash (or the first element of each array) are used as the WHEN conditions, and the values of the hash (or the second element of each array) as the THEN results. Here are some examples:

  Sequel.case({:column=>1}, 0) # (CASE WHEN "column" THEN 1 ELSE 0 END)
  Sequel.case([[:column, 1]], 0) # (CASE WHEN "column" THEN 1 ELSE 0 END)
  Sequel.case({{:column=>nil}=>1}, 0) # (CASE WHEN (column IS NULL) THEN 1 ELSE 0 END)

If the hash or array has multiple entries, multiple WHEN clauses are used:

  Sequel.case({:c=>1, :d=>2}, 0) # (CASE WHEN "c" THEN 1 WHEN "d" THEN 2 ELSE 0 END)
  Sequel.case([[:c, 1], [:d, 2]], 0) # (CASE WHEN "c" THEN 1 WHEN "d" THEN 2 ELSE 0 END)

If you provide a 3rd argument to Sequel.case, it goes between CASE and WHEN:

  Sequel.case({2=>1, 3=>5}, 0, :column) # (CASE column WHEN 2 THEN 1 WHEN 3 THEN 5 ELSE 0 END)

=== Subscripts/Array Access ([])

Sequel supports SQL subscripts using the +sql_subscript+ method on most Sequel-specific expression objects:

  Sequel.expr(:column).sql_subscript(3) # column[3]
  Sequel.expr(:column).qualify(:table).sql_subscript(3) # table.column[3]

You can also use the Sequel.subscript method:

  Sequel.subscript(:column, 3) # column[3]

Just like in SQL, you can use any expression as a subscript:

  Sequel.subscript(:column, Sequel.function(:func)) # column[func()]

== Building Queries in Sequel

In Sequel, SQL queries are built with method chaining.
=== Creating Datasets

You generally start by creating a dataset by calling Database#[] with a symbol specifying the table name:

  DB[:albums] # SELECT * FROM albums

If you want to select from multiple FROM tables, use multiple arguments:

  DB[:albums, :artists] # SELECT * FROM albums, artists

If you don't want to select from any FROM tables, just call +dataset+:

  DB.dataset # SELECT *

=== Chaining Methods

Once you have your dataset object, you build queries by chaining methods, usually with one method per clause in the query:

  DB[:albums].select(:id, :name).where(Sequel.like(:name, 'A%')).order(:name)
  # SELECT id, name FROM albums WHERE (name LIKE 'A%') ORDER BY name

Note that the order of your method chain is not usually important unless you have multiple methods that affect the same clause:

  DB[:albums].order(:name).where(Sequel.like(:name, 'A%')).select(:id, :name)
  # SELECT id, name FROM albums WHERE (name LIKE 'A%') ORDER BY name

=== Using the Same Dataset for SELECT, INSERT, UPDATE, and DELETE

Also note that while the SELECT clause is displayed when you look at a dataset, a Sequel dataset can be used for INSERT, UPDATE, and DELETE as well. Here's an example:

  ds = DB[:albums]
  ds.all # SELECT * FROM albums
  ds.insert(:name=>'RF') # INSERT INTO albums (name) VALUES ('RF')
  ds.update(:name=>'RF') # UPDATE albums SET name = 'RF'
  ds.delete # DELETE FROM albums

In general, the +insert+, +update+, and +delete+ methods use the appropriate clauses you defined on the dataset:

  ds = DB[:albums].where(:id=>1)
  ds.all # SELECT * FROM albums WHERE (id = 1)
  ds.insert(:name=>'RF') # INSERT INTO albums (name) VALUES ('RF')
  ds.update(:name=>'RF') # UPDATE albums SET name = 'RF' WHERE (id = 1)
  ds.delete # DELETE FROM albums WHERE (id = 1)

Note how +update+ and +delete+ used the +where+ argument, but +insert+ did not, because INSERT doesn't use a WHERE clause.

=== Methods Used for Each SQL Clause

To see which methods exist that affect each SQL clause, see the {"Dataset Basics" guide}[link:files/doc/dataset_basics_rdoc.html].

= Testing with Sequel

Whether or not you use Sequel in your application, you are usually going to want to have tests that ensure that your code works. When you are using Sequel, it's helpful to integrate it into your testing framework, and it's generally best to run each test in its own transaction if possible. That keeps all tests isolated from each other, and it's simple, as it handles all of the cleanup for you. Sequel doesn't ship with helpers for common libraries, as the exact code you need is often application-specific, but this page offers some examples that you can either use directly or build on.

== Transactional tests

These run each test in its own transaction, the recommended way to test.

=== RSpec 1

  class Spec::Example::ExampleGroup
    def execute(*args, &block)
      result = nil
      Sequel::Model.db.transaction(:rollback=>:always){result = super(*args, &block)}
      result
    end
  end

=== RSpec 2, <2.8

  class RSpec::Core::ExampleGroup
    # Setting an around filter globally doesn't appear to work in <2.8,
    # so set one up for each subclass.
    def self.inherited(subclass)
      super
      subclass.around do |example|
        Sequel::Model.db.transaction(:rollback=>:always){example.call}
      end
    end
  end

=== RSpec 2, >=2.8

  # Global around filters should work
  RSpec.configure do |c|
    c.around(:each) do |example|
      DB.transaction(:rollback=>:always){example.run}
    end
  end

=== Test::Unit

  # Must use this class as the base class for your tests
  class SequelTestCase < Test::Unit::TestCase
    def run(*args, &block)
      result = nil
      Sequel::Model.db.transaction(:rollback=>:always){result = super}
      result
    end
  end

  # Or you could override the base implementation like this
  class Test::Unit::TestCase
    alias_method :_original_run, :run

    def run(*args, &block)
      result = nil
      Sequel::Model.db.transaction(:rollback => :always) do
        result = _original_run(*args, &block)
      end
      result
    end
  end

=== MiniTest::Unit

  # Add a subclass
  # Must use this class as the base class for your tests
  class SequelTestCase < MiniTest::Unit::TestCase
    def run(*args, &block)
      result = nil
      Sequel::Model.db.transaction(:rollback=>:always){result = super}
      result
    end
  end

  # Or you could override the base implementation like this
  class MiniTest::Unit::TestCase
    alias_method :_original_run, :run

    def run(*args, &block)
      result = nil
      Sequel::Model.db.transaction(:rollback => :always) do
        result = _original_run(*args, &block)
      end
      result
    end
  end

== Transactional testing with multiple databases

You can use the Sequel.transaction method to run a transaction on multiple databases, rolling all of them back. Instead of:

  Sequel::Model.db.transaction(:rollback=>:always)

Use Sequel.transaction with an array of databases:

  Sequel.transaction([DB1, DB2, DB3], :rollback=>:always)

== Nontransactional tests

In some cases, it is not possible to use transactions. For example, if you are testing a web application that is running in a separate process, you don't have access to that process's database connections, so you can't run your examples in transactions. In that case, the best way to handle things is to clean up after each test by deleting or truncating the database tables used in the test.

The order in which you delete/truncate the tables is important if you are using referential integrity in your database (which you should be doing). If you are using referential integrity, you need to make sure to delete from tables referencing other tables before the tables that are being referenced. For example, if you have an +albums+ table with an +artist_id+ field referencing the +artists+ table, you want to delete/truncate the +albums+ table before the +artists+ table. Note that if you have cyclic references in your database, you will probably need to write your own custom cleaning code.

=== RSpec

  class Spec::Example::ExampleGroup
    after do
      [:table1, :table2].each{|x| Sequel::Model.db.from(x).truncate}
      # or
      [:table1, :table2].each{|x| Sequel::Model.db.from(x).delete}
    end
  end

=== Test::Unit

  # Must use this class as the base class for your tests
  class SequelTestCase < Test::Unit::TestCase
    def teardown
      [:table1, :table2].each{|x| Sequel::Model.db.from(x).truncate}
      # or
      [:table1, :table2].each{|x| Sequel::Model.db.from(x).delete}
    end
  end

= Testing Sequel Itself

Sequel has multiple separate test suites. All test suites run under either RSpec 1 or RSpec 2.

== rake spec

The +spec+ rake task (which is also the default rake task) runs Sequel's core and model specs. These specs use a mocked database connection, and test for specific SQL used and for generally correct behavior.
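The mocked connection these suites rely on is available to applications as well, via the mock adapter. A hedged sketch of inspecting generated SQL without a real database:

  require 'sequel'

  db = Sequel.mock
  db[:albums].where(:id=>1).all
  db.sqls # => ["SELECT * FROM albums WHERE (id = 1)"]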
== rake spec_plugin

The +spec_plugin+ rake task runs the specs for the plugins and extensions that ship with Sequel. These also use a mocked database connection, and operate very similarly to the general Sequel core and model specs.

== rake spec_core_ext

The +spec_core_ext+ rake task runs the specs for the core_extensions extension. These are run separately from the other extension tests to make sure none of the other extensions require the core_extensions.

== rake spec_bin

The +spec_bin+ rake task runs the specs for bin/sequel. These use an SQLite3 database, and require either the sqlite3 (non-JRuby) or jdbc-sqlite3 (JRuby) gem.

== rake spec_adapter (e.g. rake spec_postgres)

The spec_adapter specs run against a real database connection with nothing mocked, and test for correct results. They are slower than the standard specs, but they will catch errors that are mocked out by the default specs, as well as show issues that only occur on a certain database, adapter, or a combination of the two.

These specs are broken down into two parts. For each database, there are specific specs that only apply to that database, and these are called the adapter specs. There are also shared specs that apply to all (or almost all) databases; these are called the integration specs. For database types that don't have specific adapter tests, you can use rake spec_integration to just run the shared integration tests.

== Environment variables

Sequel often uses environment variables when testing to specify either the database to be tested or specify how testing should be done. You can also specify the databases to test by copying spec/spec_config.rb.example to spec/spec_config.rb and modifying it. See that file for details. It may be necessary to use spec_config.rb as opposed to an environment variable if your database connection cannot be specified by a connection string.

Sequel does not create test databases automatically, except for file-based databases such as SQLite/H2/HSQLDB/Derby. It's up to the user to create the test databases manually and give Sequel a valid connection string in an environment variable (or set up the connection object in spec_config.rb).

=== Connection Strings

The SEQUEL_INTEGRATION_URL environment variable specifies the Database connection URL to use for the adapter and integration specs. Additionally, when running the adapter specs, you can also use the SEQUEL_ADAPTER_URL environment variable (e.g. SEQUEL_POSTGRES_URL for spec_postgres).

=== Other

SEQUEL_COLUMNS_INTROSPECTION :: Whether to run the specs with the columns_introspection extension loaded by default
SEQUEL_NO_PENDING :: Don't mark any specs as pending, try running all specs
SKIPPED_TEST_WARN :: Warn when skipping any tests because libraries aren't available

= Thread Safety

Most Sequel usage (and all common Sequel usage) is thread safe by default. Specifically, multiple threads can operate on Database instances, Dataset instances, and Model classes concurrently without problems. In general, Database instances and Model classes are not modified after application startup, and dataset methods return modified copies of the dataset instead of mutating it.

== Connection Pool

In order to allow multiple threads to operate on the same database at the same time, Sequel uses a connection pool.
The connection pool is designed so that a thread uses a connection for the minimum amount of time, returning the connection to the pool as soon as it is done using the connection. If a thread requests a connection and the pool does not have an available connection, a new connection will be created. If the maximum number of connections in the pool has already been reached, the thread will block (actually busy-wait) until a connection is available or the connection pool timeout has elapsed (in which case a PoolTimeout error will be raised).

== Exceptions

This is a small list of things that are specifically non thread-safe. This is not an exhaustive list; there may be cases not mentioned here.

1) Model instances: Model instances are not thread-safe unless they are frozen first. Multiple threads should not operate on an unfrozen model instance concurrently.

2) Model class modifications: Model class modifications, such as adding associations and loading plugins, are not designed to be thread safe. You should not modify a class in one thread if any other thread can concurrently access it. Model subclassing is designed to be thread-safe, so you can create a model subclass in a thread and modify it safely.

3) Dataset mutation methods: Dataset mutation methods are not thread safe, so you should not call them on datasets that could be accessed by other threads. It is safe to clone the dataset first inside a thread and call mutation methods on the cloned dataset.

= Database Transactions

Sequel uses autocommit mode by default for all of its database adapters, so in general in Sequel if you want to use database transactions, you need to be explicit about it. There are a few cases where transactions are used implicitly by default:

* Dataset#import to insert many records at once
* Model#save
* Model#destroy
* Migrations if the database supports transactional schema changes
* A few model plugins

Everywhere else, it is up to you to use a database transaction if you want to.

== Basic Transaction Usage

In Sequel, the Database#transaction method should be called if you want to use a database transaction. This method must be called with a block. If the block does not raise an exception, the transaction is committed:

  DB.transaction do # BEGIN
    DB[:foo].insert(1) # INSERT
  end # COMMIT

If the block raises a Sequel::Rollback exception, the transaction is rolled back, but no exception is raised outside the block:

  DB.transaction do # BEGIN
    raise Sequel::Rollback
  end # ROLLBACK
  # no exception raised

If any other exception is raised, the transaction is rolled back, and the exception is raised outside the block:

  DB.transaction do # BEGIN
    raise ArgumentError
  end # ROLLBACK
  # ArgumentError raised

If you want Sequel::Rollback exceptions to be reraised, use the :rollback => :reraise option:

  DB.transaction(:rollback => :reraise) do # BEGIN
    raise Sequel::Rollback
  end # ROLLBACK
  # Sequel::Rollback raised

If you always want to rollback (useful for testing), use the :rollback => :always option:

  DB.transaction(:rollback => :always) do # BEGIN
    DB[:foo].insert(1) # INSERT
  end # ROLLBACK
  # no exception raised

If you want to check whether you are currently in a transaction, use the Database#in_transaction? method:

  DB.in_transaction? # false
  DB.transaction do
    DB.in_transaction?
    # true
  end

== Transaction Hooks

You can add hooks to an in progress transaction that are called after the transaction commits or rolls back:

  x = nil
  DB.transaction do
    DB.after_commit{x = 1}
    DB.after_rollback{x = 2}
    x # nil
  end
  x # 1

  x = nil
  DB.transaction do
    DB.after_commit{x = 1}
    DB.after_rollback{x = 2}
    raise Sequel::Rollback
  end
  x # 2

== Nested Transaction Calls / Savepoints

You can nest calls to transaction, which by default just reuses the existing transaction:

  DB.transaction do # BEGIN
    DB.transaction do
      DB[:foo].insert(1) # INSERT
    end
  end # COMMIT

You can use the :savepoint => true option in the inner transaction to explicitly use a savepoint (if the database supports it):

  DB.transaction do # BEGIN
    DB.transaction(:savepoint => true) do # SAVEPOINT
      DB[:foo].insert(1) # INSERT
    end # RELEASE SAVEPOINT
  end # COMMIT

If a Sequel::Rollback exception is raised inside the savepoint block, it will only roll back to the savepoint:

  DB.transaction do # BEGIN
    DB.transaction(:savepoint => true) do # SAVEPOINT
      raise Sequel::Rollback
    end # ROLLBACK TO SAVEPOINT
    # no exception raised
  end # COMMIT

Other exceptions, unless rescued inside the outer transaction block, will roll back the savepoint and the outer transactions, since they are reraised by the transaction code:

  DB.transaction do # BEGIN
    DB.transaction(:savepoint => true) do # SAVEPOINT
      raise ArgumentError
    end # ROLLBACK TO SAVEPOINT
  end # ROLLBACK
  # ArgumentError raised

== Prepared Transactions / Two-Phase Commit

Sequel supports database prepared transactions on PostgreSQL, MySQL, and H2. With prepared transactions, at the end of the transaction, the transaction is not immediately committed (it acts like a rollback). Later, you can call +commit_prepared_transaction+ to commit the transaction or +rollback_prepared_transaction+ to roll the transaction back. Prepared transactions are usually used with distributed databases to make sure all databases commit the same transaction or none of them do.

To use prepared transactions in Sequel, you provide a string as the value of the :prepare option:

  DB.transaction(:prepare => 'foo') do # BEGIN
    DB[:foo].insert(1) # INSERT
  end # PREPARE TRANSACTION 'foo'

Later, you can commit the prepared transaction:

  DB.commit_prepared_transaction('foo')

or roll the prepared transaction back:

  DB.rollback_prepared_transaction('foo')

== Transaction Isolation Levels

The SQL standard supports 4 isolation levels: READ UNCOMMITTED, READ COMMITTED, REPEATABLE READ, and SERIALIZABLE. Not all databases implement the levels as specified in the standard (or implement the levels at all), but on most databases, you can specify which transaction isolation level you want to use via the :isolation option to Database#transaction. The isolation level is specified as one of the following symbols: :uncommitted, :committed, :repeatable, and :serializable. Using this option makes Sequel use the correct transaction isolation syntax for your database:

  DB.transaction(:isolation => :serializable) do # BEGIN
    # SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
    DB[:foo].insert(1) # INSERT
  end # COMMIT

== Automatically Restarting Transactions

Sequel offers the ability to automatically restart transactions if specific types of errors are detected.
For example, if you want to automatically restart a transaction if a serialization failure is detected:

  DB.transaction(:isolation => :serializable, :retry_on=>[Sequel::SerializationFailure]) do
    ModelClass.find_or_create(:name=>'Foo')
  end

At the serializable transaction isolation level, find_or_create may raise a Sequel::SerializationFailure exception if multiple threads simultaneously run that code. With the :retry_on option set, the transaction will be automatically retried until it succeeds.

Note that automatic retrying should not be used unless the entire transaction block is idempotent, as otherwise it can cause non-idempotent behavior to execute multiple times. For example, with the following code:

  DB.transaction(:isolation => :serializable, :retry_on=>[Sequel::SerializationFailure]) do
    logger.info 'Ensuring existence of ModelClass with name Foo'
    ModelClass.find_or_create(:name=>'Foo')
  end

The logger.info method will be called multiple times if there is a serialization failure.

The :num_retries option can be used to set the maximum number of times to retry. It is set to 5 times by default.

= Model Validations

This guide is based on http://guides.rubyonrails.org/activerecord_validations_callbacks.html

== Overview

This guide is designed to teach you how to use Sequel::Model's validation support. It attempts to explain how Sequel's validation support works, what validations are useful for, and how to use the +validation_helpers+ plugin to add specific types of validations to your models.

== Why Validations?

Validations are primarily useful for associating error messages to display to the user with specific attributes on the model. It is also possible to use them to enforce data integrity for model instances, but that's not a recommended use unless the only way to modify the database is through model instances, or you have complex data integrity requirements that aren't possible to specify via database-level constraints.

== Data Integrity

Data integrity is best handled by the database itself. For example, if you have a date column that should never contain a NULL value, the column should be specified in the database as NOT NULL. If you have an integer column that should only have values from 1 to 10, there should be a CHECK constraint that ensures that the value of that column is between 1 and 10. And if you have a varchar column where the length of the entries should be between 2 and 255, you should be setting the size of the varchar column to 255, and using a CHECK constraint to ensure that all values have at least two characters. A sketch of such a table definition is shown at the end of this section.

Unfortunately, sometimes there are situations where that is not possible. For example, if you are using MySQL and don't have control over the database configuration, it's possible that if you attempt to insert a string with 300 characters into a varchar(255) field, then MySQL may just silently truncate it for you, instead of raising an error. In that case, it may be necessary to use a model validation to enforce the database integrity.

Also, in some cases you may have data integrity requirements that are difficult to enforce via database constraints, especially if you are targeting multiple database types.

Finally, validations are generally easier to write than database constraints, so if data integrity isn't of great importance, using validations to provide minimal data integrity is probably fine.
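For reference, here is a hedged sketch of how those database-level rules might be declared with Sequel's schema DSL; the table and constraint names are illustrative, and char_length assumes a database such as PostgreSQL that provides that function:

  DB.create_table(:albums) do
    primary_key :id
    String :name, :size=>255, :null=>false
    Date :release_date, :null=>false
    Integer :rating
    constraint(:rating_range, :rating=>1..10)  # CHECK rating between 1 and 10
    constraint(:name_min_length){char_length(name) >= 2}
  end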
== Usage

Regardless of whether you are using validations for data integrity or just for error messages, the usage is the same. Whenever you attempt to save a model instance, before sending the INSERT or UPDATE query to the database, Sequel::Model will attempt to validate the instance by calling +validate+. If +validate+ does not add any errors to the object, the object is considered valid, and valid? will return true. If +validate+ adds any errors to the object, valid? will return false, and the save will either raise a Sequel::ValidationFailed exception (the default), or return nil (if +raise_on_save_failure+ is false).

By validating the object before sending the database query, Sequel attempts to ensure that invalid objects are not saved in the database. However, if you are not enforcing the same validations in the database via constraints, it's possible that invalid data can get added to the database via some other method. This leads to odd cases such as retrieving a model object from the database, not making any changes to it, attempting to save it, and having the save raise an error.

== Skipping Validations

Sequel::Model uses the +save+ method to save model objects, and all saving of model objects passes through the +save+ method. This means that all saving of model objects goes through the validation process.

The only way to skip validations when saving a model object is to pass the :validate => false option to +save+. If you use that option, +save+ will not attempt to validate the object before saving it.

Note that it's always possible to update the instance's database row without using +save+, by using a dataset to update it. Validations will only be run if you call +save+ on the model object, or another model method that calls +save+. For example, the +create+ class method instantiates a new instance of the model, and then calls +save+, so it validates the object. However, the +insert+ class method is a dataset method that just inserts the raw hash into the database, so it doesn't validate the object.

== valid? and +validate+

Sequel::Model uses the valid? method to check whether or not a model instance is valid. This method should not be overridden. Instead, the +validate+ method should be overridden to add validations to the model:

  class Album < Sequel::Model
    def validate
      super
      errors.add(:name, 'cannot be empty') if !name || name.empty?
    end
  end

  Album.new.valid? # false
  Album.new(:name=>'').valid? # false
  Album.new(:name=>'RF').valid? # true

If the valid? method returns false, you can call the +errors+ method to get an instance of Sequel::Model::Errors describing the errors on the model:

  a = Album.new
  # => #<Album @values={}>
  a.valid?
  # => false
  a.errors
  # => {:name=>["cannot be empty"]}

You may notice that the +errors+ method appears to return a hash. That's because Sequel::Model::Errors is a subclass of Hash.

Note that calling the +errors+ method before the valid? method will result in +errors+ being empty:

  Album.new.errors
  # => {}

So just remember that you shouldn't check +errors+ until after you call valid?.

Sequel::Model::Errors has some helper methods that make it easy to get an array of all of the instance's errors, or for checking for errors on a specific attribute. These will be covered later in this guide.

== +validation_helpers+

While Sequel::Model does provide a validations framework, it does not define any built-in validation helper methods that you can call. However, Sequel ships with a plugin called +validation_helpers+ that handles most basic validation needs.
So instead of specifying validations like this: class Album < Sequel::Model def validate super errors.add(:name, 'cannot be empty') if !name || name.empty? errors.add(:name, 'is already taken') if name && new? && Album[:name=>name] errors.add(:website, 'cannot be empty') if !website || website.empty? errors.add(:website, 'is not a valid URL') unless website =~ /\Ahttps?:\/\// end end You can call simple methods such as: class Album < Sequel::Model plugin :validation_helpers def validate super validates_presence [:name, :website] validates_unique :name validates_format /\Ahttps?:\/\//, :website, :message=>'is not a valid URL' end end Other than +validates_unique+, which has its own API, the methods defined by +validation_helpers+ have one of the following two APIs: (atts, opts={}):: For methods such as +validates_presence+, which do not take an additional argument. (arg, atts, opts={}):: For methods such as +validates_format+, which take an additional argument. For both of these APIs, +atts+ is either a column symbol or array of column symbols, and +opts+ is an optional options hash. The following methods are provided by +validation_helpers+: === +validates_presence+ This is probably the most commonly used helper method, which checks if the specified attributes are not blank. In general, if an object responds to blank?, it calls the method to determine if the object is blank. Otherwise, nil is considered blank, empty strings or strings that just contain whitespace are blank, and objects that respond to empty? and return true are considered blank. All other objects are considered non-blank for the purposes of +validates_presence+. This means that +validates_presence+ is safe to use on boolean columns where you want to ensure that either true or false is used, but not NULL. class Album < Sequel::Model def validate super validates_presence [:name, :website, :debut_album] end end === +validates_not_null+ This is similar to +validates_presence+, but only checks for NULL/nil values, allowing other blank objects such as empty strings or strings with just whitespace. === +validates_format+ +validates_format+ is used to ensure that the string value of the specified attributes matches the specified regular expression. It's useful for checking that fields such as email addresses, URLs, UPC codes, ISBN codes, and the like, are in a specific format. It can also be used to validate that only certain characters are used in the string. class Album < Sequel::Model def validate super validates_format /\A\d\d\d-\d-\d{7}-\d-\d\z/, :isbn validates_format /\A[0-9a-zA-Z:' ]+\z/, :name end end === +validates_exact_length+, +validates_min_length+, +validates_max_length+, +validates_length_range+ These methods all deal with ensuring that the length of the specified attribute matches the criteria specified by the first argument to the method. +validates_exact_length+ is for checking that the length of the attribute is equal to that value, +validates_min_length+ is for checking that the length of the attribute is greater than or equal to that value, +validates_max_length+ is for checking that the length of the attribute is less than or equal to that value, and +validates_length_range+ is for checking that the length of the attribute falls within the given value, which should be a range or another object that responds to include?.
class Album < Sequel::Model def validate super validates_exact_length 17, :isbn validates_min_length 3, :name validates_max_length 100, :name validates_length_range 3..100, :name end end === +validates_integer+, +validates_numeric+ These methods check that the specified attributes can be valid integers or valid floats. +validates_integer+ tests the attribute value using Kernel.Integer and +validates_numeric+ tests the attribute using Kernel.Float. If the Kernel methods raise an exception, the validation fails, otherwise it succeeds. class Album < Sequel::Model def validate super validates_integer :copies_sold validates_numeric :replaygain end end === +validates_includes+ +validates_includes+ checks that the specified attributes are included in the first argument to the method, which is usually an array, but can be any object that responds to include?. class Album < Sequel::Model def validate super validates_includes [1, 2, 3, 4, 5], :rating end end === +validates_type+ +validates_type+ checks that the specified attributes are instances of the class specified in the first argument. The class can be specified as the class itself, or as a string or symbol with the class name, or as an array of classes. class Album < Sequel::Model def validate super validates_type String, [:name, :website] validates_type :Artist, :artist validates_type [String, Integer], :foo end end === +validates_schema_types+ +validates_schema_types+ uses the database metadata for the model's table to determine which ruby type(s) should be used for the given database type, and calls +validates_type+ with that ruby type. It's designed to be used with the raise_on_typecast_failure = false setting (the default starting in Sequel 4). When raise_on_typecast_failure = false, you can call +validates_schema_types+ with all columns. If any of those columns has a value that doesn't match the type that Sequel expects, it's probably because the column was set and Sequel was not able to typecast it correctly, which means it probably isn't valid. For example, let's say that you want to check that a couple of columns contain valid dates: class Album < Sequel::Model self.raise_on_typecast_failure = false def validate super validates_schema_types [:release_date, :record_date] end end album = Album.new album.release_date = 'banana' album.release_date # => 'banana' album.record_date = '2010-05-17' album.record_date # => #<Date: 2010-05-17 ...> album.valid? # => false album.errors # => {:release_date=>["is not a valid date"]} For web applications, you usually want the raise_on_typecast_failure = false setting, so that you can accept all of the input without raising an error, and then present the user with all error messages. Without the setting, if the user submits any invalid data, Sequel will immediately raise an error. +validates_schema_types+ is helpful because it allows you to check for typecasting errors on columns, and provides a good default error message stating that the attribute is not of the expected type. === +validates_unique+ +validates_unique+ has a similar but different API than the other +validation_helpers+ methods. It takes an arbitrary number of arguments, which should be column symbols or arrays of column symbols. If any argument is a symbol, Sequel sets up a unique validation for just that column. If any argument is an array of symbols, Sequel sets up a unique validation for the combination of the columns.
This means that you get different behavior depending on whether you call the object with an array or with separate arguments. For example: validates_unique(:name, :artist_id) will set up 2 separate uniqueness validations. It will make it so that no two albums can have the same name, and that each artist can only be associated with one album. In general, that's probably not what you want. You probably want it so that two albums can have the same name, unless they are by the same artist. To do that, you need to use an array: validates_unique([:name, :artist_id]) That sets up a single uniqueness validation for the combination of the fields. You can mix and match the two approaches. For example, if all albums should have a unique UPC, and no artist can have duplicate album names: validates_unique(:upc, [:name, :artist_id]) +validates_unique+ also accepts a block to scope the uniqueness constraint. For example, if you want to ensure that all active albums have a unique name, but inactive albums can duplicate the name: validates_unique(:name){|ds| ds.where(:active)} If you provide a block, it is called with the dataset to use for the uniqueness check, which you can then filter to scope the uniqueness validation to a subset of the model's dataset. Additionally, you can also include an optional options hash as the last argument. Unlike the other validations, the options hash for +validates_unique+ only checks for two options: :message :: The message to use :only_if_modified :: Only check the uniqueness if the object is new or one of the columns has been modified. +validates_unique+ is the only method in +validation_helpers+ that checks with the database. Attempting to validate uniqueness outside of the database suffers from a race condition, so any time you want to add a uniqueness validation, you should make sure to add a uniqueness constraint or unique index on the underlying database table. See the {"Migrations and Schema Modification" guide}[link:files/doc/migration_rdoc.html] for details on how to do that. == +validation_helpers+ Options All +validation_helpers+ methods except +validates_unique+ accept the following options: === :message The most commonly used option, used to override the default validation message. Can be either a string or a proc. If a string, it is used directly. If a proc, the proc is called and should return a string. If the validation method takes an argument before the array of attributes, that argument is passed as an argument to the proc. The exception is the +validates_not_string+ method, which doesn't take an argument, but passes the schema type symbol as the argument to the proc. class Album < Sequel::Model def validate super validates_presence :copies_sold, :message=>'was not given' validates_min_length 3, :name, :message=>proc{|s| "should be more than #{s} characters"} end end === :allow_nil The :allow_nil option skips the validation if the attribute value is nil or if the attribute is not present. It's commonly used when you have a +validates_presence+ method already on the attribute, and don't want multiple validation errors for the same attribute: class Album < Sequel::Model def validate super validates_presence :copies_sold validates_integer :copies_sold, :allow_nil=>true end end Without the :allow_nil option to +validates_integer+, if the copies_sold attribute was nil, you would get two separate validation errors, instead of a single validation error.
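A minimal sketch of the resulting behavior, assuming the Album model defined in the example above:

  album = Album.new          # copies_sold is nil
  album.valid?               # => false
  album.errors.on(:copies_sold)
  # => ["is not present"]
  # Without :allow_nil=>true on validates_integer, the array would also
  # contain "is not a number".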
=== :allow_blank The :allow_blank option is similar to the :allow_nil option, but instead of just skipping the attribute for nil values, it skips the attribute for all blank values. For example, let's say that albums can have a website. If they have one, it should be formatted like a URL, but it can be nil or an empty string if they don't have one. class Album < Sequel::Model def validate super validates_format /\Ahttps?:\/\//, :website, :allow_blank=>true end end a = Album.new a.website = '' a.valid? # true === :allow_missing The :allow_missing option is different from the :allow_nil option, in that instead of checking if the attribute value is nil, it checks if the attribute is present in the model instance's values hash. :allow_nil will skip the validation when the attribute is in the values hash and has a nil value and when the attribute is not in the values hash. :allow_missing will only skip the validation when the attribute is not in the values hash. If the attribute is in the values hash but has a nil value, :allow_missing will not skip it. The purpose of this option is to work correctly with missing columns when inserting or updating records. Sequel only sends the attributes in the values hash when doing an insert or update. If the attribute is not present in the values hash, Sequel doesn't specify it, so the database will use the table's default value when inserting the record, or not modify the column when updating it. This is different from having an attribute in the values hash with a value of nil, which Sequel will send as NULL. If your database table has a non-NULL default, this may be a good option to use. You don't want to use :allow_nil, because if the attribute is in the values hash with a nil value, Sequel will attempt to insert a NULL value into the database, instead of using the database's default. == Conditional Validation Because Sequel uses the +validate+ instance method to handle validation, making validations conditional is easy as it works exactly the same as ruby's standard conditionals. For example, if you only want to validate an attribute when creating an object: validates_presence :name if new? If you only want to validate the attribute when updating an existing object: validates_integer :copies_sold unless new? Let's say you only want to make a validation conditional on the status of the object: validates_presence :name if status_id > 1 validates_integer :copies_sold if status_id > 3 You can use all the standard ruby conditional expressions, such as +case+: case status_id when 1 validates_presence :name when 2 validates_presence [:name, :artist_id] when 3 validates_presence [:name, :artist_id, :copies_sold] end You can make the input to some validations dependent on the values of another attribute: validates_min_length(status_id > 2 ? 5 : 10, [:name]) validates_presence(status_id < 2 ? :name : [:name, :artist_id]) Basically, there's no special syntax you have to use for conditional validations. Just handle conditionals the way you would in other ruby code.
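Putting those pieces together, here is a minimal sketch of a complete +validate+ method using such conditionals (the name, copies_sold, artist_id, and status_id columns are hypothetical):

  class Album < Sequel::Model
    def validate
      super
      validates_presence :name if new?
      validates_integer :copies_sold unless new?
      validates_presence [:name, :artist_id] if status_id && status_id > 1
    end
  end

Because these are plain ruby conditionals, they can depend on any instance state, not just column values.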
== Default Error Messages These are the default error messages for all of the helper methods in +validation_helpers+: :exact_length :: is not #{arg} characters :format :: is invalid :includes :: is not in range or set: #{arg.inspect} :integer :: is not a number :length_range :: is too short or too long :max_length :: is longer than #{arg} characters :min_length :: is shorter than #{arg} characters :not_null :: is not present :numeric :: is not a number :schema_types :: is not a valid #{schema_type} :type :: is not a #{arg} :presence :: is not present :unique :: is already taken == Modifying the Default Options It's easy to modify the default options used by +validation_helpers+. All of the default options are stored in the Sequel::Plugins::ValidationHelpers::DEFAULT_OPTIONS hash. So you just need to modify that hash to change the default options. One way to do that is to use merge! to update the hash: Sequel::Plugins::ValidationHelpers::DEFAULT_OPTIONS.merge!( :presence=>{:message=>'cannot be empty'}, :includes=>{:message=>'invalid option', :allow_nil=>true}, :max_length=>{:message=>lambda{|i| "cannot be more than #{i} characters"}, :allow_nil=>true}, :format=>{:message=>'contains invalid characters', :allow_nil=>true}) This updates the default messages that will be used for the presence, includes, max_length, and format validations, and sets the default value of the :allow_nil option to true for the includes, max_length, and format validations. You can also override the private Sequel::Model#default_validation_helpers_options method to change these settings on a per-model or even per-instance basis. == Custom Validations Just as the first validation example showed, you aren't limited to the validation methods defined by +validation_helpers+. Inside the +validate+ method, you can add your own validations by adding to the instance's errors using errors.add whenever an attribute is not valid: class Album < Sequel::Model def validate super errors.add(:release_date, 'cannot be before record date') if release_date < record_date end end Just like conditional validations, with custom validations you are just using the standard ruby conditionals, and calling errors.add with the column symbol and the error message if you detect invalid data. It's fairly easy to create your own custom validations that can be reused in all your models. For example, if there is a common need to validate that one column in the model comes before another column: class Sequel::Model def validates_after(col1, col2) errors.add(col1, "cannot be before #{col2}") if send(col1) < send(col2) end end class Album < Sequel::Model def validate super validates_after(:release_date, :record_date) end end == Setting Validations for All Models Let's say you want to add some default validations that apply to all of your model classes. It's fairly easy to do: override the +validate+ method in Sequel::Model and add some validations to it, and if you override +validate+ in your model classes, just make sure to call +super+. class Sequel::Model def self.string_columns @string_columns ||= columns.reject{|c| db_schema[c][:type] != :string} end def validate super validates_format(/\A[^\x00-\x08\x0e-\x1f\x7f\x81\x8d\x8f\x90\x9d]*\z/n, model.string_columns, :message=>"contains invalid characters") end end This will make sure that all string columns in the model are validated to make sure they don't contain any invalid characters.
Just remember that if you override the +validate+ method in your model classes, you need to call +super+: class Album < Sequel::Model def validate super # Important! validates_presence :name end end If you forget to call +super+, the validations that you defined in Sequel::Model will not be enforced. It's a good idea to call super whenever you override one of Sequel::Model's methods, unless you specifically do not want the default behavior. == Sequel::Model::Errors As mentioned earlier, Sequel::Model::Errors is a subclass of Hash with a few special methods, the most common of which are described here: === +add+ +add+ is the method used to add error messages for a given column. It takes the column symbol as the first argument and the error message as the second argument: errors.add(:name, 'is not valid') === +on+ +on+ is a method usually used after validation has been completed, to determine if there were any errors on a given attribute. It takes the column symbol, and returns an array of error messages if there were any, or nil if not: errors.on(:name) If you want to make some validations dependent upon the results of other validations, you may want to use +on+ inside your +validate+ method: validates_integer(:release_date) if errors.on(:record_date) Here, you don't care about validating the release date if there were validation errors for the record date. === +full_messages+ +full_messages+ returns an array of error messages for the object. It's commonly called after validation to get a list of error messages to display to the user: album.errors # => {:name=>["cannot be empty"]} album.errors.full_messages # => ["name cannot be empty"] === +count+ +count+ returns the total number of error messages in the errors. album.errors.count # => 1 == Other Validation Plugins === +constraint_validations+ Sequel ships with a +constraint_validations+ plugin and extension that allow you to set up constraints when creating your database tables, and have model validations automatically created that mirror those constraints. === +auto_validations+ The +auto_validations+ plugin uses the NOT NULL and type information obtained from parsing the database schema, and the unique index information from parsing the database's index information, to automatically set up not_null, schema_types, and unique validations. If you don't require customizing validation messages on a per-column basis, it can DRY up a lot of validation code. === +validation_class_methods+ Sequel ships with the +validation_class_methods+ plugin, which uses class methods instead of instance methods to define validations. It exists mostly for legacy compatibility, but it is still supported. ruby-sequel-4.1.1/doc/virtual_rows.rdoc000066400000000000000000000237051220156535500201610ustar00rootroot00000000000000= Virtual Row Blocks Dataset methods where, order, and select all take blocks that are referred to as virtual row blocks. Many other dataset methods pass the blocks they are given into one of those three methods, so there are actually many Sequel::Dataset methods that take virtual row blocks. == Why Virtual Rows Virtual Rows were created to work around the issue that some parts of Sequel's standard DSL could not be used on ruby 1.9. For example, the following Sequel code historically worked on ruby 1.8, but not ruby 1.9: dataset.where(:a > :b[:c]) # WHERE a > b(c) This code does not work on ruby 1.9 for two reasons.
First, Symbol#> (like other inequality methods) is already defined in ruby 1.9, so Sequel does not override it to return an SQL inequality expression. Second, Symbol#[] is already defined on ruby 1.9, so Sequel does not override it to return an SQL function expression. It's possible to use Sequel's DSL to represent such expressions, but it is a little verbose: dataset.where(Sequel.expr(:a) > Sequel.function(:b, :c)) # WHERE a > b(c) The virtual row DSL makes such code more concise: dataset.where{a > b(c)} == Regular Procs vs Instance Evaled Procs Virtual row blocks behave differently depending on whether the block accepts an argument. If the block accepts an argument, it is called with an instance of Sequel::SQL::VirtualRow. If it does not accept an argument, it is evaluated in the context of an instance of Sequel::SQL::VirtualRow. ds = DB[:items] # Regular proc ds.where{|o| o.column > 1} # WHERE column > 1 # Instance-evaled proc ds.where{column > 1} # WHERE column > 1 If you aren't familiar with the difference between regular blocks and instance evaled blocks, you should probably consult a general ruby reference, but briefly, with regular procs, methods called without an explicit receiver inside the proc call the method on the receiver in the surrounding scope, while instance evaled procs call the method on the receiver of the instance_eval call. However, in both cases, local variables available in the surrounding scope will be available inside the proc. If that doesn't make sense, maybe this example will help: def self.a 42 end b = 32 # Regular proc ds.where{|o| o.c > a - b} # WHERE c > 10 # Instance-evaled proc ds.where{c > a - b} # WHERE c > (a - 32) There are two related differences here. First is the usage of o.c vs +c+, and second is the difference in the use of +a+. In the regular proc, you couldn't call +c+ without an explicit receiver in the proc, unless the self of the surrounding scope responded to it. For +a+, note how ruby calls the method on the receiver of the surrounding scope in the regular proc, which returns an integer, and does the subtraction before Sequel gets access to it. In the instance evaled proc, calling +a+ without a receiver calls the a method on the VirtualRow instance. For +b+, note that it operates the same in both cases, as it is a local variable. Basically, the choice for whether to use a regular proc or an instance evaled proc is completely up to you. The same things can be accomplished with both. Instance evaled procs tend to produce shorter code, but by modifying the scope can be more difficult for a new user to understand. That being said, I usually use instance evaled procs unless I need to call methods on the receiver of the surrounding scope inside the proc. == Local Variables vs Method Calls If you have a method that accepts 0 arguments and has the same name as a local variable, you can call it with () to differentiate the method call from the local variable access. This is mostly useful in instance_evaled procs: b = 32 ds.where{b() > b} # WHERE b > 32 == VirtualRow Methods VirtualRow is a class that returns SQL::Identifiers, SQL::QualifiedIdentifiers, SQL::Functions, or SQL::WindowFunctions depending on how it is called. == SQL::Identifiers - Regular columns SQL::Identifiers can be thought of as regular column references in SQL, not qualified by any table.
You get an SQL::Identifier if the method is called without a block or arguments, and doesn't have a double underscore in the method name: ds.where{|o| o.column > 1} ds.where{column > 1} # WHERE column > 1 == SQL::QualifiedIdentifiers - Qualified columns SQL::QualifiedIdentifiers can be thought of as column references in SQL that are qualified to a specific table. You get an SQL::QualifiedIdentifier if the method is called without a block or arguments, and has a double underscore in the method name: ds.where{|o| o.table__column > 1} ds.where{table__column > 1} # WHERE table.column > 1 Using the double underscore for SQL::QualifiedIdentifiers was done to make usage very similar to using symbols, which also translate the double underscore into a qualified column. == SQL::Functions - SQL function calls SQL::Functions can be thought of as function calls in SQL. You get a simple function call if you call a method with arguments and without a block: ds.where{|o| o.function(1) > 1} ds.where{function(1) > 1} # WHERE function(1) > 1 To call a SQL function with multiple arguments, just use those arguments in your function call: ds.where{|o| o.function(1, o.a) > 1} ds.where{function(1, a) > 1} # WHERE function(1, a) > 1 If the SQL function does not accept any arguments, you need to provide an empty block to the method to distinguish it from a call that will produce an SQL::Identifier: ds.select{|o| o.version{}} ds.select{version{}} # SELECT version() To use the SQL wildcard (*) as the sole argument in a function call (most often used with the count function), you should provide :* as the sole argument to the method, and provide an empty block to the method: ds.select{|o| o.count(:*){}} ds.select{count(:*){}} # SELECT count(*) To append the DISTINCT keyword before the method arguments, you need to make :distinct the first argument of the method call, and provide an empty block to the method: ds.select{|o| o.count(:distinct, o.col1){}} ds.select{count(:distinct, col1){}} # SELECT count(DISTINCT col1) To use multiple columns with the DISTINCT keyword, use multiple arguments in the method call: ds.select{|o| o.count(:distinct, o.col1, o.col2){}} ds.select{count(:distinct, col1, col2){}} # SELECT count(DISTINCT col1, col2) == SQL::WindowFunctions - SQL window function calls SQL::WindowFunctions can be thought of as calls to SQL window functions. Not all databases support them, but they are very helpful for certain types of queries. To use them, you need to make :over the first argument of the method call, with an optional hash as the second argument, and provide an empty block to the method. Here are some examples of use: ds.select{|o| o.rank(:over){}} ds.select{rank(:over){}} # SELECT rank() OVER () ds.select{|o| o.count(:over, :*=>true){}} ds.select{count(:over, :*=>true){}} # SELECT count(*) OVER () ds.select{|o| o.sum(:over, :args=>o.col1, :partition=>o.col2, :order=>o.col3){}} ds.select{sum(:over, :args=>col1, :partition=>col2, :order=>col3){}} # SELECT sum(col1) OVER (PARTITION BY col2 ORDER BY col3) == Operators VirtualRows use method_missing to handle almost all method calls. However, they have special handling of some operator methods to make certain things easier. The operators all use a prefix form. 
=== Math Operators The standard +, -, *, and / mathematical operators are defined: ds.select{|o| o.-(1, o.a).as(:b)} ds.select{self.-(1, a).as(:b)} # SELECT (1 - a) AS b === Boolean Operators The & and | methods are defined to use AND and OR: ds.where{|o| o.&({:a=>:b}, :c)} ds.where{self.&({:a=>:b}, :c)} # WHERE ((a = b) AND c) The ~ method is defined to do inversion: ds.where{|o| o.~({:a=>1, :b=>2})} ds.where{self.~({:a=>1, :b=>2})} # WHERE ((a != 1) OR (b != 2)) === Inequality Operators The standard >, <, >=, and <= inequality operators are defined: ds.where{|o| o.>(1, :c)} ds.where{self.>(1, :c)} # WHERE (1 > c) == Literal Strings The backtick operator can be used inside an instance-evaled virtual row block to create a literal string: ds.where{a > `some SQL`} # WHERE (a > some SQL) You can use this in a regular virtual row block too, but it doesn't look as nice: ds.where{|o| o.>(:a, o.`('some SQL'))} == Returning multiple values It's common when using select and order virtual row blocks to want to return multiple values. If you want to do that, you just need to return an array: ds.select{|o| [o.column1, o.sum(o.column2).as(o.sum)]} ds.select{[column1, sum(column2).as(sum)]} # SELECT column1, sum(column2) AS sum Note that if you forget the array brackets, you'll end up with a syntax error: # Invalid ruby syntax ds.select{|o| o.column1, o.sum(o.column2).as(o.sum)} ds.select{column1, sum(column2).as(sum)} == Alternative Description of the VirtualRow method call rules * If a block is given: * The block is currently not called. This may change in a future version. * If there are no arguments, an SQL::Function is created with the name of the method used, and no arguments. * If the first argument is :*, an SQL::Function is created with a single wildcard argument (*). * If the first argument is :distinct, an SQL::Function is created with the keyword DISTINCT prefacing all remaining arguments. * If the first argument is :over, the second argument if provided should be a hash of options to pass to SQL::Window. The options hash can also contain :*=>true to use a wildcard argument as the function argument, or :args=>... to specify an array of arguments to use as the function arguments. * If a block is not given: * If there are arguments, an SQL::Function is returned with the name of the method used and the arguments given. * If there are no arguments and the method contains a double underscore, split on the double underscore and return an SQL::QualifiedIdentifier with the table and column. * Otherwise, create an SQL::Identifier with the name of the method. ruby-sequel-4.1.1/lib/000077500000000000000000000000001220156535500145445ustar00rootroot00000000000000ruby-sequel-4.1.1/lib/sequel.rb000066400000000000000000000000271220156535500163660ustar00rootroot00000000000000require 'sequel/model' ruby-sequel-4.1.1/lib/sequel/000077500000000000000000000000001220156535500160425ustar00rootroot00000000000000ruby-sequel-4.1.1/lib/sequel/adapters/000077500000000000000000000000001220156535500176455ustar00rootroot00000000000000ruby-sequel-4.1.1/lib/sequel/adapters/ado.rb000066400000000000000000000124151220156535500207400ustar00rootroot00000000000000require 'win32ole' module Sequel # The ADO adapter provides connectivity to ADO databases in Windows.
module ADO class Database < Sequel::Database DISCONNECT_ERROR_RE = /Communication link failure/ set_adapter_scheme :ado # In addition to the usual database options, # the following options have an effect: # # :command_timeout :: Sets the time in seconds to wait while attempting # to execute a command before cancelling the attempt and generating # an error. Specifically, it sets the ADO CommandTimeout property. # If this property is not set, the default of 30 seconds is used. # :driver :: The driver to use in the ADO connection string. If not provided, a default # of "SQL Server" is used. # :conn_string :: The full ADO connection string. If this is provided, # the usual options are ignored. # :provider :: Sets the Provider of this ADO connection (for example, "SQLOLEDB"). # If you don't specify a provider, the default one used by WIN32OLE # has major problems, such as creating a new native database connection # for every query, which breaks things such as temporary tables. # # Pay special attention to the :provider option, as without specifying a provider, # many things will be broken. The SQLNCLI10 provider appears to work well if you # are connecting to Microsoft SQL Server, but it is not the default as that would # break backwards compatibility. def connect(server) opts = server_opts(server) s = opts[:conn_string] || "driver=#{opts[:driver]};server=#{opts[:host]};database=#{opts[:database]}#{";uid=#{opts[:user]};pwd=#{opts[:password]}" if opts[:user]}" handle = WIN32OLE.new('ADODB.Connection') handle.CommandTimeout = opts[:command_timeout] if opts[:command_timeout] handle.Provider = opts[:provider] if opts[:provider] handle.Open(s) handle end def disconnect_connection(conn) conn.Close rescue WIN32OLERuntimeError nil end # Just execute so it doesn't attempt to return the number of rows modified. def execute_ddl(sql, opts=OPTS) execute(sql, opts) end # Just execute so it doesn't attempt to return the number of rows modified. def execute_insert(sql, opts=OPTS) execute(sql, opts) end # Use pass by reference in WIN32OLE to get the number of affected rows, # unless a provider is in use (since some providers don't seem to # return the number of affected rows, but the default provider appears # to). def execute_dui(sql, opts=OPTS) return super if opts[:provider] synchronize(opts[:server]) do |conn| begin log_yield(sql){conn.Execute(sql, 1)} WIN32OLE::ARGV[1] rescue ::WIN32OLERuntimeError => e raise_error(e) end end end def execute(sql, opts=OPTS) synchronize(opts[:server]) do |conn| begin r = log_yield(sql){conn.Execute(sql)} yield(r) if block_given? rescue ::WIN32OLERuntimeError => e raise_error(e) end end nil end private def adapter_initialize case @opts[:conn_string] when /Microsoft\.(Jet|ACE)\.OLEDB/io Sequel.require 'adapters/ado/access' extend Sequel::ADO::Access::DatabaseMethods self.dataset_class = ADO::Access::Dataset else @opts[:driver] ||= 'SQL Server' case @opts[:driver] when 'SQL Server' Sequel.require 'adapters/ado/mssql' extend Sequel::ADO::MSSQL::DatabaseMethods self.dataset_class = ADO::MSSQL::Dataset set_mssql_unicode_strings end end super end # The ADO adapter's default provider doesn't support transactions, since it # creates a new native connection for each query. So Sequel only attempts # to use transactions if an explicit :provider is given.
def begin_transaction(conn, opts=OPTS) super if @opts[:provider] end def commit_transaction(conn, opts=OPTS) super if @opts[:provider] end def database_error_classes [::WIN32OLERuntimeError] end def disconnect_error?(e, opts) super || (e.is_a?(::WIN32OLERuntimeError) && e.message =~ DISCONNECT_ERROR_RE) end def rollback_transaction(conn, opts=OPTS) super if @opts[:provider] end end class Dataset < Sequel::Dataset Database::DatasetClass = self def fetch_rows(sql) execute(sql) do |s| columns = cols = s.Fields.extend(Enumerable).map{|column| output_identifier(column.Name)} @columns = columns s.getRows.transpose.each do |r| row = {} cols.each{|c| row[c] = r.shift} yield row end unless s.eof end end # ADO returns nil for all delete and update statements. def provides_accurate_rows_matched? false end end end end ruby-sequel-4.1.1/lib/sequel/adapters/ado/000077500000000000000000000000001220156535500204105ustar00rootroot00000000000000ruby-sequel-4.1.1/lib/sequel/adapters/ado/access.rb000066400000000000000000000250711220156535500222030ustar00rootroot00000000000000Sequel.require 'adapters/shared/access' Sequel.require 'adapters/utils/split_alter_table' module Sequel module ADO # Database and Dataset instance methods for Access specific # support via ADO. module Access class AdoSchema QUERY_TYPE = { :columns => 4, :indexes => 12, :tables => 20, :views => 23, :foreign_keys => 27 } attr_reader :type, :criteria def initialize(type, crit) @type = QUERY_TYPE[type] @criteria = Array(crit) end class Column DATA_TYPE = { 2 => "SMALLINT", 3 => "INTEGER", 4 => "REAL", 5 => "DOUBLE", 6 => "MONEY", 7 => "DATETIME", 11 => "BIT", 14 => "DECIMAL", 16 => "TINYINT", 17 => "BYTE", 72 => "GUID", 128 => "BINARY", 130 => "TEXT", 131 => "DECIMAL", 201 => "TEXT", 205 => "IMAGE" } def initialize(row) @row = row end def [](col) @row[col] end def allow_null self["IS_NULLABLE"] end def default self["COLUMN_DEFAULT"] end def db_type t = DATA_TYPE[self["DATA_TYPE"]] if t == "DECIMAL" && precision t + "(#{precision.to_i},#{(scale || 0).to_i})" elsif t == "TEXT" && maximum_length && maximum_length > 0 t + "(#{maximum_length.to_i})" else t end end def precision self["NUMERIC_PRECISION"] end def scale self["NUMERIC_SCALE"] end def maximum_length self["CHARACTER_MAXIMUM_LENGTH"] end end end module DatabaseMethods extend Sequel::Database::ResetIdentifierMangling include Sequel::Access::DatabaseMethods include Sequel::Database::SplitAlterTable DECIMAL_TYPE_RE = /decimal/io LAST_INSERT_ID = "SELECT @@IDENTITY".freeze # Remove cached schema after altering a table, since otherwise it can be cached # incorrectly in the rename column case. def alter_table(name, *) super remove_cached_schema(name) nil end # Access doesn't let you disconnect if inside a transaction, so # try rolling back an existing transaction first.
def disconnect_connection(conn) conn.RollbackTrans rescue nil super end def execute_insert(sql, opts=OPTS) synchronize(opts[:server]) do |conn| begin r = log_yield(sql){conn.Execute(sql)} res = log_yield(LAST_INSERT_ID){conn.Execute(LAST_INSERT_ID)} res.getRows.transpose.each{|r| return r.shift} rescue ::WIN32OLERuntimeError => e raise_error(e) end end nil end def tables(opts=OPTS) m = output_identifier_meth ado_schema_tables.map {|tbl| m.call(tbl['TABLE_NAME'])} end def views(opts=OPTS) m = output_identifier_meth ado_schema_views.map {|tbl| m.call(tbl['TABLE_NAME'])} end # Note OpenSchema returns compound indexes as multiple rows def indexes(table_name,opts=OPTS) m = output_identifier_meth idxs = ado_schema_indexes(table_name).inject({}) do |memo, idx| unless idx["PRIMARY_KEY"] index = memo[m.call(idx["INDEX_NAME"])] ||= { :columns=>[], :unique=>idx["UNIQUE"] } index[:columns] << m.call(idx["COLUMN_NAME"]) end memo end idxs end # Note OpenSchema returns compound foreign key relationships as multiple rows def foreign_key_list(table, opts=OPTS) m = output_identifier_meth fks = ado_schema_foreign_keys(table).inject({}) do |memo, fk| name = m.call(fk['FK_NAME']) specs = memo[name] ||= { :columns => [], :table => m.call(fk['PK_TABLE_NAME']), :key => [], :deferrable => fk['DEFERRABILITY'], :name => name, :on_delete => fk['DELETE_RULE'], :on_update => fk['UPDATE_RULE'] } specs[:columns] << m.call(fk['FK_COLUMN_NAME']) specs[:key] << m.call(fk['PK_COLUMN_NAME']) memo end fks.values end private # Emulate rename_column by adding the column, copying data from the old # column, and dropping the old column. def alter_table_sql(table, op) case op[:op] when :rename_column unless sch = op[:schema] raise(Error, "can't find existing schema entry for #{op[:name]}") unless sch = op[:schema] || schema(table).find{|c| c.first == op[:name]} sch = sch.last end [ alter_table_sql(table, :op=>:add_column, :name=>op[:new_name], :default=>sch[:ruby_default], :type=>sch[:db_type], :null=>sch[:allow_null]), from(table).update_sql(op[:new_name]=>op[:name]), alter_table_sql(table, :op=>:drop_column, :name=>op[:name]) ] when :set_column_null, :set_column_default raise(Error, "can't find existing schema entry for #{op[:name]}") unless sch = op[:schema] || schema(table).find{|c| c.first == op[:name]} sch = sch.last sch = if op[:op] == :set_column_null sch.merge(:allow_null=>op[:null]) else sch.merge(:ruby_default=>op[:default]) end [ alter_table_sql(table, :op=>:rename_column, :name=>op[:name], :new_name=>:sequel_access_backup_column, :schema=>sch), alter_table_sql(table, :op=>:rename_column, :new_name=>op[:name], :name=>:sequel_access_backup_column, :schema=>sch) ] else super end end def begin_transaction(conn, opts=OPTS) log_yield('Transaction.begin'){conn.BeginTrans} end def commit_transaction(conn, opts=OPTS) log_yield('Transaction.commit'){conn.CommitTrans} end def rollback_transaction(conn, opts=OPTS) log_yield('Transaction.rollback'){conn.RollbackTrans} end def schema_column_type(db_type) case db_type.downcase when 'bit' :boolean when 'byte', 'guid' :integer when 'image' :blob else super end end def schema_parse_table(table_name, opts) m = output_identifier_meth(opts[:dataset]) m2 = input_identifier_meth(opts[:dataset]) tn = m2.call(table_name.to_s) idxs = ado_schema_indexes(tn) ado_schema_columns(tn).map {|row| specs = { :allow_null => row.allow_null, :db_type => row.db_type, :default => row.default, :primary_key => !!idxs.find {|idx| idx["COLUMN_NAME"] == row["COLUMN_NAME"] && idx["PRIMARY_KEY"] }, :type => if 
row.db_type =~ DECIMAL_TYPE_RE && row.scale == 0 :integer else schema_column_type(row.db_type) end, :ado_type => row["DATA_TYPE"] } specs[:default] = nil if blank_object?(specs[:default]) specs[:allow_null] = specs[:allow_null] && !specs[:primary_key] [ m.call(row["COLUMN_NAME"]), specs ] } end def ado_schema_tables rows=[] fetch_ado_schema(:tables, [nil,nil,nil,'TABLE']) do |row| rows << row end rows end def ado_schema_views rows=[] fetch_ado_schema(:views, [nil,nil,nil]) do |row| rows << row end rows end def ado_schema_indexes(table_name) rows=[] fetch_ado_schema(:indexes, [nil,nil,nil,nil,table_name.to_s]) do |row| rows << row end rows end def ado_schema_columns(table_name) rows=[] fetch_ado_schema(:columns, [nil,nil,table_name.to_s,nil]) do |row| rows << AdoSchema::Column.new(row) end rows.sort!{|a,b| a["ORDINAL_POSITION"] <=> b["ORDINAL_POSITION"]} end def ado_schema_foreign_keys(table_name) rows=[] fetch_ado_schema(:foreign_keys, [nil,nil,nil,nil,nil,table_name.to_s]) do |row| rows << row end rows.sort!{|a,b| a["ORDINAL"] <=> b["ORDINAL"]} end def fetch_ado_schema(type, criteria=[]) execute_open_ado_schema(type, criteria) do |s| cols = s.Fields.extend(Enumerable).map {|c| c.Name} s.getRows.transpose.each do |r| row = {} cols.each{|c| row[c] = r.shift} yield row end unless s.eof end end # This is like execute() in that it yields an ADO RecordSet, except # instead of an SQL interface there's this OpenSchema call # cf. http://msdn.microsoft.com/en-us/library/ee275721(v=bts.10) # def execute_open_ado_schema(type, criteria=[]) ado_schema = AdoSchema.new(type, criteria) synchronize(opts[:server]) do |conn| begin r = log_yield("OpenSchema #{type.inspect}, #{criteria.inspect}") { if ado_schema.criteria.empty? conn.OpenSchema(ado_schema.type) else conn.OpenSchema(ado_schema.type, ado_schema.criteria) end } yield(r) if block_given? rescue ::WIN32OLERuntimeError => e raise_error(e) end end nil end end class Dataset < ADO::Dataset include Sequel::Access::DatasetMethods end end end end ruby-sequel-4.1.1/lib/sequel/adapters/ado/mssql.rb000066400000000000000000000046751220156535500221100ustar00rootroot00000000000000Sequel.require 'adapters/shared/mssql' module Sequel module ADO # Database and Dataset instance methods for MSSQL specific # support via ADO. module MSSQL module DatabaseMethods extend Sequel::Database::ResetIdentifierMangling include Sequel::MSSQL::DatabaseMethods # Query to use to get the number of rows affected by an update or # delete query. ROWS_AFFECTED = "SELECT @@ROWCOUNT AS AffectedRows" # Issue a separate query to get the rows modified. ADO appears to # use pass by reference with an integer variable, which is obviously # not supported directly in ruby, and I'm not aware of a workaround. def execute_dui(sql, opts=OPTS) return super unless @opts[:provider] synchronize(opts[:server]) do |conn| begin log_yield(sql){conn.Execute(sql)} res = log_yield(ROWS_AFFECTED){conn.Execute(ROWS_AFFECTED)} res.getRows.transpose.each{|r| return r.shift} rescue ::WIN32OLERuntimeError => e raise_error(e) end end end private # The ADO adapter's default provider doesn't support transactions, since it # creates a new native connection for each query. So Sequel only attempts # to use transactions if an explicit :provider is given. 
def begin_transaction(conn, opts=OPTS) super if @opts[:provider] end def commit_transaction(conn, opts=OPTS) super if @opts[:provider] end def rollback_transaction(conn, opts=OPTS) super if @opts[:provider] end end class Dataset < ADO::Dataset include Sequel::MSSQL::DatasetMethods # Use a nasty hack of multiple SQL statements in the same call and # having the last one return the most recently inserted id. This # is necessary as ADO's default :provider uses a separate native # connection for each query. def insert(*values) return super if @opts[:sql] with_sql("SET NOCOUNT ON; #{insert_sql(*values)}; SELECT CAST(SCOPE_IDENTITY() AS INTEGER)").single_value end # If you use a better :provider option for the database, you can get an # accurate number of rows matched. def provides_accurate_rows_matched? !!db.opts[:provider] end end end end end ruby-sequel-4.1.1/lib/sequel/adapters/amalgalite.rb000066400000000000000000000142541220156535500222760ustar00rootroot00000000000000require 'amalgalite' Sequel.require 'adapters/shared/sqlite' module Sequel # Top level module for holding all Amalgalite-related modules and classes # for Sequel. module Amalgalite # Type conversion map class for Sequel's use of Amalgalite class SequelTypeMap < ::Amalgalite::TypeMaps::DefaultMap methods_handling_sql_types.delete('string') methods_handling_sql_types.merge!( 'datetime' => %w'datetime timestamp', 'time' => %w'time', 'float' => ['float', 'double', 'real', 'double precision'], 'decimal' => %w'numeric decimal money' ) # Store the related database object, in order to be able to correctly # handle the database timezone. def initialize(db) @db = db end # Return blobs as instances of Sequel::SQL::Blob instead of # Amalgalite::Blob def blob(s) SQL::Blob.new(s) end # Return numeric/decimal types as instances of BigDecimal # instead of Float def decimal(s) BigDecimal.new(s) end # Return datetime types as instances of Sequel.datetime_class def datetime(s) @db.to_application_timestamp(s) end def time(s) Sequel.string_to_time(s) end # Don't raise an error if the value is a string and the declared # type doesn't match a known type, just return the value. def result_value_of(declared_type, value) if value.is_a?(::Amalgalite::Blob) SQL::Blob.new(value.to_s) elsif value.is_a?(String) && declared_type (meth = self.class.sql_to_method(declared_type.downcase)) ? send(meth, value) : value else super end end end # Database class for SQLite databases used with Sequel and the # amalgalite driver. class Database < Sequel::Database include ::Sequel::SQLite::DatabaseMethods set_adapter_scheme :amalgalite # Mimic the file:// uri, by having 2 preceding slashes specify a relative # path, and 3 preceding slashes specify an absolute path. def self.uri_to_options(uri) # :nodoc: { :database => (uri.host.nil? && uri.path == '/') ? nil : "#{uri.host}#{uri.path}" } end private_class_method :uri_to_options # Connect to the database. Since SQLite is a file based database, # the only options available are :database (to specify the database # name), and :timeout, to specify how long to wait for the database to # be available if it is locked, given in milliseconds (default is 5000).
def connect(server) opts = server_opts(server) opts[:database] = ':memory:' if blank_object?(opts[:database]) db = ::Amalgalite::Database.new(opts[:database]) db.busy_handler(::Amalgalite::BusyTimeout.new(opts.fetch(:timeout, 5000)/50, 50)) db.type_map = SequelTypeMap.new(self) connection_pragmas.each{|s| log_yield(s){db.execute_batch(s)}} db end # Amalgalite is just the SQLite database without a separate SQLite installation. def database_type :sqlite end # Run the given SQL with the given arguments. Returns nil. def execute_ddl(sql, opts=OPTS) _execute(sql, opts){|conn| log_yield(sql){conn.execute_batch(sql)}} nil end # Run the given SQL with the given arguments and return the number of changed rows. def execute_dui(sql, opts=OPTS) _execute(sql, opts){|conn| log_yield(sql){conn.execute_batch(sql)}; conn.row_changes} end # Run the given SQL with the given arguments and return the last inserted row id. def execute_insert(sql, opts=OPTS) _execute(sql, opts){|conn| log_yield(sql){conn.execute_batch(sql)}; conn.last_insert_rowid} end # Run the given SQL with the given arguments and yield each row. def execute(sql, opts=OPTS) _execute(sql, opts) do |conn| begin yield(stmt = log_yield(sql){conn.prepare(sql)}) ensure stmt.close if stmt end end end # Run the given SQL with the given arguments and return the first value of the first row. def single_value(sql, opts=OPTS) _execute(sql, opts){|conn| log_yield(sql){conn.first_value_from(sql)}} end private # Yield an available connection. Rescue # any Amalgalite::Errors and turn them into DatabaseErrors. def _execute(sql, opts) begin synchronize(opts[:server]){|conn| yield conn} rescue ::Amalgalite::Error, ::Amalgalite::SQLite3::Error => e raise_error(e) end end # The Amalgalite adapter does not need the pool to convert exceptions. # Also, force the max connections to 1 if a memory database is being # used, as otherwise each connection gets a separate database. def connection_pool_default_options o = super.dup # Default to only a single connection if a memory database is used, # because otherwise each connection will get a separate database o[:max_connections] = 1 if @opts[:database] == ':memory:' || blank_object?(@opts[:database]) o end # Both main error classes that Amalgalite raises def database_error_classes [::Amalgalite::Error, ::Amalgalite::SQLite3::Error] end end # Dataset class for SQLite datasets that use the amalgalite driver. class Dataset < Sequel::Dataset include ::Sequel::SQLite::DatasetMethods Database::DatasetClass = self # Yield a hash for each row in the dataset. def fetch_rows(sql) execute(sql) do |stmt| @columns = cols = stmt.result_fields.map{|c| output_identifier(c)} col_count = cols.size stmt.each do |result| row = {} col_count.times{|i| row[cols[i]] = result[i]} yield row end end end private # Quote the string using the adapter instance method.
def literal_string_append(sql, v) db.synchronize(@opts[:server]){|c| sql << c.quote(v)} end end end end ruby-sequel-4.1.1/lib/sequel/adapters/cubrid.rb000066400000000000000000000074061220156535500214510ustar00rootroot00000000000000require 'cubrid' Sequel.require 'adapters/shared/cubrid' module Sequel module Cubrid CUBRID_TYPE_PROCS = { ::Cubrid::DATE => lambda{|t| Date.new(t.year, t.month, t.day)}, ::Cubrid::TIME => lambda{|t| SQLTime.create(t.hour, t.min, t.sec)}, 21 => lambda{|s| s.to_i} } class Database < Sequel::Database include Sequel::Cubrid::DatabaseMethods ROW_COUNT = "SELECT ROW_COUNT()".freeze LAST_INSERT_ID = "SELECT LAST_INSERT_ID()".freeze set_adapter_scheme :cubrid def connect(server) opts = server_opts(server) conn = ::Cubrid.connect( opts[:database], opts[:host] || 'localhost', opts[:port] || 30000, opts[:user] || 'public', opts[:password] || '' ) conn.auto_commit = true conn end def server_version @server_version ||= synchronize{|c| c.server_version} end def execute(sql, opts=OPTS) synchronize(opts[:server]) do |conn| r = log_yield(sql) do begin conn.query(sql) rescue => e raise_error(e) end end if block_given? yield(r) else begin case opts[:type] when :dui # This is cubrid's API, but it appears to be completely broken, # giving StandardError: ERROR: CCI, -18, Invalid request handle #r.affected_rows # Work around bugs by using the ROW_COUNT function. begin r2 = conn.query(ROW_COUNT) r2.each{|a| return a.first.to_i} ensure r2.close if r2 end when :insert begin r2 = conn.query(LAST_INSERT_ID) r2.each{|a| return a.first.to_i} ensure r2.close if r2 end end ensure r.close end end end end def execute_ddl(sql, opts=OPTS) execute(sql, opts.merge(:type=>:ddl)) end def execute_dui(sql, opts=OPTS) execute(sql, opts.merge(:type=>:dui)) end def execute_insert(sql, opts=OPTS) execute(sql, opts.merge(:type=>:insert)) end private def begin_transaction(conn, opts=OPTS) log_yield(TRANSACTION_BEGIN){conn.auto_commit = false} end def commit_transaction(conn, opts=OPTS) log_yield(TRANSACTION_COMMIT){conn.commit} end def database_error_classes [StandardError] end def remove_transaction(conn, committed) conn.auto_commit = true ensure super end # This doesn't actually work, as the cubrid ruby driver # does not implement transactions correctly. def rollback_transaction(conn, opts=OPTS) log_yield(TRANSACTION_ROLLBACK){conn.rollback} end end class Dataset < Sequel::Dataset include Sequel::Cubrid::DatasetMethods COLUMN_INFO_NAME = "name".freeze COLUMN_INFO_TYPE = "type_name".freeze Database::DatasetClass = self def fetch_rows(sql) execute(sql) do |stmt| begin procs = cols = stmt.column_info.map{|c| [output_identifier(c[COLUMN_INFO_NAME]), CUBRID_TYPE_PROCS[c[COLUMN_INFO_TYPE]]]} @columns = cols.map{|c| c.first} stmt.each do |r| row = {} cols.zip(r).each{|(k, p), v| row[k] = (v && p) ? p.call(v) : v} yield row end ensure stmt.close end end self end end end end ruby-sequel-4.1.1/lib/sequel/adapters/db2.rb000066400000000000000000000167011220156535500206460ustar00rootroot00000000000000require 'db2/db2cli' Sequel.require %w'shared/db2', 'adapters' module Sequel module DB2 @convert_smallint_to_bool = true # Underlying error raised by Sequel, since ruby-db2 doesn't # use exceptions. class DB2Error < StandardError end class << self # Whether to convert smallint values to bool, true by default. # Can also be overridden per dataset. attr_accessor :convert_smallint_to_bool end tt = Class.new do def boolean(s) !s.to_i.zero? 
end def date(s) Date.new(s.year, s.month, s.day) end def time(s) Sequel::SQLTime.create(s.hour, s.minute, s.second) end end.new # Hash holding type translation methods, used by Dataset#fetch_rows. DB2_TYPES = { :boolean => tt.method(:boolean), DB2CLI::SQL_BLOB => ::Sequel::SQL::Blob.method(:new), DB2CLI::SQL_TYPE_DATE => tt.method(:date), DB2CLI::SQL_TYPE_TIME => tt.method(:time), DB2CLI::SQL_DECIMAL => ::BigDecimal.method(:new) } DB2_TYPES[DB2CLI::SQL_CLOB] = DB2_TYPES[DB2CLI::SQL_BLOB] class Database < Sequel::Database include DatabaseMethods set_adapter_scheme :db2 TEMPORARY = 'GLOBAL TEMPORARY '.freeze rc, NullHandle = DB2CLI.SQLAllocHandle(DB2CLI::SQL_HANDLE_ENV, DB2CLI::SQL_NULL_HANDLE) # Hash of conversion procs for converting DB2 column values to ruby objects attr_reader :conversion_procs def connect(server) opts = server_opts(server) dbc = checked_error("Could not allocate database connection"){DB2CLI.SQLAllocHandle(DB2CLI::SQL_HANDLE_DBC, NullHandle)} checked_error("Could not connect to database"){DB2CLI.SQLConnect(dbc, opts[:database], opts[:user], opts[:password])} dbc end def disconnect_connection(conn) DB2CLI.SQLDisconnect(conn) DB2CLI.SQLFreeHandle(DB2CLI::SQL_HANDLE_DBC, conn) end def execute(sql, opts=OPTS, &block) synchronize(opts[:server]){|conn| log_connection_execute(conn, sql, &block)} end def execute_insert(sql, opts=OPTS) synchronize(opts[:server]) do |conn| log_connection_execute(conn, sql) sql = "SELECT IDENTITY_VAL_LOCAL() FROM SYSIBM.SYSDUMMY1" log_connection_execute(conn, sql) do |sth| name, buflen, datatype, size, digits, nullable = checked_error("Could not describe column"){DB2CLI.SQLDescribeCol(sth, 1, 256)} if DB2CLI.SQLFetch(sth) != DB2CLI::SQL_NO_DATA_FOUND v, _ = checked_error("Could not get data"){DB2CLI.SQLGetData(sth, 1, datatype, size)} if v.is_a?(String) return v.to_i else return nil end end end end end ERROR_MAP = {} %w'SQL_INVALID_HANDLE SQL_STILL_EXECUTING SQL_ERROR'.each do |s| ERROR_MAP[DB2CLI.const_get(s)] = s end def check_error(rc, msg) case rc when DB2CLI::SQL_SUCCESS, DB2CLI::SQL_SUCCESS_WITH_INFO, DB2CLI::SQL_NO_DATA_FOUND nil when DB2CLI::SQL_INVALID_HANDLE, DB2CLI::SQL_STILL_EXECUTING e = DB2Error.new("#{ERROR_MAP[rc]}: #{msg}") e.set_backtrace(caller) raise_error(e, :disconnect=>true) else e = DB2Error.new("#{ERROR_MAP[rc] || "Error code #{rc}"}: #{msg}") e.set_backtrace(caller) raise_error(e, :disconnect=>true) end end def checked_error(msg) rc, *ary = yield check_error(rc, msg) ary.length <= 1 ?
ary.first : ary end def to_application_timestamp_db2(v) to_application_timestamp(v.to_s) end private def adapter_initialize @conversion_procs = DB2_TYPES.dup @conversion_procs[DB2CLI::SQL_TYPE_TIMESTAMP] = method(:to_application_timestamp_db2) end def database_error_classes [DB2Error] end def begin_transaction(conn, opts=OPTS) log_yield(TRANSACTION_BEGIN){DB2CLI.SQLSetConnectAttr(conn, DB2CLI::SQL_ATTR_AUTOCOMMIT, DB2CLI::SQL_AUTOCOMMIT_OFF)} set_transaction_isolation(conn, opts) end def remove_transaction(conn, committed) DB2CLI.SQLSetConnectAttr(conn, DB2CLI::SQL_ATTR_AUTOCOMMIT, DB2CLI::SQL_AUTOCOMMIT_ON) ensure super end def rollback_transaction(conn, opts=OPTS) log_yield(TRANSACTION_ROLLBACK){DB2CLI.SQLEndTran(DB2CLI::SQL_HANDLE_DBC, conn, DB2CLI::SQL_ROLLBACK)} end def commit_transaction(conn, opts=OPTS) log_yield(TRANSACTION_COMMIT){DB2CLI.SQLEndTran(DB2CLI::SQL_HANDLE_DBC, conn, DB2CLI::SQL_COMMIT)} end def log_connection_execute(conn, sql) sth = checked_error("Could not allocate statement"){DB2CLI.SQLAllocHandle(DB2CLI::SQL_HANDLE_STMT, conn)} begin checked_error("Could not execute statement: #{sql}"){log_yield(sql){DB2CLI.SQLExecDirect(sth, sql)}} if block_given? yield(sth) else checked_error("Could not get RPC"){DB2CLI.SQLRowCount(sth)} end ensure checked_error("Could not free statement"){DB2CLI.SQLFreeHandle(DB2CLI::SQL_HANDLE_STMT, sth)} end end # Convert smallint type to boolean if convert_smallint_to_bool is true def schema_column_type(db_type) if DB2.convert_smallint_to_bool && db_type =~ /smallint/i :boolean else super end end end class Dataset < Sequel::Dataset include DatasetMethods Database::DatasetClass = self MAX_COL_SIZE = 256 # Whether to convert smallint to boolean arguments for this dataset. # Defaults to the DB2 module setting. def convert_smallint_to_bool defined?(@convert_smallint_to_bool) ? @convert_smallint_to_bool : (@convert_smallint_to_bool = DB2.convert_smallint_to_bool) end # Override the default DB2.convert_smallint_to_bool setting for this dataset. 
attr_writer :convert_smallint_to_bool def fetch_rows(sql) execute(sql) do |sth| db = @db i = 1 column_info = get_column_info(sth) cols = column_info.map{|c| c.at(1)} @columns = cols errors = [DB2CLI::SQL_NO_DATA_FOUND, DB2CLI::SQL_ERROR] until errors.include?(rc = DB2CLI.SQLFetch(sth)) db.check_error(rc, "Could not fetch row") row = {} column_info.each do |i, c, t, s, pr| v, _ = db.checked_error("Could not get data"){DB2CLI.SQLGetData(sth, i, t, s)} row[c] = if v == DB2CLI::Null nil elsif pr pr.call(v) else v end end yield row end end self end private def get_column_info(sth) db = @db column_count = db.checked_error("Could not get number of result columns"){DB2CLI.SQLNumResultCols(sth)} convert = convert_smallint_to_bool cps = db.conversion_procs (1..column_count).map do |i| name, buflen, datatype, size, digits, nullable = db.checked_error("Could not describe column"){DB2CLI.SQLDescribeCol(sth, i, MAX_COL_SIZE)} pr = if datatype == DB2CLI::SQL_SMALLINT && convert && size <= 5 && digits <= 1 cps[:boolean] else cps[datatype] end [i, output_identifier(name), datatype, size, pr] end end end end end ruby-sequel-4.1.1/lib/sequel/adapters/dbi.rb000066400000000000000000000051131220156535500207300ustar00rootroot00000000000000require 'dbi' module Sequel module DBI class Database < Sequel::Database set_adapter_scheme :dbi DBI_ADAPTERS = { :ado => "ADO", :db2 => "DB2", :frontbase => "FrontBase", :interbase => "InterBase", :msql => "Msql", :mysql => "Mysql", :odbc => "ODBC", :oracle => "Oracle", :pg => "pg", :proxy => "Proxy", :sqlite => "SQLite", :sqlrelay => "SQLRelay" } # Converts a uri to an options hash. These options are then passed # to a newly created database object. def self.uri_to_options(uri) # :nodoc: database = (m = /\/(.*)/.match(uri.path)) && (m[1]) if m = /dbi-(.+)/.match(uri.scheme) adapter = DBI_ADAPTERS[m[1].to_sym] || m[1] database = "#{adapter}:dbname=#{database}" end { :user => uri.user, :password => uri.password, :host => uri.host, :port => uri.port, :database => database } end private_class_method :uri_to_options def connect(server) opts = server_opts(server) dbname = opts[:database] if dbname !~ /^DBI:/ then dbname = "DBI:#{dbname}" [:host, :port].each{|sym| dbname += ";#{sym}=#{opts[sym]}" unless blank_object?(opts[sym])} end ::DBI.connect(dbname, opts[:user], opts[:password]) end def disconnect_connection(c) c.disconnect end def execute(sql, opts=OPTS) synchronize(opts[:server]) do |conn| r = log_yield(sql){conn.execute(sql)} yield(r) if block_given? r end end def execute_dui(sql, opts=OPTS) synchronize(opts[:server]){|conn| log_yield(sql){conn.do(sql)}} end private def adapter_initialize case @opts[:db_type] when 'mssql' Sequel.require 'adapters/shared/mssql' extend Sequel::MSSQL::DatabaseMethods extend_datasets Sequel::MSSQL::DatasetMethods end end end class Dataset < Sequel::Dataset Database::DatasetClass = self def fetch_rows(sql) execute(sql) do |s| begin columns = cols = s.column_names.map{|c| output_identifier(c)} @columns = columns s.fetch do |r| row = {} cols.each{|c| row[c] = r.shift} yield row end ensure s.finish rescue nil end end self end end end end ruby-sequel-4.1.1/lib/sequel/adapters/do.rb000066400000000000000000000133771220156535500206070ustar00rootroot00000000000000require 'data_objects' module Sequel # Module holding the DataObjects support for Sequel. DataObjects is a # ruby library with a standard API for accessing databases. 
  #
  # The DataObjects adapter currently supports PostgreSQL, MySQL, and
  # SQLite:
  #
  # * Sequel.connect('do:sqlite3::memory:')
  # * Sequel.connect('do:postgres://user:password@host/database')
  # * Sequel.connect('do:mysql://user:password@host/database')
  module DataObjects
    # Contains procs keyed on sub adapter type that extend the
    # given database object so it supports the correct database type.
    DATABASE_SETUP = {:postgres=>proc do |db|
        require 'do_postgres'
        Sequel.require 'adapters/do/postgres'
        db.extend(Sequel::DataObjects::Postgres::DatabaseMethods)
        db.extend_datasets Sequel::Postgres::DatasetMethods
      end,
      :mysql=>proc do |db|
        require 'do_mysql'
        Sequel.require 'adapters/do/mysql'
        db.extend(Sequel::DataObjects::MySQL::DatabaseMethods)
        db.dataset_class = Sequel::DataObjects::MySQL::Dataset
      end,
      :sqlite3=>proc do |db|
        require 'do_sqlite3'
        Sequel.require 'adapters/do/sqlite'
        db.extend(Sequel::DataObjects::SQLite::DatabaseMethods)
        db.extend_datasets Sequel::SQLite::DatasetMethods
        db.set_integer_booleans
      end
    }

    # DataObjects uses its own internal connection pooling in addition to the
    # pooling that Sequel uses.  You should make sure that you don't set
    # the connection pool size to more than 8 for a
    # Sequel::DataObjects::Database object, or hack DataObjects (or Extlib) to
    # use a pool size at least as large as the pool size being used by Sequel.
    class Database < Sequel::Database
      DISCONNECT_ERROR_RE = /terminating connection due to administrator command/

      set_adapter_scheme :do

      # Setup a DataObjects::Connection to the database.
      def connect(server)
        setup_connection(::DataObjects::Connection.new(uri(server_opts(server))))
      end

      def disconnect_connection(conn)
        conn.dispose
      end

      # Execute the given SQL.  If a block is given, the DataObjects::Reader
      # created is yielded to it.  A block should not be provided unless a
      # SELECT statement is being used (or something else that returns rows).
      # Otherwise, the return value is the insert id if opts[:type] is :insert,
      # or the number of affected rows, otherwise.
      def execute(sql, opts=OPTS)
        synchronize(opts[:server]) do |conn|
          begin
            command = conn.create_command(sql)
            res = log_yield(sql){block_given? ? command.execute_reader : command.execute_non_query}
          rescue ::DataObjects::Error => e
            raise_error(e)
          end
          if block_given?
            begin
              yield(res)
            ensure
              res.close if res
            end
          elsif opts[:type] == :insert
            res.insert_id
          else
            res.affected_rows
          end
        end
      end

      # Execute the SQL on this database, returning the number of affected
      # rows.
      def execute_dui(sql, opts=OPTS)
        execute(sql, opts)
      end

      # Execute the SQL on this database, returning the primary key of the
      # table being inserted to.
      def execute_insert(sql, opts=OPTS)
        execute(sql, opts.merge(:type=>:insert))
      end

      # Return the subadapter type for this database, i.e. sqlite3 for
      # do:sqlite3::memory:.
      def subadapter
        uri.split(":").first
      end

      # Return the DataObjects URI for the Sequel URI, removing the do:
      # prefix.
      def uri(opts=OPTS)
        opts = @opts.merge(opts)
        (opts[:uri] || opts[:url]).sub(/\Ado:/, '')
      end

      private

      # Call the DATABASE_SETUP proc directly after initialization,
      # so the object always uses sub adapter specific code.  Also,
      # raise an error immediately if the connection doesn't have a
      # uri, since DataObjects requires one.
      def adapter_initialize
        raise(Error, "No connection string specified") unless uri
        if prok = DATABASE_SETUP[subadapter.to_sym]
          prok.call(self)
        end
      end

      # Method to call on a statement object to execute SQL that does
      # not return any rows.
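      # This hook appears to be consumed by the generic connection code,
      # roughly along these lines (a sketch, using names from this file):
      #
      #   conn.create_command(sql).send(connection_execute_method)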
def connection_execute_method :execute_non_query end # dataobjects uses the DataObjects::Error class as the main error class. def database_error_classes [::DataObjects::Error] end # Recognize DataObjects::ConnectionError instances as disconnect errors. def disconnect_error?(e, opts) super || (e.is_a?(::DataObjects::Error) && (e.is_a?(::DataObjects::ConnectionError) || e.message =~ DISCONNECT_ERROR_RE)) end # Execute SQL on the connection by creating a command first def log_connection_execute(conn, sql) log_yield(sql){conn.create_command(sql).execute_non_query} end # Allow extending the given connection when it is first created. # By default, just returns the connection. def setup_connection(conn) conn end end # Dataset class for Sequel::DataObjects::Database objects. class Dataset < Sequel::Dataset Database::DatasetClass = self # Execute the SQL on the database and yield the rows as hashes # with symbol keys. def fetch_rows(sql) execute(sql) do |reader| cols = @columns = reader.fields.map{|f| output_identifier(f)} while(reader.next!) do h = {} cols.zip(reader.values).each{|k, v| h[k] = v} yield h end end self end end end end ruby-sequel-4.1.1/lib/sequel/adapters/do/000077500000000000000000000000001220156535500202475ustar00rootroot00000000000000ruby-sequel-4.1.1/lib/sequel/adapters/do/mysql.rb000066400000000000000000000035731220156535500217510ustar00rootroot00000000000000Sequel.require 'adapters/shared/mysql' module Sequel module DataObjects # Database and Dataset instance methods for MySQL specific # support via DataObjects. module MySQL # Database instance methods for MySQL databases accessed via DataObjects. module DatabaseMethods extend Sequel::Database::ResetIdentifierMangling include Sequel::MySQL::DatabaseMethods private # The database name for the given database. Need to parse it out # of the connection string, since the DataObjects does no parsing on the # given connection string by default. def database_name (m = /\/(.*)/.match(URI.parse(uri).path)) && m[1] end # Recognize the tinyint(1) column as boolean. def schema_column_type(db_type) db_type =~ /\Atinyint\(1\)/ ? :boolean : super end # Apply the connectiong setting SQLs for every new connection. def setup_connection(conn) mysql_connection_setting_sqls.each{|sql| log_yield(sql){conn.create_command(sql).execute_non_query}} super end end # Dataset class for MySQL datasets accessed via DataObjects. class Dataset < DataObjects::Dataset include Sequel::MySQL::DatasetMethods APOS = Dataset::APOS APOS_RE = Dataset::APOS_RE DOUBLE_APOS = Dataset::DOUBLE_APOS # The DataObjects MySQL driver uses the number of rows actually modified in the update, # instead of the number of matched by the filter. def provides_accurate_rows_matched? false end private # do_mysql sets NO_BACKSLASH_ESCAPES, so use standard SQL string escaping def literal_string_append(sql, s) sql << APOS << s.gsub(APOS_RE, DOUBLE_APOS) << APOS end end end end end ruby-sequel-4.1.1/lib/sequel/adapters/do/postgres.rb000066400000000000000000000021041220156535500224370ustar00rootroot00000000000000Sequel.require 'adapters/shared/postgres' module Sequel Postgres::CONVERTED_EXCEPTIONS << ::DataObjects::Error module DataObjects # Adapter, Database, and Dataset support for accessing a PostgreSQL # database via DataObjects. module Postgres # Methods to add to Database instances that access PostgreSQL via # DataObjects. 
module DatabaseMethods extend Sequel::Database::ResetIdentifierMangling include Sequel::Postgres::DatabaseMethods # Add the primary_keys and primary_key_sequences instance variables, # so we can get the correct return values for inserted rows. def self.extended(db) super db.send(:initialize_postgres_adapter) end private # Extend the adapter with the DataObjects PostgreSQL AdapterMethods def setup_connection(conn) conn = super(conn) connection_configuration_sqls.each{|sql| log_yield(sql){conn.create_command(sql).execute_non_query}} conn end end end end end ruby-sequel-4.1.1/lib/sequel/adapters/do/sqlite.rb000066400000000000000000000016511220156535500221000ustar00rootroot00000000000000Sequel.require 'adapters/shared/sqlite' module Sequel module DataObjects # Database and Dataset support for SQLite databases accessed via DataObjects. module SQLite # Instance methods for SQLite Database objects accessed via DataObjects. module DatabaseMethods extend Sequel::Database::ResetIdentifierMangling include Sequel::SQLite::DatabaseMethods private # Default to a single connection for a memory database. def connection_pool_default_options o = super uri == 'sqlite3::memory:' ? o.merge(:max_connections=>1) : o end # Execute the connection pragmas on the connection def setup_connection(conn) connection_pragmas.each do |s| com = conn.create_command(s) log_yield(s){com.execute_non_query} end super end end end end end ruby-sequel-4.1.1/lib/sequel/adapters/firebird.rb000066400000000000000000000053521220156535500217650ustar00rootroot00000000000000require 'fb' Sequel.require 'adapters/shared/firebird' module Sequel # The Sequel Firebird adapter requires the ruby fb driver located at # http://github.com/wishdev/fb. module Firebird class Database < Sequel::Database include Sequel::Firebird::DatabaseMethods set_adapter_scheme :firebird DISCONNECT_ERRORS = /Unsuccessful execution caused by a system error that precludes successful execution of subsequent statements/ def connect(server) opts = server_opts(server) Fb::Database.new( :database => "#{opts[:host]}:#{opts[:database]}", :username => opts[:user], :password => opts[:password]).connect end def disconnect_connection(conn) begin conn.close rescue Fb::Error nil end end def execute(sql, opts=OPTS) begin synchronize(opts[:server]) do |conn| if conn.transaction_started && !_trans(conn) conn.rollback raise DatabaseDisconnectError, "transaction accidently left open, rolling back and disconnecting" end r = log_yield(sql){conn.execute(sql)} yield(r) if block_given? r end rescue Fb::Error => e raise_error(e, :disconnect=>DISCONNECT_ERRORS.match(e.message)) end end private # Add the primary_keys instance variable so we can get the correct return values for inserted rows. def adapter_initialize @primary_keys = {} end def begin_transaction(conn, opts=OPTS) log_yield(TRANSACTION_BEGIN) do begin conn.transaction rescue Fb::Error => e conn.rollback raise_error(e, :disconnect=>true) end end end def commit_transaction(conn, opts=OPTS) log_yield(TRANSACTION_COMMIT){conn.commit} end def database_error_classes [Fb::Error] end def rollback_transaction(conn, opts=OPTS) log_yield(TRANSACTION_ROLLBACK){conn.rollback} end end # Dataset class for Firebird datasets class Dataset < Sequel::Dataset include Sequel::Firebird::DatasetMethods Database::DatasetClass = self # Yield all rows returned by executing the given SQL and converting # the types. 
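      # fetch_rows is the primitive behind Dataset#each, so in practice
      # (table name hypothetical):
      #
      #   DB[:employees].each do |row|
      #     row # => a hash with symbol keys, e.g. {:id=>1, :name=>'A'}
      #   end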
def fetch_rows(sql) execute(sql) do |s| begin @columns = columns = s.fields.map{|c| output_identifier(c.name)} s.fetchall.each do |r| h = {} r.zip(columns).each{|v, c| h[c] = v} yield h end ensure s.close end end self end end end end ruby-sequel-4.1.1/lib/sequel/adapters/ibmdb.rb000066400000000000000000000346011220156535500212530ustar00rootroot00000000000000require 'ibm_db' Sequel.require 'adapters/shared/db2' module Sequel module IBMDB @convert_smallint_to_bool = true class << self # Whether to convert smallint values to bool, true by default. # Can also be overridden per dataset. attr_accessor :convert_smallint_to_bool end tt = Class.new do def boolean(s) !s.to_i.zero? end def int(s) s.to_i end end.new # Hash holding type translation methods, used by Dataset#fetch_rows. DB2_TYPES = { :boolean => tt.method(:boolean), :int => tt.method(:int), :blob => ::Sequel::SQL::Blob.method(:new), :time => ::Sequel.method(:string_to_time), :date => ::Sequel.method(:string_to_date) } DB2_TYPES[:clob] = DB2_TYPES[:blob] # Wraps an underlying connection to DB2 using IBM_DB. class Connection # A hash with prepared statement name symbol keys, where each value is # a two element array with an sql string and cached Statement value. attr_accessor :prepared_statements # Error class for exceptions raised by the connection. class Error < StandardError attr_reader :sqlstate def initialize(message, sqlstate) @sqlstate = sqlstate super(message) end end # Create the underlying IBM_DB connection. def initialize(connection_string) @conn = IBM_DB.connect(connection_string, '', '') self.autocommit = true @prepared_statements = {} end # Check whether the connection is in autocommit state or not. def autocommit IBM_DB.autocommit(@conn) == 1 end # Turn autocommit on or off for the connection. def autocommit=(value) IBM_DB.autocommit(@conn, value ? IBM_DB::SQL_AUTOCOMMIT_ON : IBM_DB::SQL_AUTOCOMMIT_OFF) end # Close the connection, disconnecting from DB2. def close IBM_DB.close(@conn) end # Commit the currently outstanding transaction on this connection. def commit IBM_DB.commit(@conn) end # Return the related error message for the connection. def error_msg IBM_DB.getErrormsg(@conn, IBM_DB::DB_CONN) end # Return the related error message for the connection. def error_sqlstate IBM_DB.getErrorstate(@conn, IBM_DB::DB_CONN) end # Execute the given SQL on the database, and return a Statement instance # holding the results. def execute(sql) stmt = IBM_DB.exec(@conn, sql) raise Error.new(error_msg, error_sqlstate) unless stmt Statement.new(stmt) end # Execute the related prepared statement on the database with the given # arguments. def execute_prepared(ps_name, *values) stmt = @prepared_statements[ps_name].last res = stmt.execute(*values) unless res raise Error.new("Error executing statement #{ps_name}: #{error_msg}", error_sqlstate) end stmt end # Prepare a statement with the given +sql+ on the database, and # cache the prepared statement value by name. def prepare(sql, ps_name) if stmt = IBM_DB.prepare(@conn, sql) ps_name = ps_name.to_sym stmt = Statement.new(stmt) @prepared_statements[ps_name] = [sql, stmt] else err = error_msg err = "Error preparing #{ps_name} with SQL: #{sql}" if error_msg.nil? || error_msg.empty? raise Error.new(err, error_sqlstate) end end # Rollback the currently outstanding transaction on this connection. def rollback IBM_DB.rollback(@conn) end end # Wraps results returned by queries on IBM_DB. class Statement # Hold the given statement. 
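      # The held statement comes from Connection#execute, e.g. (SQL and
      # connection hypothetical):
      #
      #   stmt = conn.execute("SELECT * FROM t")
      #   while row = stmt.fetch_array
      #     p row
      #   end
      #   stmt.free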
def initialize(stmt) @stmt = stmt end # Return the number of rows affected. def affected IBM_DB.num_rows(@stmt) end # If this statement is a prepared statement, execute it on the database # with the given values. def execute(*values) IBM_DB.execute(@stmt, values) end # Return the results of a query as an array of values. def fetch_array IBM_DB.fetch_array(@stmt) if @stmt end # Return the field name at the given column in the result set. def field_name(ind) IBM_DB.field_name(@stmt, ind) end # Return the field type for the given field name in the result set. def field_type(key) IBM_DB.field_type(@stmt, key) end # Return the field precision for the given field name in the result set. def field_precision(key) IBM_DB.field_precision(@stmt, key) end # Free the memory related to this statement. def free IBM_DB.free_stmt(@stmt) end # Free the memory related to this result set, only useful for prepared # statements which have a different result set on every call. def free_result IBM_DB.free_result(@stmt) end # Return the number of fields in the result set. def num_fields IBM_DB.num_fields(@stmt) end end class Database < Sequel::Database include Sequel::DB2::DatabaseMethods set_adapter_scheme :ibmdb # Hash of connection procs for converting attr_reader :conversion_procs # REORG the related table whenever it is altered. This is not always # required, but it is necessary for compatibilty with other Sequel # code in many cases. def alter_table(name, generator=nil) res = super reorg(name) res end # Create a new connection object for the given server. def connect(server) opts = server_opts(server) # use uncataloged connection so that host and port can be supported connection_string = ( \ 'Driver={IBM DB2 ODBC DRIVER};' \ "Database=#{opts[:database]};" \ "Hostname=#{opts[:host]};" \ "Port=#{opts[:port] || 50000};" \ 'Protocol=TCPIP;' \ "Uid=#{opts[:user]};" \ "Pwd=#{opts[:password]};" \ ) Connection.new(connection_string) end # Execute the given SQL on the database. def execute(sql, opts=OPTS, &block) if sql.is_a?(Symbol) execute_prepared_statement(sql, opts, &block) else synchronize(opts[:server]){|c| _execute(c, sql, opts, &block)} end rescue Connection::Error => e raise_error(e) end # Execute the given SQL on the database, returning the last inserted # identity value. def execute_insert(sql, opts=OPTS) synchronize(opts[:server]) do |c| if sql.is_a?(Symbol) execute_prepared_statement(sql, opts) else _execute(c, sql, opts) end _execute(c, "SELECT IDENTITY_VAL_LOCAL() FROM SYSIBM.SYSDUMMY1", opts){|stmt| i = stmt.fetch_array.first.to_i; i} end rescue Connection::Error => e raise_error(e) end # Execute a prepared statement named by name on the database. def execute_prepared_statement(ps_name, opts) args = opts[:arguments] ps = prepared_statement(ps_name) sql = ps.prepared_sql synchronize(opts[:server]) do |conn| unless conn.prepared_statements.fetch(ps_name, []).first == sql log_yield("PREPARE #{ps_name}: #{sql}"){conn.prepare(sql, ps_name)} end args = args.map{|v| v.nil? ? nil : prepared_statement_arg(v)} log_sql = "EXECUTE #{ps_name}" if ps.log_sql log_sql << " (" log_sql << sql log_sql << ")" end begin stmt = log_yield(log_sql, args){conn.execute_prepared(ps_name, *args)} if block_given? yield(stmt) else stmt.affected end ensure stmt.free_result if stmt end end end # On DB2, a table might need to be REORGed if you are testing existence # of it. This REORGs automatically if the database raises a specific # error that indicates it should be REORGed. 
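      # Example (table names hypothetical; the second form checks a
      # schema-qualified table):
      #
      #   DB.table_exists?(:employees)           # => true
      #   DB.table_exists?(:myschema__employees) # => true or false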
      def table_exists?(name)
        v ||= false # only retry once
        sch, table_name = schema_and_table(name)
        name = SQL::QualifiedIdentifier.new(sch, table_name) if sch
        from(name).first
        true
      rescue DatabaseError => e
        if e.to_s =~ /Operation not allowed for reason code "7" on table/ && v == false
          # table probably needs reorg
          reorg(name)
          v = true
          retry
        end
        false
      end

      private

      # Execute the given SQL on the database.
      def _execute(conn, sql, opts)
        stmt = log_yield(sql){conn.execute(sql)}
        if block_given?
          yield(stmt)
        else
          stmt.affected
        end
      ensure
        stmt.free if stmt
      end

      def adapter_initialize
        @conversion_procs = DB2_TYPES.dup
        @conversion_procs[:timestamp] = method(:to_application_timestamp)
      end

      # IBM_DB uses an autocommit setting instead of sending SQL queries.
      # So starting a transaction just turns autocommit off.
      def begin_transaction(conn, opts=OPTS)
        log_yield(TRANSACTION_BEGIN){conn.autocommit = false}
        set_transaction_isolation(conn, opts)
      end

      # This commits the transaction in progress on the
      # connection and sets autocommit back on.
      def commit_transaction(conn, opts=OPTS)
        log_yield(TRANSACTION_COMMIT){conn.commit}
      end

      def database_error_classes
        [Connection::Error]
      end

      def database_exception_sqlstate(exception, opts)
        exception.sqlstate
      end

      # Don't convert smallint to boolean for the metadata
      # dataset, since the DB2 metadata does not use
      # boolean columns, and some smallint columns are
      # accidentally treated as booleans.
      def metadata_dataset
        ds = super
        ds.convert_smallint_to_bool = false
        ds
      end

      # Format Numeric, Date, and Time types specially for use
      # as IBM_DB prepared statement argument values.
      def prepared_statement_arg(v)
        case v
        when Numeric
          v.to_s
        when Date, Time
          literal(v).gsub("'", '')
        else
          v
        end
      end

      # Set autocommit back on.
      def remove_transaction(conn, committed)
        conn.autocommit = true
      ensure
        super
      end

      # This rolls back the transaction in progress on the
      # connection and sets autocommit back on.
      def rollback_transaction(conn, opts=OPTS)
        log_yield(TRANSACTION_ROLLBACK){conn.rollback}
      end

      # Convert smallint type to boolean if convert_smallint_to_bool is true
      def schema_column_type(db_type)
        if Sequel::IBMDB.convert_smallint_to_bool && db_type =~ /smallint/i
          :boolean
        else
          super
        end
      end
    end

    class Dataset < Sequel::Dataset
      include Sequel::DB2::DatasetMethods
      Database::DatasetClass = self

      module CallableStatementMethods
        # Extend given dataset with this module so subselects inside subselects in
        # prepared statements work.
        def subselect_sql_append(sql, ds)
          ps = ds.to_prepared_statement(:select).clone(:append_sql=>sql)
          ps.extend(CallableStatementMethods)
          ps = ps.bind(@opts[:bind_vars]) if @opts[:bind_vars]
          ps.prepared_args = prepared_args
          ps.prepared_sql
        end
      end

      # Methods for DB2 prepared statements using the native driver.
      module PreparedStatementMethods
        include Sequel::Dataset::UnnumberedArgumentMapper

        private

        # Execute the prepared statement with arguments instead of the given SQL.
        def execute(sql, opts=OPTS, &block)
          super(prepared_statement_name, {:arguments=>bind_arguments}.merge(opts), &block)
        end

        # Execute the prepared statement with arguments instead of the given SQL.
        def execute_dui(sql, opts=OPTS, &block)
          super(prepared_statement_name, {:arguments=>bind_arguments}.merge(opts), &block)
        end

        # Execute the prepared statement with arguments instead of the given SQL.
        def execute_insert(sql, opts=OPTS, &block)
          super(prepared_statement_name, {:arguments=>bind_arguments}.merge(opts), &block)
        end
      end

      # Emulate support of bind arguments in called statements.
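      # A minimal example of a called statement with a bind variable
      # (dataset and value hypothetical):
      #
      #   DB[:items].filter(:id=>:$i).call(:select, :i=>1)
      #   # roughly: SELECT * FROM items WHERE (id = 1)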
def call(type, bind_arguments={}, *values, &block) ps = to_prepared_statement(type, values) ps.extend(CallableStatementMethods) ps.call(bind_arguments, &block) end # Whether to convert smallint to boolean arguments for this dataset. # Defaults to the IBMDB module setting. def convert_smallint_to_bool defined?(@convert_smallint_to_bool) ? @convert_smallint_to_bool : (@convert_smallint_to_bool = IBMDB.convert_smallint_to_bool) end # Override the default IBMDB.convert_smallint_to_bool setting for this dataset. attr_writer :convert_smallint_to_bool # Fetch the rows from the database and yield plain hashes. def fetch_rows(sql) execute(sql) do |stmt| columns = [] convert = convert_smallint_to_bool cps = db.conversion_procs stmt.num_fields.times do |i| k = stmt.field_name i key = output_identifier(k) type = stmt.field_type(k).downcase.to_sym # decide if it is a smallint from precision type = :boolean if type ==:int && convert && stmt.field_precision(k) < 8 columns << [key, cps[type]] end cols = columns.map{|c| c.at(0)} @columns = cols while res = stmt.fetch_array row = {} res.zip(columns).each do |v, (k, pr)| row[k] = ((pr ? pr.call(v) : v) if v) end yield row end end self end # Store the given type of prepared statement in the associated database # with the given name. def prepare(type, name=nil, *values) ps = to_prepared_statement(type, values) ps.extend(PreparedStatementMethods) if name ps.prepared_statement_name = name db.set_prepared_statement(name, ps) end ps end end end end ruby-sequel-4.1.1/lib/sequel/adapters/informix.rb000066400000000000000000000024431220156535500220300ustar00rootroot00000000000000require 'informix' Sequel.require 'adapters/shared/informix' module Sequel module Informix class Database < Sequel::Database include DatabaseMethods set_adapter_scheme :informix def connect(server) opts = server_opts(server) ::Informix.connect(opts[:database], opts[:user], opts[:password]) end # Returns number of rows affected def execute_dui(sql, opts=OPTS) synchronize(opts[:server]){|c| log_yield(sql){c.immediate(sql)}} end def execute(sql, opts=OPTS) synchronize(opts[:server]){|c| yield log_yield(sql){c.cursor(sql)}} end end class Dataset < Sequel::Dataset include DatasetMethods Database::DatasetClass = self def fetch_rows(sql) execute(sql) do |cursor| begin col_map = nil cursor.open.each_hash do |h| unless col_map col_map = {} @columns = h.keys.map{|k| col_map[k] = output_identifier(k)} end h2 = {} h.each{|k,v| h2[col_map[k]||k] = v} yield h2 end ensure cursor.respond_to?(:free) ? cursor.free : cursor.drop end end self end end end end ruby-sequel-4.1.1/lib/sequel/adapters/jdbc.rb000066400000000000000000000755441220156535500211130ustar00rootroot00000000000000require 'java' Sequel.require 'adapters/utils/stored_procedures' module Sequel # Houses Sequel's JDBC support when running on JRuby. module JDBC # Make it accesing the java.lang hierarchy more ruby friendly. module JavaLang include_package 'java.lang' end # Make it accesing the java.sql hierarchy more ruby friendly. module JavaSQL include_package 'java.sql' end # Make it accesing the javax.naming hierarchy more ruby friendly. module JavaxNaming include_package 'javax.naming' end # Used to identify a jndi connection and to extract the jndi # resource name. JNDI_URI_REGEXP = /\Ajdbc:jndi:(.+)/ # The types to check for 0 scale to transform :decimal types # to :integer. DECIMAL_TYPE_RE = /number|numeric|decimal/io # Contains procs keyed on sub adapter type that extend the # given database object so it supports the correct database type. 
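    # The proc is chosen by matching the subprotocol of the JDBC URL, so a
    # connection string such as (URL fields hypothetical):
    #
    #   DB = Sequel.connect('jdbc:postgresql://host/database?user=u&password=p')
    #
    # runs the :postgresql entry below before any queries are issued.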
DATABASE_SETUP = {:postgresql=>proc do |db| JDBC.load_gem(:Postgres) org.postgresql.Driver Sequel.require 'adapters/jdbc/postgresql' db.extend(Sequel::JDBC::Postgres::DatabaseMethods) db.dataset_class = Sequel::JDBC::Postgres::Dataset org.postgresql.Driver end, :mysql=>proc do |db| JDBC.load_gem(:MySQL) com.mysql.jdbc.Driver Sequel.require 'adapters/jdbc/mysql' db.extend(Sequel::JDBC::MySQL::DatabaseMethods) db.extend_datasets Sequel::MySQL::DatasetMethods com.mysql.jdbc.Driver end, :sqlite=>proc do |db| JDBC.load_gem(:SQLite3) org.sqlite.JDBC Sequel.require 'adapters/jdbc/sqlite' db.extend(Sequel::JDBC::SQLite::DatabaseMethods) db.extend_datasets Sequel::SQLite::DatasetMethods db.set_integer_booleans org.sqlite.JDBC end, :oracle=>proc do |db| Java::oracle.jdbc.driver.OracleDriver Sequel.require 'adapters/jdbc/oracle' db.extend(Sequel::JDBC::Oracle::DatabaseMethods) db.dataset_class = Sequel::JDBC::Oracle::Dataset Java::oracle.jdbc.driver.OracleDriver end, :sqlserver=>proc do |db| com.microsoft.sqlserver.jdbc.SQLServerDriver Sequel.require 'adapters/jdbc/sqlserver' db.extend(Sequel::JDBC::SQLServer::DatabaseMethods) db.extend_datasets Sequel::MSSQL::DatasetMethods db.send(:set_mssql_unicode_strings) com.microsoft.sqlserver.jdbc.SQLServerDriver end, :jtds=>proc do |db| JDBC.load_gem(:JTDS) Java::net.sourceforge.jtds.jdbc.Driver Sequel.require 'adapters/jdbc/jtds' db.extend(Sequel::JDBC::JTDS::DatabaseMethods) db.dataset_class = Sequel::JDBC::JTDS::Dataset db.send(:set_mssql_unicode_strings) Java::net.sourceforge.jtds.jdbc.Driver end, :h2=>proc do |db| JDBC.load_gem(:H2) org.h2.Driver Sequel.require 'adapters/jdbc/h2' db.extend(Sequel::JDBC::H2::DatabaseMethods) db.dataset_class = Sequel::JDBC::H2::Dataset org.h2.Driver end, :hsqldb=>proc do |db| JDBC.load_gem(:HSQLDB) org.hsqldb.jdbcDriver Sequel.require 'adapters/jdbc/hsqldb' db.extend(Sequel::JDBC::HSQLDB::DatabaseMethods) db.dataset_class = Sequel::JDBC::HSQLDB::Dataset org.hsqldb.jdbcDriver end, :derby=>proc do |db| JDBC.load_gem(:Derby) org.apache.derby.jdbc.EmbeddedDriver Sequel.require 'adapters/jdbc/derby' db.extend(Sequel::JDBC::Derby::DatabaseMethods) db.dataset_class = Sequel::JDBC::Derby::Dataset org.apache.derby.jdbc.EmbeddedDriver end, :as400=>proc do |db| com.ibm.as400.access.AS400JDBCDriver Sequel.require 'adapters/jdbc/as400' db.extend(Sequel::JDBC::AS400::DatabaseMethods) db.dataset_class = Sequel::JDBC::AS400::Dataset com.ibm.as400.access.AS400JDBCDriver end, :"informix-sqli"=>proc do |db| com.informix.jdbc.IfxDriver Sequel.require 'adapters/jdbc/informix' db.extend(Sequel::JDBC::Informix::DatabaseMethods) db.extend_datasets Sequel::Informix::DatasetMethods com.informix.jdbc.IfxDriver end, :db2=>proc do |db| com.ibm.db2.jcc.DB2Driver Sequel.require 'adapters/jdbc/db2' db.extend(Sequel::JDBC::DB2::DatabaseMethods) db.dataset_class = Sequel::JDBC::DB2::Dataset com.ibm.db2.jcc.DB2Driver end, :firebirdsql=>proc do |db| org.firebirdsql.jdbc.FBDriver Sequel.require 'adapters/jdbc/firebird' db.extend(Sequel::JDBC::Firebird::DatabaseMethods) db.extend_datasets Sequel::Firebird::DatasetMethods org.firebirdsql.jdbc.FBDriver end, :jdbcprogress=>proc do |db| com.progress.sql.jdbc.JdbcProgressDriver Sequel.require 'adapters/jdbc/progress' db.extend(Sequel::JDBC::Progress::DatabaseMethods) db.extend_datasets Sequel::Progress::DatasetMethods com.progress.sql.jdbc.JdbcProgressDriver end, :cubrid=>proc do |db| Java::cubrid.jdbc.driver.CUBRIDDriver Sequel.require 'adapters/jdbc/cubrid' db.extend(Sequel::JDBC::Cubrid::DatabaseMethods) 
db.extend_datasets Sequel::Cubrid::DatasetMethods Java::cubrid.jdbc.driver.CUBRIDDriver end } # Allowing loading the necessary JDBC support via a gem, which # works for PostgreSQL, MySQL, and SQLite. def self.load_gem(name) begin require "jdbc/#{name.to_s.downcase}" rescue LoadError # jdbc gem not used, hopefully the user has the .jar in their CLASSPATH else if defined?(::Jdbc) && ( ::Jdbc.const_defined?(name) rescue nil ) jdbc_module = ::Jdbc.const_get(name) # e.g. Jdbc::SQLite3 jdbc_module.load_driver if jdbc_module.respond_to?(:load_driver) end end end # JDBC Databases offer a fairly uniform interface that does not change # much based on the sub adapter. class Database < Sequel::Database set_adapter_scheme :jdbc # The type of database we are connecting to attr_reader :database_type # The Java database driver we are using attr_reader :driver # Whether to convert some Java types to ruby types when retrieving rows. # True by default, can be set to false to roughly double performance when # fetching rows. attr_accessor :convert_types # Execute the given stored procedure with the give name. If a block is # given, the stored procedure should return rows. def call_sproc(name, opts = OPTS) args = opts[:args] || [] sql = "{call #{name}(#{args.map{'?'}.join(',')})}" synchronize(opts[:server]) do |conn| cps = conn.prepareCall(sql) i = 0 args.each{|arg| set_ps_arg(cps, arg, i+=1)} begin if block_given? yield log_yield(sql){cps.executeQuery} else case opts[:type] when :insert log_yield(sql){cps.executeUpdate} last_insert_id(conn, opts) else log_yield(sql){cps.executeUpdate} end end rescue NativeException, JavaSQL::SQLException => e raise_error(e) ensure cps.close end end end # Connect to the database using JavaSQL::DriverManager.getConnection. def connect(server) opts = server_opts(server) conn = if jndi? get_connection_from_jndi else args = [uri(opts)] args.concat([opts[:user], opts[:password]]) if opts[:user] && opts[:password] begin JavaSQL::DriverManager.setLoginTimeout(opts[:login_timeout]) if opts[:login_timeout] JavaSQL::DriverManager.getConnection(*args) rescue JavaSQL::SQLException, NativeException, StandardError => e raise e unless driver # If the DriverManager can't get the connection - use the connect # method of the driver. (This happens under Tomcat for instance) props = java.util.Properties.new if opts && opts[:user] && opts[:password] props.setProperty("user", opts[:user]) props.setProperty("password", opts[:password]) end opts[:jdbc_properties].each{|k,v| props.setProperty(k.to_s, v)} if opts[:jdbc_properties] begin c = driver.new.connect(args[0], props) raise(Sequel::DatabaseError, 'driver.new.connect returned nil: probably bad JDBC connection string') unless c c rescue JavaSQL::SQLException, NativeException, StandardError => e2 unless e2.message == e.message e2.message << "\n#{e.class.name}: #{e.message}" end raise e2 end end end setup_connection(conn) end # Close given adapter connections, and delete any related prepared statements. def disconnect_connection(c) @connection_prepared_statements_mutex.synchronize{@connection_prepared_statements.delete(c)} c.close end # Execute the given SQL. If a block is given, if should be a SELECT # statement or something else that returns rows. 
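      # Typical uses of the execution modes handled below (SQL hypothetical):
      #
      #   DB.execute("SELECT * FROM t"){|rs| } # yields the JDBC ResultSet
      #   DB.execute("DELETE FROM t")          # => number of affected rows
      #   DB.execute("INSERT INTO t VALUES (1)", :type=>:insert) # => insert id, subadapter permitting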
def execute(sql, opts=OPTS, &block) return call_sproc(sql, opts, &block) if opts[:sproc] return execute_prepared_statement(sql, opts, &block) if [Symbol, Dataset].any?{|c| sql.is_a?(c)} synchronize(opts[:server]) do |conn| statement(conn) do |stmt| if block yield log_yield(sql){stmt.executeQuery(sql)} else case opts[:type] when :ddl log_yield(sql){stmt.execute(sql)} when :insert log_yield(sql){execute_statement_insert(stmt, sql)} last_insert_id(conn, opts.merge(:stmt=>stmt)) else log_yield(sql){stmt.executeUpdate(sql)} end end end end end alias execute_dui execute # Execute the given DDL SQL, which should not return any # values or rows. def execute_ddl(sql, opts=OPTS) execute(sql, {:type=>:ddl}.merge(opts)) end # Execute the given INSERT SQL, returning the last inserted # row id. def execute_insert(sql, opts=OPTS) execute(sql, {:type=>:insert}.merge(opts)) end # Use the JDBC metadata to get the index information for the table. def indexes(table, opts=OPTS) m = output_identifier_meth im = input_identifier_meth schema, table = schema_and_table(table) schema ||= opts[:schema] schema = im.call(schema) if schema table = im.call(table) indexes = {} metadata(:getIndexInfo, nil, schema, table, false, true) do |r| next unless name = r[:column_name] next if respond_to?(:primary_key_index_re, true) and r[:index_name] =~ primary_key_index_re i = indexes[m.call(r[:index_name])] ||= {:columns=>[], :unique=>[false, 0].include?(r[:non_unique])} i[:columns] << m.call(name) end indexes end # Whether or not JNDI is being used for this connection. def jndi? !!(uri =~ JNDI_URI_REGEXP) end # All tables in this database def tables(opts=OPTS) get_tables('TABLE', opts) end # The uri for this connection. You can specify the uri # using the :uri, :url, or :database options. You don't # need to worry about this if you use Sequel.connect # with the JDBC connectrion strings. def uri(opts=OPTS) opts = @opts.merge(opts) ur = opts[:uri] || opts[:url] || opts[:database] ur =~ /^\Ajdbc:/ ? ur : "jdbc:#{ur}" end # All views in this database def views(opts=OPTS) get_tables('VIEW', opts) end private # Call the DATABASE_SETUP proc directly after initialization, # so the object always uses sub adapter specific code. Also, # raise an error immediately if the connection doesn't have a # uri, since JDBC requires one. def adapter_initialize @connection_prepared_statements = {} @connection_prepared_statements_mutex = Mutex.new @convert_types = typecast_value_boolean(@opts.fetch(:convert_types, true)) raise(Error, "No connection string specified") unless uri resolved_uri = jndi? ? get_uri_from_jndi : uri if match = /\Ajdbc:([^:]+)/.match(resolved_uri) and prok = DATABASE_SETUP[match[1].to_sym] @driver = prok.call(self) end end # Yield the native prepared statements hash for the given connection # to the block in a thread-safe manner. def cps_sync(conn, &block) @connection_prepared_statements_mutex.synchronize{yield(@connection_prepared_statements[conn] ||= {})} end def database_error_classes [NativeException] end def database_exception_sqlstate(exception, opts) if database_exception_use_sqlstates? while exception.respond_to?(:cause) exception = exception.cause return exception.getSQLState if exception.respond_to?(:getSQLState) end end nil end # Whether the JDBC subadapter should use SQL states for exception handling, true by default. def database_exception_use_sqlstates? true end # Raise a disconnect error if the SQL state of the cause of the exception indicates so. 
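      # SQLSTATE class 08 is the standard class for connection exceptions,
      # so, for example, a cause reporting SQLSTATE '08S01' (communication
      # link failure) is treated as a disconnect.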
def disconnect_error?(exception, opts) cause = exception.respond_to?(:cause) ? exception.cause : exception super || (cause.respond_to?(:getSQLState) && cause.getSQLState =~ /^08/) end # Execute the prepared statement. If the provided name is a # dataset, use that as the prepared statement, otherwise use # it as a key to look it up in the prepared_statements hash. # If the connection we are using has already prepared an identical # statement, use that statement instead of creating another. # Otherwise, prepare a new statement for the connection, bind the # variables, and execute it. def execute_prepared_statement(name, opts=OPTS) args = opts[:arguments] if name.is_a?(Dataset) ps = name name = ps.prepared_statement_name else ps = prepared_statement(name) end sql = ps.prepared_sql synchronize(opts[:server]) do |conn| if name and cps = cps_sync(conn){|cpsh| cpsh[name]} and cps[0] == sql cps = cps[1] else log_yield("CLOSE #{name}"){cps[1].close} if cps cps = log_yield("PREPARE#{" #{name}:" if name} #{sql}"){prepare_jdbc_statement(conn, sql, opts)} cps_sync(conn){|cpsh| cpsh[name] = [sql, cps]} if name end i = 0 args.each{|arg| set_ps_arg(cps, arg, i+=1)} msg = "EXECUTE#{" #{name}" if name}" if ps.log_sql msg << " (" msg << sql msg << ")" end begin if block_given? yield log_yield(msg, args){cps.executeQuery} else case opts[:type] when :ddl log_yield(msg, args){cps.execute} when :insert log_yield(msg, args){execute_prepared_statement_insert(cps)} last_insert_id(conn, opts.merge(:prepared=>true, :stmt=>cps)) else log_yield(msg, args){cps.executeUpdate} end end rescue NativeException, JavaSQL::SQLException => e raise_error(e) ensure cps.close unless name end end end # Execute the prepared insert statement def execute_prepared_statement_insert(stmt) stmt.executeUpdate end # Execute the insert SQL using the statement def execute_statement_insert(stmt, sql) stmt.executeUpdate(sql) end # Gets the connection from JNDI. def get_connection_from_jndi jndi_name = JNDI_URI_REGEXP.match(uri)[1] JavaxNaming::InitialContext.new.lookup(jndi_name).connection end # Gets the JDBC connection uri from the JNDI resource. def get_uri_from_jndi conn = get_connection_from_jndi conn.meta_data.url ensure conn.close if conn end # Backbone of the tables and views support. def get_tables(type, opts) ts = [] m = output_identifier_meth metadata(:getTables, nil, nil, nil, [type].to_java(:string)){|h| ts << m.call(h[:table_name])} ts end # Support Date objects used in bound variables def java_sql_date(date) java.sql.Date.new(Time.local(date.year, date.month, date.day).to_i * 1000) end # Support DateTime objects used in bound variables def java_sql_datetime(datetime) ts = java.sql.Timestamp.new(Time.local(datetime.year, datetime.month, datetime.day, datetime.hour, datetime.min, datetime.sec).to_i * 1000) ts.setNanos((datetime.sec_fraction * (RUBY_VERSION >= '1.9.0' ? 1000000000 : 86400000000000)).to_i) ts end # Support fractional seconds for Time objects used in bound variables def java_sql_timestamp(time) ts = java.sql.Timestamp.new(time.to_i * 1000) # Work around jruby 1.6 ruby 1.9 mode bug ts.setNanos((RUBY_VERSION >= '1.9.0' && time.nsec != 0) ? time.nsec : time.usec * 1000) ts end # Log the given SQL and then execute it on the connection, used by # the transaction code. def log_connection_execute(conn, sql) statement(conn){|s| log_yield(sql){s.execute(sql)}} end # By default, there is no support for determining the last inserted # id, so return nil. This method should be overridden in # sub adapters. 
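      # Subadapters usually override this with a database-specific query;
      # a sketch along the lines of the DB2/Derby subadapters:
      #
      #   def last_insert_id(conn, opts=OPTS)
      #     statement(conn) do |stmt|
      #       rs = stmt.executeQuery("SELECT IDENTITY_VAL_LOCAL() FROM SYSIBM.SYSDUMMY1")
      #       rs.next
      #       rs.getInt(1)
      #     end
      #   end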
def last_insert_id(conn, opts) nil end # Yield the metadata for this database def metadata(*args, &block) synchronize do |c| result = c.getMetaData.send(*args) begin metadata_dataset.send(:process_result_set, result, &block) ensure result.close end end end # Created a JDBC prepared statement on the connection with the given SQL. def prepare_jdbc_statement(conn, sql, opts) conn.prepareStatement(sql) end # Java being java, you need to specify the type of each argument # for the prepared statement, and bind it individually. This # guesses which JDBC method to use, and hopefully JRuby will convert # things properly for us. def set_ps_arg(cps, arg, i) case arg when Integer cps.setLong(i, arg) when Sequel::SQL::Blob cps.setBytes(i, arg.to_java_bytes) when String cps.setString(i, arg) when Float cps.setDouble(i, arg) when TrueClass, FalseClass cps.setBoolean(i, arg) when NilClass set_ps_arg_nil(cps, i) when DateTime cps.setTimestamp(i, java_sql_datetime(arg)) when Date cps.setDate(i, java_sql_date(arg)) when Time cps.setTimestamp(i, java_sql_timestamp(arg)) when Java::JavaSql::Timestamp cps.setTimestamp(i, arg) when Java::JavaSql::Date cps.setDate(i, arg) else cps.setObject(i, arg) end end # Use setString with a nil value by default, but this doesn't work on all subadapters. def set_ps_arg_nil(cps, i) cps.setString(i, nil) end # Return the connection. Used to do configuration on the # connection object before adding it to the connection pool. def setup_connection(conn) conn end # Parse the table schema for the given table. def schema_parse_table(table, opts=OPTS) m = output_identifier_meth(opts[:dataset]) im = input_identifier_meth(opts[:dataset]) ds = dataset schema, table = schema_and_table(table) schema ||= opts[:schema] schema = im.call(schema) if schema table = im.call(table) pks, ts = [], [] metadata(:getPrimaryKeys, nil, schema, table) do |h| next if schema_parse_table_skip?(h, schema) pks << h[:column_name] end schemas = [] metadata(:getColumns, nil, schema, table, nil) do |h| next if schema_parse_table_skip?(h, schema) s = {:type=>schema_column_type(h[:type_name]), :db_type=>h[:type_name], :default=>(h[:column_def] == '' ? nil : h[:column_def]), :allow_null=>(h[:nullable] != 0), :primary_key=>pks.include?(h[:column_name]), :column_size=>h[:column_size], :scale=>h[:decimal_digits]} if s[:db_type] =~ DECIMAL_TYPE_RE && s[:scale] == 0 s[:type] = :integer end schemas << h[:table_schem] unless schemas.include?(h[:table_schem]) ts << [m.call(h[:column_name]), s] end if schemas.length > 1 raise Error, 'Schema parsing in the jdbc adapter resulted in columns being returned for a table with the same name in multiple schemas. Please explicitly qualify your table with a schema.' end ts end # Whether schema_parse_table should skip the given row when # parsing the schema. def schema_parse_table_skip?(h, schema) h[:table_schem] == 'INFORMATION_SCHEMA' end # Yield a new statement object, and ensure that it is closed before returning. def statement(conn) stmt = conn.createStatement yield stmt rescue NativeException, JavaSQL::SQLException => e raise_error(e) ensure stmt.close if stmt end end class Dataset < Sequel::Dataset include StoredProcedures Database::DatasetClass = self # Use JDBC PreparedStatements instead of emulated ones. Statements # created using #prepare are cached at the connection level to allow # reuse. This also supports bind variables by using unnamed # prepared statements created using #call. 
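      # Example (dataset, statement name, and values hypothetical):
      #
      #   ps = DB[:items].filter(:id=>:$i).prepare(:select, :select_by_id)
      #   ps.call(:i=>1)                 # reuses a cached PreparedStatement
      #   DB.call(:select_by_id, :i=>2)  # calls it by name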
module PreparedStatementMethods include Sequel::Dataset::UnnumberedArgumentMapper private # Execute the prepared SQL using the stored type and # arguments derived from the hash passed to call. def execute(sql, opts=OPTS, &block) super(self, {:arguments=>bind_arguments}.merge(opts), &block) end # Same as execute, explicit due to intricacies of alias and super. def execute_dui(sql, opts=OPTS, &block) super(self, {:arguments=>bind_arguments}.merge(opts), &block) end # Same as execute, explicit due to intricacies of alias and super. def execute_insert(sql, opts=OPTS, &block) super(self, {:arguments=>bind_arguments, :type=>:insert}.merge(opts), &block) end end # Use JDBC CallableStatements to execute stored procedures. Only supported # if the underlying database has stored procedure support. module StoredProcedureMethods include Sequel::Dataset::StoredProcedureMethods private # Execute the database stored procedure with the stored arguments. def execute(sql, opts=OPTS, &block) super(@sproc_name, {:args=>@sproc_args, :sproc=>true}.merge(opts), &block) end # Same as execute, explicit due to intricacies of alias and super. def execute_dui(sql, opts=OPTS, &block) super(@sproc_name, {:args=>@sproc_args, :sproc=>true}.merge(opts), &block) end # Same as execute, explicit due to intricacies of alias and super. def execute_insert(sql, opts=OPTS, &block) super(@sproc_name, {:args=>@sproc_args, :sproc=>true, :type=>:insert}.merge(opts), &block) end end # Whether to convert some Java types to ruby types when retrieving rows. # Uses the database's setting by default, can be set to false to roughly # double performance when fetching rows. attr_accessor :convert_types # Correctly return rows from the database and return them as hashes. def fetch_rows(sql, &block) execute(sql){|result| process_result_set(result, &block)} self end # Create a named prepared statement that is stored in the # database (and connection) for reuse. def prepare(type, name=nil, *values) ps = to_prepared_statement(type, values) ps.extend(PreparedStatementMethods) if name ps.prepared_statement_name = name db.set_prepared_statement(name, ps) end ps end private # Cache Java class constants to speed up lookups JAVA_SQL_TIMESTAMP = Java::JavaSQL::Timestamp JAVA_SQL_TIME = Java::JavaSQL::Time JAVA_SQL_DATE = Java::JavaSQL::Date JAVA_SQL_BLOB = Java::JavaSQL::Blob JAVA_SQL_CLOB = Java::JavaSQL::Clob JAVA_BUFFERED_READER = Java::JavaIo::BufferedReader JAVA_BIG_DECIMAL = Java::JavaMath::BigDecimal JAVA_BYTE_ARRAY = Java::byte[] JAVA_UUID = Java::JavaUtil::UUID JAVA_HASH_MAP = Java::JavaUtil::HashMap # Handle type conversions for common Java types. class TYPE_TRANSLATOR LF = "\n".freeze def time(v) Sequel.string_to_time("#{v.to_string}.#{sprintf('%03i', v.getTime.divmod(1000).last)}") end def date(v) Date.civil(v.getYear + 1900, v.getMonth + 1, v.getDate) end def decimal(v) BigDecimal.new(v.to_string) end def byte_array(v) Sequel::SQL::Blob.new(String.from_java_bytes(v)) end def blob(v) Sequel::SQL::Blob.new(String.from_java_bytes(v.getBytes(1, v.length))) end def clob(v) v.getSubString(1, v.length) end def buffered_reader(v) lines = "" c = false while(line = v.read_line) do lines << LF if c lines << line c ||= true end lines end def uuid(v) v.to_string end def hash_map(v) v.to_hash end end TYPE_TRANSLATOR_INSTANCE = tt = TYPE_TRANSLATOR.new # Cache type translator methods so that duplicate Method # objects are not created. 
DECIMAL_METHOD = tt.method(:decimal) TIME_METHOD = tt.method(:time) DATE_METHOD = tt.method(:date) BUFFERED_READER_METHOD = tt.method(:buffered_reader) BYTE_ARRAY_METHOD = tt.method(:byte_array) BLOB_METHOD = tt.method(:blob) CLOB_METHOD = tt.method(:clob) UUID_METHOD = tt.method(:uuid) HASH_MAP_METHOD = tt.method(:hash_map) # Convert the given Java timestamp to an instance of Sequel.datetime_class. def convert_type_timestamp(v) db.to_application_timestamp([v.getYear + 1900, v.getMonth + 1, v.getDate, v.getHours, v.getMinutes, v.getSeconds, v.getNanos]) end # Return a callable object that will convert any value of v's # class to a ruby object. If no callable object can handle v's # class, return false so that the negative lookup is cached. def convert_type_proc(v) case v when JAVA_BIG_DECIMAL DECIMAL_METHOD when JAVA_SQL_TIMESTAMP method(:convert_type_timestamp) when JAVA_SQL_TIME TIME_METHOD when JAVA_SQL_DATE DATE_METHOD when JAVA_BUFFERED_READER BUFFERED_READER_METHOD when JAVA_BYTE_ARRAY BYTE_ARRAY_METHOD when JAVA_SQL_BLOB BLOB_METHOD when JAVA_SQL_CLOB CLOB_METHOD when JAVA_UUID UUID_METHOD when JAVA_HASH_MAP HASH_MAP_METHOD else false end end # Extend the dataset with the JDBC stored procedure methods. def prepare_extend_sproc(ds) ds.extend(StoredProcedureMethods) end # Split out from fetch rows to allow processing of JDBC result sets # that don't come from issuing an SQL string. def process_result_set(result, &block) # get column names meta = result.getMetaData cols = [] i = 0 meta.getColumnCount.times{cols << [output_identifier(meta.getColumnLabel(i+=1)), i]} columns = cols.map{|c| c.at(0)} @columns = columns ct = @convert_types if (ct.nil? ? db.convert_types : ct) cols.each{|c| c << nil} process_result_set_convert(cols, result, &block) else process_result_set_no_convert(cols, result, &block) end ensure result.close end # Use conversion procs to convert data retrieved # from the database. This has been optimized, the algorithm it uses # is roughly, for each column value in each row: # * check if the value is truthy (not false/nil) # * if not truthy, return object # * otherwise, see if a conversion method exists for # the column. All columns start with a nil conversion proc, # since unlike other adapters, Sequel doesn't get the type of # the column when parsing the column metadata. # * if a conversion proc is not false/nil, call it with the object # and return the result. # * if a conversion proc has already been looked up and doesn't # exist (false value), return object. # * if a conversion proc hasn't been looked up yet (nil value), # call convert_type_proc to get the conversion method. Cache # the result of as the column's conversion proc to speed up # later processing. If the conversion proc exists, call it # and return the result, otherwise, return the object. def process_result_set_convert(cols, result) while result.next row = {} cols.each do |n, i, p| v = result.getObject(i) row[n] = if v if p p.call(v) elsif p.nil? cols[i-1][2] = p = convert_type_proc(v) if p p.call(v) else v end else v end else v end end yield row end end # Yield rows without calling any conversion procs. This # may yield Java values and not ruby values. 
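      # Conversion can be skipped for speed, globally or per dataset
      # (table name hypothetical):
      #
      #   DB.convert_types = false
      #   DB[:t].first # values may be Java objects, e.g. Java::JavaSql::Timestamp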
def process_result_set_no_convert(cols, result) while result.next row = {} cols.each{|n, i| row[n] = result.getObject(i)} yield row end end end end end ruby-sequel-4.1.1/lib/sequel/adapters/jdbc/000077500000000000000000000000001220156535500205475ustar00rootroot00000000000000ruby-sequel-4.1.1/lib/sequel/adapters/jdbc/as400.rb000066400000000000000000000037151220156535500217310ustar00rootroot00000000000000Sequel.require 'adapters/jdbc/transactions' Sequel.require 'adapters/utils/emulate_offset_with_row_number' module Sequel module JDBC # Database and Dataset support for AS400 databases accessed via JDBC. module AS400 # Instance methods for AS400 Database objects accessed via JDBC. module DatabaseMethods extend Sequel::Database::ResetIdentifierMangling include Sequel::JDBC::Transactions TRANSACTION_BEGIN = 'Transaction.begin'.freeze TRANSACTION_COMMIT = 'Transaction.commit'.freeze TRANSACTION_ROLLBACK = 'Transaction.rollback'.freeze # AS400 uses the :as400 database type. def database_type :as400 end # TODO: Fix for AS400 def last_insert_id(conn, opts=OPTS) nil end # AS400 supports transaction isolation levels def supports_transaction_isolation_levels? true end private # Use JDBC connection's setAutoCommit to false to start transactions def begin_transaction(conn, opts=OPTS) set_transaction_isolation(conn, opts) super end end # Dataset class for AS400 datasets accessed via JDBC. class Dataset < JDBC::Dataset include EmulateOffsetWithRowNumber WILDCARD = Sequel::LiteralString.new('*').freeze FETCH_FIRST_ROW_ONLY = " FETCH FIRST ROW ONLY".freeze FETCH_FIRST = " FETCH FIRST ".freeze ROWS_ONLY = " ROWS ONLY".freeze # Modify the sql to limit the number of rows returned def select_limit_sql(sql) if l = @opts[:limit] if l == 1 sql << FETCH_FIRST_ROW_ONLY elsif l > 1 sql << FETCH_FIRST literal_append(sql, l) sql << ROWS_ONLY end end end def supports_window_functions? true end end end end end ruby-sequel-4.1.1/lib/sequel/adapters/jdbc/cubrid.rb000066400000000000000000000027451220156535500223540ustar00rootroot00000000000000Sequel.require 'adapters/shared/cubrid' Sequel.require 'adapters/jdbc/transactions' module Sequel module JDBC module Cubrid module DatabaseMethods extend Sequel::Database::ResetIdentifierMangling include Sequel::Cubrid::DatabaseMethods include Sequel::JDBC::Transactions def supports_savepoints? false end private # Get the last inserted id using LAST_INSERT_ID(). def last_insert_id(conn, opts=OPTS) if stmt = opts[:stmt] rs = stmt.getGeneratedKeys begin if rs.next rs.getInt(1) end rescue NativeException nil ensure rs.close end end end # Use execute instead of executeUpdate. def execute_prepared_statement_insert(stmt) stmt.execute end # Return generated keys for insert statements, and use # execute intead of executeUpdate as CUBRID doesn't # return generated keys in executeUpdate. def execute_statement_insert(stmt, sql) stmt.execute(sql, JavaSQL::Statement.RETURN_GENERATED_KEYS) end # Return generated keys for insert statements. def prepare_jdbc_statement(conn, sql, opts) opts[:type] == :insert ? 
conn.prepareStatement(sql, JavaSQL::Statement.RETURN_GENERATED_KEYS) : super end end end end end ruby-sequel-4.1.1/lib/sequel/adapters/jdbc/db2.rb000066400000000000000000000043021220156535500215420ustar00rootroot00000000000000Sequel.require 'adapters/shared/db2' Sequel.require 'adapters/jdbc/transactions' module Sequel module JDBC class Database # Alias the generic JDBC versions so they can be called directly later alias jdbc_schema_parse_table schema_parse_table alias jdbc_tables tables alias jdbc_views views alias jdbc_indexes indexes end # Database and Dataset instance methods for DB2 specific # support via JDBC. module DB2 # Database instance methods for DB2 databases accessed via JDBC. module DatabaseMethods extend Sequel::Database::ResetIdentifierMangling PRIMARY_KEY_INDEX_RE = /\Asql\d+\z/i.freeze include Sequel::DB2::DatabaseMethods include Sequel::JDBC::Transactions IDENTITY_VAL_LOCAL = "SELECT IDENTITY_VAL_LOCAL() FROM SYSIBM.SYSDUMMY1".freeze %w'schema_parse_table tables views indexes'.each do |s| class_eval("def #{s}(*a) jdbc_#{s}(*a) end", __FILE__, __LINE__) end private def set_ps_arg(cps, arg, i) case arg when Sequel::SQL::Blob cps.setString(i, arg) else super end end def last_insert_id(conn, opts=OPTS) statement(conn) do |stmt| sql = IDENTITY_VAL_LOCAL rs = log_yield(sql){stmt.executeQuery(sql)} rs.next rs.getInt(1) end end # Primary key indexes appear to be named sqlNNNN on DB2 def primary_key_index_re PRIMARY_KEY_INDEX_RE end end class Dataset < JDBC::Dataset include Sequel::DB2::DatasetMethods class ::Sequel::JDBC::Dataset::TYPE_TRANSLATOR def db2_clob(v) Sequel::SQL::Blob.new(v.getSubString(1, v.length)) end end DB2_CLOB_METHOD = TYPE_TRANSLATOR_INSTANCE.method(:db2_clob) private # Return clob as blob if use_clob_as_blob is true def convert_type_proc(v) case v when JAVA_SQL_CLOB ::Sequel::DB2::use_clob_as_blob ? DB2_CLOB_METHOD : super else super end end end end end end ruby-sequel-4.1.1/lib/sequel/adapters/jdbc/derby.rb000066400000000000000000000263761220156535500222170ustar00rootroot00000000000000Sequel.require 'adapters/jdbc/transactions' module Sequel module JDBC # Database and Dataset support for Derby databases accessed via JDBC. module Derby # Instance methods for Derby Database objects accessed via JDBC. module DatabaseMethods extend Sequel::Database::ResetIdentifierMangling PRIMARY_KEY_INDEX_RE = /\Asql\d+\z/i.freeze include ::Sequel::JDBC::Transactions # Derby doesn't support casting integer to varchar, only integer to char, # and char(254) appears to have the widest support (with char(255) failing). # This does add a bunch of extra spaces at the end, but those will be trimmed # elsewhere. def cast_type_literal(type) (type == String) ? 'CHAR(254)' : super end # Derby uses the :derby database type. def database_type :derby end # Derby uses an IDENTITY sequence for autoincrementing columns. def serial_primary_key_options {:primary_key => true, :type => :integer, :identity=>true, :start_with=>1} end # The SVN version of the database. def svn_version @svn_version ||= begin v = synchronize{|c| c.get_meta_data.get_database_product_version} v =~ /\((\d+)\)\z/ $1.to_i end end # Derby supports transaction DDL statements. def supports_transactional_ddl? true end private # Derby optimizes away Sequel's default check of SELECT NULL FROM table, # so use a SELECT * FROM table there. def _table_exists?(ds) ds.first end # Derby-specific syntax for renaming columns and changing a columns type/nullity. 
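        # For example (table/column names hypothetical, identifier quoting
        # and case folding elided):
        #
        #   DB.rename_column :items, :name, :title
        #   # RENAME COLUMN items.name TO title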
def alter_table_sql(table, op) case op[:op] when :rename_column "RENAME COLUMN #{quote_schema_table(table)}.#{quote_identifier(op[:name])} TO #{quote_identifier(op[:new_name])}" when :set_column_type # Derby is very limited in changing a column's type, so adding a new column and then dropping the existing column is # the best approach, as mentioned in the Derby documentation. temp_name = :x_sequel_temp_column_x [alter_table_sql(table, op.merge(:op=>:add_column, :name=>temp_name)), from(table).update_sql(temp_name=>::Sequel::SQL::Cast.new(op[:name], op[:type])), alter_table_sql(table, op.merge(:op=>:drop_column)), alter_table_sql(table, op.merge(:op=>:rename_column, :name=>temp_name, :new_name=>op[:name]))] when :set_column_null "ALTER TABLE #{quote_schema_table(table)} ALTER COLUMN #{quote_identifier(op[:name])} #{op[:null] ? 'NULL' : 'NOT NULL'}" else super end end # Derby doesn't allow specifying NULL for columns, only NOT NULL. def column_definition_null_sql(sql, column) sql << " NOT NULL" if column.fetch(:null, column[:allow_null]) == false end # Add NOT LOGGED for temporary tables to improve performance. def create_table_sql(name, generator, options) s = super s << ' NOT LOGGED' if options[:temp] s end # Insert data from the current table into the new table after # creating the table, since it is not possible to do it in one step. def create_table_as(name, sql, options) super from(name).insert(sql.is_a?(Dataset) ? sql : dataset.with_sql(sql)) end # Derby currently only requires WITH NO DATA, with a separate insert # to import data. def create_table_as_sql(name, sql, options) "#{create_table_prefix_sql(name, options)} AS #{sql} WITH NO DATA" end # Temporary table creation on Derby uses DECLARE instead of CREATE. def create_table_prefix_sql(name, options) if options[:temp] "DECLARE GLOBAL TEMPORARY TABLE #{quote_identifier(name)}" else super end end DATABASE_ERROR_REGEXPS = { /The statement was aborted because it would have caused a duplicate key value in a unique or primary key constraint or unique index/ => UniqueConstraintViolation, /violation of foreign key constraint/ => ForeignKeyConstraintViolation, /The check constraint .+ was violated/ => CheckConstraintViolation, /cannot accept a NULL value/ => NotNullConstraintViolation, /A lock could not be obtained due to a deadlock/ => SerializationFailure, }.freeze def database_error_regexps DATABASE_ERROR_REGEXPS end # Use IDENTITY_VAL_LOCAL() to get the last inserted id. def last_insert_id(conn, opts=OPTS) statement(conn) do |stmt| sql = 'SELECT IDENTITY_VAL_LOCAL() FROM sysibm.sysdummy1' rs = log_yield(sql){stmt.executeQuery(sql)} rs.next rs.getInt(1) end end # Handle nil values by using setNull with the correct parameter type. def set_ps_arg_nil(cps, i) cps.setNull(i, cps.getParameterMetaData.getParameterType(i)) end # Derby uses RENAME TABLE syntax to rename tables. def rename_table_sql(name, new_name) "RENAME TABLE #{quote_schema_table(name)} TO #{quote_schema_table(new_name)}" end # Primary key indexes appear to be named sqlNNNN on Derby def primary_key_index_re PRIMARY_KEY_INDEX_RE end # If an :identity option is present in the column, add the necessary IDENTITY SQL. def type_literal(column) if column[:identity] sql = "#{super} GENERATED BY DEFAULT AS IDENTITY" if sw = column[:start_with] sql << " (START WITH #{sw.to_i}" sql << " INCREMENT BY #{column[:increment_by].to_i}" if column[:increment_by] sql << ")" end sql else super end end # Derby uses clob for text types. def uses_clob_for_text? 
true end # The SQL query to issue to check if a connection is valid. def valid_connection_sql @valid_connection_sql ||= select(1).sql end end # Dataset class for Derby datasets accessed via JDBC. class Dataset < JDBC::Dataset PAREN_CLOSE = Dataset::PAREN_CLOSE PAREN_OPEN = Dataset::PAREN_OPEN OFFSET = Dataset::OFFSET CAST_STRING_OPEN = "RTRIM(".freeze BITCOMP_OPEN = "((0 - ".freeze BITCOMP_CLOSE = ") - 1)".freeze BLOB_OPEN = "CAST(X'".freeze BLOB_CLOSE = "' AS BLOB)".freeze HSTAR = "H*".freeze TIME_FORMAT = "'%H:%M:%S'".freeze DEFAULT_FROM = " FROM sysibm.sysdummy1".freeze ROWS = " ROWS".freeze FETCH_FIRST = " FETCH FIRST ".freeze ROWS_ONLY = " ROWS ONLY".freeze BOOL_TRUE_OLD = '(1 = 1)'.freeze BOOL_FALSE_OLD = '(1 = 0)'.freeze BOOL_TRUE = 'TRUE'.freeze BOOL_FALSE = 'FALSE'.freeze SELECT_CLAUSE_METHODS = clause_methods(:select, %w'select distinct columns from join where group having compounds order limit lock') EMULATED_FUNCTION_MAP = {:char_length=>'length'.freeze} # Derby doesn't support an expression between CASE and WHEN, # so remove # conditions. def case_expression_sql_append(sql, ce) super(sql, ce.with_merged_expression) end # If the type is String, trim the extra spaces since CHAR is used instead # of varchar. This can cause problems if you are casting a char/varchar to # a string and the ending whitespace is important. def cast_sql_append(sql, expr, type) if type == String sql << CAST_STRING_OPEN super sql << PAREN_CLOSE else super end end def complex_expression_sql_append(sql, op, args) case op when :% sql << complex_expression_arg_pairs(args){|a, b| "MOD(#{literal(a)}, #{literal(b)})"} when :&, :|, :^, :<<, :>> raise Error, "Derby doesn't support the #{op} operator" when :'B~' sql << BITCOMP_OPEN literal_append(sql, args.at(0)) sql << BITCOMP_CLOSE when :extract sql << args.at(0).to_s << PAREN_OPEN literal_append(sql, args.at(1)) sql << PAREN_CLOSE else super end end # Derby supports GROUP BY ROLLUP (but not CUBE) def supports_group_rollup? true end # Derby does not support IS TRUE. def supports_is_true? false end # Derby does not support IN/NOT IN with multiple columns def supports_multiple_column_in? false end private JAVA_SQL_CLOB = Java::JavaSQL::Clob class ::Sequel::JDBC::Dataset::TYPE_TRANSLATOR def derby_clob(v) v.getSubString(1, v.length) end end DERBY_CLOB_METHOD = TYPE_TRANSLATOR_INSTANCE.method(:derby_clob) # Handle clobs on Derby as strings. def convert_type_proc(v) if v.is_a?(JAVA_SQL_CLOB) DERBY_CLOB_METHOD else super end end # Derby needs a hex string casted to BLOB for blobs. def literal_blob_append(sql, v) sql << BLOB_OPEN << v.unpack(HSTAR).first << BLOB_CLOSE end # Derby needs the standard workaround to insert all default values into # a table with more than one column. def insert_supports_empty_values? false end # Derby uses an expression yielding false for false values. # Newer versions can use the FALSE literal, but the latest gem version cannot. def literal_false if db.svn_version >= 1040133 BOOL_FALSE else BOOL_FALSE_OLD end end # Derby handles fractional seconds in timestamps, but not in times def literal_sqltime(v) v.strftime(TIME_FORMAT) end # Derby uses an expression yielding true for true values. # Newer versions can use the TRUE literal, but the latest gem version cannot. def literal_true if db.svn_version >= 1040133 BOOL_TRUE else BOOL_TRUE_OLD end end # Derby doesn't support common table expressions. def select_clause_methods SELECT_CLAUSE_METHODS end # Use a default FROM table if the dataset does not contain a FROM table. 
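# A minimal sketch of the effect, assuming a Derby-backed Database
# object in DB:
#
#   DB.select(1).sql         # SELECT 1 FROM sysibm.sysdummy1
#   DB[:items].select(1).sql # SELECT 1 FROM items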
def select_from_sql(sql) if @opts[:from] super else sql << DEFAULT_FROM end end # Offset comes before limit in Derby def select_limit_sql(sql) if o = @opts[:offset] sql << OFFSET literal_append(sql, o) sql << ROWS end if l = @opts[:limit] sql << FETCH_FIRST literal_append(sql, l) sql << ROWS_ONLY end end end end end end ruby-sequel-4.1.1/lib/sequel/adapters/jdbc/firebird.rb000066400000000000000000000014141220156535500226620ustar00rootroot00000000000000Sequel.require 'adapters/shared/firebird' Sequel.require 'adapters/jdbc/transactions' module Sequel module JDBC # Database and Dataset instance methods for Firebird specific # support via JDBC. module Firebird # Database instance methods for Firebird databases accessed via JDBC. module DatabaseMethods extend Sequel::Database::ResetIdentifierMangling include Sequel::Firebird::DatabaseMethods include Sequel::JDBC::Transactions # Add the primary_keys and primary_key_sequences instance variables, # so we can get the correct return values for inserted rows. def self.extended(db) db.instance_eval do @primary_keys = {} end end end end end end ruby-sequel-4.1.1/lib/sequel/adapters/jdbc/h2.rb000066400000000000000000000176111220156535500214130ustar00rootroot00000000000000module Sequel module JDBC # Database and Dataset support for H2 databases accessed via JDBC. module H2 # Instance methods for H2 Database objects accessed via JDBC. module DatabaseMethods extend Sequel::Database::ResetIdentifierMangling PRIMARY_KEY_INDEX_RE = /\Aprimary_key/i.freeze # Commit an existing prepared transaction with the given transaction # identifier string. def commit_prepared_transaction(transaction_id) run("COMMIT TRANSACTION #{transaction_id}") end # H2 uses the :h2 database type. def database_type :h2 end # Rollback an existing prepared transaction with the given transaction # identifier string. def rollback_prepared_transaction(transaction_id) run("ROLLBACK TRANSACTION #{transaction_id}") end # H2 uses an IDENTITY type def serial_primary_key_options {:primary_key => true, :type => :identity, :identity=>true} end # H2 supports CREATE TABLE IF NOT EXISTS syntax. def supports_create_table_if_not_exists? true end # H2 supports prepared transactions def supports_prepared_transactions? true end # H2 supports savepoints def supports_savepoints? true end private # If the :prepare option is given and we aren't in a savepoint, # prepare the transaction for a two-phase commit. 
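# A hedged usage sketch ('tx1' is an arbitrary transaction id chosen
# for this example):
#
#   DB.transaction(:prepare=>'tx1'){DB[:items].insert(:id=>1)}
#   # ... later, possibly from another connection:
#   DB.commit_prepared_transaction('tx1')
#   # or: DB.rollback_prepared_transaction('tx1')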
def commit_transaction(conn, opts=OPTS) if (s = opts[:prepare]) && _trans(conn)[:savepoint_level] <= 1 log_connection_execute(conn, "PREPARE COMMIT #{s}") else super end end # H2 needs to add a primary key column as a constraint def alter_table_sql(table, op) case op[:op] when :add_column if (pk = op.delete(:primary_key)) || (ref = op.delete(:table)) sqls = [super(table, op)] sqls << "ALTER TABLE #{quote_schema_table(table)} ADD PRIMARY KEY (#{quote_identifier(op[:name])})" if pk if ref op[:table] = ref sqls << "ALTER TABLE #{quote_schema_table(table)} ADD FOREIGN KEY (#{quote_identifier(op[:name])}) #{column_references_sql(op)}" end sqls else super(table, op) end when :rename_column "ALTER TABLE #{quote_schema_table(table)} ALTER COLUMN #{quote_identifier(op[:name])} RENAME TO #{quote_identifier(op[:new_name])}" when :set_column_null "ALTER TABLE #{quote_schema_table(table)} ALTER COLUMN #{quote_identifier(op[:name])} SET#{' NOT' unless op[:null]} NULL" when :set_column_type if sch = schema(table) if cs = sch.each{|k, v| break v if k == op[:name]; nil} cs = cs.dup cs[:default] = cs[:ruby_default] op = cs.merge!(op) end end sql = "ALTER TABLE #{quote_schema_table(table)} ALTER COLUMN #{quote_identifier(op[:name])} #{type_literal(op)}" column_definition_order.each{|m| send(:"column_definition_#{m}_sql", sql, op)} sql when :drop_constraint if op[:type] == :primary_key "ALTER TABLE #{quote_schema_table(table)} DROP PRIMARY KEY" else super(table, op) end else super(table, op) end end # Default to a single connection for a memory database. def connection_pool_default_options o = super uri == 'jdbc:h2:mem:' ? o.merge(:max_connections=>1) : o end DATABASE_ERROR_REGEXPS = { /Unique index or primary key violation/ => UniqueConstraintViolation, /Referential integrity constraint violation/ => ForeignKeyConstraintViolation, /Check constraint violation/ => CheckConstraintViolation, /NULL not allowed for column/ => NotNullConstraintViolation, /Deadlock detected\. The current transaction was rolled back\./ => SerializationFailure, }.freeze def database_error_regexps DATABASE_ERROR_REGEXPS end # Use IDENTITY() to get the last inserted id. def last_insert_id(conn, opts=OPTS) statement(conn) do |stmt| sql = 'SELECT IDENTITY();' rs = log_yield(sql){stmt.executeQuery(sql)} rs.next rs.getInt(1) end end def primary_key_index_re PRIMARY_KEY_INDEX_RE end # H2 does not support named column constraints. def supports_named_column_constraints? false end # Use BIGINT IDENTITY for identity columns that use bigint, fixes # the case where primary_key :column, :type=>Bignum is used. def type_literal_generic_bignum(column) column[:identity] ? 'BIGINT IDENTITY' : super end end # Dataset class for H2 datasets accessed via JDBC. class Dataset < JDBC::Dataset SELECT_CLAUSE_METHODS = clause_methods(:select, %w'select distinct columns from join where group having compounds order limit') BITWISE_METHOD_MAP = {:& =>:BITAND, :| => :BITOR, :^ => :BITXOR} APOS = Dataset::APOS HSTAR = "H*".freeze BITCOMP_OPEN = "((0 - ".freeze BITCOMP_CLOSE = ") - 1)".freeze ILIKE_PLACEHOLDER = ["CAST(".freeze, " AS VARCHAR_IGNORECASE)".freeze].freeze TIME_FORMAT = "'%H:%M:%S'".freeze # Emulate the case insensitive LIKE operator and the bitwise operators. def complex_expression_sql_append(sql, op, args) case op when :ILIKE, :"NOT ILIKE" super(sql, (op == :ILIKE ? 
:LIKE : :"NOT LIKE"), [SQL::PlaceholderLiteralString.new(ILIKE_PLACEHOLDER, [args.at(0)]), args.at(1)]) when :&, :|, :^ sql << complex_expression_arg_pairs(args){|a, b| literal(SQL::Function.new(BITWISE_METHOD_MAP[op], a, b))} when :<< sql << complex_expression_arg_pairs(args){|a, b| "(#{literal(a)} * POWER(2, #{literal(b)}))"} when :>> sql << complex_expression_arg_pairs(args){|a, b| "(#{literal(a)} / POWER(2, #{literal(b)}))"} when :'B~' sql << BITCOMP_OPEN literal_append(sql, args.at(0)) sql << BITCOMP_CLOSE else super end end # H2 requires SQL standard datetimes def requires_sql_standard_datetimes? true end # H2 doesn't support IS TRUE def supports_is_true? false end # H2 doesn't support JOIN USING def supports_join_using? false end # H2 doesn't support multiple columns in IN/NOT IN def supports_multiple_column_in? false end private #JAVA_H2_CLOB = Java::OrgH2Jdbc::JdbcClob class ::Sequel::JDBC::Dataset::TYPE_TRANSLATOR def h2_clob(v) v.getSubString(1, v.length) end end H2_CLOB_METHOD = TYPE_TRANSLATOR_INSTANCE.method(:h2_clob) # Handle H2 specific clobs as strings. def convert_type_proc(v) if v.is_a?(Java::OrgH2Jdbc::JdbcClob) H2_CLOB_METHOD else super end end # H2 expects hexadecimal strings for blob values def literal_blob_append(sql, v) sql << APOS << v.unpack(HSTAR).first << APOS end # H2 handles fractional seconds in timestamps, but not in times def literal_sqltime(v) v.strftime(TIME_FORMAT) end def select_clause_methods SELECT_CLAUSE_METHODS end end end end end ruby-sequel-4.1.1/lib/sequel/adapters/jdbc/hsqldb.rb000066400000000000000000000167151220156535500223630ustar00rootroot00000000000000Sequel.require 'adapters/jdbc/transactions' module Sequel module JDBC # Database and Dataset support for HSQLDB databases accessed via JDBC. module HSQLDB # Instance methods for HSQLDB Database objects accessed via JDBC. module DatabaseMethods extend Sequel::Database::ResetIdentifierMangling PRIMARY_KEY_INDEX_RE = /\Asys_idx_sys_pk_/i.freeze include ::Sequel::JDBC::Transactions # HSQLDB uses the :hsqldb database type. def database_type :hsqldb end # HSQLDB uses an IDENTITY sequence as the default value for primary # key columns. def serial_primary_key_options {:primary_key => true, :type => :integer, :identity=>true, :start_with=>1} end # The version of the database, as an integer (e.g 2.2.5 -> 20205) def db_version @db_version ||= begin v = get{DATABASE_VERSION(){}} if v =~ /(\d+)\.(\d+)\.(\d+)/ $1.to_i * 10000 + $2.to_i * 100 + $3.to_i end end end private # HSQLDB specific SQL for renaming columns, and changing column types and/or nullity. def alter_table_sql(table, op) case op[:op] when :rename_column "ALTER TABLE #{quote_schema_table(table)} ALTER COLUMN #{quote_identifier(op[:name])} RENAME TO #{quote_identifier(op[:new_name])}" when :set_column_type "ALTER TABLE #{quote_schema_table(table)} ALTER COLUMN #{quote_identifier(op[:name])} SET DATA TYPE #{type_literal(op)}" when :set_column_null "ALTER TABLE #{quote_schema_table(table)} ALTER COLUMN #{quote_identifier(op[:name])} SET #{op[:null] ? 'NULL' : 'NOT NULL'}" else super end end # HSQLDB requires parens around the SELECT, and the WITH DATA syntax. 
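# A rough sketch of the statement produced, assuming an HSQLDB-backed
# Database object in DB (table names are made up):
#
#   DB.create_table(:items2, :as=>DB[:items])
#   # CREATE TABLE items2 AS (SELECT * FROM items) WITH DATA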
def create_table_as_sql(name, sql, options) "#{create_table_prefix_sql(name, options)} AS (#{sql}) WITH DATA" end DATABASE_ERROR_REGEXPS = { /integrity constraint violation: unique constraint or index violation/ => UniqueConstraintViolation, /integrity constraint violation: foreign key/ => ForeignKeyConstraintViolation, /integrity constraint violation: check constraint/ => CheckConstraintViolation, /integrity constraint violation: NOT NULL check constraint/ => NotNullConstraintViolation, /serialization failure/ => SerializationFailure, }.freeze def database_error_regexps DATABASE_ERROR_REGEXPS end # Use IDENTITY() to get the last inserted id. def last_insert_id(conn, opts=OPTS) statement(conn) do |stmt| sql = 'CALL IDENTITY()' rs = log_yield(sql){stmt.executeQuery(sql)} rs.next rs.getInt(1) end end # Primary key indexes appear to start with sys_idx_sys_pk_ on HSQLDB def primary_key_index_re PRIMARY_KEY_INDEX_RE end # If an :identity option is present in the column, add the necessary IDENTITY SQL. # It's possible to use an IDENTITY type, but that defaults the sequence to start # at 0 instead of 1, and we don't want that. def type_literal(column) if column[:identity] sql = "#{super} GENERATED BY DEFAULT AS IDENTITY" if sw = column[:start_with] sql << " (START WITH #{sw.to_i}" sql << " INCREMENT BY #{column[:increment_by].to_i}" if column[:increment_by] sql << ")" end sql else super end end # HSQLDB uses clob for text types. def uses_clob_for_text? true end end # Dataset class for HSQLDB datasets accessed via JDBC. class Dataset < JDBC::Dataset BITWISE_METHOD_MAP = {:& =>:BITAND, :| => :BITOR, :^ => :BITXOR} BOOL_TRUE = 'TRUE'.freeze BOOL_FALSE = 'FALSE'.freeze # HSQLDB does support common table expressions, but the support is broken. # CTEs operate more like temporary tables or views, lasting longer than the duration of the expression. # CTEs in earlier queries might take precedence over CTEs with the same name in later queries. # Also, if any CTE is recursive, all CTEs must be recursive. # If you want to use CTEs with HSQLDB, you'll have to manually modify the dataset to allow it. SELECT_CLAUSE_METHODS = clause_methods(:select, %w'select distinct columns from join where group having compounds order limit lock') SQL_WITH_RECURSIVE = "WITH RECURSIVE ".freeze APOS = Dataset::APOS HSTAR = "H*".freeze BLOB_OPEN = "X'".freeze BITCOMP_OPEN = "((0 - ".freeze BITCOMP_CLOSE = ") - 1)".freeze DEFAULT_FROM = " FROM (VALUES (0))".freeze TIME_FORMAT = "'%H:%M:%S'".freeze # Handle HSQLDB specific case insensitive LIKE and bitwise operator support. def complex_expression_sql_append(sql, op, args) case op when :ILIKE, :"NOT ILIKE" super(sql, (op == :ILIKE ? :LIKE : :"NOT LIKE"), args.map{|v| SQL::Function.new(:ucase, v)}) when :&, :|, :^ op = BITWISE_METHOD_MAP[op] sql << complex_expression_arg_pairs(args){|a, b| literal(SQL::Function.new(op, a, b))} when :<< sql << complex_expression_arg_pairs(args){|a, b| "(#{literal(a)} * POWER(2, #{literal(b)}))"} when :>> sql << complex_expression_arg_pairs(args){|a, b| "(#{literal(a)} / POWER(2, #{literal(b)}))"} when :% sql << complex_expression_arg_pairs(args){|a, b| "MOD(#{literal(a)}, #{literal(b)})"} when :'B~' sql << BITCOMP_OPEN literal_append(sql, args.at(0)) sql << BITCOMP_CLOSE else super end end # HSQLDB requires recursive CTEs to have column aliases. def recursive_cte_requires_column_aliases? true end # HSQLDB requires SQL standard datetimes in some places. def requires_sql_standard_datetimes? true end # HSQLDB does not support IS TRUE. 
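# A hedged sketch of the effect on filter SQL, assuming an HSQLDB-backed
# Database object in DB:
#
#   DB[:items].where(:active=>true).sql
#   # SELECT * FROM items WHERE (active = TRUE)
#   # (a database supporting IS TRUE would emit: WHERE (active IS TRUE))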
def supports_is_true? false end private # Use string in hex format for blob data. def literal_blob_append(sql, v) sql << BLOB_OPEN << v.unpack(HSTAR).first << APOS end # HSQLDB uses FALSE for false values. def literal_false BOOL_FALSE end # HSQLDB handles fractional seconds in timestamps, but not in times def literal_sqltime(v) v.strftime(TIME_FORMAT) end # HSQLDB uses TRUE for true values. def literal_true BOOL_TRUE end # HSQLDB does not support CTEs well enough for Sequel to enable support for them. def select_clause_methods SELECT_CLAUSE_METHODS end # Use a default FROM table if the dataset does not contain a FROM table. def select_from_sql(sql) if @opts[:from] super else sql << DEFAULT_FROM end end # Use WITH RECURSIVE instead of WITH if any of the CTEs is recursive def select_with_sql_base opts[:with].any?{|w| w[:recursive]} ? SQL_WITH_RECURSIVE : super end end end end end ruby-sequel-4.1.1/lib/sequel/adapters/jdbc/informix.rb000066400000000000000000000010471220156535500227310ustar00rootroot00000000000000Sequel.require 'adapters/shared/informix' module Sequel module JDBC # Database and Dataset instance methods for Informix specific # support via JDBC. module Informix # Database instance methods for Informix databases accessed via JDBC. module DatabaseMethods extend Sequel::Database::ResetIdentifierMangling include Sequel::Informix::DatabaseMethods private # TODO: implement def last_insert_id(conn, opts=OPTS) nil end end end end end ruby-sequel-4.1.1/lib/sequel/adapters/jdbc/jtds.rb000066400000000000000000000024201220156535500220360ustar00rootroot00000000000000Sequel.require 'adapters/jdbc/mssql' module Sequel module JDBC # Database and Dataset instance methods for JTDS specific # support via JDBC. module JTDS module DatabaseMethods extend Sequel::Database::ResetIdentifierMangling include Sequel::JDBC::MSSQL::DatabaseMethods private # JTDS exception handling with SQLState is less accurate than with regexps. def database_exception_use_sqlstates? false end # Handle nil values by using setNull with the correct parameter type. def set_ps_arg_nil(cps, i) cps.setNull(i, cps.getParameterMetaData.getParameterType(i)) end end # Dataset class for JTDS datasets accessed via JDBC. class Dataset < JDBC::Dataset include Sequel::MSSQL::DatasetMethods class ::Sequel::JDBC::Dataset::TYPE_TRANSLATOR def jtds_clob(v) v.getSubString(1, v.length) end end JTDS_CLOB_METHOD = TYPE_TRANSLATOR_INSTANCE.method(:jtds_clob) # Handle CLOB types retrieved via JTDS. def convert_type_proc(v) if v.is_a?(Java::NetSourceforgeJtdsJdbc::ClobImpl) JTDS_CLOB_METHOD else super end end end end end end ruby-sequel-4.1.1/lib/sequel/adapters/jdbc/mssql.rb000066400000000000000000000027201220156535500222340ustar00rootroot00000000000000Sequel.require 'adapters/shared/mssql' module Sequel module JDBC class Database # Alias the generic JDBC version so it can be called directly later alias jdbc_schema_parse_table schema_parse_table end # Database and Dataset instance methods for MSSQL specific # support via JDBC. module MSSQL # Database instance methods for MSSQL databases accessed via JDBC. module DatabaseMethods extend Sequel::Database::ResetIdentifierMangling PRIMARY_KEY_INDEX_RE = /\Apk__/i.freeze ATAT_IDENTITY = 'SELECT @@IDENTITY'.freeze SCOPE_IDENTITY = 'SELECT SCOPE_IDENTITY()'.freeze include Sequel::MSSQL::DatabaseMethods private # Get the last inserted id using SCOPE_IDENTITY(). def last_insert_id(conn, opts=OPTS) statement(conn) do |stmt| sql = opts[:prepared] ? 
ATAT_IDENTITY : SCOPE_IDENTITY rs = log_yield(sql){stmt.executeQuery(sql)} rs.next rs.getInt(1) end end # Call the generic JDBC version instead of MSSQL version, # since the JDBC version handles primary keys. def schema_parse_table(table, opts=OPTS) jdbc_schema_parse_table(table, opts) end # Primary key indexes appear to start with pk__ on MSSQL def primary_key_index_re PRIMARY_KEY_INDEX_RE end end end end end ruby-sequel-4.1.1/lib/sequel/adapters/jdbc/mysql.rb000066400000000000000000000046641220156535500222510ustar00rootroot00000000000000Sequel.require 'adapters/shared/mysql' module Sequel module JDBC # Database and Dataset instance methods for MySQL specific # support via JDBC. module MySQL # Database instance methods for MySQL databases accessed via JDBC. module DatabaseMethods extend Sequel::Database::ResetIdentifierMangling include Sequel::MySQL::DatabaseMethods LAST_INSERT_ID = 'SELECT LAST_INSERT_ID()'.freeze private # The database name for the given database. Need to parse it out # of the connection string, since the JDBC driver does no parsing on the # given connection string by default. def database_name u = URI.parse(uri.sub(/\Ajdbc:/, '')) (m = /\/(.*)/.match(u.path)) && m[1] end # MySQL exception handling with SQLState is less accurate than with regexps. def database_exception_use_sqlstates? false end # Get the last inserted id using LAST_INSERT_ID(). def last_insert_id(conn, opts=OPTS) if stmt = opts[:stmt] rs = stmt.getGeneratedKeys begin if rs.next rs.getInt(1) else 0 end ensure rs.close end else statement(conn) do |stmt| rs = stmt.executeQuery(LAST_INSERT_ID) rs.next rs.getInt(1) end end end # MySQL 5.1.12 JDBC adapter requires generated keys # and previous versions don't mind. def execute_statement_insert(stmt, sql) stmt.executeUpdate(sql, JavaSQL::Statement.RETURN_GENERATED_KEYS) end # Return generated keys for insert statements. def prepare_jdbc_statement(conn, sql, opts) opts[:type] == :insert ? conn.prepareStatement(sql, JavaSQL::Statement.RETURN_GENERATED_KEYS) : super end # Convert tinyint(1) type to boolean def schema_column_type(db_type) db_type =~ /\Atinyint\(1\)/ ? :boolean : super end # Run the default connection setting SQL statements. # Apply the connection setting SQLs for every new connection. def setup_connection(conn) mysql_connection_setting_sqls.each{|sql| statement(conn){|s| log_yield(sql){s.execute(sql)}}} super end end end end end ruby-sequel-4.1.1/lib/sequel/adapters/jdbc/oracle.rb000066400000000000000000000073061220156535500223470ustar00rootroot00000000000000Sequel.require 'adapters/shared/oracle' Sequel.require 'adapters/jdbc/transactions' module Sequel module JDBC # Database and Dataset support for Oracle databases accessed via JDBC. module Oracle # Instance methods for Oracle Database objects accessed via JDBC. module DatabaseMethods extend Sequel::Database::ResetIdentifierMangling PRIMARY_KEY_INDEX_RE = /\Asys_/i.freeze include Sequel::Oracle::DatabaseMethods include Sequel::JDBC::Transactions def self.extended(db) db.instance_eval do @autosequence = opts[:autosequence] @primary_key_sequences = {} end end private # Oracle exception handling with SQLState is less accurate than with regexps. def database_exception_use_sqlstates? 
false end def disconnect_error?(exception, opts) super || exception.message =~ /\AClosed Connection/ end def last_insert_id(conn, opts) unless sequence = opts[:sequence] if t = opts[:table] sequence = sequence_for_table(t) end end if sequence sql = "SELECT #{literal(sequence)}.currval FROM dual" statement(conn) do |stmt| begin rs = log_yield(sql){stmt.executeQuery(sql)} rs.next rs.getInt(1) rescue java.sql.SQLException nil end end end end # Primary key indexes appear to start with sys_ on Oracle def primary_key_index_re PRIMARY_KEY_INDEX_RE end def schema_parse_table(*) sch = super sch.each do |c, s| if s[:type] == :decimal && s[:scale] == -127 s[:type] = :integer elsif s[:db_type] == 'DATE' s[:type] = :datetime end end sch end def schema_parse_table_skip?(h, schema) super || (h[:table_schem] != current_user unless schema) end # As of Oracle 9.2, releasing savepoints is no longer supported. def supports_releasing_savepoints? false end end # Dataset class for Oracle datasets accessed via JDBC. class Dataset < JDBC::Dataset include Sequel::Oracle::DatasetMethods private JAVA_BIG_DECIMAL = ::Sequel::JDBC::Dataset::JAVA_BIG_DECIMAL JAVA_BIG_DECIMAL_CONSTRUCTOR = java.math.BigDecimal.java_class.constructor(Java::long).method(:new_instance) class ::Sequel::JDBC::Dataset::TYPE_TRANSLATOR def oracle_decimal(v) if v.scale == 0 i = v.long_value if v.equals(JAVA_BIG_DECIMAL_CONSTRUCTOR.call(i)) i else decimal(v) end else decimal(v) end end end ORACLE_DECIMAL_METHOD = TYPE_TRANSLATOR_INSTANCE.method(:oracle_decimal) def convert_type_oracle_timestamp(v) db.to_application_timestamp(v.to_string) end def convert_type_oracle_timestamptz(v) convert_type_oracle_timestamp(db.synchronize(@opts[:server]){|c| v.timestampValue(c)}) end def convert_type_proc(v) case v when JAVA_BIG_DECIMAL ORACLE_DECIMAL_METHOD when Java::OracleSql::TIMESTAMPTZ method(:convert_type_oracle_timestamptz) when Java::OracleSql::TIMESTAMP method(:convert_type_oracle_timestamp) else super end end end end end end ruby-sequel-4.1.1/lib/sequel/adapters/jdbc/postgresql.rb000066400000000000000000000141441220156535500233030ustar00rootroot00000000000000Sequel.require 'adapters/shared/postgres' module Sequel Postgres::CONVERTED_EXCEPTIONS << NativeException module JDBC # Adapter, Database, and Dataset support for accessing a PostgreSQL # database via JDBC. module Postgres # Methods to add to Database instances that access PostgreSQL via # JDBC. module DatabaseMethods extend Sequel::Database::ResetIdentifierMangling include Sequel::Postgres::DatabaseMethods # Add the primary_keys and primary_key_sequences instance variables, # so we can get the correct return values for inserted rows. def self.extended(db) super db.send(:initialize_postgres_adapter) end # See Sequel::Postgres::Adapter#copy_into def copy_into(table, opts=OPTS) data = opts[:data] data = Array(data) if data.is_a?(String) if block_given? && data raise Error, "Cannot provide both a :data option and a block to copy_into" elsif !block_given? && !data raise Error, "Must provide either a :data option or a block to copy_into" end synchronize(opts) do |conn| begin copy_manager = org.postgresql.copy.CopyManager.new(conn) copier = copy_manager.copy_in(copy_into_sql(table, opts)) if block_given? 
while buf = yield copier.writeToCopy(buf.to_java_bytes, 0, buf.length) end else data.each { |d| copier.writeToCopy(d.to_java_bytes, 0, d.length) } end rescue Exception => e copier.cancelCopy raise ensure unless e begin copier.endCopy rescue NativeException => e2 raise_error(e2) end end end end end # See Sequel::Postgres::Adapter#copy_table def copy_table(table, opts=OPTS) synchronize(opts[:server]) do |conn| copy_manager = org.postgresql.copy.CopyManager.new(conn) copier = copy_manager.copy_out(copy_table_sql(table, opts)) begin if block_given? while buf = copier.readFromCopy yield(String.from_java_bytes(buf)) end nil else b = '' while buf = copier.readFromCopy b << String.from_java_bytes(buf) end b end ensure raise DatabaseDisconnectError, "disconnecting as a partial COPY may leave the connection in an unusable state" if buf end end end private # Use setNull for nil arguments as the default behavior of setString # with nil doesn't appear to work correctly on PostgreSQL. def set_ps_arg_nil(cps, i) cps.setNull(i, JavaSQL::Types::NULL) end # Execute the connection configuration SQL queries on the connection. def setup_connection(conn) conn = super(conn) statement(conn) do |stmt| connection_configuration_sqls.each{|sql| log_yield(sql){stmt.execute(sql)}} end conn end end # Dataset subclass used for datasets that connect to PostgreSQL via JDBC. class Dataset < JDBC::Dataset include Sequel::Postgres::DatasetMethods APOS = Dataset::APOS class ::Sequel::JDBC::Dataset::TYPE_TRANSLATOR # Convert Java::OrgPostgresqlUtil::PGobject to ruby strings def pg_object(v) v.to_string end end # Handle conversions of PostgreSQL array instances class PGArrayConverter # Set the method that will return the correct conversion # proc for elements of this array. def initialize(meth) @conversion_proc_method = meth @conversion_proc = nil end # Convert Java::OrgPostgresqlJdbc4::Jdbc4Array to ruby arrays def call(v) _pg_array(v.array) end private # Handle multi-dimensional Java arrays by recursively mapping them # to ruby arrays of ruby values. def _pg_array(v) v.to_ary.map do |i| if i.respond_to?(:to_ary) _pg_array(i) elsif i if @conversion_proc.nil? @conversion_proc = @conversion_proc_method.call(i) end if @conversion_proc @conversion_proc.call(i) else i end else i end end end end PG_OBJECT_METHOD = TYPE_TRANSLATOR_INSTANCE.method(:pg_object) # Add the shared PostgreSQL prepared statement methods def prepare(type, name=nil, *values) ps = to_prepared_statement(type, values) ps.extend(JDBC::Dataset::PreparedStatementMethods) ps.extend(::Sequel::Postgres::DatasetMethods::PreparedStatementMethods) if name ps.prepared_statement_name = name db.set_prepared_statement(name, ps) end ps end private # Handle PostgreSQL array and object types. Object types are just # turned into strings, similarly to how the native adapter treats # the types. 
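# An illustrative sketch of the conversions performed here, assuming a
# jdbc/postgresql-backed Database object in DB (the hstore example also
# assumes the hstore extension is installed in the database):
#
#   DB.get(Sequel.lit("ARRAY[1,2,3]"))   # => [1, 2, 3] (ruby Array)
#   DB.get(Sequel.lit("'a=>1'::hstore")) # => "\"a\"=>\"1\"" (plain String)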
def convert_type_proc(v) case v when Java::OrgPostgresqlJdbc4::Jdbc4Array PGArrayConverter.new(method(:convert_type_proc)) when Java::OrgPostgresqlUtil::PGobject PG_OBJECT_METHOD else super end end # Literalize strings similar to the native postgres adapter def literal_string_append(sql, v) sql << APOS << db.synchronize(@opts[:server]){|c| c.escape_string(v)} << APOS end end end end end ruby-sequel-4.1.1/lib/sequel/adapters/jdbc/progress.rb000066400000000000000000000012151220156535500227370ustar00rootroot00000000000000Sequel.require 'adapters/shared/progress' Sequel.require 'adapters/jdbc/transactions' module Sequel module JDBC # Database and Dataset instance methods for Progress v9 specific # support via JDBC. module Progress # Database instance methods for Progress databases accessed via JDBC. module DatabaseMethods extend Sequel::Database::ResetIdentifierMangling include Sequel::Progress::DatabaseMethods include Sequel::JDBC::Transactions # Progress DatabaseMetaData doesn't even implement supportsSavepoints() def supports_savepoints? false end end end end end ruby-sequel-4.1.1/lib/sequel/adapters/jdbc/sqlite.rb000066400000000000000000000041551220156535500224020ustar00rootroot00000000000000Sequel.require 'adapters/shared/sqlite' module Sequel module JDBC # Database and Dataset support for SQLite databases accessed via JDBC. module SQLite # Instance methods for SQLite Database objects accessed via JDBC. module DatabaseMethods extend Sequel::Database::ResetIdentifierMangling include Sequel::SQLite::DatabaseMethods LAST_INSERT_ROWID = 'SELECT last_insert_rowid()'.freeze FOREIGN_KEY_ERROR_RE = /query does not return ResultSet/.freeze # Swallow pointless exceptions when the foreign key list pragma # doesn't return any rows. def foreign_key_list(table, opts=OPTS) super rescue Sequel::DatabaseError => e raise unless e.message =~ FOREIGN_KEY_ERROR_RE [] end # Swallow pointless exceptions when the index list pragma # doesn't return any rows. def indexes(table, opts=OPTS) super rescue Sequel::DatabaseError => e raise unless e.message =~ FOREIGN_KEY_ERROR_RE {} end private DATABASE_ERROR_REGEXPS = Sequel::SQLite::DatabaseMethods::DATABASE_ERROR_REGEXPS.merge(/Abort due to constraint violation/ => ConstraintViolation).freeze def database_error_regexps DATABASE_ERROR_REGEXPS end # Use last_insert_rowid() to get the last inserted id. def last_insert_id(conn, opts=OPTS) statement(conn) do |stmt| rs = stmt.executeQuery(LAST_INSERT_ROWID) rs.next rs.getInt(1) end end # Default to a single connection for a memory database. def connection_pool_default_options o = super uri == 'jdbc:sqlite::memory:' ? o.merge(:max_connections=>1) : o end # Execute the connection pragmas on the connection. def setup_connection(conn) conn = super(conn) statement(conn) do |stmt| connection_pragmas.each{|s| log_yield(s){stmt.execute(s)}} end conn end end end end end ruby-sequel-4.1.1/lib/sequel/adapters/jdbc/sqlserver.rb000066400000000000000000000045301220156535500231240ustar00rootroot00000000000000Sequel.require 'adapters/jdbc/mssql' module Sequel module JDBC # Database and Dataset instance methods for SQLServer specific # support via JDBC. module SQLServer # Database instance methods for SQLServer databases accessed via JDBC. module DatabaseMethods extend Sequel::Database::ResetIdentifierMangling include Sequel::JDBC::MSSQL::DatabaseMethods # Work around a bug in SQL Server JDBC Driver 3.0, where the metadata # for the getColumns result set specifies an incorrect type for the # IS_AUTOINCREMENT column. 
The column is a string, but the type is # specified as a short. This causes getObject() to throw a # com.microsoft.sqlserver.jdbc.SQLServerException: "The conversion # from char to SMALLINT is unsupported." Using getString() rather # than getObject() for this column avoids the problem. # Reference: http://social.msdn.microsoft.com/Forums/en/sqldataaccess/thread/20df12f3-d1bf-4526-9daa-239a83a8e435 module MetadataDatasetMethods def process_result_set_convert(cols, result) while result.next row = {} cols.each do |n, i, p| v = (n == :is_autoincrement ? result.getString(i) : result.getObject(i)) row[n] = if v if p p.call(v) elsif p.nil? cols[i-1][2] = p = convert_type_proc(v) if p p.call(v) else v end else v end else v end end yield row end end def process_result_set_no_convert(cols, result) while result.next row = {} cols.each do |n, i| row[n] = (n == :is_autoincrement ? result.getString(i) : result.getObject(i)) end yield row end end end def metadata_dataset super.extend(MetadataDatasetMethods) end private def disconnect_error?(exception, opts) super || (exception.message =~ /connection is closed/) end end end end end ruby-sequel-4.1.1/lib/sequel/adapters/jdbc/transactions.rb000066400000000000000000000102251220156535500236040ustar00rootroot00000000000000module Sequel module JDBC module Transactions TRANSACTION_BEGIN = 'Transaction.begin'.freeze TRANSACTION_COMMIT = 'Transaction.commit'.freeze TRANSACTION_RELEASE_SP = 'Transaction.release_savepoint'.freeze TRANSACTION_ROLLBACK = 'Transaction.rollback'.freeze TRANSACTION_ROLLBACK_SP = 'Transaction.rollback_savepoint'.freeze TRANSACTION_SAVEPOINT= 'Transaction.savepoint'.freeze # Check the JDBC DatabaseMetaData for savepoint support def supports_savepoints? return @supports_savepoints if defined?(@supports_savepoints) @supports_savepoints = synchronize{|c| c.getMetaData.supports_savepoints} end # Check the JDBC DatabaseMetaData for support for serializable isolation, # since that's the value most people will use. def supports_transaction_isolation_levels? synchronize{|conn| conn.getMetaData.supportsTransactionIsolationLevel(JavaSQL::Connection::TRANSACTION_SERIALIZABLE)} end private JDBC_TRANSACTION_ISOLATION_LEVELS = {:uncommitted=>JavaSQL::Connection::TRANSACTION_READ_UNCOMMITTED, :committed=>JavaSQL::Connection::TRANSACTION_READ_COMMITTED, :repeatable=>JavaSQL::Connection::TRANSACTION_REPEATABLE_READ, :serializable=>JavaSQL::Connection::TRANSACTION_SERIALIZABLE} # Set the transaction isolation level on the given connection using # the JDBC API. def set_transaction_isolation(conn, opts) level = opts.fetch(:isolation, transaction_isolation_level) if (jdbc_level = JDBC_TRANSACTION_ISOLATION_LEVELS[level]) && conn.getMetaData.supportsTransactionIsolationLevel(jdbc_level) _trans(conn)[:original_jdbc_isolation_level] = conn.getTransactionIsolation log_yield("Transaction.isolation_level = #{level}"){conn.setTransactionIsolation(jdbc_level)} end end # Most JDBC drivers that support savepoints support releasing them. def supports_releasing_savepoints? true end # Use JDBC connection's setAutoCommit to false to start transactions def begin_transaction(conn, opts=OPTS) if supports_savepoints? 
th = _trans(conn) if sps = th[:savepoints] sps << log_yield(TRANSACTION_SAVEPOINT){conn.set_savepoint} else log_yield(TRANSACTION_BEGIN){conn.setAutoCommit(false)} th[:savepoints] = [] set_transaction_isolation(conn, opts) end th[:savepoint_level] += 1 else log_yield(TRANSACTION_BEGIN){conn.setAutoCommit(false)} set_transaction_isolation(conn, opts) end end # Use JDBC connection's commit method to commit transactions def commit_transaction(conn, opts=OPTS) if supports_savepoints? sps = _trans(conn)[:savepoints] if sps.empty? log_yield(TRANSACTION_COMMIT){conn.commit} elsif supports_releasing_savepoints? log_yield(TRANSACTION_RELEASE_SP){supports_releasing_savepoints? ? conn.release_savepoint(sps.last) : sps.last} end else log_yield(TRANSACTION_COMMIT){conn.commit} end end # Use JDBC connection's setAutoCommit to true to enable non-transactional behavior def remove_transaction(conn, committed) if jdbc_level = _trans(conn)[:original_jdbc_isolation_level] conn.setTransactionIsolation(jdbc_level) end if supports_savepoints? sps = _trans(conn)[:savepoints] conn.setAutoCommit(true) if sps.empty? sps.pop else conn.setAutoCommit(true) end ensure super end # Use JDBC connection's rollback method to rollback transactions def rollback_transaction(conn, opts=OPTS) if supports_savepoints? sps = _trans(conn)[:savepoints] if sps.empty? log_yield(TRANSACTION_ROLLBACK){conn.rollback} else log_yield(TRANSACTION_ROLLBACK_SP){conn.rollback(sps.last)} end else log_yield(TRANSACTION_ROLLBACK){conn.rollback} end end end end end ruby-sequel-4.1.1/lib/sequel/adapters/mock.rb000066400000000000000000000257451220156535500211360ustar00rootroot00000000000000module Sequel module Mock # Connection class for Sequel's mock adapter. class Connection # Sequel::Mock::Database object that created this connection attr_reader :db # Shard this connection operates on, when using Sequel's # sharding support (always :default for databases not using # sharding). attr_reader :server # The specific database options for this connection. attr_reader :opts # Store the db, server, and opts. def initialize(db, server, opts) @db = db @server = server @opts = opts end # Delegate to the db's #_execute method. def execute(sql) @db.send(:_execute, self, sql, :log=>false) end end # Database class for Sequel's mock adapter. class Database < Sequel::Database set_adapter_scheme :mock # Map of database type names to module names, used for handling # mock adapters for specific database types. SHARED_ADAPTERS = { 'access'=>'Access', 'db2'=>'DB2', 'firebird'=>'Firebird', 'informix'=>'Informix', 'mssql'=>'MSSQL', 'mysql'=>'MySQL', 'oracle'=>'Oracle', 'postgres'=>'Postgres', 'sqlite'=>'SQLite' } # Procs to run for specific database types to get the mock adapter # to work with the shared adapter SHARED_ADAPTER_SETUP = { 'postgres' => lambda do |db| db.instance_eval do @server_version = 90103 initialize_postgres_adapter end db.extend(Module.new do def bound_variable_arg(arg, conn) arg end def primary_key(table) :id end end) end, 'oracle' => lambda do |db| db.instance_eval do @primary_key_sequences = {} end end, 'mssql' => lambda do |db| db.instance_eval do @server_version = 10000000 end end } # Set the autogenerated primary key integer # to be returned when running an insert query. # Argument types supported: # # nil :: Return nil for all inserts # Integer :: Starting integer for next insert, with # further inserts getting an incremented # value # Array :: First insert gets the first value in the # array, second gets the second value, etc. 
# Proc :: Called with the insert SQL query, uses # the value returned # Class :: Should be an Exception subclass, will create a new # instance and raise it wrapped in a DatabaseError. attr_writer :autoid # Set the columns to set in the dataset when the dataset fetches # rows. Argument types supported: # nil :: Set no columns # Array of Symbols: Used for all datasets # Array (otherwise): First retrieval gets the first value in the # array, second gets the second value, etc. # Proc :: Called with the select SQL query, uses the value # returned, which should be an array of symbols attr_writer :columns # Set the hashes to yield by execute when retrieving rows. # Argument types supported: # # nil :: Yield no rows # Hash :: Always yield a single row with this hash # Array of Hashes :: Yield separately for each hash in this array # Array (otherwise) :: First retrieval gets the first value # in the array, second gets the second value, etc. # Proc :: Called with the select SQL query, uses # the value returned, which should be a hash or # array of hashes. # Class :: Should be an Exception subclass, will create a new # instance and raise it wrapped in a DatabaseError. attr_writer :fetch # Set the number of rows to return from update or delete. # Argument types supported: # # nil :: Return 0 for all updates and deletes # Integer :: Used for all updates and deletes # Array :: First update/delete gets the first value in the # array, second gets the second value, etc. # Proc :: Called with the update/delete SQL query, uses # the value returned. # Class :: Should be an Exception subclass, will create a new # instance and raise it wrapped in a DatabaseError. attr_writer :numrows # Mock the server version, useful when using the shared adapters attr_accessor :server_version # Return a related Connection object connecting to the given shard. def connect(server) Connection.new(self, server, server_opts(server)) end def disconnect_connection(c) end # Store the sql used for later retrieval with #sqls, and return # the appropriate value using either the #autoid, #fetch, or # #numrows methods. def execute(sql, opts=OPTS, &block) synchronize(opts[:server]){|c| _execute(c, sql, opts, &block)} end alias execute_ddl execute # Store the sql used, and return the value of the #numrows method. def execute_dui(sql, opts=OPTS) execute(sql, opts.merge(:meth=>:numrows)) end # Store the sql used, and return the value of the #autoid method. def execute_insert(sql, opts=OPTS) execute(sql, opts.merge(:meth=>:autoid)) end # Return all stored SQL queries, and clear the cache # of SQL queries. def sqls s = @sqls.dup @sqls.clear s end # Enable use of savepoints. def supports_savepoints? shared_adapter? ? super : true end private def _autoid(sql, v, ds=nil) case v when Integer if ds ds.autoid += 1 if ds.autoid.is_a?(Integer) else @autoid += 1 end v else _nextres(v, sql, nil) end end def _execute(c, sql, opts=OPTS, &block) sql += " -- args: #{opts[:arguments].inspect}" if opts[:arguments] sql += " -- #{@opts[:append]}" if @opts[:append] sql += " -- #{c.server.is_a?(Symbol) ? 
c.server : c.server.inspect}" if c.server != :default log_info(sql) unless opts[:log] == false @sqls << sql ds = opts[:dataset] begin if block columns(ds, sql) if ds _fetch(sql, (ds._fetch if ds) || @fetch, &block) elsif meth = opts[:meth] if meth == :numrows _numrows(sql, (ds.numrows if ds) || @numrows) else v = ds.autoid if ds _autoid(sql, v || @autoid, (ds if v)) end end rescue => e raise_error(e) end end def _fetch(sql, f, &block) case f when Hash yield f.dup when Array if f.all?{|h| h.is_a?(Hash)} f.each{|h| yield h.dup} else _fetch(sql, f.shift, &block) end when Proc h = f.call(sql) if h.is_a?(Hash) yield h.dup elsif h h.each{|h1| yield h1.dup} end when Class if f < Exception raise f else raise Error, "Invalid @fetch attribute: #{f.inspect}" end when nil # nothing else raise Error, "Invalid @fetch attribute: #{f.inspect}" end end def _nextres(v, sql, default) case v when Integer v when Array v.empty? ? default : _nextres(v.shift, sql, default) when Proc v.call(sql) when Class if v < Exception raise v else raise Error, "Invalid @autoid/@numrows attribute: #{v.inspect}" end when nil default else raise Error, "Invalid @autoid/@numrows attribute: #{v.inspect}" end end def _numrows(sql, v) _nextres(v, sql, 0) end # Additional options supported: # # :autoid :: Call #autoid= with the value # :columns :: Call #columns= with the value # :fetch :: Call #fetch= with the value # :numrows :: Call #numrows= with the value # :extend :: A module the object is extended with. # :sqls :: The array to store the SQL queries in. def adapter_initialize opts = @opts @sqls = opts[:sqls] || [] if mod_name = SHARED_ADAPTERS[opts[:host]] @shared_adapter = true require "sequel/adapters/shared/#{opts[:host]}" extend Sequel.const_get(mod_name)::DatabaseMethods extend_datasets Sequel.const_get(mod_name)::DatasetMethods if pr = SHARED_ADAPTER_SETUP[opts[:host]] pr.call(self) end else @shared_adapter = false end self.autoid = opts[:autoid] self.columns = opts[:columns] self.fetch = opts[:fetch] self.numrows = opts[:numrows] extend(opts[:extend]) if opts[:extend] sqls end def columns(ds, sql, cs=@columns) case cs when Array unless cs.empty? if cs.all?{|c| c.is_a?(Symbol)} ds.columns(*cs) else columns(ds, sql, cs.shift) end end when Proc ds.columns(*cs.call(sql)) when nil # nothing else raise Error, "Invalid @columns attribute: #{cs.inspect}" end end def quote_identifiers_default shared_adapter? ? super : false end def identifier_input_method_default shared_adapter? ? super : nil end def identifier_output_method_default shared_adapter? ? super : nil end def shared_adapter? @shared_adapter end end class Dataset < Sequel::Dataset Database::DatasetClass = self # Override the database's autoid setting for this dataset attr_accessor :autoid # Override the database's fetch setting for this dataset attr_accessor :_fetch # Override the database's numrows setting for this dataset attr_accessor :numrows # If arguments are provided, use them to set the columns # for this dataset and return self. Otherwise, use the # default Sequel behavior and return the columns. def columns(*cs) if cs.empty? 
super else @columns = cs self end end def fetch_rows(sql, &block) execute(sql, &block) end private def execute(sql, opts=OPTS, &block) super(sql, opts.merge(:dataset=>self), &block) end def execute_dui(sql, opts=OPTS, &block) super(sql, opts.merge(:dataset=>self), &block) end def execute_insert(sql, opts=OPTS, &block) super(sql, opts.merge(:dataset=>self), &block) end end end end ruby-sequel-4.1.1/lib/sequel/adapters/mysql.rb000066400000000000000000000335741220156535500213510ustar00rootroot00000000000000begin require "mysqlplus" rescue LoadError require 'mysql' end raise(LoadError, "require 'mysql' did not define Mysql::CLIENT_MULTI_RESULTS!\n You are probably using the pure ruby mysql.rb driver,\n which Sequel does not support. You need to install\n the C based adapter, and make sure that the mysql.so\n file is loaded instead of the mysql.rb file.\n") unless defined?(Mysql::CLIENT_MULTI_RESULTS) Sequel.require %w'shared/mysql_prepared_statements', 'adapters' module Sequel # Module for holding all MySQL-related classes and modules for Sequel. module MySQL TYPE_TRANSLATOR = tt = Class.new do def boolean(s) s.to_i != 0 end def integer(s) s.to_i end def float(s) s.to_f end end.new # Hash with integer keys and callable values for converting MySQL types. MYSQL_TYPES = {} { [0, 246] => ::BigDecimal.method(:new), [2, 3, 8, 9, 13, 247, 248] => tt.method(:integer), [4, 5] => tt.method(:float), [249, 250, 251, 252] => ::Sequel::SQL::Blob.method(:new) }.each do |k,v| k.each{|n| MYSQL_TYPES[n] = v} end class << self # Whether to convert invalid date time values by default. # # Only applies to Sequel::Database instances created after this # has been set. attr_accessor :convert_invalid_date_time end self.convert_invalid_date_time = false # Database class for MySQL databases used with Sequel. class Database < Sequel::Database include Sequel::MySQL::DatabaseMethods include Sequel::MySQL::PreparedStatements::DatabaseMethods # Regular expression used for getting accurate number of rows # matched by an update statement. AFFECTED_ROWS_RE = /Rows matched:\s+(\d+)\s+Changed:\s+\d+\s+Warnings:\s+\d+/.freeze set_adapter_scheme :mysql # Hash of conversion procs for the current database attr_reader :conversion_procs # # Whether to convert tinyint columns to bool for the current database attr_reader :convert_tinyint_to_bool # By default, Sequel raises an exception if an invalid date or time is used. # However, if this is set to nil or :nil, the adapter treats dates # like 0000-00-00 and times like 838:00:00 as nil values. If set to :string, # it returns the strings as is. attr_reader :convert_invalid_date_time # Connect to the database. In addition to the usual database options, # the following options have effect: # # * :auto_is_null - Set to true to use MySQL default behavior of having # a filter for an autoincrement column equals NULL to return the last # inserted row. # * :charset - Same as :encoding (:encoding takes precedence) # * :compress - Set to false to not compress results from the server # * :config_default_group - The default group to read from in # the MySQL config file. # * :config_local_infile - If provided, sets the Mysql::OPT_LOCAL_INFILE # option on the connection with the given value. # * :connect_timeout - Set the timeout in seconds before a connection # attempt is abandoned. # * :encoding - Set all the related character sets for this # connection (connection, client, database, server, and results). # * :read_timeout - Set the timeout in seconds for reading back results # to a query. 
# * :socket - Use a unix socket file instead of connecting via TCP/IP. # * :timeout - Set the timeout in seconds before the server will # disconnect this connection (a.k.a @@wait_timeout). def connect(server) opts = server_opts(server) conn = Mysql.init conn.options(Mysql::READ_DEFAULT_GROUP, opts[:config_default_group] || "client") conn.options(Mysql::OPT_LOCAL_INFILE, opts[:config_local_infile]) if opts.has_key?(:config_local_infile) conn.ssl_set(opts[:sslkey], opts[:sslcert], opts[:sslca], opts[:sslcapath], opts[:sslcipher]) if opts[:sslca] || opts[:sslkey] if encoding = opts[:encoding] || opts[:charset] # Set encoding before connecting so that the mysql driver knows what # encoding we want to use, but this can be overridden by READ_DEFAULT_GROUP. conn.options(Mysql::SET_CHARSET_NAME, encoding) end if read_timeout = opts[:read_timeout] and defined? Mysql::OPT_READ_TIMEOUT conn.options(Mysql::OPT_READ_TIMEOUT, read_timeout) end if connect_timeout = opts[:connect_timeout] and defined? Mysql::OPT_CONNECT_TIMEOUT conn.options(Mysql::OPT_CONNECT_TIMEOUT, connect_timeout) end conn.real_connect( opts[:host] || 'localhost', opts[:user], opts[:password], opts[:database], (opts[:port].to_i if opts[:port]), opts[:socket], Mysql::CLIENT_MULTI_RESULTS + Mysql::CLIENT_MULTI_STATEMENTS + (opts[:compress] == false ? 0 : Mysql::CLIENT_COMPRESS) ) sqls = mysql_connection_setting_sqls # Set encoding a slightly different way after connecting, # in case the READ_DEFAULT_GROUP overrode the provided encoding. # Doesn't work across implicit reconnects, but Sequel doesn't turn on # that feature. sqls.unshift("SET NAMES #{literal(encoding.to_s)}") if encoding sqls.each{|sql| log_yield(sql){conn.query(sql)}} add_prepared_statements_cache(conn) conn end # Closes given database connection. def disconnect_connection(c) c.close rescue Mysql::Error nil end # Modify the type translators for the date, time, and timestamp types # depending on the value given. def convert_invalid_date_time=(v) m0 = ::Sequel.method(:string_to_time) @conversion_procs[11] = (v != false) ? lambda{|v| convert_date_time(v, &m0)} : m0 m1 = ::Sequel.method(:string_to_date) m = (v != false) ? lambda{|v| convert_date_time(v, &m1)} : m1 [10, 14].each{|i| @conversion_procs[i] = m} m2 = method(:to_application_timestamp) m = (v != false) ? lambda{|v| convert_date_time(v, &m2)} : m2 [7, 12].each{|i| @conversion_procs[i] = m} @convert_invalid_date_time = v end # Modify the type translator used for the tinyint type based # on the value given. def convert_tinyint_to_bool=(v) @conversion_procs[1] = TYPE_TRANSLATOR.method(v ? :boolean : :integer) @convert_tinyint_to_bool = v end # Return the number of matched rows when executing a delete/update statement. def execute_dui(sql, opts=OPTS) execute(sql, opts){|c| return affected_rows(c)} end # Return the last inserted id when executing an insert statement. def execute_insert(sql, opts=OPTS) execute(sql, opts){|c| return c.insert_id} end # Return the version of the MySQL server to which we are connecting. def server_version(server=nil) @server_version ||= (synchronize(server){|conn| conn.server_version if conn.respond_to?(:server_version)} || super) end private # Execute the given SQL on the given connection. If the :type # option is :select, yield the result of the query, otherwise # yield the connection if a block is given. def _execute(conn, sql, opts) begin r = log_yield((log_sql = opts[:log_sql]) ? sql + log_sql : sql){conn.query(sql)} if opts[:type] == :select yield r if r elsif block_given? 
yield conn end if conn.respond_to?(:more_results?) while conn.more_results? do if r r.free r = nil end begin conn.next_result r = conn.use_result rescue Mysql::Error => e raise_error(e, :disconnect=>true) if MYSQL_DATABASE_DISCONNECT_ERRORS.match(e.message) break end yield r if opts[:type] == :select end end rescue Mysql::Error => e raise_error(e) ensure r.free if r # Use up all results to avoid a commands out of sync message. if conn.respond_to?(:more_results?) while conn.more_results? do begin conn.next_result r = conn.use_result rescue Mysql::Error => e raise_error(e, :disconnect=>true) if MYSQL_DATABASE_DISCONNECT_ERRORS.match(e.message) break end r.free if r end end end end def adapter_initialize @conversion_procs = MYSQL_TYPES.dup self.convert_tinyint_to_bool = Sequel::MySQL.convert_tinyint_to_bool self.convert_invalid_date_time = Sequel::MySQL.convert_invalid_date_time end # Try to get an accurate number of rows matched using the query # info. Fall back to affected_rows if there was no match, but # that may be inaccurate. def affected_rows(conn) s = conn.info if s && s =~ AFFECTED_ROWS_RE $1.to_i else conn.affected_rows end end # MySQL connections use the query method to execute SQL without a result def connection_execute_method :query end # If convert_invalid_date_time is nil, :nil, or :string and # the conversion raises an InvalidValue exception, return v # if :string and nil otherwise. def convert_date_time(v) begin yield v rescue InvalidValue case @convert_invalid_date_time when nil, :nil nil when :string v else raise end end end # The MySQL adapter main error class is Mysql::Error def database_error_classes [Mysql::Error] end def database_exception_sqlstate(exception, opts) exception.sqlstate end # Raise a disconnect error if the exception message matches the list # of recognized exceptions. def disconnect_error?(e, opts) super || (e.is_a?(::Mysql::Error) && MYSQL_DATABASE_DISCONNECT_ERRORS.match(e.message)) end # The database name when using the native adapter is always stored in # the :database option. def database_name @opts[:database] end # Convert tinyint(1) type to boolean if convert_tinyint_to_bool is true def schema_column_type(db_type) convert_tinyint_to_bool && db_type =~ /\Atinyint\(1\)/ ? :boolean : super end end # Dataset class for MySQL datasets accessed via the native driver. class Dataset < Sequel::Dataset include Sequel::MySQL::DatasetMethods include Sequel::MySQL::PreparedStatements::DatasetMethods Database::DatasetClass = self # Yield all rows matching this dataset. If the dataset is set to # split multiple statements, yield arrays of hashes one per statement # instead of yielding results for all statements as hashes. def fetch_rows(sql) execute(sql) do |r| i = -1 cps = db.conversion_procs cols = r.fetch_fields.map do |f| # Pretend tinyint is another integer type if its length is not 1, to # avoid casting to boolean if Sequel::MySQL.convert_tinyint_to_bool # is set. type_proc = f.type == 1 && cast_tinyint_integer?(f) ? cps[2] : cps[f.type] [output_identifier(f.name), type_proc, i+=1] end @columns = cols.map{|c| c.first} if opts[:split_multiple_result_sets] s = [] yield_rows(r, cols){|h| s << h} yield s else yield_rows(r, cols){|h| yield h} end end self end # Don't allow graphing a dataset that splits multiple statements def graph(*) raise(Error, "Can't graph a dataset that splits multiple result sets") if opts[:split_multiple_result_sets] super end # Makes each yield arrays of rows, with each array containing the rows # for a given result set. 
Does not work with graphing. So you can submit # SQL with multiple statements and easily determine which statement # returned which results. # # Modifies the row_proc of the returned dataset so that it still works # as expected (running on the hashes instead of on the arrays of hashes). # If you modify the row_proc afterward, note that it will receive an array # of hashes instead of a hash. def split_multiple_result_sets raise(Error, "Can't split multiple statements on a graphed dataset") if opts[:graph] ds = clone(:split_multiple_result_sets=>true) ds.row_proc = proc{|x| x.map{|h| row_proc.call(h)}} if row_proc ds end private # Whether a tinyint field should be casted as an integer. By default, # casts to integer if the field length is not 1. Can be overwritten # to make tinyint casting dataset dependent. def cast_tinyint_integer?(field) field.length != 1 end # Set the :type option to :select if it hasn't been set. def execute(sql, opts=OPTS, &block) super(sql, {:type=>:select}.merge(opts), &block) end # Handle correct quoting of strings using ::MySQL.quote. def literal_string_append(sql, v) sql << "'" sql << ::Mysql.quote(v) sql << "'" end # Yield each row of the given result set r with columns cols # as a hash with symbol keys def yield_rows(r, cols) while row = r.fetch_row h = {} cols.each{|n, p, i| v = row[i]; h[n] = (v && p) ? p.call(v) : v} yield h end end end end end ruby-sequel-4.1.1/lib/sequel/adapters/mysql2.rb000066400000000000000000000152071220156535500214260ustar00rootroot00000000000000require 'mysql2' Sequel.require %w'shared/mysql_prepared_statements', 'adapters' module Sequel # Module for holding all Mysql2-related classes and modules for Sequel. module Mysql2 # Database class for MySQL databases used with Sequel. class Database < Sequel::Database include Sequel::MySQL::DatabaseMethods include Sequel::MySQL::PreparedStatements::DatabaseMethods set_adapter_scheme :mysql2 # Whether to convert tinyint columns to bool for this database attr_accessor :convert_tinyint_to_bool # Connect to the database. In addition to the usual database options, # the following options have effect: # # * :auto_is_null - Set to true to use MySQL default behavior of having # a filter for an autoincrement column equals NULL to return the last # inserted row. # * :charset - Same as :encoding (:encoding takes precendence) # * :encoding - Set all the related character sets for this # connection (connection, client, database, server, and results). # # The options hash is also passed to mysql2, and can include mysql2 # options such as :local_infile. def connect(server) opts = server_opts(server) opts[:host] ||= 'localhost' opts[:username] ||= opts.delete(:user) opts[:flags] = ::Mysql2::Client::FOUND_ROWS if ::Mysql2::Client.const_defined?(:FOUND_ROWS) conn = ::Mysql2::Client.new(opts) conn.query_options.merge!(:symbolize_keys=>true, :cache_rows=>false) sqls = mysql_connection_setting_sqls # Set encoding a slightly different way after connecting, # in case the READ_DEFAULT_GROUP overrode the provided encoding. # Doesn't work across implicit reconnects, but Sequel doesn't turn on # that feature. if encoding = opts[:encoding] || opts[:charset] sqls.unshift("SET NAMES #{conn.escape(encoding.to_s)}") end sqls.each{|sql| log_yield(sql){conn.query(sql)}} add_prepared_statements_cache(conn) conn end # Return the number of matched rows when executing a delete/update statement. 
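# For example (illustrative, assuming an items table exists):
#
#   DB[:items].where(:id=>1).update(:name=>'a') # => 1 (number of matched rows)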
def execute_dui(sql, opts=OPTS) execute(sql, opts){|c| return c.affected_rows} end # Return the last inserted id when executing an insert statement. def execute_insert(sql, opts=OPTS) execute(sql, opts){|c| return c.last_id} end # Return the version of the MySQL server to which we are connecting. def server_version(server=nil) @server_version ||= (synchronize(server){|conn| conn.server_info[:id]} || super) end private # Execute the given SQL on the given connection. If the :type # option is :select, yield the result of the query, otherwise # yield the connection if a block is given. def _execute(conn, sql, opts) begin stream = opts[:stream] r = log_yield((log_sql = opts[:log_sql]) ? sql + log_sql : sql){conn.query(sql, :database_timezone => timezone, :application_timezone => Sequel.application_timezone, :stream=>stream)} if opts[:type] == :select if r if stream begin r2 = yield r ensure # If r2 is nil, it means the block did not exit normally, # so the rest of the results must be drained to prevent # "commands out of sync" errors. r.each{} unless r2 end else yield r end end elsif block_given? yield conn end rescue ::Mysql2::Error => e raise_error(e) end end # Set the convert_tinyint_to_bool setting based on the default value. def adapter_initialize self.convert_tinyint_to_bool = Sequel::MySQL.convert_tinyint_to_bool end # MySQL connections use the query method to execute SQL without a result def connection_execute_method :query end # The MySQL adapter main error class is Mysql2::Error def database_error_classes [::Mysql2::Error] end def database_exception_sqlstate(exception, opts) exception.sql_state end # If a connection object is available, try pinging it. Otherwise, if the # error is a Mysql2::Error, check the SQL state and exception message for # disconnects. def disconnect_error?(e, opts) super || ((conn = opts[:conn]) && !conn.ping) || (e.is_a?(::Mysql2::Error) && (e.sql_state =~ /\A08/ || MYSQL_DATABASE_DISCONNECT_ERRORS.match(e.message))) end # The database name when using the native adapter is always stored in # the :database option. def database_name @opts[:database] end # Convert tinyint(1) type to boolean if convert_tinyint_to_bool is true def schema_column_type(db_type) convert_tinyint_to_bool && db_type =~ /\Atinyint\(1\)/ ? :boolean : super end end # Dataset class for MySQL datasets accessed via the native driver. class Dataset < Sequel::Dataset include Sequel::MySQL::DatasetMethods include Sequel::MySQL::PreparedStatements::DatasetMethods Database::DatasetClass = self # Yield all rows matching this dataset. def fetch_rows(sql) execute(sql) do |r| @columns = if identifier_output_method r.fields.map!{|c| output_identifier(c.to_s)} else r.fields end r.each(:cast_booleans=>convert_tinyint_to_bool?){|h| yield h} end self end # Return a clone of the dataset that will stream rows when iterating # over the result set, so it can handle large datasets that # won't fit in memory (Requires mysql2 0.3.12+ to have an effect). def stream clone(:stream=>true) end private # Whether to cast tinyint(1) columns to boolean. By default, uses # the database's convert_tinyint_to_bool setting. Exists for # compatibility with the mysql adapter. def convert_tinyint_to_bool? @db.convert_tinyint_to_bool end # Set the :type option to :select if it hasn't been set. def execute(sql, opts=OPTS, &block) super(sql, {:type=>:select, :stream=>@opts[:stream]}.merge(opts), &block) end # Handle correct quoting of strings using ::Mysql2::Client#escape.
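# For example (illustrative), the single quote is escaped by the client:
#
#   DB[:items].literal("it's") # quoted and escaped via ::Mysql2::Client#escape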
def literal_string_append(sql, v) sql << APOS << db.synchronize(@opts[:server]){|c| c.escape(v)} << APOS end end end end ruby-sequel-4.1.1/lib/sequel/adapters/odbc.rb000066400000000000000000000106731220156535500211100ustar00rootroot00000000000000require 'odbc' module Sequel module ODBC class Database < Sequel::Database set_adapter_scheme :odbc GUARDED_DRV_NAME = /^\{.+\}$/.freeze DRV_NAME_GUARDS = '{%s}'.freeze DISCONNECT_ERRORS = /\A08S01/.freeze def connect(server) opts = server_opts(server) conn = if opts.include?(:drvconnect) ::ODBC::Database.new.drvconnect(opts[:drvconnect]) elsif opts.include?(:driver) Deprecation.deprecate("The odbc driver's handling of the :driver option is thought to be broken and will probably be removed in the future. If you are successfully using it, please contact the developers.") drv = ::ODBC::Driver.new drv.name = 'Sequel ODBC Driver130' opts.each do |param, value| if :driver == param and not (value =~ GUARDED_DRV_NAME) value = DRV_NAME_GUARDS % value end drv.attrs[param.to_s.upcase] = value.to_s end ::ODBC::Database.new.drvconnect(drv) else ::ODBC::connect(opts[:database], opts[:user], opts[:password]) end conn.autocommit = true conn end def disconnect_connection(c) c.disconnect end def execute(sql, opts=OPTS) synchronize(opts[:server]) do |conn| begin r = log_yield(sql){conn.run(sql)} yield(r) if block_given? rescue ::ODBC::Error, ArgumentError => e raise_error(e) ensure r.drop if r end nil end end def execute_dui(sql, opts=OPTS) synchronize(opts[:server]) do |conn| begin log_yield(sql){conn.do(sql)} rescue ::ODBC::Error, ArgumentError => e raise_error(e) end end end private def adapter_initialize case @opts[:db_type] when 'mssql' Sequel.require 'adapters/odbc/mssql' extend Sequel::ODBC::MSSQL::DatabaseMethods self.dataset_class = Sequel::ODBC::MSSQL::Dataset set_mssql_unicode_strings when 'progress' Sequel.require 'adapters/shared/progress' extend Sequel::Progress::DatabaseMethods extend_datasets(Sequel::Progress::DatasetMethods) when 'db2' Sequel.require 'adapters/shared/db2' extend ::Sequel::DB2::DatabaseMethods extend_datasets ::Sequel::DB2::DatasetMethods end end def connection_execute_method :do end def database_error_classes [::ODBC::Error] end def disconnect_error?(e, opts) super || (e.is_a?(::ODBC::Error) && DISCONNECT_ERRORS.match(e.message)) end end class Dataset < Sequel::Dataset BOOL_TRUE = '1'.freeze BOOL_FALSE = '0'.freeze ODBC_DATE_FORMAT = "{d '%Y-%m-%d'}".freeze TIMESTAMP_FORMAT="{ts '%Y-%m-%d %H:%M:%S'}".freeze Database::DatasetClass = self def fetch_rows(sql) execute(sql) do |s| i = -1 cols = s.columns(true).map{|c| [output_identifier(c.name), i+=1]} columns = cols.map{|c| c.at(0)} @columns = columns if rows = s.fetch_all rows.each do |row| hash = {} cols.each{|n,i| hash[n] = convert_odbc_value(row[i])} yield hash end end end self end private def convert_odbc_value(v) # When fetching a result set, the Ruby ODBC driver converts all ODBC # SQL types to an equivalent Ruby type; with the exception of # SQL_TYPE_DATE, SQL_TYPE_TIME and SQL_TYPE_TIMESTAMP. # # The conversions below are consistent with the mappings in # ODBCColumn#mapSqlTypeToGenericType and Column#klass. 
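# Only the ODBC date/time classes need conversion here; all other values
# are already equivalent ruby types and pass through unchanged.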
case v when ::ODBC::TimeStamp db.to_application_timestamp([v.year, v.month, v.day, v.hour, v.minute, v.second]) when ::ODBC::Time Sequel::SQLTime.create(v.hour, v.minute, v.second) when ::ODBC::Date Date.new(v.year, v.month, v.day) else v end end def default_timestamp_format TIMESTAMP_FORMAT end def literal_date(v) v.strftime(ODBC_DATE_FORMAT) end def literal_false BOOL_FALSE end def literal_true BOOL_TRUE end end end end ruby-sequel-4.1.1/lib/sequel/adapters/odbc/000077500000000000000000000000001220156535500205545ustar00rootroot00000000000000ruby-sequel-4.1.1/lib/sequel/adapters/odbc/mssql.rb000066400000000000000000000027061220156535500222450ustar00rootroot00000000000000Sequel.require 'adapters/shared/mssql' module Sequel module ODBC # Database and Dataset instance methods for MSSQL specific # support via ODBC. module MSSQL module DatabaseMethods extend Sequel::Database::ResetIdentifierMangling include Sequel::MSSQL::DatabaseMethods LAST_INSERT_ID_SQL='SELECT SCOPE_IDENTITY()'.freeze # Return the last inserted identity value. def execute_insert(sql, opts=OPTS) synchronize(opts[:server]) do |conn| begin log_yield(sql){conn.do(sql)} begin s = log_yield(LAST_INSERT_ID_SQL){conn.run(LAST_INSERT_ID_SQL)} if (rows = s.fetch_all) and (row = rows.first) and (v = row.first) Integer(v) end ensure s.drop if s end rescue ::ODBC::Error => e raise_error(e) end end end end class Dataset < ODBC::Dataset include Sequel::MSSQL::DatasetMethods private # Use ODBC format, not Microsoft format, as the ODBC layer does # some translation. def default_timestamp_format TIMESTAMP_FORMAT end # Use ODBC format, not Microsoft format, as the ODBC layer does # some translation. def literal_date(v) v.strftime(ODBC_DATE_FORMAT) end end end end end ruby-sequel-4.1.1/lib/sequel/adapters/openbase.rb000066400000000000000000000025261220156535500217730ustar00rootroot00000000000000require 'openbase' module Sequel module OpenBase class Database < Sequel::Database set_adapter_scheme :openbase def connect(server) opts = server_opts(server) OpenBase.new( opts[:database], opts[:host] || 'localhost', opts[:user], opts[:password] ) end def disconnect_connection(c) c.disconnect end def execute(sql, opts=OPTS) synchronize(opts[:server]) do |conn| r = log_yield(sql){conn.execute(sql)} yield(r) if block_given? r end end end class Dataset < Sequel::Dataset SELECT_CLAUSE_METHODS = clause_methods(:select, %w'select distinct columns from join where group having compounds order limit') Database::DatasetClass = self def fetch_rows(sql) execute(sql) do |result| begin @columns = result.column_infos.map{|c| output_identifier(c.name)} result.each do |r| row = {} r.each_with_index {|v, i| row[@columns[i]] = v} yield row end ensure # result.close end end self end private def select_clause_methods SELECT_CLAUSE_METHODS end end end end ruby-sequel-4.1.1/lib/sequel/adapters/oracle.rb000066400000000000000000000335601220156535500214460ustar00rootroot00000000000000require 'oci8' Sequel.require 'adapters/shared/oracle' module Sequel module Oracle class Database < Sequel::Database include DatabaseMethods set_adapter_scheme :oracle # ORA-00028: your session has been killed # ORA-01012: not logged on # ORA-03113: end-of-file on communication channel # ORA-03114: not connected to ORACLE CONNECTION_ERROR_CODES = [ 28, 1012, 3113, 3114 ] ORACLE_TYPES = { :blob=>lambda{|b| Sequel::SQL::Blob.new(b.read)}, :clob=>lambda{|b| b.read} } # Hash of conversion procs for this database. 
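# Entries are keyed by ruby-oci8 type symbol and can be overridden, e.g.
# (illustrative) DB.conversion_procs[:blob] = lambda{|b| b.read} to get
# plain strings instead of Sequel::SQL::Blob values.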
attr_reader :conversion_procs def connect(server) opts = server_opts(server) if opts[:database] dbname = opts[:host] ? \ "//#{opts[:host]}#{":#{opts[:port]}" if opts[:port]}/#{opts[:database]}" : opts[:database] else dbname = opts[:host] end conn = OCI8.new(opts[:user], opts[:password], dbname, opts[:privilege]) if prefetch_rows = opts.fetch(:prefetch_rows, 100) conn.prefetch_rows = typecast_value_integer(prefetch_rows) end conn.autocommit = true conn.non_blocking = true # The ruby-oci8 gem which retrieves oracle columns with a type of # DATE, TIMESTAMP, TIMESTAMP WITH TIME ZONE is complex based on the # ruby version (1.9.2 or later) and Oracle version (9 or later) # In the now standard case of 1.9.2 and Oracle 9 or later, the timezone # is determined by the Oracle session timezone. Thus if the user # requests Sequel provide UTC timezone to the application, # we need to alter the session timezone to be UTC if Sequel.application_timezone == :utc conn.exec("ALTER SESSION SET TIME_ZONE='-00:00'") end class << conn attr_reader :prepared_statements end conn.instance_variable_set(:@prepared_statements, {}) conn end def disconnect_connection(c) c.logoff rescue OCIInvalidHandle nil end def execute(sql, opts=OPTS, &block) _execute(nil, sql, opts, &block) end def execute_insert(sql, opts=OPTS) _execute(:insert, sql, opts) end private def _execute(type, sql, opts=OPTS, &block) synchronize(opts[:server]) do |conn| begin return execute_prepared_statement(conn, type, sql, opts, &block) if sql.is_a?(Symbol) if args = opts[:arguments] r = conn.parse(sql) args = cursor_bind_params(conn, r, args) nr = log_yield(sql, args){r.exec} r = nr unless block_given? else r = log_yield(sql){conn.exec(sql)} end if block_given? begin yield(r) ensure r.close end elsif type == :insert last_insert_id(conn, opts) else r end rescue OCIException, RuntimeError => e # ruby-oci8 is naughty and raises strings in some places raise_error(e) end end end def adapter_initialize @autosequence = @opts[:autosequence] @primary_key_sequences = {} @conversion_procs = ORACLE_TYPES.dup end PS_TYPES = {'string'.freeze=>String, 'integer'.freeze=>Integer, 'float'.freeze=>Float, 'decimal'.freeze=>Float, 'date'.freeze=>Time, 'datetime'.freeze=>Time, 'time'.freeze=>Time, 'boolean'.freeze=>String, 'blob'.freeze=>OCI8::BLOB} def cursor_bind_params(conn, cursor, args) cursor i = 0 args.map do |arg, type| i += 1 case arg when true arg = 'Y' when false arg = 'N' when BigDecimal arg = arg.to_f when ::Sequel::SQL::Blob raise Error, "Sequel's oracle adapter does not currently support using a blob in a bound variable" end if t = PS_TYPES[type] cursor.bind_param(i, arg, t) else cursor.bind_param(i, arg, arg.class) end arg end end def connection_execute_method :exec end def database_error_classes [OCIException, RuntimeError] end def database_specific_error_class(exception, opts) case exception.code when 1400, 1407 NotNullConstraintViolation when 1 UniqueConstraintViolation when 2291, 2292 ForeignKeyConstraintViolation when 2290 CheckConstraintViolation when 8177 SerializationFailure else super end end def execute_prepared_statement(conn, type, name, opts) ps = prepared_statement(name) sql = ps.prepared_sql if cursora = conn.prepared_statements[name] cursor, cursor_sql = cursora if cursor_sql != sql cursor.close cursor = nil end end unless cursor cursor = log_yield("PREPARE #{name}: #{sql}"){conn.parse(sql)} conn.prepared_statements[name] = [cursor, sql] end args = cursor_bind_params(conn, cursor, opts[:arguments]) log_sql = "EXECUTE #{name}" if ps.log_sql 
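# include the full SQL in the logged EXECUTE message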
log_sql << " (" log_sql << sql log_sql << ")" end r = log_yield(log_sql, args){cursor.exec} if block_given? yield(cursor) elsif type == :insert last_insert_id(conn, opts) else r end end def last_insert_id(conn, opts) unless sequence = opts[:sequence] if t = opts[:table] sequence = sequence_for_table(t) end end if sequence sql = "SELECT #{literal(sequence)}.currval FROM dual" begin cursor = log_yield(sql){conn.exec(sql)} row = cursor.fetch row.each{|v| return (v.to_i if v)} rescue OCIError nil ensure cursor.close if cursor end end end def begin_transaction(conn, opts=OPTS) log_yield(TRANSACTION_BEGIN){conn.autocommit = false} set_transaction_isolation(conn, opts) end def commit_transaction(conn, opts=OPTS) log_yield(TRANSACTION_COMMIT){conn.commit} end def disconnect_error?(e, opts) super || (e.is_a?(::OCIError) && CONNECTION_ERROR_CODES.include?(e.code)) end def oracle_column_type(h) case h[:oci8_type] when :number case h[:scale] when 0 :integer when -127 :float else :decimal end when :date :datetime else schema_column_type(h[:db_type]) end end def remove_transaction(conn, committed) conn.autocommit = true ensure super end def rollback_transaction(conn, opts=OPTS) log_yield(TRANSACTION_ROLLBACK){conn.rollback} end def schema_parse_table(table, opts=OPTS) schema, table = schema_and_table(table) schema ||= opts[:schema] schema_and_table = if ds = opts[:dataset] ds.literal(schema ? SQL::QualifiedIdentifier.new(schema, table) : SQL::Identifier.new(table)) else "#{"#{quote_identifier(schema)}." if schema}#{quote_identifier(table)}" end table_schema = [] m = output_identifier_meth(ds) im = input_identifier_meth(ds) # Primary Keys ds = metadata_dataset.from(:all_constraints___cons, :all_cons_columns___cols). where(:cols__table_name=>im.call(table), :cons__constraint_type=>'P', :cons__constraint_name=>:cols__constraint_name, :cons__owner=>:cols__owner) ds = ds.where(:cons__owner=>im.call(schema)) if schema pks = ds.select_map(:cols__column_name) # Default values defaults = begin metadata_dataset.from(:user_tab_cols). where(:table_name=>im.call(table)). to_hash(:column_name, :data_default) rescue DatabaseError {} end metadata = synchronize(opts[:server]) do |conn| begin log_yield("Connection.describe_table"){conn.describe_table(schema_and_table)} rescue OCIError => e raise_error(e) end end metadata.columns.each do |column| h = { :primary_key => pks.include?(column.name), :default => defaults[column.name], :oci8_type => column.data_type, :db_type => column.type_string.split(' ')[0], :type_string => column.type_string, :charset_form => column.charset_form, :char_used => column.char_used?, :char_size => column.char_size, :data_size => column.data_size, :precision => column.precision, :scale => column.scale, :fsprecision => column.fsprecision, :lfprecision => column.lfprecision, :allow_null => column.nullable? } h[:type] = oracle_column_type(h) table_schema << [m.call(column.name), h] end table_schema end end class Dataset < Sequel::Dataset include DatasetMethods Database::DatasetClass = self PREPARED_ARG_PLACEHOLDER = ':'.freeze # Oracle already supports named bind arguments, so use directly. module ArgumentMapper include Sequel::Dataset::ArgumentMapper protected # Return a hash with the same values as the given hash, # but with the keys converted to strings. def map_to_prepared_args(bind_vars) prepared_args.map{|v, t| [bind_vars[v], t]} end private # Oracle uses a : before the name of the argument for named # arguments. 
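# For example (illustrative), a dataset placeholder such as :$name__integer
# is split on "__" to get the bind name and type, and is emitted as a
# numbered :1, :2, ... placeholder in the SQL sent to Oracle.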
def prepared_arg(k) y, type = k.to_s.split("__", 2) prepared_args << [y.to_sym, type] i = prepared_args.length LiteralString.new(":#{i}") end # Always assume a prepared argument. def prepared_arg?(k) true end end # Oracle prepared statement uses a new prepared statement each time # it is called, but it does use the bind arguments. module BindArgumentMethods include ArgumentMapper private # Run execute_select on the database with the given SQL and the stored # bind arguments. def execute(sql, opts=OPTS, &block) super(prepared_sql, {:arguments=>bind_arguments}.merge(opts), &block) end # Same as execute, explicit due to intricacies of alias and super. def execute_dui(sql, opts=OPTS, &block) super(prepared_sql, {:arguments=>bind_arguments}.merge(opts), &block) end # Same as execute, explicit due to intricacies of alias and super. def execute_insert(sql, opts=OPTS, &block) super(prepared_sql, {:arguments=>bind_arguments}.merge(opts), &block) end end module PreparedStatementMethods include BindArgumentMethods private # Execute the stored prepared statement name and the stored bind # arguments instead of the SQL given. def execute(sql, opts=OPTS, &block) super(prepared_statement_name, opts, &block) end # Same as execute, explicit due to intricacies of alias and super. def execute_dui(sql, opts=OPTS, &block) super(prepared_statement_name, opts, &block) end # Same as execute, explicit due to intricacies of alias and super. def execute_insert(sql, opts=OPTS, &block) super(prepared_statement_name, opts, &block) end end # Execute the given type of statement with the hash of values. def call(type, bind_vars={}, *values, &block) ps = to_prepared_statement(type, values) ps.extend(BindArgumentMethods) ps.call(bind_vars, &block) end def fetch_rows(sql) execute(sql) do |cursor| cps = db.conversion_procs cols = columns = cursor.get_col_names.map{|c| output_identifier(c)} metadata = cursor.column_metadata cm = cols.zip(metadata).map{|c, m| [c, cps[m.data_type]]} @columns = columns while r = cursor.fetch row = {} r.zip(cm).each{|v, (c, cp)| row[c] = ((v && cp) ? cp.call(v) : v)} yield row end end self end # Prepare the given type of query with the given name and store # it in the database. Note that a new native prepared statement is # created on each call to this prepared statement. def prepare(type, name=nil, *values) ps = to_prepared_statement(type, values) ps.extend(PreparedStatementMethods) if name ps.prepared_statement_name = name db.set_prepared_statement(name, ps) end ps end # Oracle requires type specifiers for placeholders, at least # if you ever want to use a nil/NULL value as the value for # the placeholder. def requires_placeholder_type_specifiers? true end private def literal_other_append(sql, v) case v when OraDate literal_append(sql, db.to_application_timestamp(v)) when OCI8::CLOB v.rewind literal_append(sql, v.read) else super end end def prepared_arg_placeholder PREPARED_ARG_PLACEHOLDER end end end end ruby-sequel-4.1.1/lib/sequel/adapters/postgres.rb000066400000000000000000000770231220156535500220510ustar00rootroot00000000000000Sequel.require 'adapters/shared/postgres' begin require 'pg' SEQUEL_POSTGRES_USES_PG = true rescue LoadError => e SEQUEL_POSTGRES_USES_PG = false begin require 'postgres' # Attempt to get uniform behavior for the PGconn object no matter # if pg, postgres, or postgres-pr is used. class PGconn unless method_defined?(:escape_string) if self.respond_to?(:escape) # If there is no escape_string instance method, but there is an # escape class method, use that instead. 
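# With standard conforming strings, backslash is not an escape character,
# so doubling single quotes is sufficient.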
def escape_string(str) Sequel::Postgres.force_standard_strings ? str.gsub("'", "''") : self.class.escape(str) end else # If standard strings are being forced, escape by doubling single # quotes; otherwise raise an error, since no valid string escaping # method can be found. def escape_string(str) if Sequel::Postgres.force_standard_strings str.gsub("'", "''") else raise Sequel::Error, "string escaping not supported with this postgres driver. Try using ruby-pg, ruby-postgres, or postgres-pr." end end end unless method_defined?(:escape_bytea) if self.respond_to?(:escape_bytea) # If there is no escape_bytea instance method, but there is an # escape_bytea class method, use that instead. def escape_bytea(obj) self.class.escape_bytea(obj) end else begin require 'postgres-pr/typeconv/conv' require 'postgres-pr/typeconv/bytea' extend Postgres::Conversion # If we are using postgres-pr, use the encode_bytea method from # that. def escape_bytea(obj) self.class.encode_bytea(obj) end instance_eval{alias unescape_bytea decode_bytea} rescue # If no valid bytea escaping method can be found, create one that # raises an error def escape_bytea(obj) raise Sequel::Error, "bytea escaping not supported with this postgres driver. Try using ruby-pg, ruby-postgres, or postgres-pr." end # If no valid bytea unescaping method can be found, create one that # raises an error def self.unescape_bytea(obj) raise Sequel::Error, "bytea unescaping not supported with this postgres driver. Try using ruby-pg, ruby-postgres, or postgres-pr." end end end end alias_method :finish, :close unless method_defined?(:finish) alias_method :async_exec, :exec unless method_defined?(:async_exec) unless method_defined?(:block) def block(timeout=nil) end end unless defined?(CONNECTION_OK) CONNECTION_OK = -1 end unless method_defined?(:status) def status CONNECTION_OK end end end class PGresult alias_method :nfields, :num_fields unless method_defined?(:nfields) alias_method :ntuples, :num_tuples unless method_defined?(:ntuples) alias_method :ftype, :type unless method_defined?(:ftype) alias_method :fname, :fieldname unless method_defined?(:fname) alias_method :cmd_tuples, :cmdtuples unless method_defined?(:cmd_tuples) end rescue LoadError raise e end end module Sequel Dataset::NON_SQL_OPTIONS << :cursor module Postgres CONVERTED_EXCEPTIONS << PGError PG_TYPES[17] = Class.new do def bytea(s) ::Sequel::SQL::Blob.new(Adapter.unescape_bytea(s)) end end.new.method(:bytea) @use_iso_date_format = true class << self # As an optimization, Sequel sets the date style to ISO, so that PostgreSQL provides # the date in a known format that Sequel can parse faster. This can be turned off # if you require a date style other than ISO. attr_accessor :use_iso_date_format end # PGconn subclass for connection specific methods used with the # pg, postgres, or postgres-pr driver. class Adapter < ::PGconn DISCONNECT_ERROR_RE = /\Acould not receive data from server/ self.translate_results = false if respond_to?(:translate_results=) # Hash of prepared statements for this connection. Keys are # string names of the server side prepared statement, and values # are SQL strings. attr_reader(:prepared_statements) if SEQUEL_POSTGRES_USES_PG # Raise a Sequel::DatabaseDisconnectError if a PGError is raised and # the connection status cannot be determined or it is not OK. def check_disconnect_errors begin yield rescue PGError => e disconnect = false begin s = status rescue PGError disconnect = true end status_ok = (s == Adapter::CONNECTION_OK) disconnect ||= !status_ok disconnect ||= e.message =~ DISCONNECT_ERROR_RE disconnect ?
raise(Sequel.convert_exception_class(e, Sequel::DatabaseDisconnectError)) : raise rescue IOError, Errno::EPIPE, Errno::ECONNRESET => e disconnect = true raise(Sequel.convert_exception_class(e, Sequel::DatabaseDisconnectError)) ensure block if status_ok && !disconnect end end # Execute the given SQL with this connection. If a block is given, # yield the results, otherwise, return the number of changed rows. def execute(sql, args=nil) args = args.map{|v| @db.bound_variable_arg(v, self)} if args q = check_disconnect_errors{execute_query(sql, args)} begin block_given? ? yield(q) : q.cmd_tuples ensure q.clear if q && q.respond_to?(:clear) end end private # Return the PGResult object that is returned by executing the given # sql and args. def execute_query(sql, args) @db.log_yield(sql, args){args ? async_exec(sql, args) : async_exec(sql)} end end # Database class for PostgreSQL databases used with Sequel and the # pg, postgres, or postgres-pr driver. class Database < Sequel::Database include Sequel::Postgres::DatabaseMethods INFINITE_TIMESTAMP_STRINGS = ['infinity'.freeze, '-infinity'.freeze].freeze INFINITE_DATETIME_VALUES = ([PLUS_INFINITY, MINUS_INFINITY] + INFINITE_TIMESTAMP_STRINGS).freeze set_adapter_scheme :postgres # Whether infinite timestamps/dates should be converted on retrieval. By default, no # conversion is done, so an error is raised if you attempt to retrieve an infinite # timestamp/date. You can set this to :nil to convert to nil, :string to leave # as a string, or :float to convert to an infinite float. attr_reader :convert_infinite_timestamps # Convert given argument so that it can be used directly by pg. Currently, pg doesn't # handle fractional seconds in Time/DateTime or blobs with "\0", and it won't ever # handle Sequel::SQLTime values correctly. Only public for use by the adapter, shouldn't # be used by external code. def bound_variable_arg(arg, conn) case arg when Sequel::SQL::Blob conn.escape_bytea(arg) when Sequel::SQLTime literal(arg) when DateTime, Time literal(arg) else arg end end # Connects to the database. In addition to the standard database # options, using the :encoding or :charset option changes the # client encoding for the connection, :connect_timeout is a # connection timeout in seconds, and :sslmode sets whether postgres's # sslmode. :connect_timeout and :ssl_mode are only supported if the pg # driver is used. def connect(server) opts = server_opts(server) conn = if SEQUEL_POSTGRES_USES_PG connection_params = { :host => opts[:host], :port => opts[:port] || 5432, :dbname => opts[:database], :user => opts[:user], :password => opts[:password], :connect_timeout => opts[:connect_timeout] || 20, :sslmode => opts[:sslmode] }.delete_if { |key, value| blank_object?(value) } Adapter.connect(connection_params) else Adapter.connect( (opts[:host] unless blank_object?(opts[:host])), opts[:port] || 5432, nil, '', opts[:database], opts[:user], opts[:password] ) end if encoding = opts[:encoding] || opts[:charset] if conn.respond_to?(:set_client_encoding) conn.set_client_encoding(encoding) else conn.async_exec("set client_encoding to '#{encoding}'") end end conn.instance_variable_set(:@db, self) conn.instance_variable_set(:@prepared_statements, {}) if SEQUEL_POSTGRES_USES_PG connection_configuration_sqls.each{|sql| conn.execute(sql)} conn end # Set whether to allow infinite timestamps/dates. Make sure the # conversion proc for date reflects that setting. 
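# For example (illustrative):
#
#   DB.convert_infinite_timestamps = :nil    # infinite values become nil
#   DB.convert_infinite_timestamps = :string # left as 'infinity'/'-infinity'
#   DB.convert_infinite_timestamps = :float  # converted to 1.0/0 and -1.0/0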
def convert_infinite_timestamps=(v) @convert_infinite_timestamps = case v when Symbol v when 'nil' :nil when 'string' :string when 'float' :float when String typecast_value_boolean(v) else false end pr = old_pr = @use_iso_date_format ? TYPE_TRANSLATOR.method(:date) : Sequel.method(:string_to_date) if v pr = lambda do |val| case val when *INFINITE_TIMESTAMP_STRINGS infinite_timestamp_value(val) else old_pr.call(val) end end end conversion_procs[1082] = pr end # Disconnect given connection def disconnect_connection(conn) begin conn.finish rescue PGError, IOError end end if SEQUEL_POSTGRES_USES_PG && Object.const_defined?(:PG) && ::PG.const_defined?(:Constants) && ::PG::Constants.const_defined?(:PG_DIAG_SCHEMA_NAME) # Return a hash of information about the related PGError (or Sequel::DatabaseError that # wraps a PGError), with the following entries: # # :schema :: The schema name related to the error # :table :: The table name related to the error # :column :: the column name related to the error # :constraint :: The constraint name related to the error # :type :: The datatype name related to the error # # This requires a PostgreSQL 9.3+ server and 9.3+ client library, # and ruby-pg 0.16.0+ to be supported. def error_info(e) e = e.wrapped_exception if e.is_a?(DatabaseError) r = e.result h = {} h[:schema] = r.error_field(::PG::PG_DIAG_SCHEMA_NAME) h[:table] = r.error_field(::PG::PG_DIAG_TABLE_NAME) h[:column] = r.error_field(::PG::PG_DIAG_COLUMN_NAME) h[:constraint] = r.error_field(::PG::PG_DIAG_CONSTRAINT_NAME) h[:type] = r.error_field(::PG::PG_DIAG_DATATYPE_NAME) h end end # Execute the given SQL with the given args on an available connection. def execute(sql, opts=OPTS, &block) synchronize(opts[:server]){|conn| check_database_errors{_execute(conn, sql, opts, &block)}} end if SEQUEL_POSTGRES_USES_PG # +copy_table+ uses PostgreSQL's +COPY TO STDOUT+ SQL statement to return formatted # results directly to the caller. This method is only supported if pg is the # underlying ruby driver. This method should only be called if you want # results returned to the client. If you are using +COPY TO+ # with a filename, you should just use +run+ instead of this method. # # The table argument supports the following types: # # String :: Uses the first argument directly as literal SQL. If you are using # a version of PostgreSQL before 9.0, you will probably want to # use a string if you are using any options at all, as the syntax # Sequel uses for options is only compatible with PostgreSQL 9.0+. # Dataset :: Uses a query instead of a table name when copying. # other :: Uses a table name (usually a symbol) when copying. # # The following options are respected: # # :format :: The format to use. text is the default, so this should be :csv or :binary. # :options :: An options SQL string to use, which should contain comma separated options. # :server :: The server on which to run the query. # # If a block is provided, the method continually yields to the block, one yield # per row. If a block is not provided, a single string is returned with all # of the data. def copy_table(table, opts=OPTS) synchronize(opts[:server]) do |conn| conn.execute(copy_table_sql(table, opts)) begin if block_given? 
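# stream each chunk of COPY output to the caller's block as it arrives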
while buf = conn.get_copy_data yield buf end nil else b = '' b << buf while buf = conn.get_copy_data b end ensure raise DatabaseDisconnectError, "disconnecting as a partial COPY may leave the connection in an unusable state" if buf end end end # +copy_into+ uses PostgreSQL's +COPY FROM STDIN+ SQL statement to do very fast inserts # into a table using input preformatting in either CSV or PostgreSQL text format. # This method is only supported if pg 0.14.0+ is the underlying ruby driver. # This method should only be called if you want # results returned to the client. If you are using +COPY FROM+ # with a filename, you should just use +run+ instead of this method. # # The following options are respected: # # :columns :: The columns to insert into, with the same order as the columns in the # input data. If this isn't given, uses all columns in the table. # :data :: The data to copy to PostgreSQL, which should already be in CSV or PostgreSQL # text format. This can be either a string, or any object that responds to # each and yields string. # :format :: The format to use. text is the default, so this should be :csv or :binary. # :options :: An options SQL string to use, which should contain comma separated options. # :server :: The server on which to run the query. # # If a block is provided and :data option is not, this will yield to the block repeatedly. # The block should return a string, or nil to signal that it is finished. def copy_into(table, opts=OPTS) data = opts[:data] data = Array(data) if data.is_a?(String) if block_given? && data raise Error, "Cannot provide both a :data option and a block to copy_into" elsif !block_given? && !data raise Error, "Must provide either a :data option or a block to copy_into" end synchronize(opts[:server]) do |conn| conn.execute(copy_into_sql(table, opts)) begin if block_given? while buf = yield conn.put_copy_data(buf) end else data.each{|buff| conn.put_copy_data(buff)} end rescue Exception => e conn.put_copy_end("ruby exception occurred while copying data into PostgreSQL") ensure conn.put_copy_end unless e while res = conn.get_result raise e if e check_database_errors{res.check} end end end end # Listens on the given channel (or multiple channels if channel is an array), waiting for notifications. # After a notification is received, or the timeout has passed, stops listening to the channel. Options: # # :after_listen :: An object that responds to +call+ that is called with the underlying connection after the LISTEN # statement is sent, but before the connection starts waiting for notifications. # :loop :: Whether to continually wait for notifications, instead of just waiting for a single # notification. If this option is given, a block must be provided. If this object responds to call, it is # called with the underlying connection after each notification is received (after the block is called). # If a :timeout option is used, and a callable object is given, the object will also be called if the # timeout expires. If :loop is used and you want to stop listening, you can either break from inside the # block given to #listen, or you can throw :stop from inside the :loop object's call method or the block. # :server :: The server on which to listen, if the sharding support is being used. # :timeout :: How long to wait for a notification, in seconds (can provide a float value for # fractional seconds). If not given or nil, waits indefinitely. # # This method is only supported if pg is used as the underlying ruby driver. 
It returns the # channel the notification was sent to (as a string), unless :loop was used, in which case it returns nil. # If a block is given, it is yielded 3 arguments: # * the channel the notification was sent to (as a string) # * the backend pid of the notifier (as an integer), # * and the payload of the notification (as a string or nil). def listen(channels, opts=OPTS, &block) check_database_errors do synchronize(opts[:server]) do |conn| begin channels = Array(channels) channels.each do |channel| sql = "LISTEN " dataset.send(:identifier_append, sql, channel) conn.execute(sql) end opts[:after_listen].call(conn) if opts[:after_listen] timeout = opts[:timeout] ? [opts[:timeout]] : [] if l = opts[:loop] raise Error, 'calling #listen with :loop requires a block' unless block loop_call = l.respond_to?(:call) catch(:stop) do loop do conn.wait_for_notify(*timeout, &block) l.call(conn) if loop_call end end nil else conn.wait_for_notify(*timeout, &block) end ensure conn.execute("UNLISTEN *") end end end end end # If convert_infinite_timestamps is true and the value is infinite, return an appropriate # value based on the convert_infinite_timestamps setting. def to_application_timestamp(value) if convert_infinite_timestamps case value when *INFINITE_TIMESTAMP_STRINGS infinite_timestamp_value(value) else super end else super end end private # Execute the given SQL string or prepared statement on the connection object. def _execute(conn, sql, opts, &block) if sql.is_a?(Symbol) execute_prepared_statement(conn, sql, opts, &block) else conn.execute(sql, opts[:arguments], &block) end end # Execute the prepared statement name with the given arguments on the connection. def _execute_prepared_statement(conn, ps_name, args, opts) conn.exec_prepared(ps_name, args) end # Add the primary_keys and primary_key_sequences instance variables, # so we can get the correct return values for inserted rows. def adapter_initialize @use_iso_date_format = typecast_value_boolean(@opts.fetch(:use_iso_date_format, Postgres.use_iso_date_format)) initialize_postgres_adapter conversion_procs[1082] = TYPE_TRANSLATOR.method(:date) if @use_iso_date_format self.convert_infinite_timestamps = @opts[:convert_infinite_timestamps] end # Convert exceptions raised from the block into DatabaseErrors. def check_database_errors begin yield rescue => e raise_error(e, :classes=>CONVERTED_EXCEPTIONS) end end # Set the DateStyle to ISO if configured, for faster date parsing. def connection_configuration_sqls sqls = super sqls << "SET DateStyle = 'ISO'" if @use_iso_date_format sqls end def database_error_classes [PGError] end def database_exception_sqlstate(exception, opts) if exception.respond_to?(:result) && (result = exception.result) result.error_field(::PGresult::PG_DIAG_SQLSTATE) end end # Execute the prepared statement with the given name on an available # connection, using the given args. If the connection has not prepared # a statement with the given name yet, prepare it. If the connection # has prepared a statement with the same name and different SQL, # deallocate that statement first and then prepare this statement. # If a block is given, yield the result, otherwise, return the number # of rows changed. 
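# For example (illustrative), this is the path taken by:
#
#   ps = DB[:items].where(:id=>:$i).prepare(:select, :select_by_id)
#   ps.call(:i=>1)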
def execute_prepared_statement(conn, name, opts=OPTS, &block) ps = prepared_statement(name) sql = ps.prepared_sql ps_name = name.to_s if args = opts[:arguments] args = args.map{|arg| bound_variable_arg(arg, conn)} end unless conn.prepared_statements[ps_name] == sql conn.execute("DEALLOCATE #{ps_name}") if conn.prepared_statements.include?(ps_name) conn.check_disconnect_errors{log_yield("PREPARE #{ps_name} AS #{sql}"){conn.prepare(ps_name, sql)}} conn.prepared_statements[ps_name] = sql end log_sql = "EXECUTE #{ps_name}" if ps.log_sql log_sql << " (" log_sql << sql log_sql << ")" end q = conn.check_disconnect_errors{log_yield(log_sql, args){_execute_prepared_statement(conn, ps_name, args, opts)}} begin block_given? ? yield(q) : q.cmd_tuples ensure q.clear if q && q.respond_to?(:clear) end end # Return an appropriate value for the given infinite timestamp string. def infinite_timestamp_value(value) case convert_infinite_timestamps when :nil nil when :string value else value == 'infinity' ? PLUS_INFINITY : MINUS_INFINITY end end # Don't log, since logging is done by the underlying connection. def log_connection_execute(conn, sql) conn.execute(sql) end # If the value is an infinite value (either an infinite float or a string returned by # by PostgreSQL for an infinite timestamp), return it without converting it if # convert_infinite_timestamps is set. def typecast_value_date(value) if convert_infinite_timestamps case value when *INFINITE_DATETIME_VALUES value else super end else super end end # If the value is an infinite value (either an infinite float or a string returned by # by PostgreSQL for an infinite timestamp), return it without converting it if # convert_infinite_timestamps is set. def typecast_value_datetime(value) if convert_infinite_timestamps case value when *INFINITE_DATETIME_VALUES value else super end else super end end end # Dataset class for PostgreSQL datasets that use the pg, postgres, or # postgres-pr driver. class Dataset < Sequel::Dataset include Sequel::Postgres::DatasetMethods Database::DatasetClass = self APOS = Sequel::Dataset::APOS # Yield all rows returned by executing the given SQL and converting # the types. def fetch_rows(sql) return cursor_fetch_rows(sql){|h| yield h} if @opts[:cursor] execute(sql){|res| yield_hash_rows(res, fetch_rows_set_cols(res)){|h| yield h}} end # Uses a cursor for fetching records, instead of fetching the entire result # set at once. Can be used to process large datasets without holding # all rows in memory (which is what the underlying drivers do # by default). Options: # # * :rows_per_fetch - the number of rows per fetch (default 1000). Higher # numbers result in fewer queries but greater memory use. # # Usage: # # DB[:huge_table].use_cursor.each{|row| p row} # DB[:huge_table].use_cursor(:rows_per_fetch=>10000).each{|row| p row} # # This is untested with the prepared statement/bound variable support, # and unlikely to work with either. def use_cursor(opts=OPTS) clone(:cursor=>{:rows_per_fetch=>1000}.merge(opts)) end if SEQUEL_POSTGRES_USES_PG PREPARED_ARG_PLACEHOLDER = LiteralString.new('$').freeze # PostgreSQL specific argument mapper used for mapping the named # argument hash to a array with numbered arguments. Only used with # the pg driver. module ArgumentMapper include Sequel::Dataset::ArgumentMapper protected # An array of bound variable values for this query, in the correct order. 
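# (values are looked up by name from the bind variable hash supplied at call time)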
def map_to_prepared_args(hash) prepared_args.map{|k| hash[k.to_sym]} end private def prepared_arg(k) y = k if i = prepared_args.index(y) i += 1 else prepared_args << y i = prepared_args.length end LiteralString.new("#{prepared_arg_placeholder}#{i}") end # Always assume a prepared argument. def prepared_arg?(k) true end end # Allow use of bind arguments for PostgreSQL using the pg driver. module BindArgumentMethods include ArgumentMapper include ::Sequel::Postgres::DatasetMethods::PreparedStatementMethods private # Execute the given SQL with the stored bind arguments. def execute(sql, opts=OPTS, &block) super(sql, {:arguments=>bind_arguments}.merge(opts), &block) end # Same as execute, explicit due to intricacies of alias and super. def execute_dui(sql, opts=OPTS, &block) super(sql, {:arguments=>bind_arguments}.merge(opts), &block) end end # Allow use of server side prepared statements for PostgreSQL using the # pg driver. module PreparedStatementMethods include BindArgumentMethods # Raise a more obvious error if you attempt to call an unnamed prepared statement. def call(*) raise Error, "Cannot call prepared statement without a name" if prepared_statement_name.nil? super end private # Execute the stored prepared statement name and the stored bind # arguments instead of the SQL given. def execute(sql, opts=OPTS, &block) super(prepared_statement_name, opts, &block) end # Same as execute, explicit due to intricacies of alias and super. def execute_dui(sql, opts=OPTS, &block) super(prepared_statement_name, opts, &block) end end # Execute the given type of statement with the hash of values. def call(type, bind_vars=OPTS, *values, &block) ps = to_prepared_statement(type, values) ps.extend(BindArgumentMethods) ps.call(bind_vars, &block) end # Prepare the given type of statement with the given name, and store # it in the database to be called later. def prepare(type, name=nil, *values) ps = to_prepared_statement(type, values) ps.extend(PreparedStatementMethods) if name ps.prepared_statement_name = name db.set_prepared_statement(name, ps) end ps end private # PostgreSQL uses $N for placeholders instead of ?, so use a $ # as the placeholder. def prepared_arg_placeholder PREPARED_ARG_PLACEHOLDER end end private # Use a cursor to fetch groups of records at a time, yielding them to the block. def cursor_fetch_rows(sql) server_opts = {:server=>@opts[:server] || :read_only} db.transaction(server_opts) do begin execute_ddl("DECLARE sequel_cursor NO SCROLL CURSOR WITHOUT HOLD FOR #{sql}", server_opts) rows_per_fetch = @opts[:cursor][:rows_per_fetch].to_i rows_per_fetch = 1000 if rows_per_fetch <= 0 fetch_sql = "FETCH FORWARD #{rows_per_fetch} FROM sequel_cursor" cols = nil # Load columns only in the first fetch, so subsequent fetches are faster execute(fetch_sql) do |res| cols = fetch_rows_set_cols(res) yield_hash_rows(res, cols){|h| yield h} return if res.ntuples < rows_per_fetch end loop do execute(fetch_sql) do |res| yield_hash_rows(res, cols){|h| yield h} return if res.ntuples < rows_per_fetch end end ensure execute_ddl("CLOSE sequel_cursor", server_opts) end end end # Set the @columns based on the result set, and return the array of # field numbers, type conversion procs, and name symbol arrays.
def fetch_rows_set_cols(res) cols = [] procs = db.conversion_procs res.nfields.times do |fieldnum| cols << [fieldnum, procs[res.ftype(fieldnum)], output_identifier(res.fname(fieldnum))] end @columns = cols.map{|c| c.at(2)} cols end # Use the driver's escape_bytea def literal_blob_append(sql, v) sql << APOS << db.synchronize(@opts[:server]){|c| c.escape_bytea(v)} << APOS end # Use the driver's escape_string def literal_string_append(sql, v) sql << APOS << db.synchronize(@opts[:server]){|c| c.escape_string(v)} << APOS end # For each row in the result set, yield a hash with column name symbol # keys and typecasted values. def yield_hash_rows(res, cols) res.ntuples.times do |recnum| converted_rec = {} cols.each do |fieldnum, type_proc, fieldsym| value = res.getvalue(recnum, fieldnum) converted_rec[fieldsym] = (value && type_proc) ? type_proc.call(value) : value end yield converted_rec end end end end end if SEQUEL_POSTGRES_USES_PG && !ENV['NO_SEQUEL_PG'] begin require 'sequel_pg' rescue LoadError if RUBY_PLATFORM =~ /mingw|mswin/ begin require "#{RUBY_VERSION[0...3]}/sequel_pg" rescue LoadError end end end end ruby-sequel-4.1.1/lib/sequel/adapters/shared/000077500000000000000000000000001220156535500211135ustar00rootroot00000000000000ruby-sequel-4.1.1/lib/sequel/adapters/shared/access.rb000066400000000000000000000225001220156535500227000ustar00rootroot00000000000000module Sequel require 'adapters/utils/emulate_offset_with_reverse_and_count' module Access module DatabaseMethods extend Sequel::Database::ResetIdentifierMangling # Access uses type :access as the database_type def database_type :access end # Doesn't work, due to security restrictions on MSysObjects #def tables # from(:MSysObjects).filter(:Type=>1, :Flags=>0).select_map(:Name).map{|x| x.to_sym} #end # Access doesn't support renaming tables from an SQL query, # so create a copy of the table and then drop the from table. def rename_table(from_table, to_table) create_table(to_table, :as=>from(from_table)) drop_table(from_table) end # Access uses type Counter for an autoincrementing keys def serial_primary_key_options {:primary_key => true, :type=>:Counter} end private def alter_table_op_sql(table, op) case op[:op] when :set_column_type "ALTER COLUMN #{quote_identifier(op[:name])} #{type_literal(op)}" else super end end # Access doesn't support CREATE TABLE AS, it only supports SELECT INTO. # Emulating CREATE TABLE AS using SELECT INTO is only possible if a dataset # is given as the argument, it can't work with a string, so raise an # Error if a string is given. def create_table_as(name, ds, options) raise(Error, "must provide dataset instance as value of create_table :as option on Access") unless ds.is_a?(Sequel::Dataset) run(ds.into(name).sql) end DATABASE_ERROR_REGEXPS = { /The changes you requested to the table were not successful because they would create duplicate values in the index, primary key, or relationship/ => UniqueConstraintViolation, /You cannot add or change a record because a related record is required|The record cannot be deleted or changed because table/ => ForeignKeyConstraintViolation, /One or more values are prohibited by the validation rule/ => CheckConstraintViolation, /You must enter a value in the .+ field|cannot contain a Null value because the Required property for this field is set to True/ => NotNullConstraintViolation, }.freeze def database_error_regexps DATABASE_ERROR_REGEXPS end # The SQL to drop an index for the table. 
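# For example (illustrative, assuming an items table with a name index):
#
#   DB.drop_index :items, :name
#   # roughly: DROP INDEX [items_name_index] ON [items]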
def drop_index_sql(table, op) "DROP INDEX #{quote_identifier(op[:name] || default_index_name(table, op[:columns]))} ON #{quote_schema_table(table)}" end def identifier_input_method_default nil end def identifier_output_method_default nil end # Access doesn't have a 64-bit integer type, so use integer and hope # the user isn't using more than 32 bits. def type_literal_generic_bignum(column) :integer end # Access doesn't have a true boolean class, so it uses bit def type_literal_generic_trueclass(column) :bit end # Access uses image type for blobs def type_literal_generic_file(column) :image end end module DatasetMethods include EmulateOffsetWithReverseAndCount SELECT_CLAUSE_METHODS = Dataset.clause_methods(:select, %w'select distinct limit columns into from join where group order having compounds') DATE_FORMAT = '#%Y-%m-%d#'.freeze TIMESTAMP_FORMAT = '#%Y-%m-%d %H:%M:%S#'.freeze TOP = " TOP ".freeze BRACKET_CLOSE = Dataset::BRACKET_CLOSE BRACKET_OPEN = Dataset::BRACKET_OPEN PAREN_CLOSE = Dataset::PAREN_CLOSE PAREN_OPEN = Dataset::PAREN_OPEN INTO = Dataset::INTO FROM = Dataset::FROM SPACE = Dataset::SPACE NOT_EQUAL = ' <> '.freeze OPS = {:'%'=>' Mod '.freeze, :'||'=>' & '.freeze} BOOL_FALSE = '0'.freeze BOOL_TRUE = '-1'.freeze DATE_FUNCTION = 'Date()'.freeze NOW_FUNCTION = 'Now()'.freeze TIME_FUNCTION = 'Time()'.freeze CAST_TYPES = {String=>:CStr, Integer=>:CLng, Date=>:CDate, Time=>:CDate, DateTime=>:CDate, Numeric=>:CDec, BigDecimal=>:CDec, File=>:CStr, Float=>:CDbl, TrueClass=>:CBool, FalseClass=>:CBool} EXTRACT_MAP = {:year=>"'yyyy'", :month=>"'m'", :day=>"'d'", :hour=>"'h'", :minute=>"'n'", :second=>"'s'"} COMMA = Dataset::COMMA DATEPART_OPEN = "datepart(".freeze # Access doesn't support CASE, but it can be emulated with nested # IIF function calls. def case_expression_sql_append(sql, ce) literal_append(sql, ce.with_merged_expression.conditions.reverse.inject(ce.default){|exp,(cond,val)| Sequel::SQL::Function.new(:IIF, cond, val, exp)}) end # Access doesn't support CAST, it uses separate functions for # type conversion def cast_sql_append(sql, expr, type) sql << CAST_TYPES.fetch(type, type).to_s sql << PAREN_OPEN literal_append(sql, expr) sql << PAREN_CLOSE end def complex_expression_sql_append(sql, op, args) case op when :ILIKE complex_expression_sql_append(sql, :LIKE, args) when :'NOT ILIKE' complex_expression_sql_append(sql, :'NOT LIKE', args) when :LIKE, :'NOT LIKE' sql << PAREN_OPEN literal_append(sql, args.at(0)) sql << SPACE << op.to_s << SPACE literal_append(sql, args.at(1)) sql << PAREN_CLOSE when :'!=' sql << PAREN_OPEN literal_append(sql, args.at(0)) sql << NOT_EQUAL literal_append(sql, args.at(1)) sql << PAREN_CLOSE when :'%', :'||' sql << PAREN_OPEN c = false op_str = OPS[op] args.each do |a| sql << op_str if c literal_append(sql, a) c ||= true end sql << PAREN_CLOSE when :extract part = args.at(0) raise(Sequel::Error, "unsupported extract argument: #{part.inspect}") unless format = EXTRACT_MAP[part] sql << DATEPART_OPEN << format.to_s << COMMA literal_append(sql, args.at(1)) sql << PAREN_CLOSE else super end end # Use Date() and Now() for CURRENT_DATE and CURRENT_TIMESTAMP def constant_sql_append(sql, constant) case constant when :CURRENT_DATE sql << DATE_FUNCTION when :CURRENT_TIMESTAMP sql << NOW_FUNCTION when :CURRENT_TIME sql << TIME_FUNCTION else super end end # Emulate cross join by using multiple tables in the FROM clause. 
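# For example (illustrative):
#
#   DB[:a].cross_join(:b).sql # roughly: SELECT * FROM [a], [b]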
def cross_join(table) clone(:from=>@opts[:from] + [table]) end def emulated_function_sql_append(sql, f) case f.f when :char_length literal_append(sql, SQL::Function.new(:len, f.args.first)) else super end end # Access uses [] to escape metacharacters, instead of backslashes. def escape_like(string) string.gsub(/[\\*#?\[]/){|m| "[#{m}]"} end # Specify a table for a SELECT ... INTO query. def into(table) clone(:into => table) end # Access doesn't support INTERSECT or EXCEPT def supports_intersect_except? false end # Access does not support IS TRUE def supports_is_true? false end # Access doesn't support JOIN USING def supports_join_using? false end # Access does not support multiple columns for the IN/NOT IN operators def supports_multiple_column_in? false end # Access doesn't support truncate, so do a delete instead. def truncate delete nil end private # Access uses # to quote dates def literal_date(d) d.strftime(DATE_FORMAT) end # Access uses # to quote datetimes def literal_datetime(t) t.strftime(TIMESTAMP_FORMAT) end alias literal_time literal_datetime # Use 0 for false on Access def literal_false BOOL_FALSE end # Use -1 for true on Access def literal_true BOOL_TRUE end # Access requires parentheses when joining more than one table def select_from_sql(sql) if f = @opts[:from] sql << FROM if (j = @opts[:join]) && !j.empty? sql << (PAREN_OPEN * j.length) end source_list_append(sql, f) end end def select_into_sql(sql) if i = @opts[:into] sql << INTO identifier_append(sql, i) end end # Access requires parentheses when joining more than one table def select_join_sql(sql) if js = @opts[:join] js.each do |j| literal_append(sql, j) sql << PAREN_CLOSE end end end # Access uses TOP for limits def select_limit_sql(sql) if l = @opts[:limit] sql << TOP literal_append(sql, l) end end # Access uses [] for quoting identifiers def quoted_identifier_append(sql, v) sql << BRACKET_OPEN << v.to_s << BRACKET_CLOSE end # Access requires the limit clause to come before other clauses def select_clause_methods SELECT_CLAUSE_METHODS end end end end ruby-sequel-4.1.1/lib/sequel/adapters/shared/cubrid.rb000066400000000000000000000147411220156535500227170ustar00rootroot00000000000000Sequel.require 'adapters/utils/split_alter_table' module Sequel module Cubrid module DatabaseMethods extend Sequel::Database::ResetIdentifierMangling include Sequel::Database::SplitAlterTable AUTOINCREMENT = 'AUTO_INCREMENT'.freeze COLUMN_DEFINITION_ORDER = [:auto_increment, :default, :null, :unique, :primary_key, :references] def database_type :cubrid end def indexes(table, opts=OPTS) m = output_identifier_meth m2 = input_identifier_meth indexes = {} metadata_dataset. from(:db_index___i). join(:db_index_key___k, :index_name=>:index_name, :class_name=>:class_name). where(:i__class_name=>m2.call(table), :is_primary_key=>'NO'). order(:k__key_order). select(:i__index_name, :k__key_attr_name___column, :is_unique). each do |row| index = indexes[m.call(row[:index_name])] ||= {:columns=>[], :unique=>row[:is_unique]=='YES'} index[:columns] << m.call(row[:column]) end indexes end def supports_savepoints? false end def schema_parse_table(table_name, opts) m = output_identifier_meth(opts[:dataset]) m2 = input_identifier_meth(opts[:dataset]) pks = metadata_dataset. from(:db_index___i). join(:db_index_key___k, :index_name=>:index_name, :class_name=>:class_name). where(:i__class_name=>m2.call(table_name), :is_primary_key=>'YES'). order(:k__key_order). select_map(:k__key_attr_name). map{|c| m.call(c)} metadata_dataset. from(:db_attribute).
where(:class_name=>m2.call(table_name)). order(:def_order). select(:attr_name, :data_type___db_type, :default_value___default, :is_nullable___allow_null). map do |row| name = m.call(row.delete(:attr_name)) row[:allow_null] = row[:allow_null] == 'YES' row[:primary_key] = pks.include?(name) row[:type] = schema_column_type(row[:db_type]) [name, row] end end def tables(opts=OPTS) _tables('CLASS') end def views(opts=OPTS) _tables('VCLASS') end private def _tables(type) m = output_identifier_meth metadata_dataset. from(:db_class). where(:is_system_class=>'NO', :class_type=>type). select_map(:class_name). map{|c| m.call(c)} end def alter_table_op_sql(table, op) case op[:op] when :rename_column "RENAME COLUMN #{quote_identifier(op[:name])} AS #{quote_identifier(op[:new_name])}" when :set_column_type, :set_column_null, :set_column_default o = op[:op] opts = schema(table).find{|x| x.first == op[:name]} opts = opts ? opts.last.dup : {} opts[:name] = o == :rename_column ? op[:new_name] : op[:name] opts[:type] = o == :set_column_type ? op[:type] : opts[:db_type] opts[:null] = o == :set_column_null ? op[:null] : opts[:allow_null] opts[:default] = o == :set_column_default ? op[:default] : opts[:ruby_default] opts.delete(:default) if opts[:default] == nil "CHANGE COLUMN #{quote_identifier(op[:name])} #{column_definition_sql(op.merge(opts))}" else super end end def alter_table_sql(table, op) case op[:op] when :drop_index "ALTER TABLE #{quote_schema_table(table)} #{drop_index_sql(table, op)}" else super end end def auto_increment_sql AUTOINCREMENT end # CUBRID requires auto increment before primary key def column_definition_order COLUMN_DEFINITION_ORDER end # CUBRID requires FOREIGN KEY keywords before a column reference def column_references_sql(column) sql = super sql = " FOREIGN KEY#{sql}" unless column[:columns] sql end def connection_execute_method :query end DATABASE_ERROR_REGEXPS = { /Operation would have caused one or more unique constraint violations/ => UniqueConstraintViolation, /The constraint of the foreign key .+ is invalid|Update\/Delete operations are restricted by the foreign key/ => ForeignKeyConstraintViolation, /cannot be made NULL/ => NotNullConstraintViolation, /Your transaction .+ has been unilaterally aborted by the system/ => SerializationFailure, }.freeze def database_error_regexps DATABASE_ERROR_REGEXPS end # CUBRID is case insensitive, so don't modify identifiers def identifier_input_method_default nil end # CUBRID is case insensitive, so don't modify identifiers def identifier_output_method_default nil end # CUBRID does not support named column constraints. def supports_named_column_constraints? false end # CUBRID doesn't support booleans, it recommends using smallint. def type_literal_generic_trueclass(column) :smallint end # CUBRID uses clob for text types. def uses_clob_for_text? true end end module DatasetMethods SELECT_CLAUSE_METHODS = Sequel::Dataset.clause_methods(:select, %w'select distinct columns from join where group having compounds order limit') LIMIT = Sequel::Dataset::LIMIT COMMA = Sequel::Dataset::COMMA BOOL_FALSE = '0'.freeze BOOL_TRUE = '1'.freeze def supports_join_using? false end def supports_multiple_column_in? false end def supports_timestamp_usecs? false end # CUBRID supposedly supports TRUNCATE, but it appears not to work in my testing. # Fallback to using DELETE. def truncate delete nil end private def literal_false BOOL_FALSE end def literal_true BOOL_TRUE end # CUBRID doesn't support CTEs or FOR UPDATE. 
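      # For example, the limit and offset emulation implemented by
      # select_limit_sql below uses MySQL-style "LIMIT offset, limit"
      # syntax (a rough sketch; the table name is hypothetical):
      #
      #   DB[:items].limit(10, 20).sql
      #   # => roughly: SELECT * FROM "items" LIMIT 20, 10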
      def select_clause_methods
        SELECT_CLAUSE_METHODS
      end

      # CUBRID requires a limit to use an offset,
      # and requires a FROM table if a limit is used.
      def select_limit_sql(sql)
        if @opts[:from] && (l = @opts[:limit])
          sql << LIMIT
          if o = @opts[:offset]
            literal_append(sql, o)
            sql << COMMA
          end
          literal_append(sql, l)
        end
      end
    end
  end
end

# ==== ruby-sequel-4.1.1/lib/sequel/adapters/shared/db2.rb ====

Sequel.require 'adapters/utils/emulate_offset_with_row_number'

module Sequel
  module DB2
    @use_clob_as_blob = true

    class << self
      # Whether to use clob as the generic File type, true by default.
      attr_accessor :use_clob_as_blob
    end

    module DatabaseMethods
      extend Sequel::Database::ResetIdentifierMangling

      AUTOINCREMENT = 'GENERATED ALWAYS AS IDENTITY'.freeze
      NOT_NULL = ' NOT NULL'.freeze
      NULL = ''.freeze

      # DB2 always uses :db2 as its database type
      def database_type
        :db2
      end

      # Return the database version as a string. Don't rely on this,
      # it may return an integer in the future.
      def db2_version
        return @db2_version if @db2_version
        @db2_version = metadata_dataset.with_sql("select service_level from sysibmadm.env_inst_info").first[:service_level]
      end
      alias_method :server_version, :db2_version

      # Use SYSIBM.SYSCOLUMNS to get the information on the tables.
      def schema_parse_table(table, opts = OPTS)
        m = output_identifier_meth(opts[:dataset])
        im = input_identifier_meth(opts[:dataset])
        metadata_dataset.with_sql("SELECT * FROM SYSIBM.SYSCOLUMNS WHERE TBNAME = #{literal(im.call(table))} ORDER BY COLNO").
          collect do |column|
            column[:db_type] = column.delete(:typename)
            if column[:db_type] == "DECIMAL"
              column[:db_type] << "(#{column[:longlength]},#{column[:scale]})"
            end
            column[:allow_null] = column.delete(:nulls) == 'Y'
            column[:primary_key] = column.delete(:identity) == 'Y' || !column[:keyseq].nil?
            column[:type] = schema_column_type(column[:db_type])
            [ m.call(column.delete(:name)), column]
          end
      end

      # Use SYSCAT.TABLES to get the tables for the database
      def tables
        metadata_dataset.
          with_sql("SELECT TABNAME FROM SYSCAT.TABLES WHERE TYPE='T' AND OWNER = #{literal(input_identifier_meth.call(opts[:user]))}").
          all.map{|h| output_identifier_meth.call(h[:tabname]) }
      end

      # Use SYSCAT.TABLES to get the views for the database
      def views
        metadata_dataset.
          with_sql("SELECT TABNAME FROM SYSCAT.TABLES WHERE TYPE='V' AND OWNER = #{literal(input_identifier_meth.call(opts[:user]))}").
          all.map{|h| output_identifier_meth.call(h[:tabname]) }
      end

      # Use SYSCAT.INDEXES to get the indexes for the table
      def indexes(table, opts = OPTS)
        m = output_identifier_meth
        indexes = {}
        metadata_dataset.
          from(:syscat__indexes).
          select(:indname, :uniquerule, :colnames).
          where(:tabname=>input_identifier_meth.call(table), :system_required=>0).
          each do |r|
            indexes[m.call(r[:indname])] = {:unique=>(r[:uniquerule]=='U'), :columns=>r[:colnames][1..-1].split('+').map{|v| m.call(v)}}
          end
        indexes
      end

      # DB2 supports transaction isolation levels.
      def supports_transaction_isolation_levels?
        true
      end

      private

      # Handle DB2 specific alter table operations.
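      # For example, adding an autoincrementing primary key column is
      # emulated with three statements (a rough sketch; the table and
      # column names are hypothetical, and identifier quoting/folding
      # is omitted):
      #
      #   DB.alter_table(:items){add_column :id, Integer, :primary_key=>true, :auto_increment=>true}
      #   # ALTER TABLE items ADD id integer DEFAULT 0 NOT NULL
      #   # ALTER TABLE items ALTER COLUMN id DROP DEFAULT
      #   # ALTER TABLE items ALTER COLUMN id SET GENERATED ALWAYS AS IDENTITY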
def alter_table_sql(table, op) case op[:op] when :add_column if op[:primary_key] && op[:auto_increment] && op[:type] == Integer [ "ALTER TABLE #{quote_schema_table(table)} ADD #{column_definition_sql(op.merge(:auto_increment=>false, :primary_key=>false, :default=>0, :null=>false))}", "ALTER TABLE #{quote_schema_table(table)} ALTER COLUMN #{literal(op[:name])} DROP DEFAULT", "ALTER TABLE #{quote_schema_table(table)} ALTER COLUMN #{literal(op[:name])} SET #{AUTOINCREMENT}" ] else "ALTER TABLE #{quote_schema_table(table)} ADD #{column_definition_sql(op)}" end when :drop_column "ALTER TABLE #{quote_schema_table(table)} DROP #{column_definition_sql(op)}" when :rename_column # renaming is only possible after db2 v9.7 "ALTER TABLE #{quote_schema_table(table)} RENAME COLUMN #{quote_identifier(op[:name])} TO #{quote_identifier(op[:new_name])}" when :set_column_type "ALTER TABLE #{quote_schema_table(table)} ALTER COLUMN #{quote_identifier(op[:name])} SET DATA TYPE #{type_literal(op)}" when :set_column_default "ALTER TABLE #{quote_schema_table(table)} ALTER COLUMN #{quote_identifier(op[:name])} SET DEFAULT #{literal(op[:default])}" when :add_constraint if op[:type] == :unique sqls = op[:columns].map{|c| ["ALTER TABLE #{quote_schema_table(table)} ALTER COLUMN #{quote_identifier(c)} SET NOT NULL", reorg_sql(table)]} sqls << super sqls.flatten else super end else super end end # DB2 uses an identity column for autoincrement. def auto_increment_sql AUTOINCREMENT end # Add null/not null SQL fragment to column creation SQL. def column_definition_null_sql(sql, column) null = column.fetch(:null, column[:allow_null]) null = false if column[:primary_key] sql << NOT_NULL if null == false sql << NULL if null == true end # Supply columns with NOT NULL if they are part of a composite # primary key or unique constraint def column_list_sql(g) ks = [] g.constraints.each{|c| ks = c[:columns] if [:primary_key, :unique].include?(c[:type])} g.columns.each{|c| c[:null] = false if ks.include?(c[:name]) } super end # Insert data from the current table into the new table after # creating the table, since it is not possible to do it in one step. def create_table_as(name, sql, options) super from(name).insert(sql.is_a?(Dataset) ? sql : dataset.with_sql(sql)) end # DB2 requires parens around the SELECT, and DEFINITION ONLY at the end. def create_table_as_sql(name, sql, options) "#{create_table_prefix_sql(name, options)} AS (#{sql}) DEFINITION ONLY" end # Here we use DGTT which has most backward compatibility, which uses # DECLARE instead of CREATE. CGTT can only be used after version 9.7. 
      # http://www.ibm.com/developerworks/data/library/techarticle/dm-0912globaltemptable/
      def create_table_prefix_sql(name, options)
        if options[:temp]
          "DECLARE GLOBAL TEMPORARY TABLE #{quote_identifier(name)}"
        else
          super
        end
      end

      DATABASE_ERROR_REGEXPS = {
        /DB2 SQL Error: SQLCODE=-803, SQLSTATE=23505|One or more values in the INSERT statement, UPDATE statement, or foreign key update caused by a DELETE statement are not valid because the primary key, unique constraint or unique index/ => UniqueConstraintViolation,
        /DB2 SQL Error: (SQLCODE=-530, SQLSTATE=23503|SQLCODE=-532, SQLSTATE=23504)|The insert or update value of the FOREIGN KEY .+ is not equal to any value of the parent key of the parent table|A parent row cannot be deleted because the relationship .+ restricts the deletion/ => ForeignKeyConstraintViolation,
        /DB2 SQL Error: SQLCODE=-545, SQLSTATE=23513|The requested operation is not allowed because a row does not satisfy the check constraint/ => CheckConstraintViolation,
        /DB2 SQL Error: SQLCODE=-407, SQLSTATE=23502|Assignment of a NULL value to a NOT NULL column/ => NotNullConstraintViolation,
        /DB2 SQL Error: SQLCODE=-911, SQLSTATE=40001|The current transaction has been rolled back because of a deadlock or timeout/ => SerializationFailure,
      }.freeze

      def database_error_regexps
        DATABASE_ERROR_REGEXPS
      end

      # DB2 has issues with quoted identifiers, so
      # turn off database quoting by default.
      def quote_identifiers_default
        false
      end

      # DB2 uses RENAME TABLE to rename tables.
      def rename_table_sql(name, new_name)
        "RENAME TABLE #{quote_schema_table(name)} TO #{quote_schema_table(new_name)}"
      end

      # Run the REORG TABLE command for the table, necessary when
      # the table has been altered.
      def reorg(table)
        synchronize(opts[:server]){|c| c.execute(reorg_sql(table))}
      end

      # The SQL to use for REORGing a table.
      def reorg_sql(table)
        "CALL ADMIN_CMD(#{literal("REORG TABLE #{table}")})"
      end

      # Treat clob as blob if use_clob_as_blob is true
      def schema_column_type(db_type)
        (::Sequel::DB2::use_clob_as_blob && db_type.downcase == 'clob') ? :blob : super
      end

      # SQL to set the transaction isolation level
      def set_transaction_isolation_sql(level)
        "SET CURRENT ISOLATION #{Database::TRANSACTION_ISOLATION_LEVELS[level]}"
      end

      # Use the clob type by default for File values.
      # Note: if the user chooses to use blob instead, the insert statement
      # should use a cast for the blob value, e.g.:
      #   cast(X'fffefdfcfbfa' as blob(2G))
      def type_literal_generic_file(column)
        ::Sequel::DB2::use_clob_as_blob ? :clob : :blob
      end

      # DB2 uses smallint to store booleans.
      def type_literal_generic_trueclass(column)
        :smallint
      end
      alias type_literal_generic_falseclass type_literal_generic_trueclass

      # DB2 uses clob for text types.
      def uses_clob_for_text?
        true
      end
    end

    module DatasetMethods
      include EmulateOffsetWithRowNumber

      PAREN_CLOSE = Dataset::PAREN_CLOSE
      PAREN_OPEN = Dataset::PAREN_OPEN
      BITWISE_METHOD_MAP = {:& =>:BITAND, :| => :BITOR, :^ => :BITXOR, :'B~'=>:BITNOT}
      EMULATED_FUNCTION_MAP = {:char_length=>'length'.freeze}
      BOOL_TRUE = '1'.freeze
      BOOL_FALSE = '0'.freeze
      CAST_STRING_OPEN = "RTRIM(CHAR(".freeze
      CAST_STRING_CLOSE = "))".freeze
      FETCH_FIRST_ROW_ONLY = " FETCH FIRST ROW ONLY".freeze
      FETCH_FIRST = " FETCH FIRST ".freeze
      ROWS_ONLY = " ROWS ONLY".freeze
      EMPTY_FROM_TABLE = ' FROM "SYSIBM"."SYSDUMMY1"'.freeze
      HSTAR = "H*".freeze
      BLOB_OPEN = "BLOB(X'".freeze
      BLOB_CLOSE = "')".freeze

      # DB2 casts strings using RTRIM and CHAR instead of VARCHAR.
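      # A rough example (the table and column names are hypothetical,
      # and identifier quoting/folding is omitted):
      #
      #   DB[:items].select(Sequel.cast(:name, String)).sql
      #   # => roughly: SELECT RTRIM(CHAR(name)) FROM items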
def cast_sql_append(sql, expr, type) if(type == String) sql << CAST_STRING_OPEN literal_append(sql, expr) sql << CAST_STRING_CLOSE else super end end def complex_expression_sql_append(sql, op, args) case op when :&, :|, :^ # works with db2 v9.5 and after op = BITWISE_METHOD_MAP[op] sql << complex_expression_arg_pairs(args){|a, b| literal(SQL::Function.new(op, a, b))} when :<< sql << complex_expression_arg_pairs(args){|a, b| "(#{literal(a)} * POWER(2, #{literal(b)}))"} when :>> sql << complex_expression_arg_pairs(args){|a, b| "(#{literal(a)} / POWER(2, #{literal(b)}))"} when :% sql << complex_expression_arg_pairs(args){|a, b| "MOD(#{literal(a)}, #{literal(b)})"} when :'B~' literal_append(sql, SQL::Function.new(:BITNOT, *args)) when :extract sql << args.at(0).to_s sql << PAREN_OPEN literal_append(sql, args.at(1)) sql << PAREN_CLOSE else super end end # DB2 supports GROUP BY CUBE def supports_group_cube? true end # DB2 supports GROUP BY ROLLUP def supports_group_rollup? true end # DB2 does not support IS TRUE. def supports_is_true? false end # DB2 does not support multiple columns in IN. def supports_multiple_column_in? false end # DB2 only allows * in SELECT if it is the only thing being selected. def supports_select_all_and_column? false end # DB2 does not support fractional seconds in timestamps. def supports_timestamp_usecs? false end # DB2 supports window functions def supports_window_functions? true end # DB2 does not support WHERE 1. def supports_where_true? false end private # DB2 needs the standard workaround to insert all default values into # a table with more than one column. def insert_supports_empty_values? false end # Use 0 for false on DB2 def literal_false BOOL_FALSE end # Use 1 for true on DB2 def literal_true BOOL_TRUE end # DB2 uses a literal hexidecimal number for blob strings def literal_blob_append(sql, v) if ::Sequel::DB2.use_clob_as_blob super else sql << BLOB_OPEN << v.unpack(HSTAR).first << BLOB_CLOSE end end # Add a fallback table for empty from situation def select_from_sql(sql) @opts[:from] ? super : (sql << EMPTY_FROM_TABLE) end # Modify the sql to limit the number of rows returned # Note: # # After db2 v9.7, MySQL flavored "LIMIT X OFFSET Y" can be enabled using # # db2set DB2_COMPATIBILITY_VECTOR=MYSQL # db2stop # db2start # # Support for this feature is not used in this adapter however. def select_limit_sql(sql) if l = @opts[:limit] if l == 1 sql << FETCH_FIRST_ROW_ONLY else sql << FETCH_FIRST literal_append(sql, l) sql << ROWS_ONLY end end end def _truncate_sql(table) # "TRUNCATE #{table} IMMEDIATE" is only for newer version of db2, so we # use the following one "ALTER TABLE #{quote_schema_table(table)} ACTIVATE NOT LOGGED INITIALLY WITH EMPTY TABLE" end end end end ruby-sequel-4.1.1/lib/sequel/adapters/shared/firebird.rb000066400000000000000000000166671220156535500232460ustar00rootroot00000000000000module Sequel module Firebird module DatabaseMethods extend Sequel::Database::ResetIdentifierMangling AUTO_INCREMENT = ''.freeze TEMPORARY = 'GLOBAL TEMPORARY '.freeze def clear_primary_key(*tables) tables.each{|t| @primary_keys.delete(dataset.send(:input_identifier, t))} end def create_trigger(*args) self << create_trigger_sql(*args) end def database_type :firebird end def drop_sequence(name) self << drop_sequence_sql(name) end # Return primary key for the given table. 
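      # A rough example (the table and key names are hypothetical):
      #
      #   DB.primary_key(:artists)
      #   # => :id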
def primary_key(table) t = dataset.send(:input_identifier, table) @primary_keys.fetch(t) do pk = fetch("SELECT RDB$FIELD_NAME FROM RDB$INDEX_SEGMENTS NATURAL JOIN RDB$RELATION_CONSTRAINTS WHERE RDB$CONSTRAINT_TYPE = 'PRIMARY KEY' AND RDB$RELATION_NAME = ?", t).single_value @primary_keys[t] = dataset.send(:output_identifier, pk.rstrip) if pk end end def restart_sequence(*args) self << restart_sequence_sql(*args) end def sequences(opts=OPTS) ds = self[:"rdb$generators"].server(opts[:server]).filter(:"rdb$system_flag" => 0).select(:"rdb$generator_name") block_given? ? yield(ds) : ds.map{|r| ds.send(:output_identifier, r[:"rdb$generator_name"])} end def tables(opts=OPTS) tables_or_views(0, opts) end def views(opts=OPTS) tables_or_views(1, opts) end private # Use Firebird specific syntax for add column def alter_table_sql(table, op) case op[:op] when :add_column "ALTER TABLE #{quote_schema_table(table)} ADD #{column_definition_sql(op)}" when :drop_column "ALTER TABLE #{quote_schema_table(table)} DROP #{column_definition_sql(op)}" when :rename_column "ALTER TABLE #{quote_schema_table(table)} ALTER #{quote_identifier(op[:name])} TO #{quote_identifier(op[:new_name])}" when :set_column_type "ALTER TABLE #{quote_schema_table(table)} ALTER #{quote_identifier(op[:name])} TYPE #{type_literal(op)}" else super(table, op) end end def auto_increment_sql() AUTO_INCREMENT end def create_sequence_sql(name, opts=OPTS) "CREATE SEQUENCE #{quote_identifier(name)}" end # Firebird gets an override because of the mess of creating a # sequence and trigger for auto-incrementing primary keys. def create_table_from_generator(name, generator, options) drop_statement, create_statements = create_table_sql_list(name, generator, options) (execute_ddl(drop_statement) rescue nil) if drop_statement create_statements.each{|sql| execute_ddl(sql)} end def create_table_sql_list(name, generator, options=OPTS) statements = [create_table_sql(name, generator, options)] drop_seq_statement = nil generator.columns.each do |c| if c[:auto_increment] c[:sequence_name] ||= "seq_#{name}_#{c[:name]}" unless c[:create_sequence] == false drop_seq_statement = drop_sequence_sql(c[:sequence_name]) statements << create_sequence_sql(c[:sequence_name]) statements << restart_sequence_sql(c[:sequence_name], {:restart_position => c[:sequence_start_position]}) if c[:sequence_start_position] end unless c[:create_trigger] == false c[:trigger_name] ||= "BI_#{name}_#{c[:name]}" c[:quoted_name] = quote_identifier(c[:name]) trigger_definition = <<-END begin if ((new.#{c[:quoted_name]} is null) or (new.#{c[:quoted_name]} = 0)) then begin new.#{c[:quoted_name]} = next value for #{c[:sequence_name]}; end end END statements << create_trigger_sql(name, c[:trigger_name], trigger_definition, {:events => [:insert]}) end end end [drop_seq_statement, statements] end def create_trigger_sql(table, name, definition, opts=OPTS) events = opts[:events] ? Array(opts[:events]) : [:insert, :update, :delete] whence = opts[:after] ? 'AFTER' : 'BEFORE' inactive = opts[:inactive] ? 
'INACTIVE' : 'ACTIVE' position = opts.fetch(:position, 0) sql = <<-end_sql CREATE TRIGGER #{quote_identifier(name)} for #{quote_identifier(table)} #{inactive} #{whence} #{events.map{|e| e.to_s.upcase}.join(' OR ')} position #{position} as #{definition} end_sql sql end def drop_sequence_sql(name) "DROP SEQUENCE #{quote_identifier(name)}" end def remove_cached_schema(table) clear_primary_key(table) super end def restart_sequence_sql(name, opts=OPTS) seq_name = quote_identifier(name) "ALTER SEQUENCE #{seq_name} RESTART WITH #{opts[:restart_position]}" end def tables_or_views(type, opts) ds = self[:"rdb$relations"].server(opts[:server]).filter(:"rdb$relation_type" => type, Sequel::SQL::Function.new(:COALESCE, :"rdb$system_flag", 0) => 0).select(:"rdb$relation_name") ds.map{|r| ds.send(:output_identifier, r[:"rdb$relation_name"].rstrip)} end def type_literal_generic_string(column) column[:text] ? :"BLOB SUB_TYPE TEXT" : super end end module DatasetMethods BOOL_TRUE = '1'.freeze BOOL_FALSE = '0'.freeze NULL = LiteralString.new('NULL').freeze SELECT_CLAUSE_METHODS = Dataset.clause_methods(:select, %w'with select distinct limit columns from join where group having compounds order') INSERT_CLAUSE_METHODS = Dataset.clause_methods(:insert, %w'insert into columns values returning') FIRST = " FIRST ".freeze SKIP = " SKIP ".freeze DEFAULT_FROM = " FROM RDB$DATABASE" # Insert given values into the database. def insert(*values) if @opts[:sql] || @opts[:returning] super else returning(insert_pk).insert(*values){|r| return r.values.first} end end # Insert a record returning the record inserted def insert_select(*values) returning.insert(*values){|r| return r} end def requires_sql_standard_datetimes? true end def supports_insert_select? true end # Firebird does not support INTERSECT or EXCEPT def supports_intersect_except? false end private def insert_clause_methods INSERT_CLAUSE_METHODS end def insert_pk(*values) pk = db.primary_key(opts[:from].first) pk ? Sequel::SQL::Identifier.new(pk) : NULL end def literal_false BOOL_FALSE end def literal_true BOOL_TRUE end # The order of clauses in the SELECT SQL statement def select_clause_methods SELECT_CLAUSE_METHODS end # Use a default FROM table if the dataset does not contain a FROM table. def select_from_sql(sql) if @opts[:from] super else sql << DEFAULT_FROM end end def select_limit_sql(sql) if l = @opts[:limit] sql << FIRST literal_append(sql, l) end if o = @opts[:offset] sql << SKIP literal_append(sql, o) end end end end end ruby-sequel-4.1.1/lib/sequel/adapters/shared/informix.rb000066400000000000000000000023321220156535500232730ustar00rootroot00000000000000module Sequel module Informix module DatabaseMethods extend Sequel::Database::ResetIdentifierMangling TEMPORARY = 'TEMP '.freeze # Informix uses the :informix database type def database_type :informix end private # Informix has issues with quoted identifiers, so # turn off database quoting by default. def quote_identifiers_default false end # SQL fragment for showing a table is temporary def temporary_table_sql TEMPORARY end end module DatasetMethods SELECT_CLAUSE_METHODS = Dataset.clause_methods(:select, %w'select limit distinct columns from join where having group compounds order') FIRST = " FIRST ".freeze SKIP = " SKIP ".freeze private # Informix does not support INTERSECT or EXCEPT def supports_intersect_except? 
false end def select_clause_methods SELECT_CLAUSE_METHODS end def select_limit_sql(sql) if o = @opts[:offset] sql << SKIP literal_append(sql, o) end if l = @opts[:limit] sql << FIRST literal_append(sql, l) end end end end end ruby-sequel-4.1.1/lib/sequel/adapters/shared/mssql.rb000066400000000000000000001012021220156535500225730ustar00rootroot00000000000000Sequel.require %w'emulate_offset_with_row_number split_alter_table', 'adapters/utils' module Sequel Dataset::NON_SQL_OPTIONS << :disable_insert_output module MSSQL module DatabaseMethods extend Sequel::Database::ResetIdentifierMangling AUTO_INCREMENT = 'IDENTITY(1,1)'.freeze SERVER_VERSION_RE = /^(\d+)\.(\d+)\.(\d+)/.freeze SERVER_VERSION_SQL = "SELECT CAST(SERVERPROPERTY('ProductVersion') AS varchar)".freeze SQL_BEGIN = "BEGIN TRANSACTION".freeze SQL_COMMIT = "COMMIT TRANSACTION".freeze SQL_ROLLBACK = "IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION".freeze SQL_ROLLBACK_TO_SAVEPOINT = 'IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION autopoint_%d'.freeze SQL_SAVEPOINT = 'SAVE TRANSACTION autopoint_%d'.freeze MSSQL_DEFAULT_RE = /\A(?:\(N?('.*')\)|\(\((-?\d+(?:\.\d+)?)\)\))\z/ FOREIGN_KEY_ACTION_MAP = {0 => :no_action, 1 => :cascade, 2 => :set_null, 3 => :set_default}.freeze include Sequel::Database::SplitAlterTable # Whether to use N'' to quote strings, which allows unicode characters inside the # strings. True by default for compatibility, can be set to false for a possible # performance increase. This sets the default for all datasets created from this # Database object. attr_reader :mssql_unicode_strings def mssql_unicode_strings=(v) @mssql_unicode_strings = v reset_default_dataset end # The types to check for 0 scale to transform :decimal types # to :integer. DECIMAL_TYPE_RE = /number|numeric|decimal/io # Microsoft SQL Server uses the :mssql type. def database_type :mssql end # Microsoft SQL Server namespaces indexes per table. def global_index_namespace? false end # Return foreign key information using the system views, including # :name, :on_delete, and :on_update entries in the hashes. def foreign_key_list(table, opts=OPTS) m = output_identifier_meth im = input_identifier_meth schema, table = schema_and_table(table) current_schema = m.call(get(Sequel.function('schema_name'))) fk_action_map = FOREIGN_KEY_ACTION_MAP ds = metadata_dataset.from(:sys__foreign_keys___fk). join(:sys__foreign_key_columns___fkc, :constraint_object_id => :object_id). join(:sys__all_columns___pc, :object_id => :fkc__parent_object_id, :column_id => :fkc__parent_column_id). join(:sys__all_columns___rc, :object_id => :fkc__referenced_object_id, :column_id => :fkc__referenced_column_id). where{{object_schema_name(:fk__parent_object_id) => im.call(schema || current_schema)}}. where{{object_name(:fk__parent_object_id) => im.call(table)}}. select{[:fk__name, :fk__delete_referential_action, :fk__update_referential_action, :pc__name___column, :rc__name___referenced_column, object_schema_name(:fk__referenced_object_id).as(:schema), object_name(:fk__referenced_object_id).as(:table)]}. order(:name, :fkc__constraint_column_id) h = {} ds.each do |row| if r = h[row[:name]] r[:columns] << m.call(row[:column]) r[:key] << m.call(row[:referenced_column]) else referenced_schema = m.call(row[:schema]) referenced_table = m.call(row[:table]) h[row[:name]] = { :name => m.call(row[:name]), :table => (referenced_schema == current_schema) ? 
              referenced_table : :"#{referenced_schema}__#{referenced_table}",
              :columns => [m.call(row[:column])],
              :key => [m.call(row[:referenced_column])],
              :on_update => fk_action_map[row[:update_referential_action]],
              :on_delete => fk_action_map[row[:delete_referential_action]]
            }
          end
        end
        h.values
      end

      # Use the system tables to get index information
      def indexes(table, opts=OPTS)
        m = output_identifier_meth
        im = input_identifier_meth
        indexes = {}
        metadata_dataset.from(:sys__tables___t).
          join(:sys__indexes___i, :object_id=>:object_id).
          join(:sys__index_columns___ic, :object_id=>:object_id, :index_id=>:index_id).
          join(:sys__columns___c, :object_id=>:object_id, :column_id=>:column_id).
          select(:i__name, :i__is_unique, :c__name___column).
          where{{t__name=>im.call(table)}}.
          where(:i__is_primary_key=>0, :i__is_disabled=>0).
          order(:i__name, :ic__index_column_id).
          each do |r|
            index = indexes[m.call(r[:name])] ||= {:columns=>[], :unique=>(r[:is_unique] && r[:is_unique]!=0)}
            index[:columns] << m.call(r[:column])
          end
        indexes
      end

      # The version of the MSSQL server, as an integer (e.g. 10001600 for
      # SQL Server 2008 Express).
      def server_version(server=nil)
        return @server_version if @server_version
        @server_version = synchronize(server) do |conn|
          (conn.server_version rescue nil) if conn.respond_to?(:server_version)
        end
        unless @server_version
          m = SERVER_VERSION_RE.match(fetch(SERVER_VERSION_SQL).single_value.to_s)
          @server_version = (m[1].to_i * 1000000) + (m[2].to_i * 10000) + m[3].to_i
        end
        @server_version
      end

      # MSSQL supports savepoints, though it doesn't support committing/releasing them.
      def supports_savepoints?
        true
      end

      # MSSQL supports transaction isolation levels
      def supports_transaction_isolation_levels?
        true
      end

      # MSSQL supports transaction DDL statements.
      def supports_transactional_ddl?
        true
      end

      # Microsoft SQL Server supports using the INFORMATION_SCHEMA to get
      # information on tables.
      def tables(opts=OPTS)
        information_schema_tables('BASE TABLE', opts)
      end

      # Microsoft SQL Server supports using the INFORMATION_SCHEMA to get
      # information on views.
      def views(opts=OPTS)
        information_schema_tables('VIEW', opts)
      end

      private

      # Add dropping of the default constraint to the list of SQL queries.
      # This is necessary before dropping the column or changing its type.
      def add_drop_default_constraint_sql(sqls, table, column)
        if constraint = default_constraint_name(table, column)
          sqls << "ALTER TABLE #{quote_schema_table(table)} DROP CONSTRAINT #{constraint}"
        end
      end

      # MSSQL uses the IDENTITY(1,1) column for autoincrementing columns.
      def auto_increment_sql
        AUTO_INCREMENT
      end

      # MSSQL specific syntax for altering tables.
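      # For example, renaming a column goes through sp_rename (a rough
      # sketch; the table and column names are hypothetical):
      #
      #   DB.alter_table(:items){rename_column :name, :title}
      #   # sp_rename '[items].[name]', 'title', 'COLUMN'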
def alter_table_sql(table, op) case op[:op] when :add_column "ALTER TABLE #{quote_schema_table(table)} ADD #{column_definition_sql(op)}" when :drop_column sqls = [] add_drop_default_constraint_sql(sqls, table, op[:name]) sqls << super when :rename_column "sp_rename #{literal("#{quote_schema_table(table)}.#{quote_identifier(op[:name])}")}, #{literal(op[:new_name].to_s)}, 'COLUMN'" when :set_column_type sqls = [] if sch = schema(table) if cs = sch.each{|k, v| break v if k == op[:name]; nil} cs = cs.dup add_drop_default_constraint_sql(sqls, table, op[:name]) cs[:default] = cs[:ruby_default] op = cs.merge!(op) default = op.delete(:default) end end sqls << "ALTER TABLE #{quote_schema_table(table)} ALTER COLUMN #{column_definition_sql(op)}" sqls << alter_table_sql(table, op.merge(:op=>:set_column_default, :default=>default)) if default sqls when :set_column_null sch = schema(table).find{|k,v| k.to_s == op[:name].to_s}.last type = sch[:db_type] if [:string, :decimal].include?(sch[:type]) and size = (sch[:max_chars] || sch[:column_size]) type += "(#{size}#{", #{sch[:scale]}" if sch[:scale] && sch[:scale].to_i > 0})" end "ALTER TABLE #{quote_schema_table(table)} ALTER COLUMN #{quote_identifier(op[:name])} #{type_literal(:type=>type)} #{'NOT ' unless op[:null]}NULL" when :set_column_default "ALTER TABLE #{quote_schema_table(table)} ADD CONSTRAINT #{quote_identifier("sequel_#{table}_#{op[:name]}_def")} DEFAULT #{literal(op[:default])} FOR #{quote_identifier(op[:name])}" else super(table, op) end end # SQL to start a new savepoint def begin_savepoint_sql(depth) SQL_SAVEPOINT % depth end # SQL to BEGIN a transaction. def begin_transaction_sql SQL_BEGIN end # Handle MSSQL specific default format. def column_schema_normalize_default(default, type) if m = MSSQL_DEFAULT_RE.match(default) default = m[1] || m[2] end super(default, type) end # Commit the active transaction on the connection, does not commit/release # savepoints. def commit_transaction(conn, opts=OPTS) log_connection_execute(conn, commit_transaction_sql) unless _trans(conn)[:savepoint_level] > 1 end # SQL to COMMIT a transaction. def commit_transaction_sql SQL_COMMIT end # MSSQL uses the name of the table to decide the difference between # a regular and temporary table, with temporary table names starting with # a #. def create_table_prefix_sql(name, options) "CREATE TABLE #{quote_schema_table(options[:temp] ? "##{name}" : name)}" end # MSSQL doesn't support CREATE TABLE AS, it only supports SELECT INTO. # Emulating CREATE TABLE AS using SELECT INTO is only possible if a dataset # is given as the argument, it can't work with a string, so raise an # Error if a string is given. def create_table_as(name, ds, options) raise(Error, "must provide dataset instance as value of create_table :as option on MSSQL") unless ds.is_a?(Sequel::Dataset) run(ds.into(name).sql) end DATABASE_ERROR_REGEXPS = { /Violation of UNIQUE KEY constraint/ => UniqueConstraintViolation, /conflicted with the (FOREIGN KEY.*|REFERENCE) constraint/ => ForeignKeyConstraintViolation, /conflicted with the CHECK constraint/ => CheckConstraintViolation, /column does not allow nulls/ => NotNullConstraintViolation, /was deadlocked on lock resources with another process and has been chosen as the deadlock victim/ => SerializationFailure, }.freeze def database_error_regexps DATABASE_ERROR_REGEXPS end # The name of the constraint for setting the default value on the table and column. # The SQL used to select default constraints utilizes MSSQL catalog views which were introduced in 2005. 
# This method intentionally does not support MSSQL 2000. def default_constraint_name(table, column_name) if server_version >= 9000000 table_name = schema_and_table(table).compact.join('.') self[:sys__default_constraints]. where{{:parent_object_id => Sequel::SQL::Function.new(:object_id, table_name), col_name(:parent_object_id, :parent_column_id) => column_name.to_s}}. get(:name) end end # The SQL to drop an index for the table. def drop_index_sql(table, op) "DROP INDEX #{quote_identifier(op[:name] || default_index_name(table, op[:columns]))} ON #{quote_schema_table(table)}" end # support for clustered index type def index_definition_sql(table_name, index) index_name = index[:name] || default_index_name(table_name, index[:columns]) if index[:type] == :full_text "CREATE FULLTEXT INDEX ON #{quote_schema_table(table_name)} #{literal(index[:columns])} KEY INDEX #{literal(index[:key_index])}" else "CREATE #{'UNIQUE ' if index[:unique]}#{'CLUSTERED ' if index[:type] == :clustered}INDEX #{quote_identifier(index_name)} ON #{quote_schema_table(table_name)} #{literal(index[:columns])}#{" INCLUDE #{literal(index[:include])}" if index[:include]}#{" WHERE #{filter_expr(index[:where])}" if index[:where]}" end end # Backbone of the tables and views support. def information_schema_tables(type, opts) m = output_identifier_meth metadata_dataset.from(:information_schema__tables___t). select(:table_name). filter(:table_type=>type, :table_schema=>(opts[:schema]||'dbo').to_s). map{|x| m.call(x[:table_name])} end # Always quote identifiers in the metadata_dataset, so schema parsing works. def metadata_dataset ds = super ds.quote_identifiers = true ds end # Use sp_rename to rename the table def rename_table_sql(name, new_name) "sp_rename #{literal(quote_schema_table(name))}, #{quote_identifier(schema_and_table(new_name).pop)}" end # SQL to rollback to a savepoint def rollback_savepoint_sql(depth) SQL_ROLLBACK_TO_SAVEPOINT % depth end # SQL to ROLLBACK a transaction. def rollback_transaction_sql SQL_ROLLBACK end # The closest MSSQL equivalent of a boolean datatype is the bit type. def schema_column_type(db_type) case db_type when /\A(?:bit)\z/io :boolean when /\A(?:(?:small)?money)\z/io :decimal else super end end # MSSQL uses the INFORMATION_SCHEMA to hold column information, and # parses primary key information from the sysindexes, sysindexkeys, # and syscolumns system tables. def schema_parse_table(table_name, opts) m = output_identifier_meth(opts[:dataset]) m2 = input_identifier_meth(opts[:dataset]) tn = m2.call(table_name.to_s) table_id = get{object_id(tn)} info_sch_sch = opts[:information_schema_schema] inf_sch_qual = lambda{|s| info_sch_sch ? Sequel.qualify(info_sch_sch, s) : Sequel.expr(s)} sys_qual = lambda{|s| info_sch_sch ? Sequel.qualify(info_sch_sch, Sequel.qualify(Sequel.lit(''), s)) : Sequel.expr(s)} pk_index_id = metadata_dataset.from(sys_qual.call(:sysindexes)). where(:id=>table_id, :indid=>1..254){{(status & 2048)=>2048}}. get(:indid) pk_cols = metadata_dataset.from(sys_qual.call(:sysindexkeys).as(:sik)). join(sys_qual.call(:syscolumns).as(:sc), :id=>:id, :colid=>:colid). where(:sik__id=>table_id, :sik__indid=>pk_index_id). select_order_map(:sc__name) ds = metadata_dataset.from(inf_sch_qual.call(:information_schema__tables).as(:t)). join(inf_sch_qual.call(:information_schema__columns).as(:c), :table_catalog=>:table_catalog, :table_schema => :table_schema, :table_name => :table_name). 
select(:column_name___column, :data_type___db_type, :character_maximum_length___max_chars, :column_default___default, :is_nullable___allow_null, :numeric_precision___column_size, :numeric_scale___scale). filter(:c__table_name=>tn) if schema = opts[:schema] ds.filter!(:c__table_schema=>schema) end ds.map do |row| row[:primary_key] = pk_cols.include?(row[:column]) row[:allow_null] = row[:allow_null] == 'YES' ? true : false row[:default] = nil if blank_object?(row[:default]) row[:type] = if row[:db_type] =~ DECIMAL_TYPE_RE && row[:scale] == 0 :integer else schema_column_type(row[:db_type]) end [m.call(row.delete(:column)), row] end end # Set the mssql_unicode_strings settings from the given options. def set_mssql_unicode_strings @mssql_unicode_strings = typecast_value_boolean(@opts.fetch(:mssql_unicode_strings, true)) end # MSSQL has both datetime and timestamp classes, most people are going # to want datetime def type_literal_generic_datetime(column) :datetime end # MSSQL has both datetime and timestamp classes, most people are going # to want datetime def type_literal_generic_time(column) column[:only_time] ? :time : :datetime end # MSSQL doesn't have a true boolean class, so it uses bit def type_literal_generic_trueclass(column) :bit end # MSSQL uses varbinary(max) type for blobs def type_literal_generic_file(column) :'varbinary(max)' end end module DatasetMethods include EmulateOffsetWithRowNumber BOOL_TRUE = '1'.freeze BOOL_FALSE = '0'.freeze COMMA_SEPARATOR = ', '.freeze DELETE_CLAUSE_METHODS = Dataset.clause_methods(:delete, %w'with delete from output from2 where') INSERT_CLAUSE_METHODS = Dataset.clause_methods(:insert, %w'with insert into columns output values') SELECT_CLAUSE_METHODS = Dataset.clause_methods(:select, %w'with select distinct limit columns into from lock join where group having order compounds') UPDATE_CLAUSE_METHODS = Dataset.clause_methods(:update, %w'with update limit table set output from where') UPDATE_CLAUSE_METHODS_2000 = Dataset.clause_methods(:update, %w'update table set output from where') NOLOCK = ' WITH (NOLOCK)'.freeze UPDLOCK = ' WITH (UPDLOCK)'.freeze WILDCARD = LiteralString.new('*').freeze CONSTANT_MAP = {:CURRENT_DATE=>'CAST(CURRENT_TIMESTAMP AS DATE)'.freeze, :CURRENT_TIME=>'CAST(CURRENT_TIMESTAMP AS TIME)'.freeze} EXTRACT_MAP = {:year=>"yy", :month=>"m", :day=>"d", :hour=>"hh", :minute=>"n", :second=>"s"} BRACKET_CLOSE = Dataset::BRACKET_CLOSE BRACKET_OPEN = Dataset::BRACKET_OPEN COMMA = Dataset::COMMA PAREN_CLOSE = Dataset::PAREN_CLOSE PAREN_SPACE_OPEN = Dataset::PAREN_SPACE_OPEN SPACE = Dataset::SPACE FROM = Dataset::FROM APOS = Dataset::APOS APOS_RE = Dataset::APOS_RE DOUBLE_APOS = Dataset::DOUBLE_APOS INTO = Dataset::INTO DOUBLE_BRACKET_CLOSE = ']]'.freeze DATEPART_SECOND_OPEN = "CAST((datepart(".freeze DATEPART_SECOND_MIDDLE = ') + datepart(ns, '.freeze DATEPART_SECOND_CLOSE = ")/1000000000.0) AS double precision)".freeze DATEPART_OPEN = "datepart(".freeze UNION_ALL = ' UNION ALL '.freeze SELECT_SPACE = 'SELECT '.freeze TIMESTAMP_USEC_FORMAT = ".%03d".freeze OUTPUT_INSERTED = " OUTPUT INSERTED.*".freeze HEX_START = '0x'.freeze UNICODE_STRING_START = "N'".freeze BACKSLASH_CRLF_RE = /\\((?:\r\n)|\n)/.freeze BACKSLASH_CRLF_REPLACE = '\\\\\\\\\\1\\1'.freeze TOP_PAREN = " TOP (".freeze TOP = " TOP ".freeze OUTPUT = " OUTPUT ".freeze HSTAR = "H*".freeze CASE_SENSITIVE_COLLATION = 'Latin1_General_CS_AS'.freeze CASE_INSENSITIVE_COLLATION = 'Latin1_General_CI_AS'.freeze DEFAULT_TIMESTAMP_FORMAT = "'%Y-%m-%dT%H:%M:%S%N%z'".freeze FORMAT_DATE = 
"'%Y%m%d'".freeze CROSS_APPLY = 'CROSS APPLY'.freeze OUTER_APPLY = 'OUTER APPLY'.freeze Sequel::Dataset.def_mutation_method(:disable_insert_output, :output, :module=>self) # Allow overriding of the mssql_unicode_strings option at the dataset level. attr_writer :mssql_unicode_strings # Use the database's mssql_unicode_strings setting if the dataset hasn't overridden it. def mssql_unicode_strings defined?(@mssql_unicode_strings) ? @mssql_unicode_strings : (@mssql_unicode_strings = db.mssql_unicode_strings) end # MSSQL uses + for string concatenation, and LIKE is case insensitive by default. def complex_expression_sql_append(sql, op, args) case op when :'||' super(sql, :+, args) when :LIKE, :"NOT LIKE" super(sql, op, args.map{|a| LiteralString.new("(#{literal(a)} COLLATE #{CASE_SENSITIVE_COLLATION})")}) when :ILIKE, :"NOT ILIKE" super(sql, (op == :ILIKE ? :LIKE : :"NOT LIKE"), args.map{|a| LiteralString.new("(#{literal(a)} COLLATE #{CASE_INSENSITIVE_COLLATION})")}) when :<< sql << complex_expression_arg_pairs(args){|a, b| "(#{literal(a)} * POWER(2, #{literal(b)}))"} when :>> sql << complex_expression_arg_pairs(args){|a, b| "(#{literal(a)} / POWER(2, #{literal(b)}))"} when :extract part = args.at(0) raise(Sequel::Error, "unsupported extract argument: #{part.inspect}") unless format = EXTRACT_MAP[part] if part == :second expr = literal(args.at(1)) sql << DATEPART_SECOND_OPEN << format.to_s << COMMA << expr << DATEPART_SECOND_MIDDLE << expr << DATEPART_SECOND_CLOSE else sql << DATEPART_OPEN << format.to_s << COMMA literal_append(sql, args.at(1)) sql << PAREN_CLOSE end else super end end # MSSQL doesn't support the SQL standard CURRENT_DATE or CURRENT_TIME def constant_sql_append(sql, constant) if c = CONSTANT_MAP[constant] sql << c else super end end # Uses CROSS APPLY to join the given table into the current dataset. def cross_apply(table) join_table(:cross_apply, table) end # Disable the use of INSERT OUTPUT def disable_insert_output clone(:disable_insert_output=>true) end # MSSQL treats [] as a metacharacter in LIKE expresions. def escape_like(string) string.gsub(/[\\%_\[\]]/){|m| "\\#{m}"} end # There is no function on Microsoft SQL Server that does character length # and respects trailing spaces (datalength respects trailing spaces, but # counts bytes instead of characters). Use a hack to work around the # trailing spaces issue. def emulated_function_sql_append(sql, f) case f.f when :char_length literal_append(sql, SQL::Function.new(:len, Sequel.join([f.args.first, 'x'])) - 1) when :trim literal_append(sql, SQL::Function.new(:ltrim, SQL::Function.new(:rtrim, f.args.first))) else super end end # MSSQL uses the CONTAINS keyword for full text search def full_text_search(cols, terms, opts = OPTS) terms = "\"#{terms.join('" OR "')}\"" if terms.is_a?(Array) filter("CONTAINS (?, ?)", cols, terms) end # Use the OUTPUT clause to get the value of all columns for the newly inserted record. def insert_select(*values) return unless supports_insert_select? naked.clone(default_server_opts(:sql=>output(nil, [SQL::ColumnAll.new(:inserted)]).insert_sql(*values))).single_record end # Specify a table for a SELECT ... INTO query. def into(table) clone(:into => table) end # MSSQL uses a UNION ALL statement to insert multiple values at once. 
def multi_insert_sql(columns, values) c = false sql = LiteralString.new('') u = UNION_ALL values.each do |v| sql << u if c sql << SELECT_SPACE expression_list_append(sql, v) c ||= true end [insert_sql(columns, sql)] end # Allows you to do a dirty read of uncommitted data using WITH (NOLOCK). def nolock lock_style(:dirty) end # Uses OUTER APPLY to join the given table into the current dataset. def outer_apply(table) join_table(:outer_apply, table) end # Include an OUTPUT clause in the eventual INSERT, UPDATE, or DELETE query. # # The first argument is the table to output into, and the second argument # is either an Array of column values to select, or a Hash which maps output # column names to selected values, in the style of #insert or #update. # # Output into a returned result set is not currently supported. # # Examples: # # dataset.output(:output_table, [:deleted__id, :deleted__name]) # dataset.output(:output_table, :id => :inserted__id, :name => :inserted__name) def output(into, values) raise(Error, "SQL Server versions 2000 and earlier do not support the OUTPUT clause") unless supports_output_clause? output = {} case values when Hash output[:column_list], output[:select_list] = values.keys, values.values when Array output[:select_list] = values end output[:into] = into clone({:output => output}) end # MSSQL uses [] to quote identifiers. def quoted_identifier_append(sql, name) sql << BRACKET_OPEN << name.to_s.gsub(/\]/, DOUBLE_BRACKET_CLOSE) << BRACKET_CLOSE end # The version of the database server. def server_version db.server_version(@opts[:server]) end # MSSQL 2005+ supports GROUP BY CUBE. def supports_group_cube? is_2005_or_later? end # MSSQL 2005+ supports GROUP BY ROLLUP def supports_group_rollup? is_2005_or_later? end # MSSQL supports insert_select via the OUTPUT clause. def supports_insert_select? supports_output_clause? && !opts[:disable_insert_output] end # MSSQL 2005+ supports INTERSECT and EXCEPT def supports_intersect_except? is_2005_or_later? end # MSSQL does not support IS TRUE def supports_is_true? false end # MSSQL doesn't support JOIN USING def supports_join_using? false end # MSSQL 2005+ supports modifying joined datasets def supports_modifying_joins? is_2005_or_later? end # MSSQL does not support multiple columns for the IN/NOT IN operators def supports_multiple_column_in? false end # MSSQL 2005+ supports the output clause. def supports_output_clause? is_2005_or_later? end # MSSQL 2005+ supports window functions def supports_window_functions? true end # MSSQL cannot use WHERE 1. def supports_where_true? false end protected # If returned primary keys are requested, use OUTPUT unless already set on the # dataset. If OUTPUT is already set, use existing returning values. If OUTPUT # is only set to return a single columns, return an array of just that column. # Otherwise, return an array of hashes. def _import(columns, values, opts=OPTS) if opts[:return] == :primary_key && !@opts[:output] output(nil, [SQL::QualifiedIdentifier.new(:inserted, first_primary_key)])._import(columns, values, opts) elsif @opts[:output] statements = multi_insert_sql(columns, values) @db.transaction(opts.merge(:server=>@opts[:server])) do statements.map{|st| with_sql(st)} end.first.map{|v| v.length == 1 ? v.values.first : v} else super end end # MSSQL does not allow ordering in sub-clauses unless 'top' (limit) is specified def aggregate_dataset (options_overlap(Sequel::Dataset::COUNT_FROM_SELF_OPTS) && !options_overlap([:limit])) ? 
unordered.from_self : super end private # Whether we are using SQL Server 2005 or later. def is_2005_or_later? server_version >= 9000000 end # Whether we are using SQL Server 2008 or later. def is_2008_or_later? server_version >= 10000000 end # Use strict ISO-8601 format with T between date and time, # since that is the format that is multilanguage and not # DATEFORMAT dependent. def default_timestamp_format DEFAULT_TIMESTAMP_FORMAT end # MSSQL supports the OUTPUT clause for DELETE statements. # It also allows prepending a WITH clause. def delete_clause_methods DELETE_CLAUSE_METHODS end # Only include the primary table in the main delete clause def delete_from_sql(sql) sql << FROM source_list_append(sql, @opts[:from][0..0]) end # MSSQL supports FROM clauses in DELETE and UPDATE statements. def delete_from2_sql(sql) if joined_dataset? select_from_sql(sql) select_join_sql(sql) end end alias update_from_sql delete_from2_sql # Return the first primary key for the current table. If this table has # multiple primary keys, this will only return one of them. Used by #_import. def first_primary_key @db.schema(self).map{|k, v| k if v[:primary_key] == true}.compact.first end # MSSQL raises an error if you try to provide more than 3 decimal places # for a fractional timestamp. This probably doesn't work for smalldatetime # fields. def format_timestamp_usec(usec) sprintf(TIMESTAMP_USEC_FORMAT, usec/1000) end # MSSQL supports the OUTPUT clause for INSERT statements. # It also allows prepending a WITH clause. def insert_clause_methods INSERT_CLAUSE_METHODS end # Use OUTPUT INSERTED.* to return all columns of the inserted row, # for use with the prepared statement code. def insert_output_sql(sql) if @opts.has_key?(:returning) sql << OUTPUT_INSERTED else output_sql(sql) end end # Handle CROSS APPLY and OUTER APPLY JOIN types def join_type_sql(join_type) case join_type when :cross_apply CROSS_APPLY when :outer_apply OUTER_APPLY else super end end # MSSQL uses a literal hexidecimal number for blob strings def literal_blob_append(sql, v) sql << HEX_START << v.unpack(HSTAR).first end # Use YYYYmmdd format, since that's the only want that is # multilanguage and not DATEFORMAT dependent. def literal_date(v) v.strftime(FORMAT_DATE) end # Use 0 for false on MSSQL def literal_false BOOL_FALSE end # Optionally use unicode string syntax for all strings. Don't double # backslashes. def literal_string_append(sql, v) sql << (mssql_unicode_strings ? UNICODE_STRING_START : APOS) sql << v.gsub(APOS_RE, DOUBLE_APOS).gsub(BACKSLASH_CRLF_RE, BACKSLASH_CRLF_REPLACE) << APOS end # Use 1 for true on MSSQL def literal_true BOOL_TRUE end # MSSQL adds the limit before the columns def select_clause_methods SELECT_CLAUSE_METHODS end def select_into_sql(sql) if i = @opts[:into] sql << INTO identifier_append(sql, i) end end # MSSQL uses TOP N for limit. For MSSQL 2005+ TOP (N) is used # to allow the limit to be a bound variable. def select_limit_sql(sql) if l = @opts[:limit] if is_2005_or_later? sql << TOP_PAREN literal_append(sql, l) sql << PAREN_CLOSE else sql << TOP literal_append(sql, l) end end end alias update_limit_sql select_limit_sql # Support different types of locking styles def select_lock_sql(sql) case @opts[:lock] when :update sql << UPDLOCK when :dirty sql << NOLOCK else super end end # SQL fragment for MSSQL's OUTPUT clause. def output_sql(sql) return unless supports_output_clause? 
return unless output = @opts[:output] sql << OUTPUT column_list_append(sql, output[:select_list]) if into = output[:into] sql << INTO identifier_append(sql, into) if column_list = output[:column_list] sql << PAREN_SPACE_OPEN source_list_append(sql, column_list) sql << PAREN_CLOSE end end end alias delete_output_sql output_sql alias update_output_sql output_sql # MSSQL supports the OUTPUT and TOP clause for UPDATE statements. # It also allows prepending a WITH clause. For MSSQL 2000 # and below, exclude WITH and TOP. def update_clause_methods if is_2005_or_later? UPDATE_CLAUSE_METHODS else UPDATE_CLAUSE_METHODS_2000 end end # Only include the primary table in the main update clause def update_table_sql(sql) sql << SPACE source_list_append(sql, @opts[:from][0..0]) end def uses_with_rollup? !is_2008_or_later? end end end end ruby-sequel-4.1.1/lib/sequel/adapters/shared/mysql.rb000066400000000000000000001005341220156535500226100ustar00rootroot00000000000000Sequel.require 'adapters/utils/split_alter_table' Sequel.require 'adapters/utils/replace' module Sequel Dataset::NON_SQL_OPTIONS << :insert_ignore Dataset::NON_SQL_OPTIONS << :update_ignore Dataset::NON_SQL_OPTIONS << :on_duplicate_key_update module MySQL @convert_tinyint_to_bool = true class << self # Sequel converts the column type tinyint(1) to a boolean by default when # using the native MySQL or Mysql2 adapter. You can turn off the conversion by setting # this to false. This setting is ignored when connecting to MySQL via the do or jdbc # adapters, both of which automatically do the conversion. attr_accessor :convert_tinyint_to_bool # Set the default charset used for CREATE TABLE. You can pass the # :charset option to create_table to override this setting. attr_accessor :default_charset # Set the default collation used for CREATE TABLE. You can pass the # :collate option to create_table to override this setting. attr_accessor :default_collate # Set the default engine used for CREATE TABLE. You can pass the # :engine option to create_table to override this setting. attr_accessor :default_engine end # Methods shared by Database instances that connect to MySQL, # currently supported by the native and JDBC adapters. module DatabaseMethods extend Sequel::Database::ResetIdentifierMangling AUTO_INCREMENT = 'AUTO_INCREMENT'.freeze CAST_TYPES = {String=>:CHAR, Integer=>:SIGNED, Time=>:DATETIME, DateTime=>:DATETIME, Numeric=>:DECIMAL, BigDecimal=>:DECIMAL, File=>:BINARY} COLUMN_DEFINITION_ORDER = [:collate, :null, :default, :unique, :primary_key, :auto_increment, :references] PRIMARY = 'PRIMARY'.freeze MYSQL_TIMESTAMP_RE = /\ACURRENT_(?:DATE|TIMESTAMP)?\z/ include Sequel::Database::SplitAlterTable # MySQL's cast rules are restrictive in that you can't just cast to any possible # database type. def cast_type_literal(type) CAST_TYPES[type] || super end # Commit an existing prepared transaction with the given transaction # identifier string. def commit_prepared_transaction(transaction_id) run("XA COMMIT #{literal(transaction_id)}") end # MySQL uses the :mysql database type def database_type :mysql end # Use the Information Schema's KEY_COLUMN_USAGE table to get # basic information on foreign key columns, but include the # constraint name. def foreign_key_list(table, opts=OPTS) m = output_identifier_meth im = input_identifier_meth ds = metadata_dataset. from(:INFORMATION_SCHEMA__KEY_COLUMN_USAGE). where(:TABLE_NAME=>im.call(table), :TABLE_SCHEMA=>Sequel.function(:DATABASE)). exclude(:CONSTRAINT_NAME=>'PRIMARY'). exclude(:REFERENCED_TABLE_NAME=>nil). 
select(:CONSTRAINT_NAME___name, :COLUMN_NAME___column, :REFERENCED_TABLE_NAME___table, :REFERENCED_COLUMN_NAME___key) h = {} ds.each do |row| if r = h[row[:name]] r[:columns] << m.call(row[:column]) r[:key] << m.call(row[:key]) else h[row[:name]] = {:name=>m.call(row[:name]), :columns=>[m.call(row[:column])], :table=>m.call(row[:table]), :key=>[m.call(row[:key])]} end end h.values end # MySQL namespaces indexes per table. def global_index_namespace? false end # Use SHOW INDEX FROM to get the index information for the # table. # # By default partial indexes are not included, you can use the # option :partial to override this. def indexes(table, opts=OPTS) indexes = {} remove_indexes = [] m = output_identifier_meth im = input_identifier_meth metadata_dataset.with_sql("SHOW INDEX FROM ?", SQL::Identifier.new(im.call(table))).each do |r| name = r[:Key_name] next if name == PRIMARY name = m.call(name) remove_indexes << name if r[:Sub_part] && ! opts[:partial] i = indexes[name] ||= {:columns=>[], :unique=>r[:Non_unique] != 1} i[:columns] << m.call(r[:Column_name]) end indexes.reject{|k,v| remove_indexes.include?(k)} end # Rollback an existing prepared transaction with the given transaction # identifier string. def rollback_prepared_transaction(transaction_id) run("XA ROLLBACK #{literal(transaction_id)}") end # Get version of MySQL server, used for determined capabilities. def server_version @server_version ||= begin m = /(\d+)\.(\d+)\.(\d+)/.match(get(SQL::Function.new(:version))) (m[1].to_i * 10000) + (m[2].to_i * 100) + m[3].to_i end end # MySQL supports CREATE TABLE IF NOT EXISTS syntax. def supports_create_table_if_not_exists? true end # MySQL supports prepared transactions (two-phase commit) using XA def supports_prepared_transactions? server_version >= 50000 end # MySQL supports savepoints def supports_savepoints? server_version >= 50000 end # MySQL doesn't support savepoints inside prepared transactions in from # 5.5.12 to 5.5.23, see http://bugs.mysql.com/bug.php?id=64374 def supports_savepoints_in_prepared_transactions? super && (server_version <= 50512 || server_version >= 50523) end # MySQL supports transaction isolation levels def supports_transaction_isolation_levels? true end # Return an array of symbols specifying table names in the current database. # # Options: # * :server - Set the server to use def tables(opts=OPTS) full_tables('BASE TABLE', opts) end # Changes the database in use by issuing a USE statement. I would be # very careful if I used this. def use(db_name) disconnect @opts[:database] = db_name if self << "USE #{db_name}" @schemas = {} self end # Return an array of symbols specifying view names in the current database. # # Options: # * :server - Set the server to use def views(opts=OPTS) full_tables('VIEW', opts) end private # Use MySQL specific syntax for some alter table operations. def alter_table_op_sql(table, op) case op[:op] when :add_column if related = op.delete(:table) sql = super op[:table] = related op[:key] ||= primary_key_from_schema(related) sql << ", ADD FOREIGN KEY (#{quote_identifier(op[:name])})#{column_references_sql(op)}" else super end when :rename_column, :set_column_type, :set_column_null, :set_column_default o = op[:op] opts = schema(table).find{|x| x.first == op[:name]} opts = opts ? opts.last.dup : {} opts[:name] = o == :rename_column ? op[:new_name] : op[:name] opts[:type] = o == :set_column_type ? op[:type] : opts[:db_type] opts[:null] = o == :set_column_null ? op[:null] : opts[:allow_null] opts[:default] = o == :set_column_default ? 
              op[:default] : opts[:ruby_default]
          opts.delete(:default) if opts[:default] == nil
          opts.delete(:primary_key)
          unless op[:type] || opts[:type]
            raise Error, "cannot determine database type to use for CHANGE COLUMN operation"
          end
          opts = op.merge(opts)
          opts.delete(:auto_increment) if op[:auto_increment] == false
          "CHANGE COLUMN #{quote_identifier(op[:name])} #{column_definition_sql(opts)}"
        when :drop_constraint
          case op[:type]
          when :primary_key
            "DROP PRIMARY KEY"
          when :foreign_key
            name = op[:name] || foreign_key_name(table, op[:columns])
            "DROP FOREIGN KEY #{quote_identifier(name)}"
          when :unique
            "DROP INDEX #{quote_identifier(op[:name])}"
          end
        when :add_constraint
          if op[:type] == :foreign_key
            op[:key] ||= primary_key_from_schema(op[:table])
          end
          super
        else
          super
        end
      end

      # MySQL server requires table names when dropping indexes.
      def alter_table_sql(table, op)
        case op[:op]
        when :drop_index
          "#{drop_index_sql(table, op)} ON #{quote_schema_table(table)}"
        when :drop_constraint
          if op[:type] == :primary_key
            if (pk = primary_key_from_schema(table)).length == 1
              return [alter_table_sql(table, {:op=>:rename_column, :name=>pk.first, :new_name=>pk.first, :auto_increment=>false}), super]
            end
          end
          super
        else
          super
        end
      end

      # Handle MySQL specific default format.
      def column_schema_normalize_default(default, type)
        if column_schema_default_string_type?(type)
          return if [:date, :datetime, :time].include?(type) && MYSQL_TIMESTAMP_RE.match(default)
          default = "'#{default.gsub("'", "''").gsub('\\', '\\\\')}'"
        end
        super(default, type)
      end

      # Don't allow combining adding foreign key operations with other
      # operations, since in some cases adding a foreign key constraint in
      # the same query as other operations results in MySQL error 150.
      def combinable_alter_table_op?(op)
        super && !(op[:op] == :add_constraint && op[:type] == :foreign_key) && !(op[:op] == :drop_constraint && op[:type] == :primary_key)
      end

      # The SQL queries to execute on initial connection
      def mysql_connection_setting_sqls
        sqls = []

        # Increase timeout so mysql server doesn't disconnect us.
        # Value used by default is the maximum allowed value on Windows.
        sqls << "SET @@wait_timeout = #{opts[:timeout] || 2147483}"

        # By default, MySQL 'where id is null' selects the last inserted id
        sqls << "SET SQL_AUTO_IS_NULL=0" unless opts[:auto_is_null]

        # If the user has specified one or more sql modes, enable them
        if sql_mode = opts[:sql_mode]
          sql_mode = Array(sql_mode).join(',').upcase
          sqls << "SET sql_mode = '#{sql_mode}'"
        end

        sqls
      end

      # Use MySQL specific AUTO_INCREMENT text.
      def auto_increment_sql
        AUTO_INCREMENT
      end

      # MySQL needs to set transaction isolation before beginning a transaction
      def begin_new_transaction(conn, opts)
        set_transaction_isolation(conn, opts)
        log_connection_execute(conn, begin_transaction_sql)
      end

      # Use XA START to start a new prepared transaction if the :prepare
      # option is given.
      def begin_transaction(conn, opts=OPTS)
        if (s = opts[:prepare]) && (th = _trans(conn))[:savepoint_level] == 0
          log_connection_execute(conn, "XA START #{literal(s)}")
          th[:savepoint_level] += 1
        else
          super
        end
      end

      # The order of the column definition, as an array of symbols.
      def column_definition_order
        COLUMN_DEFINITION_ORDER
      end

      # MySQL doesn't allow default values on text columns, so ignore the
      # default if the generic text type is used
      def column_definition_sql(column)
        column.delete(:default) if column[:type] == File || (column[:type] == String && column[:text] == true)
        super
      end

      # Prepare the XA transaction for a two-phase commit if the
      # :prepare option is given.
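      # A rough example of the two-phase commit flow this supports (the
      # transaction id here is hypothetical):
      #
      #   DB.transaction(:prepare=>'txn1'){DB[:items].insert(:a=>1)}
      #   DB.commit_prepared_transaction('txn1')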
def commit_transaction(conn, opts=OPTS) if (s = opts[:prepare]) && _trans(conn)[:savepoint_level] <= 1 log_connection_execute(conn, "XA END #{literal(s)}") log_connection_execute(conn, "XA PREPARE #{literal(s)}") else super end end # Use MySQL specific syntax for engine type and character encoding def create_table_sql(name, generator, options = OPTS) engine = options.fetch(:engine, Sequel::MySQL.default_engine) charset = options.fetch(:charset, Sequel::MySQL.default_charset) collate = options.fetch(:collate, Sequel::MySQL.default_collate) generator.constraints.sort_by!{|c| (c[:type] == :primary_key) ? -1 : 1} # Proc for figuring out the primary key for a given table. key_proc = lambda do |t| if t == name if pk = generator.primary_key_name [pk] elsif !(pkc = generator.constraints.select{|con| con[:type] == :primary_key}).empty? pkc.first[:columns] end else primary_key_from_schema(t) end end # Manually set the keys, since MySQL requires one; it doesn't use the primary # key if none is specified. generator.constraints.each do |c| if c[:type] == :foreign_key c[:key] ||= key_proc.call(c[:table]) end end # Split column constraints into table constraints in some cases: # * foreign key - Always # * unique, primary_key - Only if constraint has a name generator.columns.each do |c| if t = c.delete(:table) same_table = t == name key = c[:key] || key_proc.call(t) if same_table && !key.nil? generator.constraints.unshift(:type=>:unique, :columns=>Array(key)) end generator.foreign_key([c[:name]], t, c.merge(:name=>c[:foreign_key_constraint_name], :type=>:foreign_key, :key=>key)) end end "#{super}#{" ENGINE=#{engine}" if engine}#{" DEFAULT CHARSET=#{charset}" if charset}#{" DEFAULT COLLATE=#{collate}" if collate}" end DATABASE_ERROR_REGEXPS = { /Duplicate entry .+ for key/ => UniqueConstraintViolation, /foreign key constraint fails/ => ForeignKeyConstraintViolation, /cannot be null/ => NotNullConstraintViolation, /Deadlock found when trying to get lock; try restarting transaction/ => SerializationFailure, }.freeze def database_error_regexps DATABASE_ERROR_REGEXPS end # Backbone of the tables and views support using SHOW FULL TABLES. def full_tables(type, opts) m = output_identifier_meth metadata_dataset.with_sql('SHOW FULL TABLES').server(opts[:server]).map{|r| m.call(r.values.first) if r.delete(:Table_type) == type}.compact end # MySQL folds unquoted identifiers to lowercase, so it shouldn't need to upcase identifiers on input. def identifier_input_method_default nil end # MySQL folds unquoted identifiers to lowercase, so it shouldn't need to upcase identifiers on output.
def identifier_output_method_default nil end # Handle MySQL specific index SQL syntax def index_definition_sql(table_name, index) index_name = quote_identifier(index[:name] || default_index_name(table_name, index[:columns])) index_type = case index[:type] when :full_text "FULLTEXT " when :spatial "SPATIAL " else using = " USING #{index[:type]}" unless index[:type] == nil "UNIQUE " if index[:unique] end "CREATE #{index_type}INDEX #{index_name}#{using} ON #{quote_schema_table(table_name)} #{literal(index[:columns])}" end # Parse the schema for the given table to get an array of primary key columns def primary_key_from_schema(table) schema(table).select{|a| a[1][:primary_key]}.map{|a| a[0]} end # Rollback the currently open XA transaction def rollback_transaction(conn, opts=OPTS) if (s = opts[:prepare]) && _trans(conn)[:savepoint_level] <= 1 log_connection_execute(conn, "XA END #{literal(s)}") log_connection_execute(conn, "XA PREPARE #{literal(s)}") log_connection_execute(conn, "XA ROLLBACK #{literal(s)}") else super end end # Recognize MySQL set type. def schema_column_type(db_type) case db_type when /\Aset/io :set when /\Amediumint/io :integer when /\Amediumtext/io :string else super end end # Use the MySQL specific DESCRIBE syntax to get a table description. def schema_parse_table(table_name, opts) m = output_identifier_meth(opts[:dataset]) im = input_identifier_meth(opts[:dataset]) table = SQL::Identifier.new(im.call(table_name)) table = SQL::QualifiedIdentifier.new(im.call(opts[:schema]), table) if opts[:schema] metadata_dataset.with_sql("DESCRIBE ?", table).map do |row| row[:auto_increment] = true if row.delete(:Extra).to_s =~ /auto_increment/io row[:allow_null] = row.delete(:Null) == 'YES' row[:default] = row.delete(:Default) row[:primary_key] = row.delete(:Key) == 'PRI' row[:default] = nil if blank_object?(row[:default]) row[:db_type] = row.delete(:Type) row[:type] = schema_column_type(row[:db_type]) [m.call(row.delete(:Field)), row] end end # MySQL can combine multiple alter table ops into a single query. def supports_combining_alter_table_ops? true end # MySQL supports CREATE OR REPLACE VIEW. def supports_create_or_replace_view? true end # MySQL does not support named column constraints. def supports_named_column_constraints? false end # Respect the :size option if given to produce # tinyblob, mediumblob, and longblob if :tiny, # :medium, or :long is given. def type_literal_generic_file(column) case column[:size] when :tiny # < 2^8 bytes :tinyblob when :medium # < 2^24 bytes :mediumblob when :long # < 2^32 bytes :longblob else # 2^16 bytes :blob end end # MySQL has both datetime and timestamp classes, most people are going # to want datetime def type_literal_generic_datetime(column) if column[:default] == Sequel::CURRENT_TIMESTAMP :timestamp else :datetime end end # MySQL has both datetime and timestamp classes, most people are going # to want datetime def type_literal_generic_time(column) column[:only_time] ? :time : type_literal_generic_datetime(column) end # MySQL doesn't have a true boolean class, so it uses tinyint(1) def type_literal_generic_trueclass(column) :'tinyint(1)' end end # Dataset methods shared by datasets that use MySQL databases. 
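# A brief illustration of a few of the MySQL-specific dataset methods
# defined below (table and column names are hypothetical):
#
#   DB[:items].insert_ignore.multi_insert([{:name=>'a'}, {:name=>'b'}])
#   DB[:items].on_duplicate_key_update(:value).insert(:name=>'a', :value=>1)
#   DB[:items].calc_found_rows.limit(10).all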
module DatasetMethods BOOL_TRUE = '1'.freeze BOOL_FALSE = '0'.freeze COMMA_SEPARATOR = ', '.freeze FOR_SHARE = ' LOCK IN SHARE MODE'.freeze SQL_CALC_FOUND_ROWS = ' SQL_CALC_FOUND_ROWS'.freeze DELETE_CLAUSE_METHODS = Dataset.clause_methods(:delete, %w'delete from where order limit') INSERT_CLAUSE_METHODS = Dataset.clause_methods(:insert, %w'insert ignore into columns values on_duplicate_key_update') SELECT_CLAUSE_METHODS = Dataset.clause_methods(:select, %w'select distinct calc_found_rows columns from join where group having compounds order limit lock') UPDATE_CLAUSE_METHODS = Dataset.clause_methods(:update, %w'update ignore table set where order limit') APOS = Dataset::APOS APOS_RE = Dataset::APOS_RE DOUBLE_APOS = Dataset::DOUBLE_APOS SPACE = Dataset::SPACE PAREN_OPEN = Dataset::PAREN_OPEN PAREN_CLOSE = Dataset::PAREN_CLOSE NOT_SPACE = Dataset::NOT_SPACE FROM = Dataset::FROM COMMA = Dataset::COMMA LIMIT = Dataset::LIMIT GROUP_BY = Dataset::GROUP_BY ESCAPE = Dataset::ESCAPE BACKSLASH = Dataset::BACKSLASH REGEXP = 'REGEXP'.freeze LIKE = 'LIKE'.freeze BINARY = 'BINARY '.freeze CONCAT = "CONCAT".freeze CAST_BITCOMP_OPEN = "CAST(~".freeze CAST_BITCOMP_CLOSE = " AS SIGNED INTEGER)".freeze STRAIGHT_JOIN = 'STRAIGHT_JOIN'.freeze NATURAL_LEFT_JOIN = 'NATURAL LEFT JOIN'.freeze BACKTICK = '`'.freeze BACKTICK_RE = /`/.freeze DOUBLE_BACKTICK = '``'.freeze EMPTY_COLUMNS = " ()".freeze EMPTY_VALUES = " VALUES ()".freeze IGNORE = " IGNORE".freeze ON_DUPLICATE_KEY_UPDATE = " ON DUPLICATE KEY UPDATE ".freeze EQ_VALUES = '=VALUES('.freeze EQ = '='.freeze WITH_ROLLUP = ' WITH ROLLUP'.freeze MATCH_AGAINST = ["(MATCH ".freeze, " AGAINST (".freeze, "))".freeze].freeze MATCH_AGAINST_BOOLEAN = ["(MATCH ".freeze, " AGAINST (".freeze, " IN BOOLEAN MODE))".freeze].freeze EXPLAIN = 'EXPLAIN '.freeze EXPLAIN_EXTENDED = 'EXPLAIN EXTENDED '.freeze BACKSLASH_RE = /\\/.freeze QUAD_BACKSLASH = "\\\\\\\\".freeze BLOB_START = "0x".freeze HSTAR = "H*".freeze include Sequel::Dataset::Replace # MySQL specific syntax for LIKE/REGEXP searches, as well as # string concatenation. def complex_expression_sql_append(sql, op, args) case op when :IN, :"NOT IN" ds = args.at(1) if ds.is_a?(Sequel::Dataset) && ds.opts[:limit] super(sql, op, [args.at(0), ds.from_self]) else super end when :~, :'!~', :'~*', :'!~*', :LIKE, :'NOT LIKE', :ILIKE, :'NOT ILIKE' sql << PAREN_OPEN literal_append(sql, args.at(0)) sql << SPACE sql << 'NOT ' if [:'NOT LIKE', :'NOT ILIKE', :'!~', :'!~*'].include?(op) sql << ([:~, :'!~', :'~*', :'!~*'].include?(op) ? REGEXP : LIKE) sql << SPACE sql << BINARY if [:~, :'!~', :LIKE, :'NOT LIKE'].include?(op) literal_append(sql, args.at(1)) if [:LIKE, :'NOT LIKE', :ILIKE, :'NOT ILIKE'].include?(op) sql << ESCAPE literal_append(sql, BACKSLASH) end sql << PAREN_CLOSE when :'||' if args.length > 1 sql << CONCAT array_sql_append(sql, args) else literal_append(sql, args.at(0)) end when :'B~' sql << CAST_BITCOMP_OPEN literal_append(sql, args.at(0)) sql << CAST_BITCOMP_CLOSE else super end end # Use GROUP BY instead of DISTINCT ON if arguments are provided. def distinct(*args) args.empty? ? super : group(*args) end # Sets up the select methods to use the SQL_CALC_FOUND_ROWS option. # # dataset.calc_found_rows.limit(10) # # SELECT SQL_CALC_FOUND_ROWS * FROM table LIMIT 10 def calc_found_rows clone(:calc_found_rows => true) end # Return the results of an EXPLAIN query as a string. Options: # :extended :: Use EXPLAIN EXTENDED instead of EXPLAIN if true.
def explain(opts=OPTS) # Load the PrettyTable class, needed for explain output Sequel.extension(:_pretty_table) unless defined?(Sequel::PrettyTable) ds = db.send(:metadata_dataset).with_sql((opts[:extended] ? EXPLAIN_EXTENDED : EXPLAIN) + select_sql).naked rows = ds.all Sequel::PrettyTable.string(rows, ds.columns) end # Return a cloned dataset which will use LOCK IN SHARE MODE to lock returned rows. def for_share lock_style(:share) end # Adds a full text filter def full_text_search(cols, terms, opts = OPTS) filter(full_text_sql(cols, terms, opts)) end # MySQL specific full text search syntax. def full_text_sql(cols, terms, opts = OPTS) terms = terms.join(' ') if terms.is_a?(Array) SQL::PlaceholderLiteralString.new((opts[:boolean] ? MATCH_AGAINST_BOOLEAN : MATCH_AGAINST), [Array(cols), terms]) end # Transforms a CROSS JOIN to an INNER JOIN if the expr is not nil. # Raises an error on use of :full_outer type, since MySQL doesn't support it. def join_table(type, table, expr=nil, opts=OPTS, &block) type = :inner if (type == :cross) && !expr.nil? raise(Sequel::Error, "MySQL doesn't support FULL OUTER JOIN") if type == :full_outer super(type, table, expr, opts, &block) end # Transforms :natural_inner to NATURAL LEFT JOIN and :straight to # STRAIGHT_JOIN. def join_type_sql(join_type) case join_type when :straight STRAIGHT_JOIN when :natural_inner NATURAL_LEFT_JOIN else super end end # Sets up the insert methods to use INSERT IGNORE. # Useful if you have a unique key and want to just skip # inserting rows that violate the unique key restriction. # # dataset.insert_ignore.multi_insert( # [{:name => 'a', :value => 1}, {:name => 'b', :value => 2}] # ) # # INSERT IGNORE INTO tablename (name, value) VALUES (a, 1), (b, 2) def insert_ignore clone(:insert_ignore=>true) end # Sets up the insert methods to use ON DUPLICATE KEY UPDATE. # If you pass no arguments, ALL fields will be # updated with the new values. If you pass the fields you # want then ONLY those fields will be updated. # # Useful if you have a unique key and want to update # existing rows when an insert violates the unique key restriction. # # dataset.on_duplicate_key_update.multi_insert( # [{:name => 'a', :value => 1}, {:name => 'b', :value => 2}] # ) # # INSERT INTO tablename (name, value) VALUES (a, 1), (b, 2) # # ON DUPLICATE KEY UPDATE name=VALUES(name), value=VALUES(value) # # dataset.on_duplicate_key_update(:value).multi_insert( # [{:name => 'a', :value => 1}, {:name => 'b', :value => 2}] # ) # # INSERT INTO tablename (name, value) VALUES (a, 1), (b, 2) # # ON DUPLICATE KEY UPDATE value=VALUES(value) def on_duplicate_key_update(*args) clone(:on_duplicate_key_update => args) end # MySQL specific syntax for inserting multiple values at once. def multi_insert_sql(columns, values) sql = LiteralString.new('VALUES ') expression_list_append(sql, values.map{|r| Array(r)}) [insert_sql(columns, sql)] end # MySQL uses the nonstandard ` (backtick) for quoting identifiers. def quoted_identifier_append(sql, c) sql << BACKTICK << c.to_s.gsub(BACKTICK_RE, DOUBLE_BACKTICK) << BACKTICK end # MySQL can emulate DISTINCT ON with its non-standard GROUP BY implementation, # though the rows returned cannot be made deterministic through ordering. def supports_distinct_on? true end # MySQL supports GROUP BY WITH ROLLUP (but not CUBE) def supports_group_rollup? true end # MySQL does not support INTERSECT or EXCEPT def supports_intersect_except? false end # MySQL supports modifying joined datasets def supports_modifying_joins?
true end # MySQL's DISTINCT ON emulation using GROUP BY does not respect the # query's ORDER BY clause. def supports_ordered_distinct_on? false end # MySQL supports pattern matching via regular expressions def supports_regexp? true end # MySQL does support fractional timestamps in literal timestamps, but it # ignores them. Also, using them seems to cause problems on 1.9. Since # they are ignored anyway, not using them is probably best. def supports_timestamp_usecs? false end # Sets up the update methods to use UPDATE IGNORE. # Useful if you have a unique key and want to just skip # updating rows that violate the unique key restriction. # # dataset.update_ignore.update({:name => 'a', :value => 1}) # # UPDATE IGNORE tablename SET name = 'a', value = 1 def update_ignore clone(:update_ignore=>true) end private # MySQL supports the ORDER BY and LIMIT clauses for DELETE statements def delete_clause_methods DELETE_CLAUSE_METHODS end # Consider the first table in the joined dataset to be the table to delete # from, but include the others for the purposes of selecting rows. def delete_from_sql(sql) if joined_dataset? sql << SPACE source_list_append(sql, @opts[:from][0..0]) sql << FROM source_list_append(sql, @opts[:from]) select_join_sql(sql) else super end end # MySQL supports the IGNORE and ON DUPLICATE KEY UPDATE clauses for INSERT statements def insert_clause_methods INSERT_CLAUSE_METHODS end alias replace_clause_methods insert_clause_methods # MySQL doesn't use the SQL standard DEFAULT VALUES. def insert_columns_sql(sql) values = opts[:values] if values.is_a?(Array) && values.empty? sql << EMPTY_COLUMNS else super end end # MySQL supports INSERT IGNORE INTO def insert_ignore_sql(sql) sql << IGNORE if opts[:insert_ignore] end # MySQL supports UPDATE IGNORE def update_ignore_sql(sql) sql << IGNORE if opts[:update_ignore] end # MySQL supports INSERT ... ON DUPLICATE KEY UPDATE def insert_on_duplicate_key_update_sql(sql) if update_cols = opts[:on_duplicate_key_update] update_vals = nil if update_cols.empty? update_cols = columns elsif update_cols.last.is_a?(Hash) update_vals = update_cols.last update_cols = update_cols[0..-2] end sql << ON_DUPLICATE_KEY_UPDATE c = false co = COMMA values = EQ_VALUES endp = PAREN_CLOSE update_cols.each do |col| sql << co if c quote_identifier_append(sql, col) sql << values quote_identifier_append(sql, col) sql << endp c ||= true end if update_vals eq = EQ update_vals.map do |col,v| sql << co if c quote_identifier_append(sql, col) sql << eq literal_append(sql, v) c ||= true end end end end # MySQL doesn't use the standard DEFAULT VALUES for empty values. def insert_values_sql(sql) values = opts[:values] if values.is_a?(Array) && values.empty? sql << EMPTY_VALUES else super end end # MySQL allows a LIMIT in DELETE and UPDATE statements. def limit_sql(sql) if l = @opts[:limit] sql << LIMIT literal_append(sql, l) end end alias delete_limit_sql limit_sql alias update_limit_sql limit_sql # MySQL uses a preceding 0x for hex-escaping strings def literal_blob_append(sql, v) sql << BLOB_START << v.unpack(HSTAR).first end # Use 0 for false on MySQL def literal_false BOOL_FALSE end # Raise an error for infinite and NaN values def literal_float(v) if v.infinite? || v.nan? raise InvalidValue, "Infinite floats and NaN values are not valid on MySQL" else super end end # SQL fragment for String. Doubles \ and ' by default.
def literal_string_append(sql, v) sql << APOS << v.gsub(BACKSLASH_RE, QUAD_BACKSLASH).gsub(APOS_RE, DOUBLE_APOS) << APOS end # Use 1 for true on MySQL def literal_true BOOL_TRUE end # MySQL does not support the SQL WITH clause for SELECT statements def select_clause_methods SELECT_CLAUSE_METHODS end # Support FOR SHARE locking when using the :share lock style. def select_lock_sql(sql) @opts[:lock] == :share ? (sql << FOR_SHARE) : super end # MySQL specific SQL_CALC_FOUND_ROWS option def select_calc_found_rows_sql(sql) sql << SQL_CALC_FOUND_ROWS if opts[:calc_found_rows] end # MySQL supports the ORDER BY and LIMIT clauses for UPDATE statements def update_clause_methods UPDATE_CLAUSE_METHODS end # MySQL uses WITH ROLLUP syntax. def uses_with_rollup? true end end end end ruby-sequel-4.1.1/lib/sequel/adapters/shared/mysql_prepared_statements.rb000066400000000000000000000163221220156535500267420ustar00rootroot00000000000000Sequel.require %w'shared/mysql utils/stored_procedures', 'adapters' module Sequel module MySQL # This module is used by the mysql and mysql2 adapters to support # prepared statements and stored procedures. module PreparedStatements module DatabaseMethods disconnect_errors = <<-END.split("\n").map{|l| l.strip} Commands out of sync; you can't run this command now Can't connect to local MySQL server through socket MySQL server has gone away Lost connection to MySQL server during query This connection is still waiting for a result, try again once you have the result closed MySQL connection END # Error messages for mysql and mysql2 that indicate the current connection should be disconnected MYSQL_DATABASE_DISCONNECT_ERRORS = /\A#{Regexp.union(disconnect_errors)}/o # Support stored procedures on MySQL def call_sproc(name, opts=OPTS, &block) args = opts[:args] || [] execute("CALL #{name}#{args.empty? ? '()' : literal(args)}", opts.merge(:sproc=>false), &block) end # Executes the given SQL using an available connection, yielding the # connection if the block is given. def execute(sql, opts=OPTS, &block) if opts[:sproc] call_sproc(sql, opts, &block) elsif sql.is_a?(Symbol) execute_prepared_statement(sql, opts, &block) else synchronize(opts[:server]){|conn| _execute(conn, sql, opts, &block)} end end private def add_prepared_statements_cache(conn) class << conn attr_accessor :prepared_statements end conn.prepared_statements = {} end # Stupid MySQL doesn't use SQLState error codes correctly, mapping # all constraint violations to 23000 even though it recognizes # different types. def database_specific_error_class(exception, opts) case exception.errno when 1048 NotNullConstraintViolation when 1062 UniqueConstraintViolation when 1451, 1452 ForeignKeyConstraintViolation else super end end # Executes a prepared statement on an available connection. If the # prepared statement already exists for the connection and has the same # SQL, reuse it, otherwise, prepare the new statement. Because of the # usual MySQL stupidity, we are forced to name arguments via separate # SET queries. Use @sequel_arg_N (for N starting at 1) for these # arguments. def execute_prepared_statement(ps_name, opts, &block) args = opts[:arguments] ps = prepared_statement(ps_name) sql = ps.prepared_sql synchronize(opts[:server]) do |conn| unless conn.prepared_statements[ps_name] == sql _execute(conn, "PREPARE #{ps_name} FROM #{literal(sql)}", opts) conn.prepared_statements[ps_name] = sql end i = 0 _execute(conn, "SET " + args.map {|arg| "@sequel_arg_#{i+=1} = #{literal(arg)}"}.join(", "), opts) unless args.empty? 
opts = opts.merge(:log_sql=>" (#{sql})") if ps.log_sql _execute(conn, "EXECUTE #{ps_name}#{" USING #{(1..i).map{|j| "@sequel_arg_#{j}"}.join(', ')}" unless i == 0}", opts, &block) end end end module DatasetMethods include Sequel::Dataset::StoredProcedures # Methods to add to MySQL prepared statement calls without using a # real database prepared statement and bound variables. module CallableStatementMethods # Extend given dataset with this module so subselects inside subselects in # prepared statements work. def subselect_sql_append(sql, ds) ps = ds.to_prepared_statement(:select).clone(:append_sql => sql) ps.extend(CallableStatementMethods) ps = ps.bind(@opts[:bind_vars]) if @opts[:bind_vars] ps.prepared_args = prepared_args ps.prepared_sql end end # Methods for MySQL prepared statements using the native driver. module PreparedStatementMethods include Sequel::Dataset::UnnumberedArgumentMapper # Raise a more obvious error if you attempt to call an unnamed prepared statement. def call(*) raise Error, "Cannot call prepared statement without a name" if prepared_statement_name.nil? super end private # Execute the prepared statement with the bind arguments instead of # the given SQL. def execute(sql, opts=OPTS, &block) super(prepared_statement_name, {:arguments=>bind_arguments}.merge(opts), &block) end # Same as execute, explicit due to intricacies of alias and super. def execute_dui(sql, opts=OPTS, &block) super(prepared_statement_name, {:arguments=>bind_arguments}.merge(opts), &block) end # Same as execute, explicit due to intricacies of alias and super. def execute_insert(sql, opts=OPTS, &block) super(prepared_statement_name, {:arguments=>bind_arguments}.merge(opts), &block) end end # Methods for MySQL stored procedures using the native driver. module StoredProcedureMethods include Sequel::Dataset::StoredProcedureMethods private # Execute the database stored procedure with the stored arguments. def execute(sql, opts=OPTS, &block) super(@sproc_name, {:args=>@sproc_args, :sproc=>true}.merge(opts), &block) end # Same as execute, explicit due to intricacies of alias and super. def execute_dui(sql, opts=OPTS, &block) super(@sproc_name, {:args=>@sproc_args, :sproc=>true}.merge(opts), &block) end end # MySQL is different in that it supports prepared statements but not bound # variables outside of prepared statements. The default implementation # breaks the use of subselects in prepared statements, so extend the # temporary prepared statement that this creates with a module that # fixes it. def call(type, bind_arguments={}, *values, &block) ps = to_prepared_statement(type, values) ps.extend(CallableStatementMethods) ps.call(bind_arguments, &block) end # Store the given type of prepared statement in the associated database # with the given name. def prepare(type, name=nil, *values) ps = to_prepared_statement(type, values) ps.extend(PreparedStatementMethods) if name ps.prepared_statement_name = name db.set_prepared_statement(name, ps) end ps end private # Extend the dataset with the MySQL stored procedure methods.
def prepare_extend_sproc(ds) ds.extend(StoredProcedureMethods) end end end end end ruby-sequel-4.1.1/lib/sequel/adapters/shared/oracle.rb000066400000000000000000000357211220156535500227150ustar00rootroot00000000000000Sequel.require 'adapters/utils/emulate_offset_with_row_number' module Sequel module Oracle module DatabaseMethods extend Sequel::Database::ResetIdentifierMangling TEMPORARY = 'GLOBAL TEMPORARY '.freeze AUTOINCREMENT = ''.freeze attr_accessor :autosequence def create_sequence(name, opts=OPTS) self << create_sequence_sql(name, opts) end def create_trigger(*args) self << create_trigger_sql(*args) end def current_user @current_user ||= metadata_dataset.get{sys_context('USERENV', 'CURRENT_USER')} end def drop_sequence(name) self << drop_sequence_sql(name) end # Oracle uses the :oracle database type def database_type :oracle end # Oracle namespaces indexes per table. def global_index_namespace? false end def tables(opts=OPTS) m = output_identifier_meth metadata_dataset.from(:tab).server(opts[:server]).select(:tname).filter(:tabtype => 'TABLE').map{|r| m.call(r[:tname])} end def views(opts=OPTS) m = output_identifier_meth metadata_dataset.from(:tab).server(opts[:server]).select(:tname).filter(:tabtype => 'VIEW').map{|r| m.call(r[:tname])} end def view_exists?(name) m = input_identifier_meth metadata_dataset.from(:tab).filter(:tname => m.call(name), :tabtype => 'VIEW').count > 0 end # Oracle supports deferrable constraints. def supports_deferrable_constraints? true end # Oracle supports transaction isolation levels. def supports_transaction_isolation_levels? true end private # Handle Oracle specific ALTER TABLE SQL def alter_table_sql(table, op) case op[:op] when :add_column if op[:primary_key] sqls = [] sqls << alter_table_sql(table, op.merge(:primary_key=>nil)) if op[:auto_increment] seq_name = default_sequence_name(table, op[:name]) sqls << drop_sequence_sql(seq_name) sqls << create_sequence_sql(seq_name, op) sqls << "UPDATE #{quote_schema_table(table)} SET #{quote_identifier(op[:name])} = #{seq_name}.nextval" end sqls << "ALTER TABLE #{quote_schema_table(table)} ADD PRIMARY KEY (#{quote_identifier(op[:name])})" sqls else "ALTER TABLE #{quote_schema_table(table)} ADD #{column_definition_sql(op)}" end when :set_column_null "ALTER TABLE #{quote_schema_table(table)} MODIFY #{quote_identifier(op[:name])} #{op[:null] ?
'NULL' : 'NOT NULL'}" when :set_column_type "ALTER TABLE #{quote_schema_table(table)} MODIFY #{quote_identifier(op[:name])} #{type_literal(op)}" when :set_column_default "ALTER TABLE #{quote_schema_table(table)} MODIFY #{quote_identifier(op[:name])} DEFAULT #{literal(op[:default])}" else super(table, op) end end def auto_increment_sql AUTOINCREMENT end def create_sequence_sql(name, opts=OPTS) "CREATE SEQUENCE #{quote_identifier(name)} start with #{opts[:start_with]||1} increment by #{opts[:increment_by]||1} nomaxvalue" end def create_table_from_generator(name, generator, options) drop_statement, create_statements = create_table_sql_list(name, generator, options) (execute_ddl(drop_statement) rescue nil) if drop_statement create_statements.each{|sql| execute_ddl(sql)} end def create_table_sql_list(name, generator, options=OPTS) statements = [create_table_sql(name, generator, options)] drop_seq_statement = nil generator.columns.each do |c| if c[:auto_increment] c[:sequence_name] ||= default_sequence_name(name, c[:name]) unless c[:create_sequence] == false drop_seq_statement = drop_sequence_sql(c[:sequence_name]) statements << create_sequence_sql(c[:sequence_name], c) end unless c[:create_trigger] == false c[:trigger_name] ||= "BI_#{name}_#{c[:name]}" trigger_definition = <<-end_sql BEGIN IF :NEW.#{quote_identifier(c[:name])} IS NULL THEN SELECT #{c[:sequence_name]}.nextval INTO :NEW.#{quote_identifier(c[:name])} FROM dual; END IF; END; end_sql statements << create_trigger_sql(name, c[:trigger_name], trigger_definition, {:events => [:insert]}) end end end [drop_seq_statement, statements] end def create_trigger_sql(table, name, definition, opts=OPTS) events = opts[:events] ? Array(opts[:events]) : [:insert, :update, :delete] sql = <<-end_sql CREATE#{' OR REPLACE' if opts[:replace]} TRIGGER #{quote_identifier(name)} #{opts[:after] ? 'AFTER' : 'BEFORE'} #{events.map{|e| e.to_s.upcase}.join(' OR ')} ON #{quote_schema_table(table)} REFERENCING NEW AS NEW FOR EACH ROW #{definition} end_sql sql end DATABASE_ERROR_REGEXPS = { /unique constraint .+ violated/ => UniqueConstraintViolation, /integrity constraint .+ violated/ => ForeignKeyConstraintViolation, /check constraint .+ violated/ => CheckConstraintViolation, /cannot insert NULL into|cannot update .+ to NULL/ => NotNullConstraintViolation, /can't serialize access for this transaction/ => SerializationFailure, }.freeze def database_error_regexps DATABASE_ERROR_REGEXPS end def default_sequence_name(table, column) "seq_#{table}_#{column}" end def drop_sequence_sql(name) "DROP SEQUENCE #{quote_identifier(name)}" end def remove_cached_schema(table) @primary_key_sequences.delete(table) super end TRANSACTION_ISOLATION_LEVELS = {:uncommitted=>'READ COMMITTED'.freeze, :committed=>'READ COMMITTED'.freeze, :repeatable=>'SERIALIZABLE'.freeze, :serializable=>'SERIALIZABLE'.freeze} # Oracle doesn't support the READ UNCOMMITTED or REPEATABLE READ transaction # isolation levels, so upgrade to the next highest level in those cases. def set_transaction_isolation_sql(level) "SET TRANSACTION ISOLATION LEVEL #{TRANSACTION_ISOLATION_LEVELS[level]}" end def sequence_for_table(table) return nil unless autosequence @primary_key_sequences.fetch(table) do |key| pk = schema(table).select{|k, v| v[:primary_key]} @primary_key_sequences[table] = if pk.length == 1 seq = "seq_#{table}_#{pk.first.first}" seq.to_sym unless from(:user_sequences).filter(:sequence_name=>input_identifier_meth.call(seq)).empty? end end end # Oracle supports CREATE OR REPLACE VIEW.
def supports_create_or_replace_view? true end # Oracle's integer/:number type handles larger values than # most other databases's bigint types, so it should be # safe to use for Bignum. def type_literal_generic_bignum(column) :integer end # Oracle doesn't have a time type, so use timestamp for all # time columns. def type_literal_generic_time(column) :timestamp end # Oracle doesn't have a boolean type or even a reasonable # facsimile. Using a char(1) seems to be the recommended way. def type_literal_generic_trueclass(column) :'char(1)' end # SQL fragment for showing a table is temporary def temporary_table_sql TEMPORARY end # Oracle uses clob for text types. def uses_clob_for_text? true end end module DatasetMethods include EmulateOffsetWithRowNumber SELECT_CLAUSE_METHODS = Dataset.clause_methods(:select, %w'with select distinct columns from join where group having compounds order lock') ROW_NUMBER_EXPRESSION = LiteralString.new('ROWNUM').freeze SPACE = Dataset::SPACE APOS = Dataset::APOS APOS_RE = Dataset::APOS_RE DOUBLE_APOS = Dataset::DOUBLE_APOS FROM = Dataset::FROM BITCOMP_OPEN = "((0 - ".freeze BITCOMP_CLOSE = ") - 1)".freeze TIMESTAMP_FORMAT = "TIMESTAMP '%Y-%m-%d %H:%M:%S%N %z'".freeze TIMESTAMP_OFFSET_FORMAT = "%+03i:%02i".freeze BOOL_FALSE = "'N'".freeze BOOL_TRUE = "'Y'".freeze HSTAR = "H*".freeze DUAL = ['DUAL'.freeze].freeze def complex_expression_sql_append(sql, op, args) case op when :& sql << complex_expression_arg_pairs(args){|a, b| "CAST(BITAND(#{literal(a)}, #{literal(b)}) AS INTEGER)"} when :| sql << complex_expression_arg_pairs(args) do |a, b| s1 = '' complex_expression_sql_append(s1, :&, [a, b]) "(#{literal(a)} - #{s1} + #{literal(b)})" end when :^ sql << complex_expression_arg_pairs(args) do |*x| s1 = '' s2 = '' complex_expression_sql_append(s1, :|, x) complex_expression_sql_append(s2, :&, x) "(#{s1} - #{s2})" end when :'B~' sql << BITCOMP_OPEN literal_append(sql, args.at(0)) sql << BITCOMP_CLOSE when :<< sql << complex_expression_arg_pairs(args){|a, b| "(#{literal(a)} * power(2, #{literal(b)}))"} when :>> sql << complex_expression_arg_pairs(args){|a, b| "(#{literal(a)} / power(2, #{literal(b)}))"} when :% sql << complex_expression_arg_pairs(args){|a, b| "MOD(#{literal(a)}, #{literal(b)})"} else super end end # Oracle doesn't support CURRENT_TIME, as it doesn't have # a type for storing just time values without a date, so # use CURRENT_TIMESTAMP in its place. def constant_sql_append(sql, c) if c == :CURRENT_TIME super(sql, :CURRENT_TIMESTAMP) else super end end # Oracle treats empty strings like NULL values, and doesn't support # char_length, so make char_length use length with a nonempty string. # Unfortunately, as Oracle treats the empty string as NULL, there is # no way to get trim to return an empty string instead of nil if # the string only contains spaces. def emulated_function_sql_append(sql, f) case f.f when :char_length literal_append(sql, Sequel::SQL::Function.new(:length, Sequel.join([f.args.first, 'x'])) - 1) else super end end # Oracle uses MINUS instead of EXCEPT, and doesn't support EXCEPT ALL def except(dataset, opts=OPTS) raise(Sequel::Error, "EXCEPT ALL not supported") if opts[:all] compound_clone(:minus, dataset, opts) end # Use a custom expression with EXISTS to determine whether a dataset # is empty. def empty? db[:dual].where(@opts[:offset] ? exists : unordered.exists).get(1) == nil end # Oracle requires SQL standard datetimes def requires_sql_standard_datetimes? 
true end # Create a copy of this dataset associated to the given sequence name, # which will be used when calling insert to find the most recently # inserted value for the sequence. def sequence(s) clone(:sequence=>s) end # Handle LIMIT by using an unlimited subselect filtered with ROWNUM. def select_sql if (limit = @opts[:limit]) && !@opts[:sql] ds = clone(:limit=>nil) # Lock doesn't work in subselects, so don't use a subselect when locking. # Don't use a subselect if custom SQL is used, as it breaks some things. ds = ds.from_self unless @opts[:lock] sql = @opts[:append_sql] || '' subselect_sql_append(sql, ds.where(SQL::ComplexExpression.new(:<=, ROW_NUMBER_EXPRESSION, limit))) sql else super end end # Oracle requires recursive CTEs to have column aliases. def recursive_cte_requires_column_aliases? true end # Oracle supports GROUP BY CUBE def supports_group_cube? true end # Oracle supports GROUP BY ROLLUP def supports_group_rollup? true end # Oracle does not support INTERSECT ALL or EXCEPT ALL def supports_intersect_except_all? false end # Oracle does not support IS TRUE. def supports_is_true? false end # Oracle does not support SELECT *, column def supports_select_all_and_column? false end # Oracle supports timezones in literal timestamps. def supports_timestamp_timezones? true end # Oracle does not support WHERE 'Y' for WHERE TRUE. def supports_where_true? false end # Oracle supports window functions def supports_window_functions? true end private # Oracle doesn't support the use of AS when aliasing a dataset. It doesn't require # the use of AS anywhere, so this disables it in all cases. def as_sql_append(sql, aliaz) sql << SPACE quote_identifier_append(sql, aliaz) end # The strftime format to use when literalizing the time. def default_timestamp_format TIMESTAMP_FORMAT end # If this dataset is associated with a sequence, return the most recently # inserted sequence value. def execute_insert(sql, opts=OPTS) f = @opts[:from] super(sql, {:table=>(f.first if f), :sequence=>@opts[:sequence]}.merge(opts)) end # Use a colon for the timestamp offset, since Oracle appears to require it. def format_timestamp_offset(hour, minute) sprintf(TIMESTAMP_OFFSET_FORMAT, hour, minute) end # Oracle doesn't support empty values when inserting. def insert_supports_empty_values? false end # Use string in hex format for blob data. def literal_blob_append(sql, v) sql << APOS << v.unpack(HSTAR).first << APOS end # Oracle uses 'N' for false values. def literal_false BOOL_FALSE end # Oracle uses the SQL standard of only doubling ' inside strings. def literal_string_append(sql, v) sql << APOS << v.gsub(APOS_RE, DOUBLE_APOS) << APOS end # Oracle uses 'Y' for true values. def literal_true BOOL_TRUE end # Use the Oracle-specific SQL clauses (no limit, since it is emulated). def select_clause_methods SELECT_CLAUSE_METHODS end # Modify the SQL to add the list of tables to select FROM # Oracle doesn't support select without FROM clause # so add the dummy DUAL table if the dataset doesn't select # from a table. def select_from_sql(sql) sql << FROM source_list_append(sql, @opts[:from] || DUAL) end end end end ruby-sequel-4.1.1/lib/sequel/adapters/shared/postgres.rb000066400000000000000000001677711220156535500233270ustar00rootroot00000000000000Sequel.require 'adapters/utils/pg_types' module Sequel # Top level module for holding all PostgreSQL-related modules and classes # for Sequel. There are a few module level accessors that are added via # metaprogramming.
These are: # # client_min_messages :: Change the minimum level of messages that PostgreSQL will send to the # client. The PostgreSQL default is NOTICE, the Sequel default is # WARNING. Set to nil to not change the server default. Overridable on # a per instance basis via the :client_min_messages option. # force_standard_strings :: Set to false to not force the use of standard strings. Overridable # on a per instance basis via the :force_standard_strings option. # # It is not recommended that you use these module-level accessors. Instead, # use the database option to make the setting per-Database. # # All adapters that connect to PostgreSQL support the following option in # addition to those mentioned above: # # :search_path :: Set the schema search_path for this Database's connections. # Allows you to set which schemas do not need explicit # qualification, and in which order to check the schemas when # an unqualified object is referenced. module Postgres # Array of exceptions that need to be converted. JDBC # uses NativeExceptions, the native adapter uses PGError. CONVERTED_EXCEPTIONS = [] @client_min_messages = :warning @force_standard_strings = true class << self # By default, Sequel sets the minimum level of log messages sent to the client # to WARNING, where PostgreSQL uses a default of NOTICE. This is to avoid a lot # of mostly useless messages when running migrations, such as a couple of lines # for every serial primary key field. attr_accessor :client_min_messages # By default, Sequel forces the use of standard strings, so that # '\\' is interpreted as \\ and not \. While PostgreSQL <9.1 defaults # to interpreting plain strings, newer versions use standard strings by # default. Sequel assumes that SQL standard strings will be used. Setting # this to false means Sequel will use the database's default. attr_accessor :force_standard_strings end class CreateTableGenerator < Sequel::Schema::Generator # Add an exclusion constraint when creating the table. Elements should be # an array of 2 element arrays, with the first element being the column or # expression the exclusion constraint is applied to, and the second element # being the operator to use for the column/expression to check for exclusion. # # Example: # # exclude([[:col1, '&&'], [:col2, '=']]) # # EXCLUDE USING gist (col1 WITH &&, col2 WITH =) # # Options supported: # # :name :: Name the constraint with the given name (useful if you may # need to drop the constraint later) # :using :: Override the index_method for the exclusion constraint (defaults to gist). # :where :: Create a partial exclusion constraint, which only affects # a subset of table rows; the value should be a filter expression. def exclude(elements, opts=OPTS) constraints << {:type => :exclude, :elements => elements}.merge(opts) end end class AlterTableGenerator < Sequel::Schema::AlterTableGenerator # Adds an exclusion constraint to an existing table, see # CreateTableGenerator#exclude. def add_exclusion_constraint(elements, opts=OPTS) @operations << {:op => :add_constraint, :type => :exclude, :elements => elements}.merge(opts) end # Validate the constraint with the given name, which should have # been added previously with NOT VALID. def validate_constraint(name) @operations << {:op => :validate_constraint, :name => name} end end # Error raised when Sequel determines a PostgreSQL exclusion constraint has been violated. class ExclusionConstraintViolation < Sequel::ConstraintViolation; end # Methods shared by Database instances that connect to PostgreSQL.
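# For example, the settings described above can be given per-Database
# instead (the connection URL and option values are illustrative):
#
#   DB = Sequel.connect('postgres://localhost/mydb',
#     :client_min_messages=>nil,       # keep the server default
#     :force_standard_strings=>false,  # use the database default
#     :search_path=>'app,public')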
module DatabaseMethods extend Sequel::Database::ResetIdentifierMangling PREPARED_ARG_PLACEHOLDER = LiteralString.new('$').freeze RE_CURRVAL_ERROR = /currval of sequence "(.*)" is not yet defined in this session|relation "(.*)" does not exist/.freeze FOREIGN_KEY_LIST_ON_DELETE_MAP = {'a'.freeze=>:no_action, 'r'.freeze=>:restrict, 'c'.freeze=>:cascade, 'n'.freeze=>:set_null, 'd'.freeze=>:set_default}.freeze POSTGRES_DEFAULT_RE = /\A(?:B?('.*')::[^']+|\((-?\d+(?:\.\d+)?)\))\z/ UNLOGGED = 'UNLOGGED '.freeze # SQL fragment for custom sequences (ones not created by serial primary key), # Returning the schema and literal form of the sequence name, by parsing # the column defaults table. SELECT_CUSTOM_SEQUENCE_SQL = (<<-end_sql SELECT name.nspname AS "schema", CASE WHEN split_part(def.adsrc, '''', 2) ~ '.' THEN substr(split_part(def.adsrc, '''', 2), strpos(split_part(def.adsrc, '''', 2), '.')+1) ELSE split_part(def.adsrc, '''', 2) END AS "sequence" FROM pg_class t JOIN pg_namespace name ON (t.relnamespace = name.oid) JOIN pg_attribute attr ON (t.oid = attrelid) JOIN pg_attrdef def ON (adrelid = attrelid AND adnum = attnum) JOIN pg_constraint cons ON (conrelid = adrelid AND adnum = conkey[1]) WHERE cons.contype = 'p' AND def.adsrc ~* 'nextval' end_sql ).strip.gsub(/\s+/, ' ').freeze # SQL fragment for determining primary key column for the given table. Only # returns the first primary key if the table has a composite primary key. SELECT_PK_SQL = (<<-end_sql SELECT pg_attribute.attname AS pk FROM pg_class, pg_attribute, pg_index, pg_namespace WHERE pg_class.oid = pg_attribute.attrelid AND pg_class.relnamespace = pg_namespace.oid AND pg_class.oid = pg_index.indrelid AND pg_index.indkey[0] = pg_attribute.attnum AND pg_index.indisprimary = 't' end_sql ).strip.gsub(/\s+/, ' ').freeze # SQL fragment for getting sequence associated with table's # primary key, assuming it was a serial primary key column. SELECT_SERIAL_SEQUENCE_SQL = (<<-end_sql SELECT name.nspname AS "schema", seq.relname AS "sequence" FROM pg_class seq, pg_attribute attr, pg_depend dep, pg_namespace name, pg_constraint cons, pg_class t WHERE seq.oid = dep.objid AND seq.relnamespace = name.oid AND seq.relkind = 'S' AND attr.attrelid = dep.refobjid AND attr.attnum = dep.refobjsubid AND attr.attrelid = cons.conrelid AND attr.attnum = cons.conkey[1] AND attr.attrelid = t.oid AND cons.contype = 'p' end_sql ).strip.gsub(/\s+/, ' ').freeze # A hash of conversion procs, keyed by type integer (oid) and # having callable values for the conversion proc for that type. attr_reader :conversion_procs # Commit an existing prepared transaction with the given transaction # identifier string. def commit_prepared_transaction(transaction_id) run("COMMIT PREPARED #{literal(transaction_id)}") end # Creates the function in the database. Arguments: # * name : name of the function to create # * definition : string definition of the function, or object file for a dynamically loaded C function. # * opts : options hash: # * :args : function arguments, can be either a symbol or string specifying a type or an array of 1-3 elements: # * element 1 : argument data type # * element 2 : argument name # * element 3 : argument mode (e.g. in, out, inout) # * :behavior : Should be IMMUTABLE, STABLE, or VOLATILE. PostgreSQL assumes VOLATILE by default. # * :cost : The estimated cost of the function, used by the query planner. # * :language : The language the function uses. SQL is the default. 
# * :link_symbol : For a dynamically loaded C function, the function's link symbol if different from the definition argument. # * :returns : The data type returned by the function. If you are using OUT or INOUT argument modes, this is ignored. # Otherwise, if this is not specified, void is used by default to specify the function is not supposed to return a value. # * :rows : The estimated number of rows the function will return. Only use if the function returns SETOF something. # * :security_definer : Makes the privileges of the function the same as the privileges of the user who defined the function instead of # the privileges of the user who runs the function. There are security implications when doing this, see the PostgreSQL documentation. # * :set : Configuration variables to set while the function is being run, can be a hash or an array of two pairs. search_path is # often used here if :security_definer is used. # * :strict : Makes the function return NULL when any argument is NULL. def create_function(name, definition, opts=OPTS) self << create_function_sql(name, definition, opts) end # Create the procedural language in the database. Arguments: # * name : Name of the procedural language (e.g. plpgsql) # * opts : options hash: # * :handler : The name of a previously registered function used as a call handler for this language. # * :replace : Replace the installed language if it already exists (on PostgreSQL 9.0+). # * :trusted : Marks the language being created as trusted, allowing unprivileged users to create functions using this language. # * :validator : The name of a previously registered function used as a validator of functions defined in this language. def create_language(name, opts=OPTS) self << create_language_sql(name, opts) end # Create a schema in the database. Arguments: # * name : Name of the schema (e.g. admin) # * opts : options hash: # * :if_not_exists : Don't raise an error if the schema already exists (PostgreSQL 9.3+) # * :owner : The owner to set for the schema (defaults to current user if not specified) def create_schema(name, opts=OPTS) self << create_schema_sql(name, opts) end # Create a trigger in the database. Arguments: # * table : the table on which this trigger operates # * name : the name of this trigger # * function : the function to call for this trigger, which should return type trigger. # * opts : options hash: # * :after : Calls the trigger after execution instead of before. # * :args : An argument or array of arguments to pass to the function. # * :each_row : Calls the trigger for each row instead of for each statement. # * :events : Can be :insert, :update, :delete, or an array of any of those. Calls the trigger whenever that type of statement is used. By default, # the trigger is called for insert, update, or delete. def create_trigger(table, name, function, opts=OPTS) self << create_trigger_sql(table, name, function, opts) end # PostgreSQL uses the :postgres database type. def database_type :postgres end # Use PostgreSQL's DO syntax to execute an anonymous code block. The code should # be the literal code string to use in the underlying procedural language. Options: # # :language :: The procedural language the code is written in. The PostgreSQL # default is plpgsql. Can be specified as a string or a symbol. def do(code, opts=OPTS) language = opts[:language] run "DO #{"LANGUAGE #{literal(language.to_s)} " if language}#{literal(code)}" end # Drops the function from the database.
Arguments: # * name : name of the function to drop # * opts : options hash: # * :args : The arguments for the function. See create_function_sql. # * :cascade : Drop other objects depending on this function. # * :if_exists : Don't raise an error if the function doesn't exist. def drop_function(name, opts=OPTS) self << drop_function_sql(name, opts) end # Drops a procedural language from the database. Arguments: # * name : name of the procedural language to drop # * opts : options hash: # * :cascade : Drop other objects depending on this language. # * :if_exists : Don't raise an error if the language doesn't exist. def drop_language(name, opts=OPTS) self << drop_language_sql(name, opts) end # Drops a schema from the database. Arguments: # * name : name of the schema to drop # * opts : options hash: # * :cascade : Drop all objects in this schema. # * :if_exists : Don't raise an error if the schema doesn't exist. def drop_schema(name, opts=OPTS) self << drop_schema_sql(name, opts) end # Drops a trigger from the database. Arguments: # * table : table from which to drop the trigger # * name : name of the trigger to drop # * opts : options hash: # * :cascade : Drop other objects depending on this trigger. # * :if_exists : Don't raise an error if the trigger doesn't exist. def drop_trigger(table, name, opts=OPTS) self << drop_trigger_sql(table, name, opts) end # Return full foreign key information using the pg system tables, including # :name, :on_delete, :on_update, and :deferrable entries in the hashes. def foreign_key_list(table, opts=OPTS) m = output_identifier_meth schema, _ = opts.fetch(:schema, schema_and_table(table)) range = 0...32 base_ds = metadata_dataset. from(:pg_constraint___co). join(:pg_class___cl, :oid=>:conrelid). where(:cl__relkind=>'r', :co__contype=>'f', :cl__oid=>regclass_oid(table)) # We split the parsing into two separate queries, which are merged manually later. # This is because PostgreSQL stores both the referencing and referenced columns in # arrays, and I don't know a simple way to not create a cross product, as PostgreSQL # doesn't appear to have a function that takes an array and element and gives you # the index of that element in the array. ds = base_ds. join(:pg_attribute___att, :attrelid=>:oid, :attnum=>SQL::Function.new(:ANY, :co__conkey)). order(:co__conname, SQL::CaseExpression.new(range.map{|x| [SQL::Subscript.new(:co__conkey, [x]), x]}, 32, :att__attnum)). select(:co__conname___name, :att__attname___column, :co__confupdtype___on_update, :co__confdeltype___on_delete, SQL::BooleanExpression.new(:AND, :co__condeferrable, :co__condeferred).as(:deferrable)) ref_ds = base_ds. join(:pg_class___cl2, :oid=>:co__confrelid). join(:pg_attribute___att2, :attrelid=>:oid, :attnum=>SQL::Function.new(:ANY, :co__confkey)). order(:co__conname, SQL::CaseExpression.new(range.map{|x| [SQL::Subscript.new(:co__conkey, [x]), x]}, 32, :att2__attnum)). select(:co__conname___name, :cl2__relname___table, :att2__attname___refcolumn) # If a schema is given, we only search in that schema, and the returned :table # entry is schema qualified as well. if schema ref_ds = ref_ds.join(:pg_namespace___nsp2, :oid=>:cl2__relnamespace).
select_more(:nsp2__nspname___schema) end h = {} fklod_map = FOREIGN_KEY_LIST_ON_DELETE_MAP ds.each do |row| if r = h[row[:name]] r[:columns] << m.call(row[:column]) else h[row[:name]] = {:name=>m.call(row[:name]), :columns=>[m.call(row[:column])], :on_update=>fklod_map[row[:on_update]], :on_delete=>fklod_map[row[:on_delete]], :deferrable=>row[:deferrable]} end end ref_ds.each do |row| r = h[row[:name]] r[:table] ||= schema ? SQL::QualifiedIdentifier.new(m.call(row[:schema]), m.call(row[:table])) : m.call(row[:table]) r[:key] ||= [] r[:key] << m.call(row[:refcolumn]) end h.values end # Use the pg_* system tables to determine indexes on a table def indexes(table, opts=OPTS) m = output_identifier_meth range = 0...32 attnums = server_version >= 80100 ? SQL::Function.new(:ANY, :ind__indkey) : range.map{|x| SQL::Subscript.new(:ind__indkey, [x])} ds = metadata_dataset. from(:pg_class___tab). join(:pg_index___ind, :indrelid=>:oid). join(:pg_class___indc, :oid=>:indexrelid). join(:pg_attribute___att, :attrelid=>:tab__oid, :attnum=>attnums). left_join(:pg_constraint___con, :conname=>:indc__relname). filter(:indc__relkind=>'i', :ind__indisprimary=>false, :indexprs=>nil, :indpred=>nil, :indisvalid=>true, :tab__oid=>regclass_oid(table, opts)). order(:indc__relname, SQL::CaseExpression.new(range.map{|x| [SQL::Subscript.new(:ind__indkey, [x]), x]}, 32, :att__attnum)). select(:indc__relname___name, :ind__indisunique___unique, :att__attname___column, :con__condeferrable___deferrable) ds.filter!(:indisready=>true, :indcheckxmin=>false) if server_version >= 80300 indexes = {} ds.each do |r| i = indexes[m.call(r[:name])] ||= {:columns=>[], :unique=>r[:unique], :deferrable=>r[:deferrable]} i[:columns] << m.call(r[:column]) end indexes end # Dataset containing all current database locks def locks dataset.from(:pg_class).join(:pg_locks, :relation=>:relfilenode).select(:pg_class__relname, Sequel::SQL::ColumnAll.new(:pg_locks)) end # Notifies the given channel. See the PostgreSQL NOTIFY documentation. Options: # # :payload :: The payload string to use for the NOTIFY statement. Only supported # in PostgreSQL 9.0+. # :server :: The server to which to send the NOTIFY statement, if the sharding support # is being used. def notify(channel, opts=OPTS) sql = "NOTIFY " dataset.send(:identifier_append, sql, channel) if payload = opts[:payload] sql << ", " dataset.literal_append(sql, payload.to_s) end execute_ddl(sql, opts) end # Return primary key for the given table. def primary_key(table, opts=OPTS) quoted_table = quote_schema_table(table) Sequel.synchronize{return @primary_keys[quoted_table] if @primary_keys.has_key?(quoted_table)} sql = "#{SELECT_PK_SQL} AND pg_class.oid = #{literal(regclass_oid(table, opts))}" value = fetch(sql).single_value Sequel.synchronize{@primary_keys[quoted_table] = value} end # Return the sequence providing the default for the primary key for the given table. 
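# For example (table and sequence names are hypothetical):
#
#   DB.primary_key_sequence(:items)
#   # => "\"public\".\"items_id_seq\""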
def primary_key_sequence(table, opts=OPTS) quoted_table = quote_schema_table(table) Sequel.synchronize{return @primary_key_sequences[quoted_table] if @primary_key_sequences.has_key?(quoted_table)} sql = "#{SELECT_SERIAL_SEQUENCE_SQL} AND t.oid = #{literal(regclass_oid(table, opts))}" if pks = fetch(sql).single_record value = literal(SQL::QualifiedIdentifier.new(pks[:schema], pks[:sequence])) Sequel.synchronize{@primary_key_sequences[quoted_table] = value} else sql = "#{SELECT_CUSTOM_SEQUENCE_SQL} AND t.oid = #{literal(regclass_oid(table, opts))}" if pks = fetch(sql).single_record value = literal(SQL::QualifiedIdentifier.new(pks[:schema], LiteralString.new(pks[:sequence]))) Sequel.synchronize{@primary_key_sequences[quoted_table] = value} end end end # Refresh the materialized view with the given name. # # DB.refresh_view(:items_view) # # REFRESH MATERIALIZED VIEW items_view def refresh_view(name, opts=OPTS) run "REFRESH MATERIALIZED VIEW #{quote_schema_table(name)}" end # Reset the database's conversion procs, requires a server query if there are # any named types. def reset_conversion_procs @conversion_procs = get_conversion_procs end # Reset the primary key sequence for the given table, basing it on the # maximum current value of the table's primary key. def reset_primary_key_sequence(table) return unless seq = primary_key_sequence(table) pk = SQL::Identifier.new(primary_key(table)) db = self seq_ds = db.from(LiteralString.new(seq)) s, t = schema_and_table(table) table = Sequel.qualify(s, t) if s get{setval(seq, db[table].select{coalesce(max(pk)+seq_ds.select{:increment_by}, seq_ds.select(:min_value))}, false)} end # Rollback an existing prepared transaction with the given transaction # identifier string. def rollback_prepared_transaction(transaction_id) run("ROLLBACK PREPARED #{literal(transaction_id)}") end # PostgreSQL uses the SERIAL pseudo-type instead of AUTOINCREMENT for # managing incrementing primary keys. def serial_primary_key_options {:primary_key => true, :serial => true, :type=>Integer} end # The version of the PostgreSQL server, used for determining capability. def server_version(server=nil) return @server_version if @server_version @server_version = synchronize(server) do |conn| (conn.server_version rescue nil) if conn.respond_to?(:server_version) end unless @server_version @server_version = if m = /PostgreSQL (\d+)\.(\d+)(?:(?:rc\d+)|\.(\d+))?/.match(fetch('SELECT version()').single_value) (m[1].to_i * 10000) + (m[2].to_i * 100) + m[3].to_i else 0 end end warn 'Sequel no longer supports PostgreSQL <8.2, some things may not work' if @server_version < 80200 @server_version end # PostgreSQL supports CREATE TABLE IF NOT EXISTS on 9.1+ def supports_create_table_if_not_exists? server_version >= 90100 end # PostgreSQL 9.0+ supports some types of deferrable constraints beyond foreign key constraints. def supports_deferrable_constraints? server_version >= 90000 end # PostgreSQL supports deferrable foreign key constraints. def supports_deferrable_foreign_key_constraints? true end # PostgreSQL supports DROP TABLE IF EXISTS def supports_drop_table_if_exists? true end # PostgreSQL supports prepared transactions (two-phase commit) if # max_prepared_transactions is greater than 0. def supports_prepared_transactions? return @supports_prepared_transactions if defined?(@supports_prepared_transactions) @supports_prepared_transactions = self['SHOW max_prepared_transactions'].get.to_i > 0 end # PostgreSQL supports savepoints def supports_savepoints?
true end # PostgreSQL supports transaction isolation levels def supports_transaction_isolation_levels? true end # PostgreSQL supports transaction DDL statements. def supports_transactional_ddl? true end # Array of symbols specifying table names in the current database. # The dataset used is yielded to the block if one is provided, # otherwise, an array of symbols of table names is returned. # # Options: # :qualify :: Return the tables as Sequel::SQL::QualifiedIdentifier instances, # using the schema the table is located in as the qualifier. # :schema :: The schema to search # :server :: The server to use def tables(opts=OPTS, &block) pg_class_relname('r', opts, &block) end # Check whether the given type name string/symbol (e.g. :hstore) is supported by # the database. def type_supported?(type) @supported_types ||= {} @supported_types.fetch(type){@supported_types[type] = (from(:pg_type).filter(:typtype=>'b', :typname=>type.to_s).count > 0)} end # Array of symbols specifying view names in the current database. # # Options: # :qualify :: Return the views as Sequel::SQL::QualifiedIdentifier instances, # using the schema the view is located in as the qualifier. # :schema :: The schema to search # :server :: The server to use def views(opts=OPTS) pg_class_relname('v', opts) end private # Do a type name-to-oid lookup using the database and update the procs # with the related proc if the database supports the type. def add_named_conversion_procs(procs, named_procs) unless (named_procs).empty? convert_named_procs_to_procs(named_procs).each do |oid, pr| procs[oid] ||= pr end end end # Use a PostgreSQL-specific alter table generator def alter_table_generator_class Postgres::AlterTableGenerator end # Handle :using option for set_column_type op, and the :validate_constraint op. def alter_table_op_sql(table, op) case op[:op] when :set_column_type s = super if using = op[:using] using = Sequel::LiteralString.new(using) if using.is_a?(String) s << ' USING ' s << literal(using) end s when :validate_constraint "VALIDATE CONSTRAINT #{quote_identifier(op[:name])}" else super end end # If the :synchronous option is given and non-nil, set synchronous_commit # appropriately. Valid values for the :synchronous option are true, # :on, false, :off, :local, and :remote_write. def begin_new_transaction(conn, opts) super if opts.has_key?(:synchronous) case sync = opts[:synchronous] when true sync = :on when false sync = :off when nil return end log_connection_execute(conn, "SET LOCAL synchronous_commit = #{sync}") end end # Handle PostgreSQL specific default format. def column_schema_normalize_default(default, type) if m = POSTGRES_DEFAULT_RE.match(default) default = m[1] || m[2] end super(default, type) end # If the :prepare option is given and we aren't in a savepoint, # prepare the transaction for a two-phase commit. def commit_transaction(conn, opts=OPTS) if (s = opts[:prepare]) && _trans(conn)[:savepoint_level] <= 1 log_connection_execute(conn, "PREPARE TRANSACTION #{literal(s)}") else super end end # PostgreSQL can't combine rename_column operations, and it can combine # the custom validate_constraint operation. def combinable_alter_table_op?(op) (super || op[:op] == :validate_constraint) && op[:op] != :rename_column end VALID_CLIENT_MIN_MESSAGES = %w'DEBUG5 DEBUG4 DEBUG3 DEBUG2 DEBUG1 LOG NOTICE WARNING ERROR FATAL PANIC'.freeze # The SQL queries to execute when starting a new connection. 
def connection_configuration_sqls sqls = [] sqls << "SET standard_conforming_strings = ON" if typecast_value_boolean(@opts.fetch(:force_standard_strings, Postgres.force_standard_strings)) if (cmm = @opts.fetch(:client_min_messages, Postgres.client_min_messages)) && !cmm.to_s.empty? cmm = cmm.to_s.upcase.strip unless VALID_CLIENT_MIN_MESSAGES.include?(cmm) raise Error, "Unsupported client_min_messages setting: #{cmm}" end sqls << "SET client_min_messages = '#{cmm.to_s.upcase}'" end if search_path = @opts[:search_path] case search_path when String search_path = search_path.split(",").map{|s| s.strip} when Array # nil else raise Error, "unrecognized value for :search_path option: #{search_path.inspect}" end sqls << "SET search_path = #{search_path.map{|s| "\"#{s.gsub('"', '""')}\""}.join(',')}" end sqls end # Handle exclusion constraints. def constraint_definition_sql(constraint) case constraint[:type] when :exclude elements = constraint[:elements].map{|c, op| "#{literal(c)} WITH #{op}"}.join(', ') sql = "#{"CONSTRAINT #{quote_identifier(constraint[:name])} " if constraint[:name]}EXCLUDE USING #{constraint[:using]||'gist'} (#{elements})#{" WHERE #{filter_expr(constraint[:where])}" if constraint[:where]}" constraint_deferrable_sql_append(sql, constraint[:deferrable]) sql when :foreign_key, :check sql = super if constraint[:not_valid] sql << " NOT VALID" end sql else super end end # Convert the hash of named conversion procs into a hash of oid conversion procs. def convert_named_procs_to_procs(named_procs) h = {} from(:pg_type).where(:typtype=>'b', :typname=>named_procs.keys.map{|t| t.to_s}).select_map([:oid, :typname]).each do |oid, name| h[oid.to_i] = named_procs[name.untaint.to_sym] end h end # Copy the conversion procs related to the given oids from PG_TYPES into # the conversion procs for this instance. def copy_conversion_procs(oids) procs = conversion_procs oids.each do |oid| procs[oid] = PG_TYPES[oid] end end EXCLUSION_CONSTRAINT_SQL_STATE = '23P01'.freeze def database_specific_error_class_from_sqlstate(sqlstate) if sqlstate == EXCLUSION_CONSTRAINT_SQL_STATE ExclusionConstraintViolation else super end end DATABASE_ERROR_REGEXPS = [ # Add this check first, since otherwise it's possible for users to control # which exception class is generated. [/invalid input syntax/, DatabaseError], [/duplicate key value violates unique constraint/, UniqueConstraintViolation], [/violates foreign key constraint/, ForeignKeyConstraintViolation], [/violates check constraint/, CheckConstraintViolation], [/violates not-null constraint/, NotNullConstraintViolation], [/conflicting key value violates exclusion constraint/, ExclusionConstraintViolation], [/could not serialize access/, SerializationFailure], ].freeze def database_error_regexps DATABASE_ERROR_REGEXPS end # SQL for doing fast table insert from stdin. def copy_into_sql(table, opts) sql = "COPY #{literal(table)}" if cols = opts[:columns] sql << literal(Array(cols)) end sql << " FROM STDIN" if opts[:options] || opts[:format] sql << " (" sql << "FORMAT #{opts[:format]}" if opts[:format] sql << "#{', ' if opts[:format]}#{opts[:options]}" if opts[:options] sql << ')' end sql end # SQL for doing fast table output to stdout.
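# A minimal illustration of the SQL built below (table name and options are
# hypothetical):
#
#   copy_table_sql(:items, :format=>:csv, :options=>'HEADER')
#   # COPY "items" TO STDOUT (FORMAT csv, HEADER)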
def copy_table_sql(table, opts) if table.is_a?(String) table else if opts[:options] || opts[:format] options = " (" options << "FORMAT #{opts[:format]}" if opts[:format] options << "#{', ' if opts[:format]}#{opts[:options]}" if opts[:options] options << ')' end table = if table.is_a?(::Sequel::Dataset) "(#{table.sql})" else literal(table) end "COPY #{table} TO STDOUT#{options}" end end # SQL statement to create database function. def create_function_sql(name, definition, opts=OPTS) args = opts[:args] if !opts[:args].is_a?(Array) || !opts[:args].any?{|a| Array(a).length == 3 and %w'OUT INOUT'.include?(a[2].to_s)} returns = opts[:returns] || 'void' end language = opts[:language] || 'SQL' <<-END CREATE#{' OR REPLACE' if opts[:replace]} FUNCTION #{name}#{sql_function_args(args)} #{"RETURNS #{returns}" if returns} LANGUAGE #{language} #{opts[:behavior].to_s.upcase if opts[:behavior]} #{'STRICT' if opts[:strict]} #{'SECURITY DEFINER' if opts[:security_definer]} #{"COST #{opts[:cost]}" if opts[:cost]} #{"ROWS #{opts[:rows]}" if opts[:rows]} #{opts[:set].map{|k,v| " SET #{k} = #{v}"}.join("\n") if opts[:set]} AS #{literal(definition.to_s)}#{", #{literal(opts[:link_symbol].to_s)}" if opts[:link_symbol]} END end # SQL for creating a procedural language. def create_language_sql(name, opts=OPTS) "CREATE#{' OR REPLACE' if opts[:replace] && server_version >= 90000}#{' TRUSTED' if opts[:trusted]} LANGUAGE #{name}#{" HANDLER #{opts[:handler]}" if opts[:handler]}#{" VALIDATOR #{opts[:validator]}" if opts[:validator]}" end # SQL for creating a schema. def create_schema_sql(name, opts=OPTS) "CREATE SCHEMA #{'IF NOT EXISTS ' if opts[:if_not_exists]}#{quote_identifier(name)}#{" AUTHORIZATION #{literal(opts[:owner])}" if opts[:owner]}" end # DDL statement for creating a table with the given name, columns, and options def create_table_prefix_sql(name, options) temp_or_unlogged_sql = if options[:temp] raise(Error, "can't provide both :temp and :unlogged to create_table") if options[:unlogged] temporary_table_sql elsif options[:unlogged] UNLOGGED end "CREATE #{temp_or_unlogged_sql}TABLE#{' IF NOT EXISTS' if options[:if_not_exists]} #{options[:temp] ? quote_identifier(name) : quote_schema_table(name)}" end def create_table_sql(name, generator, options) sql = super if inherits = options[:inherits] sql << " INHERITS (#{Array(inherits).map{|t| quote_schema_table(t)}.join(', ')})" end sql end # Use a PostgreSQL-specific create table generator def create_table_generator_class Postgres::CreateTableGenerator end # SQL for creating a database trigger. def create_trigger_sql(table, name, function, opts=OPTS) events = opts[:events] ? Array(opts[:events]) : [:insert, :update, :delete] whence = opts[:after] ? 'AFTER' : 'BEFORE' "CREATE TRIGGER #{name} #{whence} #{events.map{|e| e.to_s.upcase}.join(' OR ')} ON #{quote_schema_table(table)}#{' FOR EACH ROW' if opts[:each_row]} EXECUTE PROCEDURE #{function}(#{Array(opts[:args]).map{|a| literal(a)}.join(', ')})" end # DDL fragment for initial part of CREATE VIEW statement def create_view_prefix_sql(name, options) create_view_sql_append_columns("CREATE #{'OR REPLACE 'if options[:replace]}#{'TEMPORARY 'if options[:temp]}#{'RECURSIVE ' if options[:recursive]}#{'MATERIALIZED ' if options[:materialized]}VIEW #{quote_schema_table(name)}", options[:columns] || options[:recursive]) end # The errors that the main adapters can raise, depends on the adapter being used def database_error_classes CONVERTED_EXCEPTIONS end # SQL for dropping a function from the database. 
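# For example (function name and argument type are hypothetical):
#
#   drop_function_sql('set_updated_at', :if_exists=>true, :args=>%w'timestamp')
#   # DROP FUNCTION IF EXISTS set_updated_at(timestamp)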
def drop_function_sql(name, opts=OPTS) "DROP FUNCTION#{' IF EXISTS' if opts[:if_exists]} #{name}#{sql_function_args(opts[:args])}#{' CASCADE' if opts[:cascade]}" end # Support :if_exists, :cascade, and :concurrently options. def drop_index_sql(table, op) sch, _ = schema_and_table(table) "DROP INDEX#{' CONCURRENTLY' if op[:concurrently]}#{' IF EXISTS' if op[:if_exists]} #{"#{quote_identifier(sch)}." if sch}#{quote_identifier(op[:name] || default_index_name(table, op[:columns]))}#{' CASCADE' if op[:cascade]}" end # SQL for dropping a procedural language from the database. def drop_language_sql(name, opts=OPTS) "DROP LANGUAGE#{' IF EXISTS' if opts[:if_exists]} #{name}#{' CASCADE' if opts[:cascade]}" end # SQL for dropping a schema from the database. def drop_schema_sql(name, opts=OPTS) "DROP SCHEMA#{' IF EXISTS' if opts[:if_exists]} #{quote_identifier(name)}#{' CASCADE' if opts[:cascade]}" end # SQL for dropping a trigger from the database. def drop_trigger_sql(table, name, opts=OPTS) "DROP TRIGGER#{' IF EXISTS' if opts[:if_exists]} #{name} ON #{quote_schema_table(table)}#{' CASCADE' if opts[:cascade]}" end # SQL for dropping a view from the database. def drop_view_sql(name, opts=OPTS) "DROP #{'MATERIALIZED ' if opts[:materialized]}VIEW#{' IF EXISTS' if opts[:if_exists]} #{quote_schema_table(name)}#{' CASCADE' if opts[:cascade]}" end # If opts includes a :schema option, or a default schema is used, restrict the dataset to # that schema. Otherwise, just exclude the default PostgreSQL schemas except for public. def filter_schema(ds, opts) expr = if schema = opts[:schema] schema.to_s else Sequel.function(:any, Sequel.function(:current_schemas, false)) end ds.where(:pg_namespace__nspname=>expr) end # Return a hash with oid keys and callable values, used for converting types. def get_conversion_procs procs = PG_TYPES.dup procs[1184] = procs[1114] = method(:to_application_timestamp) add_named_conversion_procs(procs, PG_NAMED_TYPES) procs end # PostgreSQL folds unquoted identifiers to lowercase, so it shouldn't need to upcase identifiers on input. def identifier_input_method_default nil end # PostgreSQL folds unquoted identifiers to lowercase, so it shouldn't need to upcase identifiers on output. def identifier_output_method_default nil end # PostgreSQL specific index SQL. def index_definition_sql(table_name, index) cols = index[:columns] index_name = index[:name] || default_index_name(table_name, cols) expr = if o = index[:opclass] "(#{Array(cols).map{|c| "#{literal(c)} #{o}"}.join(', ')})" else literal(Array(cols)) end unique = "UNIQUE " if index[:unique] index_type = index[:type] filter = index[:where] || index[:filter] filter = " WHERE #{filter_expr(filter)}" if filter case index_type when :full_text expr = "(to_tsvector(#{literal(index[:language] || 'simple')}::regconfig, #{literal(dataset.send(:full_text_string_join, cols))}))" index_type = index[:index_type] || :gin when :spatial index_type = :gist end "CREATE #{unique}INDEX#{' CONCURRENTLY' if index[:concurrently]} #{quote_identifier(index_name)} ON #{quote_schema_table(table_name)} #{"USING #{index_type} " if index_type}#{expr}#{filter}" end # Setup datastructures shared by all postgres adapters. def initialize_postgres_adapter @primary_keys = {} @primary_key_sequences = {} @conversion_procs = PG_TYPES.dup reset_conversion_procs end # Backbone of the tables and views support. 
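# Both #tables and #views above delegate here, passing the appropriate
# relkind ('r' or 'v'). For instance (schema name hypothetical):
#
#   DB.tables(:schema=>:audit)  # table names in the audit schema
#   DB.views(:qualify=>true)    # qualified names such as public.items_view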
def pg_class_relname(type, opts) ds = metadata_dataset.from(:pg_class).filter(:relkind=>type).select(:relname).server(opts[:server]).join(:pg_namespace, :oid=>:relnamespace) ds = filter_schema(ds, opts) m = output_identifier_meth if block_given? yield(ds) elsif opts[:qualify] ds.select_append(:pg_namespace__nspname).map{|r| Sequel.qualify(m.call(r[:nspname]), m.call(r[:relname]))} else ds.map{|r| m.call(r[:relname])} end end # Use a dollar sign instead of question mark for the argument # placeholder. def prepared_arg_placeholder PREPARED_ARG_PLACEHOLDER end # Return an expression for the oid of the table expr. Used by the metadata parsing # code to disambiguate unqualified tables. def regclass_oid(expr, opts=OPTS) if expr.is_a?(String) && !expr.is_a?(LiteralString) expr = Sequel.identifier(expr) end sch, table = schema_and_table(expr) sch ||= opts[:schema] if sch expr = Sequel.qualify(sch, table) end expr = if ds = opts[:dataset] ds.literal(expr) else literal(expr) end Sequel.cast(expr.to_s,:regclass).cast(:oid) end # Remove the cached entries for primary keys and sequences when a table is # changed. def remove_cached_schema(table) tab = quote_schema_table(table) Sequel.synchronize do @primary_keys.delete(tab) @primary_key_sequences.delete(tab) end super end # SQL DDL statement for renaming a table. PostgreSQL doesn't allow you to change a table's schema in # a rename table operation, so specifying a new schema in new_name will not have an effect. def rename_table_sql(name, new_name) "ALTER TABLE #{quote_schema_table(name)} RENAME TO #{quote_identifier(schema_and_table(new_name).last)}" end # PostgreSQL's autoincrementing primary keys are of type integer or bigint # using a nextval function call as a default. def schema_autoincrementing_primary_key?(schema) super && schema[:default] =~ /\Anextval/io end # Recognize PostgreSQL interval type. def schema_column_type(db_type) case db_type when /\Ainterval\z/io :interval else super end end # The dataset used for parsing table schemas, using the pg_* system catalogs. def schema_parse_table(table_name, opts) m = output_identifier_meth(opts[:dataset]) ds = metadata_dataset.select(:pg_attribute__attname___name, SQL::Cast.new(:pg_attribute__atttypid, :integer).as(:oid), SQL::Cast.new(:basetype__oid, :integer).as(:base_oid), SQL::Function.new(:format_type, :basetype__oid, :pg_type__typtypmod).as(:db_base_type), SQL::Function.new(:format_type, :pg_type__oid, :pg_attribute__atttypmod).as(:db_type), SQL::Function.new(:pg_get_expr, :pg_attrdef__adbin, :pg_class__oid).as(:default), SQL::BooleanExpression.new(:NOT, :pg_attribute__attnotnull).as(:allow_null), SQL::Function.new(:COALESCE, SQL::BooleanExpression.from_value_pairs(:pg_attribute__attnum => SQL::Function.new(:ANY, :pg_index__indkey)), false).as(:primary_key)). from(:pg_class). join(:pg_attribute, :attrelid=>:oid). join(:pg_type, :oid=>:atttypid). left_outer_join(:pg_type___basetype, :oid=>:typbasetype). left_outer_join(:pg_attrdef, :adrelid=>:pg_class__oid, :adnum=>:pg_attribute__attnum). left_outer_join(:pg_index, :indrelid=>:pg_class__oid, :indisprimary=>true). filter(:pg_attribute__attisdropped=>false). filter{|o| o.pg_attribute__attnum > 0}. filter(:pg_class__oid=>regclass_oid(table_name, opts)).
order(:pg_attribute__attnum) ds.map do |row| row[:default] = nil if blank_object?(row[:default]) if row[:base_oid] row[:domain_oid] = row[:oid] row[:oid] = row.delete(:base_oid) row[:db_domain_type] = row[:db_type] row[:db_type] = row.delete(:db_base_type) else row.delete(:base_oid) row.delete(:db_base_type) end row[:type] = schema_column_type(row[:db_type]) [m.call(row.delete(:name)), row] end end # Set the transaction isolation level on the given connection def set_transaction_isolation(conn, opts) level = opts.fetch(:isolation, transaction_isolation_level) read_only = opts[:read_only] deferrable = opts[:deferrable] if level || !read_only.nil? || !deferrable.nil? sql = "SET TRANSACTION" sql << " ISOLATION LEVEL #{Sequel::Database::TRANSACTION_ISOLATION_LEVELS[level]}" if level sql << " READ #{read_only ? 'ONLY' : 'WRITE'}" unless read_only.nil? sql << " #{'NOT ' unless deferrable}DEFERRABLE" unless deferrable.nil? log_connection_execute(conn, sql) end end # Turns an array of argument specifiers into an SQL fragment used for function arguments. See create_function_sql. def sql_function_args(args) "(#{Array(args).map{|a| Array(a).reverse.join(' ')}.join(', ')})" end # PostgreSQL can combine multiple alter table ops into a single query. def supports_combining_alter_table_ops? true end # PostgreSQL supports CREATE OR REPLACE VIEW. def supports_create_or_replace_view? true end # Handle bigserial type if :serial option is present def type_literal_generic_bignum(column) column[:serial] ? :bigserial : super end # PostgreSQL uses the bytea data type for blobs def type_literal_generic_file(column) :bytea end # Handle serial type if :serial option is present def type_literal_generic_integer(column) column[:serial] ? :serial : super end # PostgreSQL prefers the text datatype. If a fixed size is requested, # the char type is used. If the text type is specifically # disallowed or there is a size specified, use the varchar type. # Otherwise use the text type. def type_literal_generic_string(column) if column[:fixed] "char(#{column[:size]||255})" elsif column[:text] == false or column[:size] "varchar(#{column[:size]||255})" else :text end end end # Instance methods for datasets that connect to a PostgreSQL database.
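# For example, these methods provide PostgreSQL-specific dataset behavior
# such as (table and column names hypothetical):
#
#   DB[:items].full_text_search(:title, 'ruby')
#   DB[:items].insert_select(:name=>'a')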
module DatasetMethods ACCESS_SHARE = 'ACCESS SHARE'.freeze ACCESS_EXCLUSIVE = 'ACCESS EXCLUSIVE'.freeze BOOL_FALSE = 'false'.freeze BOOL_TRUE = 'true'.freeze COMMA_SEPARATOR = ', '.freeze DELETE_CLAUSE_METHODS = Dataset.clause_methods(:delete, %w'delete from using where returning') DELETE_CLAUSE_METHODS_91 = Dataset.clause_methods(:delete, %w'with delete from using where returning') EXCLUSIVE = 'EXCLUSIVE'.freeze EXPLAIN = 'EXPLAIN '.freeze EXPLAIN_ANALYZE = 'EXPLAIN ANALYZE '.freeze FOR_SHARE = ' FOR SHARE'.freeze INSERT_CLAUSE_METHODS = Dataset.clause_methods(:insert, %w'insert into columns values returning') INSERT_CLAUSE_METHODS_91 = Dataset.clause_methods(:insert, %w'with insert into columns values returning') NULL = LiteralString.new('NULL').freeze PG_TIMESTAMP_FORMAT = "TIMESTAMP '%Y-%m-%d %H:%M:%S".freeze QUERY_PLAN = 'QUERY PLAN'.to_sym ROW_EXCLUSIVE = 'ROW EXCLUSIVE'.freeze ROW_SHARE = 'ROW SHARE'.freeze SELECT_CLAUSE_METHODS = Dataset.clause_methods(:select, %w'select distinct columns from join where group having compounds order limit lock') SELECT_CLAUSE_METHODS_84 = Dataset.clause_methods(:select, %w'with select distinct columns from join where group having window compounds order limit lock') SHARE = 'SHARE'.freeze SHARE_ROW_EXCLUSIVE = 'SHARE ROW EXCLUSIVE'.freeze SHARE_UPDATE_EXCLUSIVE = 'SHARE UPDATE EXCLUSIVE'.freeze SQL_WITH_RECURSIVE = "WITH RECURSIVE ".freeze UPDATE_CLAUSE_METHODS = Dataset.clause_methods(:update, %w'update table set from where returning') UPDATE_CLAUSE_METHODS_91 = Dataset.clause_methods(:update, %w'with update table set from where returning') SPACE = Dataset::SPACE FROM = Dataset::FROM APOS = Dataset::APOS APOS_RE = Dataset::APOS_RE DOUBLE_APOS = Dataset::DOUBLE_APOS PAREN_OPEN = Dataset::PAREN_OPEN PAREN_CLOSE = Dataset::PAREN_CLOSE COMMA = Dataset::COMMA ESCAPE = Dataset::ESCAPE BACKSLASH = Dataset::BACKSLASH AS = Dataset::AS XOR_OP = ' # '.freeze CRLF = "\r\n".freeze BLOB_RE = /[\000-\037\047\134\177-\377]/n.freeze WINDOW = " WINDOW ".freeze EMPTY_STRING = ''.freeze LOCK_MODES = ['ACCESS SHARE', 'ROW SHARE', 'ROW EXCLUSIVE', 'SHARE UPDATE EXCLUSIVE', 'SHARE', 'SHARE ROW EXCLUSIVE', 'EXCLUSIVE', 'ACCESS EXCLUSIVE'].each{|s| s.freeze} # Shared methods for prepared statements when used with PostgreSQL databases. module PreparedStatementMethods # Override insert action to use RETURNING if the server supports it. def run if @prepared_type == :insert fetch_rows(prepared_sql){|r| return r.values.first} else super end end def prepared_sql return @prepared_sql if @prepared_sql @opts[:returning] = insert_pk if @prepared_type == :insert super @prepared_sql end end # Return the results of an EXPLAIN ANALYZE query as a string def analyze explain(:analyze=>true) end # Handle converting the ruby xor operator (^) into the # PostgreSQL xor operator (#), and use the ILIKE and NOT ILIKE # operators. def complex_expression_sql_append(sql, op, args) case op when :^ j = XOR_OP c = false args.each do |a| sql << j if c literal_append(sql, a) c ||= true end when :ILIKE, :'NOT ILIKE' sql << PAREN_OPEN literal_append(sql, args.at(0)) sql << SPACE << op.to_s << SPACE literal_append(sql, args.at(1)) sql << ESCAPE literal_append(sql, BACKSLASH) sql << PAREN_CLOSE else super end end # Return the results of an EXPLAIN query as a string def explain(opts=OPTS) with_sql((opts[:analyze] ? EXPLAIN_ANALYZE : EXPLAIN) + select_sql).map(QUERY_PLAN).join(CRLF) end # Return a cloned dataset which will use FOR SHARE to lock returned rows. 
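# For example (table name hypothetical):
#
#   DB[:items].for_share.all
#   # SELECT * FROM "items" FOR SHARE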
def for_share lock_style(:share) end # PostgreSQL specific full text search syntax, using tsearch2 (included # in 8.3 by default, and available for earlier versions as an add-on). def full_text_search(cols, terms, opts = OPTS) lang = opts[:language] || 'simple' terms = terms.join(' | ') if terms.is_a?(Array) filter("to_tsvector(?::regconfig, ?) @@ to_tsquery(?::regconfig, ?)", lang, full_text_string_join(cols), lang, terms) end # Insert given values into the database. def insert(*values) if @opts[:returning] # already know which columns to return, let the standard code # handle it super elsif @opts[:sql] # raw SQL used, so don't know which table is being inserted # into, and therefore can't determine primary key. Run the # insert statement and return nil. super nil else # Force the use of RETURNING with the primary key value. returning(insert_pk).insert(*values){|r| return r.values.first} end end # Insert a record returning the record inserted def insert_select(*values) returning.insert(*values){|r| return r} end # Locks all tables in the dataset's FROM clause (but not in JOINs) with # the specified mode (e.g. 'EXCLUSIVE'). If a block is given, starts # a new transaction, locks the table, and yields. If a block is not given # just locks the tables. Note that PostgreSQL will probably raise an error # if you lock the table outside of an existing transaction. Returns nil. def lock(mode, opts=OPTS) if block_given? # perform locking inside a transaction and yield to block @db.transaction(opts){lock(mode, opts); yield} else sql = 'LOCK TABLE ' source_list_append(sql, @opts[:from]) mode = mode.to_s.upcase.strip unless LOCK_MODES.include?(mode) raise Error, "Unsupported lock mode: #{mode}" end sql << " IN #{mode} MODE" @db.execute(sql, opts) end nil end # PostgreSQL allows inserting multiple rows at once. def multi_insert_sql(columns, values) sql = LiteralString.new('VALUES ') expression_list_append(sql, values.map{|r| Array(r)}) [insert_sql(columns, sql)] end # PostgreSQL supports using the WITH clause in subqueries if it # supports using WITH at all (i.e. on PostgreSQL 8.4+). def supports_cte_in_subqueries? supports_cte? end # DISTINCT ON is a PostgreSQL extension def supports_distinct_on? true end # PostgreSQL supports modifying joined datasets def supports_modifying_joins? true end # Returning is always supported. def supports_returning?(type) true end # PostgreSQL supports pattern matching via regular expressions def supports_regexp? true end # PostgreSQL supports timezones in literal timestamps def supports_timestamp_timezones? true end # PostgreSQL 8.4+ supports window functions def supports_window_functions? server_version >= 80400 end # Truncates the dataset. Returns nil. # # Options: # :cascade :: whether to use the CASCADE option, useful when truncating # tables with Foreign Keys. # :only :: truncate using ONLY, so child tables are unaffected # :restart :: use RESTART IDENTITY to restart any related sequences # # :only and :restart only work correctly on PostgreSQL 8.4+. # # Usage: # DB[:table].truncate # TRUNCATE TABLE "table" # # => nil # DB[:table].truncate(:cascade => true, :only=>true, :restart=>true) # TRUNCATE TABLE ONLY "table" RESTART IDENTITY CASCADE # # => nil def truncate(opts = OPTS) if opts.empty? super() else clone(:truncate_opts=>opts).truncate end end # Return a clone of the dataset with an additional named window that can be referenced in window functions.
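# A minimal sketch (column names hypothetical):
#
#   DB[:items].window(:w, :partition=>:grp, :order=>:id)
#   # appends: WINDOW "w" AS (PARTITION BY "grp" ORDER BY "id")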
def window(name, opts) clone(:window=>(@opts[:window]||[]) + [[name, SQL::Window.new(opts)]]) end protected # If returned primary keys are requested, use RETURNING unless already set on the # dataset. If RETURNING is already set, use existing returning values. If RETURNING # is only set to return a single column, return an array of just that column. # Otherwise, return an array of hashes. def _import(columns, values, opts=OPTS) if @opts[:returning] statements = multi_insert_sql(columns, values) @db.transaction(opts.merge(:server=>@opts[:server])) do statements.map{|st| returning_fetch_rows(st)} end.first.map{|v| v.length == 1 ? v.values.first : v} elsif opts[:return] == :primary_key returning(insert_pk)._import(columns, values, opts) else super end end private # Format TRUNCATE statement with PostgreSQL specific options. def _truncate_sql(table) to = @opts[:truncate_opts] || {} "TRUNCATE TABLE#{' ONLY' if to[:only]} #{table}#{' RESTART IDENTITY' if to[:restart]}#{' CASCADE' if to[:cascade]}" end # Allow truncation of multiple source tables. def check_truncation_allowed! raise(InvalidOperation, "Grouped datasets cannot be truncated") if opts[:group] raise(InvalidOperation, "Joined datasets cannot be truncated") if opts[:join] end # PostgreSQL allows deleting from joined datasets def delete_clause_methods if server_version >= 90100 DELETE_CLAUSE_METHODS_91 else DELETE_CLAUSE_METHODS end end # Only include the primary table in the main delete clause def delete_from_sql(sql) sql << FROM source_list_append(sql, @opts[:from][0..0]) end # Use USING to specify additional tables in a delete query def delete_using_sql(sql) join_from_sql(:USING, sql) end # PostgreSQL allows a RETURNING clause. def insert_clause_methods if server_version >= 90100 INSERT_CLAUSE_METHODS_91 else INSERT_CLAUSE_METHODS end end # Return the primary key to use for RETURNING in an INSERT statement def insert_pk if (f = opts[:from]) && !f.empty? && (pk = db.primary_key(f.first)) Sequel::SQL::Identifier.new(pk) end end # For multiple table support, PostgreSQL requires at least # two from tables, with joins allowed. def join_from_sql(type, sql) if(from = @opts[:from][1..-1]).empty? raise(Error, 'Need multiple FROM tables if updating/deleting a dataset with JOINs') if @opts[:join] else sql << SPACE << type.to_s << SPACE source_list_append(sql, from) select_join_sql(sql) end end # Use a generic blob quoting method, hopefully overridden in one of the subadapter methods def literal_blob_append(sql, v) sql << APOS << v.gsub(BLOB_RE){|b| "\\#{("%o" % b[0..1].unpack("C")[0]).rjust(3, '0')}"} << APOS end # PostgreSQL uses FALSE for false values def literal_false BOOL_FALSE end # PostgreSQL quotes NaN and Infinity. def literal_float(value) if value.finite? super elsif value.nan? "'NaN'" elsif value.infinite? == 1 "'Infinity'" else "'-Infinity'" end end # Assume that SQL standard quoting is on, per Sequel's defaults def literal_string_append(sql, v) sql << APOS << v.gsub(APOS_RE, DOUBLE_APOS) << APOS end # PostgreSQL uses TRUE for true values def literal_true BOOL_TRUE end # The order of clauses in the SELECT SQL statement def select_clause_methods server_version >= 80400 ? SELECT_CLAUSE_METHODS_84 : SELECT_CLAUSE_METHODS end # PostgreSQL requires parentheses around compound datasets if they use # CTEs, and using them in other places doesn't hurt. def compound_dataset_sql_append(sql, ds) sql << PAREN_OPEN super sql << PAREN_CLOSE end # Support FOR SHARE locking when using the :share lock style.
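# For example (table name hypothetical):
#
#   DB[:items].lock_style(:share).sql
#   # => SELECT * FROM "items" FOR SHARE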
def select_lock_sql(sql) @opts[:lock] == :share ? (sql << FOR_SHARE) : super end # SQL fragment for named window specifications def select_window_sql(sql) if ws = @opts[:window] sql << WINDOW c = false co = COMMA as = AS ws.map do |name, window| sql << co if c literal_append(sql, name) sql << as literal_append(sql, window) c ||= true end end end # Use WITH RECURSIVE instead of WITH if any of the CTEs is recursive def select_with_sql_base opts[:with].any?{|w| w[:recursive]} ? SQL_WITH_RECURSIVE : super end # The version of the database server def server_version db.server_version(@opts[:server]) end # Concatenate the expressions with a space in between def full_text_string_join(cols) cols = Array(cols).map{|x| SQL::Function.new(:COALESCE, x, EMPTY_STRING)} cols = cols.zip([SPACE] * cols.length).flatten cols.pop SQL::StringExpression.new(:'||', *cols) end # PostgreSQL splits the main table from the joined tables def update_clause_methods if server_version >= 90100 UPDATE_CLAUSE_METHODS_91 else UPDATE_CLAUSE_METHODS end end # Use FROM to specify additional tables in an update query def update_from_sql(sql) join_from_sql(:FROM, sql) end # Only include the primary table in the main update clause def update_table_sql(sql) sql << SPACE source_list_append(sql, @opts[:from][0..0]) end end end end ruby-sequel-4.1.1/lib/sequel/adapters/shared/progress.rb000066400000000000000000000021261220156535500233050ustar00rootroot00000000000000module Sequel module Progress module DatabaseMethods extend Sequel::Database::ResetIdentifierMangling # Progress uses the :progress database type. def database_type :progress end end module DatasetMethods SELECT_CLAUSE_METHODS = Dataset.clause_methods(:select, %w'select limit distinct columns from join where group order having compounds') # Progress requires SQL standard datetimes def requires_sql_standard_datetimes? true end # Progress does not support INTERSECT or EXCEPT def supports_intersect_except? false end private def select_clause_methods SELECT_CLAUSE_METHODS end # Progress uses TOP for limit, but it is only supported in Progress 10. # The Progress adapter targets Progress 9, so it silently ignores the option. def select_limit_sql(sql) raise(Error, "OFFSET not supported") if @opts[:offset] # if l = @opts[:limit] # sql << " TOP " # literal_append(sql, l) # end end end end end ruby-sequel-4.1.1/lib/sequel/adapters/shared/sqlite.rb000066400000000000000000000645261220156535500227560ustar00rootroot00000000000000Sequel.require 'adapters/utils/replace' module Sequel module SQLite # No matter how you connect to SQLite, the following Database options # can be used to set PRAGMAs on connections in a thread-safe manner: # :auto_vacuum, :foreign_keys, :synchronous, and :temp_store. module DatabaseMethods extend Sequel::Database::ResetIdentifierMangling AUTO_VACUUM = [:none, :full, :incremental].freeze PRIMARY_KEY_INDEX_RE = /\Asqlite_autoindex_/.freeze SYNCHRONOUS = [:off, :normal, :full].freeze TABLES_FILTER = "type = 'table' AND NOT name = 'sqlite_sequence'".freeze TEMP_STORE = [:default, :file, :memory].freeze VIEWS_FILTER = "type = 'view'".freeze TRANSACTION_MODE = { :deferred => "BEGIN DEFERRED TRANSACTION".freeze, :immediate => "BEGIN IMMEDIATE TRANSACTION".freeze, :exclusive => "BEGIN EXCLUSIVE TRANSACTION".freeze, nil => Sequel::Database::SQL_BEGIN, }.freeze # Whether to use integers for booleans in the database. SQLite recommends # booleans be stored as integers, but historically Sequel has used 't'/'f'. 
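# For example (database file, table, and column names hypothetical):
#
#   DB = Sequel.sqlite('app.db', :integer_booleans=>true)
#   DB[:a].where(:b=>true).sql
#   # roughly: SELECT * FROM `a` WHERE (`b` = 1)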
attr_accessor :integer_booleans # A symbol signifying the value of the auto_vacuum PRAGMA. def auto_vacuum AUTO_VACUUM[pragma_get(:auto_vacuum).to_i] end # Set the auto_vacuum PRAGMA using the given symbol (:none, :full, or # :incremental). See pragma_set. Consider using the :auto_vacuum # Database option instead. def auto_vacuum=(value) value = AUTO_VACUUM.index(value) || (raise Error, "Invalid value for auto_vacuum option. Please specify one of :none, :full, :incremental.") pragma_set(:auto_vacuum, value) end # Set the case_sensitive_like PRAGMA using the given boolean value, if using # SQLite 3.2.3+. If not using 3.2.3+, no error is raised. See pragma_set. # Consider using the :case_sensitive_like Database option instead. def case_sensitive_like=(value) pragma_set(:case_sensitive_like, !!value ? 'on' : 'off') if sqlite_version >= 30203 end # A symbol signifying the value of the default transaction mode attr_reader :transaction_mode # Set the default transaction mode. def transaction_mode=(value) if TRANSACTION_MODE.include?(value) @transaction_mode = value else raise Error, "Invalid value for transaction_mode. Please specify one of :deferred, :immediate, :exclusive, nil" end end # SQLite uses the :sqlite database type. def database_type :sqlite end # Boolean signifying the value of the foreign_keys PRAGMA, or nil # if not using SQLite 3.6.19+. def foreign_keys pragma_get(:foreign_keys).to_i == 1 if sqlite_version >= 30619 end # Set the foreign_keys PRAGMA using the given boolean value, if using # SQLite 3.6.19+. If not using 3.6.19+, no error is raised. See pragma_set. # Consider using the :foreign_keys Database option instead. def foreign_keys=(value) pragma_set(:foreign_keys, !!value ? 'on' : 'off') if sqlite_version >= 30619 end # Return the array of foreign key info hashes using the foreign_key_list PRAGMA, # including information for the :on_update and :on_delete entries. def foreign_key_list(table, opts=OPTS) m = output_identifier_meth h = {} metadata_dataset.with_sql("PRAGMA foreign_key_list(?)", input_identifier_meth.call(table)).each do |row| if r = h[row[:id]] r[:columns] << m.call(row[:from]) r[:key] << m.call(row[:to]) if r[:key] else h[row[:id]] = {:columns=>[m.call(row[:from])], :table=>m.call(row[:table]), :key=>([m.call(row[:to])] if row[:to]), :on_update=>on_delete_sql_to_sym(row[:on_update]), :on_delete=>on_delete_sql_to_sym(row[:on_delete])} end end h.values end # Use the index_list and index_info PRAGMAs to determine the indexes on the table. def indexes(table, opts=OPTS) m = output_identifier_meth im = input_identifier_meth indexes = {} metadata_dataset.with_sql("PRAGMA index_list(?)", im.call(table)).each do |r| next if r[:name] =~ PRIMARY_KEY_INDEX_RE indexes[m.call(r[:name])] = {:unique=>r[:unique].to_i==1} end indexes.each do |k, v| v[:columns] = metadata_dataset.with_sql("PRAGMA index_info(?)", im.call(k)).map(:name).map{|x| m.call(x)} end indexes end # Get the value of the given PRAGMA. def pragma_get(name) self["PRAGMA #{name}"].single_value end # Set the value of the given PRAGMA to value. # # This method is not thread safe, and will not work correctly if there # are multiple connections in the Database's connection pool. PRAGMA # modifications should be done when the connection is created, using # an option provided when creating the Database object. def pragma_set(name, value) execute_ddl("PRAGMA #{name} = #{value}") end # Set the integer_booleans option using the passed in :integer_booleans option.
def set_integer_booleans @integer_booleans = @opts.has_key?(:integer_booleans) ? typecast_value_boolean(@opts[:integer_booleans]) : true end # The version of the server as an integer, where 3.6.19 = 30619. # If the server version can't be determined, 0 is used. def sqlite_version return @sqlite_version if defined?(@sqlite_version) @sqlite_version = begin v = get{sqlite_version{}} [10000, 100, 1].zip(v.split('.')).inject(0){|a, m| a + m[0] * Integer(m[1])} rescue 0 end end # SQLite supports CREATE TABLE IF NOT EXISTS syntax since 3.3.0. def supports_create_table_if_not_exists? sqlite_version >= 30300 end # SQLite 3.6.19+ supports deferrable foreign key constraints. def supports_deferrable_foreign_key_constraints? sqlite_version >= 30619 end # SQLite 3.6.8+ supports savepoints. def supports_savepoints? sqlite_version >= 30608 end # Override the default setting for whether to use timezones in timestamps. # It is set to +false+ by default, as using timezones in timestamps breaks # SQLite's datetime functions. It's possible that the default will change in # a future version, so anyone relying on timezones in timestamps should # explicitly set this to +true+. attr_writer :use_timestamp_timezones # SQLite supports timezones in timestamps, since it just stores them as strings, # but it breaks the usage of SQLite's datetime functions. def use_timestamp_timezones? defined?(@use_timestamp_timezones) ? @use_timestamp_timezones : (@use_timestamp_timezones = false) end # A symbol signifying the value of the synchronous PRAGMA. def synchronous SYNCHRONOUS[pragma_get(:synchronous).to_i] end # Set the synchronous PRAGMA using the given symbol (:off, :normal, or :full). See pragma_set. # Consider using the :synchronous Database option instead. def synchronous=(value) value = SYNCHRONOUS.index(value) || (raise Error, "Invalid value for synchronous option. Please specify one of :off, :normal, :full.") pragma_set(:synchronous, value) end # Array of symbols specifying the table names in the current database. # # Options: # * :server - Set the server to use. def tables(opts=OPTS) tables_and_views(TABLES_FILTER, opts) end # A symbol signifying the value of the temp_store PRAGMA. def temp_store TEMP_STORE[pragma_get(:temp_store).to_i] end # Set the temp_store PRAGMA using the given symbol (:default, :file, or :memory). See pragma_set. # Consider using the :temp_store Database option instead. def temp_store=(value) value = TEMP_STORE.index(value) || (raise Error, "Invalid value for temp_store option. Please specify one of :default, :file, :memory.") pragma_set(:temp_store, value) end # Array of symbols specifying the view names in the current database. # # Options: # * :server - Set the server to use. def views(opts=OPTS) tables_and_views(VIEWS_FILTER, opts) end private # Run all alter_table commands in a transaction. This is technically only # needed for drop column. def apply_alter_table(table, ops) fks = foreign_keys self.foreign_keys = false if fks transaction do if ops.length > 1 && ops.all?{|op| op[:op] == :add_constraint} # If you are just doing constraints, apply all of them at the same time, # as otherwise all but the last one get lost. alter_table_sql_list(table, [{:op=>:add_constraints, :ops=>ops}]).flatten.each{|sql| execute_ddl(sql)} else # Run each operation separately, as later operations may depend on the # results of earlier operations.
ops.each{|op| alter_table_sql_list(table, [op]).flatten.each{|sql| execute_ddl(sql)}} end end ensure self.foreign_keys = true if fks end # SQLite supports limited table modification. You can add a column # or an index. Dropping columns is supported by copying the table into # a temporary table, dropping the table, and creating a new table without # the column inside of a transaction. def alter_table_sql(table, op) case op[:op] when :add_index, :drop_index super when :add_column if op[:unique] || op[:primary_key] duplicate_table(table){|columns| columns.push(op)} else super end when :drop_column ocp = lambda{|oc| oc.delete_if{|c| c.to_s == op[:name].to_s}} duplicate_table(table, :old_columns_proc=>ocp){|columns| columns.delete_if{|s| s[:name].to_s == op[:name].to_s}} when :rename_column ncp = lambda{|nc| nc.map!{|c| c.to_s == op[:name].to_s ? op[:new_name] : c}} duplicate_table(table, :new_columns_proc=>ncp){|columns| columns.each{|s| s[:name] = op[:new_name] if s[:name].to_s == op[:name].to_s}} when :set_column_default duplicate_table(table){|columns| columns.each{|s| s[:default] = op[:default] if s[:name].to_s == op[:name].to_s}} when :set_column_null duplicate_table(table){|columns| columns.each{|s| s[:null] = op[:null] if s[:name].to_s == op[:name].to_s}} when :set_column_type duplicate_table(table){|columns| columns.each{|s| s.merge!(op) if s[:name].to_s == op[:name].to_s}} when :drop_constraint case op[:type] when :primary_key duplicate_table(table){|columns| columns.each{|s| s[:primary_key] = nil}} when :foreign_key if op[:columns] duplicate_table(table, :skip_foreign_key_columns=>op[:columns]) else duplicate_table(table, :no_foreign_keys=>true) end else duplicate_table(table) end when :add_constraint duplicate_table(table, :constraints=>[op]) when :add_constraints duplicate_table(table, :constraints=>op[:ops]) else raise Error, "Unsupported ALTER TABLE operation: #{op[:op].inspect}" end end def begin_new_transaction(conn, opts) mode = opts[:mode] || @transaction_mode sql = TRANSACTION_MODE[mode] or raise Error, "transaction :mode must be one of: :deferred, :immediate, :exclusive, nil" log_connection_execute(conn, sql) set_transaction_isolation(conn, opts) end # A name to use for the backup table def backup_table_name(table, opts=OPTS) table = table.gsub('`', '') (opts[:times]||1000).times do |i| table_name = "#{table}_backup#{i}" return table_name unless table_exists?(table_name) end end # Surround default with parens to appease SQLite def column_definition_default_sql(sql, column) sql << " DEFAULT (#{literal(column[:default])})" if column.include?(:default) end # Add null/not null SQL fragment to column creation SQL. def column_definition_null_sql(sql, column) column = column.merge(:null=>false) if column[:primary_key] super(sql, column) end # Array of PRAGMA SQL statements based on the Database options that should be applied to # new connections. def connection_pragmas ps = [] v = typecast_value_boolean(opts.fetch(:foreign_keys, 1)) ps << "PRAGMA foreign_keys = #{v ? 1 : 0}" v = typecast_value_boolean(opts.fetch(:case_sensitive_like, 1)) ps << "PRAGMA case_sensitive_like = #{v ? 1 : 0}" [[:auto_vacuum, AUTO_VACUUM], [:synchronous, SYNCHRONOUS], [:temp_store, TEMP_STORE]].each do |prag, con| if v = opts[prag] raise(Error, "Value for PRAGMA #{prag} not supported, should be one of #{con.join(', ')}") unless v = con.index(v.to_sym) ps << "PRAGMA #{prag} = #{v}" end end ps end # SQLite supports creating temporary views.
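# For example (view and table names hypothetical):
#
#   DB.create_view(:recent_items, DB[:items].where{id > 100}, :temp=>true)
#   # CREATE TEMPORARY VIEW `recent_items` AS SELECT * FROM `items` WHERE (`id` > 100)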
def create_view_prefix_sql(name, options) "CREATE #{'TEMPORARY 'if options[:temp]}VIEW #{quote_schema_table(name)}" end DATABASE_ERROR_REGEXPS = { /is not unique\z/ => UniqueConstraintViolation, /foreign key constraint failed\z/ => ForeignKeyConstraintViolation, /\A(SQLITE ERROR 19 \(CONSTRAINT\) : )?constraint failed\z/ => ConstraintViolation, /may not be NULL\z/ => NotNullConstraintViolation, }.freeze def database_error_regexps DATABASE_ERROR_REGEXPS end # The array of column schema hashes for the current columns in the table def defined_columns_for(table) cols = parse_pragma(table, {}) cols.each do |c| c[:default] = LiteralString.new(c[:default]) if c[:default] c[:type] = c[:db_type] end cols end # Duplicate an existing table by creating a new table, copying all records # from the existing table into the new table, deleting the existing table # and renaming the new table to the existing table's name. def duplicate_table(table, opts=OPTS) remove_cached_schema(table) def_columns = defined_columns_for(table) old_columns = def_columns.map{|c| c[:name]} opts[:old_columns_proc].call(old_columns) if opts[:old_columns_proc] yield def_columns if block_given? constraints = (opts[:constraints] || []).dup pks = [] def_columns.each{|c| pks << c[:name] if c[:primary_key]} if pks.length > 1 constraints << {:type=>:primary_key, :columns=>pks} def_columns.each{|c| c[:primary_key] = false if c[:primary_key]} end # If dropping a foreign key constraint, drop all foreign key constraints, # as there is no way to determine which one to drop. unless opts[:no_foreign_keys] fks = foreign_key_list(table) # If dropping a column, if there is a foreign key with that # column, don't include it when building a copy of the table. if ocp = opts[:old_columns_proc] fks.delete_if{|c| ocp.call(c[:columns].dup) != c[:columns]} end # Skip any foreign key columns where a constraint for those # foreign keys is being dropped. if sfkc = opts[:skip_foreign_key_columns] fks.delete_if{|c| c[:columns] == sfkc} end constraints.concat(fks.each{|h| h[:type] = :foreign_key}) end def_columns_str = (def_columns.map{|c| column_definition_sql(c)} + constraints.map{|c| constraint_definition_sql(c)}).join(', ') new_columns = old_columns.dup opts[:new_columns_proc].call(new_columns) if opts[:new_columns_proc] qt = quote_schema_table(table) bt = quote_identifier(backup_table_name(qt)) a = [ "ALTER TABLE #{qt} RENAME TO #{bt}", "CREATE TABLE #{qt}(#{def_columns_str})", "INSERT INTO #{qt}(#{dataset.send(:identifier_list, new_columns)}) SELECT #{dataset.send(:identifier_list, old_columns)} FROM #{bt}", "DROP TABLE #{bt}" ] indexes(table).each do |name, h| if (h[:columns].map{|x| x.to_s} - new_columns).empty? a << alter_table_sql(table, h.merge(:op=>:add_index, :name=>name)) end end a end # SQLite folds unquoted identifiers to lowercase, so it shouldn't need to upcase identifiers on input. def identifier_input_method_default nil end # SQLite folds unquoted identifiers to lowercase, so it shouldn't need to upcase identifiers on output. def identifier_output_method_default nil end # Does the reverse of on_delete_clause, eg. converts strings like +'SET NULL'+ # to symbols +:set_null+. 
def on_delete_sql_to_sym str case str when 'RESTRICT' :restrict when 'CASCADE' :cascade when 'SET NULL' :set_null when 'SET DEFAULT' :set_default when 'NO ACTION' :no_action end end # Parse the output of the table_info pragma def parse_pragma(table_name, opts) metadata_dataset.with_sql("PRAGMA table_info(?)", input_identifier_meth(opts[:dataset]).call(table_name)).map do |row| row.delete(:cid) row[:allow_null] = row.delete(:notnull).to_i == 0 row[:default] = row.delete(:dflt_value) row[:primary_key] = row.delete(:pk).to_i > 0 row[:default] = nil if blank_object?(row[:default]) || row[:default] == 'NULL' row[:db_type] = row.delete(:type) row[:type] = schema_column_type(row[:db_type]) row end end # SQLite treats integer primary keys as autoincrementing (alias of rowid). def schema_autoincrementing_primary_key?(schema) super && schema[:db_type].downcase == 'integer' end # SQLite supports schema parsing using the table_info PRAGMA, so # parse the output of that into the format Sequel expects. def schema_parse_table(table_name, opts) m = output_identifier_meth(opts[:dataset]) parse_pragma(table_name, opts).map do |row| [m.call(row.delete(:name)), row] end end # Backbone of the tables and views support. def tables_and_views(filter, opts) m = output_identifier_meth metadata_dataset.from(:sqlite_master).server(opts[:server]).filter(filter).map{|r| m.call(r[:name])} end # SQLite only supports AUTOINCREMENT on integer columns, not # bigint columns, so use integer instead of bigint for those # columns. def type_literal_generic_bignum(column) column[:auto_increment] ? :integer : super end end # Instance methods for datasets that connect to an SQLite database module DatasetMethods include Dataset::Replace SELECT_CLAUSE_METHODS = Dataset.clause_methods(:select, %w'select distinct columns from join where group having compounds order limit') CONSTANT_MAP = {:CURRENT_DATE=>"date(CURRENT_TIMESTAMP, 'localtime')".freeze, :CURRENT_TIMESTAMP=>"datetime(CURRENT_TIMESTAMP, 'localtime')".freeze, :CURRENT_TIME=>"time(CURRENT_TIMESTAMP, 'localtime')".freeze} EMULATED_FUNCTION_MAP = {:char_length=>'length'.freeze} EXTRACT_MAP = {:year=>"'%Y'", :month=>"'%m'", :day=>"'%d'", :hour=>"'%H'", :minute=>"'%M'", :second=>"'%f'"} NOT_SPACE = Dataset::NOT_SPACE COMMA = Dataset::COMMA PAREN_CLOSE = Dataset::PAREN_CLOSE AS = Dataset::AS APOS = Dataset::APOS EXTRACT_OPEN = "CAST(strftime(".freeze EXTRACT_CLOSE = ') AS '.freeze NUMERIC = 'NUMERIC'.freeze INTEGER = 'INTEGER'.freeze BACKTICK = '`'.freeze BACKTICK_RE = /`/.freeze DOUBLE_BACKTICK = '``'.freeze BLOB_START = "X'".freeze HSTAR = "H*".freeze DATE_OPEN = "date(".freeze DATETIME_OPEN = "datetime(".freeze def cast_sql_append(sql, expr, type) if type == Time or type == DateTime sql << DATETIME_OPEN literal_append(sql, expr) sql << PAREN_CLOSE elsif type == Date sql << DATE_OPEN literal_append(sql, expr) sql << PAREN_CLOSE else super end end # SQLite doesn't support a NOT LIKE b, you need to use NOT (a LIKE b). # It doesn't support xor or the extract function natively, so those have to be emulated. def complex_expression_sql_append(sql, op, args) case op when :"NOT LIKE", :"NOT ILIKE" sql << NOT_SPACE complex_expression_sql_append(sql, (op == :"NOT ILIKE" ? 
:ILIKE : :LIKE), args) when :^ sql << complex_expression_arg_pairs(args) do |a, b| a = literal(a) b = literal(b) "((~(#{a} & #{b})) & (#{a} | #{b}))" end when :extract part = args.at(0) raise(Sequel::Error, "unsupported extract argument: #{part.inspect}") unless format = EXTRACT_MAP[part] sql << EXTRACT_OPEN << format << COMMA literal_append(sql, args.at(1)) sql << EXTRACT_CLOSE << (part == :second ? NUMERIC : INTEGER) << PAREN_CLOSE else super end end # SQLite has CURRENT_TIMESTAMP and related constants in UTC instead # of in localtime, so convert those constants to local time. def constant_sql_append(sql, constant) if c = CONSTANT_MAP[constant] sql << c else super end end # SQLite performs a TRUNCATE style DELETE if no filter is specified. # Since we want to always return the count of records, add a condition # that is always true and then delete. def delete @opts[:where] ? super : where(1=>1).delete end # Return a string specifying a query explanation for a SELECT of the # current dataset. Currently, the options are ignored, but the method accepts # an options hash to be compatible with other adapters. def explain(opts=nil) # Load the PrettyTable class, needed for explain output Sequel.extension(:_pretty_table) unless defined?(Sequel::PrettyTable) ds = db.send(:metadata_dataset).clone(:sql=>"EXPLAIN #{select_sql}") rows = ds.all Sequel::PrettyTable.string(rows, ds.columns) end # HAVING requires GROUP BY on SQLite def having(*cond) raise(InvalidOperation, "Can only specify a HAVING clause on a grouped dataset") unless @opts[:group] super end # SQLite uses the nonstandard ` (backtick) for quoting identifiers. def quoted_identifier_append(sql, c) sql << BACKTICK << c.to_s.gsub(BACKTICK_RE, DOUBLE_BACKTICK) << BACKTICK end # When a qualified column is selected on SQLite and the qualifier # is a subselect, the column name used is the full qualified name # (including the qualifier) instead of just the column name. To # get correct column names, you must use an alias. def select(*cols) if ((f = @opts[:from]) && f.any?{|t| t.is_a?(Dataset) || (t.is_a?(SQL::AliasedExpression) && t.expression.is_a?(Dataset))}) || ((j = @opts[:join]) && j.any?{|t| t.table.is_a?(Dataset)}) super(*cols.map{|c| alias_qualified_column(c)}) else super end end # SQLite does not support INTERSECT ALL or EXCEPT ALL def supports_intersect_except_all? false end # SQLite does not support IS TRUE def supports_is_true? false end # SQLite does not support multiple columns for the IN/NOT IN operators def supports_multiple_column_in? false end # SQLite supports timezones in literal timestamps, since it stores them # as text. But using timezones in timestamps breaks SQLite datetime # functions, so we allow the user to override the default per database. def supports_timestamp_timezones? db.use_timestamp_timezones? end # SQLite cannot use WHERE 't'. def supports_where_true? false end private # SQLite uses string literals instead of identifiers in AS clauses.
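# For example (table and column names hypothetical):
#
#   DB[:t].select(Sequel.as(:c, :a)).sql
#   # => SELECT `c` AS 'a' FROM `t`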
def as_sql_append(sql, aliaz) aliaz = aliaz.value if aliaz.is_a?(SQL::Identifier) sql << AS literal_append(sql, aliaz.to_s) end # If col is a qualified column, alias it to the same as the column name def alias_qualified_column(col) case col when Symbol t, c, a = split_symbol(col) if t && !a alias_qualified_column(SQL::QualifiedIdentifier.new(t, c)) else col end when SQL::QualifiedIdentifier SQL::AliasedExpression.new(col, col.column) else col end end # SQL fragment specifying a list of identifiers def identifier_list(columns) columns.map{|i| quote_identifier(i)}.join(COMMA) end # SQLite uses a preceding X for hex escaping strings def literal_blob_append(sql, v) sql << BLOB_START << v.unpack(HSTAR).first << APOS end # Respect the database integer_booleans setting, using 0 or 'f'. def literal_false @db.integer_booleans ? '0' : "'f'" end # Respect the database integer_booleans setting, using 1 or 't'. def literal_true @db.integer_booleans ? '1' : "'t'" end # SQLite does not support the SQL WITH clause def select_clause_methods SELECT_CLAUSE_METHODS end # SQLite does not support FOR UPDATE, but silently ignore it # instead of raising an error for compatibility with other # databases. def select_lock_sql(sql) super unless @opts[:lock] == :update end # SQLite treats a DELETE with no WHERE clause as a TRUNCATE def _truncate_sql(table) "DELETE FROM #{table}" end end end end ruby-sequel-4.1.1/lib/sequel/adapters/sqlite.rb000066400000000000000000000317301220156535500214770ustar00rootroot00000000000000require 'sqlite3' Sequel.require 'adapters/shared/sqlite' module Sequel # Top level module for holding all SQLite-related modules and classes # for Sequel. module SQLite TYPE_TRANSLATOR = tt = Class.new do FALSE_VALUES = (%w'0 false f no n' + [0]).freeze def blob(s) Sequel::SQL::Blob.new(s.to_s) end def boolean(s) s = s.downcase if s.is_a?(String) !FALSE_VALUES.include?(s) end def date(s) case s when String Sequel.string_to_date(s) when Integer Date.jd(s) when Float Date.jd(s.to_i) else raise Sequel::Error, "unhandled type when converting to date: #{s.inspect} (#{s.class.inspect})" end end def integer(s) s.to_i end def float(s) s.to_f end def numeric(s) s = s.to_s unless s.is_a?(String) ::BigDecimal.new(s) rescue s end def time(s) case s when String Sequel.string_to_time(s) when Integer Sequel::SQLTime.create(s/3600, (s % 3600)/60, s % 60) when Float s, f = s.divmod(1) Sequel::SQLTime.create(s/3600, (s % 3600)/60, s % 60, (f*1000000).round) else raise Sequel::Error, "unhandled type when converting to time: #{s.inspect} (#{s.class.inspect})" end end end.new # Hash with string keys and callable values for converting SQLite types. SQLITE_TYPES = {} { %w'date' => tt.method(:date), %w'time' => tt.method(:time), %w'bit bool boolean' => tt.method(:boolean), %w'integer smallint mediumint int bigint' => tt.method(:integer), %w'numeric decimal money' => tt.method(:numeric), %w'float double real dec fixed' + ['double precision'] => tt.method(:float), %w'blob' => tt.method(:blob) }.each do |k,v| k.each{|n| SQLITE_TYPES[n] = v} end # Database class for SQLite databases used with Sequel and the # ruby-sqlite3 driver. class Database < Sequel::Database include ::Sequel::SQLite::DatabaseMethods set_adapter_scheme :sqlite # Mimic the file:// uri, by having 2 preceding slashes specify a relative # path, and 3 preceding slashes specify an absolute path. def self.uri_to_options(uri) # :nodoc: { :database => (uri.host.nil? && uri.path == '/') ?
nil : "#{uri.host}#{uri.path}" } end private_class_method :uri_to_options # The conversion procs to use for this database attr_reader :conversion_procs # Connect to the database. Since SQLite is a file based database, # the only options available are :database (to specify the database # name), and :timeout, to specify how long to wait for the database to # be available if it is locked, given in milliseconds (default is 5000). def connect(server) opts = server_opts(server) opts[:database] = ':memory:' if blank_object?(opts[:database]) db = ::SQLite3::Database.new(opts[:database]) db.busy_timeout(opts.fetch(:timeout, 5000)) connection_pragmas.each{|s| log_yield(s){db.execute_batch(s)}} class << db attr_reader :prepared_statements end db.instance_variable_set(:@prepared_statements, {}) db end # Disconnect given connections from the database. def disconnect_connection(c) c.prepared_statements.each_value{|v| v.first.close} c.close end # Run the given SQL with the given arguments and yield each row. def execute(sql, opts=OPTS, &block) _execute(:select, sql, opts, &block) end # Run the given SQL with the given arguments and return the number of changed rows. def execute_dui(sql, opts=OPTS) _execute(:update, sql, opts) end # Drop any prepared statements on the connection when executing DDL. This is because # prepared statements lock the table in such a way that you can't drop or alter the # table while a prepared statement that references it still exists. def execute_ddl(sql, opts=OPTS) synchronize(opts[:server]) do |conn| conn.prepared_statements.values.each{|cps, s| cps.close} conn.prepared_statements.clear super end end # Run the given SQL with the given arguments and return the last inserted row id. def execute_insert(sql, opts=OPTS) _execute(:insert, sql, opts) end # Handle Integer and Float arguments, since SQLite can store timestamps as integers and floats. def to_application_timestamp(s) case s when String super when Integer super(Time.at(s).to_s) when Float super(DateTime.jd(s).to_s) else raise Sequel::Error, "unhandled type when converting to : #{s.inspect} (#{s.class.inspect})" end end private def adapter_initialize @conversion_procs = SQLITE_TYPES.dup @conversion_procs['datetime'] = @conversion_procs['timestamp'] = method(:to_application_timestamp) set_integer_booleans end # Yield an available connection. Rescue # any SQLite3::Exceptions and turn them into DatabaseErrors. def _execute(type, sql, opts, &block) begin synchronize(opts[:server]) do |conn| return execute_prepared_statement(conn, type, sql, opts, &block) if sql.is_a?(Symbol) log_args = opts[:arguments] args = {} opts.fetch(:arguments, {}).each{|k, v| args[k] = prepared_statement_argument(v)} case type when :select log_yield(sql, log_args){conn.query(sql, args, &block)} when :insert log_yield(sql, log_args){conn.execute(sql, args)} conn.last_insert_row_id when :update log_yield(sql, log_args){conn.execute_batch(sql, args)} conn.changes end end rescue SQLite3::Exception => e raise_error(e) end end # The SQLite adapter does not need the pool to convert exceptions. # Also, force the max connections to 1 if a memory database is being # used, as otherwise each connection gets a separate database. 
def connection_pool_default_options o = super.dup # Default to only a single connection if a memory database is used, # because otherwise each connection will get a separate database o[:max_connections] = 1 if @opts[:database] == ':memory:' || blank_object?(@opts[:database]) o end def prepared_statement_argument(arg) case arg when Date, DateTime, Time literal(arg)[1...-1] when SQL::Blob arg.to_blob when true, false if integer_booleans arg ? 1 : 0 else literal(arg)[1...-1] end else arg end end # Execute a prepared statement on the database using the given name. def execute_prepared_statement(conn, type, name, opts, &block) ps = prepared_statement(name) sql = ps.prepared_sql args = opts[:arguments] ps_args = {} args.each{|k, v| ps_args[k] = prepared_statement_argument(v)} if cpsa = conn.prepared_statements[name] cps, cps_sql = cpsa if cps_sql != sql cps.close cps = nil end end unless cps cps = log_yield("PREPARE #{name}: #{sql}"){conn.prepare(sql)} conn.prepared_statements[name] = [cps, sql] end log_sql = "EXECUTE #{name}" if ps.log_sql log_sql << " (" log_sql << sql log_sql << ")" end if block log_yield(log_sql, args){cps.execute(ps_args, &block)} else log_yield(log_sql, args){cps.execute!(ps_args){|r|}} case type when :insert conn.last_insert_row_id when :update conn.changes end end end # SQLite3 raises ArgumentError in addition to SQLite3::Exception in # some cases, such as operations on a closed database. def database_error_classes [SQLite3::Exception, ArgumentError] end end # Dataset class for SQLite datasets that use the ruby-sqlite3 driver. class Dataset < Sequel::Dataset include ::Sequel::SQLite::DatasetMethods Database::DatasetClass = self PREPARED_ARG_PLACEHOLDER = ':'.freeze # SQLite already supports named bind arguments, so use directly. module ArgumentMapper include Sequel::Dataset::ArgumentMapper protected # Return a hash with the same values as the given hash, # but with the keys converted to strings. def map_to_prepared_args(hash) args = {} hash.each{|k,v| args[k.to_s.gsub('.', '__')] = v} args end private # SQLite uses a : before the name of the argument for named # arguments. def prepared_arg(k) LiteralString.new("#{prepared_arg_placeholder}#{k.to_s.gsub('.', '__')}") end # Always assume a prepared argument. def prepared_arg?(k) true end end # SQLite prepared statement uses a new prepared statement each time # it is called, but it does use the bind arguments. module BindArgumentMethods include ArgumentMapper private # Run execute_select on the database with the given SQL and the stored # bind arguments. def execute(sql, opts=OPTS, &block) super(sql, {:arguments=>bind_arguments}.merge(opts), &block) end # Same as execute, explicit due to intricacies of alias and super. def execute_dui(sql, opts=OPTS, &block) super(sql, {:arguments=>bind_arguments}.merge(opts), &block) end # Same as execute, explicit due to intricacies of alias and super. def execute_insert(sql, opts=OPTS, &block) super(sql, {:arguments=>bind_arguments}.merge(opts), &block) end end module PreparedStatementMethods include BindArgumentMethods private # Execute the stored prepared statement name and the stored bind # arguments instead of the SQL given. def execute(sql, opts=OPTS, &block) super(prepared_statement_name, opts, &block) end # Same as execute, explicit due to intricacies of alias and super. def execute_dui(sql, opts=OPTS, &block) super(prepared_statement_name, opts, &block) end # Same as execute, explicit due to intricacies of alias and super. 
def execute_insert(sql, opts=OPTS, &block) super(prepared_statement_name, opts, &block) end end # Execute the given type of statement with the hash of values. def call(type, bind_vars={}, *values, &block) ps = to_prepared_statement(type, values) ps.extend(BindArgumentMethods) ps.call(bind_vars, &block) end # Yield a hash for each row in the dataset. def fetch_rows(sql) execute(sql) do |result| i = -1 cps = db.conversion_procs type_procs = result.types.map{|t| cps[base_type_name(t)]} cols = result.columns.map{|c| i+=1; [output_identifier(c), i, type_procs[i]]} @columns = cols.map{|c| c.first} result.each do |values| row = {} cols.each do |name,id,type_proc| v = values[id] if type_proc && v v = type_proc.call(v) end row[name] = v end yield row end end end # Prepare the given type of query with the given name and store # it in the database. Note that a new native prepared statement is # created on each call to this prepared statement. def prepare(type, name=nil, *values) ps = to_prepared_statement(type, values) ps.extend(PreparedStatementMethods) if name ps.prepared_statement_name = name db.set_prepared_statement(name, ps) end ps end private # The base type name for a given type, without any parenthetical part. def base_type_name(t) (t =~ /^(.*?)\(/ ? $1 : t).downcase if t end # Quote the string using the adapter class method. def literal_string_append(sql, v) sql << "'" << ::SQLite3::Database.quote(v) << "'" end # SQLite uses a : before the name of the argument as a placeholder. def prepared_arg_placeholder PREPARED_ARG_PLACEHOLDER end end end end ruby-sequel-4.1.1/lib/sequel/adapters/swift.rb000066400000000000000000000113221220156535500213250ustar00rootroot00000000000000module Sequel # Module holding the Swift DB support for Sequel. Swift DB is a # collection of drivers used in Swift ORM. # # The Swift adapter currently supports PostgreSQL, MySQL and SQLite3 # # Sequel.connect('swift://user:password@host/database?db_type=postgres') # Sequel.connect('swift://user:password@host/database?db_type=mysql') module Swift # Contains procs keyed on sub adapter type that extend the # given database object so it supports the correct database type. DATABASE_SETUP = {:postgres=>proc do |db| Sequel.require 'adapters/swift/postgres' db.extend(Sequel::Swift::Postgres::DatabaseMethods) db.extend_datasets Sequel::Postgres::DatasetMethods db.swift_class = ::Swift::DB::Postgres end, :mysql=>proc do |db| Sequel.require 'adapters/swift/mysql' db.extend(Sequel::Swift::MySQL::DatabaseMethods) db.dataset_class = Sequel::Swift::MySQL::Dataset db.swift_class = ::Swift::DB::Mysql end, :sqlite=>proc do |db| Sequel.require 'adapters/swift/sqlite' db.extend(Sequel::Swift::SQLite::DatabaseMethods) db.dataset_class = Sequel::Swift::SQLite::Dataset db.swift_class = ::Swift::DB::Sqlite3 db.set_integer_booleans end, } class Database < Sequel::Database set_adapter_scheme :swift # The Swift adapter class being used by this database. Connections # in this database's connection pool will be instances of this class. attr_accessor :swift_class # Create an instance of swift_class for the given options. def connect(server) opts = server_opts(server) opts[:pass] = opts[:password] setup_connection(swift_class.new(opts)) end # Execute the given SQL, yielding a Swift::Result if a block is given. def execute(sql, opts=OPTS) synchronize(opts[:server]) do |conn| begin res = log_yield(sql){conn.execute(sql)} yield res if block_given? 
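          # Note: execute deliberately returns nil below; callers consume the
          # Swift::Result by passing a block.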
nil rescue ::Swift::Error => e raise_error(e) end end end # Execute the SQL on the this database, returning the number of affected # rows. def execute_dui(sql, opts=OPTS) synchronize(opts[:server]) do |conn| begin log_yield(sql){conn.execute(sql).affected_rows} rescue ::Swift::Error => e raise_error(e) end end end # Execute the SQL on this database, returning the primary key of the # table being inserted to. def execute_insert(sql, opts=OPTS) synchronize(opts[:server]) do |conn| begin log_yield(sql){conn.execute(sql).insert_id} rescue ::Swift::Error => e raise_error(e) end end end private # Call the DATABASE_SETUP proc directly after initialization, # so the object always uses sub adapter specific code. Also, # raise an error immediately if the connection doesn't have a # db_type specified, since one is required to include the correct # subadapter. def adapter_initialize if db_type = @opts[:db_type] and !db_type.to_s.empty? if prok = DATABASE_SETUP[db_type.to_s.to_sym] prok.call(self) else raise(Error, "No :db_type option specified") end else raise(Error, ":db_type option not valid, should be postgres, mysql, or sqlite") end end # Method to call on a statement object to execute SQL that does # not return any rows. def connection_execute_method :execute end def database_error_classes [::Swift::Error] end # Set the :db entry to the same as the :database entry, since # Swift uses :db. def server_opts(o) o = super o[:db] ||= o[:database] o end # Allow extending the given connection when it is first created. # By default, just returns the connection. def setup_connection(conn) conn end end class Dataset < Sequel::Dataset Database::DatasetClass = self # Set the columns and yield the hashes to the block. def fetch_rows(sql) execute(sql) do |res| col_map = {} @columns = res.fields.map do |c| col_map[c] = output_identifier(c) end res.each do |r| h = {} r.each do |k, v| h[col_map[k]] = v.is_a?(StringIO) ? SQL::Blob.new(v.read) : v end yield h end end self end end end end ruby-sequel-4.1.1/lib/sequel/adapters/swift/000077500000000000000000000000001220156535500210015ustar00rootroot00000000000000ruby-sequel-4.1.1/lib/sequel/adapters/swift/mysql.rb000066400000000000000000000025431220156535500224770ustar00rootroot00000000000000require 'swift/db/mysql' Sequel.require 'adapters/shared/mysql' module Sequel module Swift # Database and Dataset instance methods for MySQL specific # support via Swift. module MySQL # Database instance methods for MySQL databases accessed via Swift. module DatabaseMethods extend Sequel::Database::ResetIdentifierMangling include Sequel::MySQL::DatabaseMethods private # The database name for the given database. def database_name opts[:database] end # Consider tinyint(1) columns as boolean. def schema_column_type(db_type) db_type =~ /\Atinyint\(1\)/ ? :boolean : super end # Apply the connectiong setting SQLs for every new connection. def setup_connection(conn) mysql_connection_setting_sqls.each{|sql| log_yield(sql){conn.execute(sql)}} super end end # Dataset class for MySQL datasets accessed via Swift. class Dataset < Swift::Dataset include Sequel::MySQL::DatasetMethods APOS = Dataset::APOS private # Use Swift's escape method for quoting. 
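        #
        # For example (illustrative; assumes a connected swift MySQL
        # database):
        #
        #   DB.literal("O'Brien") # => "'O\\'Brien'"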
def literal_string_append(sql, s) sql << APOS << db.synchronize(@opts[:server]){|c| c.escape(s)} << APOS end end end end end ruby-sequel-4.1.1/lib/sequel/adapters/swift/postgres.rb000066400000000000000000000027041220156535500231770ustar00rootroot00000000000000require 'swift/db/postgres' Sequel.require 'adapters/shared/postgres' module Sequel Postgres::CONVERTED_EXCEPTIONS << ::Swift::Error module Swift # Adapter, Database, and Dataset support for accessing a PostgreSQL # database via Swift. module Postgres # Methods to add to Database instances that access PostgreSQL via Swift. module DatabaseMethods extend Sequel::Database::ResetIdentifierMangling include Sequel::Postgres::DatabaseMethods # Add the primary_keys and primary_key_sequences instance variables, # so we can get the correct return values for inserted rows. def self.extended(db) super db.send(:initialize_postgres_adapter) end private # Remove all other options except for ones specifically handled, as # otherwise swift passes them to dbic++ which passes them to PostgreSQL # which can raise an error. def server_opts(o) o = super so = {} [:db, :user, :password, :host, :port].each{|s| so[s] = o[s] if o.has_key?(s)} so end # Extend the adapter with the Swift PostgreSQL AdapterMethods. def setup_connection(conn) conn = super(conn) conn.native_bind_format = true connection_configuration_sqls.each{|sql| log_yield(sql){conn.execute(sql)}} conn end end end end end ruby-sequel-4.1.1/lib/sequel/adapters/swift/sqlite.rb000066400000000000000000000017641220156535500226370ustar00rootroot00000000000000require 'swift/db/sqlite3' Sequel.require 'adapters/shared/sqlite' module Sequel module Swift # Database and Dataset instance methods for SQLite specific # support via Swift. module SQLite # Database instance methods for SQLite databases accessed via Swift. module DatabaseMethods extend Sequel::Database::ResetIdentifierMangling include Sequel::SQLite::DatabaseMethods # Set the correct pragmas on the connection. def connect(opts) c = super connection_pragmas.each{|s| log_yield(s){c.execute(s)}} c end end # Dataset class for SQLite datasets accessed via Swift. class Dataset < Swift::Dataset include Sequel::SQLite::DatasetMethods private # Use Swift's escape method for quoting. def literal_string_append(sql, s) sql << APOS << db.synchronize(@opts[:server]){|c| c.escape(s)} << APOS end end end end end ruby-sequel-4.1.1/lib/sequel/adapters/tinytds.rb000066400000000000000000000204451220156535500216750ustar00rootroot00000000000000require 'tiny_tds' Sequel.require 'adapters/shared/mssql' module Sequel module TinyTDS class Database < Sequel::Database include Sequel::MSSQL::DatabaseMethods set_adapter_scheme :tinytds # Transfer the :user option to the :username option. def connect(server) opts = server_opts(server) opts[:username] = opts[:user] c = TinyTds::Client.new(opts) c.query_options.merge!(:cache_rows=>false) if (ts = opts[:textsize]) sql = "SET TEXTSIZE #{typecast_value_integer(ts)}" log_yield(sql){c.execute(sql)} end c end # Execute the given +sql+ on the server. If the :return option # is present, its value should be a method symbol that is called # on the TinyTds::Result object returned from executing the # +sql+. The value of such a method is returned to the caller. # Otherwise, if a block is given, it is yielded the result object. # If no block is given and a :return is not present, +nil+ is returned. def execute(sql, opts=OPTS) synchronize(opts[:server]) do |c| begin m = opts[:return] r = nil if (args = opts[:arguments]) && !args.empty? 
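            # Emulate a prepared statement via sp_executesql: build typed
            # parameter declarations ("@name type") and value assignments
            # ("@name = value") for the EXEC call constructed below.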
types = [] values = [] args.each_with_index do |(k, v), i| v, type = ps_arg_type(v) types << "@#{k} #{type}" values << "@#{k} = #{v}" end case m when :do sql = "#{sql}; SELECT @@ROWCOUNT AS AffectedRows" single_value = true when :insert sql = "#{sql}; SELECT CAST(SCOPE_IDENTITY() AS bigint) AS Ident" single_value = true end sql = "EXEC sp_executesql N'#{c.escape(sql)}', N'#{c.escape(types.join(', '))}', #{values.join(', ')}" log_yield(sql) do r = c.execute(sql) r.each{|row| return row.values.first} if single_value end else log_yield(sql) do r = c.execute(sql) return r.send(m) if m end end yield(r) if block_given? rescue TinyTds::Error => e raise_error(e, :disconnect=>!c.active?) ensure r.cancel if r && c.sqlsent? end end end # Return the number of rows modified by the given +sql+. def execute_dui(sql, opts=OPTS) execute(sql, opts.merge(:return=>:do)) end # Return the value of the autogenerated primary key (if any) # for the row inserted by the given +sql+. def execute_insert(sql, opts=OPTS) execute(sql, opts.merge(:return=>:insert)) end # Execute the DDL +sql+ on the database and return nil. def execute_ddl(sql, opts=OPTS) execute(sql, opts.merge(:return=>:each)) nil end private # Choose whether to use unicode strings on initialization def adapter_initialize set_mssql_unicode_strings end # For some reason, unless you specify a column can be # NULL, it assumes NOT NULL, so turn NULL on by default unless # the column is a primary key column. def column_list_sql(g) pks = [] g.constraints.each{|c| pks = c[:columns] if c[:type] == :primary_key} g.columns.each{|c| c[:null] = true if !pks.include?(c[:name]) && !c[:primary_key] && !c.has_key?(:null) && !c.has_key?(:allow_null)} super end # tiny_tds uses TinyTds::Error as the base error class. def database_error_classes [TinyTds::Error] end # Stupid MSSQL maps foreign key and check constraint violations # to the same error code, and doesn't expose the sqlstate. Use # database error numbers if present and unambiguous, otherwise # fallback to the regexp mapping. def database_specific_error_class(exception, opts) case exception.db_error_number when 515 NotNullConstraintViolation when 2627 UniqueConstraintViolation else super end end # Return true if the :conn argument is present and not active. def disconnect_error?(e, opts) super || (opts[:conn] && !opts[:conn].active?) end # Dispose of any possible results of execution. def log_connection_execute(conn, sql) log_yield(sql){conn.execute(sql).each} end # Return a 2 element array with the literal value and type to use # in the prepared statement call for the given value and connection. def ps_arg_type(v) case v when Fixnum [v, 'int'] when Bignum [v, 'bigint'] when Float [v, 'double precision'] when Numeric [v, 'numeric'] when Time if v.is_a?(SQLTime) [literal(v), 'time'] else [literal(v), 'datetime'] end when DateTime [literal(v), 'datetime'] when Date [literal(v), 'date'] when nil ['NULL', 'nvarchar(max)'] when true ['1', 'int'] when false ['0', 'int'] when SQL::Blob [literal(v), 'varbinary(max)'] else [literal(v), 'nvarchar(max)'] end end end class Dataset < Sequel::Dataset include Sequel::MSSQL::DatasetMethods Database::DatasetClass = self # SQLite already supports named bind arguments, so use directly. module ArgumentMapper include Sequel::Dataset::ArgumentMapper protected # Return a hash with the same values as the given hash, # but with the keys converted to strings. 
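        #
        # For example (illustrative):
        #
        #   map_to_prepared_args(:id=>1, :"items.name"=>'a')
        #   # => {'id'=>1, 'items__name'=>'a'}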
def map_to_prepared_args(hash) args = {} hash.each{|k,v| args[k.to_s.gsub('.', '__')] = v} args end private # SQLite uses a : before the name of the argument for named # arguments. def prepared_arg(k) LiteralString.new("@#{k.to_s.gsub('.', '__')}") end # Always assume a prepared argument. def prepared_arg?(k) true end end # SQLite prepared statement uses a new prepared statement each time # it is called, but it does use the bind arguments. module PreparedStatementMethods include ArgumentMapper private # Run execute_select on the database with the given SQL and the stored # bind arguments. def execute(sql, opts=OPTS, &block) super(prepared_sql, {:arguments=>bind_arguments}.merge(opts), &block) end # Same as execute, explicit due to intricacies of alias and super. def execute_dui(sql, opts=OPTS, &block) super(prepared_sql, {:arguments=>bind_arguments}.merge(opts), &block) end # Same as execute, explicit due to intricacies of alias and super. def execute_insert(sql, opts=OPTS, &block) super(prepared_sql, {:arguments=>bind_arguments}.merge(opts), &block) end end # Yield hashes with symbol keys, attempting to optimize for # various cases. def fetch_rows(sql) execute(sql) do |result| @columns = result.fields.map!{|c| output_identifier(c)} if db.timezone == :utc result.each(:timezone=>:utc){|r| yield r} else result.each{|r| yield r} end end self end # Create a named prepared statement that is stored in the # database (and connection) for reuse. def prepare(type, name=nil, *values) ps = to_prepared_statement(type, values) ps.extend(PreparedStatementMethods) if name ps.prepared_statement_name = name db.set_prepared_statement(name, ps) end ps end private # Properly escape the given string +v+. def literal_string_append(sql, v) sql << (mssql_unicode_strings ? UNICODE_STRING_START : APOS) sql << db.synchronize(@opts[:server]){|c| c.escape(v)}.gsub(BACKSLASH_CRLF_RE, BACKSLASH_CRLF_REPLACE) << APOS end end end end ruby-sequel-4.1.1/lib/sequel/adapters/utils/000077500000000000000000000000001220156535500210055ustar00rootroot00000000000000ruby-sequel-4.1.1/lib/sequel/adapters/utils/emulate_offset_with_reverse_and_count.rb000066400000000000000000000044511220156535500311600ustar00rootroot00000000000000module Sequel module EmulateOffsetWithReverseAndCount # Make empty? work with an offset with an order. # By default it would break since the order would be based on # a column that empty does not select. def empty? if o = @opts[:offset] unlimited.count <= o else super end end # Emulate OFFSET support using reverse order in a subselect, requiring # a count of the number of rows. # # If offset is used, an order must be provided, since it needs to be # reversed in the subselect. Note that the order needs to be unambiguous # to work correctly, and you must select all columns that you are ordering on. def select_sql return super unless o = @opts[:offset] order = @opts[:order] || default_offset_order if order.nil? || order.empty? raise(Error, "#{db.database_type} requires an order be provided if using an offset") end ds = unlimited row_count = @opts[:offset_total_count] || ds.clone(:append_sql=>'').count dsa1 = dataset_alias(1) if o.is_a?(Symbol) && @opts[:bind_vars] && (match = Sequel::Dataset::PreparedStatementMethods::PLACEHOLDER_RE.match(o.to_s)) # Handle use of bound variable offsets. Unfortunately, prepared statement # bound variable offsets cannot be handled, since the bound variable value # isn't available until later. 
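        # A bound variable (as opposed to prepared statement) offset does have
        # a known value at this point, so substitute it in for the
        # reverse-offset arithmetic below.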
s = match[1].to_sym if prepared_arg?(s) o = prepared_arg(s) end end reverse_offset = row_count - o ds = if reverse_offset > 0 ds.limit(reverse_offset). reverse_order(*order). from_self(:alias=>dsa1). limit(@opts[:limit]). order(*order) else # Sequel doesn't allow a nonpositive limit. If the offset # is greater than the number of rows, the empty result set # shuld be returned, so use a condition that is always false. ds.where(1=>0) end sql = @opts[:append_sql] || '' subselect_sql_append(sql, ds) sql end private # The default order to use for datasets with offsets, if no order is defined. # By default, orders by all of the columns in the dataset. def default_offset_order clone(:append_sql=>'', :offset=>nil).columns end end end ruby-sequel-4.1.1/lib/sequel/adapters/utils/emulate_offset_with_row_number.rb000066400000000000000000000025741220156535500276360ustar00rootroot00000000000000module Sequel module EmulateOffsetWithRowNumber # Emulate OFFSET support with the ROW_NUMBER window function # # The implementation is ugly, cloning the current dataset and modifying # the clone to add a ROW_NUMBER window function (and some other things), # then using the modified clone in a subselect which is selected from. # # If offset is used, an order must be provided, because the use of ROW_NUMBER # requires an order. def select_sql return super unless o = @opts[:offset] order = @opts[:order] || default_offset_order if order.nil? || order.empty? raise(Error, "#{db.database_type} requires an order be provided if using an offset") end columns = clone(:append_sql=>'').columns dsa1 = dataset_alias(1) rn = row_number_column sql = @opts[:append_sql] || '' subselect_sql_append(sql, unlimited. unordered. select_append{ROW_NUMBER(:over, :order=>order){}.as(rn)}. from_self(:alias=>dsa1). select(*columns). limit(@opts[:limit]). where(SQL::Identifier.new(rn) > o). order(rn)) sql end private # The default order to use for datasets with offsets, if no order is defined. # By default, orders by all of the columns in the dataset. def default_offset_order clone(:append_sql=>'').columns end end end ruby-sequel-4.1.1/lib/sequel/adapters/utils/pg_types.rb000066400000000000000000000042401220156535500231640ustar00rootroot00000000000000module Sequel module Postgres NAN = 0.0/0.0 PLUS_INFINITY = 1.0/0.0 MINUS_INFINITY = -1.0/0.0 NAN_STR = 'NaN'.freeze PLUS_INFINITY_STR = 'Infinity'.freeze MINUS_INFINITY_STR = '-Infinity'.freeze TRUE_STR = 't'.freeze DASH_STR = '-'.freeze TYPE_TRANSLATOR = tt = Class.new do def boolean(s) s == TRUE_STR end def integer(s) s.to_i end def float(s) case s when NAN_STR NAN when PLUS_INFINITY_STR PLUS_INFINITY when MINUS_INFINITY_STR MINUS_INFINITY else s.to_f end end def date(s) ::Date.new(*s.split(DASH_STR).map{|x| x.to_i}) end def bytea(str) str = if str =~ /\A\\x/ # PostgreSQL 9.0+ bytea hex format str[2..-1].gsub(/(..)/){|s| s.to_i(16).chr} else # Historical PostgreSQL bytea escape format str.gsub(/\\(\\|'|[0-3][0-7][0-7])/) {|s| if s.size == 2 then s[1,1] else s[1,3].oct.chr end } end ::Sequel::SQL::Blob.new(str) end end.new # Type OIDs for string types used by PostgreSQL. These types don't # have conversion procs associated with them (since the data is # already in the form of a string). STRING_TYPES = [18, 19, 25, 1042, 1043] # Hash with type name strings/symbols and callable values for converting PostgreSQL types. # Non-builtin types that don't have fixed numbers should use this to register # conversion procs. 
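    #
    # For example, an extension could register a conversion proc for the
    # citext type like this (a sketch; the key is the type name symbol
    # reported by PostgreSQL):
    #
    #   Sequel::Postgres::PG_NAMED_TYPES[:citext] = lambda{|s| s}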
PG_NAMED_TYPES = {} unless defined?(PG_NAMED_TYPES) # Hash with integer keys and callable values for converting PostgreSQL types. PG_TYPES = {} unless defined?(PG_TYPES) { [16] => tt.method(:boolean), [17] => tt.method(:bytea), [20, 21, 23, 26] => tt.method(:integer), [700, 701] => tt.method(:float), [1700] => ::BigDecimal.method(:new), [1083, 1266] => ::Sequel.method(:string_to_time), [1082] => ::Sequel.method(:string_to_date), [1184, 1114] => ::Sequel.method(:database_to_application_timestamp), }.each do |k,v| k.each{|n| PG_TYPES[n] = v} end end end ruby-sequel-4.1.1/lib/sequel/adapters/utils/replace.rb000066400000000000000000000015601220156535500227470ustar00rootroot00000000000000module Sequel class Dataset module Replace INSERT = Dataset::INSERT REPLACE = 'REPLACE'.freeze # Execute a REPLACE statement on the database (deletes any duplicate # rows before inserting). def replace(*values) execute_insert(replace_sql(*values)) end # SQL statement for REPLACE def replace_sql(*values) clone(:replace=>true).insert_sql(*values) end # Replace multiple rows in a single query. def multi_replace(*values) clone(:replace=>true).multi_insert(*values) end # Databases using this module support REPLACE. def supports_replace? true end private # If this is an replace instead of an insert, use replace instead def insert_insert_sql(sql) sql << (@opts[:replace] ? REPLACE : INSERT) end end end end ruby-sequel-4.1.1/lib/sequel/adapters/utils/split_alter_table.rb000066400000000000000000000024111220156535500250210ustar00rootroot00000000000000module Sequel::Database::SplitAlterTable private # Preprocess the array of operations. If it looks like some operations depend # on results of earlier operations and may require reloading the schema to # work correctly, split those operations into separate lists, and between each # list, remove the cached schema so that the later operations deal with the # then current table schema. def apply_alter_table(name, ops) modified_columns = [] op_groups = [[]] ops.each do |op| case op[:op] when :add_column, :set_column_type, :set_column_null, :set_column_default if modified_columns.include?(op[:name]) op_groups << [] else modified_columns << op[:name] end when :rename_column if modified_columns.include?(op[:name]) || modified_columns.include?(op[:new_name]) op_groups << [] end modified_columns << op[:name] unless modified_columns.include?(op[:name]) modified_columns << op[:new_name] unless modified_columns.include?(op[:new_name]) end op_groups.last << op end op_groups.each do |opgs| next if opgs.empty? alter_table_sql_list(name, opgs).each{|sql| execute_ddl(sql)} remove_cached_schema(name) end end end ruby-sequel-4.1.1/lib/sequel/adapters/utils/stored_procedures.rb000066400000000000000000000037471220156535500251000ustar00rootroot00000000000000module Sequel class Dataset module StoredProcedureMethods # The name of the stored procedure to call attr_accessor :sproc_name # The name of the stored procedure to call attr_writer :sproc_args # Call the stored procedure with the given args def call(*args, &block) sp = clone sp.sproc_args = args sp.run(&block) end # Programmer friendly string showing this is a stored procedure, # showing the name of the procedure. 
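      #
      # For example (illustrative; :gen_report is a hypothetical stored
      # procedure on a MySQL database):
      #
      #   DB[:items].prepare_sproc(:select, :gen_report).inspect
      #   # => "<Sequel::MySQL::Dataset/StoredProcedure name=gen_report>"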
def inspect "<#{self.class.name}/StoredProcedure name=#{@sproc_name}>" end # Run the stored procedure with the current args on the database def run(&block) case @sproc_type when :select, :all all(&block) when :first first when :insert insert when :update update when :delete delete end end # Set the type of the stored procedure and override the corresponding _sql # method to return the empty string (since the result will be # ignored anyway). def sproc_type=(type) @sproc_type = type @opts[:sql] = '' end end module StoredProcedures # For the given type (:select, :first, :insert, :update, or :delete), # run the database stored procedure with the given name with the given # arguments. def call_sproc(type, name, *args) prepare_sproc(type, name).call(*args) end # Transform this dataset into a stored procedure that you can call # multiple times with new arguments. def prepare_sproc(type, name) sp = clone prepare_extend_sproc(sp) sp.sproc_type = type sp.sproc_name = name sp end private # Extend the dataset with the stored procedure methods. def prepare_extend_sproc(ds) ds.extend(StoredProcedureMethods) end end end end ruby-sequel-4.1.1/lib/sequel/ast_transformer.rb000066400000000000000000000154331220156535500216060ustar00rootroot00000000000000module Sequel # The +ASTTransformer+ class is designed to handle the abstract syntax trees # that Sequel uses internally and produce modified copies of them. By itself # it only produces a straight copy. It's designed to be subclassed and have # subclasses returned modified copies of the specific nodes that need to # be modified. class ASTTransformer # Return +obj+ or a potentially transformed version of it. def transform(obj) v(obj) end private # Recursive version that handles all of Sequel's internal object types # and produces copies of them. def v(o) case o when Symbol, Numeric, String, Class, TrueClass, FalseClass, NilClass o when Array o.map{|x| v(x)} when Hash h = {} o.each{|k, val| h[v(k)] = v(val)} h when SQL::ComplexExpression SQL::ComplexExpression.new(o.op, *v(o.args)) when SQL::Identifier SQL::Identifier.new(v(o.value)) when SQL::QualifiedIdentifier SQL::QualifiedIdentifier.new(v(o.table), v(o.column)) when SQL::OrderedExpression SQL::OrderedExpression.new(v(o.expression), o.descending, :nulls=>o.nulls) when SQL::AliasedExpression SQL::AliasedExpression.new(v(o.expression), o.aliaz) when SQL::CaseExpression args = [v(o.conditions), v(o.default)] args << v(o.expression) if o.expression? 
SQL::CaseExpression.new(*args) when SQL::Cast SQL::Cast.new(v(o.expr), o.type) when SQL::Function SQL::Function.new(o.f, *v(o.args)) when SQL::Subscript SQL::Subscript.new(v(o.f), v(o.sub)) when SQL::WindowFunction SQL::WindowFunction.new(v(o.function), v(o.window)) when SQL::Window opts = o.opts.dup opts[:partition] = v(opts[:partition]) if opts[:partition] opts[:order] = v(opts[:order]) if opts[:order] SQL::Window.new(opts) when SQL::PlaceholderLiteralString args = if o.args.is_a?(Hash) h = {} o.args.each{|k,val| h[k] = v(val)} h else v(o.args) end SQL::PlaceholderLiteralString.new(o.str, args, o.parens) when SQL::JoinOnClause SQL::JoinOnClause.new(v(o.on), o.join_type, v(o.table), v(o.table_alias)) when SQL::JoinUsingClause SQL::JoinUsingClause.new(v(o.using), o.join_type, v(o.table), v(o.table_alias)) when SQL::JoinClause SQL::JoinClause.new(o.join_type, v(o.table), v(o.table_alias)) when SQL::Wrapper SQL::Wrapper.new(v(o.value)) else o end end end # Handles qualifying existing datasets, so that unqualified columns # in the dataset are qualified with a given table name. class Qualifier < ASTTransformer # Store the dataset to use as the basis for qualification, # and the table used to qualify unqualified columns. def initialize(ds, table) @ds = ds @table = table end private # Turn SQL::Identifiers and symbols that aren't implicitly # qualified into SQL::QualifiedIdentifiers. For symbols that # are not implicitly qualified by are implicitly aliased, return an # SQL::AliasedExpressions with a qualified version of the symbol. def v(o) case o when Symbol t, column, aliaz = @ds.send(:split_symbol, o) if t o elsif aliaz SQL::AliasedExpression.new(SQL::QualifiedIdentifier.new(@table, SQL::Identifier.new(column)), aliaz) else SQL::QualifiedIdentifier.new(@table, o) end when SQL::Identifier SQL::QualifiedIdentifier.new(@table, o) when SQL::QualifiedIdentifier, SQL::JoinClause # Return these directly, so we don't accidentally qualify symbols in them. o else super end end end # +Unbinder+ is used to take a dataset filter and return a modified version # that unbinds already bound values and returns a dataset with bound value # placeholders and a hash of bind values. You can then prepare the dataset # and use the bound variables to execute it with the same values. # # This class only does a limited form of unbinding where the variable names # and values can be associated unambiguously. The only cases it handles # are SQL::ComplexExpression with an operator in +UNBIND_OPS+, a # first argument that's an instance of a member of +UNBIND_KEY_CLASSES+, and # a second argument that's an instance of a member of +UNBIND_VALUE_CLASSES+. # # So it can handle cases like: # # DB.filter(:a=>1).exclude(:b=>2).where{c > 3} # # But it cannot handle cases like: # # DB.filter(:a + 1 < 0) class Unbinder < ASTTransformer # The SQL::ComplexExpression operates that will be considered # for transformation. UNBIND_OPS = [:'=', :'!=', :<, :>, :<=, :>=] # The key classes (first argument of the ComplexExpression) that will # considered for transformation. UNBIND_KEY_CLASSES = [Symbol, SQL::Identifier, SQL::QualifiedIdentifier] # The value classes (second argument of the ComplexExpression) that # will be considered for transformation. UNBIND_VALUE_CLASSES = [Numeric, String, Date, Time] # The hash of bind variables that were extracted from the dataset filter. attr_reader :binds # Intialize an empty +binds+ hash. 
def initialize @binds = {} end private # Create a suitable bound variable key for the object, which should be # an instance of one of the +UNBIND_KEY_CLASSES+. def bind_key(obj) case obj when Symbol obj when String obj.to_sym when SQL::Identifier bind_key(obj.value) when SQL::QualifiedIdentifier :"#{bind_key(obj.table)}.#{bind_key(obj.column)}" else raise Error, "unhandled object in Sequel::Unbinder#bind_key: #{obj}" end end # Handle SQL::ComplexExpression instances with suitable ops # and arguments, substituting the value with a bound variable placeholder # and assigning it an entry in the +binds+ hash with a matching key. def v(o) if o.is_a?(SQL::ComplexExpression) && UNBIND_OPS.include?(o.op) l, r = o.args l = l.value if l.is_a?(Sequel::SQL::Wrapper) r = r.value if r.is_a?(Sequel::SQL::Wrapper) if UNBIND_KEY_CLASSES.any?{|c| l.is_a?(c)} && UNBIND_VALUE_CLASSES.any?{|c| r.is_a?(c)} && !r.is_a?(LiteralString) key = bind_key(l) if (old = binds[key]) && old != r raise UnbindDuplicate, "two different values for #{key.inspect}: #{[r, old].inspect}" end binds[key] = r SQL::ComplexExpression.new(o.op, l, :"$#{key}") else super end else super end end end end ruby-sequel-4.1.1/lib/sequel/connection_pool.rb000066400000000000000000000104601220156535500215600ustar00rootroot00000000000000# The base connection pool class, which all other connection pools are based # on. This class is not instantiated directly, but subclasses should at # the very least implement the following API: # # initialize(Database, Hash) :: Initialize using the passed Sequel::Database # object and options hash. # hold(Symbol, &block) :: Yield a connection object (obtained from calling # the block passed to +initialize+) to the current block. For sharded # connection pools, the Symbol passed is the shard/server to use. # disconnect(Symbol) :: Disconnect the connection object. For sharded # connection pools, the Symbol passed is the shard/server to use. # servers :: An array of shard/server symbols for all shards/servers that this # connection pool recognizes. # size :: an integer representing the total number of connections in the pool, # or for the given shard/server if sharding is supported. # # For sharded connection pools, the sharded API adds the following methods: # # add_servers(Array of Symbols) :: start recognizing all shards/servers specified # by the array of symbols. # remove_servers(Array of Symbols) :: no longer recognize all shards/servers # specified by the array of symbols. class Sequel::ConnectionPool OPTS = Sequel::OPTS # The default server to use DEFAULT_SERVER = :default # A map of [single threaded, sharded] values to symbols or ConnectionPool subclasses. CONNECTION_POOL_MAP = {[true, false] => :single, [true, true] => :sharded_single, [false, false] => :threaded, [false, true] => :sharded_threaded} # Class methods used to return an appropriate pool subclass, separated # into a module for easier overridding by extensions. module ClassMethods # Return a pool subclass instance based on the given options. If a :pool_class # option is provided is provided, use that pool class, otherwise # use a new instance of an appropriate pool subclass based on the # :single_threaded and :servers options. def get_pool(db, opts = OPTS) case v = connection_pool_class(opts) when Class v.new(db, opts) when Symbol require("sequel/connection_pool/#{v}") connection_pool_class(opts).new(db, opts) || raise(Sequel::Error, "No connection pool class found") end end private # Return a connection pool class based on the given options. 
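    #
    # For example (illustrative; map entries are symbols until the matching
    # pool file has been required, after which they are the pool classes):
    #
    #   connection_pool_class(:single_threaded=>true) # => :single
    #   connection_pool_class(:servers=>{})           # => :sharded_threaded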
def connection_pool_class(opts) CONNECTION_POOL_MAP[opts[:pool_class]] || opts[:pool_class] || CONNECTION_POOL_MAP[[!!opts[:single_threaded], !!opts[:servers]]] end end extend ClassMethods # The after_connect proc used for this pool. This is called with each new # connection made, and is usually used to set custom per-connection settings. attr_accessor :after_connect # The Sequel::Database object tied to this connection pool. attr_accessor :db # Instantiates a connection pool with the given options. The block is called # with a single symbol (specifying the server/shard to use) every time a new # connection is needed. The following options are respected for all connection # pools: # :after_connect :: The proc called after each new connection is made, with the # connection object, useful for customizations that you want to apply to all # connections. def initialize(db, opts=OPTS) @db = db @after_connect = opts[:after_connect] end # Alias for +size+, not aliased directly for ease of subclass implementation def created_count(*args) size(*args) end # An array of symbols for all shards/servers, which is a single :default by default. def servers [DEFAULT_SERVER] end private # Return a new connection by calling the connection proc with the given server name, # and checking for connection errors. def make_new(server) begin conn = @db.connect(server) @after_connect.call(conn) if @after_connect rescue Exception=>exception raise Sequel.convert_exception_class(exception, Sequel::DatabaseConnectionError) end raise(Sequel::DatabaseConnectionError, "Connection parameters not valid") unless conn conn end end ruby-sequel-4.1.1/lib/sequel/connection_pool/000077500000000000000000000000001220156535500212325ustar00rootroot00000000000000ruby-sequel-4.1.1/lib/sequel/connection_pool/sharded_single.rb000066400000000000000000000062621220156535500245400ustar00rootroot00000000000000# A ShardedSingleConnectionPool is a single threaded connection pool that # works with multiple shards/servers. class Sequel::ShardedSingleConnectionPool < Sequel::ConnectionPool # The single threaded pool takes the following options: # # * :servers - A hash of servers to use. Keys should be symbols. If not # present, will use a single :default server. # * :servers_hash - The base hash to use for the servers. By default, # Sequel uses Hash.new(:default). You can use a hash with a default proc # that raises an error if you want to catch all cases where a nonexistent # server is used. def initialize(db, opts=OPTS) super @conns = {} @servers = opts.fetch(:servers_hash, Hash.new(:default)) add_servers([:default]) add_servers(opts[:servers].keys) if opts[:servers] end # Adds new servers to the connection pool. Primarily used in conjunction with master/slave # or shard configurations. Allows for dynamic expansion of the potential slaves/shards # at runtime. servers argument should be an array of symbols. def add_servers(servers) servers.each{|s| @servers[s] = s} end # Yield all of the currently established connections def all_connections @conns.values.each{|c| yield c} end # The connection for the given server. def conn(server=:default) @conns[@servers[server]] end # Disconnects from the database. Once a connection is requested using # #hold, the connection is reestablished. Options: # * :server - Should be a symbol specifing the server to disconnect from, # or an array of symbols to specify multiple servers. def disconnect(opts=OPTS) (opts[:server] ? 
Array(opts[:server]) : servers).each{|s| disconnect_server(s)} end # Yields the connection to the supplied block for the given server. # This method simulates the ConnectionPool#hold API. def hold(server=:default) begin server = pick_server(server) yield(@conns[server] ||= make_new(server)) rescue Sequel::DatabaseDisconnectError disconnect_server(server) raise end end # Remove servers from the connection pool. Primarily used in conjunction with master/slave # or shard configurations. Similar to disconnecting from all given servers, # except that after it is used, future requests for the server will use the # :default server instead. def remove_servers(servers) raise(Sequel::Error, "cannot remove default server") if servers.include?(:default) servers.each do |server| disconnect_server(server) @servers.delete(server) end end # Return an array of symbols for servers in the connection pool. def servers @servers.keys end # The number of different shards/servers this pool is connected to. def size @conns.length end def pool_type :sharded_single end private # Disconnect from the given server, if connected. def disconnect_server(server) if conn = @conns.delete(server) db.disconnect_connection(conn) end end # If the server given is in the hash, return it, otherwise, return the default server. def pick_server(server) @servers[server] end CONNECTION_POOL_MAP[[true, true]] = self end ruby-sequel-4.1.1/lib/sequel/connection_pool/sharded_threaded.rb000066400000000000000000000227411220156535500250370ustar00rootroot00000000000000Sequel.require 'connection_pool/threaded' # The slowest and most advanced connection, dealing with both multi-threaded # access and configurations with multiple shards/servers. # # In addition, this pool subclass also handles scheduling in-use connections # to be removed from the pool when they are returned to it. class Sequel::ShardedThreadedConnectionPool < Sequel::ThreadedConnectionPool # The following additional options are respected: # * :servers - A hash of servers to use. Keys should be symbols. If not # present, will use a single :default server. # * :servers_hash - The base hash to use for the servers. By default, # Sequel uses Hash.new(:default). You can use a hash with a default proc # that raises an error if you want to catch all cases where a nonexistent # server is used. def initialize(db, opts = OPTS) super @available_connections = {} @connections_to_remove = [] @servers = opts.fetch(:servers_hash, Hash.new(:default)) add_servers([:default]) add_servers(opts[:servers].keys) if opts[:servers] end # Adds new servers to the connection pool. Primarily used in conjunction with master/slave # or shard configurations. Allows for dynamic expansion of the potential slaves/shards # at runtime. servers argument should be an array of symbols. def add_servers(servers) sync do servers.each do |server| unless @servers.has_key?(server) @servers[server] = server @available_connections[server] = [] @allocated[server] = {} end end end end # A hash of connections currently being used for the given server, key is the # Thread, value is the connection. Nonexistent servers will return nil. Treat # this as read only, do not modify the resulting object. def allocated(server=:default) @allocated[server] end # Yield all of the available connections, and the ones currently allocated to # this thread. This will not yield connections currently allocated to other # threads, as it is not safe to operate on them. 
This holds the mutex while # it is yielding all of the connections, which means that until # the method's block returns, the pool is locked. def all_connections t = Thread.current sync do @allocated.values.each do |threads| threads.each do |thread, conn| yield conn if t == thread end end @available_connections.values.each{|v| v.each{|c| yield c}} end end # An array of connections opened but not currently used, for the given # server. Nonexistent servers will return nil. Treat this as read only, do # not modify the resulting object. def available_connections(server=:default) @available_connections[server] end # The total number of connections opened for the given server, should # be equal to available_connections.length + allocated.length. Nonexistent # servers will return the created count of the default server. def size(server=:default) server = @servers[server] @allocated[server].length + @available_connections[server].length end # Removes all connections currently available on all servers, optionally # yielding each connection to the given block. This method has the effect of # disconnecting from the database, assuming that no connections are currently # being used. If connections are being used, they are scheduled to be # disconnected as soon as they are returned to the pool. # # Once a connection is requested using #hold, the connection pool # creates new connections to the database. Options: # * :server - Should be a symbol specifing the server to disconnect from, # or an array of symbols to specify multiple servers. def disconnect(opts=OPTS) sync do (opts[:server] ? Array(opts[:server]) : @servers.keys).each do |s| disconnect_server(s) end end end # Chooses the first available connection to the given server, or if none are # available, creates a new connection. Passes the connection to the supplied # block: # # pool.hold {|conn| conn.execute('DROP TABLE posts')} # # Pool#hold is re-entrant, meaning it can be called recursively in # the same thread without blocking. # # If no connection is immediately available and the pool is already using the maximum # number of connections, Pool#hold will block until a connection # is available or the timeout expires. If the timeout expires before a # connection can be acquired, a Sequel::PoolTimeout is # raised. def hold(server=:default) server = pick_server(server) t = Thread.current if conn = owned_connection(t, server) return yield(conn) end begin unless conn = acquire(t, server) time = Time.now timeout = time + @timeout sleep_time = @sleep_time sleep sleep_time until conn = acquire(t, server) raise(::Sequel::PoolTimeout) if Time.now > timeout sleep sleep_time end end yield conn rescue Sequel::DatabaseDisconnectError sync{@connections_to_remove << conn} if conn raise ensure sync{release(t, conn, server)} if conn end end # Remove servers from the connection pool. Primarily used in conjunction with master/slave # or shard configurations. Similar to disconnecting from all given servers, # except that after it is used, future requests for the server will use the # :default server instead. def remove_servers(servers) sync do raise(Sequel::Error, "cannot remove default server") if servers.include?(:default) servers.each do |server| if @servers.include?(server) disconnect_server(server) @available_connections.delete(server) @allocated.delete(server) @servers.delete(server) end end end end # Return an array of symbols for servers in the connection pool. 
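  #
  # For example (a sketch, assuming a database configured with one extra
  # shard; the connection URL and shard options are illustrative):
  #
  #   DB = Sequel.connect('postgres://localhost/app', :servers=>{:shard1=>{}})
  #   DB.pool.servers # => [:default, :shard1]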
def servers sync{@servers.keys} end def pool_type :sharded_threaded end private # Assigns a connection to the supplied thread for the given server, if one # is available. The calling code should NOT already have the mutex when # calling this. def acquire(thread, server) sync do if conn = available(server) allocated(server)[thread] = conn end end end # Returns an available connection to the given server. If no connection is # available, tries to create a new connection. The calling code should already # have the mutex before calling this. def available(server) next_available(server) || make_new(server) end # Return a connection to the pool of available connections for the server, # returns the connection. The calling code should already have the mutex # before calling this. def checkin_connection(server, conn) available_connections(server) << conn conn end # Disconnect from the given server. Disconnects available connections # immediately, and schedules currently allocated connections for disconnection # as soon as they are returned to the pool. The calling code should already # have the mutex before calling this. def disconnect_server(server) if conns = available_connections(server) conns.each{|conn| db.disconnect_connection(conn)} conns.clear end @connections_to_remove.concat(allocated(server).values) end # Creates a new connection to the given server if the size of the pool for # the server is less than the maximum size of the pool. The calling code # should already have the mutex before calling this. def make_new(server) if (n = size(server)) >= @max_size allocated(server).to_a.each{|t, c| release(t, c, server) unless t.alive?} n = nil end default_make_new(server) if (n || size(server)) < @max_size end # Return the next available connection in the pool for the given server, or nil # if there is not currently an available connection for the server. # The calling code should already have the mutex before calling this. def next_available(server) case @connection_handling when :stack available_connections(server).pop else available_connections(server).shift end end # Returns the connection owned by the supplied thread for the given server, # if any. The calling code should NOT already have the mutex before calling this. def owned_connection(thread, server) sync{@allocated[server][thread]} end # If the server given is in the hash, return it, otherwise, return the default server. def pick_server(server) sync{@servers[server]} end # Releases the connection assigned to the supplied thread and server. If the # server or connection given is scheduled for disconnection, remove the # connection instead of releasing it back to the pool. # The calling code should already have the mutex before calling this. def release(thread, conn, server) if @connections_to_remove.include?(conn) remove(thread, conn, server) else conn = allocated(server).delete(thread) if @connection_handling == :disconnect db.disconnect_connection(conn) else checkin_connection(server, conn) end end end # Removes the currently allocated connection from the connection pool. The # calling code should already have the mutex before calling this. 
def remove(thread, conn, server) @connections_to_remove.delete(conn) allocated(server).delete(thread) if @servers.include?(server) db.disconnect_connection(conn) end CONNECTION_POOL_MAP[[false, true]] = self end ruby-sequel-4.1.1/lib/sequel/connection_pool/single.rb000066400000000000000000000016121220156535500230400ustar00rootroot00000000000000# This is the fastest connection pool, since it isn't a connection pool at all. # It is just a wrapper around a single connection that uses the connection pool # API. class Sequel::SingleConnectionPool < Sequel::ConnectionPool # The SingleConnectionPool always has a size of 1 if connected # and 0 if not. def size @conn ? 1 : 0 end # Yield the connection if one has been made. def all_connections yield @conn if @conn end # Disconnect the connection from the database. def disconnect(opts=nil) return unless @conn db.disconnect_connection(@conn) @conn = nil end # Yield the connection to the block. def hold(server=nil) begin yield(@conn ||= make_new(DEFAULT_SERVER)) rescue Sequel::DatabaseDisconnectError disconnect raise end end def pool_type :single end CONNECTION_POOL_MAP[[true, false]] = self end ruby-sequel-4.1.1/lib/sequel/connection_pool/threaded.rb000066400000000000000000000152641220156535500233470ustar00rootroot00000000000000# A connection pool allowing multi-threaded access to a pool of connections. # This is the default connection pool used by Sequel. class Sequel::ThreadedConnectionPool < Sequel::ConnectionPool # The maximum number of connections this pool will create (per shard/server # if sharding). attr_reader :max_size # An array of connections that are available for use by the pool. attr_reader :available_connections # A hash with thread keys and connection values for currently allocated # connections. attr_reader :allocated # The following additional options are respected: # * :connection_handling - Set how to handle available connections. By default, # uses a a queue for fairness. Can be set to :stack to use a stack, which may # offer better performance. # * :max_connections - The maximum number of connections the connection pool # will open (default 4) # * :pool_sleep_time - The amount of time to sleep before attempting to acquire # a connection again (default 0.001) # * :pool_timeout - The amount of seconds to wait to acquire a connection # before raising a PoolTimeoutError (default 5) def initialize(db, opts = OPTS) super @max_size = Integer(opts[:max_connections] || 4) raise(Sequel::Error, ':max_connections must be positive') if @max_size < 1 @mutex = Mutex.new @connection_handling = opts[:connection_handling] @available_connections = [] @allocated = {} @timeout = Float(opts[:pool_timeout] || 5) @sleep_time = Float(opts[:pool_sleep_time] || 0.001) end # The total number of connections opened, either available or allocated. # This may not be completely accurate as it isn't protected by the mutex. def size @allocated.length + @available_connections.length end # Yield all of the available connections, and the one currently allocated to # this thread. This will not yield connections currently allocated to other # threads, as it is not safe to operate on them. This holds the mutex while # it is yielding all of the available connections, which means that until # the method's block returns, the pool is locked. def all_connections hold do |c| sync do yield c @available_connections.each{|conn| yield conn} end end end # Removes all connections currently available, optionally # yielding each connection to the given block. 
This method has the effect of # disconnecting from the database, assuming that no connections are currently # being used. If you want to be able to disconnect connections that are # currently in use, use the ShardedThreadedConnectionPool, which can do that. # This connection pool does not, for performance reasons. To use the sharded pool, # pass the :servers=>{} option when connecting to the database. # # Once a connection is requested using #hold, the connection pool # creates new connections to the database. def disconnect(opts=OPTS) sync do @available_connections.each{|conn| db.disconnect_connection(conn)} @available_connections.clear end end # Chooses the first available connection, or if none are # available, creates a new connection. Passes the connection to the supplied # block: # # pool.hold {|conn| conn.execute('DROP TABLE posts')} # # Pool#hold is re-entrant, meaning it can be called recursively in # the same thread without blocking. # # If no connection is immediately available and the pool is already using the maximum # number of connections, Pool#hold will block until a connection # is available or the timeout expires. If the timeout expires before a # connection can be acquired, a Sequel::PoolTimeout is # raised. def hold(server=nil) t = Thread.current if conn = owned_connection(t) return yield(conn) end begin unless conn = acquire(t) time = Time.now timeout = time + @timeout sleep_time = @sleep_time sleep sleep_time until conn = acquire(t) raise(::Sequel::PoolTimeout) if Time.now > timeout sleep sleep_time end end yield conn rescue Sequel::DatabaseDisconnectError oconn = conn conn = nil db.disconnect_connection(oconn) if oconn @allocated.delete(t) raise ensure sync{release(t)} if conn end end def pool_type :threaded end private # Assigns a connection to the supplied thread, if one # is available. The calling code should NOT already have the mutex when # calling this. def acquire(thread) sync do if conn = available @allocated[thread] = conn end end end # Returns an available connection. If no connection is # available, tries to create a new connection. The calling code should already # have the mutex before calling this. def available next_available || make_new(DEFAULT_SERVER) end # Return a connection to the pool of available connections, returns the connection. # The calling code should already have the mutex before calling this. def checkin_connection(conn) @available_connections << conn conn end # Alias the default make_new method, so subclasses can call it directly. alias default_make_new make_new # Creates a new connection to the given server if the size of the pool for # the server is less than the maximum size of the pool. The calling code # should already have the mutex before calling this. def make_new(server) if (n = size) >= @max_size @allocated.keys.each{|t| release(t) unless t.alive?} n = nil end super if (n || size) < @max_size end # Return the next available connection in the pool, or nil if there # is not currently an available connection. The calling code should already # have the mutex before calling this. def next_available case @connection_handling when :stack @available_connections.pop else @available_connections.shift end end # Returns the connection owned by the supplied thread, # if any. The calling code should NOT already have the mutex before calling this. def owned_connection(thread) sync{@allocated[thread]} end # Releases the connection assigned to the supplied thread back to the pool. 
# The calling code should already have the mutex before calling this. def release(thread) conn = @allocated.delete(thread) if @connection_handling == :disconnect db.disconnect_connection(conn) else checkin_connection(conn) end end # Yield to the block while inside the mutex. The calling code should NOT # already have the mutex before calling this. def sync @mutex.synchronize{yield} end CONNECTION_POOL_MAP[[false, false]] = self end ruby-sequel-4.1.1/lib/sequel/core.rb000066400000000000000000000334301220156535500173220ustar00rootroot00000000000000%w'bigdecimal date thread time uri'.each{|f| require f} # Top level module for Sequel # # There are some module methods that are added via metaprogramming, one for # each supported adapter. For example: # # DB = Sequel.sqlite # Memory database # DB = Sequel.sqlite('blog.db') # DB = Sequel.postgres('database_name', :user=>'user', # :password=>'password', :host=>'host', :port=>5432, # :max_connections=>10) # # If a block is given to these methods, it is passed the opened Database # object, which is closed (disconnected) when the block exits, just # like a block passed to connect. For example: # # Sequel.sqlite('blog.db'){|db| puts db[:users].count} # # For a more expanded introduction, see the {README}[link:files/README_rdoc.html]. # For a quicker introduction, see the {cheat sheet}[link:files/doc/cheat_sheet_rdoc.html]. module Sequel @convert_two_digit_years = true @datetime_class = Time # Whether Sequel is being run in single threaded mode @single_threaded = false class << self # Sequel converts two digit years in Dates and DateTimes by default, # so 01/02/03 is interpreted at January 2nd, 2003, and 12/13/99 is interpreted # as December 13, 1999. You can override this to treat those dates as # January 2nd, 0003 and December 13, 0099, respectively, by: # # Sequel.convert_two_digit_years = false attr_accessor :convert_two_digit_years # Sequel can use either +Time+ or +DateTime+ for times returned from the # database. It defaults to +Time+. To change it to +DateTime+: # # Sequel.datetime_class = DateTime # # For ruby versions less than 1.9.2, +Time+ has a limited range (1901 to # 2038), so if you use datetimes out of that range, you need to switch # to +DateTime+. Also, before 1.9.2, +Time+ can only handle local and UTC # times, not other timezones. Note that +Time+ and +DateTime+ objects # have a different API, and in cases where they implement the same methods, # they often implement them differently (e.g. + using seconds on +Time+ and # days on +DateTime+). attr_accessor :datetime_class end # Returns true if the passed object could be a specifier of conditions, false otherwise. # Currently, Sequel considers hashes and arrays of two element arrays as # condition specifiers. # # Sequel.condition_specifier?({}) # => true # Sequel.condition_specifier?([[1, 2]]) # => true # Sequel.condition_specifier?([]) # => false # Sequel.condition_specifier?([1]) # => false # Sequel.condition_specifier?(1) # => false def self.condition_specifier?(obj) case obj when Hash true when Array !obj.empty? && !obj.is_a?(SQL::ValueList) && obj.all?{|i| i.is_a?(Array) && (i.length == 2)} else false end end # Frozen hash used as the default options hash for most options. OPTS = {}.freeze # Creates a new database object based on the supplied connection string # and optional arguments. The specified scheme determines the database # class used, and the rest of the string specifies the connection options. 
# For example: # # DB = Sequel.connect('sqlite:/') # Memory database # DB = Sequel.connect('sqlite://blog.db') # ./blog.db # DB = Sequel.connect('sqlite:///blog.db') # /blog.db # DB = Sequel.connect('postgres://user:password@host:port/database_name') # DB = Sequel.connect('sqlite:///blog.db', :max_connections=>10) # # If a block is given, it is passed the opened +Database+ object, which is # closed when the block exits. For example: # # Sequel.connect('sqlite://blog.db'){|db| puts db[:users].count} # # For details, see the {"Connecting to a Database" guide}[link:files/doc/opening_databases_rdoc.html]. # To set up a master/slave or sharded database connection, see the {"Master/Slave Databases and Sharding" guide}[link:files/doc/sharding_rdoc.html]. def self.connect(*args, &block) Database.connect(*args, &block) end # Assume the core extensions are not loaded by default, if the core_extensions # extension is loaded, this will be overridden. def self.core_extensions? false end # Convert the +exception+ to the given class. The given class should be # Sequel::Error or a subclass. Returns an instance of +klass+ with # the message and backtrace of +exception+. def self.convert_exception_class(exception, klass) return exception if exception.is_a?(klass) e = klass.new("#{exception.class}: #{exception.message}") e.wrapped_exception = exception e.set_backtrace(exception.backtrace) e end # Load all Sequel extensions given. Extensions are just files that exist under # sequel/extensions in the load path, and are just required. Generally, # extensions modify the behavior of +Database+ and/or +Dataset+, but Sequel ships # with some extensions that modify other classes that exist for backwards compatibility. # In some cases, requiring an extension modifies classes directly, and in others, # it just loads a module that you can extend other classes with. Consult the documentation # for each extension you plan on using for usage. # # Sequel.extension(:schema_dumper) # Sequel.extension(:pagination, :query) def self.extension(*extensions) extensions.each{|e| Kernel.require "sequel/extensions/#{e}"} end # Set the method to call on identifiers going into the database. This affects # the literalization of identifiers by calling this method on them before they are input. # Sequel upcases identifiers in all SQL strings for most databases, so to turn that off: # # Sequel.identifier_input_method = nil # # to downcase instead: # # Sequel.identifier_input_method = :downcase # # Other String instance methods work as well. def self.identifier_input_method=(value) Database.identifier_input_method = value end # Set the method to call on identifiers coming out of the database. This affects # the literalization of identifiers by calling this method on them when they are # retrieved from the database. Sequel downcases identifiers retrieved for most # databases, so to turn that off: # # Sequel.identifier_output_method = nil # # to upcase instead: # # Sequel.identifier_output_method = :upcase # # Other String instance methods work as well. def self.identifier_output_method=(value) Database.identifier_output_method = value end # The exception classed raised if there is an error parsing JSON. # This can be overridden to use an alternative json implementation. def self.json_parser_error_class JSON::ParserError end # Convert given object to json and return the result. # This can be overridden to use an alternative json implementation. 
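# # A hedged example: the exact output assumes the stdlib json library is # loaded, since this simply delegates to the object's #to_json method: # # Sequel.object_to_json([1, 2, 3]) # => "[1,2,3]"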
def self.object_to_json(obj, *args) obj.to_json(*args) end # Parse the string as JSON and return the result. # This can be overridden to use an alternative json implementation. def self.parse_json(json) JSON.parse(json, :create_additions=>false) end # Set whether to quote identifiers for all databases by default. By default, # Sequel quotes identifiers in all SQL strings, so to turn that off: # # Sequel.quote_identifiers = false def self.quote_identifiers=(value) Database.quote_identifiers = value end # Convert each item in the array to the correct type, handling multi-dimensional # arrays. For each element in the array or subarrays, call the converter, # unless the value is nil. def self.recursive_map(array, converter) array.map do |i| if i.is_a?(Array) recursive_map(i, converter) elsif i converter.call(i) end end end # Require all given +files+ which should be in the same or a subdirectory of # this file. If a +subdir+ is given, assume all +files+ are in that subdir. # This is used to ensure that the files loaded are from the same version of # Sequel as this file. def self.require(files, subdir=nil) Array(files).each{|f| super("#{File.dirname(__FILE__).untaint}/#{"#{subdir}/" if subdir}#{f}")} end # Set whether Sequel is being used in single threaded mode. By default, # Sequel uses a thread-safe connection pool, which isn't as fast as the # single threaded connection pool, and also has some additional thread # safety checks. If your program will only have one thread, # and speed is a priority, you should set this to true: # # Sequel.single_threaded = true def self.single_threaded=(value) @single_threaded = value Database.single_threaded = value end COLUMN_REF_RE1 = /\A((?:(?!__).)+)__((?:(?!___).)+)___(.+)\z/.freeze COLUMN_REF_RE2 = /\A((?:(?!___).)+)___(.+)\z/.freeze COLUMN_REF_RE3 = /\A((?:(?!__).)+)__(.+)\z/.freeze # Splits the symbol into three parts. Each part will # either be a string or nil. # # For columns, these parts are the table, column, and alias. # For tables, these parts are the schema, table, and alias. def self.split_symbol(sym) case s = sym.to_s when COLUMN_REF_RE1 [$1, $2, $3] when COLUMN_REF_RE2 [nil, $1, $2] when COLUMN_REF_RE3 [$1, $2, nil] else [nil, s, nil] end end # Converts the given +string+ into a +Date+ object. # # Sequel.string_to_date('2010-09-10') # Date.civil(2010, 09, 10) def self.string_to_date(string) begin Date.parse(string, Sequel.convert_two_digit_years) rescue => e raise convert_exception_class(e, InvalidValue) end end # Converts the given +string+ into a +Time+ or +DateTime+ object, depending on the # value of Sequel.datetime_class. # # Sequel.string_to_datetime('2010-09-10 10:20:30') # Time.local(2010, 09, 10, 10, 20, 30) def self.string_to_datetime(string) begin if datetime_class == DateTime DateTime.parse(string, convert_two_digit_years) else datetime_class.parse(string) end rescue => e raise convert_exception_class(e, InvalidValue) end end # Converts the given +string+ into a Sequel::SQLTime object. # # v = Sequel.string_to_time('10:20:30') # Sequel::SQLTime.parse('10:20:30') # DB.literal(v) # => '10:20:30' def self.string_to_time(string) begin SQLTime.parse(string) rescue => e raise convert_exception_class(e, InvalidValue) end end if defined?(RUBY_ENGINE) && RUBY_ENGINE != 'ruby' # :nocov: # Mutex used to protect mutable data structures @data_mutex = Mutex.new # Unless in single threaded mode, protects access to any mutable # global data structure in Sequel. # Uses a non-reentrant mutex, so calling code should be careful. 
def self.synchronize(&block) @single_threaded ? yield : @data_mutex.synchronize(&block) end # :nocov: else # Yield directly to the block. You don't need to synchronize # access on MRI because the GVL makes certain methods atomic. def self.synchronize yield end end # Uses a transaction on all given databases with the given options. This: # # Sequel.transaction([DB1, DB2, DB3]){...} # # is equivalent to: # # DB1.transaction do # DB2.transaction do # DB3.transaction do # ... # end # end # end # # except that if Sequel::Rollback is raised by the block, the transaction is # rolled back on all databases instead of just the last one. # # Note that this method cannot guarantee that all databases will commit or # rollback. For example, if DB3 commits but attempting to commit on DB2 # fails (maybe because foreign key checks are deferred), there is no way # to uncommit the changes on DB3. For that kind of support, you need to # have two-phase commit/prepared transactions (which Sequel supports on # some databases). def self.transaction(dbs, opts=OPTS, &block) unless opts[:rollback] rescue_rollback = true opts = opts.merge(:rollback=>:reraise) end pr = dbs.reverse.inject(block){|bl, db| proc{db.transaction(opts, &bl)}} if rescue_rollback begin pr.call rescue Sequel::Rollback nil end else pr.call end end # If the supplied block takes a single argument, # yield an SQL::VirtualRow instance to the block # argument. Otherwise, evaluate the block in the context of a # SQL::VirtualRow instance. # # Sequel.virtual_row{a} # Sequel::SQL::Identifier.new(:a) # Sequel.virtual_row{|o| o.a{}} # Sequel::SQL::Function.new(:a) def self.virtual_row(&block) vr = VIRTUAL_ROW case block.arity when -1, 0 vr.instance_exec(&block) else block.call(vr) end end ### Private Class Methods ### # Helper method that the database adapter class methods that are added to Sequel via # metaprogramming use to parse arguments. def self.adapter_method(adapter, *args, &block) options = args.last.is_a?(Hash) ? args.pop : {} opts = {:adapter => adapter.to_sym} opts[:database] = args.shift if args.first.is_a?(String) if args.any? raise ::Sequel::Error, "Wrong format of arguments, either use (), (String), (Hash), or (String, Hash)" end connect(opts.merge(options), &block) end # Method that adds a database adapter class method to Sequel that calls # Sequel.adapter_method. # # Do not call this method with untrusted input, as that can result in # arbitrary code execution. def self.def_adapter_method(*adapters) # :nodoc: adapters.each do |adapter| instance_eval("def #{adapter}(*args, &block); adapter_method('#{adapter}', *args, &block) end", __FILE__, __LINE__) end end private_class_method :adapter_method, :def_adapter_method require(%w"deprecated sql connection_pool exceptions dataset database timezones ast_transformer version") # Add the database adapter class methods to Sequel via metaprogramming def_adapter_method(*Database::ADAPTERS) end ruby-sequel-4.1.1/lib/sequel/database.rb000066400000000000000000000014641220156535500201400ustar00rootroot00000000000000module Sequel # Hash of adapters that have been used. The key is the adapter scheme # symbol, and the value is the Database subclass. ADAPTER_MAP = {} # Array of all databases to which Sequel has connected. If you are # developing an application that can connect to an arbitrary number of # databases, delete the database objects from this or they will not get # garbage collected. DATABASES = [] # A Database object represents a virtual connection to a database. 
# The Database class is meant to be subclassed by database adapters in order # to provide the functionality needed for executing queries. class Database OPTS = Sequel::OPTS end require(%w"connecting dataset dataset_defaults logging features misc query transactions schema_generator schema_methods", 'database') end ruby-sequel-4.1.1/lib/sequel/database/000077500000000000000000000000001220156535500176065ustar00rootroot00000000000000ruby-sequel-4.1.1/lib/sequel/database/connecting.rb000066400000000000000000000250101220156535500222600ustar00rootroot00000000000000module Sequel class Database # --------------------- # :section: 4 - Methods relating to adapters, connecting, disconnecting, and sharding # This methods involve the Database's connection pool. # --------------------- # Array of supported database adapters ADAPTERS = %w'ado amalgalite cubrid db2 dbi do firebird ibmdb informix jdbc mock mysql mysql2 odbc openbase oracle postgres sqlite swift tinytds'.collect{|x| x.to_sym} @single_threaded = false class << self # Whether to use the single threaded connection pool by default attr_accessor :single_threaded end # The Database subclass for the given adapter scheme. # Raises Sequel::AdapterNotFound if the adapter # could not be loaded. def self.adapter_class(scheme) return scheme if scheme.is_a?(Class) scheme = scheme.to_s.gsub('-', '_').to_sym unless klass = ADAPTER_MAP[scheme] # attempt to load the adapter file begin require "sequel/adapters/#{scheme}" rescue LoadError => e raise Sequel.convert_exception_class(e, AdapterNotFound) end # make sure we actually loaded the adapter unless klass = ADAPTER_MAP[scheme] raise AdapterNotFound, "Could not load #{scheme} adapter: adapter class not registered in ADAPTER_MAP" end end klass end # Returns the scheme symbol for the Database class. def self.adapter_scheme @scheme end # Connects to a database. See Sequel.connect. def self.connect(conn_string, opts = OPTS) case conn_string when String if match = /\A(jdbc|do):/o.match(conn_string) c = adapter_class(match[1].to_sym) opts = opts.merge(:orig_opts=>opts.dup) opts = {:uri=>conn_string}.merge(opts) else uri = URI.parse(conn_string) scheme = uri.scheme scheme = :dbi if scheme =~ /\Adbi-/ c = adapter_class(scheme) uri_options = c.send(:uri_to_options, uri) uri.query.split('&').collect{|s| s.split('=')}.each{|k,v| uri_options[k.to_sym] = v if k && !k.empty?} unless uri.query.to_s.strip.empty? uri_options.to_a.each{|k,v| uri_options[k] = (defined?(URI::DEFAULT_PARSER) ? URI::DEFAULT_PARSER : URI).unescape(v) if v.is_a?(String)} opts = uri_options.merge(opts).merge!(:orig_opts=>opts.dup, :uri=>conn_string, :adapter=>scheme) end when Hash opts = conn_string.merge(opts) opts = opts.merge(:orig_opts=>opts.dup) c = adapter_class(opts[:adapter_class] || opts[:adapter] || opts['adapter']) else raise Error, "Sequel::Database.connect takes either a Hash or a String, given: #{conn_string.inspect}" end # process opts a bit opts = opts.inject({}) do |m, (k,v)| k = :user if k.to_s == 'username' m[k.to_sym] = v m end begin db = c.new(opts) db.test_connection if opts[:test] && db.send(:typecast_value_boolean, opts[:test]) if block_given? return yield(db) end ensure if block_given? db.disconnect if db Sequel.synchronize{::Sequel::DATABASES.delete(db)} end end db end # Sets the adapter scheme for the Database class. Call this method in # descendants of Database to allow connection using a URL. For example the # following: # # class Sequel::MyDB::Database < Sequel::Database # set_adapter_scheme :mydb # ... 
# end # # would allow connection using: # # Sequel.connect('mydb://user:password@dbserver/mydb') def self.set_adapter_scheme(scheme) # :nodoc: @scheme = scheme ADAPTER_MAP[scheme] = self end private_class_method :set_adapter_scheme # The connection pool for this Database instance. All Database instances have # their own connection pools. attr_reader :pool # Returns the scheme symbol for this instance's class, which reflects which # adapter is being used. In some cases, this can be the same as the # +database_type+ (for native adapters), in others (e.g. adapters with # subadapters), it will be different. # # Sequel.connect('jdbc:postgres://...').adapter_scheme # => :jdbc def adapter_scheme self.class.adapter_scheme end # Dynamically add new servers or modify server options at runtime. Also adds new # servers to the connection pool. Intended for use with master/slave or shard # configurations where it is useful to add new server hosts at runtime. # # The servers argument should be a hash with server name symbol keys and hash or # proc values. If a server key is already in use, its value is overridden # with the value provided. # # DB.add_servers(:f=>{:host=>"hash_host_f"}) def add_servers(servers) if h = @opts[:servers] Sequel.synchronize{h.merge!(servers)} @pool.add_servers(servers.keys) end end # The database type for this database object, the same as the adapter scheme # by default. Should be overridden in adapters (especially shared adapters) # to be the correct type, so that even if two separate Database objects are # using different adapters you can tell that they are using the same database # type. Even better, you can tell that two Database objects that are using # the same adapter are connecting to different database types (think JDBC or # DataObjects). # # Sequel.connect('jdbc:postgres://...').database_type # => :postgres def database_type adapter_scheme end # Disconnects all available connections from the connection pool. Any # connections currently in use will not be disconnected. Options: # :servers :: Should be a symbol specifying the server to disconnect from, # or an array of symbols to specify multiple servers. # # Example: # # DB.disconnect # All servers # DB.disconnect(:servers=>:server1) # Single server # DB.disconnect(:servers=>[:server1, :server2]) # Multiple servers def disconnect(opts = OPTS) pool.disconnect(opts) end # Should only be called by the connection pool code to disconnect a connection. # By default, calls the close method on the connection object, since most # adapters use that, but should be overridden in other adapters. def disconnect_connection(conn) conn.close end # Yield a new Database instance for every server in the connection pool. # Intended for use in sharded environments where there is a need to make schema # modifications (DDL queries) on each shard. # # DB.each_server{|db| db.create_table(:users){primary_key :id; String :name}} def each_server(&block) raise(Error, "Database#each_server must be passed a block") unless block servers.each{|s| self.class.connect(server_opts(s), &block)} end # Dynamically remove existing servers from the connection pool. Intended for # use with master/slave or shard configurations where it is useful to remove # existing server hosts at runtime. # # servers should be symbols or arrays of symbols. If a nonexistent server # is specified, it is ignored. If no servers have been specified for # this database, no changes are made. If you attempt to remove the :default server, # an error will be raised. 
# # DB.remove_servers(:f1, :f2) def remove_servers(*servers) if h = @opts[:servers] servers.flatten.each{|s| Sequel.synchronize{h.delete(s)}} @pool.remove_servers(servers) end end # An array of servers/shards for this Database object. # # DB.servers # Unsharded: => [:default] # DB.servers # Sharded: => [:default, :server1, :server2] def servers pool.servers end # Returns true if the database is using a single-threaded connection pool. def single_threaded? @single_threaded end if !defined?(RUBY_ENGINE) || RUBY_ENGINE == 'ruby' # Acquires a database connection, yielding it to the passed block. This is # useful if you want to make sure the same connection is used for all # database queries in the block. It is also useful if you want to gain # direct access to the underlying connection object if you need to do # something Sequel does not natively support. # # If a server option is given, acquires a connection for that specific # server, instead of the :default server. # # DB.synchronize do |conn| # ... # end def synchronize(server=nil) @pool.hold(server || :default){|conn| yield conn} end else # :nocov: def synchronize(server=nil, &block) @pool.hold(server || :default, &block) end # :nocov: end # Attempts to acquire a database connection. Returns true if successful. # Will probably raise an Error if unsuccessful. If a server argument # is given, attempts to acquire a database connection to the given # server/shard. def test_connection(server=nil) synchronize(server){|conn|} true end # Check whether the given connection is currently valid, by # running a query against it. If the query fails, the # connection should probably be removed from the connection # pool. def valid_connection?(conn) sql = valid_connection_sql begin log_connection_execute(conn, sql) rescue Sequel::DatabaseError, *database_error_classes false else true end end private # The default options for the connection pool. def connection_pool_default_options {} end # Return the options for the given server by merging the generic # options for all server with the specific options for the given # server specified in the :servers option. def server_opts(server) opts = if @opts[:servers] and server_options = @opts[:servers][server] case server_options when Hash @opts.merge(server_options) when Proc @opts.merge(server_options.call(self)) else raise Error, 'Server opts should be a hash or proc' end elsif server.is_a?(Hash) @opts.merge(server) else @opts.dup end opts.delete(:servers) opts end # The SQL query to issue to check if a connection is valid. def valid_connection_sql @valid_connection_sql ||= select(nil).sql end end end ruby-sequel-4.1.1/lib/sequel/database/dataset.rb000066400000000000000000000043101220156535500215560ustar00rootroot00000000000000module Sequel class Database # --------------------- # :section: 3 - Methods that create datasets # These methods all return instances of this database's dataset class. # --------------------- # Returns a dataset for the database. If the first argument is a string, # the method acts as an alias for Database#fetch, returning a dataset for # arbitrary SQL, with or without placeholders: # # DB['SELECT * FROM items'].all # DB['SELECT * FROM items WHERE name = ?', my_name].all # # Otherwise, acts as an alias for Database#from, setting the primary # table for the dataset: # # DB[:items].sql #=> "SELECT * FROM items" def [](*args) args.first.is_a?(String) ? fetch(*args) : from(*args) end # Returns a blank dataset for this database. 
# # DB.dataset # SELECT * # DB.dataset.from(:items) # SELECT * FROM items def dataset @dataset_class.new(self) end # Fetches records for an arbitrary SQL statement. If a block is given, # it is used to iterate over the records: # # DB.fetch('SELECT * FROM items'){|r| p r} # # The +fetch+ method returns a dataset instance: # # DB.fetch('SELECT * FROM items').all # # +fetch+ can also perform parameterized queries for protection against SQL # injection: # # DB.fetch('SELECT * FROM items WHERE name = ?', my_name).all def fetch(sql, *args, &block) ds = @default_dataset.with_sql(sql, *args) ds.each(&block) if block ds end # Returns a new dataset with the +from+ method invoked. If a block is given, # it is used as a filter on the dataset. # # DB.from(:items) # SELECT * FROM items # DB.from(:items){id > 2} # SELECT * FROM items WHERE (id > 2) def from(*args, &block) ds = @default_dataset.from(*args) block ? ds.filter(&block) : ds end # Returns a new dataset with the select method invoked. # # DB.select(1) # SELECT 1 # DB.select{server_version{}} # SELECT server_version() # DB.select(:id).from(:items) # SELECT id FROM items def select(*args, &block) @default_dataset.select(*args, &block) end end end ruby-sequel-4.1.1/lib/sequel/database/dataset_defaults.rb000066400000000000000000000142071220156535500234530ustar00rootroot00000000000000module Sequel class Database # --------------------- # :section: 5 - Methods that set defaults for created datasets # This methods change the default behavior of this database's datasets. # --------------------- # The default class to use for datasets DatasetClass = Sequel::Dataset @identifier_input_method = nil @identifier_output_method = nil @quote_identifiers = nil class << self # The identifier input method to use by default for all databases (default: adapter default) attr_reader :identifier_input_method # The identifier output method to use by default for all databases (default: adapter default) attr_reader :identifier_output_method # Whether to quote identifiers (columns and tables) by default for all databases (default: adapter default) attr_accessor :quote_identifiers end # Change the default identifier input method to use for all databases, def self.identifier_input_method=(v) @identifier_input_method = v.nil? ? false : v end # Change the default identifier output method to use for all databases, def self.identifier_output_method=(v) @identifier_output_method = v.nil? ? false : v end # The class to use for creating datasets. Should respond to # new with the Database argument as the first argument, and # an optional options hash. attr_reader :dataset_class # The identifier input method to use by default for this database (default: adapter default) attr_reader :identifier_input_method # The identifier output method to use by default for this database (default: adapter default) attr_reader :identifier_output_method # If the database has any dataset modules associated with it, # use a subclass of the given class that includes the modules # as the dataset class. def dataset_class=(c) unless @dataset_modules.empty? c = Class.new(c) @dataset_modules.each{|m| c.send(:include, m)} end @dataset_class = c reset_default_dataset end # Equivalent to extending all datasets produced by the database with a # module. What it actually does is use a subclass of the current dataset_class # as the new dataset_class, and include the module in the subclass. # Instead of a module, you can provide a block that is used to create an # anonymous module. 
# # This allows you to override any of the dataset methods even if they are # defined directly on the dataset class that this Database object uses. # # Examples: # # # Introspec columns for all of DB's datasets # DB.extend_datasets(Sequel::ColumnsIntrospection) # # # Trace all SELECT queries by printing the SQL and the full backtrace # DB.extend_datasets do # def fetch_rows(sql) # puts sql # puts caller # super # end # end def extend_datasets(mod=nil, &block) raise(Error, "must provide either mod or block, not both") if mod && block mod = Module.new(&block) if block if @dataset_modules.empty? @dataset_modules = [mod] @dataset_class = Class.new(@dataset_class) else @dataset_modules << mod end @dataset_class.send(:include, mod) reset_default_dataset end # Set the method to call on identifiers going into the database: # # DB[:items] # SELECT * FROM items # DB.identifier_input_method = :upcase # DB[:items] # SELECT * FROM ITEMS def identifier_input_method=(v) reset_default_dataset @identifier_input_method = v end # Set the method to call on identifiers coming from the database: # # DB[:items].first # {:id=>1, :name=>'foo'} # DB.identifier_output_method = :upcase # DB[:items].first # {:ID=>1, :NAME=>'foo'} def identifier_output_method=(v) reset_default_dataset @identifier_output_method = v end # Set whether to quote identifiers (columns and tables) for this database: # # DB[:items] # SELECT * FROM items # DB.quote_identifiers = true # DB[:items] # SELECT * FROM "items" def quote_identifiers=(v) reset_default_dataset @quote_identifiers = v end # Returns true if the database quotes identifiers. def quote_identifiers? @quote_identifiers end private # The default dataset class to use for the database def dataset_class_default self.class.const_get(:DatasetClass) end # Reset the default dataset used by most Database methods that # create datasets. Usually done after changes to the identifier # mangling methods. def reset_default_dataset @default_dataset = dataset end # The method to apply to identifiers going into the database by default. # Should be overridden in subclasses for databases that fold unquoted # identifiers to lower case instead of uppercase, such as # MySQL, PostgreSQL, and SQLite. def identifier_input_method_default :upcase end # The method to apply to identifiers coming the database by default. # Should be overridden in subclasses for databases that fold unquoted # identifiers to lower case instead of uppercase, such as # MySQL, PostgreSQL, and SQLite. def identifier_output_method_default :downcase end # Whether to quote identifiers by default for this database, true # by default. def quote_identifiers_default true end # Reset the identifier mangling options. Overrides any already set on # the instance. Only for internal use by shared adapters. def reset_identifier_mangling @quote_identifiers = @opts.fetch(:quote_identifiers){(qi = Database.quote_identifiers).nil? ? quote_identifiers_default : qi} @identifier_input_method = @opts.fetch(:identifier_input_method){(iim = Database.identifier_input_method).nil? ? identifier_input_method_default : (iim if iim)} @identifier_output_method = @opts.fetch(:identifier_output_method){(iom = Database.identifier_output_method).nil? ? 
identifier_output_method_default : (iom if iom)} reset_default_dataset end end end ruby-sequel-4.1.1/lib/sequel/database/features.rb000066400000000000000000000070231220156535500217530ustar00rootroot00000000000000module Sequel class Database # --------------------- # :section: 9 - Methods that describe what the database supports # These methods all return booleans, with most describing whether or not the # database supports a given feature. # --------------------- # Whether the database uses a global namespace for indexes. If # false, indexes are namespaced per table. def global_index_namespace? true end # Whether the database supports CREATE TABLE IF NOT EXISTS syntax, # false by default. def supports_create_table_if_not_exists? false end # Whether the database supports deferrable constraints, false # by default as few databases do. def supports_deferrable_constraints? false end # Whether the database supports deferrable foreign key constraints, # false by default as few databases do. def supports_deferrable_foreign_key_constraints? supports_deferrable_constraints? end # Whether the database supports DROP TABLE IF EXISTS syntax, # default is the same as #supports_create_table_if_not_exists?. def supports_drop_table_if_exists? supports_create_table_if_not_exists? end # Whether the database supports Database#foreign_key_list for # parsing foreign keys. def supports_foreign_key_parsing? respond_to?(:foreign_key_list) end # Whether the database supports Database#indexes for parsing indexes. def supports_index_parsing? respond_to?(:indexes) end # Whether the database and adapter support prepared transactions # (two-phase commit), false by default. def supports_prepared_transactions? false end # Whether the database and adapter support savepoints, false by default. def supports_savepoints? false end # Whether the database and adapter support savepoints inside prepared transactions # (two-phase commit), default is false. def supports_savepoints_in_prepared_transactions? supports_prepared_transactions? && supports_savepoints? end # Whether the database supports schema parsing via Database#schema. def supports_schema_parsing? respond_to?(:schema_parse_table, true) end # Whether the database supports Database#tables for getting a list of tables. def supports_table_listing? respond_to?(:tables) end # Whether the database supports Database#views for getting a list of views. def supports_view_listing? respond_to?(:views) end # Whether the database and adapter support transaction isolation levels, false by default. def supports_transaction_isolation_levels? false end # Whether DDL statements work correctly in transactions, false by default. def supports_transactional_ddl? false end private # Whether the database supports combining multiple alter table # operations into a single query, false by default. def supports_combining_alter_table_ops? false end # Whether the database supports CREATE OR REPLACE VIEW. If not, support # will be emulated by dropping the view first. false by default. def supports_create_or_replace_view? false end # Whether the database supports named column constraints. True # by default. Those that don't support named column constraints # have to have column constraints converted to table constraints # if the column constraints have names. def supports_named_column_constraints? 
true end end end ruby-sequel-4.1.1/lib/sequel/database/logging.rb000066400000000000000000000046401220156535500215650ustar00rootroot00000000000000module Sequel class Database # --------------------- # :section: 6 - Methods relating to logging # These methods relate to the logging of executed SQL. # --------------------- # Numeric specifying the duration beyond which queries are logged at warn # level instead of info level. attr_accessor :log_warn_duration # Array of SQL loggers to use for this database. attr_accessor :loggers # Log level at which to log SQL queries. This is actually the method # sent to the logger, so it should be the method name symbol. The default # is :info; it can be set to :debug to log at DEBUG level. attr_accessor :sql_log_level # Log a message at error level, with information about the exception. def log_exception(exception, message) log_each(:error, "#{exception.class}: #{exception.message.strip if exception.message}: #{message}") end # Log a message at level info to all loggers. def log_info(message, args=nil) log_each(:info, args ? "#{message}; #{args.inspect}" : message) end # Yield to the block, logging any errors at error level to all loggers, # and all other queries with the duration at warn or info level. def log_yield(sql, args=nil) return yield if @loggers.empty? sql = "#{sql}; #{args.inspect}" if args start = Time.now begin yield rescue => e log_exception(e, sql) raise ensure log_duration(Time.now - start, sql) unless e end end # Remove any existing loggers and just use the given logger: # # DB.logger = Logger.new($stdout) def logger=(logger) @loggers = Array(logger) end private # Log the given SQL and then execute it on the connection, used by # the transaction code. def log_connection_execute(conn, sql) log_yield(sql){conn.send(connection_execute_method, sql)} end # Log the message prefixed by the duration at info level, or at # warn level if the duration is greater than log_warn_duration. def log_duration(duration, message) log_each((lwd = log_warn_duration and duration >= lwd) ? :warn : sql_log_level, "(#{sprintf('%0.6fs', duration)}) #{message}") end # Log message at level (which should be :error, :warn, or :info) # to all loggers. def log_each(level, message) @loggers.each{|logger| logger.send(level, message)} end end end ruby-sequel-4.1.1/lib/sequel/database/misc.rb000066400000000000000000000437431220156535500211010ustar00rootroot00000000000000module Sequel class Database # --------------------- # :section: 7 - Miscellaneous methods # These methods don't fit neatly into another category. # --------------------- # Hash of extension name symbols to callable objects to load the extension # into the Database object (usually by extending it with a module defined # in the extension). EXTENSIONS = {} # The general default size for string columns for all Sequel::Database # instances. DEFAULT_STRING_COLUMN_SIZE = 255 # Empty exception regexp to class map, used by default if Sequel doesn't # have specific support for the database in use. DEFAULT_DATABASE_ERROR_REGEXPS = {}.freeze # Mapping of schema type symbols to class or arrays of classes for that # symbol. 
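# # These are the values returned by Database#schema_type_class, e.g.: # # DB.schema_type_class(:integer) # => Integer # DB.schema_type_class(:boolean) # => [TrueClass, FalseClass]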
SCHEMA_TYPE_CLASSES = {:string=>String, :integer=>Integer, :date=>Date, :datetime=>[Time, DateTime].freeze, :time=>Sequel::SQLTime, :boolean=>[TrueClass, FalseClass].freeze, :float=>Float, :decimal=>BigDecimal, :blob=>Sequel::SQL::Blob}.freeze # Module to be included in shared adapters so that when the DatabaseMethods are # included in the database, the identifier mangling defaults are reset correctly. module ResetIdentifierMangling def extended(obj) obj.send(:reset_identifier_mangling) end end # Nested hook Proc; each new hook Proc just wraps the previous one. @initialize_hook = Proc.new {|db| } # Register a hook that will be run when a new Database is instantiated. It is # called with the new database handle. def self.after_initialize(&block) raise Error, "must provide block to after_initialize" unless block Sequel.synchronize do previous = @initialize_hook @initialize_hook = Proc.new do |db| previous.call(db) block.call(db) end end end # Apply an extension to all Database objects created in the future. def self.extension(*extensions) after_initialize{|db| db.extension(*extensions)} end # Register an extension callback for Database objects. ext should be the # extension name symbol, and mod should either be a Module that the # database is extended with, or a callable object called with the database # object. If mod is not provided, a block can be provided and is treated # as the mod object. def self.register_extension(ext, mod=nil, &block) if mod raise(Error, "cannot provide both mod and block to Database.register_extension") if block if mod.is_a?(Module) block = proc{|db| db.extend(mod)} else block = mod end end Sequel.synchronize{EXTENSIONS[ext] = block} end # Run the after_initialize hook for the given +instance+. def self.run_after_initialize(instance) @initialize_hook.call(instance) end # Converts a uri to an options hash. These options are then passed # to a newly created database object. def self.uri_to_options(uri) { :user => uri.user, :password => uri.password, :host => uri.host, :port => uri.port, :database => (m = /\/(.*)/.match(uri.path)) && (m[1]) } end private_class_method :uri_to_options # The options hash for this database attr_reader :opts # Set the timezone to use for this database, overridding Sequel.database_timezone. attr_writer :timezone # The specific default size of string columns for this Sequel::Database, usually 255 by default. attr_accessor :default_string_column_size # Constructs a new instance of a database connection with the specified # options hash. # # Accepts the following options: # :default_string_column_size :: The default size of string columns, 255 by default. # :identifier_input_method :: A string method symbol to call on identifiers going into the database # :identifier_output_method :: A string method symbol to call on identifiers coming from the database # :logger :: A specific logger to use # :loggers :: An array of loggers to use # :quote_identifiers :: Whether to quote identifiers # :servers :: A hash specifying a server/shard specific options, keyed by shard symbol # :single_threaded :: Whether to use a single-threaded connection pool # :sql_log_level :: Method to use to log SQL to a logger, :info by default. # # All options given are also passed to the connection pool. 
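# # A hedged example (the URL and option values are hypothetical; options are # normally passed through Sequel.connect rather than by instantiating # Database directly): # # DB = Sequel.connect('postgres://localhost/mydb', :max_connections=>10, # :sql_log_level=>:debug, :default_string_column_size=>191)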
def initialize(opts = OPTS, &block) @opts ||= opts @opts = connection_pool_default_options.merge(@opts) @loggers = Array(@opts[:logger]) + Array(@opts[:loggers]) self.log_warn_duration = @opts[:log_warn_duration] block ||= proc{|server| connect(server)} @opts[:servers] = {} if @opts[:servers].is_a?(String) @opts[:adapter_class] = self.class @opts[:single_threaded] = @single_threaded = typecast_value_boolean(@opts.fetch(:single_threaded, Database.single_threaded)) @schemas = {} @default_string_column_size = @opts[:default_string_column_size] || DEFAULT_STRING_COLUMN_SIZE @prepared_statements = {} @transactions = {} @identifier_input_method = nil @identifier_output_method = nil @quote_identifiers = nil @timezone = nil @dataset_class = dataset_class_default @cache_schema = typecast_value_boolean(@opts.fetch(:cache_schema, true)) @dataset_modules = [] @schema_type_classes = SCHEMA_TYPE_CLASSES.dup self.sql_log_level = @opts[:sql_log_level] ? @opts[:sql_log_level].to_sym : :info @pool = ConnectionPool.get_pool(self, @opts) reset_identifier_mangling adapter_initialize unless typecast_value_boolean(@opts[:keep_reference]) == false Sequel.synchronize{::Sequel::DATABASES.push(self)} end Sequel::Database.run_after_initialize(self) end # If a transaction is not currently in process, yield to the block immediately. # Otherwise, add the block to the list of blocks to call after the currently # in progress transaction commits (and only if it commits). # Options: # :server :: The server/shard to use. def after_commit(opts=OPTS, &block) raise Error, "must provide block to after_commit" unless block synchronize(opts[:server]) do |conn| if h = _trans(conn) raise Error, "cannot call after_commit in a prepared transaction" if h[:prepare] (h[:after_commit] ||= []) << block else yield end end end # If a transaction is not currently in progress, ignore the block. # Otherwise, add the block to the list of the blocks to call after the currently # in progress transaction rolls back (and only if it rolls back). # Options: # :server :: The server/shard to use. def after_rollback(opts=OPTS, &block) raise Error, "must provide block to after_rollback" unless block synchronize(opts[:server]) do |conn| if h = _trans(conn) raise Error, "cannot call after_rollback in a prepared transaction" if h[:prepare] (h[:after_rollback] ||= []) << block end end end # Cast the given type to a literal type # # DB.cast_type_literal(Float) # double precision # DB.cast_type_literal(:foo) # foo def cast_type_literal(type) type_literal(:type=>type) end # Load an extension into the receiver. In addition to requiring the extension file, this # also modifies the database to work with the extension (usually extending it with a # module defined in the extension file). If no related extension file exists or the # extension does not have specific support for Database objects, an Error will be raised. # Returns self. def extension(*exts) Sequel.extension(*exts) exts.each do |ext| if pr = Sequel.synchronize{EXTENSIONS[ext]} pr.call(self) else raise(Error, "Extension #{ext} does not have specific support handling individual databases") end end self end # Convert the given timestamp from the application's timezone, # to the databases's timezone or the default database timezone if # the database does not have a timezone. def from_application_timestamp(v) Sequel.convert_output_timestamp(v, timezone) end # Return true if already in a transaction given the options, # false otherwise. Respects the :server option for selecting # a shard. 
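# # DB.in_transaction? # => false # DB.transaction{DB.in_transaction?} # => true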
def in_transaction?(opts=OPTS) synchronize(opts[:server]){|conn| !!_trans(conn)} end # Returns a string representation of the database object including the # class name and connection URI and options used when connecting (if any). def inspect a = [] a << uri.inspect if uri if (oo = opts[:orig_opts]) && !oo.empty? a << oo.inspect end "#<#{self.class}: #{a.join(' ')}>" end # Proxy the literal call to the dataset. # # DB.literal(1) # 1 # DB.literal(:a) # a # DB.literal('a') # 'a' def literal(v) schema_utility_dataset.literal(v) end # Synchronize access to the prepared statements cache. def prepared_statement(name) Sequel.synchronize{prepared_statements[name]} end # Proxy the quote_identifier method to the dataset, # useful for quoting unqualified identifiers for use # outside of datasets. def quote_identifier(v) schema_utility_dataset.quote_identifier(v) end # Return ruby class or array of classes for the given type symbol. def schema_type_class(type) @schema_type_classes[type] end # Default serial primary key options, used by the table creation # code. def serial_primary_key_options {:primary_key => true, :type => Integer, :auto_increment => true} end # Cache the prepared statement object at the given name. def set_prepared_statement(name, ps) ps.prepared_sql Sequel.synchronize{prepared_statements[name] = ps} end # The timezone to use for this database, defaulting to Sequel.database_timezone. def timezone @timezone || Sequel.database_timezone end # Convert the given timestamp to the application's timezone, # from the databases's timezone or the default database timezone if # the database does not have a timezone. def to_application_timestamp(v) Sequel.convert_timestamp(v, timezone) end # Typecast the value to the given column_type. Calls # typecast_value_#{column_type} if the method exists, # otherwise returns the value. # This method should raise Sequel::InvalidValue if assigned value # is invalid. def typecast_value(column_type, value) return nil if value.nil? meth = "typecast_value_#{column_type}" begin respond_to?(meth, true) ? send(meth, value) : value rescue ArgumentError, TypeError => e raise Sequel.convert_exception_class(e, InvalidValue) end end # Returns the URI use to connect to the database. If a URI # was not used when connecting, returns nil. def uri opts[:uri] end # Explicit alias of uri for easier subclassing. def url uri end private # Per adapter initialization method, empty by default. def adapter_initialize end # Returns true when the object is considered blank. # The only objects that are blank are nil, false, # strings with all whitespace, and ones that respond # true to empty? def blank_object?(obj) return obj.blank? if obj.respond_to?(:blank?) case obj when NilClass, FalseClass true when Numeric, TrueClass false when String obj.strip.empty? else obj.respond_to?(:empty?) ? obj.empty? : false end end # Which transaction errors to translate, blank by default. def database_error_classes [] end # An enumerable yielding pairs of regexps and exception classes, used # to match against underlying driver exception messages in # order to raise a more specific Sequel::DatabaseError subclass. def database_error_regexps DEFAULT_DATABASE_ERROR_REGEXPS end # Return the Sequel::DatabaseError subclass to wrap the given # exception in. 
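# For example, given the SQLSTATE mapping below, an exception with SQLState # 23505 is wrapped in UniqueConstraintViolation, while an unrecognized # exception falls back to plain DatabaseError.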
def database_error_class(exception, opts) database_specific_error_class(exception, opts) || DatabaseError end # Return the SQLState for the given exception, if one can be # determined def database_exception_sqlstate(exception, opts) nil end # Return a specific Sequel::DatabaseError exception class if # one is appropriate for the underlying exception, # or nil if there is no specific exception class. def database_specific_error_class(exception, opts) return DatabaseDisconnectError if disconnect_error?(exception, opts) if sqlstate = database_exception_sqlstate(exception, opts) if klass = database_specific_error_class_from_sqlstate(sqlstate) return klass end else database_error_regexps.each do |regexp, klss| return klss if exception.message =~ regexp end end nil end NOT_NULL_CONSTRAINT_SQLSTATES = %w'23502'.freeze.each{|s| s.freeze} FOREIGN_KEY_CONSTRAINT_SQLSTATES = %w'23503 23506 23504'.freeze.each{|s| s.freeze} UNIQUE_CONSTRAINT_SQLSTATES = %w'23505'.freeze.each{|s| s.freeze} CHECK_CONSTRAINT_SQLSTATES = %w'23513 23514'.freeze.each{|s| s.freeze} SERIALIZATION_CONSTRAINT_SQLSTATES = %w'40001'.freeze.each{|s| s.freeze} # Given the SQLState, return the appropriate DatabaseError subclass. def database_specific_error_class_from_sqlstate(sqlstate) case sqlstate when *NOT_NULL_CONSTRAINT_SQLSTATES NotNullConstraintViolation when *FOREIGN_KEY_CONSTRAINT_SQLSTATES ForeignKeyConstraintViolation when *UNIQUE_CONSTRAINT_SQLSTATES UniqueConstraintViolation when *CHECK_CONSTRAINT_SQLSTATES CheckConstraintViolation when *SERIALIZATION_CONSTRAINT_SQLSTATES SerializationFailure end end # Return true if exception represents a disconnect error, false otherwise. def disconnect_error?(exception, opts) opts[:disconnect] end # Convert the given exception to an appropriate Sequel::DatabaseError # subclass, keeping message and traceback. def raise_error(exception, opts=OPTS) if !opts[:classes] || Array(opts[:classes]).any?{|c| exception.is_a?(c)} raise Sequel.convert_exception_class(exception, database_error_class(exception, opts)) else raise exception end end # Typecast the value to an SQL::Blob def typecast_value_blob(value) value.is_a?(Sequel::SQL::Blob) ? value : Sequel::SQL::Blob.new(value) end # Typecast the value to true, false, or nil def typecast_value_boolean(value) case value when false, 0, "0", /\Af(alse)?\z/i, /\Ano?\z/i false else blank_object?(value) ? nil : true end end # Typecast the value to a Date def typecast_value_date(value) case value when DateTime, Time Date.new(value.year, value.month, value.day) when Date value when String Sequel.string_to_date(value) when Hash Date.new(*[:year, :month, :day].map{|x| (value[x] || value[x.to_s]).to_i}) else raise InvalidValue, "invalid value for Date: #{value.inspect}" end end # Typecast the value to a DateTime or Time depending on Sequel.datetime_class def typecast_value_datetime(value) Sequel.typecast_to_application_timestamp(value) end # Typecast the value to a BigDecimal def typecast_value_decimal(value) case value when BigDecimal value when Numeric BigDecimal.new(value.to_s) when String d = BigDecimal.new(value) if d.zero? # BigDecimal parsing is loose by default, returning a 0 value for # invalid input. If a zero value is received, use Float to check # for validity. 
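# For example, BigDecimal.new('foo') silently returns a zero value, while # Float('foo') raises ArgumentError, which is converted to InvalidValue # below.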
begin Float(value) rescue ArgumentError raise InvalidValue, "invalid value for BigDecimal: #{value.inspect}" end end d else raise InvalidValue, "invalid value for BigDecimal: #{value.inspect}" end end # Typecast the value to a Float def typecast_value_float(value) Float(value) end # Used for checking/removing leading zeroes from strings so they don't get # interpreted as octal. LEADING_ZERO_RE = /\A0+(\d)/.freeze if RUBY_VERSION >= '1.9' # Typecast the value to an Integer def typecast_value_integer(value) (value.is_a?(String) && value =~ LEADING_ZERO_RE) ? Integer(value, 10) : Integer(value) end else # :nocov: # Replacement string when replacing leading zeroes. LEADING_ZERO_REP = "\\1".freeze # Typecast the value to an Integer def typecast_value_integer(value) Integer(value.is_a?(String) ? value.sub(LEADING_ZERO_RE, LEADING_ZERO_REP) : value) end # :nocov: end # Typecast the value to a String def typecast_value_string(value) case value when Hash, Array raise Sequel::InvalidValue, "invalid value for String: #{value.inspect}" else value.to_s end end # Typecast the value to a Time def typecast_value_time(value) case value when Time if value.is_a?(SQLTime) value else # specifically check for nsec == 0 value to work around JRuby 1.6 ruby 1.9 mode bug SQLTime.create(value.hour, value.min, value.sec, (value.respond_to?(:nsec) && value.nsec != 0) ? value.nsec/1000.0 : value.usec) end when String Sequel.string_to_time(value) when Hash SQLTime.create(*[:hour, :minute, :second].map{|x| (value[x] || value[x.to_s]).to_i}) else raise Sequel::InvalidValue, "invalid value for Time: #{value.inspect}" end end end end ruby-sequel-4.1.1/lib/sequel/database/query.rb000066400000000000000000000260451220156535500213070ustar00rootroot00000000000000module Sequel class Database # --------------------- # :section: 1 - Methods that execute queries and/or return results # This methods generally execute SQL code on the database server. # --------------------- STRING_DEFAULT_RE = /\A'(.*)'\z/ CURRENT_TIMESTAMP_RE = /now|CURRENT|getdate|\ADate\(\)\z/io COLUMN_SCHEMA_DATETIME_TYPES = [:date, :datetime] COLUMN_SCHEMA_STRING_TYPES = [:string, :blob, :date, :datetime, :time, :enum, :set, :interval] # The prepared statement object hash for this database, keyed by name symbol attr_reader :prepared_statements # Whether the schema should be cached for this database. True by default # for performance, can be set to false to always issue a database query to # get the schema. attr_accessor :cache_schema # Runs the supplied SQL statement string on the database server. # Returns self so it can be safely chained: # # DB << "UPDATE albums SET artist_id = NULL" << "DROP TABLE artists" def <<(sql) run(sql) self end # Call the prepared statement with the given name with the given hash # of arguments. # # DB[:items].filter(:id=>1).prepare(:first, :sa) # DB.call(:sa) # SELECT * FROM items WHERE id = 1 def call(ps_name, hash={}, &block) prepared_statement(ps_name).call(hash, &block) end # Method that should be used when submitting any DDL (Data Definition # Language) SQL, such as +create_table+. By default, calls +execute_dui+. # This method should not be called directly by user code. def execute_ddl(sql, opts=OPTS, &block) execute_dui(sql, opts, &block) end # Method that should be used when issuing a DELETE, UPDATE, or INSERT # statement. By default, calls execute. # This method should not be called directly by user code. 
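# In most adapters this returns the number of affected rows, which is how # datasets get the return values for update and delete statements (a # general observation; individual adapters may differ).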
def execute_dui(sql, opts=OPTS, &block) execute(sql, opts, &block) end # Method that should be used when issuing a INSERT # statement. By default, calls execute_dui. # This method should not be called directly by user code. def execute_insert(sql, opts=OPTS, &block) execute_dui(sql, opts, &block) end # Returns a single value from the database, e.g.: # # DB.get(1) # SELECT 1 # # => 1 # DB.get{server_version{}} # SELECT server_version() def get(*args, &block) @default_dataset.get(*args, &block) end # Runs the supplied SQL statement string on the database server. Returns nil. # Options: # :server :: The server to run the SQL on. # # DB.run("SET some_server_variable = 42") def run(sql, opts=OPTS) sql = literal(sql) if sql.is_a?(SQL::PlaceholderLiteralString) execute_ddl(sql, opts) nil end # Returns the schema for the given table as an array with all members being arrays of length 2, # the first member being the column name, and the second member being a hash of column information. # The table argument can also be a dataset, as long as it only has one table. # Available options are: # # :reload :: Ignore any cached results, and get fresh information from the database. # :schema :: An explicit schema to use. It may also be implicitly provided # via the table name. # # If schema parsing is supported by the database, the column information should hash at least contain the # following entries: # # :allow_null :: Whether NULL is an allowed value for the column. # :db_type :: The database type for the column, as a database specific string. # :default :: The database default for the column, as a database specific string. # :primary_key :: Whether the columns is a primary key column. If this column is not present, # it means that primary key information is unavailable, not that the column # is not a primary key. # :ruby_default :: The database default for the column, as a ruby object. In many cases, complex # database defaults cannot be parsed into ruby objects, in which case nil will be # used as the value. # :type :: A symbol specifying the type, such as :integer or :string. # # Example: # # DB.schema(:artists) # # [[:id, # # {:type=>:integer, # # :primary_key=>true, # # :default=>"nextval('artist_id_seq'::regclass)", # # :ruby_default=>nil, # # :db_type=>"integer", # # :allow_null=>false}], # # [:name, # # {:type=>:string, # # :primary_key=>false, # # :default=>nil, # # :ruby_default=>nil, # # :db_type=>"text", # # :allow_null=>false}]] def schema(table, opts=OPTS) raise(Error, 'schema parsing is not implemented on this database') unless supports_schema_parsing? 
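# Duplicate the options hash, since entries such as :schema and :dataset # are added below and the caller's hash should not be mutated.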
opts = opts.dup tab = if table.is_a?(Dataset) o = table.opts from = o[:from] raise(Error, "can only parse the schema for a dataset with a single from table") unless from && from.length == 1 && !o.include?(:join) && !o.include?(:sql) table.first_source_table else table end qualifiers = split_qualifiers(tab) table_name = qualifiers.pop sch = qualifiers.pop information_schema_schema = case qualifiers.length when 1 Sequel.identifier(*qualifiers) when 2 Sequel.qualify(*qualifiers) end if table.is_a?(Dataset) quoted_name = table.literal(tab) opts[:dataset] = table else quoted_name = schema_utility_dataset.literal(table) end opts[:schema] = sch if sch && !opts.include?(:schema) opts[:information_schema_schema] = information_schema_schema if information_schema_schema && !opts.include?(:information_schema_schema) Sequel.synchronize{@schemas.delete(quoted_name)} if opts[:reload] if v = Sequel.synchronize{@schemas[quoted_name]} return v end cols = schema_parse_table(table_name, opts) raise(Error, 'schema parsing returned no columns, table probably doesn\'t exist') if cols.nil? || cols.empty? cols.each{|_,c| c[:ruby_default] = column_schema_to_ruby_default(c[:default], c[:type])} Sequel.synchronize{@schemas[quoted_name] = cols} if cache_schema cols end # Returns true if a table with the given name exists. This requires a query # to the database. # # DB.table_exists?(:foo) # => false # # SELECT NULL FROM foo LIMIT 1 # # Note that since this does a SELECT from the table, it can give false negatives # if you don't have permission to SELECT from the table. def table_exists?(name) sch, table_name = schema_and_table(name) name = SQL::QualifiedIdentifier.new(sch, table_name) if sch _table_exists?(from(name)) true rescue DatabaseError false end private # Should raise an error if the table doesn't not exist, # and not raise an error if the table does exist. def _table_exists?(ds) ds.get(SQL::AliasedExpression.new(Sequel::NULL, :nil)) end # Whether the type should be treated as a string type when parsing the # column schema default value. def column_schema_default_string_type?(type) COLUMN_SCHEMA_STRING_TYPES.include?(type) end # Transform the given normalized default string into a ruby object for the # given type. def column_schema_default_to_ruby_value(default, type) case type when :boolean case default when /[f0]/i false when /[t1]/i true end when :string, :enum, :set, :interval default when :blob Sequel::SQL::Blob.new(default) when :integer Integer(default) when :float Float(default) when :date Sequel.string_to_date(default) when :datetime DateTime.parse(default) when :time Sequel.string_to_time(default) when :decimal BigDecimal.new(default) end end # Normalize the default value string for the given type # and return the normalized value. def column_schema_normalize_default(default, type) if column_schema_default_string_type?(type) return unless m = STRING_DEFAULT_RE.match(default) m[1].gsub("''", "'") else default end end # Convert the given default, which should be a database specific string, into # a ruby object. def column_schema_to_ruby_default(default, type) return default unless default.is_a?(String) if COLUMN_SCHEMA_DATETIME_TYPES.include?(type) if CURRENT_TIMESTAMP_RE.match(default) if type == :date return Sequel::CURRENT_DATE else return Sequel::CURRENT_TIMESTAMP end end end default = column_schema_normalize_default(default, type) column_schema_default_to_ruby_value(default, type) rescue nil end # Return a Method object for the dataset's output_identifier_method. 

    # Return a Method object for the dataset's input_identifier_method.
    # Used in metadata parsing to make sure the returned information is in the
    # correct format.
    def input_identifier_meth(ds=nil)
      (ds || dataset).method(:input_identifier)
    end

    # Return a dataset that uses the default identifier input and output methods
    # for this database.  Used when parsing metadata so that column symbols are
    # returned as expected.
    def metadata_dataset
      @metadata_dataset ||= (
        ds = dataset;
        ds.identifier_input_method = identifier_input_method_default;
        ds.identifier_output_method = identifier_output_method_default;
        ds
      )
    end

    # Return a Method object for the dataset's output_identifier_method.
    # Used in metadata parsing to make sure the returned information is in the
    # correct format.
    def output_identifier_meth(ds=nil)
      (ds || dataset).method(:output_identifier)
    end

    # Remove the cached schema for the given schema name
    def remove_cached_schema(table)
      Sequel.synchronize{@schemas.delete(quote_schema_table(table))} if @schemas
    end

    # Match the database's column type to a ruby type via a
    # regular expression, and return the ruby type as a symbol
    # such as :integer or :string.
    def schema_column_type(db_type)
      case db_type
      when /\A(character( varying)?|n?(var)?char|n?text|string|clob)/io
        :string
      when /\A(int(eger)?|(big|small|tiny)int)/io
        :integer
      when /\Adate\z/io
        :date
      when /\A((small)?datetime|timestamp( with(out)? time zone)?)(\(\d+\))?\z/io
        :datetime
      when /\Atime( with(out)? time zone)?\z/io
        :time
      when /\A(bool(ean)?)\z/io
        :boolean
      when /\A(real|float|double( precision)?|double\(\d+,\d+\)( unsigned)?)\z/io
        :float
      when /\A(?:(?:(?:num(?:ber|eric)?|decimal)(?:\(\d+,\s*(\d+|false|true)\))?))\z/io
        $1 && ['0', 'false'].include?($1) ? :integer : :decimal
      when /bytea|blob|image|(var)?binary/io
        :blob
      when /\Aenum/io
        :enum
      end
    end
  end
end

ruby-sequel-4.1.1/lib/sequel/database/schema_generator.rb

module Sequel
  # The Schema module holds the schema generators.
  module Schema
    # Schema::CreateTableGenerator is an internal class that the user is not expected
    # to instantiate directly.  Instances are created by Database#create_table.
    # It is used to specify table creation parameters.  It takes a Database
    # object and a block of column/index/constraint specifications, and
    # gives the Database a table description, which the database uses to
    # create a table.
    #
    # Schema::CreateTableGenerator has some methods but also includes method_missing,
    # allowing users to specify column type as a method instead of using
    # the column method, which makes for a nicer DSL.
    #
    # For more information on Sequel's support for schema modification, see
    # the {"Schema Modification" guide}[link:files/doc/schema_modification_rdoc.html].
    class CreateTableGenerator
      # Classes specifying generic types that Sequel will convert to database-specific types.
      GENERIC_TYPES=[String, Integer, Fixnum, Bignum, Float, Numeric, BigDecimal,
      Date, DateTime, Time, File, TrueClass, FalseClass]

      # Return the column hashes created by this generator
      attr_reader :columns

      # Return the constraint hashes created by this generator
      attr_reader :constraints

      # Return the index hashes created by this generator
      attr_reader :indexes
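
      # A sketch (assuming a connected DB handle) of how Database#create_table
      # drives this generator; each block method below appends to the
      # columns/indexes/constraints readers above:
      #
      #   DB.create_table(:artists) do
      #     primary_key :id
      #     String :name, :null=>false
      #     index :name, :unique=>true
      #   end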

      # Set the database in which to create the table, and evaluate the block
      # in the context of this object.
      def initialize(db, &block)
        @db = db
        @columns = []
        @indexes = []
        @constraints = []
        @primary_key = nil
        instance_eval(&block) if block
        @columns.unshift(@primary_key) if @primary_key && !has_column?(primary_key_name)
      end

      # Add a method for each of the given types that creates a column
      # with that type as a constant.  Types given should either already
      # be constants/classes or a capitalized string/symbol with the same name
      # as a constant/class.
      #
      # Do not call this method with untrusted input, as that can result in
      # arbitrary code execution.
      def self.add_type_method(*types)
        types.each do |type|
          class_eval("def #{type}(name, opts={}); column(name, #{type}, opts); end", __FILE__, __LINE__)
        end
      end

      # Add an unnamed constraint to the DDL, specified by the given block
      # or args:
      #
      #   check(:num=>1..5) # CHECK num >= 1 AND num <= 5
      #   check{num > 5}    # CHECK num > 5
      def check(*args, &block)
        constraint(nil, *args, &block)
      end

      # Add a column with the given name, type, and opts to the DDL.
      #
      #   column :num, :integer
      #   # num INTEGER
      #
      #   column :name, String, :null=>false, :default=>'a'
      #   # name varchar(255) NOT NULL DEFAULT 'a'
      #
      #   inet :ip
      #   # ip inet
      #
      # You can also create columns via method missing, so the following are
      # equivalent:
      #
      #   column :number, :integer
      #   integer :number
      #
      # The following options are supported:
      #
      # :default :: The default value for the column.
      # :deferrable :: For foreign key columns, this ensures referential integrity will work even if
      #                the referencing table uses a foreign key value that does not
      #                yet exist in the referenced table (but will exist before the transaction commits).
      #                Basically it adds DEFERRABLE INITIALLY DEFERRED on key creation.
      #                If you use :immediate as the value, uses DEFERRABLE INITIALLY IMMEDIATE.
      # :index :: Create an index on this column.  If given a hash, use the hash as the
      #           options for the index.
      # :key :: For foreign key columns, the column in the associated table
      #         that this column references.  Unnecessary if this column
      #         references the primary key of the associated table, except if you are
      #         using MySQL.
      # :null :: Mark the column as allowing NULL values (if true),
      #          or not allowing NULL values (if false).  If unspecified, will default
      #          to whatever the database default is.
      # :on_delete :: Specify the behavior of this column when being deleted
      #               (:restrict, :cascade, :set_null, :set_default, :no_action).
      # :on_update :: Specify the behavior of this column when being updated
      #               (:restrict, :cascade, :set_null, :set_default, :no_action).
      # :primary_key :: Make the column a single primary key column.  This should only
      #                 be used if you have a single, nonautoincrementing primary key column.
      # :primary_key_constraint_name :: The name to give the primary key constraint
      # :type :: Overrides the type given as the argument.  Generally not used by column
      #          itself, but can be passed as an option to other methods that call column.
      # :unique :: Mark the column as unique, generally has the same effect as
      #            creating a unique index on the column.
      # :unique_constraint_name :: The name to give the unique key constraint
      def column(name, type, opts = OPTS)
        columns << {:name => name, :type => type}.merge(opts)
        if index_opts = opts[:index]
          index(name, index_opts.is_a?(Hash) ? index_opts : {})
        end
      end
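
      # A sketch (illustrative SQL, not adapter-verified) of a foreign key
      # column defined via plain #column options, which #foreign_key below
      # builds on:
      #
      #   column :artist_id, Integer, :table=>:artists, :on_delete=>:cascade
      #   # artist_id integer REFERENCES artists ON DELETE CASCADE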

      # Adds a named constraint (or unnamed if name is nil) to the DDL,
      # with the given block or args.  To provide options for the constraint, pass
      # a hash as the first argument.
      #
      #   constraint(:blah, :num=>1..5)
      #   # CONSTRAINT blah CHECK num >= 1 AND num <= 5
      #   constraint({:name=>:blah, :deferrable=>true}, :num=>1..5)
      #   # CONSTRAINT blah CHECK num >= 1 AND num <= 5 DEFERRABLE INITIALLY DEFERRED
      def constraint(name, *args, &block)
        opts = name.is_a?(Hash) ? name : {:name=>name}
        constraints << opts.merge(:type=>:check, :check=>block || args)
      end

      # Add a foreign key in the table that references another table to the DDL.  See column
      # for available options.
      #
      #   foreign_key(:artist_id) # artist_id INTEGER
      #   foreign_key(:artist_id, :artists) # artist_id INTEGER REFERENCES artists
      #   foreign_key(:artist_id, :artists, :key=>:id) # artist_id INTEGER REFERENCES artists(id)
      #   foreign_key(:artist_id, :artists, :type=>String) # artist_id varchar(255) REFERENCES artists(id)
      #
      # Additional Options:
      #
      # :foreign_key_constraint_name :: The name to give the foreign key constraint
      #
      # If you want a foreign key constraint without adding a column (usually because it is a
      # composite foreign key), you can provide an array of columns as the first argument, and
      # you can provide the :name option to name the constraint:
      #
      #   foreign_key([:artist_name, :artist_location], :artists, :name=>:artist_fk)
      #   # ADD CONSTRAINT artist_fk FOREIGN KEY (artist_name, artist_location) REFERENCES artists
      def foreign_key(name, table=nil, opts = OPTS)
        opts = case table
        when Hash
          table.merge(opts)
        when Symbol
          opts.merge(:table=>table)
        when NilClass
          opts
        else
          raise(Error, "The second argument to foreign_key should be a Hash, Symbol, or nil")
        end
        return composite_foreign_key(name, opts) if name.is_a?(Array)
        column(name, Integer, opts)
      end

      # Add a full text index on the given columns to the DDL.
      #
      # PostgreSQL specific options:
      # :index_type :: Can be set to :gist to use a GIST index instead of the
      #                default GIN index.
      # :language :: Set a language to use for the index (default: simple).
      #
      # Microsoft SQL Server specific options:
      # :key_index :: The KEY INDEX to use for the full text index.
      def full_text_index(columns, opts = OPTS)
        index(columns, opts.merge(:type => :full_text))
      end

      # True if the DDL includes the creation of a column with the given name.
      def has_column?(name)
        columns.any?{|c| c[:name] == name}
      end

      # Add an index on the given column(s) with the given options to the DDL.
      # General options:
      #
      # :name :: The name to use for the index.  If not given, a default name
      #          based on the table and columns is used.
      # :type :: The type of index to use (only supported by some databases)
      # :unique :: Make the index unique, so duplicate values are not allowed.
      # :where :: Create a partial index (only supported by some databases)
      #
      # PostgreSQL specific options:
      #
      # :concurrently :: Create the index concurrently, so it doesn't block
      #                  operations on the table while the index is being
      #                  built.
      # :opclass :: Use a specific operator class in the index.
      #
      # Microsoft SQL Server specific options:
      #
      # :include :: Include additional column values in the index, without
      #             actually indexing on those values.
      #
      #   index :name
      #   # CREATE INDEX table_name_index ON table (name)
      #
      #   index [:artist_id, :name]
      #   # CREATE INDEX table_artist_id_name_index ON table (artist_id, name)
      def index(columns, opts = OPTS)
        indexes << {:columns => Array(columns)}.merge(opts)
      end

      # Add a column with the given type, name, and opts to the DDL.  See +column+ for available
      # options.
      def method_missing(type, name = nil, opts = OPTS)
        name ? column(name, type, opts) : super
      end
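
      # A sketch of the method_missing DSL defined above: inside a create_table
      # block, an unknown method name is treated as a column type, so
      #
      #   varchar :name, :size=>20
      #
      # is equivalent to
      #
      #   column :name, :varchar, :size=>20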

      # This object responds to all methods.
      def respond_to_missing?(meth, include_private)
        true
      end

      # Adds an autoincrementing primary key column or a primary key constraint to the DDL.
      # To just create a constraint, the first argument should be an array of column symbols
      # specifying the primary key columns.  To create an autoincrementing primary key
      # column, a single symbol can be used.  In both cases, an options hash can be used
      # as the second argument.
      #
      # If you want to create a primary key column that is not autoincrementing, you
      # should not use this method.  Instead, you should use the regular +column+ method
      # with a :primary_key=>true option.
      #
      # If an array of column symbols is used, you can specify the :name option
      # to name the constraint.
      #
      # Examples:
      #   primary_key(:id)
      #   primary_key([:street_number, :house_number], :name=>:some_constraint_name)
      def primary_key(name, *args)
        return composite_primary_key(name, *args) if name.is_a?(Array)
        @primary_key = @db.serial_primary_key_options.merge({:name => name})

        if opts = args.pop
          opts = {:type => opts} unless opts.is_a?(Hash)
          if type = args.pop
            opts.merge!(:type => type)
          end
          @primary_key.merge!(opts)
        end

        @primary_key
      end

      # The name of the primary key for this generator, if it has a primary key.
      def primary_key_name
        @primary_key[:name] if @primary_key
      end

      # Add a spatial index on the given columns to the DDL.
      def spatial_index(columns, opts = OPTS)
        index(columns, opts.merge(:type => :spatial))
      end

      # Add a unique constraint on the given columns to the DDL.
      #
      #   unique(:name) # UNIQUE (name)
      #
      # Supports the same :deferrable option as #column.  The :name option can be used
      # to name the constraint.
      def unique(columns, opts = OPTS)
        constraints << {:type => :unique, :columns => Array(columns)}.merge(opts)
      end

      private

      # Add a composite primary key constraint
      def composite_primary_key(columns, *args)
        opts = args.pop || {}
        constraints << {:type => :primary_key, :columns => columns}.merge(opts)
      end

      # Add a composite foreign key constraint
      def composite_foreign_key(columns, opts)
        constraints << {:type => :foreign_key, :columns => columns}.merge(opts)
      end

      add_type_method(*GENERIC_TYPES)
    end

    # Alias of CreateTableGenerator for backwards compatibility.
    Generator = CreateTableGenerator

    # Schema::AlterTableGenerator is an internal class that the user is not expected
    # to instantiate directly.  Instances are created by Database#alter_table.
    # It is used to specify table alteration parameters.  It takes a Database
    # object and a block of operations to perform on the table, and
    # gives the Database an array of table altering operations, which the database uses to
    # alter a table's description.
    #
    # For more information on Sequel's support for schema modification, see
    # the {"Schema Modification" guide}[link:files/doc/schema_modification_rdoc.html].
    class AlterTableGenerator
      # An array of DDL operations to perform
      attr_reader :operations

      # Set the Database object to which to apply the DDL, and evaluate the
      # block in the context of this object.
      def initialize(db, &block)
        @db = db
        @operations = []
        instance_eval(&block) if block
      end

      # Add a column with the given name, type, and opts to the DDL for the table.
      # See CreateTableGenerator#column for the available options.
      #
      #   add_column(:name, String) # ADD COLUMN name varchar(255)
      def add_column(name, type, opts = OPTS)
        @operations << {:op => :add_column, :name => name, :type => type}.merge(opts)
      end
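
      # A sketch (assuming a connected DB handle) of how Database#alter_table
      # drives this generator, queueing each operation into #operations above:
      #
      #   DB.alter_table(:items) do
      #     add_column :category, String, :default=>'ruby'
      #     add_index :category
      #   end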

      # Add a constraint with the given name and args to the DDL for the table.
      # See CreateTableGenerator#constraint.
      #
      #   add_constraint(:valid_name, Sequel.like(:name, 'A%'))
      #   # ADD CONSTRAINT valid_name CHECK (name LIKE 'A%')
      #   add_constraint({:name=>:valid_name, :deferrable=>true}, :num=>1..5)
      #   # ADD CONSTRAINT valid_name CHECK (num >= 1 AND num <= 5) DEFERRABLE INITIALLY DEFERRED
      def add_constraint(name, *args, &block)
        opts = name.is_a?(Hash) ? name : {:name=>name}
        @operations << opts.merge(:op=>:add_constraint, :type=>:check, :check=>block || args)
      end

      # Add a unique constraint to the given column(s)
      #
      #   add_unique_constraint(:name) # ADD UNIQUE (name)
      #   add_unique_constraint(:name, :name=>:unique_name) # ADD CONSTRAINT unique_name UNIQUE (name)
      #
      # Supports the same :deferrable option as CreateTableGenerator#column.
      def add_unique_constraint(columns, opts = OPTS)
        @operations << {:op => :add_constraint, :type => :unique, :columns => Array(columns)}.merge(opts)
      end

      # Add a foreign key with the given name and referencing the given table
      # to the DDL for the table.  See CreateTableGenerator#column for the available options.
      #
      # You can also pass an array of column names for creating composite foreign
      # keys.  In this case, it will assume the columns exist and will only add
      # the constraint.  You can provide a :name option to name the constraint.
      #
      # NOTE: If you need to add a foreign key constraint to a single existing column
      # use the composite key syntax even if it is only one column.
      #
      #   add_foreign_key(:artist_id, :table) # ADD COLUMN artist_id integer REFERENCES table
      #   add_foreign_key([:name], :table) # ADD FOREIGN KEY (name) REFERENCES table
      #
      # PostgreSQL specific options:
      #
      # :not_valid :: Set to true to add the constraint with the NOT VALID syntax.
      #               This makes it so that future inserts must respect referential
      #               integrity, but allows the constraint to be added even if existing
      #               column values reference rows that do not exist.  After all the
      #               existing data has been cleaned up, validate_constraint can be used
      #               to mark the constraint as valid.  Note that this option only makes
      #               sense when using an array of columns.
      def add_foreign_key(name, table, opts = OPTS)
        return add_composite_foreign_key(name, table, opts) if name.is_a?(Array)
        add_column(name, Integer, {:table=>table}.merge(opts))
      end

      # Add a full text index on the given columns to the DDL for the table.
      # See CreateTableGenerator#index for available options.
      def add_full_text_index(columns, opts = OPTS)
        add_index(columns, {:type=>:full_text}.merge(opts))
      end

      # Add an index on the given columns to the DDL for the table.  See
      # CreateTableGenerator#index for available options.
      #
      #   add_index(:artist_id) # CREATE INDEX table_artist_id_index ON table (artist_id)
      def add_index(columns, opts = OPTS)
        @operations << {:op => :add_index, :columns => Array(columns)}.merge(opts)
      end

      # Add a primary key to the DDL for the table.  See CreateTableGenerator#column
      # for the available options.  Like +add_foreign_key+, if you specify
      # the column name as an array, it just creates a constraint:
      #
      #   add_primary_key(:id) # ADD COLUMN id serial PRIMARY KEY
      #   add_primary_key([:artist_id, :name]) # ADD PRIMARY KEY (artist_id, name)
      def add_primary_key(name, opts = OPTS)
        return add_composite_primary_key(name, opts) if name.is_a?(Array)
        opts = @db.serial_primary_key_options.merge(opts)
        add_column(name, opts.delete(:type), opts)
      end

      # Add a spatial index on the given columns to the DDL for the table.
      # See CreateTableGenerator#index for available options.
      def add_spatial_index(columns, opts = OPTS)
        add_index(columns, {:type=>:spatial}.merge(opts))
      end

      # Remove a column from the DDL for the table.
      #
      #   drop_column(:artist_id) # DROP COLUMN artist_id
      #   drop_column(:artist_id, :cascade=>true) # DROP COLUMN artist_id CASCADE
      def drop_column(name, opts=OPTS)
        @operations << {:op => :drop_column, :name => name}.merge(opts)
      end

      # Remove a constraint from the DDL for the table.  MySQL/SQLite specific options:
      #
      # :type :: Set the type of constraint to drop, either :primary_key, :foreign_key,
      #          or :unique.
      #
      #   drop_constraint(:unique_name) # DROP CONSTRAINT unique_name
      #   drop_constraint(:unique_name, :cascade=>true) # DROP CONSTRAINT unique_name CASCADE
      def drop_constraint(name, opts=OPTS)
        @operations << {:op => :drop_constraint, :name => name}.merge(opts)
      end

      # Remove a foreign key and the associated column from the DDL for the table.  General options:
      #
      # :name :: The name of the constraint to drop.  If not given, uses the same name
      #          that would be used by add_foreign_key with the same columns.
      #
      # NOTE: If you want to drop only the foreign key constraint but keep the column,
      # use the composite key syntax even if it is only one column.
      #
      #   drop_foreign_key(:artist_id) # DROP CONSTRAINT table_artist_id_fkey, DROP COLUMN artist_id
      #   drop_foreign_key([:name]) # DROP CONSTRAINT table_name_fkey
      def drop_foreign_key(name, opts=OPTS)
        drop_composite_foreign_key(Array(name), opts)
        drop_column(name) unless name.is_a?(Array)
      end

      # Remove an index from the DDL for the table.  General options:
      #
      # :name :: The name of the index to drop.  If not given, uses the same name
      #          that would be used by add_index with the same columns.
      #
      # PostgreSQL specific options:
      #
      # :cascade :: Cascade the index drop to dependent objects.
      # :concurrently :: Drop the index using CONCURRENTLY, which doesn't block
      #                  operations on the table.  Supported in PostgreSQL 9.2+.
      # :if_exists :: Only drop the index if it already exists.
      #
      #   drop_index(:artist_id) # DROP INDEX table_artist_id_index
      #   drop_index([:a, :b]) # DROP INDEX table_a_b_index
      #   drop_index([:a, :b], :name=>:foo) # DROP INDEX foo
      def drop_index(columns, options=OPTS)
        @operations << {:op => :drop_index, :columns => Array(columns)}.merge(options)
      end

      # Modify a column's name in the DDL for the table.
      #
      #   rename_column(:name, :artist_name) # RENAME COLUMN name TO artist_name
      def rename_column(name, new_name, opts = OPTS)
        @operations << {:op => :rename_column, :name => name, :new_name => new_name}.merge(opts)
      end

      # Modify a column's default value in the DDL for the table.
      #
      #   set_column_default(:artist_name, 'a') # ALTER COLUMN artist_name SET DEFAULT 'a'
      def set_column_default(name, default)
        @operations << {:op => :set_column_default, :name => name, :default => default}
      end

      # Modify a column's type in the DDL for the table.
      #
      #   set_column_type(:artist_name, 'char(10)') # ALTER COLUMN artist_name TYPE char(10)
      #
      # PostgreSQL specific options:
      #
      # :using :: Add a USING clause that specifies how to convert existing values to new values.
      def set_column_type(name, type, opts=OPTS)
        @operations << {:op => :set_column_type, :name => name, :type => type}.merge(opts)
      end

      # Set a given column as allowing NULL values.
      #
      #   set_column_allow_null(:artist_name) # ALTER COLUMN artist_name DROP NOT NULL
      def set_column_allow_null(name, allow_null=true)
        @operations << {:op => :set_column_null, :name => name, :null => allow_null}
      end

      # Set a given column as not allowing NULL values.
      #
      #   set_column_not_null(:artist_name) # ALTER COLUMN artist_name SET NOT NULL
      def set_column_not_null(name)
        set_column_allow_null(name, false)
      end

      private

      # Add a composite primary key constraint
      def add_composite_primary_key(columns, opts)
        @operations << {:op => :add_constraint, :type => :primary_key, :columns => columns}.merge(opts)
      end

      # Add a composite foreign key constraint
      def add_composite_foreign_key(columns, table, opts)
        @operations << {:op => :add_constraint, :type => :foreign_key, :columns => columns, :table => table}.merge(opts)
      end

      # Drop a composite foreign key constraint
      def drop_composite_foreign_key(columns, opts)
        @operations << {:op => :drop_constraint, :type => :foreign_key, :columns => columns}.merge(opts)
      end
    end
  end
end

ruby-sequel-4.1.1/lib/sequel/database/schema_methods.rb

module Sequel
  class Database
    # ---------------------
    # :section: 2 - Methods that modify the database schema
    # These methods execute code on the database that modifies the database's schema.
    # ---------------------

    AUTOINCREMENT = 'AUTOINCREMENT'.freeze
    COMMA_SEPARATOR = ', '.freeze
    NOT_NULL = ' NOT NULL'.freeze
    NULL = ' NULL'.freeze
    PRIMARY_KEY = ' PRIMARY KEY'.freeze
    TEMPORARY = 'TEMPORARY '.freeze
    UNDERSCORE = '_'.freeze
    UNIQUE = ' UNIQUE'.freeze
    UNSIGNED = ' UNSIGNED'.freeze

    # The order of column modifiers to use when defining a column.
    COLUMN_DEFINITION_ORDER = [:collate, :default, :null, :unique, :primary_key, :auto_increment, :references]

    # The default options for join table columns.
    DEFAULT_JOIN_TABLE_COLUMN_OPTIONS = {:null=>false}

    # The alter table operations that are combinable.
    COMBINABLE_ALTER_TABLE_OPS = [:add_column, :drop_column, :rename_column,
      :set_column_type, :set_column_default, :set_column_null,
      :add_constraint, :drop_constraint]

    # Adds a column to the specified table.  This method expects a column name,
    # a datatype and optionally a hash with additional constraints and options:
    #
    #   DB.add_column :items, :name, :text, :unique => true, :null => false
    #   DB.add_column :items, :category, :text, :default => 'ruby'
    #
    # See alter_table.
    def add_column(table, *args)
      alter_table(table) {add_column(*args)}
    end

    # Adds an index to a table for the given columns:
    #
    #   DB.add_index :posts, :title
    #   DB.add_index :posts, [:author, :title], :unique => true
    #
    # Options:
    # :ignore_errors :: Ignore any DatabaseErrors that are raised
    #
    # See alter_table.
    def add_index(table, columns, options=OPTS)
      e = options[:ignore_errors]
      begin
        alter_table(table){add_index(columns, options)}
      rescue DatabaseError
        raise unless e
      end
    end

    # Alters the given table with the specified block.  Example:
    #
    #   DB.alter_table :items do
    #     add_column :category, :text, :default => 'ruby'
    #     drop_column :category
    #     rename_column :cntr, :counter
    #     set_column_type :value, :float
    #     set_column_default :value, 4.2
    #     add_index [:group, :category]
    #     drop_index [:group, :category]
    #   end
    #
    # Note that +add_column+ accepts all the options available for column
    # definitions using create_table, and +add_index+ accepts all the options
    # available for index definition.
    #
    # See Schema::AlterTableGenerator and the {"Migrations and Schema Modification" guide}[link:files/doc/migration_rdoc.html].
    def alter_table(name, generator=nil, &block)
      generator ||= alter_table_generator(&block)
      remove_cached_schema(name)
      apply_alter_table_generator(name, generator)
      nil
    end

    # Return a new Schema::AlterTableGenerator instance with the receiver as
    # the database and the given block.
    def alter_table_generator(&block)
      alter_table_generator_class.new(self, &block)
    end

    # Create a join table using a hash of foreign keys to referenced
    # table names.  Example:
    #
    #   create_join_table(:cat_id=>:cats, :dog_id=>:dogs)
    #   # CREATE TABLE cats_dogs (
    #   #  cat_id integer NOT NULL REFERENCES cats,
    #   #  dog_id integer NOT NULL REFERENCES dogs,
    #   #  PRIMARY KEY (cat_id, dog_id)
    #   # )
    #   # CREATE INDEX cats_dogs_dog_id_cat_id_index ON cats_dogs(dog_id, cat_id)
    #
    # The primary key and index are used so that almost all operations
    # on the table can benefit from one of the two indexes, and the primary
    # key ensures that entries in the table are unique, which is the typical
    # desire for a join table.
    #
    # You can provide column options by making the values in the hash
    # be option hashes, so long as the option hashes have a :table
    # entry giving the table referenced:
    #
    #   create_join_table(:cat_id=>{:table=>:cats, :type=>Bignum}, :dog_id=>:dogs)
    #
    # You can provide a second argument which is a table options hash:
    #
    #   create_join_table({:cat_id=>:cats, :dog_id=>:dogs}, :temp=>true)
    #
    # Some table options are handled specially:
    #
    # :index_options :: The options to pass to the index
    # :name :: The name of the table to create
    # :no_index :: Set to true not to create the second index.
    # :no_primary_key :: Set to true to not create the primary key.
    def create_join_table(hash, options=OPTS)
      keys = hash.keys.sort_by{|k| k.to_s}
      create_table(join_table_name(hash, options), options) do
        keys.each do |key|
          v = hash[key]
          unless v.is_a?(Hash)
            v = {:table=>v}
          end
          v = DEFAULT_JOIN_TABLE_COLUMN_OPTIONS.merge(v)
          foreign_key(key, v)
        end
        primary_key(keys) unless options[:no_primary_key]
        index(keys.reverse, options[:index_options] || {}) unless options[:no_index]
      end
    end

    # Creates a table with the columns given in the provided block:
    #
    #   DB.create_table :posts do
    #     primary_key :id
    #     column :title, :text
    #     String :content
    #     index :title
    #   end
    #
    # General options:
    # :as :: Create the table using the value, which should be either a
    #        dataset or a literal SQL string.  If this option is used,
    #        a block should not be given to the method.
    # :ignore_index_errors :: Ignore any errors when creating indexes.
    # :temp :: Create the table as a temporary table.
    #
    # MySQL specific options:
    # :charset :: The character set to use for the table.
    # :collate :: The collation to use for the table.
    # :engine :: The table engine to use for the table.
    #
    # PostgreSQL specific options:
    # :unlogged :: Create the table as an unlogged table.
    # :inherits :: Inherit from a different table.  An array can be
    #              specified to inherit from multiple tables.
    #
    # See Schema::Generator and the {"Schema Modification" guide}[link:files/doc/schema_modification_rdoc.html].
    def create_table(name, options=OPTS, &block)
      remove_cached_schema(name)
      options = {:generator=>options} if options.is_a?(Schema::CreateTableGenerator)
      if sql = options[:as]
        raise(Error, "can't provide both :as option and block to create_table") if block
        create_table_as(name, sql, options)
      else
        generator = options[:generator] || create_table_generator(&block)
        create_table_from_generator(name, generator, options)
        create_table_indexes_from_generator(name, generator, options)
        nil
      end
    end
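
    # A sketch of the :as option described above (the SELECT comes from the
    # given dataset; the exact SQL depends on the adapter):
    #
    #   DB.create_table(:old_posts, :as=>DB[:posts].where{stamp < Date.today - 30})
    #   # CREATE TABLE old_posts AS SELECT * FROM posts WHERE (stamp < ...)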

    # Forcibly create a table, attempting to drop it if it already exists, then creating it.
    #
    #   DB.create_table!(:a){Integer :a}
    #   # SELECT NULL FROM a LIMIT 1 -- check existence
    #   # DROP TABLE a -- drop table if already exists
    #   # CREATE TABLE a (a integer)
    def create_table!(name, options=OPTS, &block)
      drop_table?(name)
      create_table(name, options, &block)
    end

    # Creates the table unless the table already exists.
    #
    #   DB.create_table?(:a){Integer :a}
    #   # SELECT NULL FROM a LIMIT 1 -- check existence
    #   # CREATE TABLE a (a integer) -- if it doesn't already exist
    def create_table?(name, options=OPTS, &block)
      if supports_create_table_if_not_exists?
        create_table(name, options.merge(:if_not_exists=>true), &block)
      elsif !table_exists?(name)
        create_table(name, options, &block)
      end
    end

    # Return a new Schema::CreateTableGenerator instance with the receiver as
    # the database and the given block.
    def create_table_generator(&block)
      create_table_generator_class.new(self, &block)
    end

    # Creates a view, replacing a view with the same name if one already exists.
    #
    #   DB.create_or_replace_view(:some_items, "SELECT * FROM items WHERE price < 100")
    #   DB.create_or_replace_view(:some_items, DB[:items].filter(:category => 'ruby'))
    #
    # For databases where replacing a view is not natively supported, support
    # is emulated by dropping a view with the same name before creating the view.
    def create_or_replace_view(name, source, options = OPTS)
      if supports_create_or_replace_view?
        options = options.merge(:replace=>true)
      else
        drop_view(name) rescue nil
      end

      create_view(name, source, options)
    end

    # Creates a view based on a dataset or an SQL string:
    #
    #   DB.create_view(:cheap_items, "SELECT * FROM items WHERE price < 100")
    #   DB.create_view(:ruby_items, DB[:items].filter(:category => 'ruby'))
    #
    # Options:
    # :columns :: The column names to use for the view.  If not given,
    #             automatically determined based on the input dataset.
    #
    # PostgreSQL/SQLite specific option:
    # :temp :: Create a temporary view, automatically dropped on disconnect.
    #
    # PostgreSQL specific option:
    # :materialized :: Creates a materialized view, similar to a regular view,
    #                  but backed by a physical table.
    # :recursive :: Creates a recursive view.  As columns must be specified for
    #               recursive views, you can also set them as the value of this
    #               option.  Since a recursive view requires a union that isn't
    #               in a subquery, if you are providing a Dataset as the source
    #               argument, it should probably call the union method with the
    #               :all=>true and :from_self=>false options.
    def create_view(name, source, options = OPTS)
      execute_ddl(create_view_sql(name, source, options))
      remove_cached_schema(name)
      nil
    end

    # Removes a column from the specified table:
    #
    #   DB.drop_column :items, :category
    #
    # See alter_table.
    def drop_column(table, *args)
      alter_table(table) {drop_column(*args)}
    end

    # Removes an index for the given table and column/s:
    #
    #   DB.drop_index :posts, :title
    #   DB.drop_index :posts, [:author, :title]
    #
    # See alter_table.
    def drop_index(table, columns, options=OPTS)
      alter_table(table){drop_index(columns, options)}
    end

    # Drop the join table that would have been created with the
    # same arguments to create_join_table:
    #
    #   drop_join_table(:cat_id=>:cats, :dog_id=>:dogs)
    #   # DROP TABLE cats_dogs
    def drop_join_table(hash, options=OPTS)
      drop_table(join_table_name(hash, options), options)
    end
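
    # A sketch of the PostgreSQL-specific :materialized option to #create_view
    # above (PostgreSQL 9.3+ only; illustrative SQL):
    #
    #   DB.create_view(:top_items, DB[:items].order(Sequel.desc(:rating)).limit(10), :materialized=>true)
    #   # CREATE MATERIALIZED VIEW top_items AS SELECT ...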

    # Drops one or more tables corresponding to the given names:
    #
    #   DB.drop_table(:posts) # DROP TABLE posts
    #   DB.drop_table(:posts, :comments)
    #   DB.drop_table(:posts, :comments, :cascade=>true)
    def drop_table(*names)
      options = names.last.is_a?(Hash) ? names.pop : {}
      names.each do |n|
        execute_ddl(drop_table_sql(n, options))
        remove_cached_schema(n)
      end
      nil
    end

    # Drops the table if it already exists.  If it doesn't exist,
    # does nothing.
    #
    #   DB.drop_table?(:a)
    #   # SELECT NULL FROM a LIMIT 1 -- check existence
    #   # DROP TABLE a -- if it already exists
    def drop_table?(*names)
      options = names.last.is_a?(Hash) ? names.pop : {}
      if supports_drop_table_if_exists?
        options = options.merge(:if_exists=>true)
        names.each do |name|
          drop_table(name, options)
        end
      else
        names.each do |name|
          drop_table(name, options) if table_exists?(name)
        end
      end
    end

    # Drops one or more views corresponding to the given names:
    #
    #   DB.drop_view(:cheap_items)
    #   DB.drop_view(:cheap_items, :pricey_items)
    #   DB.drop_view(:cheap_items, :pricey_items, :cascade=>true)
    #
    # Options:
    # :cascade :: Also drop objects depending on this view.
    #
    # PostgreSQL specific options:
    # :if_exists :: Do not raise an error if the view does not exist.
    # :materialized :: Drop a materialized view.
    def drop_view(*names)
      options = names.last.is_a?(Hash) ? names.pop : {}
      names.each do |n|
        execute_ddl(drop_view_sql(n, options))
        remove_cached_schema(n)
      end
      nil
    end

    # Renames a table:
    #
    #   DB.tables #=> [:items]
    #   DB.rename_table :items, :old_items
    #   DB.tables #=> [:old_items]
    def rename_table(name, new_name)
      execute_ddl(rename_table_sql(name, new_name))
      remove_cached_schema(name)
      nil
    end

    # Renames a column in the specified table.  This method expects the current
    # column name and the new column name:
    #
    #   DB.rename_column :items, :cntr, :counter
    #
    # See alter_table.
    def rename_column(table, *args)
      alter_table(table) {rename_column(*args)}
    end

    # Sets the default value for the given column in the given table:
    #
    #   DB.set_column_default :items, :category, 'perl!'
    #
    # See alter_table.
    def set_column_default(table, *args)
      alter_table(table) {set_column_default(*args)}
    end

    # Set the data type for the given column in the given table:
    #
    #   DB.set_column_type :items, :price, :float
    #
    # See alter_table.
    def set_column_type(table, *args)
      alter_table(table) {set_column_type(*args)}
    end

    private

    # Apply the changes in the given alter table ops to the table given by name.
    def apply_alter_table(name, ops)
      alter_table_sql_list(name, ops).each{|sql| execute_ddl(sql)}
    end

    # Apply the operations in the given generator to the table given by name.
    def apply_alter_table_generator(name, generator)
      apply_alter_table(name, generator.operations)
    end

    # The class used for alter_table generators.
    def alter_table_generator_class
      Schema::AlterTableGenerator
    end

    # SQL fragment for given alter table operation.
    def alter_table_op_sql(table, op)
      quoted_name = quote_identifier(op[:name]) if op[:name]
      case op[:op]
      when :add_column
        "ADD COLUMN #{column_definition_sql(op)}"
      when :drop_column
        "DROP COLUMN #{quoted_name}#{' CASCADE' if op[:cascade]}"
      when :rename_column
        "RENAME COLUMN #{quoted_name} TO #{quote_identifier(op[:new_name])}"
      when :set_column_type
        "ALTER COLUMN #{quoted_name} TYPE #{type_literal(op)}"
      when :set_column_default
        "ALTER COLUMN #{quoted_name} SET DEFAULT #{literal(op[:default])}"
      when :set_column_null
        "ALTER COLUMN #{quoted_name} #{op[:null] ? 'DROP' : 'SET'} NOT NULL"
      when :add_constraint
        "ADD #{constraint_definition_sql(op)}"
      when :drop_constraint
        if op[:type] == :foreign_key
          quoted_name ||= quote_identifier(foreign_key_name(table, op[:columns]))
        end
        "DROP CONSTRAINT #{quoted_name}#{' CASCADE' if op[:cascade]}"
      else
        raise Error, "Unsupported ALTER TABLE operation: #{op[:op]}"
      end
    end

    # The SQL to execute to modify the DDL for the given table name.  op
    # should be one of the operations returned by the AlterTableGenerator.
    def alter_table_sql(table, op)
      case op[:op]
      when :add_index
        index_definition_sql(table, op)
      when :drop_index
        drop_index_sql(table, op)
      else
        "ALTER TABLE #{quote_schema_table(table)} #{alter_table_op_sql(table, op)}"
      end
    end

    # Array of SQL DDL modification statements for the given table,
    # corresponding to the DDL changes specified by the operations.
    def alter_table_sql_list(table, operations)
      if supports_combining_alter_table_ops?
        grouped_ops = []
        last_combinable = false
        operations.each do |op|
          if combinable_alter_table_op?(op)
            if sql = alter_table_op_sql(table, op)
              grouped_ops << [] unless last_combinable
              grouped_ops.last << sql
              last_combinable = true
            end
          elsif sql = alter_table_sql(table, op)
            Array(sql).each{|s| grouped_ops << s}
            last_combinable = false
          end
        end
        grouped_ops.map do |gop|
          if gop.is_a?(Array)
            "ALTER TABLE #{quote_schema_table(table)} #{gop.join(', ')}"
          else
            gop
          end
        end
      else
        operations.map{|op| alter_table_sql(table, op)}.flatten.compact
      end
    end

    # The SQL string specify the autoincrement property, generally used by
    # primary keys.
    def auto_increment_sql
      AUTOINCREMENT
    end

    # The order of the column definition, as an array of symbols.
    def column_definition_order
      self.class.const_get(:COLUMN_DEFINITION_ORDER)
    end

    # SQL DDL fragment containing the column creation SQL for the given column.
    def column_definition_sql(column)
      sql = "#{quote_identifier(column[:name])} #{type_literal(column)}"
      column_definition_order.each{|m| send(:"column_definition_#{m}_sql", sql, column)}
      sql
    end

    # Add auto increment SQL fragment to column creation SQL.
    def column_definition_auto_increment_sql(sql, column)
      sql << " #{auto_increment_sql}" if column[:auto_increment]
    end

    # Add collate SQL fragment to column creation SQL.
    def column_definition_collate_sql(sql, column)
      sql << " COLLATE #{column[:collate]}" if column[:collate]
    end

    # Add default SQL fragment to column creation SQL.
    def column_definition_default_sql(sql, column)
      sql << " DEFAULT #{literal(column[:default])}" if column.include?(:default)
    end

    # Add null/not null SQL fragment to column creation SQL.
    def column_definition_null_sql(sql, column)
      null = column.fetch(:null, column[:allow_null])
      sql << NOT_NULL if null == false
      sql << NULL if null == true
    end

    # Add primary key SQL fragment to column creation SQL.
    def column_definition_primary_key_sql(sql, column)
      if column[:primary_key]
        if name = column[:primary_key_constraint_name]
          sql << " CONSTRAINT #{quote_identifier(name)}"
        end
        sql << PRIMARY_KEY
      end
    end

    # Add foreign key reference SQL fragment to column creation SQL.
    def column_definition_references_sql(sql, column)
      if column[:table]
        if name = column[:foreign_key_constraint_name]
          sql << " CONSTRAINT #{quote_identifier(name)}"
        end
        sql << column_references_column_constraint_sql(column)
      end
    end
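
    # A sketch of the fragment built by column_definition_sql above
    # (illustrative output; identifier quoting varies by adapter, and
    # COLUMN_DEFINITION_ORDER places the default before the null modifier):
    #
    #   column_definition_sql(:name=>:num, :type=>Integer, :null=>false, :default=>0)
    #   # => "num integer DEFAULT 0 NOT NULL"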

    # Add unique constraint SQL fragment to column creation SQL.
    def column_definition_unique_sql(sql, column)
      if column[:unique]
        if name = column[:unique_constraint_name]
          sql << " CONSTRAINT #{quote_identifier(name)}"
        end
        sql << UNIQUE
      end
    end

    # SQL for all given columns, used inside a CREATE TABLE block.
    def column_list_sql(generator)
      (generator.columns.map{|c| column_definition_sql(c)} + generator.constraints.map{|c| constraint_definition_sql(c)}).join(COMMA_SEPARATOR)
    end

    # SQL DDL fragment for column foreign key references (column constraints)
    def column_references_column_constraint_sql(column)
      column_references_sql(column)
    end

    # SQL DDL fragment for column foreign key references
    def column_references_sql(column)
      sql = " REFERENCES #{quote_schema_table(column[:table])}"
      sql << "(#{Array(column[:key]).map{|x| quote_identifier(x)}.join(COMMA_SEPARATOR)})" if column[:key]
      sql << " ON DELETE #{on_delete_clause(column[:on_delete])}" if column[:on_delete]
      sql << " ON UPDATE #{on_update_clause(column[:on_update])}" if column[:on_update]
      constraint_deferrable_sql_append(sql, column[:deferrable])
      sql
    end

    # SQL DDL fragment for table foreign key references (table constraints)
    def column_references_table_constraint_sql(constraint)
      "FOREIGN KEY #{literal(constraint[:columns])}#{column_references_sql(constraint)}"
    end

    # Whether the given alter table operation is combinable.
    def combinable_alter_table_op?(op)
      # Use a dynamic lookup for easier overriding in adapters
      COMBINABLE_ALTER_TABLE_OPS.include?(op[:op])
    end

    # SQL DDL fragment specifying a constraint on a table.
    def constraint_definition_sql(constraint)
      sql = constraint[:name] ? "CONSTRAINT #{quote_identifier(constraint[:name])} " : ""
      case constraint[:type]
      when :check
        check = constraint[:check]
        check = check.first if check.is_a?(Array) && check.length == 1
        check = filter_expr(check)
        check = "(#{check})" unless check[0..0] == '(' && check[-1..-1] == ')'
        sql << "CHECK #{check}"
      when :primary_key
        sql << "PRIMARY KEY #{literal(constraint[:columns])}"
      when :foreign_key
        sql << column_references_table_constraint_sql(constraint.merge(:deferrable=>nil))
      when :unique
        sql << "UNIQUE #{literal(constraint[:columns])}"
      else
        raise Error, "Invalid constraint type #{constraint[:type]}, should be :check, :primary_key, :foreign_key, or :unique"
      end
      constraint_deferrable_sql_append(sql, constraint[:deferrable])
      sql
    end

    # SQL DDL fragment specifying the deferrable constraint attributes.
    def constraint_deferrable_sql_append(sql, defer)
      case defer
      when nil
      when false
        sql << ' NOT DEFERRABLE'
      when :immediate
        sql << ' DEFERRABLE INITIALLY IMMEDIATE'
      else
        sql << ' DEFERRABLE INITIALLY DEFERRED'
      end
    end

    # Execute the create table statements using the generator.
    def create_table_from_generator(name, generator, options)
      execute_ddl(create_table_sql(name, generator, options))
    end

    # The class used for create_table generators.
    def create_table_generator_class
      Schema::CreateTableGenerator
    end

    # Execute the create index statements using the generator.
    def create_table_indexes_from_generator(name, generator, options)
      e = options[:ignore_index_errors] || options[:if_not_exists]
      generator.indexes.each do |index|
        begin
          index_sql_list(name, [index]).each{|sql| execute_ddl(sql)}
        rescue Error
          raise unless e
        end
      end
    end

    # DDL statement for creating a table with the given name, columns, and options
    def create_table_sql(name, generator, options)
      unless supports_named_column_constraints?
        # Split column constraints into table constraints if they have a name
        generator.columns.each do |c|
          if (constraint_name = c.delete(:foreign_key_constraint_name)) && (table = c.delete(:table))
            opts = {}
            opts[:name] = constraint_name
            [:key, :on_delete, :on_update, :deferrable].each{|k| opts[k] = c[k]}
            generator.foreign_key([c[:name]], table, opts)
          end
          if (constraint_name = c.delete(:unique_constraint_name)) && c.delete(:unique)
            generator.unique(c[:name], :name=>constraint_name)
          end
          if (constraint_name = c.delete(:primary_key_constraint_name)) && c.delete(:primary_key)
            generator.primary_key([c[:name]], :name=>constraint_name)
          end
        end
      end
      "#{create_table_prefix_sql(name, options)} (#{column_list_sql(generator)})"
    end

    # Run a command to create the table with the given name from the given
    # SELECT sql statement.
    def create_table_as(name, sql, options)
      sql = sql.sql if sql.is_a?(Sequel::Dataset)
      run(create_table_as_sql(name, sql, options))
    end

    # DDL statement for creating a table from the result of a SELECT statement.
    # +sql+ should be a string representing a SELECT query.
    def create_table_as_sql(name, sql, options)
      "#{create_table_prefix_sql(name, options)} AS #{sql}"
    end

    # DDL fragment for initial part of CREATE TABLE statement
    def create_table_prefix_sql(name, options)
      "CREATE #{temporary_table_sql if options[:temp]}TABLE#{' IF NOT EXISTS' if options[:if_not_exists]} #{options[:temp] ? quote_identifier(name) : quote_schema_table(name)}"
    end

    # DDL fragment for initial part of CREATE VIEW statement
    def create_view_prefix_sql(name, options)
      create_view_sql_append_columns("CREATE #{'OR REPLACE ' if options[:replace]}VIEW #{quote_schema_table(name)}", options[:columns])
    end

    # DDL statement for creating a view.
    def create_view_sql(name, source, options)
      source = source.sql if source.is_a?(Dataset)
      "#{create_view_prefix_sql(name, options)} AS #{source}"
    end

    # Append the column list to the SQL, if a column list is given.
    def create_view_sql_append_columns(sql, columns)
      if columns
        sql << ' ('
        schema_utility_dataset.send(:identifier_list_append, sql, columns)
        sql << ')'
      end
      sql
    end

    # Default index name for the table and columns, may be too long
    # for certain databases.
    def default_index_name(table_name, columns)
      schema, table = schema_and_table(table_name)
      "#{"#{schema}_" if schema}#{table}_#{columns.map{|c| [String, Symbol].any?{|cl| c.is_a?(cl)} ? c : literal(c).gsub(/\W/, '_')}.join(UNDERSCORE)}_index"
    end

    # Get foreign key name for given table and columns.
    def foreign_key_name(table_name, columns)
      keys = foreign_key_list(table_name).select{|key| key[:columns] == columns}
      raise(Error, "#{keys.empty? ? 'Missing' : 'Ambiguous'} foreign key for #{columns.inspect}") unless keys.size == 1
      keys.first[:name]
    end

    # The SQL to drop an index for the table.
    def drop_index_sql(table, op)
      "DROP INDEX #{quote_identifier(op[:name] || default_index_name(table, op[:columns]))}"
    end

    # SQL DDL statement to drop the table with the given name.
    def drop_table_sql(name, options)
      "DROP TABLE#{' IF EXISTS' if options[:if_exists]} #{quote_schema_table(name)}#{' CASCADE' if options[:cascade]}"
    end

    # SQL DDL statement to drop a view with the given name.
    def drop_view_sql(name, options)
      "DROP VIEW #{quote_schema_table(name)}#{' CASCADE' if options[:cascade]}"
    end

    # Proxy the filter_expr call to the dataset, used for creating constraints.
    def filter_expr(*args, &block)
      schema_utility_dataset.literal(schema_utility_dataset.send(:filter_expr, *args, &block))
    end

    # SQL DDL statement for creating an index for the table with the given name
    # and index specifications.
    def index_definition_sql(table_name, index)
      index_name = index[:name] || default_index_name(table_name, index[:columns])
      if index[:type]
        raise Error, "Index types are not supported for this database"
      elsif index[:where]
        raise Error, "Partial indexes are not supported for this database"
      else
        "CREATE #{'UNIQUE ' if index[:unique]}INDEX #{quote_identifier(index_name)} ON #{quote_schema_table(table_name)} #{literal(index[:columns])}"
      end
    end

    # Array of SQL DDL statements, one for each index specification,
    # for the given table.
    def index_sql_list(table_name, indexes)
      indexes.map{|i| index_definition_sql(table_name, i)}
    end

    # Extract the join table name from the arguments given to create_join_table.
    # Also does argument validation for the create_join_table method.
    def join_table_name(hash, options)
      entries = hash.values
      raise Error, "must have 2 entries in hash given to (create|drop)_join_table" unless entries.length == 2
      if options[:name]
        options[:name]
      else
        table_names = entries.map{|e| join_table_name_extract(e)}
        table_names.map{|t| t.to_s}.sort.join('_')
      end
    end

    # Extract an individual join table name, which should either be a string
    # or symbol, or a hash containing one of those as the value for :table.
    def join_table_name_extract(entry)
      case entry
      when Symbol, String
        entry
      when Hash
        join_table_name_extract(entry[:table])
      else
        raise Error, "can't extract table name from #{entry.inspect}"
      end
    end

    # SQL DDL ON DELETE fragment to use, based on the given action.
    # The following actions are recognized:
    #
    # * :cascade - Delete rows referencing this row.
    # * :no_action (default) - Raise an error if other rows reference this
    #   row, allow deferring of the integrity check.
    # * :restrict - Raise an error if other rows reference this row,
    #   but do not allow deferring the integrity check.
    # * :set_default - Set columns referencing this row to their default value.
    # * :set_null - Set columns referencing this row to NULL.
    #
    # Any other object given is just converted to a string, with "_" converted to " " and upcased.
    def on_delete_clause(action)
      action.to_s.gsub("_", " ").upcase
    end

    # Alias of #on_delete_clause, since the two usually behave the same.
    def on_update_clause(action)
      on_delete_clause(action)
    end

    # Proxy the quote_schema_table method to the dataset
    def quote_schema_table(table)
      schema_utility_dataset.quote_schema_table(table)
    end

    # SQL DDL statement for renaming a table.
    def rename_table_sql(name, new_name)
      "ALTER TABLE #{quote_schema_table(name)} RENAME TO #{quote_schema_table(new_name)}"
    end

    # Split the schema information from the table
    def schema_and_table(table_name)
      schema_utility_dataset.schema_and_table(table_name)
    end

    # Return true if the given column schema represents an autoincrementing primary key.
    def schema_autoincrementing_primary_key?(schema)
      !!(schema[:primary_key] && schema[:db_type] =~ /int/io)
    end

    # The dataset to use for proxying certain schema methods.
    def schema_utility_dataset
      @default_dataset
    end

    # Split the schema information from the table
    def split_qualifiers(table_name)
      schema_utility_dataset.split_qualifiers(table_name)
    end

    # SQL DDL fragment for temporary table
    def temporary_table_sql
      self.class.const_get(:TEMPORARY)
    end

    # SQL fragment specifying the type of a given column,
    # considering the type with possible modifiers.
    def type_literal(column)
      column[:type].is_a?(Class) ? type_literal_generic(column) : type_literal_specific(column)
    end

    # SQL fragment specifying the full type of a column from a generic
    # ruby class.
    def type_literal_generic(column)
      meth = "type_literal_generic_#{column[:type].name.to_s.downcase}"
      if respond_to?(meth, true)
        send(meth, column)
      else
        raise Error, "Unsupported ruby class used as database type: #{column[:type]}"
      end
    end

    # Alias for type_literal_generic_numeric, to make overriding in a subclass easier.
    def type_literal_generic_bigdecimal(column)
      type_literal_generic_numeric(column)
    end

    # Sequel uses the bigint type by default for Bignums.
    def type_literal_generic_bignum(column)
      :bigint
    end

    # Sequel uses the date type by default for Dates.
    def type_literal_generic_date(column)
      :date
    end

    # Sequel uses the timestamp type by default for DateTimes.
    def type_literal_generic_datetime(column)
      :timestamp
    end

    # Alias for type_literal_generic_trueclass, to make overriding in a subclass easier.
    def type_literal_generic_falseclass(column)
      type_literal_generic_trueclass(column)
    end

    # Sequel uses the blob type by default for Files.
    def type_literal_generic_file(column)
      :blob
    end

    # Alias for type_literal_generic_integer, to make overriding in a subclass easier.
    def type_literal_generic_fixnum(column)
      type_literal_generic_integer(column)
    end

    # Sequel uses the double precision type by default for Floats.
    def type_literal_generic_float(column)
      :"double precision"
    end

    # Sequel uses the integer type by default for integers
    def type_literal_generic_integer(column)
      :integer
    end

    # Sequel uses the numeric type by default for Numerics and BigDecimals.
    # If a size is given, it is used, otherwise, it will default to whatever
    # the database default is for an unsized value.
    def type_literal_generic_numeric(column)
      column[:size] ? "numeric(#{Array(column[:size]).join(', ')})" : :numeric
    end

    # Sequel uses the varchar type by default for Strings.  If a
    # size isn't present, Sequel assumes a size of 255.  If the
    # :fixed option is used, Sequel uses the char type.  If the
    # :text option is used, Sequel uses the :text type.
    def type_literal_generic_string(column)
      if column[:text]
        uses_clob_for_text? ? :clob : :text
      elsif column[:fixed]
        "char(#{column[:size]||default_string_column_size})"
      else
        "varchar(#{column[:size]||default_string_column_size})"
      end
    end

    # Sequel uses the timestamp type by default for Time values.
    # If the :only_time option is used, the time type is used.
    def type_literal_generic_time(column)
      column[:only_time] ? :time : :timestamp
    end

    # Sequel uses the boolean type by default for TrueClass and FalseClass.
    def type_literal_generic_trueclass(column)
      :boolean
    end

    # SQL fragment for the given type of a column if the column is not one of the
    # generic types specified with a ruby class.
    def type_literal_specific(column)
      type = column[:type]
      type = "double precision" if type.to_s == 'double'
      column[:size] ||= default_string_column_size if type.to_s == 'varchar'
      elements = column[:size] || column[:elements]
      "#{type}#{literal(Array(elements)) if elements}#{UNSIGNED if column[:unsigned]}"
    end
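
    # A sketch of the generic type mapping implemented above (illustrative
    # results; defaults such as the string column size vary by adapter):
    #
    #   type_literal(:type=>String)              # => "varchar(255)"
    #   type_literal(:type=>String, :text=>true) # => :text
    #   type_literal(:type=>Bignum)              # => :bigint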

    # Whether clob should be used for String :text=>true columns.
    def uses_clob_for_text?
      false
    end
  end
end

ruby-sequel-4.1.1/lib/sequel/database/transactions.rb

module Sequel
  class Database
    # ---------------------
    # :section: 8 - Methods related to database transactions
    # Database transactions make multiple queries atomic, so
    # that either all of the queries take effect or none of
    # them do.
    # ---------------------

    SQL_BEGIN = 'BEGIN'.freeze
    SQL_COMMIT = 'COMMIT'.freeze
    SQL_RELEASE_SAVEPOINT = 'RELEASE SAVEPOINT autopoint_%d'.freeze
    SQL_ROLLBACK = 'ROLLBACK'.freeze
    SQL_ROLLBACK_TO_SAVEPOINT = 'ROLLBACK TO SAVEPOINT autopoint_%d'.freeze
    SQL_SAVEPOINT = 'SAVEPOINT autopoint_%d'.freeze

    TRANSACTION_BEGIN = 'Transaction.begin'.freeze
    TRANSACTION_COMMIT = 'Transaction.commit'.freeze
    TRANSACTION_ROLLBACK = 'Transaction.rollback'.freeze

    TRANSACTION_ISOLATION_LEVELS = {:uncommitted=>'READ UNCOMMITTED'.freeze,
      :committed=>'READ COMMITTED'.freeze,
      :repeatable=>'REPEATABLE READ'.freeze,
      :serializable=>'SERIALIZABLE'.freeze}

    # The default transaction isolation level for this database,
    # used for all future transactions.  For MSSQL, this should be set
    # to something if you ever plan to use the :isolation option to
    # Database#transaction, as on MSSQL it affects all future transactions
    # on the same connection.
    attr_accessor :transaction_isolation_level

    # Starts a database transaction.  When a database transaction is used,
    # either all statements are successful or none of the statements are
    # successful.  Note that MySQL MyISAM tables do not support transactions.
    #
    # The following general options are respected:
    #
    # :isolation :: The transaction isolation level to use for this transaction,
    #               should be :uncommitted, :committed, :repeatable, or :serializable,
    #               used if given and the database/adapter supports customizable
    #               transaction isolation levels.
    # :num_retries :: The number of times to retry if the :retry_on option is used.
    #                 The default is 5 times.  Can be set to nil to retry indefinitely,
    #                 but that is not recommended.
    # :prepare :: A string to use as the transaction identifier for a
    #             prepared transaction (two-phase commit), if the database/adapter
    #             supports prepared transactions.
    # :retry_on :: An exception class or array of exception classes for which to
    #              automatically retry the transaction.  Can only be set if not inside
    #              an existing transaction.
    #              Note that this should not be used unless the entire transaction
    #              block is idempotent, as otherwise it can cause non-idempotent
    #              behavior to execute multiple times.
    # :rollback :: Can be set to :reraise to reraise any Sequel::Rollback exceptions
    #              raised, or :always to always rollback even if no exceptions occur
    #              (useful for testing).
    # :server :: The server to use for the transaction.
    # :savepoint :: Whether to create a new savepoint for this transaction,
    #               only respected if the database/adapter supports savepoints.  By
    #               default Sequel will reuse an existing transaction, so if you want to
    #               use a savepoint you must use this option.
    #
    # PostgreSQL specific options:
    #
    # :deferrable :: (9.1+) If present, set to DEFERRABLE if true or NOT DEFERRABLE if false.
    # :read_only :: If present, set to READ ONLY if true or READ WRITE if false.
    # :synchronous :: if non-nil, set synchronous_commit
    #                 appropriately.  Valid values true, :on, false, :off, :local (9.1+),
    #                 and :remote_write (9.2+).
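    #
    # A sketch of nested transaction/savepoint usage (the SQL shown assumes
    # savepoint support, matching the SQL_SAVEPOINT constants above):
    #
    #   DB.transaction do                      # BEGIN
    #     DB.transaction(:savepoint=>true) do  # SAVEPOINT autopoint_1
    #       # ...
    #     end                                  # RELEASE SAVEPOINT autopoint_1
    #   end                                    # COMMIT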
    def transaction(opts=OPTS, &block)
      if retry_on = opts[:retry_on]
        num_retries = opts.fetch(:num_retries, 5)
        begin
          transaction(opts.merge(:retry_on=>nil, :retrying=>true), &block)
        rescue *retry_on
          if num_retries
            num_retries -= 1
            retry if num_retries >= 0
          else
            retry
          end
          raise
        end
      else
        synchronize(opts[:server]) do |conn|
          if already_in_transaction?(conn, opts)
            if opts[:retrying]
              raise Sequel::Error, "cannot set :retry_on options if you are already inside a transaction"
            end
            return yield(conn)
          end
          _transaction(conn, opts, &block)
        end
      end
    end

    private

    # Internal generic transaction method.  Any exception raised by the given
    # block will cause the transaction to be rolled back.  If the exception is
    # not a Sequel::Rollback, the error will be reraised.  If no exception occurs
    # inside the block, the transaction is committed.
    def _transaction(conn, opts=OPTS)
      rollback = opts[:rollback]
      begin
        add_transaction(conn, opts)
        begin_transaction(conn, opts)
        if rollback == :always
          begin
            yield(conn)
          rescue Exception => e1
            raise e1
          ensure
            raise ::Sequel::Rollback unless e1
          end
        else
          yield(conn)
        end
      rescue Exception => e
        begin
          rollback_transaction(conn, opts)
        rescue Exception => e3
          raise_error(e3, :classes=>database_error_classes, :conn=>conn)
        end
        transaction_error(e, :conn=>conn, :rollback=>rollback)
      ensure
        begin
          committed = commit_or_rollback_transaction(e, conn, opts)
        rescue Exception => e2
          begin
            raise_error(e2, :classes=>database_error_classes, :conn=>conn)
          rescue Sequel::DatabaseError => e4
            begin
              rollback_transaction(conn, opts)
            ensure
              raise e4
            end
          end
        ensure
          remove_transaction(conn, committed)
        end
      end
    end

    # Synchronize access to the current transactions, returning the hash
    # of options for the current transaction (if any)
    def _trans(conn)
      Sequel.synchronize{@transactions[conn]}
    end

    # Add the current thread to the list of active transactions
    def add_transaction(conn, opts)
      if supports_savepoints?
        unless _trans(conn)
          if (prep = opts[:prepare]) && supports_prepared_transactions?
            Sequel.synchronize{@transactions[conn] = {:savepoint_level=>0, :prepare=>prep}}
          else
            Sequel.synchronize{@transactions[conn] = {:savepoint_level=>0}}
          end
        end
      elsif (prep = opts[:prepare]) && supports_prepared_transactions?
        Sequel.synchronize{@transactions[conn] = {:prepare => prep}}
      else
        Sequel.synchronize{@transactions[conn] = {}}
      end
    end

    # Call all stored after_commit blocks for the given transaction
    def after_transaction_commit(conn)
      if ary = _trans(conn)[:after_commit]
        ary.each{|b| b.call}
      end
    end

    # Call all stored after_rollback blocks for the given transaction
    def after_transaction_rollback(conn)
      if ary = _trans(conn)[:after_rollback]
        ary.each{|b| b.call}
      end
    end

    # Whether the current thread/connection is already inside a transaction
    def already_in_transaction?(conn, opts)
      _trans(conn) && (!supports_savepoints? || !opts[:savepoint])
    end

    # SQL to start a new savepoint
    def begin_savepoint_sql(depth)
      SQL_SAVEPOINT % depth
    end

    # Start a new database transaction on the given connection
    def begin_new_transaction(conn, opts)
      log_connection_execute(conn, begin_transaction_sql)
      set_transaction_isolation(conn, opts)
    end

    # Start a new database transaction or a new savepoint on the given connection.
    def begin_transaction(conn, opts=OPTS)
      if supports_savepoints?
        th = _trans(conn)
        if (depth = th[:savepoint_level]) > 0
          log_connection_execute(conn, begin_savepoint_sql(depth))
        else
          begin_new_transaction(conn, opts)
        end
        th[:savepoint_level] += 1
      else
        begin_new_transaction(conn, opts)
      end
    end
def begin_transaction_sql SQL_BEGIN end if (! defined?(RUBY_ENGINE) or RUBY_ENGINE == 'ruby' or RUBY_ENGINE == 'rbx') and RUBY_VERSION < '1.9' # :nocov: # Whether to commit the current transaction. On ruby 1.8 and rubinius, # Thread.current.status is checked because Thread#kill skips rescue # blocks (so exception would be nil), but the transaction should # still be rolled back. def commit_or_rollback_transaction(exception, conn, opts) if exception false else if Thread.current.status == 'aborting' rollback_transaction(conn, opts) false else commit_transaction(conn, opts) true end end end # :nocov: else # Whether to commit the current transaction. On ruby 1.9 and JRuby, # transactions will be committed if Thread#kill is used on a thread # that has a transaction open, and there isn't a workaround. def commit_or_rollback_transaction(exception, conn, opts) if exception false else commit_transaction(conn, opts) true end end end # SQL to commit a savepoint def commit_savepoint_sql(depth) SQL_RELEASE_SAVEPOINT % depth end # Commit the active transaction on the connection def commit_transaction(conn, opts=OPTS) if supports_savepoints? depth = _trans(conn)[:savepoint_level] log_connection_execute(conn, depth > 1 ? commit_savepoint_sql(depth-1) : commit_transaction_sql) else log_connection_execute(conn, commit_transaction_sql) end end # SQL to COMMIT a transaction. def commit_transaction_sql SQL_COMMIT end # Method called on the connection object to execute SQL on the database, # used by the transaction code. def connection_execute_method :execute end # Remove the current thread from the list of active transactions def remove_transaction(conn, committed) if !supports_savepoints? || ((_trans(conn)[:savepoint_level] -= 1) <= 0) begin if committed after_transaction_commit(conn) else after_transaction_rollback(conn) end ensure Sequel.synchronize{@transactions.delete(conn)} end end end # SQL to rollback to a savepoint def rollback_savepoint_sql(depth) SQL_ROLLBACK_TO_SAVEPOINT % depth end # Rollback the active transaction on the connection def rollback_transaction(conn, opts=OPTS) if supports_savepoints? depth = _trans(conn)[:savepoint_level] log_connection_execute(conn, depth > 1 ? rollback_savepoint_sql(depth-1) : rollback_transaction_sql) else log_connection_execute(conn, rollback_transaction_sql) end end # SQL to ROLLBACK a transaction. def rollback_transaction_sql SQL_ROLLBACK end # Set the transaction isolation level on the given connection def set_transaction_isolation(conn, opts) if supports_transaction_isolation_levels? and level = opts.fetch(:isolation, transaction_isolation_level) log_connection_execute(conn, set_transaction_isolation_sql(level)) end end # SQL to set the transaction isolation level def set_transaction_isolation_sql(level) "SET TRANSACTION ISOLATION LEVEL #{TRANSACTION_ISOLATION_LEVELS[level]}" end # Raise a database error unless the exception is a Rollback. def transaction_error(e, opts=OPTS) if e.is_a?(Rollback) raise e if opts[:rollback] == :reraise else raise_error(e, opts.merge(:classes=>database_error_classes)) end end end end ruby-sequel-4.1.1/lib/sequel/dataset.rb000066400000000000000000000030131220156535500200110ustar00rootroot00000000000000module Sequel # A dataset represents an SQL query, or more generally, an abstract # set of rows in the database. Datasets # can be used to create, retrieve, update and delete records.
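# For instance, all four operations on a purely hypothetical :posts table: # # ds = DB[:posts] # ds.insert(:title=>'a') # create # ds.first # retrieve # ds.where(:id=>1).update(:title=>'b') # update # ds.where(:id=>1).delete # delete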
# # Query results are always retrieved on demand, so a dataset can be kept # around and reused indefinitely (datasets never cache results): # # my_posts = DB[:posts].filter(:author => 'david') # no records are retrieved # my_posts.all # records are retrieved # my_posts.all # records are retrieved again # # Most dataset methods return modified copies of the dataset (functional style), so you can # reuse different datasets to access data: # # posts = DB[:posts] # davids_posts = posts.filter(:author => 'david') # old_posts = posts.filter('stamp < ?', Date.today - 7) # davids_old_posts = davids_posts.filter('stamp < ?', Date.today - 7) # # Datasets are Enumerable objects, so they can be manipulated using any # of the Enumerable methods, such as map, inject, etc. # # For more information, see the {"Dataset Basics" guide}[link:files/doc/dataset_basics_rdoc.html]. class Dataset OPTS = Sequel::OPTS include Enumerable include SQL::AliasMethods include SQL::BooleanMethods include SQL::CastMethods include SQL::ComplexExpressionMethods include SQL::InequalityMethods include SQL::NumericMethods include SQL::OrderMethods include SQL::StringMethods end require(%w"query actions features graph prepared_statements misc mutation sql", 'dataset') end ruby-sequel-4.1.1/lib/sequel/dataset/000077500000000000000000000000001220156535500174675ustar00rootroot00000000000000ruby-sequel-4.1.1/lib/sequel/dataset/actions.rb000066400000000000000000001023001220156535500214500ustar00rootroot00000000000000module Sequel class Dataset # --------------------- # :section: 2 - Methods that execute code on the database # These methods all execute the dataset's SQL on the database. # They don't return modified datasets, so if used in a method chain # they should be the last method called. # --------------------- # Action methods defined by Sequel that execute code on the database. ACTION_METHODS = (<<-METHS).split.map{|x| x.to_sym} << [] []= all avg count columns columns! delete each empty? fetch_rows first first! get import insert insert_multiple interval last map max min multi_insert paged_each range select_hash select_hash_groups select_map select_order_map set single_record single_value sum to_csv to_hash to_hash_groups truncate update METHS # REMOVE40 []= insert_multiple set to_csv # Inserts the given argument into the database. Returns self so it # can be used safely when chaining: # # DB[:items] << {:id=>0, :name=>'Zero'} << DB[:old_items].select(:id, :name) def <<(arg) insert(arg) self end # Returns the first record matching the conditions. Examples: # # DB[:table][:id=>1] # SELECT * FROM table WHERE (id = 1) LIMIT 1 # # => {:id=>1} def [](*conditions) raise(Error, ARRAY_ACCESS_ERROR_MSG) if (conditions.length == 1 and conditions.first.is_a?(Integer)) or conditions.length == 0 first(*conditions) end # Returns an array with all records in the dataset. If a block is given, # the array is iterated over after all items have been loaded. # # DB[:table].all # SELECT * FROM table # # => [{:id=>1, ...}, {:id=>2, ...}, ...] # # # Iterate over all rows in the table # DB[:table].all{|row| p row} def all(&block) a = [] each{|r| a << r} post_load(a) a.each(&block) if block a end # Returns the average value for the given column/expression. # Uses a virtual row block if no argument is given.
# # DB[:table].avg(:number) # SELECT avg(number) FROM table LIMIT 1 # # => 3 # DB[:table].avg{function(column)} # SELECT avg(function(column)) FROM table LIMIT 1 # # => 1 def avg(column=Sequel.virtual_row(&Proc.new)) aggregate_dataset.get{avg(column).as(:avg)} end # Returns the columns in the result set in order as an array of symbols. # If the columns are currently cached, returns the cached value. Otherwise, # a SELECT query is performed to retrieve a single row in order to get the columns. # # If you are looking for all columns for a single table and maybe some information about # each column (e.g. database type), see Database#schema. # # DB[:table].columns # # => [:id, :name] def columns return @columns if @columns ds = unfiltered.unordered.naked.clone(:distinct => nil, :limit => 1, :offset=>nil) ds.each{break} @columns = ds.instance_variable_get(:@columns) @columns || [] end # Ignore any cached column information and perform a query to retrieve # a row in order to get the columns. # # DB[:table].columns! # # => [:id, :name] def columns! @columns = nil columns end # Returns the number of records in the dataset. If an argument is provided, # it is used as the argument to count. If a block is provided, it is # treated as a virtual row, and the result is used as the argument to # count. # # DB[:table].count # SELECT count(*) AS count FROM table LIMIT 1 # # => 3 # DB[:table].count(:column) # SELECT count(column) AS count FROM table LIMIT 1 # # => 2 # DB[:table].count{foo(column)} # SELECT count(foo(column)) AS count FROM table LIMIT 1 # # => 1 def count(arg=(no_arg=true), &block) if no_arg if block arg = Sequel.virtual_row(&block) aggregate_dataset.get{count(arg).as(count)} else aggregate_dataset.get{count(:*){}.as(count)}.to_i end elsif block raise Error, 'cannot provide both argument and block to Dataset#count' else aggregate_dataset.get{count(arg).as(count)} end end # Deletes the records in the dataset. The returned value should be # the number of records deleted, but that is adapter dependent. # # DB[:table].delete # DELETE FROM table # # => 3 def delete(&block) sql = delete_sql if uses_returning?(:delete) returning_fetch_rows(sql, &block) else execute_dui(sql) end end # Iterates over the records in the dataset as they are yielded from the # database adapter, and returns self. # # DB[:table].each{|row| p row} # SELECT * FROM table # # Note that this method is not safe to use on many adapters if you are # running additional queries inside the provided block. If you are # running queries inside the block, you should use +all+ instead of +each+ # for the outer queries, or use a separate thread or shard inside +each+. def each if row_proc = @row_proc fetch_rows(select_sql){|r| yield row_proc.call(r)} else fetch_rows(select_sql){|r| yield r} end self end # Returns true if no records exist in the dataset, false otherwise. # # DB[:table].empty? # SELECT 1 AS one FROM table LIMIT 1 # # => false def empty? get(Sequel::SQL::AliasedExpression.new(1, :one)).nil? end # If an integer argument is given, it is interpreted as a limit, and then returns all # matching records up to that limit. If no argument is passed, # it returns the first matching record. If any other type of # argument(s) is passed, it is given to filter and the # first matching record is returned. If a block is given, it is used # to filter the dataset before returning anything. # # If there are no records in the dataset, returns nil (or an empty # array if an integer argument is given).
# # Examples: # # DB[:table].first # SELECT * FROM table LIMIT 1 # # => {:id=>7} # # DB[:table].first(2) # SELECT * FROM table LIMIT 2 # # => [{:id=>6}, {:id=>4}] # # DB[:table].first(:id=>2) # SELECT * FROM table WHERE (id = 2) LIMIT 1 # # => {:id=>2} # # DB[:table].first("id = 3") # SELECT * FROM table WHERE (id = 3) LIMIT 1 # # => {:id=>3} # # DB[:table].first("id = ?", 4) # SELECT * FROM table WHERE (id = 4) LIMIT 1 # # => {:id=>4} # # DB[:table].first{id > 2} # SELECT * FROM table WHERE (id > 2) LIMIT 1 # # => {:id=>5} # # DB[:table].first("id > ?", 4){id < 6} # SELECT * FROM table WHERE ((id > 4) AND (id < 6)) LIMIT 1 # # => {:id=>5} # # DB[:table].first(2){id < 2} # SELECT * FROM table WHERE (id < 2) LIMIT 2 # # => [{:id=>1}] def first(*args, &block) ds = block ? filter(&block) : self if args.empty? ds.single_record else args = (args.size == 1) ? args.first : args if args.is_a?(Integer) ds.limit(args).all else ds.filter(args).single_record end end end # Calls first. If first returns nil (signaling that no # row matches), raises a Sequel::NoMatchingRow exception. def first!(*args, &block) first(*args, &block) || raise(Sequel::NoMatchingRow) end # Return the column value for the first matching record in the dataset. # Raises an error if both an argument and a block are given. # # DB[:table].get(:id) # SELECT id FROM table LIMIT 1 # # => 3 # # ds.get{sum(id)} # SELECT sum(id) AS v FROM table LIMIT 1 # # => 6 # # You can pass an array of arguments to return multiple values, # but you must make sure each element in the array has an alias that # Sequel can determine: # # DB[:table].get([:id, :name]) # SELECT id, name FROM table LIMIT 1 # # => [3, 'foo'] # # DB[:table].get{[sum(id).as(sum), name]} # SELECT sum(id) AS sum, name FROM table LIMIT 1 # # => [6, 'foo'] def get(column=(no_arg=true; nil), &block) ds = naked if block raise(Error, ARG_BLOCK_ERROR_MSG) unless no_arg ds = ds.select(&block) column = ds.opts[:select] column = nil if column.is_a?(Array) && column.length < 2 else ds = if column.is_a?(Array) ds.select(*column) else ds.select(auto_alias_expression(column)) end end if column.is_a?(Array) if r = ds.single_record r.values_at(*hash_key_symbols(column)) end else ds.single_value end end # Inserts multiple records into the associated table. This method can be # used to efficiently insert a large number of records into a table in a # single query if the database supports it. Inserts # are automatically wrapped in a transaction. # # This method is called with a columns array and an array of value arrays: # # DB[:table].import([:x, :y], [[1, 2], [3, 4]]) # # INSERT INTO table (x, y) VALUES (1, 2) # # INSERT INTO table (x, y) VALUES (3, 4) # # This method also accepts a dataset instead of an array of value arrays: # # DB[:table].import([:x, :y], DB[:table2].select(:a, :b)) # # INSERT INTO table (x, y) SELECT a, b FROM table2 # # Options: # :commit_every :: Open a new transaction for every given number of records. # For example, if you provide a value of 50, Sequel will commit # after every 50 records. # :server :: Set the server/shard to use for the transaction and insert # queries. # :slice :: Same as :commit_every, :commit_every takes precedence. def import(columns, values, opts=OPTS) return @db.transaction{insert(columns, values)} if values.is_a?(Dataset) return if values.empty? raise(Error, IMPORT_ERROR_MSG) if columns.empty? ds = opts[:server] ?
server(opts[:server]) : self if slice_size = opts[:commit_every] || opts[:slice] offset = 0 rows = [] while offset < values.length rows << ds._import(columns, values[offset, slice_size], opts) offset += slice_size end rows.flatten else ds._import(columns, values, opts) end end # Inserts values into the associated table. The returned value is generally # the value of the primary key for the inserted row, but that is adapter dependent. # # +insert+ handles a number of different argument formats: # no arguments or single empty hash :: Uses DEFAULT VALUES # single hash :: Most common format, treats keys as columns and values as values # single array :: Treats entries as values, with no columns # two arrays :: Treats first array as columns, second array as values # single Dataset :: Treats as an insert based on a selection from the dataset given, # with no columns # array and dataset :: Treats as an insert based on a selection from the dataset # given, with the columns given by the array. # # Examples: # # DB[:items].insert # # INSERT INTO items DEFAULT VALUES # # DB[:items].insert({}) # # INSERT INTO items DEFAULT VALUES # # DB[:items].insert([1,2,3]) # # INSERT INTO items VALUES (1, 2, 3) # # DB[:items].insert([:a, :b], [1,2]) # # INSERT INTO items (a, b) VALUES (1, 2) # # DB[:items].insert(:a => 1, :b => 2) # # INSERT INTO items (a, b) VALUES (1, 2) # # DB[:items].insert(DB[:old_items]) # # INSERT INTO items SELECT * FROM old_items # # DB[:items].insert([:a, :b], DB[:old_items]) # # INSERT INTO items (a, b) SELECT * FROM old_items def insert(*values, &block) sql = insert_sql(*values) if uses_returning?(:insert) returning_fetch_rows(sql, &block) else execute_insert(sql) end end # Returns the interval between minimum and maximum values for the given # column/expression. Uses a virtual row block if no argument is given. # # DB[:table].interval(:id) # SELECT (max(id) - min(id)) FROM table LIMIT 1 # # => 6 # DB[:table].interval{function(column)} # SELECT (max(function(column)) - min(function(column))) FROM table LIMIT 1 # # => 7 def interval(column=Sequel.virtual_row(&Proc.new)) aggregate_dataset.get{(max(column) - min(column)).as(:interval)} end # Reverses the order and then runs #first with the given arguments and block. Note that this # will not necessarily give you the last record in the dataset, # unless you have an unambiguous order. If there is not # currently an order for this dataset, raises an +Error+. # # DB[:table].order(:id).last # SELECT * FROM table ORDER BY id DESC LIMIT 1 # # => {:id=>10} # # DB[:table].order(Sequel.desc(:id)).last(2) # SELECT * FROM table ORDER BY id ASC LIMIT 2 # # => [{:id=>1}, {:id=>2}] def last(*args, &block) raise(Error, 'No order specified') unless @opts[:order] reverse.first(*args, &block) end # Maps column values for each record in the dataset (if a column name is # given), or performs the stock mapping functionality of +Enumerable+ otherwise. # Raises an +Error+ if both an argument and block are given. # # DB[:table].map(:id) # SELECT * FROM table # # => [1, 2, 3, ...] # # DB[:table].map{|r| r[:id] * 2} # SELECT * FROM table # # => [2, 4, 6, ...] # # You can also provide an array of column names: # # DB[:table].map([:id, :name]) # SELECT * FROM table # # => [[1, 'A'], [2, 'B'], [3, 'C'], ...]
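# # Note that #map retrieves every column (SELECT * FROM table) and maps in # Ruby; when only the mapped column is needed, #select_map below usually # issues a narrower query (SELECT id FROM table).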
def map(column=nil, &block) if column raise(Error, ARG_BLOCK_ERROR_MSG) if block return naked.map(column) if row_proc if column.is_a?(Array) super(){|r| r.values_at(*column)} else super(){|r| r[column]} end else super(&block) end end # Returns the maximum value for the given column/expression. # Uses a virtual row block if no argument is given. # # DB[:table].max(:id) # SELECT max(id) FROM table LIMIT 1 # # => 10 # DB[:table].max{function(column)} # SELECT max(function(column)) FROM table LIMIT 1 # # => 7 def max(column=Sequel.virtual_row(&Proc.new)) aggregate_dataset.get{max(column).as(:max)} end # Returns the minimum value for the given column/expression. # Uses a virtual row block if no argument is given. # # DB[:table].min(:id) # SELECT min(id) FROM table LIMIT 1 # # => 1 # DB[:table].min{function(column)} # SELECT min(function(column)) FROM table LIMIT 1 # # => 0 def min(column=Sequel.virtual_row(&Proc.new)) aggregate_dataset.get{min(column).as(:min)} end # This is a front end for import that allows you to submit an array of # hashes instead of arrays of columns and values: # # DB[:table].multi_insert([{:x => 1}, {:x => 2}]) # # INSERT INTO table (x) VALUES (1) # # INSERT INTO table (x) VALUES (2) # # Be aware that all hashes should have the same keys if you use this calling method, # otherwise some columns could be missed or set to NULL instead of to default # values. # # This respects the same options as #import. def multi_insert(hashes, opts=OPTS) return if hashes.empty? columns = hashes.first.keys import(columns, hashes.map{|h| columns.map{|c| h[c]}}, opts) end # Yields each row in the dataset, but internally uses multiple queries as needed with # limit and offset to process the entire result set without keeping all # rows in the dataset in memory, even if the underlying driver buffers all # query results in memory. # # Because this uses multiple queries internally, in order to remain consistent, # it also uses a transaction internally. Additionally, to make sure that all rows # in the dataset are yielded and none are yielded twice, the dataset must have an # unambiguous order. Sequel requires that datasets using this method have an # order, but it cannot ensure that the order is unambiguous. # # Options: # :rows_per_fetch :: The number of rows to fetch per query. Defaults to 1000. def paged_each(opts=OPTS) unless @opts[:order] raise Sequel::Error, "Dataset#paged_each requires the dataset be ordered" end total_limit = @opts[:limit] offset = @opts[:offset] || 0 if server = @opts[:server] opts = opts.merge(:server=>server) end rows_per_fetch = opts[:rows_per_fetch] || 1000 num_rows_yielded = rows_per_fetch total_rows = 0 db.transaction(opts) do while num_rows_yielded == rows_per_fetch && (total_limit.nil? || total_rows < total_limit) if total_limit && total_rows + rows_per_fetch > total_limit rows_per_fetch = total_limit - total_rows end num_rows_yielded = 0 limit(rows_per_fetch, offset).each do |row| num_rows_yielded += 1 total_rows += 1 if total_limit yield row end offset += rows_per_fetch end end self end # Returns a +Range+ instance made from the minimum and maximum values for the # given column/expression. Uses a virtual row block if no argument is given.
# # DB[:table].range(:id) # SELECT min(id) AS v1, max(id) AS v2 FROM table LIMIT 1 # # => 1..10 # DB[:table].range{function(column)} # SELECT min(function(column)) AS v1, max(function(column)) AS v2 FROM table LIMIT 1 # # => 0..7 def range(column=Sequel.virtual_row(&Proc.new)) if r = aggregate_dataset.select{[min(column).as(v1), max(column).as(v2)]}.first (r[:v1]..r[:v2]) end end # Returns a hash with key_column values as keys and value_column values as # values. Similar to to_hash, but only selects the columns given. # # DB[:table].select_hash(:id, :name) # SELECT id, name FROM table # # => {1=>'a', 2=>'b', ...} # # You can also provide an array of column names for either the key_column, # the value column, or both: # # DB[:table].select_hash([:id, :foo], [:name, :bar]) # SELECT * FROM table # # {[1, 3]=>['a', 'c'], [2, 4]=>['b', 'd'], ...} # # When using this method, you must be sure that each expression has an alias # that Sequel can determine. Usually you can do this by calling the #as method # on the expression and providing an alias. def select_hash(key_column, value_column) _select_hash(:to_hash, key_column, value_column) end # Returns a hash with key_column values as keys and an array of value_column values # as values. Similar to to_hash_groups, but only selects the columns given. # # DB[:table].select_hash_groups(:name, :id) # SELECT id, name FROM table # # => {'a'=>[1, 4, ...], 'b'=>[2, ...], ...} # # You can also provide an array of column names for either the key_column, # the value column, or both: # # DB[:table].select_hash_groups([:first, :middle], [:last, :id]) # SELECT * FROM table # # {['a', 'b']=>[['c', 1], ['d', 2], ...], ...} # # When using this method, you must be sure that each expression has an alias # that Sequel can determine. Usually you can do this by calling the #as method # on the expression and providing an alias. def select_hash_groups(key_column, value_column) _select_hash(:to_hash_groups, key_column, value_column) end # Selects the column given (either as an argument or as a block), and # returns an array of all values of that column in the dataset. If you # give a block argument that returns an array with multiple entries, # the contents of the resulting array are undefined. Raises an Error # if called with both an argument and a block. # # DB[:table].select_map(:id) # SELECT id FROM table # # => [3, 5, 8, 1, ...] # # DB[:table].select_map{id * 2} # SELECT (id * 2) FROM table # # => [6, 10, 16, 2, ...] # # You can also provide an array of column names: # # DB[:table].select_map([:id, :name]) # SELECT id, name FROM table # # => [[1, 'A'], [2, 'B'], [3, 'C'], ...] # # If you provide an array of expressions, you must be sure that each entry # in the array has an alias that Sequel can determine. Usually you can do this # by calling the #as method on the expression and providing an alias. def select_map(column=nil, &block) _select_map(column, false, &block) end # The same as select_map, but in addition orders the array by the column. # # DB[:table].select_order_map(:id) # SELECT id FROM table ORDER BY id # # => [1, 2, 3, 4, ...] # # DB[:table].select_order_map{id * 2} # SELECT (id * 2) FROM table ORDER BY (id * 2) # # => [2, 4, 6, 8, ...] # # You can also provide an array of column names: # # DB[:table].select_order_map([:id, :name]) # SELECT id, name FROM table ORDER BY id, name # # => [[1, 'A'], [2, 'B'], [3, 'C'], ...] # # If you provide an array of expressions, you must be sure that each entry # in the array has an alias that Sequel can determine.
Usually you can do this # by calling the #as method on the expression and providing an alias. def select_order_map(column=nil, &block) _select_map(column, true, &block) end # Returns the first record in the dataset, or nil if the dataset # has no records. Users should probably use +first+ instead of # this method. def single_record clone(:limit=>1).each{|r| return r} nil end # Returns the first value of the first record in the dataset. # Returns nil if the dataset is empty. Users should generally use # +get+ instead of this method. def single_value if r = ungraphed.naked.single_record r.values.first end end # Returns the sum for the given column/expression. # Uses a virtual row block if no column is given. # # DB[:table].sum(:id) # SELECT sum(id) FROM table LIMIT 1 # # => 55 # DB[:table].sum{function(column)} # SELECT sum(function(column)) FROM table LIMIT 1 # # => 10 def sum(column=Sequel.virtual_row(&Proc.new)) aggregate_dataset.get{sum(column).as(:sum)} end # Returns a hash with one column used as key and another used as value. # If rows have duplicate values for the key column, the latter row(s) # will overwrite the value of the previous row(s). If the value_column # is not given or nil, uses the entire hash as the value. # # DB[:table].to_hash(:id, :name) # SELECT * FROM table # # {1=>'Jim', 2=>'Bob', ...} # # DB[:table].to_hash(:id) # SELECT * FROM table # # {1=>{:id=>1, :name=>'Jim'}, 2=>{:id=>2, :name=>'Bob'}, ...} # # You can also provide an array of column names for either the key_column, # the value column, or both: # # DB[:table].to_hash([:id, :foo], [:name, :bar]) # SELECT * FROM table # # {[1, 3]=>['Jim', 'bo'], [2, 4]=>['Bob', 'be'], ...} # # DB[:table].to_hash([:id, :name]) # SELECT * FROM table # # {[1, 'Jim']=>{:id=>1, :name=>'Jim'}, [2, 'Bob']=>{:id=>2, :name=>'Bob'}, ...} def to_hash(key_column, value_column = nil) h = {} if value_column return naked.to_hash(key_column, value_column) if row_proc if value_column.is_a?(Array) if key_column.is_a?(Array) each{|r| h[r.values_at(*key_column)] = r.values_at(*value_column)} else each{|r| h[r[key_column]] = r.values_at(*value_column)} end else if key_column.is_a?(Array) each{|r| h[r.values_at(*key_column)] = r[value_column]} else each{|r| h[r[key_column]] = r[value_column]} end end elsif key_column.is_a?(Array) each{|r| h[r.values_at(*key_column)] = r} else each{|r| h[r[key_column]] = r} end h end # Returns a hash with one column used as key and the values being an # array of column values. If the value_column is not given or nil, uses # the entire hash as the value.
# # DB[:table].to_hash_groups(:name, :id) # SELECT * FROM table # # {'Jim'=>[1, 4, 16, ...], 'Bob'=>[2], ...} # # DB[:table].to_hash_groups(:name) # SELECT * FROM table # # {'Jim'=>[{:id=>1, :name=>'Jim'}, {:id=>4, :name=>'Jim'}, ...], 'Bob'=>[{:id=>2, :name=>'Bob'}], ...} # # You can also provide an array of column names for either the key_column, # the value column, or both: # # DB[:table].to_hash_groups([:first, :middle], [:last, :id]) # SELECT * FROM table # # {['Jim', 'Bob']=>[['Smith', 1], ['Jackson', 4], ...], ...} # # DB[:table].to_hash_groups([:first, :middle]) # SELECT * FROM table # # {['Jim', 'Bob']=>[{:id=>1, :first=>'Jim', :middle=>'Bob', :last=>'Smith'}, ...], ...} def to_hash_groups(key_column, value_column = nil) h = {} if value_column return naked.to_hash_groups(key_column, value_column) if row_proc if value_column.is_a?(Array) if key_column.is_a?(Array) each{|r| (h[r.values_at(*key_column)] ||= []) << r.values_at(*value_column)} else each{|r| (h[r[key_column]] ||= []) << r.values_at(*value_column)} end else if key_column.is_a?(Array) each{|r| (h[r.values_at(*key_column)] ||= []) << r[value_column]} else each{|r| (h[r[key_column]] ||= []) << r[value_column]} end end elsif key_column.is_a?(Array) each{|r| (h[r.values_at(*key_column)] ||= []) << r} else each{|r| (h[r[key_column]] ||= []) << r} end h end # Truncates the dataset. Returns nil. # # DB[:table].truncate # TRUNCATE table # # => nil def truncate execute_ddl(truncate_sql) end # Updates values for the dataset. The returned value is generally the # number of rows updated, but that is adapter dependent. +values+ should be # a hash where the keys are columns to set and values are the values to # which to set the columns. # # DB[:table].update(:x=>nil) # UPDATE table SET x = NULL # # => 10 # # DB[:table].update(:x=>Sequel.expr(:x)+1, :y=>0) # UPDATE table SET x = (x + 1), y = 0 # # => 10 def update(values=OPTS, &block) sql = update_sql(values) if uses_returning?(:update) returning_fetch_rows(sql, &block) else execute_dui(sql) end end # Execute the given SQL and return the number of rows deleted. This exists # solely as an optimization, replacing with_sql(sql).delete. It's significantly # faster as it does not require cloning the current dataset. def with_sql_delete(sql) execute_dui(sql) end protected # Internals of #import. If primary key values are requested, use # separate insert commands for each row. Otherwise, call #multi_insert_sql # and execute each statement it gives separately. def _import(columns, values, opts) trans_opts = opts.merge(:server=>@opts[:server]) if opts[:return] == :primary_key @db.transaction(trans_opts){values.map{|v| insert(columns, v)}} else stmts = multi_insert_sql(columns, values) @db.transaction(trans_opts){stmts.each{|st| execute_dui(st)}} end end # Return an array of arrays of values given by the symbols in ret_cols. def _select_map_multiple(ret_cols) map{|r| r.values_at(*ret_cols)} end # Returns an array of the first value in each row. def _select_map_single map{|r| r.values.first} end private # Internals of +select_hash+ and +select_hash_groups+ def _select_hash(meth, key_column, value_column) select(*(key_column.is_a?(Array) ? key_column : [key_column]) + (value_column.is_a?(Array) ? value_column : [value_column])). send(meth, hash_key_symbols(key_column), hash_key_symbols(value_column)) end # Internals of +select_map+ and +select_order_map+ def _select_map(column, order, &block) ds = ungraphed.naked columns = Array(column) virtual_row_columns(columns, block) select_cols = order ?
columns.map{|c| c.is_a?(SQL::OrderedExpression) ? c.expression : c} : columns ds = ds.order(*columns.map{|c| unaliased_identifier(c)}) if order if column.is_a?(Array) || (columns.length > 1) ds.select(*select_cols)._select_map_multiple(hash_key_symbols(select_cols)) else ds.select(auto_alias_expression(select_cols.first))._select_map_single end end # Automatically alias the given expression if it does not have an identifiable alias. def auto_alias_expression(v) case v when LiteralString, Symbol, SQL::Identifier, SQL::QualifiedIdentifier, SQL::AliasedExpression v else SQL::AliasedExpression.new(v, :v) end end # Set the server to use to :default unless it is already set in the passed opts def default_server_opts(opts) {:server=>@opts[:server] || :default}.merge(opts) end # Execute the given select SQL on the database using execute. Use the # :read_only server unless a specific server is set. def execute(sql, opts=OPTS, &block) @db.execute(sql, {:server=>@opts[:server] || :read_only}.merge(opts), &block) end # Execute the given SQL on the database using execute_ddl. def execute_ddl(sql, opts=OPTS, &block) @db.execute_ddl(sql, default_server_opts(opts), &block) nil end # Execute the given SQL on the database using execute_dui. def execute_dui(sql, opts=OPTS, &block) @db.execute_dui(sql, default_server_opts(opts), &block) end # Execute the given SQL on the database using execute_insert. def execute_insert(sql, opts=OPTS, &block) @db.execute_insert(sql, default_server_opts(opts), &block) end # Return a plain symbol given a potentially qualified or aliased symbol, # specifying the symbol that is likely to be used as the hash key # for the column when records are returned. Return nil if no hash key # can be determined def _hash_key_symbol(s, recursing=false) case s when Symbol _, c, a = split_symbol(s) (a || c).to_sym when SQL::Identifier, SQL::Wrapper _hash_key_symbol(s.value, true) when SQL::QualifiedIdentifier _hash_key_symbol(s.column, true) when SQL::AliasedExpression _hash_key_symbol(s.aliaz, true) when String s.to_sym if recursing end end # Return a plain symbol given a potentially qualified or aliased symbol, # specifying the symbol that is likely to be used as the hash key # for the column when records are returned. Raise Error if the hash key # symbol cannot be returned. def hash_key_symbol(s) if v = _hash_key_symbol(s) v else raise(Error, "#{s.inspect} is not supported, should be a Symbol, SQL::Identifier, SQL::QualifiedIdentifier, or SQL::AliasedExpression") end end # If s is an array, return an array with the given hash key symbols. # Otherwise, return a hash key symbol for the given expression # If a hash key symbol cannot be determined, raise an error. def hash_key_symbols(s) s.is_a?(Array) ? s.map{|c| hash_key_symbol(c)} : hash_key_symbol(s) end # Modify the identifier returned from the database based on the # identifier_output_method. def output_identifier(v) v = 'untitled' if v == '' (i = identifier_output_method) ? v.to_s.send(i).to_sym : v.to_sym end # This is run inside .all, after all of the records have been loaded # via .each, but before any block passed to all is called. It is called with # a single argument, an array of all returned records. Does nothing by # default, added to make the model eager loading code simpler. def post_load(all_records) end # Called by insert/update/delete when returning is used. 
# Yields each row as a plain hash to the block if one is given, or returns # an array of plain hashes for all rows if a block is not given def returning_fetch_rows(sql, &block) if block default_server.fetch_rows(sql, &block) nil else rows = [] default_server.fetch_rows(sql){|r| rows << r} rows end end # Return the unaliased part of the identifier. Handles both # implicit aliases in symbols, as well as SQL::AliasedExpression # objects. Other objects are returned as is. def unaliased_identifier(c) case c when Symbol c_table, column, _ = split_symbol(c) c_table ? SQL::QualifiedIdentifier.new(c_table, column.to_sym) : column.to_sym when SQL::AliasedExpression c.expression when SQL::OrderedExpression case expr = c.expression when Symbol, SQL::AliasedExpression SQL::OrderedExpression.new(unaliased_identifier(expr), c.descending, :nulls=>c.nulls) else c end else c end end end end ruby-sequel-4.1.1/lib/sequel/dataset/features.rb000066400000000000000000000121651220156535500216370ustar00rootroot00000000000000module Sequel class Dataset # --------------------- # :section: 4 - Methods that describe what the dataset supports # These methods all return booleans, with most describing whether or not the # dataset supports a feature. # --------------------- # Whether this dataset quotes identifiers. def quote_identifiers? if defined?(@quote_identifiers) @quote_identifiers else @quote_identifiers = db.quote_identifiers? end end # Whether this dataset will provide accurate number of rows matched for # delete and update statements. Accurate in this case is the number of # rows matched by the dataset's filter. def provides_accurate_rows_matched? true end # Whether you must use a column alias list for recursive CTEs (false by # default). def recursive_cte_requires_column_aliases? false end # Whether the dataset requires SQL standard datetimes (false by default, # as most allow strings with ISO 8601 format). def requires_sql_standard_datetimes? false end # Whether type specifiers are required for prepared statement/bound # variable argument placeholders (i.e. :bv__integer) def requires_placeholder_type_specifiers? false end # Whether the dataset supports common table expressions (the WITH clause). # If given, +type+ can be :select, :insert, :update, or :delete, in which case it # determines whether WITH is supported for the respective statement type. def supports_cte?(type=:select) send(:"#{type}_clause_methods").include?(:"#{type}_with_sql") end # Whether the dataset supports common table expressions (the WITH clause) # in subqueries. If false, applies the WITH clause to the main query, which can cause issues # if multiple WITH clauses use the same name. def supports_cte_in_subqueries? false end # Whether the dataset supports or can emulate the DISTINCT ON clause, false by default. def supports_distinct_on? false end # Whether the dataset supports CUBE with GROUP BY. def supports_group_cube? false end # Whether the dataset supports ROLLUP with GROUP BY. def supports_group_rollup? false end # Whether this dataset supports the +insert_select+ method for returning all columns values # directly from an insert query. def supports_insert_select? supports_returning?(:insert) end # Whether the dataset supports the INTERSECT and EXCEPT compound operations, true by default. def supports_intersect_except? true end # Whether the dataset supports the INTERSECT ALL and EXCEPT ALL compound operations, true by default. def supports_intersect_except_all? true end # Whether the dataset supports the IS TRUE syntax. 
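# (When true, Sequel can literalize a boolean filter such as where(:active=>true) # using IS TRUE; adapters where this is false fall back to an ordinary boolean comparison.)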
def supports_is_true? true end # Whether the dataset supports the JOIN table USING (column1, ...) syntax. def supports_join_using? true end # Whether modifying joined datasets is supported. def supports_modifying_joins? false end # Whether the IN/NOT IN operators support multiple columns when an # array of values is given. def supports_multiple_column_in? true end # Whether the dataset supports or can fully emulate the DISTINCT ON clause, # including respecting the ORDER BY clause, false by default def supports_ordered_distinct_on? supports_distinct_on? end # Whether the dataset supports pattern matching by regular expressions. def supports_regexp? false end # Whether the dataset supports REPLACE syntax, false by default. def supports_replace? false end # Whether the RETURNING clause is supported for the given type of query. # +type+ can be :insert, :update, or :delete. def supports_returning?(type) send(:"#{type}_clause_methods").include?(:"#{type}_returning_sql") end # Whether the database supports SELECT *, column FROM table def supports_select_all_and_column? true end # Whether the dataset supports timezones in literal timestamps def supports_timestamp_timezones? false end # Whether the dataset supports fractional seconds in literal timestamps def supports_timestamp_usecs? true end # Whether the dataset supports window functions. def supports_window_functions? false end # Whether the dataset supports WHERE TRUE (or WHERE 1 for databases # that use 1 for true). def supports_where_true? true end private # Whether insert(nil) or insert({}) is supported directly, without needing # to be emulated by using at least one value, true by default. def insert_supports_empty_values? true end # Whether the RETURNING clause is used for the given dataset. # +type+ can be :insert, :update, or :delete. def uses_returning?(type) opts[:returning] && !@opts[:sql] && supports_returning?(type) end # Whether the dataset uses WITH ROLLUP/CUBE instead of ROLLUP()/CUBE(). def uses_with_rollup? false end end end ruby-sequel-4.1.1/lib/sequel/dataset/graph.rb000066400000000000000000000257271220156535500211260ustar00rootroot00000000000000module Sequel class Dataset # --------------------- # :section: 5 - Methods related to dataset graphing # Dataset graphing changes the dataset to yield hashes where keys are table # name symbols and values are hashes representing the columns related to # that table. All of these methods return modified copies of the receiver. # --------------------- # Adds the given graph aliases to the list of graph aliases to use, # unlike +set_graph_aliases+, which replaces the list (the equivalent # of +select_more+ when graphing). See +set_graph_aliases+. # # DB[:table].add_graph_aliases(:some_alias=>[:table, :column]) # # SELECT ..., table.column AS some_alias def add_graph_aliases(graph_aliases) unless (ga = opts[:graph_aliases]) || (opts[:graph] && (ga = opts[:graph][:column_aliases])) raise Error, "cannot call add_graph_aliases on a dataset that has not been called with graph or set_graph_aliases" end columns, graph_aliases = graph_alias_columns(graph_aliases) select_more(*columns).clone(:graph_aliases => ga.merge(graph_aliases)) end # Similar to Dataset#join_table, but uses unambiguous aliases for selected # columns and keeps metadata about the aliases for use in other methods. # # Arguments: # dataset :: Can be a symbol (specifying a table), another dataset, # or an object that responds to +dataset+ and returns a symbol or a dataset # join_conditions :: Any condition(s) allowed by +join_table+.
# block :: A block that is passed to +join_table+. # # Options: # :from_self_alias :: The alias to use when the receiver is not a graphed # dataset but it contains multiple FROM tables or a JOIN. In this case, # the receiver is wrapped in a from_self before graphing, and this option # determines the alias to use. # :implicit_qualifier :: The qualifier of implicit conditions, see #join_table. # :join_type :: The type of join to use (passed to +join_table+). Defaults to :left_outer. # :qualify :: The type of qualification to do, see #join_table. # :select :: An array of columns to select. When not used, selects # all columns in the given dataset. When set to false, selects no # columns and is like simply joining the tables, though graph keeps # some metadata about the join that makes it important to use +graph+ instead # of +join_table+. # :table_alias :: The alias to use for the table. If not specified, doesn't # alias the table. You will get an error if the alias (or table) name is # used more than once. def graph(dataset, join_conditions = nil, options = OPTS, &block) # Allow the use of a dataset or symbol as the first argument # Find the table name/dataset based on the argument table_alias = options[:table_alias] case dataset when Symbol table = dataset dataset = @db[dataset] table_alias ||= table when ::Sequel::Dataset if dataset.simple_select_all? table = dataset.opts[:from].first table_alias ||= table else table = dataset table_alias ||= dataset_alias((@opts[:num_dataset_sources] || 0)+1) end else raise Error, "The dataset argument should be a symbol or dataset" end # Raise Sequel::Error with explanation that the table alias has been used raise_alias_error = lambda do raise(Error, "this #{options[:table_alias] ? 'alias' : 'table'} has already been used, please specify " \ "#{options[:table_alias] ? 'a different alias' : 'an alias via the :table_alias option'}") end # Only allow table aliases that haven't been used raise_alias_error.call if @opts[:graph] && @opts[:graph][:table_aliases] && @opts[:graph][:table_aliases].include?(table_alias) # Use a from_self if this is already a joined table ds = (!@opts[:graph] && (@opts[:from].length > 1 || @opts[:join])) ? from_self(:alias=>options[:from_self_alias] || first_source) : self # Join the table early in order to avoid cloning the dataset twice ds = ds.join_table(options[:join_type] || :left_outer, table, join_conditions, :table_alias=>table_alias, :implicit_qualifier=>options[:implicit_qualifier], :qualify=>options[:qualify], &block) opts = ds.opts # Whether to include the table in the result set add_table = options[:select] == false ?
false : true # Whether to add the columns to the list of column aliases add_columns = !ds.opts.include?(:graph_aliases) # Setup the initial graph data structure if it doesn't exist if graph = opts[:graph] opts[:graph] = graph = graph.dup select = opts[:select].dup [:column_aliases, :table_aliases, :column_alias_num].each{|k| graph[k] = graph[k].dup} else master = alias_symbol(ds.first_source_alias) raise_alias_error.call if master == table_alias # Master hash storing all .graph related information graph = opts[:graph] = {} # Associates column aliases back to tables and columns column_aliases = graph[:column_aliases] = {} # Associates table alias (the master is never aliased) table_aliases = graph[:table_aliases] = {master=>self} # Keep track of the alias numbers used ca_num = graph[:column_alias_num] = Hash.new(0) # All columns in the master table are never # aliased, but are not included if set_graph_aliases # has been used. if add_columns if (select = @opts[:select]) && !select.empty? && !(select.length == 1 && (select.first.is_a?(SQL::ColumnAll))) select = select.each do |sel| column = case sel when Symbol _, c, a = split_symbol(sel) (a || c).to_sym when SQL::Identifier sel.value.to_sym when SQL::QualifiedIdentifier column = sel.column column = column.value if column.is_a?(SQL::Identifier) column.to_sym when SQL::AliasedExpression column = sel.aliaz column = column.value if column.is_a?(SQL::Identifier) column.to_sym else raise Error, "can't figure out alias to use for graphing for #{sel.inspect}" end column_aliases[column] = [master, column] end select = qualified_expression(select, master) else select = columns.map do |column| column_aliases[column] = [master, column] SQL::QualifiedIdentifier.new(master, column) end end end end # Add the table alias to the list of aliases # Even if it isn't used in the result set, # we add a key for it with a nil value so we can check if it # is used more than once table_aliases = graph[:table_aliases] table_aliases[table_alias] = add_table ? dataset : nil # Add the columns to the selection unless we are ignoring them if add_table && add_columns column_aliases = graph[:column_aliases] ca_num = graph[:column_alias_num] # Which columns to add to the result set cols = options[:select] || dataset.columns # If the column hasn't been used yet, don't alias it. # If it has been used, try table_column. # If that has been used, try table_column_N # using the next value of N that we know hasn't been # used cols.each do |column| col_alias, identifier = if column_aliases[column] column_alias = :"#{table_alias}_#{column}" if column_aliases[column_alias] column_alias_num = ca_num[column_alias] column_alias = :"#{column_alias}_#{column_alias_num}" ca_num[column_alias] += 1 end [column_alias, SQL::AliasedExpression.new(SQL::QualifiedIdentifier.new(table_alias, column), column_alias)] else ident = SQL::QualifiedIdentifier.new(table_alias, column) [column, ident] end column_aliases[col_alias] = [table_alias, column] select.push(identifier) end end add_columns ? ds.select(*select) : ds end # This allows you to manually specify the graph aliases to use # when using graph. You can use it to only select certain # columns, and have those columns mapped to specific aliases # in the result set. This is the equivalent of +select+ for a # graphed dataset, and must be used instead of +select+ whenever # graphing is used. # # graph_aliases :: Should be a hash with keys being symbols of # column aliases, and values being either symbols or arrays with one to three elements.
# If the value is a symbol, it is assumed to be the same as a one element # array containing that symbol. # The first element of the array should be the table alias symbol. # The second should be the actual column name symbol. If the array only # has a single element the column name symbol will be assumed to be the # same as the corresponding hash key. If the array # has a third element, it is used as the value returned, instead of # table_alias.column_name. # # DB[:artists].graph(:albums, :artist_id=>:id). # set_graph_aliases(:name=>:artists, # :album_name=>[:albums, :name], # :forty_two=>[:albums, :fourtwo, 42]).first # # SELECT artists.name, albums.name AS album_name, 42 AS forty_two ... def set_graph_aliases(graph_aliases) columns, graph_aliases = graph_alias_columns(graph_aliases) ds = select(*columns) ds.opts[:graph_aliases] = graph_aliases ds end # Remove the splitting of results into subhashes, and all metadata # related to the current graph (if any). def ungraphed clone(:graph=>nil, :graph_aliases=>nil) end private # Transform the hash of graph aliases and return a two element array # where the first element is an array of identifiers suitable to pass to # a select method, and the second is a new hash of preprocessed graph aliases. def graph_alias_columns(graph_aliases) gas = {} identifiers = graph_aliases.collect do |col_alias, tc| table, column, value = Array(tc) column ||= col_alias gas[col_alias] = [table, column] identifier = value || SQL::QualifiedIdentifier.new(table, column) identifier = SQL::AliasedExpression.new(identifier, col_alias) if value || column != col_alias identifier end [identifiers, gas] end end end ruby-sequel-4.1.1/lib/sequel/dataset/misc.rb000066400000000000000000000156761220156535500207660ustar00rootroot00000000000000module Sequel class Dataset # --------------------- # :section: 6 - Miscellaneous methods # These methods don't fit cleanly into another section. # --------------------- NOTIMPL_MSG = "This method must be overridden in Sequel adapters".freeze ARRAY_ACCESS_ERROR_MSG = 'You cannot call Dataset#[] with an integer or with no arguments.'.freeze ARG_BLOCK_ERROR_MSG = 'Must use either an argument or a block, not both'.freeze IMPORT_ERROR_MSG = 'Using Sequel::Dataset#import with an empty column array is not allowed'.freeze # The database related to this dataset. This is the Database instance that # will execute all of this dataset's queries. attr_reader :db # The hash of options for this dataset, keys are symbols. attr_reader :opts # Constructs a new Dataset instance with an associated database and # options. Datasets are usually constructed by invoking the Database#[] method: # # DB[:posts] # # Sequel::Dataset is an abstract class that is not useful by itself. Each # database adapter provides a subclass of Sequel::Dataset, and has # the Database#dataset method return an instance of that subclass. def initialize(db) @db = db @opts = OPTS end # Define a hash value such that datasets with the same DB, opts, and SQL # will be considered equal. def ==(o) o.is_a?(self.class) && db == o.db && opts == o.opts && sql == o.sql end # Alias for == def eql?(o) self == o end # Yield a dataset for each server in the connection pool that is tied to that server.
# Intended for use in sharded environments where all servers need to be modified # with the same data: # # DB[:configs].where(:key=>'setting').each_server{|ds| ds.update(:value=>'new_value')} def each_server db.servers.each{|s| yield server(s)} end # Returns the string with the LIKE metacharacters (% and _) escaped. # Useful for when the LIKE term is a user-provided string where metacharacters should not # be recognized. Example: # # ds.escape_like("foo\\%_") # 'foo\\\%\_' def escape_like(string) string.gsub(/[\\%_]/){|m| "\\#{m}"} end # Alias of +first_source_alias+ def first_source first_source_alias end # The first source (primary table) for this dataset. If the dataset doesn't # have a table, raises an +Error+. If the table is aliased, returns the aliased name. # # DB[:table].first_source_alias # # => :table # # DB[:table___t].first_source_alias # # => :t def first_source_alias source = @opts[:from] if source.nil? || source.empty? raise Error, 'No source specified for query' end case s = source.first when SQL::AliasedExpression s.aliaz when Symbol _, _, aliaz = split_symbol(s) aliaz ? aliaz.to_sym : s else s end end # The first source (primary table) for this dataset. If the dataset doesn't # have a table, raises an error. If the table is aliased, returns the original # table, not the alias. # # DB[:table].first_source_table # # => :table # # DB[:table___t].first_source_table # # => :table def first_source_table source = @opts[:from] if source.nil? || source.empty? raise Error, 'No source specified for query' end case s = source.first when SQL::AliasedExpression s.expression when Symbol sch, table, aliaz = split_symbol(s) aliaz ? (sch ? SQL::QualifiedIdentifier.new(sch, table) : table.to_sym) : s else s end end # Define a hash value such that datasets with the same DB, opts, and SQL # will have the same hash value def hash [db, opts, sql].hash end # The String instance method to call on identifiers before sending them to # the database. def identifier_input_method if defined?(@identifier_input_method) @identifier_input_method else @identifier_input_method = db.identifier_input_method end end # The String instance method to call on identifiers coming from # the database. def identifier_output_method if defined?(@identifier_output_method) @identifier_output_method else @identifier_output_method = db.identifier_output_method end end # Returns a string representation of the dataset including the class name # and the corresponding SQL select statement. def inspect c = self.class c = c.superclass while c.name.nil? || c.name == '' "#<#{c.name}: #{sql.inspect}>" end # The alias to use for the row_number column, used when emulating OFFSET # support and for eager limit strategies def row_number_column :x_sequel_row_number_x end # Splits a possible implicit alias in +c+, handling both SQL::AliasedExpressions # and Symbols. Returns an array of two elements, with the first being the # main expression, and the second being the alias. def split_alias(c) case c when Symbol c_table, column, aliaz = split_symbol(c) [c_table ? SQL::QualifiedIdentifier.new(c_table, column.to_sym) : column.to_sym, aliaz] when SQL::AliasedExpression [c.expression, c.aliaz] when SQL::JoinClause [c.table, c.table_alias] else [c, nil] end end # Creates a unique table alias that hasn't already been used in the dataset. # table_alias can be any type of object accepted by alias_symbol.
# The symbol returned will be the implicit alias in the argument, # possibly appended with "_N" if the implicit alias has already been # used, where N is an integer starting at 0 and increasing until an # unused one is found. # # You can provide a second additional array argument containing symbols # that should not be considered valid table aliases. The current aliases # for the FROM and JOIN tables are automatically included in this array. # # DB[:table].unused_table_alias(:t) # # => :t # # DB[:table].unused_table_alias(:table) # # => :table_0 # # DB[:table, :table_0].unused_table_alias(:table) # # => :table_1 # # DB[:table, :table_0].unused_table_alias(:table, [:table_1, :table_2]) # # => :table_3 def unused_table_alias(table_alias, used_aliases = []) table_alias = alias_symbol(table_alias) used_aliases += opts[:from].map{|t| alias_symbol(t)} if opts[:from] used_aliases += opts[:join].map{|j| j.table_alias ? alias_alias_symbol(j.table_alias) : alias_symbol(j.table)} if opts[:join] if used_aliases.include?(table_alias) i = 0 loop do ta = :"#{table_alias}_#{i}" return ta unless used_aliases.include?(ta) i += 1 end else table_alias end end end end ruby-sequel-4.1.1/lib/sequel/dataset/mutation.rb000066400000000000000000000055571220156535500216660ustar00rootroot00000000000000module Sequel class Dataset # --------------------- # :section: 7 - Mutation methods # These methods modify the receiving dataset and should be used with care. # --------------------- # All methods that should have a ! method added that modifies the receiver. MUTATION_METHODS = QUERY_METHODS - [:paginate, :naked, :from_self] # Setup mutation (e.g. filter!) methods. These operate the same as the # non-! methods, but replace the options of the current dataset with the # options of the resulting dataset. # # Do not call this method with untrusted input, as that can result in # arbitrary code execution. def self.def_mutation_method(*meths) options = meths.pop if meths.last.is_a?(Hash) mod = options[:module] if options mod ||= self meths.each do |meth| mod.class_eval("def #{meth}!(*args, &block); mutation_method(:#{meth}, *args, &block) end", __FILE__, __LINE__) end end # Add the mutation methods via metaprogramming def_mutation_method(*MUTATION_METHODS) # Set the method to call on identifiers going into the database for this dataset attr_writer :identifier_input_method # Set the method to call on identifiers coming from the database for this dataset attr_writer :identifier_output_method # Whether to quote identifiers for this dataset attr_writer :quote_identifiers # The row_proc for this dataset, should be any object that responds to +call+ with # a single hash argument and returns the object you want #each to return. attr_accessor :row_proc # Load an extension into the receiver. In addition to requiring the extension file, this # also modifies the dataset to work with the extension (usually extending it with a # module defined in the extension file). If no related extension file exists or the # extension does not have specific support for Dataset objects, an Error will be raised. # Returns self. def extension!(*exts) Sequel.extension(*exts) exts.each do |ext| if pr = Sequel.synchronize{EXTENSIONS[ext]} pr.call(self) else raise(Error, "Extension #{ext} does not have specific support handling individual datasets") end end self end # Avoid self-referential dataset by cloning. def from_self!(*args, &block) @opts = clone.from_self(*args, &block).opts self end # Remove the row_proc from the current dataset. def naked!
self.row_proc = nil self end private # Modify the receiver with the results of sending the meth, args, and block # to the receiver and merging the options of the resulting dataset into # the receiver's options. def mutation_method(meth, *args, &block) @opts = send(meth, *args, &block).opts self end end end ruby-sequel-4.1.1/lib/sequel/dataset/prepared_statements.rb000066400000000000000000000232311220156535500240660ustar00rootroot00000000000000module Sequel class Dataset # --------------------- # :section: 8 - Methods related to prepared statements or bound variables # On some adapters, these use native prepared statements and bound variables, on others # support is emulated. For details, see the {"Prepared Statements/Bound Variables" guide}[link:files/doc/prepared_statements_rdoc.html]. # --------------------- PREPARED_ARG_PLACEHOLDER = LiteralString.new('?').freeze # Default implementation of the argument mapper to allow # native database support for bind variables and prepared # statements (as opposed to the emulated ones used by default). module ArgumentMapper # The name of the prepared statement, if any. attr_accessor :prepared_statement_name # The bind arguments to use for running this prepared statement attr_accessor :bind_arguments # Set the bind arguments based on the hash and call super. def call(bind_vars={}, &block) ds = bind(bind_vars) ds.prepared_sql ds.bind_arguments = ds.map_to_prepared_args(ds.opts[:bind_vars]) ds.run(&block) end # Override the given *_sql method based on the type, and # cache the result of the sql. def prepared_sql return @prepared_sql if @prepared_sql @prepared_args ||= [] @prepared_sql = super @opts[:sql] = @prepared_sql @prepared_sql end end # Backbone of the prepared statement support. Grafts bind variable # support into datasets by hijacking #literal and using placeholders. # By default, emulates prepared statements and bind variables by # taking the hash of bind variables and directly substituting them # into the query, which works on all databases, as it is no different # from using the dataset without bind variables. module PreparedStatementMethods PLACEHOLDER_RE = /\A\$(.*)\z/ # Whether to log the full SQL query. By default, just the prepared statement # name is generally logged on adapters that support native prepared statements. attr_accessor :log_sql # The type of prepared statement, should be one of :select, :first, # :insert, :update, or :delete attr_accessor :prepared_type # The array/hash of bound variable placeholder names. attr_accessor :prepared_args # The dataset that created this prepared statement. attr_accessor :orig_dataset # The argument to supply to insert and update, which may use # placeholders specified by prepared_args attr_accessor :prepared_modify_values # Sets the prepared_args to the given hash and runs the # prepared statement. def call(bind_vars={}, &block) bind(bind_vars).run(&block) end # Send the columns to the original dataset, as calling it # on the prepared statement can cause problems. def columns orig_dataset.columns end # Returns the SQL for the prepared statement, depending on # the type of the statement and the prepared_modify_values. def prepared_sql case @prepared_type when :select, :all, :each # Most common scenario, so listed first. 
select_sql when :first clone(:limit=>1).select_sql when :insert_select returning.insert_sql(*@prepared_modify_values) when :insert insert_sql(*@prepared_modify_values) when :update update_sql(*@prepared_modify_values) when :delete delete_sql else select_sql end end # Changes the values of symbols if they start with $ and # prepared_args is present. If so, they are considered placeholders, # and they are substituted using prepared_arg. def literal_symbol_append(sql, v) if @opts[:bind_vars] and match = PLACEHOLDER_RE.match(v.to_s) s = match[1].to_sym if prepared_arg?(s) literal_append(sql, prepared_arg(s)) else sql << v.to_s end else super end end # Programmer friendly string showing this is a prepared statement, # with the prepared SQL it represents (which in general won't have # substituted variables). def inspect c = self.class c = c.superclass while c.name.nil? || c.name == '' "<#{c.name}/PreparedStatement #{prepared_sql.inspect}>" end protected # Run the method based on the type of prepared statement, with # :select running #all to get all of the rows, and the other # types running the method with the same name as the type. def run(&block) case @prepared_type when :select, :all # Most common scenario, so listed first all(&block) when :each each(&block) when :insert_select with_sql(prepared_sql).first when :first first when :insert insert(*@prepared_modify_values) when :update update(*@prepared_modify_values) when :delete delete when Array case @prepared_type.at(0) when :map, :to_hash, :to_hash_groups send(*@prepared_type, &block) end else all(&block) end end private # Returns the value of the prepared_args hash for the given key. def prepared_arg(k) @opts[:bind_vars][k] end # Whether there is a bound value for the given key. def prepared_arg?(k) @opts[:bind_vars].has_key?(k) end # Use a clone of the dataset extended with prepared statement # support and using the same argument hash so that you can use # bind variables/prepared arguments in subselects. def subselect_sql_append(sql, ds) ps = ds.clone(:append_sql=>sql).prepare(:select) ps = ps.bind(@opts[:bind_vars]) if @opts[:bind_vars] ps.prepared_args = prepared_args ps.prepared_sql end end # Default implementation for an argument mapper that uses # unnumbered SQL placeholder arguments. Keeps track of which # arguments have been used, and allows arguments to # be used more than once. module UnnumberedArgumentMapper include ArgumentMapper protected # Returns a single output array mapping the values of the input hash. # Keys in the input hash that are used more than once in the query # have multiple entries in the output array. def map_to_prepared_args(bind_vars) prepared_args.map{|v| bind_vars[v]} end private # Associates the argument with name k with the next position in # the output array. def prepared_arg(k) prepared_args << k prepared_arg_placeholder end # Always assume there is a prepared arg in the argument mapper. def prepared_arg?(k) true end end # Set the bind variables to use for the call. If bind variables have # already been set for this dataset, they are updated with the contents # of bind_vars. # # DB[:table].filter(:id=>:$id).bind(:id=>1).call(:first) # # SELECT * FROM table WHERE id = ? LIMIT 1 -- (1) # # => {:id=>1} def bind(bind_vars={}) clone(:bind_vars=>@opts[:bind_vars] ? @opts[:bind_vars].merge(bind_vars) : bind_vars) end # For the given type (:select, :first, :insert, :insert_select, :update, or :delete), # run the sql with the bind variables specified in the hash. 
    # +values+ is a hash passed to insert or update (if one of those
    # types is used), which may contain placeholders.
    #
    #   DB[:table].filter(:id=>:$id).call(:first, :id=>1)
    #   # SELECT * FROM table WHERE id = ? LIMIT 1 -- (1)
    #   # => {:id=>1}
    def call(type, bind_variables={}, *values, &block)
      prepare(type, nil, *values).call(bind_variables, &block)
    end

    # Prepare an SQL statement for later execution.  Takes a type similar to #call,
    # and the +name+ symbol of the prepared statement.  While +name+ defaults to +nil+,
    # it should always be provided as a symbol for the name of the prepared statement,
    # as some databases require that prepared statements have names.
    #
    # This returns a clone of the dataset extended with PreparedStatementMethods,
    # which you can +call+ with the hash of bind variables to use.
    # The prepared statement is also stored in
    # the associated database, where it can be called by name.
    # The following usage is identical:
    #
    #   ps = DB[:table].filter(:name=>:$name).prepare(:first, :select_by_name)
    #
    #   ps.call(:name=>'Blah')
    #   # SELECT * FROM table WHERE name = ? -- ('Blah')
    #   # => {:id=>1, :name=>'Blah'}
    #
    #   DB.call(:select_by_name, :name=>'Blah') # Same thing
    def prepare(type, name=nil, *values)
      ps = to_prepared_statement(type, values)
      db.set_prepared_statement(name, ps) if name
      ps
    end

    protected

    # Return a cloned copy of the current dataset extended with
    # PreparedStatementMethods, setting the type and modify values.
    def to_prepared_statement(type, values=nil)
      ps = bind
      ps.extend(PreparedStatementMethods)
      ps.orig_dataset = self
      ps.prepared_type = type
      ps.prepared_modify_values = values
      ps
    end

    private

    # The argument placeholder.  Most databases use unnumbered
    # arguments with question marks, so that is the default.
    def prepared_arg_placeholder
      PREPARED_ARG_PLACEHOLDER
    end
  end
end
ruby-sequel-4.1.1/lib/sequel/dataset/query.rb
module Sequel
  class Dataset
    # ---------------------
    # :section: 1 - Methods that return modified datasets
    # These methods all return modified copies of the receiver.
    # ---------------------

    # Hash of extension name symbols to callable objects to load the extension
    # into the Dataset object (usually by extending it with a module defined
    # in the extension).
    EXTENSIONS = {}

    # The dataset options that require the removal of cached columns
    # if changed.
    COLUMN_CHANGE_OPTS = [:select, :sql, :from, :join].freeze

    # Which options don't affect the SQL generation.  Used by simple_select_all?
    # to determine if this is a simple SELECT * FROM table.
    NON_SQL_OPTIONS = [:server, :defaults, :overrides, :graph, :eager_graph, :graph_aliases]

    # These symbols have _join methods created (e.g. inner_join) that
    # call join_table with the symbol, passing along the arguments and
    # block from the method call.
    CONDITIONED_JOIN_TYPES = [:inner, :full_outer, :right_outer, :left_outer, :full, :right, :left]

    # These symbols have _join methods created (e.g. natural_join) that
    # call join_table with the symbol.  They only accept a single table
    # argument which is passed to join_table, and they raise an error
    # if called with a block.
    UNCONDITIONED_JOIN_TYPES = [:natural, :natural_left, :natural_right, :natural_full, :cross]

    # All methods that return modified datasets with a joined table added.
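    # (That is, the generated *_join methods such as +inner_join+ and
    # +natural_join+, plus the generic +join+ and +join_table+.)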
    JOIN_METHODS = (CONDITIONED_JOIN_TYPES + UNCONDITIONED_JOIN_TYPES).map{|x| "#{x}_join".to_sym} + [:join, :join_table]

    # Methods that return modified datasets
    QUERY_METHODS = (<<-METHS).split.map{|x| x.to_sym} + JOIN_METHODS
      add_graph_aliases and distinct except exclude exclude_having exclude_where
      filter for_update from from_self graph grep group group_and_count group_by
      having intersect invert limit lock_style naked or order order_append
      order_by order_more order_prepend paginate qualify query reverse
      reverse_order select select_all select_append select_group select_more
      server set_defaults set_graph_aliases set_overrides unfiltered ungraphed
      ungrouped union unlimited unordered where with with_recursive with_sql
    METHS

    # REMOVE40: query paginate set_defaults set_overrides

    # Register an extension callback for Dataset objects.  ext should be the
    # extension name symbol, and mod should either be a Module that the
    # dataset is extended with, or a callable object called with the database
    # object.  If mod is not provided, a block can be provided and is treated
    # as the mod object.
    #
    # If mod is a module, this also registers a Database extension that will
    # extend all of the database's datasets.
    def self.register_extension(ext, mod=nil, &block)
      if mod
        raise(Error, "cannot provide both mod and block to Dataset.register_extension") if block
        if mod.is_a?(Module)
          block = proc{|ds| ds.extend(mod)}
          Sequel::Database.register_extension(ext){|db| db.extend_datasets(mod)}
        else
          block = mod
        end
      end
      Sequel.synchronize{EXTENSIONS[ext] = block}
    end

    # Alias for where.
    def and(*cond, &block)
      where(*cond, &block)
    end

    # Returns a new clone of the dataset with the given options merged.
    # If the options changed include options in COLUMN_CHANGE_OPTS, the cached
    # columns are deleted.  This method should generally not be called
    # directly by user code.
    def clone(opts = nil)
      c = super()
      if opts
        c.instance_variable_set(:@opts, @opts.merge(opts))
        c.instance_variable_set(:@columns, nil) if @columns && !opts.each_key{|o| break if COLUMN_CHANGE_OPTS.include?(o)}
      else
        c.instance_variable_set(:@opts, @opts.dup)
      end
      c
    end

    # Returns a copy of the dataset with the SQL DISTINCT clause.
    # The DISTINCT clause is used to remove duplicate rows from the
    # output.  If arguments are provided, uses a DISTINCT ON clause,
    # in which case it will only be distinct on those columns, instead
    # of all returned columns.  Raises an error if arguments
    # are given and DISTINCT ON is not supported.
    #
    #   DB[:items].distinct # SQL: SELECT DISTINCT * FROM items
    #   DB[:items].order(:id).distinct(:id) # SQL: SELECT DISTINCT ON (id) * FROM items ORDER BY id
    def distinct(*args)
      raise(InvalidOperation, "DISTINCT ON not supported") if !args.empty? && !supports_distinct_on?
      clone(:distinct => args)
    end

    # Adds an EXCEPT clause using a second dataset object.
    # An EXCEPT compound dataset returns all rows in the current dataset
    # that are not in the given dataset.
    # Raises an +InvalidOperation+ if the operation is not supported.
    # Options:
    # :alias :: Use the given value as the from_self alias
    # :all :: Set to true to use EXCEPT ALL instead of EXCEPT, so duplicate rows can occur
    # :from_self :: Set to false to not wrap the returned dataset in a from_self, use with care.
# # DB[:items].except(DB[:other_items]) # # SELECT * FROM (SELECT * FROM items EXCEPT SELECT * FROM other_items) AS t1 # # DB[:items].except(DB[:other_items], :all=>true, :from_self=>false) # # SELECT * FROM items EXCEPT ALL SELECT * FROM other_items # # DB[:items].except(DB[:other_items], :alias=>:i) # # SELECT * FROM (SELECT * FROM items EXCEPT SELECT * FROM other_items) AS i def except(dataset, opts=OPTS) raise(InvalidOperation, "EXCEPT not supported") unless supports_intersect_except? raise(InvalidOperation, "EXCEPT ALL not supported") if opts[:all] && !supports_intersect_except_all? compound_clone(:except, dataset, opts) end # Performs the inverse of Dataset#where. Note that if you have multiple filter # conditions, this is not the same as a negation of all conditions. # # DB[:items].exclude(:category => 'software') # # SELECT * FROM items WHERE (category != 'software') # # DB[:items].exclude(:category => 'software', :id=>3) # # SELECT * FROM items WHERE ((category != 'software') OR (id != 3)) def exclude(*cond, &block) _filter_or_exclude(true, :where, *cond, &block) end # Inverts the given conditions and adds them to the HAVING clause. # # DB[:items].select_group(:name).exclude_having{count(name) < 2} # # SELECT name FROM items GROUP BY name HAVING (count(name) >= 2) def exclude_having(*cond, &block) _filter_or_exclude(true, :having, *cond, &block) end # Alias for exclude. def exclude_where(*cond, &block) exclude(*cond, &block) end # Return a clone of the dataset loaded with the extensions, see #extension!. def extension(*exts) clone.extension!(*exts) end # Alias for where. def filter(*cond, &block) where(*cond, &block) end # Returns a cloned dataset with a :update lock style. # # DB[:table].for_update # SELECT * FROM table FOR UPDATE def for_update lock_style(:update) end # Returns a copy of the dataset with the source changed. If no # source is given, removes all tables. If multiple sources # are given, it is the same as using a CROSS JOIN (cartesian product) between all tables. # If a block is given, it is treated as a virtual row block, similar to +where+. # # DB[:items].from # SQL: SELECT * # DB[:items].from(:blah) # SQL: SELECT * FROM blah # DB[:items].from(:blah, :foo) # SQL: SELECT * FROM blah, foo # DB[:items].from{fun(arg)} # SQL: SELECT * FROM fun(arg) def from(*source, &block) virtual_row_columns(source, block) table_alias_num = 0 ctes = nil source.map! do |s| case s when Dataset if hoist_cte?(s) ctes ||= [] ctes += s.opts[:with] s = s.clone(:with=>nil) end SQL::AliasedExpression.new(s, dataset_alias(table_alias_num+=1)) when Symbol sch, table, aliaz = split_symbol(s) if aliaz s = sch ? SQL::QualifiedIdentifier.new(sch, table) : SQL::Identifier.new(table) SQL::AliasedExpression.new(s, aliaz.to_sym) else s end else s end end o = {:from=>source.empty? ? nil : source} o[:with] = (opts[:with] || []) + ctes if ctes o[:num_dataset_sources] = table_alias_num if table_alias_num > 0 clone(o) end # Returns a dataset selecting from the current dataset. # Supplying the :alias option controls the alias of the result. # # ds = DB[:items].order(:name).select(:id, :name) # # SELECT id,name FROM items ORDER BY name # # ds.from_self # # SELECT * FROM (SELECT id, name FROM items ORDER BY name) AS t1 # # ds.from_self(:alias=>:foo) # # SELECT * FROM (SELECT id, name FROM items ORDER BY name) AS foo def from_self(opts=OPTS) fs = {} @opts.keys.each{|k| fs[k] = nil unless NON_SQL_OPTIONS.include?(k)} clone(fs).from(opts[:alias] ? 
as(opts[:alias]) : self) end # Match any of the columns to any of the patterns. The terms can be # strings (which use LIKE) or regular expressions (which are only # supported on MySQL and PostgreSQL). Note that the total number of # pattern matches will be Array(columns).length * Array(terms).length, # which could cause performance issues. # # Options (all are boolean): # # :all_columns :: All columns must be matched to any of the given patterns. # :all_patterns :: All patterns must match at least one of the columns. # :case_insensitive :: Use a case insensitive pattern match (the default is # case sensitive if the database supports it). # # If both :all_columns and :all_patterns are true, all columns must match all patterns. # # Examples: # # dataset.grep(:a, '%test%') # # SELECT * FROM items WHERE (a LIKE '%test%') # # dataset.grep([:a, :b], %w'%test% foo') # # SELECT * FROM items WHERE ((a LIKE '%test%') OR (a LIKE 'foo') OR (b LIKE '%test%') OR (b LIKE 'foo')) # # dataset.grep([:a, :b], %w'%foo% %bar%', :all_patterns=>true) # # SELECT * FROM a WHERE (((a LIKE '%foo%') OR (b LIKE '%foo%')) AND ((a LIKE '%bar%') OR (b LIKE '%bar%'))) # # dataset.grep([:a, :b], %w'%foo% %bar%', :all_columns=>true) # # SELECT * FROM a WHERE (((a LIKE '%foo%') OR (a LIKE '%bar%')) AND ((b LIKE '%foo%') OR (b LIKE '%bar%'))) # # dataset.grep([:a, :b], %w'%foo% %bar%', :all_patterns=>true, :all_columns=>true) # # SELECT * FROM a WHERE ((a LIKE '%foo%') AND (b LIKE '%foo%') AND (a LIKE '%bar%') AND (b LIKE '%bar%')) def grep(columns, patterns, opts=OPTS) if opts[:all_patterns] conds = Array(patterns).map do |pat| SQL::BooleanExpression.new(opts[:all_columns] ? :AND : :OR, *Array(columns).map{|c| SQL::StringExpression.like(c, pat, opts)}) end where(SQL::BooleanExpression.new(opts[:all_patterns] ? :AND : :OR, *conds)) else conds = Array(columns).map do |c| SQL::BooleanExpression.new(:OR, *Array(patterns).map{|pat| SQL::StringExpression.like(c, pat, opts)}) end where(SQL::BooleanExpression.new(opts[:all_columns] ? :AND : :OR, *conds)) end end # Returns a copy of the dataset with the results grouped by the value of # the given columns. If a block is given, it is treated # as a virtual row block, similar to +where+. # # DB[:items].group(:id) # SELECT * FROM items GROUP BY id # DB[:items].group(:id, :name) # SELECT * FROM items GROUP BY id, name # DB[:items].group{[a, sum(b)]} # SELECT * FROM items GROUP BY a, sum(b) def group(*columns, &block) virtual_row_columns(columns, block) clone(:group => (columns.compact.empty? ? nil : columns)) end # Alias of group def group_by(*columns, &block) group(*columns, &block) end # Returns a dataset grouped by the given column with count by group. # Column aliases may be supplied, and will be included in the select clause. # If a block is given, it is treated as a virtual row block, similar to +where+. # # Examples: # # DB[:items].group_and_count(:name).all # # SELECT name, count(*) AS count FROM items GROUP BY name # # => [{:name=>'a', :count=>1}, ...] # # DB[:items].group_and_count(:first_name, :last_name).all # # SELECT first_name, last_name, count(*) AS count FROM items GROUP BY first_name, last_name # # => [{:first_name=>'a', :last_name=>'b', :count=>1}, ...] # # DB[:items].group_and_count(:first_name___name).all # # SELECT first_name AS name, count(*) AS count FROM items GROUP BY first_name # # => [{:name=>'a', :count=>1}, ...] 
# # DB[:items].group_and_count{substr(first_name, 1, 1).as(initial)}.all # # SELECT substr(first_name, 1, 1) AS initial, count(*) AS count FROM items GROUP BY substr(first_name, 1, 1) # # => [{:initial=>'a', :count=>1}, ...] def group_and_count(*columns, &block) select_group(*columns, &block).select_more(COUNT_OF_ALL_AS_COUNT) end # Adds the appropriate CUBE syntax to GROUP BY. def group_cube raise Error, "GROUP BY CUBE not supported on #{db.database_type}" unless supports_group_cube? clone(:group_options=>:cube) end # Adds the appropriate ROLLUP syntax to GROUP BY. def group_rollup raise Error, "GROUP BY ROLLUP not supported on #{db.database_type}" unless supports_group_rollup? clone(:group_options=>:rollup) end # Returns a copy of the dataset with the HAVING conditions changed. See #where for argument types. # # DB[:items].group(:sum).having(:sum=>10) # # SELECT * FROM items GROUP BY sum HAVING (sum = 10) def having(*cond, &block) _filter(:having, *cond, &block) end # Adds an INTERSECT clause using a second dataset object. # An INTERSECT compound dataset returns all rows in both the current dataset # and the given dataset. # Raises an +InvalidOperation+ if the operation is not supported. # Options: # :alias :: Use the given value as the from_self alias # :all :: Set to true to use INTERSECT ALL instead of INTERSECT, so duplicate rows can occur # :from_self :: Set to false to not wrap the returned dataset in a from_self, use with care. # # DB[:items].intersect(DB[:other_items]) # # SELECT * FROM (SELECT * FROM items INTERSECT SELECT * FROM other_items) AS t1 # # DB[:items].intersect(DB[:other_items], :all=>true, :from_self=>false) # # SELECT * FROM items INTERSECT ALL SELECT * FROM other_items # # DB[:items].intersect(DB[:other_items], :alias=>:i) # # SELECT * FROM (SELECT * FROM items INTERSECT SELECT * FROM other_items) AS i def intersect(dataset, opts=OPTS) raise(InvalidOperation, "INTERSECT not supported") unless supports_intersect_except? raise(InvalidOperation, "INTERSECT ALL not supported") if opts[:all] && !supports_intersect_except_all? compound_clone(:intersect, dataset, opts) end # Inverts the current WHERE and HAVING clauses. If there is neither a # WHERE or HAVING clause, adds a WHERE clause that is always false. # # DB[:items].where(:category => 'software').invert # # SELECT * FROM items WHERE (category != 'software') # # DB[:items].where(:category => 'software', :id=>3).invert # # SELECT * FROM items WHERE ((category != 'software') OR (id != 3)) def invert having, where = @opts.values_at(:having, :where) if having.nil? && where.nil? where(false) else o = {} o[:having] = SQL::BooleanExpression.invert(having) if having o[:where] = SQL::BooleanExpression.invert(where) if where clone(o) end end # Alias of +inner_join+ def join(*args, &block) inner_join(*args, &block) end # Returns a joined dataset. Not usually called directly, users should use the # appropriate join method (e.g. join, left_join, natural_join, cross_join) which fills # in the +type+ argument. # # Takes the following arguments: # # * type - The type of join to do (e.g. :inner) # * table - Depends on type: # * Dataset - a subselect is performed with an alias of tN for some value of N # * String, Symbol: table # * expr - specifies conditions, depends on type: # * Hash, Array of two element arrays - Assumes key (1st arg) is column of joined table (unless already # qualified), and value (2nd arg) is column of the last joined or primary table (or the # :implicit_qualifier option). 
    #   To specify multiple conditions on a single joined table column, you must use an array.
    #   Uses a JOIN with an ON clause.
    # * Array - If all members of the array are symbols, considers them as columns and
    #   uses a JOIN with a USING clause.  Most databases will remove duplicate columns from
    #   the result set if this is used.
    # * nil - If a block is not given, doesn't use ON or USING, so the JOIN should be a NATURAL
    #   or CROSS join.  If a block is given, uses an ON clause based on the block, see below.
    # * Everything else - pretty much the same as using the argument in a call to where,
    #   so strings are considered literal, symbols specify boolean columns, and Sequel
    #   expressions can be used.  Uses a JOIN with an ON clause.
    # * options - a hash of options, with any of the following keys:
    #   * :table_alias - the name of the table's alias when joining, necessary for joining
    #     to the same table more than once.  No alias is used by default.
    #   * :implicit_qualifier - The name to use for qualifying implicit conditions.  By default,
    #     the last joined or primary table is used.
    #   * :qualify - Can be set to false to not do any implicit qualification.  Can be set
    #     to :deep to use the Qualifier AST Transformer, which will attempt to qualify
    #     subexpressions of the expression tree.  Can be set to :symbol to only qualify
    #     symbols.  Defaults to the value of default_join_table_qualification.
    # * block - The block argument should only be given if a JOIN with an ON clause is used,
    #   in which case it yields the table alias/name for the table currently being joined,
    #   the table alias/name for the last joined (or first table), and an array of previous
    #   SQL::JoinClause.  Unlike +where+, this block is not treated as a virtual row block.
    #
    # Examples:
    #
    #   DB[:a].join_table(:cross, :b)
    #   # SELECT * FROM a CROSS JOIN b
    #
    #   DB[:a].join_table(:inner, DB[:b], :c=>d)
    #   # SELECT * FROM a INNER JOIN (SELECT * FROM b) AS t1 ON (t1.c = a.d)
    #
    #   DB[:a].join_table(:left, :b___c, [:d])
    #   # SELECT * FROM a LEFT JOIN b AS c USING (d)
    #
    #   DB[:a].natural_join(:b).join_table(:inner, :c) do |ta, jta, js|
    #     (Sequel.qualify(ta, :d) > Sequel.qualify(jta, :e)) & {Sequel.qualify(ta, :f)=>DB.from(js.first.table).select(:g)}
    #   end
    #   # SELECT * FROM a NATURAL JOIN b INNER JOIN c
    #   #   ON ((c.d > b.e) AND (c.f IN (SELECT g FROM b)))
    def join_table(type, table, expr=nil, options=OPTS, &block)
      if hoist_cte?(table)
        s, ds = hoist_cte(table)
        return s.join_table(type, ds, expr, options, &block)
      end

      using_join = expr.is_a?(Array) && !expr.empty? && expr.all?{|x| x.is_a?(Symbol)}
      if using_join && !supports_join_using?
        h = {}
        expr.each{|e| h[e] = e}
        return join_table(type, table, h, options)
      end

      table_alias = options[:table_alias]
      last_alias = options[:implicit_qualifier]
      qualify_type = options[:qualify]

      if table.is_a?(Dataset)
        if table_alias.nil?
          table_alias_num = (@opts[:num_dataset_sources] || 0) + 1
          table_alias = dataset_alias(table_alias_num)
        end
        table_name = table_alias
      else
        table, implicit_table_alias = split_alias(table)
        table_alias ||= implicit_table_alias
        table_name = table_alias || table
      end

      join = if expr.nil? and !block
        SQL::JoinClause.new(type, table, table_alias)
      elsif using_join
        raise(Sequel::Error, "can't use a block if providing an array of symbols as expr") if block
        SQL::JoinUsingClause.new(expr, type, table, table_alias)
      else
        last_alias ||= @opts[:last_joined_table] || first_source_alias
        if Sequel.condition_specifier?(expr)
          expr = expr.collect do |k, v|
            qualify_type = default_join_table_qualification if qualify_type.nil?
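            # A rough sketch of the qualification modes handled below: with
            # the default :symbol, only symbol keys/values are qualified (the
            # key by the joined table, the value by the last joined table);
            # :deep also rewrites subexpressions via Sequel::Qualifier; false
            # skips implicit qualification entirely.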
case qualify_type when false nil # Do no qualification when :deep k = Sequel::Qualifier.new(self, table_name).transform(k) v = Sequel::Qualifier.new(self, last_alias).transform(v) else k = qualified_column_name(k, table_name) if k.is_a?(Symbol) v = qualified_column_name(v, last_alias) if v.is_a?(Symbol) end [k,v] end expr = SQL::BooleanExpression.from_value_pairs(expr) end if block expr2 = yield(table_name, last_alias, @opts[:join] || []) expr = expr ? SQL::BooleanExpression.new(:AND, expr, expr2) : expr2 end SQL::JoinOnClause.new(expr, type, table, table_alias) end opts = {:join => (@opts[:join] || []) + [join], :last_joined_table => table_name} opts[:num_dataset_sources] = table_alias_num if table_alias_num clone(opts) end CONDITIONED_JOIN_TYPES.each do |jtype| class_eval("def #{jtype}_join(*args, &block); join_table(:#{jtype}, *args, &block) end", __FILE__, __LINE__) end UNCONDITIONED_JOIN_TYPES.each do |jtype| class_eval("def #{jtype}_join(table); raise(Sequel::Error, '#{jtype}_join does not accept join table blocks') if block_given?; join_table(:#{jtype}, table) end", __FILE__, __LINE__) end # If given an integer, the dataset will contain only the first l results. # If given a range, it will contain only those at offsets within that # range. If a second argument is given, it is used as an offset. To use # an offset without a limit, pass nil as the first argument. # # DB[:items].limit(10) # SELECT * FROM items LIMIT 10 # DB[:items].limit(10, 20) # SELECT * FROM items LIMIT 10 OFFSET 20 # DB[:items].limit(10...20) # SELECT * FROM items LIMIT 10 OFFSET 10 # DB[:items].limit(10..20) # SELECT * FROM items LIMIT 11 OFFSET 10 # DB[:items].limit(nil, 20) # SELECT * FROM items OFFSET 20 def limit(l, o = (no_offset = true; nil)) return from_self.limit(l, o) if @opts[:sql] if l.is_a?(Range) o = l.first l = l.last - l.first + (l.exclude_end? ? 0 : 1) end l = l.to_i if l.is_a?(String) && !l.is_a?(LiteralString) if l.is_a?(Integer) raise(Error, 'Limits must be greater than or equal to 1') unless l >= 1 end opts = {:limit => l} if o o = o.to_i if o.is_a?(String) && !o.is_a?(LiteralString) if o.is_a?(Integer) raise(Error, 'Offsets must be greater than or equal to 0') unless o >= 0 end opts[:offset] = o elsif !no_offset opts[:offset] = nil end clone(opts) end # Returns a cloned dataset with the given lock style. If style is a # string, it will be used directly. You should never pass a string # to this method that is derived from user input, as that can lead to # SQL injection. # # A symbol may be used for database independent locking behavior, but # all supported symbols have separate methods (e.g. for_update). # # DB[:items].lock_style('FOR SHARE NOWAIT') # SELECT * FROM items FOR SHARE NOWAIT def lock_style(style) clone(:lock => style) end # Returns a cloned dataset without a row_proc. # # ds = DB[:items] # ds.row_proc = proc{|r| r.invert} # ds.all # => [{2=>:id}] # ds.naked.all # => [{:id=>2}] def naked ds = clone ds.row_proc = nil ds end # Adds an alternate filter to an existing filter using OR. If no filter # exists an +Error+ is raised. # # DB[:items].where(:a).or(:b) # SELECT * FROM items WHERE a OR b def or(*cond, &block) cond = cond.first if cond.size == 1 v = @opts[:where] if v.nil? || (cond.respond_to?(:empty?) && cond.empty? && !block) clone else clone(:where => SQL::BooleanExpression.new(:OR, v, filter_expr(cond, &block))) end end # Returns a copy of the dataset with the order changed. If the dataset has an # existing order, it is ignored and overwritten with this order. 
If a nil is given # the returned dataset has no order. This can accept multiple arguments # of varying kinds, such as SQL functions. If a block is given, it is treated # as a virtual row block, similar to +where+. # # DB[:items].order(:name) # SELECT * FROM items ORDER BY name # DB[:items].order(:a, :b) # SELECT * FROM items ORDER BY a, b # DB[:items].order(Sequel.lit('a + b')) # SELECT * FROM items ORDER BY a + b # DB[:items].order(:a + :b) # SELECT * FROM items ORDER BY (a + b) # DB[:items].order(Sequel.desc(:name)) # SELECT * FROM items ORDER BY name DESC # DB[:items].order(Sequel.asc(:name, :nulls=>:last)) # SELECT * FROM items ORDER BY name ASC NULLS LAST # DB[:items].order{sum(name).desc} # SELECT * FROM items ORDER BY sum(name) DESC # DB[:items].order(nil) # SELECT * FROM items def order(*columns, &block) virtual_row_columns(columns, block) clone(:order => (columns.compact.empty?) ? nil : columns) end # Alias of order_more, for naming consistency with order_prepend. def order_append(*columns, &block) order_more(*columns, &block) end # Alias of order def order_by(*columns, &block) order(*columns, &block) end # Returns a copy of the dataset with the order columns added # to the end of the existing order. # # DB[:items].order(:a).order(:b) # SELECT * FROM items ORDER BY b # DB[:items].order(:a).order_more(:b) # SELECT * FROM items ORDER BY a, b def order_more(*columns, &block) columns = @opts[:order] + columns if @opts[:order] order(*columns, &block) end # Returns a copy of the dataset with the order columns added # to the beginning of the existing order. # # DB[:items].order(:a).order(:b) # SELECT * FROM items ORDER BY b # DB[:items].order(:a).order_prepend(:b) # SELECT * FROM items ORDER BY b, a def order_prepend(*columns, &block) ds = order(*columns, &block) @opts[:order] ? ds.order_more(*@opts[:order]) : ds end # Qualify to the given table, or first source if no table is given. # # DB[:items].where(:id=>1).qualify # # SELECT items.* FROM items WHERE (items.id = 1) # # DB[:items].where(:id=>1).qualify(:i) # # SELECT i.* FROM items WHERE (i.id = 1) def qualify(table=first_source) o = @opts return clone if o[:sql] h = {} (o.keys & QUALIFY_KEYS).each do |k| h[k] = qualified_expression(o[k], table) end h[:select] = [SQL::ColumnAll.new(table)] if !o[:select] || o[:select].empty? clone(h) end # Modify the RETURNING clause, only supported on a few databases. If returning # is used, instead of insert returning the autogenerated primary key or # update/delete returning the number of modified rows, results are # returned using +fetch_rows+. # # DB[:items].returning # RETURNING * # DB[:items].returning(nil) # RETURNING NULL # DB[:items].returning(:id, :name) # RETURNING id, name def returning(*values) clone(:returning=>values) end # Returns a copy of the dataset with the order reversed. If no order is # given, the existing order is inverted. # # DB[:items].reverse(:id) # SELECT * FROM items ORDER BY id DESC # DB[:items].reverse{foo(bar)} # SELECT * FROM items ORDER BY foo(bar) DESC # DB[:items].order(:id).reverse # SELECT * FROM items ORDER BY id DESC # DB[:items].order(:id).reverse(Sequel.desc(:name)) # SELECT * FROM items ORDER BY name ASC def reverse(*order, &block) virtual_row_columns(order, block) order(*invert_order(order.empty? ? @opts[:order] : order)) end # Alias of +reverse+ def reverse_order(*order, &block) reverse(*order, &block) end # Returns a copy of the dataset with the columns selected changed # to the given columns. This also takes a virtual row block, # similar to +where+. 
# # DB[:items].select(:a) # SELECT a FROM items # DB[:items].select(:a, :b) # SELECT a, b FROM items # DB[:items].select{[a, sum(b)]} # SELECT a, sum(b) FROM items def select(*columns, &block) virtual_row_columns(columns, block) clone(:select => columns) end # Returns a copy of the dataset selecting the wildcard if no arguments # are given. If arguments are given, treat them as tables and select # all columns (using the wildcard) from each table. # # DB[:items].select(:a).select_all # SELECT * FROM items # DB[:items].select_all(:items) # SELECT items.* FROM items # DB[:items].select_all(:items, :foo) # SELECT items.*, foo.* FROM items def select_all(*tables) if tables.empty? clone(:select => nil) else select(*tables.map{|t| i, a = split_alias(t); a || i}.map{|t| SQL::ColumnAll.new(t)}) end end # Returns a copy of the dataset with the given columns added # to the existing selected columns. If no columns are currently selected, # it will select the columns given in addition to *. # # DB[:items].select(:a).select(:b) # SELECT b FROM items # DB[:items].select(:a).select_append(:b) # SELECT a, b FROM items # DB[:items].select_append(:b) # SELECT *, b FROM items def select_append(*columns, &block) cur_sel = @opts[:select] if !cur_sel || cur_sel.empty? unless supports_select_all_and_column? return select_all(*(Array(@opts[:from]) + Array(@opts[:join]))).select_more(*columns, &block) end cur_sel = [WILDCARD] end select(*(cur_sel + columns), &block) end # Set both the select and group clauses with the given +columns+. # Column aliases may be supplied, and will be included in the select clause. # This also takes a virtual row block similar to +where+. # # DB[:items].select_group(:a, :b) # # SELECT a, b FROM items GROUP BY a, b # # DB[:items].select_group(:c___a){f(c2)} # # SELECT c AS a, f(c2) FROM items GROUP BY c, f(c2) def select_group(*columns, &block) virtual_row_columns(columns, block) select(*columns).group(*columns.map{|c| unaliased_identifier(c)}) end # Alias for select_append. def select_more(*columns, &block) select_append(*columns, &block) end # Set the server for this dataset to use. Used to pick a specific database # shard to run a query against, or to override the default (where SELECT uses # :read_only database and all other queries use the :default database). This # method is always available but is only useful when database sharding is being # used. # # DB[:items].all # Uses the :read_only or :default server # DB[:items].delete # Uses the :default server # DB[:items].server(:blah).delete # Uses the :blah server def server(servr) clone(:server=>servr) end # Unbind bound variables from this dataset's filter and return an array of two # objects. The first object is a modified dataset where the filter has been # replaced with one that uses bound variable placeholders. The second object # is the hash of unbound variables. You can then prepare and execute (or just # call) the dataset with the bound variables to get results. # # ds, bv = DB[:items].where(:a=>1).unbind # ds # SELECT * FROM items WHERE (a = $a) # bv # {:a => 1} # ds.call(:select, bv) def unbind u = Unbinder.new ds = clone(:where=>u.transform(opts[:where]), :join=>u.transform(opts[:join])) [ds, u.binds] end # Returns a copy of the dataset with no filters (HAVING or WHERE clause) applied. # # DB[:items].group(:a).having(:a=>1).where(:b).unfiltered # # SELECT * FROM items GROUP BY a def unfiltered clone(:where => nil, :having => nil) end # Returns a copy of the dataset with no grouping (GROUP or HAVING clause) applied. 
# # DB[:items].group(:a).having(:a=>1).where(:b).ungrouped # # SELECT * FROM items WHERE b def ungrouped clone(:group => nil, :having => nil) end # Adds a UNION clause using a second dataset object. # A UNION compound dataset returns all rows in either the current dataset # or the given dataset. # Options: # :alias :: Use the given value as the from_self alias # :all :: Set to true to use UNION ALL instead of UNION, so duplicate rows can occur # :from_self :: Set to false to not wrap the returned dataset in a from_self, use with care. # # DB[:items].union(DB[:other_items]) # # SELECT * FROM (SELECT * FROM items UNION SELECT * FROM other_items) AS t1 # # DB[:items].union(DB[:other_items], :all=>true, :from_self=>false) # # SELECT * FROM items UNION ALL SELECT * FROM other_items # # DB[:items].union(DB[:other_items], :alias=>:i) # # SELECT * FROM (SELECT * FROM items UNION SELECT * FROM other_items) AS i def union(dataset, opts=OPTS) compound_clone(:union, dataset, opts) end # Returns a copy of the dataset with no limit or offset. # # DB[:items].limit(10, 20).unlimited # SELECT * FROM items def unlimited clone(:limit=>nil, :offset=>nil) end # Returns a copy of the dataset with no order. # # DB[:items].order(:a).unordered # SELECT * FROM items def unordered order(nil) end # Returns a copy of the dataset with the given WHERE conditions imposed upon it. # # Accepts the following argument types: # # * Hash - list of equality/inclusion expressions # * Array - depends: # * If first member is a string, assumes the rest of the arguments # are parameters and interpolates them into the string. # * If all members are arrays of length two, treats the same way # as a hash, except it allows for duplicate keys to be # specified. # * Otherwise, treats each argument as a separate condition. # * String - taken literally # * Symbol - taken as a boolean column argument (e.g. WHERE active) # * Sequel::SQL::BooleanExpression - an existing condition expression, # probably created using the Sequel expression filter DSL. # # where also accepts a block, which should return one of the above argument # types, and is treated the same way. This block yields a virtual row object, # which is easy to use to create identifiers and functions. For more details # on the virtual row support, see the {"Virtual Rows" guide}[link:files/doc/virtual_rows_rdoc.html] # # If both a block and regular argument are provided, they get ANDed together. # # Examples: # # DB[:items].where(:id => 3) # # SELECT * FROM items WHERE (id = 3) # # DB[:items].where('price < ?', 100) # # SELECT * FROM items WHERE price < 100 # # DB[:items].where([[:id, [1,2,3]], [:id, 0..10]]) # # SELECT * FROM items WHERE ((id IN (1, 2, 3)) AND ((id >= 0) AND (id <= 10))) # # DB[:items].where('price < 100') # # SELECT * FROM items WHERE price < 100 # # DB[:items].where(:active) # # SELECT * FROM items WHERE :active # # DB[:items].where{price < 100} # # SELECT * FROM items WHERE (price < 100) # # Multiple where calls can be chained for scoping: # # software = dataset.where(:category => 'software').where{price < 100} # # SELECT * FROM items WHERE ((category = 'software') AND (price < 100)) # # See the the {"Dataset Filtering" guide}[link:files/doc/dataset_filtering_rdoc.html] for more examples and details. def where(*cond, &block) _filter(:where, *cond, &block) end # Add a common table expression (CTE) with the given name and a dataset that defines the CTE. # A common table expression acts as an inline view for the query. 
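    # An illustrative sketch (hypothetical tables): the named dataset can be
    # referenced by the main query as if it were a table:
    #
    #   DB[:t].with(:t, DB[:x].where{a > 1})
    #   # WITH t AS (SELECT * FROM x WHERE (a > 1)) SELECT * FROM t
    #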
    # Options:
    # :args :: Specify the arguments/columns for the CTE, should be an array of symbols.
    # :recursive :: Specify that this is a recursive CTE
    #
    #   DB[:items].with(:items, DB[:syx].where(:name.like('A%')))
    #   # WITH items AS (SELECT * FROM syx WHERE (name LIKE 'A%')) SELECT * FROM items
    def with(name, dataset, opts=OPTS)
      raise(Error, 'This dataset does not support common table expressions') unless supports_cte?
      if hoist_cte?(dataset)
        s, ds = hoist_cte(dataset)
        s.with(name, ds, opts)
      else
        clone(:with=>(@opts[:with]||[]) + [opts.merge(:name=>name, :dataset=>dataset)])
      end
    end

    # Add a recursive common table expression (CTE) with the given name, a dataset that
    # defines the nonrecursive part of the CTE, and a dataset that defines the recursive part
    # of the CTE.  Options:
    # :args :: Specify the arguments/columns for the CTE, should be an array of symbols.
    # :union_all :: Set to false to use UNION instead of UNION ALL combining the nonrecursive and recursive parts.
    #
    #   DB[:t].with_recursive(:t,
    #     DB[:i1].select(:id, :parent_id).where(:parent_id=>nil),
    #     DB[:i1].join(:t, :id=>:parent_id).select(:i1__id, :i1__parent_id),
    #     :args=>[:id, :parent_id])
    #
    #   # WITH RECURSIVE "t"("id", "parent_id") AS (
    #   #   SELECT "id", "parent_id" FROM "i1" WHERE ("parent_id" IS NULL)
    #   #   UNION ALL
    #   #   SELECT "i1"."id", "i1"."parent_id" FROM "i1" INNER JOIN "t" ON ("t"."id" = "i1"."parent_id")
    #   # ) SELECT * FROM "t"
    def with_recursive(name, nonrecursive, recursive, opts=OPTS)
      raise(Error, 'This dataset does not support common table expressions') unless supports_cte?
      if hoist_cte?(nonrecursive)
        s, ds = hoist_cte(nonrecursive)
        s.with_recursive(name, ds, recursive, opts)
      elsif hoist_cte?(recursive)
        s, ds = hoist_cte(recursive)
        s.with_recursive(name, nonrecursive, ds, opts)
      else
        clone(:with=>(@opts[:with]||[]) + [opts.merge(:recursive=>true, :name=>name, :dataset=>nonrecursive.union(recursive, {:all=>opts[:union_all] != false, :from_self=>false}))])
      end
    end

    # Returns a copy of the dataset with the static SQL used.  This is useful if you want
    # to keep the same row_proc/graph, but change the SQL used to custom SQL.
    #
    #   DB[:items].with_sql('SELECT * FROM foo') # SELECT * FROM foo
    #
    # You can use placeholders in your SQL and provide arguments for those placeholders:
    #
    #   DB[:items].with_sql('SELECT ? FROM foo', 1) # SELECT 1 FROM foo
    #
    # You can also provide a method name and arguments to call to get the SQL:
    #
    #   DB[:items].with_sql(:insert_sql, :b=>1) # INSERT INTO items (b) VALUES (1)
    def with_sql(sql, *args)
      if sql.is_a?(Symbol)
        sql = send(sql, *args)
      else
        sql = SQL::PlaceholderLiteralString.new(sql, args) unless args.empty?
      end
      clone(:sql=>sql)
    end

    protected

    # Add the dataset to the list of compounds
    def compound_clone(type, dataset, opts)
      if hoist_cte?(dataset)
        s, ds = hoist_cte(dataset)
        return s.compound_clone(type, ds, opts)
      end
      ds = compound_from_self.clone(:compounds=>Array(@opts[:compounds]).map{|x| x.dup} + [[type, dataset.compound_from_self, opts[:all]]])
      opts[:from_self] == false ? ds : ds.from_self(opts)
    end

    # Return true if the dataset has a non-nil value for any key in opts.
    def options_overlap(opts)
      !(@opts.collect{|k,v| k unless v.nil?}.compact & opts).empty?
    end

    # Whether this dataset is a simple SELECT * FROM table.
    def simple_select_all?
      o = @opts.reject{|k,v| v.nil?
|| NON_SQL_OPTIONS.include?(k)} o.length == 1 && (f = o[:from]) && f.length == 1 && (f.first.is_a?(Symbol) || f.first.is_a?(SQL::AliasedExpression)) end private # Internal filtering method so it works on either the WHERE or HAVING clauses, with or # without inversion. def _filter_or_exclude(invert, clause, *cond, &block) cond = cond.first if cond.size == 1 if cond.respond_to?(:empty?) && cond.empty? && !block clone else cond = filter_expr(cond, &block) cond = SQL::BooleanExpression.invert(cond) if invert cond = SQL::BooleanExpression.new(:AND, @opts[clause], cond) if @opts[clause] clone(clause => cond) end end # Internal filter method so it works on either the having or where clauses. def _filter(clause, *cond, &block) _filter_or_exclude(false, clause, *cond, &block) end # The default :qualify option to use for join tables if one is not specified. def default_join_table_qualification :symbol end # SQL expression object based on the expr type. See +where+. def filter_expr(expr = nil, &block) expr = nil if expr == [] if expr && block return SQL::BooleanExpression.new(:AND, filter_expr(expr), filter_expr(block)) elsif block expr = block end case expr when Hash SQL::BooleanExpression.from_value_pairs(expr) when Array if (sexpr = expr.at(0)).is_a?(String) SQL::PlaceholderLiteralString.new(sexpr, expr[1..-1], true) elsif Sequel.condition_specifier?(expr) SQL::BooleanExpression.from_value_pairs(expr) else SQL::BooleanExpression.new(:AND, *expr.map{|x| filter_expr(x)}) end when Proc filter_expr(Sequel.virtual_row(&expr)) when SQL::NumericExpression, SQL::StringExpression raise(Error, "Invalid SQL Expression type: #{expr.inspect}") when Symbol, SQL::Expression expr when TrueClass, FalseClass if supports_where_true? SQL::BooleanExpression.new(:NOOP, expr) elsif expr SQL::Constants::SQLTRUE else SQL::Constants::SQLFALSE end when String LiteralString.new("(#{expr})") else raise(Error, "Invalid filter argument: #{expr.inspect}") end end # Return two datasets, the first a clone of the receiver with the WITH # clause from the given dataset added to it, and the second a clone of # the given dataset with the WITH clause removed. def hoist_cte(ds) [clone(:with => (opts[:with] || []) + ds.opts[:with]), ds.clone(:with => nil)] end # Whether CTEs need to be hoisted from the given ds into the current ds. def hoist_cte?(ds) ds.is_a?(Dataset) && ds.opts[:with] && !supports_cte_in_subqueries? end # Inverts the given order by breaking it into a list of column references # and inverting them. # # DB[:items].invert_order([Sequel.desc(:id)]]) #=> [Sequel.asc(:id)] # DB[:items].invert_order([:category, Sequel.desc(:price)]) #=> [Sequel.desc(:category), Sequel.asc(:price)] def invert_order(order) return nil unless order order.map do |f| case f when SQL::OrderedExpression f.invert else SQL::OrderedExpression.new(f) end end end # Return self if the dataset already has a server, or a cloned dataset with the # default server otherwise. def default_server @opts[:server] ? self : clone(:server=>:default) end # Treat the +block+ as a virtual_row block if not +nil+ and # add the resulting columns to the +columns+ array (modifies +columns+). 
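    # An illustrative sketch of the effect (hypothetical values):
    #
    #   cols = [:a]
    #   virtual_row_columns(cols, proc{sum(b)})
    #   cols # => [:a, <sum(b) function expression>]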
    def virtual_row_columns(columns, block)
      if block
        v = Sequel.virtual_row(&block)
        if v.is_a?(Array)
          columns.concat(v)
        else
          columns << v
        end
      end
    end
  end
end
ruby-sequel-4.1.1/lib/sequel/dataset/sql.rb
module Sequel
  class Dataset
    # ---------------------
    # :section: 3 - User Methods relating to SQL Creation
    # These are methods you can call to see what SQL will be generated by the dataset.
    # ---------------------

    # Returns a DELETE SQL query string.  See +delete+.
    #
    #   dataset.filter{|o| o.price >= 100}.delete_sql
    #   # => "DELETE FROM items WHERE (price >= 100)"
    def delete_sql
      return static_sql(opts[:sql]) if opts[:sql]
      check_modification_allowed!
      clause_sql(:delete)
    end

    # Returns an EXISTS clause for the dataset as a +LiteralString+.
    #
    #   DB.select(1).where(DB[:items].exists)
    #   # SELECT 1 WHERE (EXISTS (SELECT * FROM items))
    def exists
      SQL::PlaceholderLiteralString.new(EXISTS, [self], true)
    end

    # Returns an INSERT SQL query string.  See +insert+.
    #
    #   DB[:items].insert_sql(:a=>1)
    #   # => "INSERT INTO items (a) VALUES (1)"
    def insert_sql(*values)
      return static_sql(@opts[:sql]) if @opts[:sql]
      check_modification_allowed!
      columns = []
      case values.size
      when 0
        return insert_sql({})
      when 1
        case vals = values.at(0)
        when Hash
          values = []
          vals.each do |k,v|
            columns << k
            values << v
          end
        when Dataset, Array, LiteralString
          values = vals
        end
      when 2
        if (v0 = values.at(0)).is_a?(Array) && ((v1 = values.at(1)).is_a?(Array) || v1.is_a?(Dataset) || v1.is_a?(LiteralString))
          columns, values = v0, v1
          raise(Error, "Different number of values and columns given to insert_sql") if values.is_a?(Array) and columns.length != values.length
        end
      end
      if values.is_a?(Array) && values.empty? && !insert_supports_empty_values?
        columns = [columns().last]
        values = [DEFAULT]
      end
      clone(:columns=>columns, :values=>values)._insert_sql
    end

    # Returns a literal representation of a value to be used as part
    # of an SQL expression.
    #
    #   DB[:items].literal("abc'def\\") #=> "'abc''def\\\\'"
    #   DB[:items].literal(:items__id) #=> "items.id"
    #   DB[:items].literal([1, 2, 3]) => "(1, 2, 3)"
    #   DB[:items].literal(DB[:items]) => "(SELECT * FROM items)"
    #   DB[:items].literal(:x + 1 > :y) => "((x + 1) > y)"
    #
    # If an unsupported object is given, an +Error+ is raised.
    def literal_append(sql, v)
      case v
      when Symbol
        literal_symbol_append(sql, v)
      when String
        case v
        when LiteralString
          sql << v
        when SQL::Blob
          literal_blob_append(sql, v)
        else
          literal_string_append(sql, v)
        end
      when Integer
        sql << literal_integer(v)
      when Hash
        literal_hash_append(sql, v)
      when SQL::Expression
        literal_expression_append(sql, v)
      when Float
        sql << literal_float(v)
      when BigDecimal
        sql << literal_big_decimal(v)
      when NilClass
        sql << literal_nil
      when TrueClass
        sql << literal_true
      when FalseClass
        sql << literal_false
      when Array
        literal_array_append(sql, v)
      when Time
        sql << (v.is_a?(SQLTime) ? literal_sqltime(v) : literal_time(v))
      when DateTime
        sql << literal_datetime(v)
      when Date
        sql << literal_date(v)
      when Dataset
        literal_dataset_append(sql, v)
      else
        literal_other_append(sql, v)
      end
    end

    # Returns an array of insert statements for inserting multiple records.
    # This method is used by +multi_insert+ to format insert statements and
    # expects a keys array and an array of value arrays.
    #
    # This method should be overridden by descendants if they support
    # inserting multiple records in a single SQL statement.
    def multi_insert_sql(columns, values)
      values.map{|r| insert_sql(columns, r)}
    end

    # Returns a SELECT SQL query string.
# # dataset.select_sql # => "SELECT * FROM items" def select_sql return static_sql(@opts[:sql]) if @opts[:sql] clause_sql(:select) end # Same as +select_sql+, not aliased directly to make subclassing simpler. def sql select_sql end # Returns a TRUNCATE SQL query string. See +truncate+ # # DB[:items].truncate_sql # => 'TRUNCATE items' def truncate_sql if opts[:sql] static_sql(opts[:sql]) else check_truncation_allowed! raise(InvalidOperation, "Can't truncate filtered datasets") if opts[:where] || opts[:having] t = '' source_list_append(t, opts[:from]) _truncate_sql(t) end end # Formats an UPDATE statement using the given values. See +update+. # # DB[:items].update_sql(:price => 100, :category => 'software') # # => "UPDATE items SET price = 100, category = 'software' # # Raises an +Error+ if the dataset is grouped or includes more # than one table. def update_sql(values = OPTS) return static_sql(opts[:sql]) if opts[:sql] check_modification_allowed! clone(:values=>values)._update_sql end # --------------------- # :section: 9 - Internal Methods relating to SQL Creation # These methods, while public, are not designed to be used directly by the end user. # --------------------- # Given a type (e.g. select) and an array of clauses, # return an array of methods to call to build the SQL string. def self.clause_methods(type, clauses) clauses.map{|clause| :"#{type}_#{clause}_sql"}.freeze end # Map of emulated function names to native function names. EMULATED_FUNCTION_MAP = {} WILDCARD = LiteralString.new('*').freeze ALL = ' ALL'.freeze AND_SEPARATOR = " AND ".freeze APOS = "'".freeze APOS_RE = /'/.freeze ARRAY_EMPTY = '(NULL)'.freeze AS = ' AS '.freeze ASC = ' ASC'.freeze BACKSLASH = "\\".freeze BOOL_FALSE = "'f'".freeze BOOL_TRUE = "'t'".freeze BRACKET_CLOSE = ']'.freeze BRACKET_OPEN = '['.freeze CASE_ELSE = " ELSE ".freeze CASE_END = " END)".freeze CASE_OPEN = '(CASE'.freeze CASE_THEN = " THEN ".freeze CASE_WHEN = " WHEN ".freeze CAST_OPEN = 'CAST('.freeze COLON = ':'.freeze COLUMN_REF_RE1 = Sequel::COLUMN_REF_RE1 COLUMN_REF_RE2 = Sequel::COLUMN_REF_RE2 COLUMN_REF_RE3 = Sequel::COLUMN_REF_RE3 COMMA = ', '.freeze COMMA_SEPARATOR = COMMA CONDITION_FALSE = '(1 = 0)'.freeze CONDITION_TRUE = '(1 = 1)'.freeze COUNT_FROM_SELF_OPTS = [:distinct, :group, :sql, :limit, :offset, :compounds] COUNT_OF_ALL_AS_COUNT = SQL::Function.new(:count, WILDCARD).as(:count) DATASET_ALIAS_BASE_NAME = 't'.freeze DEFAULT = LiteralString.new('DEFAULT').freeze DEFAULT_VALUES = " DEFAULT VALUES".freeze DELETE = 'DELETE'.freeze DELETE_CLAUSE_METHODS = clause_methods(:delete, %w'delete from where') DESC = ' DESC'.freeze DISTINCT = " DISTINCT".freeze DOT = '.'.freeze DOUBLE_APOS = "''".freeze DOUBLE_QUOTE = '""'.freeze EQUAL = ' = '.freeze ESCAPE = " ESCAPE ".freeze EXTRACT = 'extract('.freeze EXISTS = ['EXISTS '.freeze].freeze FOR_UPDATE = ' FOR UPDATE'.freeze FORMAT_DATE = "'%Y-%m-%d'".freeze FORMAT_DATE_STANDARD = "DATE '%Y-%m-%d'".freeze FORMAT_OFFSET = "%+03i%02i".freeze FORMAT_TIMESTAMP_RE = /%[Nz]/.freeze FORMAT_TIMESTAMP_USEC = ".%06d".freeze FORMAT_USEC = '%N'.freeze FRAME_ALL = "ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING".freeze FRAME_ROWS = "ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW".freeze FROM = ' FROM '.freeze FUNCTION_EMPTY = '()'.freeze GROUP_BY = " GROUP BY ".freeze HAVING = " HAVING ".freeze INSERT = "INSERT".freeze INSERT_CLAUSE_METHODS = clause_methods(:insert, %w'insert into columns values') INTO = " INTO ".freeze IS_LITERALS = {nil=>'NULL'.freeze, true=>'TRUE'.freeze, 
false=>'FALSE'.freeze}.freeze IS_OPERATORS = ::Sequel::SQL::ComplexExpression::IS_OPERATORS LIKE_OPERATORS = ::Sequel::SQL::ComplexExpression::LIKE_OPERATORS LIMIT = " LIMIT ".freeze N_ARITY_OPERATORS = ::Sequel::SQL::ComplexExpression::N_ARITY_OPERATORS NOT_SPACE = 'NOT '.freeze NULL = "NULL".freeze NULLS_FIRST = " NULLS FIRST".freeze NULLS_LAST = " NULLS LAST".freeze OFFSET = " OFFSET ".freeze ON = ' ON '.freeze ON_PAREN = " ON (".freeze ORDER_BY = " ORDER BY ".freeze ORDER_BY_NS = "ORDER BY ".freeze OVER = ' OVER '.freeze PAREN_CLOSE = ')'.freeze PAREN_OPEN = '('.freeze PAREN_SPACE_OPEN = ' ('.freeze PARTITION_BY = "PARTITION BY ".freeze QUALIFY_KEYS = [:select, :where, :having, :order, :group] QUESTION_MARK = '?'.freeze QUESTION_MARK_RE = /\?/.freeze QUOTE = '"'.freeze QUOTE_RE = /"/.freeze RETURNING = " RETURNING ".freeze SELECT = 'SELECT'.freeze SELECT_CLAUSE_METHODS = clause_methods(:select, %w'with select distinct columns from join where group having compounds order limit lock') SET = ' SET '.freeze SPACE = ' '.freeze SQL_WITH = "WITH ".freeze SPACE_WITH = " WITH ".freeze TILDE = '~'.freeze TIMESTAMP_FORMAT = "'%Y-%m-%d %H:%M:%S%N%z'".freeze STANDARD_TIMESTAMP_FORMAT = "TIMESTAMP #{TIMESTAMP_FORMAT}".freeze TWO_ARITY_OPERATORS = ::Sequel::SQL::ComplexExpression::TWO_ARITY_OPERATORS REGEXP_OPERATORS = ::Sequel::SQL::ComplexExpression::REGEXP_OPERATORS UNDERSCORE = '_'.freeze UPDATE = 'UPDATE'.freeze UPDATE_CLAUSE_METHODS = clause_methods(:update, %w'update table set where') USING = ' USING ('.freeze VALUES = " VALUES ".freeze V190 = '1.9.0'.freeze WHERE = " WHERE ".freeze [:literal, :quote_identifier, :quote_schema_table].each do |meth| class_eval(<<-END, __FILE__, __LINE__ + 1) def #{meth}(*args, &block) s = '' #{meth}_append(s, *args, &block) s end END end # SQL fragment for AliasedExpression def aliased_expression_sql_append(sql, ae) literal_append(sql, ae.expression) as_sql_append(sql, ae.aliaz) end # SQL fragment for Array def array_sql_append(sql, a) if a.empty? sql << ARRAY_EMPTY else sql << PAREN_OPEN expression_list_append(sql, a) sql << PAREN_CLOSE end end # SQL fragment for BooleanConstants def boolean_constant_sql_append(sql, constant) if (constant == true || constant == false) && !supports_where_true? sql << (constant == true ? CONDITION_TRUE : CONDITION_FALSE) else literal_append(sql, constant) end end # SQL fragment for CaseExpression def case_expression_sql_append(sql, ce) sql << CASE_OPEN if ce.expression? sql << SPACE literal_append(sql, ce.expression) end w = CASE_WHEN t = CASE_THEN ce.conditions.each do |c,r| sql << w literal_append(sql, c) sql << t literal_append(sql, r) end sql << CASE_ELSE literal_append(sql, ce.default) sql << CASE_END end # SQL fragment for the SQL CAST expression def cast_sql_append(sql, expr, type) sql << CAST_OPEN literal_append(sql, expr) sql << AS << db.cast_type_literal(type).to_s sql << PAREN_CLOSE end # SQL fragment for specifying all columns in a given table def column_all_sql_append(sql, ca) qualified_identifier_sql_append(sql, ca.table, WILDCARD) end # SQL fragment for the complex expression. def complex_expression_sql_append(sql, op, args) case op when *IS_OPERATORS r = args.at(1) if r.nil? || supports_is_true? 
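          # Native IS support (or an IS NULL test, which all supported
          # databases handle): emit "x IS NULL/TRUE/FALSE" directly below.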
raise(InvalidOperation, 'Invalid argument used for IS operator') unless val = IS_LITERALS[r] sql << PAREN_OPEN literal_append(sql, args.at(0)) sql << SPACE << op.to_s << SPACE sql << val << PAREN_CLOSE elsif op == :IS complex_expression_sql_append(sql, :"=", args) else complex_expression_sql_append(sql, :OR, [SQL::BooleanExpression.new(:"!=", *args), SQL::BooleanExpression.new(:IS, args.at(0), nil)]) end when :IN, :"NOT IN" cols = args.at(0) vals = args.at(1) col_array = true if cols.is_a?(Array) if vals.is_a?(Array) val_array = true empty_val_array = vals == [] end if empty_val_array literal_append(sql, empty_array_value(op, cols)) elsif col_array if !supports_multiple_column_in? if val_array expr = SQL::BooleanExpression.new(:OR, *vals.to_a.map{|vs| SQL::BooleanExpression.from_value_pairs(cols.to_a.zip(vs).map{|c, v| [c, v]})}) literal_append(sql, op == :IN ? expr : ~expr) else old_vals = vals vals = vals.naked if vals.is_a?(Sequel::Dataset) vals = vals.to_a val_cols = old_vals.columns complex_expression_sql_append(sql, op, [cols, vals.map!{|x| x.values_at(*val_cols)}]) end else # If the columns and values are both arrays, use array_sql instead of # literal so that if values is an array of two element arrays, it # will be treated as a value list instead of a condition specifier. sql << PAREN_OPEN literal_append(sql, cols) sql << SPACE << op.to_s << SPACE if val_array array_sql_append(sql, vals) else literal_append(sql, vals) end sql << PAREN_CLOSE end else sql << PAREN_OPEN literal_append(sql, cols) sql << SPACE << op.to_s << SPACE literal_append(sql, vals) sql << PAREN_CLOSE end when :LIKE, :'NOT LIKE' sql << PAREN_OPEN literal_append(sql, args.at(0)) sql << SPACE << op.to_s << SPACE literal_append(sql, args.at(1)) sql << ESCAPE literal_append(sql, BACKSLASH) sql << PAREN_CLOSE when :ILIKE, :'NOT ILIKE' complex_expression_sql_append(sql, (op == :ILIKE ? :LIKE : :"NOT LIKE"), args.map{|v| Sequel.function(:UPPER, v)}) when *TWO_ARITY_OPERATORS if REGEXP_OPERATORS.include?(op) && !supports_regexp? raise InvalidOperation, "Pattern matching via regular expressions is not supported on #{db.database_type}" end sql << PAREN_OPEN literal_append(sql, args.at(0)) sql << SPACE << op.to_s << SPACE literal_append(sql, args.at(1)) sql << PAREN_CLOSE when *N_ARITY_OPERATORS sql << PAREN_OPEN c = false op_str = " #{op} " args.each do |a| sql << op_str if c literal_append(sql, a) c ||= true end sql << PAREN_CLOSE when :NOT sql << NOT_SPACE literal_append(sql, args.at(0)) when :NOOP literal_append(sql, args.at(0)) when :'B~' sql << TILDE literal_append(sql, args.at(0)) when :extract sql << EXTRACT << args.at(0).to_s << FROM literal_append(sql, args.at(1)) sql << PAREN_CLOSE else raise(InvalidOperation, "invalid operator #{op}") end end # SQL fragment for constants def constant_sql_append(sql, constant) sql << constant.to_s end # SQL fragment for delayed evaluations, evaluating the # object and literalizing the returned value. def delayed_evaluation_sql_append(sql, callable) literal_append(sql, callable.call) end # SQL fragment specifying an emulated SQL function call. # By default, assumes just the function name may need to # be emulated, adapters should set an EMULATED_FUNCTION_MAP # hash mapping emulated functions to native functions in # their dataset class to setup the emulation. def emulated_function_sql_append(sql, f) _function_sql_append(sql, native_function_name(f.f), f.args) end # SQL fragment specifying an SQL function call without emulation. 
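    # For example (illustrative; identifier quoting rules may alter the
    # exact output):
    #
    #   literal(Sequel.function(:count, :id)) # => "count(id)"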
def function_sql_append(sql, f) _function_sql_append(sql, f.f, f.args) end # SQL fragment specifying a JOIN clause without ON or USING. def join_clause_sql_append(sql, jc) table = jc.table table_alias = jc.table_alias table_alias = nil if table == table_alias sql << SPACE << join_type_sql(jc.join_type) << SPACE identifier_append(sql, table) as_sql_append(sql, table_alias) if table_alias end # SQL fragment specifying a JOIN clause with ON. def join_on_clause_sql_append(sql, jc) join_clause_sql_append(sql, jc) sql << ON literal_append(sql, filter_expr(jc.on)) end # SQL fragment specifying a JOIN clause with USING. def join_using_clause_sql_append(sql, jc) join_clause_sql_append(sql, jc) sql << USING column_list_append(sql, jc.using) sql << PAREN_CLOSE end # SQL fragment for NegativeBooleanConstants def negative_boolean_constant_sql_append(sql, constant) sql << NOT_SPACE boolean_constant_sql_append(sql, constant) end # SQL fragment for the ordered expression, used in the ORDER BY # clause. def ordered_expression_sql_append(sql, oe) literal_append(sql, oe.expression) sql << (oe.descending ? DESC : ASC) case oe.nulls when :first sql << NULLS_FIRST when :last sql << NULLS_LAST end end # SQL fragment for a literal string with placeholders def placeholder_literal_string_sql_append(sql, pls) args = pls.args str = pls.str sql << PAREN_OPEN if pls.parens if args.is_a?(Hash) re = /:(#{args.keys.map{|k| Regexp.escape(k.to_s)}.join('|')})\b/ loop do previous, q, str = str.partition(re) sql << previous literal_append(sql, args[($1||q[1..-1].to_s).to_sym]) unless q.empty? break if str.empty? end elsif str.is_a?(Array) len = args.length str.each_with_index do |s, i| sql << s literal_append(sql, args[i]) unless i == len end unless str.length == args.length || str.length == args.length + 1 raise Error, "Mismatched number of placeholders (#{str.length}) and placeholder arguments (#{args.length}) when using placeholder array" end else i = -1 match_len = args.length - 1 loop do previous, q, str = str.partition(QUESTION_MARK) sql << previous literal_append(sql, args.at(i+=1)) unless q.empty? if str.empty? unless i == match_len raise Error, "Mismatched number of placeholders (#{i+1}) and placeholder arguments (#{args.length}) when using placeholder array" end break end end end sql << PAREN_CLOSE if pls.parens end # SQL fragment for the qualified identifier, specifying # a table and a column (or schema and table). # If 3 arguments are given, the 2nd should be the table/qualifier and the third should be # column/qualified. If 2 arguments are given, the 2nd should be an SQL::QualifiedIdentifier. def qualified_identifier_sql_append(sql, table, column=(c = table.column; table = table.table; c)) identifier_append(sql, table) sql << DOT identifier_append(sql, column) end # Adds quoting to identifiers (columns and tables). If identifiers are not # being quoted, returns name as a string. If identifiers are being quoted, # quote the name with quoted_identifier. def quote_identifier_append(sql, name) if name.is_a?(LiteralString) sql << name else name = name.value if name.is_a?(SQL::Identifier) name = input_identifier(name) if quote_identifiers?
quoted_identifier_append(sql, name) else sql << name end end end # Separates the schema from the table and returns a string with them # quoted (if quoting identifiers). def quote_schema_table_append(sql, table) schema, table = schema_and_table(table) if schema quote_identifier_append(sql, schema) sql << DOT end quote_identifier_append(sql, table) end # This method quotes the given name with the SQL standard double quote. # It should be overridden by subclasses to provide quoting not matching the # SQL standard, such as backtick (used by MySQL and SQLite). def quoted_identifier_append(sql, name) sql << QUOTE << name.to_s.gsub(QUOTE_RE, DOUBLE_QUOTE) << QUOTE end # Split the schema information from the table, returning two strings, # one for the schema and one for the table. The returned schema may # be nil, but the table will always have a string value. # # Note that this function does not handle tables with more than one # level of qualification (e.g. database.schema.table on Microsoft # SQL Server). def schema_and_table(table_name, sch=nil) sch = sch.to_s if sch case table_name when Symbol s, t, _ = split_symbol(table_name) [s||sch, t] when SQL::QualifiedIdentifier [table_name.table.to_s, table_name.column.to_s] when SQL::Identifier [sch, table_name.value.to_s] when String [sch, table_name] else raise Error, 'table_name should be a Symbol, SQL::QualifiedIdentifier, SQL::Identifier, or String' end end # Splits table_name into an array of strings. # # ds.split_qualifiers(:s) # ['s'] # ds.split_qualifiers(:t__s) # ['t', 's'] # ds.split_qualifiers(Sequel.qualify(:d, :t__s)) # ['d', 't', 's'] # ds.split_qualifiers(Sequel.qualify(:h__d, :t__s)) # ['h', 'd', 't', 's'] def split_qualifiers(table_name, *args) case table_name when SQL::QualifiedIdentifier split_qualifiers(table_name.table, nil) + split_qualifiers(table_name.column, nil) else sch, table = schema_and_table(table_name, *args) sch ? [sch, table] : [table] end end # SQL fragment for specifying subscripts (SQL array accesses) def subscript_sql_append(sql, s) literal_append(sql, s.f) sql << BRACKET_OPEN if s.sub.length == 1 && (range = s.sub.first).is_a?(Range) literal_append(sql, range.begin) sql << COLON e = range.end e -= 1 if range.exclude_end? && e.is_a?(Integer) literal_append(sql, e) else expression_list_append(sql, s.sub) end sql << BRACKET_CLOSE end # The SQL fragment for the given window's options. def window_sql_append(sql, opts) raise(Error, 'This dataset does not support window functions') unless supports_window_functions? sql << PAREN_OPEN window, part, order, frame = opts.values_at(:window, :partition, :order, :frame) space = false space_s = SPACE if window literal_append(sql, window) space = true end if part sql << space_s if space sql << PARTITION_BY expression_list_append(sql, Array(part)) space = true end if order sql << space_s if space sql << ORDER_BY_NS expression_list_append(sql, Array(order)) space = true end case frame when nil # nothing when :all sql << space_s if space sql << FRAME_ALL when :rows sql << space_s if space sql << FRAME_ROWS when String sql << space_s if space sql << frame else raise Error, "invalid window frame clause, should be :all, :rows, a string, or nil" end sql << PAREN_CLOSE end # The SQL fragment for the given window function's function and window. def window_function_sql_append(sql, function, window) literal_append(sql, function) sql << OVER literal_append(sql, window) end protected # Formats an INSERT statement using the stored columns and values.
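# The public entry point is #insert_sql; a hedged sketch of the kind of
# statement this builds (identifier quoting varies by adapter):
#
#   DB[:items].insert_sql(:a=>1) # => "INSERT INTO items (a) VALUES (1)"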
def _insert_sql clause_sql(:insert) end # Formats an UPDATE statement using the stored values. def _update_sql clause_sql(:update) end # Return a from_self dataset if an order or limit is specified, so it works as expected # with UNION, EXCEPT, and INTERSECT clauses. def compound_from_self (@opts[:limit] || @opts[:order]) ? from_self : self end private # Backbone of function_sql_append and emulated_function_sql_append. def _function_sql_append(sql, name, args) sql << name.to_s if args.empty? sql << FUNCTION_EMPTY else literal_append(sql, args) end end # Formats the truncate statement. Assumes the table given has already been # literalized. def _truncate_sql(table) "TRUNCATE TABLE #{table}" end # Returns an appropriate symbol for the alias represented by s. def alias_alias_symbol(s) case s when Symbol s when String s.to_sym when SQL::Identifier s.value.to_s.to_sym else raise Error, "Invalid alias for alias_alias_symbol: #{s.inspect}" end end # Returns an appropriate alias symbol for the given object, which can be # a Symbol, String, SQL::Identifier, SQL::QualifiedIdentifier, or # SQL::AliasedExpression. def alias_symbol(sym) case sym when Symbol s, t, a = split_symbol(sym) a || s ? (a || t).to_sym : sym when String sym.to_sym when SQL::Identifier sym.value.to_s.to_sym when SQL::QualifiedIdentifier alias_symbol(sym.column) when SQL::AliasedExpression alias_alias_symbol(sym.aliaz) else raise Error, "Invalid alias for alias_symbol: #{sym.inspect}" end end # Clone of this dataset usable in aggregate operations. Does # a from_self if dataset contains any parameters that would # affect normal aggregation, or just removes an existing # order if not. def aggregate_dataset options_overlap(COUNT_FROM_SELF_OPTS) ? from_self : unordered end # SQL fragment for specifying an alias. expression should already be literalized. def as_sql_append(sql, aliaz) sql << AS quote_identifier_append(sql, aliaz) end # Raise an InvalidOperation exception if modification is not allowed # for this dataset def check_modification_allowed! raise(InvalidOperation, "Grouped datasets cannot be modified") if opts[:group] raise(InvalidOperation, "Joined datasets cannot be modified") if !supports_modifying_joins? && joined_dataset? end # Alias of check_modification_allowed! def check_truncation_allowed! check_modification_allowed! end # Prepare an SQL statement by calling all clause methods for the given statement type. def clause_sql(type) sql = @opts[:append_sql] || sql_string_origin send("#{type}_clause_methods").each{|x| send(x, sql)} sql end # Converts an array of column names into a comma separated string of # column names. If the array is empty, a wildcard (*) is returned. def column_list_append(sql, columns) if (columns.nil? || columns.empty?) sql << WILDCARD else expression_list_append(sql, columns) end end # Yield each pair of arguments to the block, which should # return a string representing the SQL code for those # two arguments. If more than 2 arguments are provided, all # calls to the block after the first will have a LiteralString # as the first argument, representing the application of the block to # the previous arguments. def complex_expression_arg_pairs(args) case args.length when 1 literal(args.at(0)) when 2 yield args.at(0), args.at(1) else args.inject{|m, a| LiteralString.new(yield(m, a))} end end # The SQL to use for the dataset used in a UNION/INTERSECT/EXCEPT clause.
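# An illustrative example (not from the original docs) of the wrapping
# subselect this produces:
#
#   DB[:a].union(DB[:b]).sql
#   # => "SELECT * FROM (SELECT * FROM a UNION SELECT * FROM b) AS t1"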
def compound_dataset_sql_append(sql, ds) subselect_sql_append(sql, ds) end # The alias to use for datasets, takes a number to make sure the name is unique. def dataset_alias(number) :"#{DATASET_ALIAS_BASE_NAME}#{number}" end # The strftime format to use when literalizing the time. def default_timestamp_format requires_sql_standard_datetimes? ? STANDARD_TIMESTAMP_FORMAT : TIMESTAMP_FORMAT end # The order of methods to call to build the DELETE SQL statement def delete_clause_methods DELETE_CLAUSE_METHODS end def delete_delete_sql(sql) sql << DELETE end # Converts an array of expressions into a comma separated string of # expressions. def expression_list_append(sql, columns) c = false co = COMMA columns.each do |col| sql << co if c literal_append(sql, col) c ||= true end end # An expression for how to handle an empty array lookup. def empty_array_value(op, cols) c = Array(cols) SQL::BooleanExpression.from_value_pairs(c.zip(c), :AND, op == :IN) end # Format the timestamp based on the default_timestamp_format, with a couple # of modifiers. First, allow %N to be used for fractional seconds (if the # database supports them), and override %z to always use a numeric offset # of hours and minutes. def format_timestamp(v) v2 = db.from_application_timestamp(v) fmt = default_timestamp_format.gsub(FORMAT_TIMESTAMP_RE) do |m| if m == FORMAT_USEC format_timestamp_usec(v.is_a?(DateTime) ? v.sec_fraction*(RUBY_VERSION < V190 ? 86400000000 : 1000000) : v.usec) if supports_timestamp_usecs? else if supports_timestamp_timezones? # Would like to just use %z format, but it doesn't appear to work on Windows # Instead, the offset fragment is constructed manually minutes = (v2.is_a?(DateTime) ? v2.offset * 1440 : v2.utc_offset/60).to_i format_timestamp_offset(*minutes.divmod(60)) end end end v2.strftime(fmt) end # Return the SQL timestamp fragment to use for the timezone offset. def format_timestamp_offset(hour, minute) sprintf(FORMAT_OFFSET, hour, minute) end # Return the SQL timestamp fragment to use for the fractional time part. # Should start with the decimal point. Uses 6 decimal places by default. def format_timestamp_usec(usec) sprintf(FORMAT_TIMESTAMP_USEC, usec) end # Append the value, but special case regular (non-literal, non-blob) strings # so that they are considered as identifiers and not SQL strings. def identifier_append(sql, v) if v.is_a?(String) case v when LiteralString sql << v when SQL::Blob literal_append(sql, v) else quote_identifier_append(sql, v) end else literal_append(sql, v) end end # Append all identifiers in args interspersed by commas. def identifier_list_append(sql, args) c = false comma = COMMA args.each do |a| sql << comma if c identifier_append(sql, a) c ||= true end end # Modify the identifier sent to the database based on the # identifier_input_method. def input_identifier(v) (i = identifier_input_method) ? v.to_s.send(i) : v.to_s end # SQL fragment specifying the table to insert INTO def insert_into_sql(sql) sql << INTO source_list_append(sql, @opts[:from]) end # The order of methods to call to build the INSERT SQL statement def insert_clause_methods INSERT_CLAUSE_METHODS end # SQL fragment specifying the columns to insert into def insert_columns_sql(sql) columns = opts[:columns] if columns && !columns.empty? sql << PAREN_SPACE_OPEN identifier_list_append(sql, columns) sql << PAREN_CLOSE end end def insert_insert_sql(sql) sql << INSERT end # SQL fragment specifying the values to insert. def insert_values_sql(sql) case values = opts[:values] when Array if values.empty?
sql << DEFAULT_VALUES else sql << VALUES literal_append(sql, values) end when Dataset sql << SPACE subselect_sql_append(sql, values) when LiteralString sql << SPACE << values else raise Error, "Unsupported INSERT values type, should be an Array or Dataset: #{values.inspect}" end end # SQL fragment specifying the values to return. def insert_returning_sql(sql) if opts.has_key?(:returning) sql << RETURNING column_list_append(sql, Array(opts[:returning])) end end alias delete_returning_sql insert_returning_sql alias update_returning_sql insert_returning_sql # SQL fragment specifying a JOIN type, converts underscores to # spaces and upcases. def join_type_sql(join_type) "#{join_type.to_s.gsub(UNDERSCORE, SPACE).upcase} JOIN" end # Whether this dataset is a joined dataset def joined_dataset? (opts[:from].is_a?(Array) && opts[:from].size > 1) || opts[:join] end # SQL fragment for Array. Treats as an expression if an array of all two-element arrays, or as a SQL array otherwise. def literal_array_append(sql, v) if Sequel.condition_specifier?(v) literal_expression_append(sql, SQL::BooleanExpression.from_value_pairs(v)) else array_sql_append(sql, v) end end # SQL fragment for BigDecimal def literal_big_decimal(v) d = v.to_s("F") v.nan? || v.infinite? ? "'#{d}'" : d end # SQL fragment for SQL::Blob def literal_blob_append(sql, v) literal_string_append(sql, v) end # SQL fragment for Dataset. Does a subselect inside parentheses. def literal_dataset_append(sql, v) sql << PAREN_OPEN subselect_sql_append(sql, v) sql << PAREN_CLOSE end # SQL fragment for Date, using the ISO8601 format. def literal_date(v) if requires_sql_standard_datetimes? v.strftime(FORMAT_DATE_STANDARD) else v.strftime(FORMAT_DATE) end end # SQL fragment for DateTime def literal_datetime(v) format_timestamp(v) end # SQL fragment for SQL::Expression, result depends on the specific type of expression. def literal_expression_append(sql, v) v.to_s_append(self, sql) end # SQL fragment for false def literal_false BOOL_FALSE end # SQL fragment for Float def literal_float(v) v.to_s end # SQL fragment for Hash, treated as an expression def literal_hash_append(sql, v) literal_expression_append(sql, SQL::BooleanExpression.from_value_pairs(v)) end # SQL fragment for Integer def literal_integer(v) v.to_s end # SQL fragment for nil def literal_nil NULL end # SQL fragment for a type of object not handled by Dataset#literal. # Calls +sql_literal+ if object responds to it, otherwise raises an error. # Classes implementing +sql_literal+ should call a class-specific method on the dataset # provided and should add that method to Sequel::Dataset, allowing for adapters # to provide customized literalizations. # If a database specific type is allowed, this should be overridden in a subclass. def literal_other_append(sql, v) if v.respond_to?(:sql_literal_append) v.sql_literal_append(self, sql) elsif v.respond_to?(:sql_literal) sql << v.sql_literal(self) else raise Error, "can't express #{v.inspect} as a SQL literal" end end # SQL fragment for Sequel::SQLTime, containing just the time part def literal_sqltime(v) v.strftime("'%H:%M:%S#{format_timestamp_usec(v.usec) if supports_timestamp_usecs?}'") end # SQL fragment for String. Doubles ' by default. def literal_string_append(sql, v) sql << APOS << v.gsub(APOS_RE, DOUBLE_APOS) << APOS end # Converts a symbol into a column name.
This method supports underscore # notation in order to express qualified (two underscores) and aliased # (three underscores) columns: # # dataset.literal(:abc) #=> "abc" # dataset.literal(:abc___a) #=> "abc AS a" # dataset.literal(:items__abc) #=> "items.abc" # dataset.literal(:items__abc___a) #=> "items.abc AS a" def literal_symbol_append(sql, v) c_table, column, c_alias = split_symbol(v) if c_table quote_identifier_append(sql, c_table) sql << DOT end quote_identifier_append(sql, column) as_sql_append(sql, c_alias) if c_alias end # SQL fragment for Time def literal_time(v) format_timestamp(v) end # SQL fragment for true def literal_true BOOL_TRUE end # Get the native function name given the emulated function name. def native_function_name(emulated_function) self.class.const_get(:EMULATED_FUNCTION_MAP).fetch(emulated_function, emulated_function) end # Returns a qualified column name (including a table name) if the column # name isn't already qualified. def qualified_column_name(column, table) if column.is_a?(Symbol) c_table, column, _ = split_symbol(column) unless c_table case table when Symbol schema, table, t_alias = split_symbol(table) t_alias ||= Sequel::SQL::QualifiedIdentifier.new(schema, table) if schema when Sequel::SQL::AliasedExpression t_alias = table.aliaz end c_table = t_alias || table end ::Sequel::SQL::QualifiedIdentifier.new(c_table, column) else column end end # Qualify the given expression e to the given table. def qualified_expression(e, table) Qualifier.new(self, table).transform(e) end # The order of methods to call to build the SELECT SQL statement def select_clause_methods SELECT_CLAUSE_METHODS end # Modify the sql to add the columns selected def select_columns_sql(sql) sql << SPACE column_list_append(sql, @opts[:select]) end # Modify the sql to add the DISTINCT modifier def select_distinct_sql(sql) if distinct = @opts[:distinct] sql << DISTINCT unless distinct.empty? sql << ON_PAREN expression_list_append(sql, distinct) sql << PAREN_CLOSE end end end # Modify the sql to add a dataset to the query via an EXCEPT, INTERSECT, or UNION clause. # This uses a subselect for the compound datasets used, because using parentheses doesn't # work on all databases. I consider this an ugly hack, but I can't think of a better default. def select_compounds_sql(sql) return unless c = @opts[:compounds] c.each do |type, dataset, all| sql << SPACE << type.to_s.upcase sql << ALL if all sql << SPACE compound_dataset_sql_append(sql, dataset) end end # Modify the sql to add the list of tables to select FROM def select_from_sql(sql) if f = @opts[:from] sql << FROM source_list_append(sql, f) end end alias delete_from_sql select_from_sql # Modify the sql to add the expressions to GROUP BY def select_group_sql(sql) if group = @opts[:group] sql << GROUP_BY if go = @opts[:group_options] if uses_with_rollup?
expression_list_append(sql, group) sql << SPACE_WITH << go.to_s.upcase else sql << go.to_s.upcase << PAREN_OPEN expression_list_append(sql, group) sql << PAREN_CLOSE end else expression_list_append(sql, group) end end end # Modify the sql to add the filter criteria in the HAVING clause def select_having_sql(sql) if having = @opts[:having] sql << HAVING literal_append(sql, having) end end # Modify the sql to add the list of tables to JOIN to def select_join_sql(sql) if js = @opts[:join] js.each{|j| literal_append(sql, j)} end end # Modify the sql to limit the number of rows returned and offset def select_limit_sql(sql) if l = @opts[:limit] sql << LIMIT literal_append(sql, l) end if o = @opts[:offset] sql << OFFSET literal_append(sql, o) end end # Modify the sql to support the different types of locking modes. def select_lock_sql(sql) case l = @opts[:lock] when :update sql << FOR_UPDATE when String sql << SPACE << l end end # Modify the sql to add the expressions to ORDER BY def select_order_sql(sql) if o = @opts[:order] sql << ORDER_BY expression_list_append(sql, o) end end alias delete_order_sql select_order_sql alias update_order_sql select_order_sql def select_select_sql(sql) sql << SELECT end # Modify the sql to add the filter criteria in the WHERE clause def select_where_sql(sql) if w = @opts[:where] sql << WHERE literal_append(sql, w) end end alias delete_where_sql select_where_sql alias update_where_sql select_where_sql # SQL Fragment specifying the WITH clause def select_with_sql(sql) ws = opts[:with] return if !ws || ws.empty? sql << select_with_sql_base c = false comma = COMMA ws.each do |w| sql << comma if c quote_identifier_append(sql, w[:name]) if args = w[:args] sql << PAREN_OPEN identifier_list_append(sql, args) sql << PAREN_CLOSE end sql << AS literal_dataset_append(sql, w[:dataset]) c ||= true end sql << SPACE end alias delete_with_sql select_with_sql alias insert_with_sql select_with_sql alias update_with_sql select_with_sql # The base keyword to use for the SQL WITH clause def select_with_sql_base SQL_WITH end # Converts an array of source names into a comma separated list. def source_list_append(sql, sources) raise(Error, 'No source specified for query') if sources.nil? || sources == [] identifier_list_append(sql, sources) end # Delegate to Sequel.split_symbol. def split_symbol(sym) Sequel.split_symbol(sym) end # The string that SQL fragments are appended to when building the query, # the empty string by default def sql_string_origin '' end # SQL to use if this dataset uses static SQL. Since static SQL # can be a PlaceholderLiteralString in addition to a String, # we literalize nonstrings. def static_sql(sql) if append_sql = @opts[:append_sql] if sql.is_a?(String) append_sql << sql else literal_append(append_sql, sql) end else if sql.is_a?(String) sql else literal(sql) end end end # SQL fragment for a subselect using the given database's SQL. def subselect_sql_append(sql, ds) ds.clone(:append_sql=>sql).sql end # The order of methods to call to build the UPDATE SQL statement def update_clause_methods UPDATE_CLAUSE_METHODS end # SQL fragment specifying the table(s) to UPDATE. # Includes the JOIN clause if modifying joins is allowed. def update_table_sql(sql) sql << SPACE source_list_append(sql, @opts[:from]) select_join_sql(sql) if supports_modifying_joins? end # The SQL fragment specifying the columns and values to SET.
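# A hedged example via the public #update_sql (hash ordering, and thus
# the SET order, may vary):
#
#   DB[:items].update_sql(:a=>1, :b=>2) # => "UPDATE items SET a = 1, b = 2"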
def update_set_sql(sql) values = opts[:values] sql << SET if values.is_a?(Hash) c = false eq = EQUAL values.each do |k, v| sql << COMMA if c if k.is_a?(String) && !k.is_a?(LiteralString) quote_identifier_append(sql, k) else literal_append(sql, k) end sql << eq literal_append(sql, v) c ||= true end else sql << values end end def update_update_sql(sql) sql << UPDATE end end end ruby-sequel-4.1.1/lib/sequel/deprecated.rb000066400000000000000000000051231220156535500204700ustar00rootroot00000000000000module Sequel # This module makes it easy to print deprecation warnings with optional backtraces to a given stream. # There are a few accessors you can use to change how/where deprecation messages are printed # and whether/how backtraces should be included: # # Sequel::Deprecation.output = $stderr # print deprecation messages to standard error (default) # Sequel::Deprecation.output = File.open('deprecated_calls.txt', 'wb') # use a file instead # Sequel::Deprecation.output = false # do not output deprecation messages # # Sequel::Deprecation.prefix = "SEQUEL DEPRECATION WARNING: " # prefix deprecation messages with a given string (default) # Sequel::Deprecation.prefix = false # do not prefix deprecation messages # # Sequel::Deprecation.backtrace_filter = false # don't include backtraces # Sequel::Deprecation.backtrace_filter = true # include full backtraces # Sequel::Deprecation.backtrace_filter = 10 # include 10 backtrace lines (default) # Sequel::Deprecation.backtrace_filter = 1 # include 1 backtrace line # Sequel::Deprecation.backtrace_filter = lambda{|line, line_no| line_no < 3 || line =~ /my_app/} # select backtrace lines to output module Deprecation @backtrace_filter = 10 @output = $stderr @prefix = "SEQUEL DEPRECATION WARNING: " class << self # How to filter backtraces. +false+ does not include backtraces, +true+ includes # full backtraces, an Integer includes that number of backtrace lines, and # a proc is called with the backtrace line and line number to select the backtrace # lines to include. The default is 10 backtrace lines. attr_accessor :backtrace_filter # Where deprecation messages should be output, must respond to puts. $stderr by default. attr_accessor :output # The string to prefix deprecation messages with ("SEQUEL DEPRECATION WARNING: " by default). attr_accessor :prefix end # Print the message and possibly backtrace to the output. def self.deprecate(method, instead=nil) return unless output message = instead ? "#{method} is deprecated and will be removed in Sequel 4.0. #{instead}." : method message = "#{prefix}#{message}" if prefix output.puts(message) case b = backtrace_filter when Integer caller.each do |c| b -= 1 output.puts(c) break if b <= 0 end when true caller.each{|c| output.puts(c)} when Proc caller.each_with_index{|line, line_no| output.puts(line) if b.call(line, line_no)} end nil end end end ruby-sequel-4.1.1/lib/sequel/exceptions.rb000066400000000000000000000062041220156535500205520ustar00rootroot00000000000000module Sequel # The default exception class for exceptions raised by Sequel. # All exception classes defined by Sequel are descendants of this class. class Error < ::StandardError # If this exception wraps an underlying exception, the underlying # exception is held here. attr_accessor :wrapped_exception end # Error raised when the adapter requested doesn't exist or can't be loaded. class AdapterNotFound < Error; end # Generic error raised by the database adapters, indicating a # problem originating from the database server.
Usually raised # because incorrect SQL syntax is used. class DatabaseError < Error; end # Error raised when Sequel is unable to connect to the database with the # connection parameters it was given. class DatabaseConnectionError < DatabaseError; end # Error raised by adapters when they determine that the connection # to the database has been lost. Instructs the connection pool code to # remove that connection from the pool so that other connections can be acquired # automatically. class DatabaseDisconnectError < DatabaseError; end # Generic error raised when Sequel determines a database constraint has been violated. class ConstraintViolation < DatabaseError; end # Error raised when Sequel determines a database check constraint has been violated. class CheckConstraintViolation < ConstraintViolation; end # Error raised when Sequel determines a database foreign key constraint has been violated. class ForeignKeyConstraintViolation < ConstraintViolation; end # Error raised when Sequel determines a database NOT NULL constraint has been violated. class NotNullConstraintViolation < ConstraintViolation; end # Error raised when Sequel determines a database unique constraint has been violated. class UniqueConstraintViolation < ConstraintViolation; end # Error raised when Sequel determines a serialization failure/deadlock in the database. class SerializationFailure < DatabaseError; end # Error raised on an invalid operation, such as trying to update or delete # a joined or grouped dataset. class InvalidOperation < Error; end # Error raised when attempting an invalid type conversion. class InvalidValue < Error; end # Error raised when the user requests a record via the first! or similar # method, and the dataset does not yield any rows. class NoMatchingRow < Error; end # Error raised when the connection pool cannot acquire a database connection # before the timeout. class PoolTimeout < Error; end # Error that you should raise to signal a rollback of the current transaction. # The transaction block will catch this exception, rollback the current transaction, # and won't reraise it (unless a reraise is requested). class Rollback < Error; end # Error raised when unbinding a dataset that has multiple different values # for a given variable. class UnbindDuplicate < Error; end class Error AdapterNotFound = Sequel::AdapterNotFound InvalidOperation = Sequel::InvalidOperation InvalidValue = Sequel::InvalidValue PoolTimeoutError = Sequel::PoolTimeout Rollback = Sequel::Rollback end end ruby-sequel-4.1.1/lib/sequel/extensions/000077500000000000000000000000001220156535500202415ustar00rootroot00000000000000ruby-sequel-4.1.1/lib/sequel/extensions/_pretty_table.rb000066400000000000000000000044121220156535500234240ustar00rootroot00000000000000# This _pretty_table extension is only for internal use. # It adds the Sequel::PrettyTable class without modifying # Sequel::Dataset. # # To load the extension: # # Sequel.extension :_pretty_table module Sequel module PrettyTable # Prints nice-looking plain-text tables via puts # # +--+-------+ # |id|name | # |--+-------| # |1 |fasdfas| # |2 |test | # +--+-------+ def self.print(records, columns=nil) puts string(records, columns) end # Return the string that #print will print via puts.
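# An illustrative example (not part of the original docs; integers are
# right-aligned by #format_cell):
#
#   Sequel::PrettyTable.string([{:id=>1, :name=>'abc'}], [:id, :name])
#   # => "+--+----+\n|id|name|\n+--+----+\n| 1|abc |\n+--+----+"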
def self.string(records, columns = nil) # records is an array of hashes columns ||= records.first.keys.sort_by{|x|x.to_s} sizes = column_sizes(records, columns) sep_line = separator_line(columns, sizes) array = [sep_line, header_line(columns, sizes), sep_line] records.each {|r| array << data_line(columns, sizes, r)} array << sep_line array.join("\n") end ### Private Module Methods ### # Hash of the maximum size of the value for each column def self.column_sizes(records, columns) # :nodoc: sizes = Hash.new {0} columns.each do |c| s = c.to_s.size sizes[c] = s if s > sizes[c] end records.each do |r| columns.each do |c| s = r[c].to_s.size sizes[c] = s if s > sizes[c] end end sizes end # String for each data line def self.data_line(columns, sizes, record) # :nodoc: '|' << columns.map {|c| format_cell(sizes[c], record[c])}.join('|') << '|' end # Format the value so it takes up exactly size characters def self.format_cell(size, v) # :nodoc: case v when Bignum, Fixnum "%#{size}d" % v when Float, BigDecimal "%#{size}g" % v else "%-#{size}s" % v.to_s end end # String for header line def self.header_line(columns, sizes) # :nodoc: '|' << columns.map {|c| "%-#{sizes[c]}s" % c.to_s}.join('|') << '|' end # String for separator line def self.separator_line(columns, sizes) # :nodoc: '+' << columns.map {|c| '-' * sizes[c]}.join('+') << '+' end private_class_method :column_sizes, :data_line, :format_cell, :header_line, :separator_line end end ruby-sequel-4.1.1/lib/sequel/extensions/arbitrary_servers.rb000066400000000000000000000062121220156535500243370ustar00rootroot00000000000000# The arbitrary_servers extension allows you to connect to arbitrary # servers/shards that were not defined when you created the database. # To use it, you first load the extension into the Database object: # # DB.extension :arbitrary_servers # # Then you can pass arbitrary connection options for the server/shard # to use as a hash: # # DB[:table].server(:host=>'...', :database=>'...').all # # Because Sequel can never be sure that the connection will be reused, # arbitrary connections are disconnected as soon as the outermost block # that uses them exits. So this example uses the same connection: # # DB.transaction(:server=>{:host=>'...', :database=>'...'}) do |c| # DB.transaction(:server=>{:host=>'...', :database=>'...'}) do |c2| # # c == c2 # end # end # # But this example does not: # # DB.transaction(:server=>{:host=>'...', :database=>'...'}) do |c| # end # DB.transaction(:server=>{:host=>'...', :database=>'...'}) do |c2| # # c != c2 # end # # You can use this extension in conjunction with the server_block # extension: # # DB.with_server(:host=>'...', :database=>'...') do # DB.synchronize do # # All of these use the host/database given to with_server # DB[:table].insert(...) # DB[:table].update(...) # DB.tables # DB[:table].all # end # end # # Anyone using this extension in conjunction with the server_block # extension may want to do the following so that you don't need # to call synchronize separately: # # def DB.with_server(*) # super{synchronize{yield}} # end # # Note that this extension only works with the sharded threaded connection # pool. If you are using the sharded single connection pool, you need # to switch to the sharded threaded connection pool before using this # extension. module Sequel module ArbitraryServers private # If server is a hash, create a new connection for # it, and cache it first by thread and then server.
def acquire(thread, server) if server.is_a?(Hash) sync{@allocated[thread] ||= {}}[server] = make_new(server) else super end end # If server is a hash, the entry for it probably doesn't # exist in the @allocated hash, so check for existence to # avoid calling nil.[] def owned_connection(thread, server) if server.is_a?(Hash) if a = sync{@allocated[thread]} a[server] end else super end end # If server is a hash, return it directly. def pick_server(server) if server.is_a?(Hash) server else super end end # If server is a hash, delete the thread from the allocated # connections for that server. Additionally, if this was the last thread # using that server, delete the server from the @allocated hash. def release(thread, conn, server) if server.is_a?(Hash) a = @allocated[thread] a.delete(server) @allocated.delete(thread) if a.empty? db.disconnect_connection(conn) else super end end end Database.register_extension(:arbitrary_servers){|db| db.pool.extend(ArbitraryServers)} end ruby-sequel-4.1.1/lib/sequel/extensions/blank.rb000066400000000000000000000012711220156535500216560ustar00rootroot00000000000000# The blank extension adds the blank? method to all objects (e.g. Object#blank?). # # To load the extension: # # Sequel.extension :blank class FalseClass # false is always blank def blank? true end end class Object # Objects are blank if they respond true to empty? def blank? respond_to?(:empty?) && empty? end end class NilClass # nil is always blank def blank? true end end class Numeric # Numerics are never blank (not even 0) def blank? false end end class String # Strings are blank if they are empty or include only whitespace def blank? strip.empty? end end class TrueClass # true is never blank def blank? false end end ruby-sequel-4.1.1/lib/sequel/extensions/columns_introspection.rb000066400000000000000000000050351220156535500252310ustar00rootroot00000000000000# The columns_introspection extension attempts to introspect the # selected columns for a dataset before issuing a query. If it # thinks it can guess correctly at the columns the query will use, # it will return the columns without issuing a database query. # # This method is not fool-proof, it's possible that some databases # will use column names that Sequel does not expect. Also, it # may not correctly handle all cases. # # To attempt to introspect columns for a single dataset: # # ds.extension(:columns_introspection) # # To attempt to introspect columns for all datasets on a single database: # # DB.extension(:columns_introspection) module Sequel module ColumnsIntrospection # Attempt to guess the columns that will be returned # if there are columns selected, in order to skip a database # query to retrieve the columns. This should work with # Symbols, SQL::Identifiers, SQL::QualifiedIdentifiers, and # SQL::AliasedExpressions. def columns return @columns if @columns if (pcs = probable_columns) && pcs.all? @columns = pcs else super end end protected # Return an array of probable column names for the dataset, or # nil if it is not possible to determine that through # introspection. def probable_columns if (cols = opts[:select]) && !cols.empty? 
cols.map{|c| probable_column_name(c)} elsif !opts[:join] && !opts[:with] && (from = opts[:from]) && from.length == 1 && (from = from.first) if from.is_a?(SQL::AliasedExpression) from = from.expression end case from when Dataset from.probable_columns when Symbol, SQL::Identifier, SQL::QualifiedIdentifier schemas = db.instance_variable_get(:@schemas) if schemas && (sch = Sequel.synchronize{schemas[literal(from)]}) sch.map{|c,_| c} end end end end private # Return the probable name of the column, or nil if one # cannot be determined. def probable_column_name(c) case c when Symbol _, c, a = split_symbol(c) (a || c).to_sym when SQL::Identifier c.value.to_sym when SQL::QualifiedIdentifier col = c.column col.is_a?(SQL::Identifier) ? col.value.to_sym : col.to_sym when SQL::AliasedExpression a = c.aliaz a.is_a?(SQL::Identifier) ? a.value.to_sym : a.to_sym end end end Dataset.register_extension(:columns_introspection, Sequel::ColumnsIntrospection) end ruby-sequel-4.1.1/lib/sequel/extensions/connection_validator.rb000066400000000000000000000077471220156535500250010ustar00rootroot00000000000000# The connection_validator extension modifies a database's # connection pool to validate that connections checked out # from the pool are still valid, before yielding them for # use. If it detects an invalid connection, it removes it # from the pool and tries the next available connection, # creating a new connection if no available connection is # valid. Example of use: # # DB.extension(:connection_validator) # # As checking connections for validity involves issuing a # query, which is potentially an expensive operation, # the validation checks are only run if the connection has # been idle for longer than a certain threshold. By default, # that threshold is 3600 seconds (1 hour), but it can be # modified by the user. Set it to -1 to always validate # connections on checkout: # # DB.pool.connection_validation_timeout = -1 # # Note that if you set the timeout to validate connections # on every checkout, you should probably manually control # connection checkouts on a coarse basis, using # Database#synchronize. In a web application, the optimal # place for that would be a rack middleware. Validating # connections on every checkout without setting up coarse # connection checkouts will hurt performance, in some cases # significantly. Note that setting up coarse connection # checkouts reduces the concurrency level achievable. For # example, in a web application, using Database#synchronize # in a rack middleware will limit the number of concurrent # web requests to the number of connections in the database # connection pool. # # Note that this extension only affects the default threaded # and the sharded threaded connection pool. The single # threaded and sharded single threaded connection pools are # not affected. As the only reason to use the single threaded # pools is for speed, and this extension makes the connection # pool slower, there's not much point in modifying this # extension to work with the single threaded pools. The # threaded pools work fine even in single threaded code, so if # you are currently using a single threaded pool and want to # use this extension, switch to using a threaded pool. module Sequel module ConnectionValidator class Retry < Error; end # The number of seconds that need to pass since # connection checkin before attempting to validate # the connection when checking it out from the pool. # Defaults to 3600 seconds (1 hour).
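# For example (an illustrative value, not from the original docs):
#
#   DB.pool.connection_validation_timeout = 1800 # revalidate after 30 minutes idle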
attr_accessor :connection_validation_timeout # Initialize the data structures used by this extension. def self.extended(pool) pool.instance_eval do @connection_timestamps ||= {} @connection_validation_timeout = 3600 end # Make sure the valid connection SQL query is precached, # otherwise it's possible it will happen at runtime. While # it should work correctly at runtime, it's better to avoid # the possibility of failure altogether. pool.db.send(:valid_connection_sql) end private # Record the time the connection was checked back into the pool. def checkin_connection(*) conn = super @connection_timestamps[conn] = Time.now conn end # When acquiring a connection, if it has been # idle for longer than the connection validation timeout, # test the connection for validity. If it is not valid, # disconnect the connection, and retry with a new connection. def acquire(*a) begin if (conn = super) && (t = sync{@connection_timestamps.delete(conn)}) && Time.now - t > @connection_validation_timeout && !db.valid_connection?(conn) if pool_type == :sharded_threaded sync{allocated(a.last).delete(Thread.current)} else sync{@allocated.delete(Thread.current)} end db.disconnect_connection(conn) raise Retry end rescue Retry retry end conn end end Database.register_extension(:connection_validator){|db| db.pool.extend(ConnectionValidator)} end ruby-sequel-4.1.1/lib/sequel/extensions/constraint_validations.rb000066400000000000000000000442151220156535500253550ustar00rootroot00000000000000# The constraint_validations extension is designed to easily create database # constraints inside create_table and alter_table blocks. It also adds # relevant metadata about the constraints to a separate table, which the # constraint_validations model plugin uses to setup automatic validations. # # To use this extension, you first need to load it into the database: # # DB.extension(:constraint_validations) # # Note that you should only need to do this when modifying the constraint # validations (i.e. when migrating). You should probably not load this # extension in general application code. # # You also need to make sure to add the metadata table for the automatic # validations. By default, this table is called sequel_constraint_validations. # # DB.create_constraint_validations_table # # This table should only be created once. For new applications, you # generally want to create it first, before creating any other application # tables. # # Because migrations instance_eval the up and down blocks on a database, # using this extension in a migration can be done via: # # Sequel.migration do # up do # extension(:constraint_validations) # # ... # end # down do # extension(:constraint_validations) # # ... # end # end # # However, note that you cannot use change migrations with this extension, # you need to use separate up/down migrations. # # The API for creating the constraints with automatic validations is # similar to the validation_helpers model plugin API. However, # instead of having separate validates_* methods, it just adds a validate # method that accepts a block to the schema generators. Like the # create_table and alter_table blocks, this block is instance_evaled and # offers its own DSL. Example: # # DB.create_table(:table) do # Integer :id # String :name # # validate do # presence :id # min_length 5, :name # end # end # # instance_eval is used in this case because create_table and alter_table # already use instance_eval, so losing access to the surrounding receiver # is not an issue. 
# # Here's a breakdown of the constraints created for each constraint validation # method: # # All constraints except unique unless :allow_nil is true :: CHECK column IS NOT NULL # presence (String column) :: CHECK trim(column) != '' # exact_length 5 :: CHECK char_length(column) = 5 # min_length 5 :: CHECK char_length(column) >= 5 # max_length 5 :: CHECK char_length(column) <= 5 # length_range 3..5 :: CHECK char_length(column) >= 3 AND char_length(column) <= 5 # length_range 3...5 :: CHECK char_length(column) >= 3 AND char_length(column) < 5 # format /foo\\d+/ :: CHECK column ~ 'foo\\d+' # format /foo\\d+/i :: CHECK column ~* 'foo\\d+' # like 'foo%' :: CHECK column LIKE 'foo%' # ilike 'foo%' :: CHECK column ILIKE 'foo%' # includes ['a', 'b'] :: CHECK column IN ('a', 'b') # includes [1, 2] :: CHECK column IN (1, 2) # includes 3..5 :: CHECK column >= 3 AND column <= 5 # includes 3...5 :: CHECK column >= 3 AND column < 5 # unique :: UNIQUE (column) # # There are some additional API differences: # # * Only the :message and :allow_nil options are respected. The :allow_blank # and :allow_missing options are not respected. # * A new option, :name, is respected, for providing the name of the constraint. It is highly # recommended that you provide a name for all constraint validations, as # otherwise, it is difficult to drop the constraints later. # * The includes validation only supports an array of strings, an array of # integers, and a range of integers. # * There are like and ilike validations, which are similar to the format # validation but use a case sensitive or case insensitive LIKE pattern. LIKE # patterns are very simple, so many regexp patterns cannot be expressed by # them, but only a couple databases (PostgreSQL and MySQL) support regexp # patterns. # * When using the unique validation, column names cannot have embedded commas. # For similar reasons, when using an includes validation with an array of # strings, none of the strings in the array can have embedded commas. # * The unique validation does not support an arbitrary number of columns. # For a single column, just the symbol should be used, and for an array # of columns, an array of symbols should be used. There is no support # for creating two separate unique validations for separate columns in # a single call. # * A drop method can be called with a constraint name in an alter_table # validate block to drop an existing constraint and the related # validation metadata. # * While it is allowed to create a presence constraint with :allow_nil # set to true, doing so does not create a constraint unless the column # has String type. # # Note that this extension has the following issues on certain databases: # # * MySQL does not support check constraints (they are parsed but ignored), # so using this extension does not actually set up constraints on MySQL, # except for the unique constraint. It can still be used on MySQL to # add the validation metadata so that the plugin can setup automatic # validations. # * On SQLite, adding constraints to a table is not supported, so it must # be emulated by dropping the table and recreating it with the constraints. # If you want to use this plugin on SQLite with an alter_table block, # you should drop all constraint validation metadata using # drop_constraint_validations_for(:table=>'table'), and then # readd all constraints you want to use inside the alter table block, # making no other changes inside the alter_table block.
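# As a hedged sketch of alter_table usage (assumes an existing items table
# with a name column and a previously created constraint named
# :name_min_length):
#
#   DB.alter_table(:items) do
#     validate do
#       drop :name_min_length
#       min_length 5, :name, :name=>:name_min_length
#     end
#   end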
module Sequel module ConstraintValidations # The default table name used for the validation metadata. DEFAULT_CONSTRAINT_VALIDATIONS_TABLE = :sequel_constraint_validations # Set the default validation metadata table name if it has not already # been set. def self.extended(db) db.constraint_validations_table ||= DEFAULT_CONSTRAINT_VALIDATIONS_TABLE end # This is the DSL class used for the validate block inside create_table and # alter_table. class Generator # Store the schema generator that encloses this validates block. def initialize(generator) @generator = generator end # Create constraint validation methods that don't take an argument %w'presence unique'.each do |v| class_eval(<<-END, __FILE__, __LINE__+1) def #{v}(columns, opts=OPTS) @generator.validation({:type=>:#{v}, :columns=>Array(columns)}.merge(opts)) end END end # Create constraint validation methods that take an argument %w'exact_length min_length max_length length_range format like ilike includes'.each do |v| class_eval(<<-END, __FILE__, __LINE__+1) def #{v}(arg, columns, opts=OPTS) @generator.validation({:type=>:#{v}, :columns=>Array(columns), :arg=>arg}.merge(opts)) end END end # Given the name of a constraint, drop that constraint from the database, # and remove the related validation metadata. def drop(constraint) @generator.validation({:type=>:drop, :name=>constraint}) end # Alias of instance_eval for a nicer API. def process(&block) instance_eval(&block) end end # Additional methods for the create_table generator to support constraint validations. module CreateTableGeneratorMethods # An array of stored validation metadata, used later by the database to create # constraints. attr_reader :validations # Add a validation metadata hash to the stored array. def validation(opts) @validations << opts end # Call into the validate DSL for creating constraint validations. def validate(&block) Generator.new(self).process(&block) end end # Additional methods for the alter_table generator to support constraint validations, # used to give it a more similar API to the create_table generator. module AlterTableGeneratorMethods include CreateTableGeneratorMethods # Alias of add_constraint for similarity to create_table generator. def constraint(*args) add_constraint(*args) end # Alias of add_unique_constraint for similarity to create_table generator. def unique(*args) add_unique_constraint(*args) end end # The name of the table storing the validation metadata. If modifying this # from the default, this should be changed directly after loading the # extension into the database attr_accessor :constraint_validations_table # Create the table storing the validation metadata for all of the # constraints created by this extension. def create_constraint_validations_table create_table(constraint_validations_table) do String :table, :null=>false String :constraint_name String :validation_type, :null=>false String :column, :null=>false String :argument String :message TrueClass :allow_nil end end # Drop the constraint validations table. def drop_constraint_validations_table drop_table(constraint_validations_table) end # Delete validation metadata for specific constraints. At least # one of the following options should be specified: # # :table :: The table containing the constraint # :column :: The column affected by the constraint # :constraint :: The name of the related constraint # # The main reason for this method is when dropping tables # or columns. 
If you have previously defined a constraint # validation on the table or column, you should delete the # related metadata when dropping the table or column. # For a table, this isn't a big issue, as it will just result # in some wasted space, but for columns, if you don't drop # the related metadata, it could make it impossible to save # rows, since a validation for a nonexistent column will be # created. def drop_constraint_validations_for(opts=OPTS) ds = from(constraint_validations_table) if table = opts[:table] ds = ds.where(:table=>constraint_validations_literal_table(table)) end if column = opts[:column] ds = ds.where(:column=>column.to_s) end if constraint = opts[:constraint] ds = ds.where(:constraint_name=>constraint.to_s) end unless table || column || constraint raise Error, "must specify :table, :column, or :constraint when dropping constraint validations" end ds.delete end private # Modify the default create_table generator to include # the constraint validation methods. def alter_table_generator(&block) super do extend AlterTableGeneratorMethods @validations = [] instance_eval(&block) if block end end # After running all of the table alteration statements, # if there were any constraint validations, run table alteration # statements to create related constraints. This is purposely # run after the other statements, as the presence validation # in alter table requires introspecting the modified model # schema. def apply_alter_table_generator(name, generator) super unless generator.validations.empty? gen = alter_table_generator process_generator_validations(name, gen, generator.validations) apply_alter_table(name, gen.operations) end end # The value of a blank string. An empty string by default, but nil # on Oracle as Oracle treats the empty string as NULL. def blank_string_value if database_type == :oracle nil else '' end end # Return an unquoted literal form of the table name. # This allows the code to handle schema qualified tables, # without quoting all table names. def constraint_validations_literal_table(table) ds = dataset ds.quote_identifiers = false ds.literal(table) end # Before creating the table, add constraints for all of the # generators validations to the generator. def create_table_from_generator(name, generator, options) unless generator.validations.empty? process_generator_validations(name, generator, generator.validations) end super end # Modify the default create_table generator to include # the constraint validation methods. def create_table_generator(&block) super do extend CreateTableGeneratorMethods @validations = [] instance_eval(&block) if block end end # For the given table, generator, and validations, add constraints # to the generator for each of the validations, as well as adding # validation metadata to the constraint validations table. 
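# Each row of metadata stores the :table, :constraint_name,
# :validation_type, :column, :argument, :message, and :allow_nil values
# for a single validated column.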
def process_generator_validations(table, generator, validations) drop_rows = [] rows = validations.map do |val| columns, arg, constraint, validation_type, message, allow_nil = val.values_at(:columns, :arg, :name, :type, :message, :allow_nil) case validation_type when :presence string_check = columns.select{|c| generator_string_column?(generator, table, c)}.map{|c| [Sequel.trim(c), blank_string_value]} generator_add_constraint_from_validation(generator, val, (Sequel.negate(string_check) unless string_check.empty?)) when :exact_length generator_add_constraint_from_validation(generator, val, Sequel.&(*columns.map{|c| {Sequel.char_length(c) => arg}})) when :min_length generator_add_constraint_from_validation(generator, val, Sequel.&(*columns.map{|c| Sequel.char_length(c) >= arg})) when :max_length generator_add_constraint_from_validation(generator, val, Sequel.&(*columns.map{|c| Sequel.char_length(c) <= arg})) when :length_range op = arg.exclude_end? ? :< : :<= generator_add_constraint_from_validation(generator, val, Sequel.&(*columns.map{|c| (Sequel.char_length(c) >= arg.begin) & Sequel.char_length(c).send(op, arg.end)})) arg = "#{arg.begin}..#{'.' if arg.exclude_end?}#{arg.end}" when :format generator_add_constraint_from_validation(generator, val, Sequel.&(*columns.map{|c| {c => arg}})) if arg.casefold? validation_type = :iformat end arg = arg.source when :includes generator_add_constraint_from_validation(generator, val, Sequel.&(*columns.map{|c| {c => arg}})) if arg.is_a?(Range) if arg.begin.is_a?(Integer) && arg.end.is_a?(Integer) validation_type = :includes_int_range arg = "#{arg.begin}..#{'.' if arg.exclude_end?}#{arg.end}" else raise Error, "validates includes with a range only supports integers currently, cannot handle: #{arg.inspect}" end elsif arg.is_a?(Array) if arg.all?{|x| x.is_a?(Integer)} validation_type = :includes_int_array elsif arg.all?{|x| x.is_a?(String)} validation_type = :includes_str_array else raise Error, "validates includes with an array only supports strings and integers currently, cannot handle: #{arg.inspect}" end arg = arg.join(',') else raise Error, "validates includes only supports arrays and ranges currently, cannot handle: #{arg.inspect}" end when :like, :ilike generator_add_constraint_from_validation(generator, val, Sequel.&(*columns.map{|c| Sequel.send(validation_type, c, arg)})) when :unique generator.unique(columns, :name=>constraint) columns = [columns.join(',')] when :drop if generator.is_a?(Sequel::Schema::AlterTableGenerator) unless constraint raise Error, 'cannot drop a constraint validation without a constraint name' end generator.drop_constraint(constraint) drop_rows << [constraint_validations_literal_table(table), constraint.to_s] columns = [] else raise Error, 'cannot drop a constraint validation in a create_table generator' end else raise Error, "invalid or missing validation type: #{val.inspect}" end columns.map do |column| {:table=>constraint_validations_literal_table(table), :constraint_name=>(constraint.to_s if constraint), :validation_type=>validation_type.to_s, :column=>column.to_s, :argument=>(arg.to_s if arg), :message=>(message.to_s if message), :allow_nil=>allow_nil} end end ds = from(:sequel_constraint_validations) ds.multi_insert(rows.flatten) unless drop_rows.empty? ds.where([:table, :constraint_name]=>drop_rows).delete end end # Add the constraint to the generator, including a NOT NULL constraint # for all columns unless the :allow_nil option is given. 
def generator_add_constraint_from_validation(generator, val, cons) if val[:allow_nil] nil_cons = Sequel.expr(val[:columns].map{|c| [c, nil]}) cons = Sequel.|(nil_cons, cons) if cons else nil_cons = Sequel.negate(val[:columns].map{|c| [c, nil]}) cons = cons ? Sequel.&(nil_cons, cons) : nil_cons end if cons generator.constraint(val[:name], cons) end end # Introspect the generator to determine if column # created is a string or not. def generator_string_column?(generator, table, c) if generator.is_a?(Sequel::Schema::AlterTableGenerator) # This is the alter table case, which runs after the # table has been altered, so just check the database # schema for the column. schema(table).each do |col, sch| if col == c return sch[:type] == :string end end false else # This is the create table case, check the metadata # for the column to be created to see if it is a string. generator.columns.each do |col| if col[:name] == c return [String, :text, :varchar].include?(col[:type]) end end false end end end Database.register_extension(:constraint_validations, ConstraintValidations) end ruby-sequel-4.1.1/lib/sequel/extensions/core_extensions.rb000066400000000000000000000212121220156535500237730ustar00rootroot00000000000000# These are extensions to core classes that Sequel enables by default. # They make using Sequel's DSL easier by adding methods to Array, # Hash, String, and Symbol to add methods that return Sequel # expression objects. # # This extension is currently loaded by default, but that will no # longer be true in Sequel 4. Starting in Sequel 4, you will # need to load it manually via: # # Sequel.extension :core_extensions # This extension loads the core extensions. def Sequel.core_extensions? true end # Sequel extends +Array+ to add methods to implement the SQL DSL. # Most of these methods require that the array not be empty and that it # must consist solely of other arrays that have exactly two elements. class Array # Return a Sequel::SQL::BooleanExpression created from this array, not matching all of the # conditions. # # ~[[:a, true]] # SQL: a IS NOT TRUE # ~[[:a, 1], [:b, [2, 3]]] # SQL: a != 1 OR b NOT IN (2, 3) def ~ Sequel.~(self) end # Return a Sequel::SQL::CaseExpression with this array as the conditions and the given # default value and expression. # # [[{:a=>[2,3]}, 1]].case(0) # SQL: CASE WHEN a IN (2, 3) THEN 1 ELSE 0 END # [[:a, 1], [:b, 2]].case(:d, :c) # SQL: CASE c WHEN a THEN 1 WHEN b THEN 2 ELSE d END def case(*args) ::Sequel::SQL::CaseExpression.new(self, *args) end # Return a Sequel::SQL::ValueList created from this array. Used if this array contains # all two element arrays and you want it treated as an SQL value list (IN predicate) # instead of as a conditions specifier (similar to a hash). This is not necessary if you are using # this array as a value in a filter, but may be necessary if you are using it as a # value with placeholder SQL: # # DB[:a].filter([:a, :b]=>[[1, 2], [3, 4]]) # SQL: (a, b) IN ((1, 2), (3, 4)) # DB[:a].filter('(a, b) IN ?', [[1, 2], [3, 4]]) # SQL: (a, b) IN ((1 = 2) AND (3 = 4)) # DB[:a].filter('(a, b) IN ?', [[1, 2], [3, 4]].sql_value_list) # SQL: (a, b) IN ((1, 2), (3, 4)) def sql_value_list ::Sequel::SQL::ValueList.new(self) end # Deprecated alias for sql_value_list alias sql_array sql_value_list # Return a Sequel::SQL::BooleanExpression created from this array, matching all of the # conditions. Rarely do you need to call this explicitly, as Sequel generally # assumes that arrays of two element arrays specify this type of condition. 
One case where # it can be necessary to use this is if you are using the object as a value in a filter hash # and want to use the = operator instead of the IN operator (which is used by default for # arrays of two element arrays). # # [[:a, true]].sql_expr # SQL: a IS TRUE # [[:a, 1], [:b, [2, 3]]].sql_expr # SQL: a = 1 AND b IN (2, 3) def sql_expr Sequel.expr(self) end # Return a Sequel::SQL::BooleanExpression created from this array, matching none # of the conditions. # # [[:a, true]].sql_negate # SQL: a IS NOT TRUE # [[:a, 1], [:b, [2, 3]]].sql_negate # SQL: a != 1 AND b NOT IN (2, 3) def sql_negate Sequel.negate(self) end # Return a Sequel::SQL::BooleanExpression created from this array, matching any of the # conditions. # # [[:a, true]].sql_or # SQL: a IS TRUE # [[:a, 1], [:b, [2, 3]]].sql_or # SQL: a = 1 OR b IN (2, 3) def sql_or Sequel.or(self) end # Return a Sequel::SQL::StringExpression representing an SQL string made up of the # concatenation of this array's elements. If an argument is passed # it is used in between each element of the array in the SQL # concatenation. # # [:a].sql_string_join # SQL: a # [:a, :b].sql_string_join # SQL: a || b # [:a, 'b'].sql_string_join # SQL: a || 'b' # ['a', :b].sql_string_join(' ') # SQL: 'a' || ' ' || b def sql_string_join(joiner=nil) Sequel.join(self, joiner) end end # Sequel extends +Hash+ to add methods to implement the SQL DSL. class Hash # Return a Sequel::SQL::BooleanExpression created from this hash, matching # all of the conditions in this hash and the condition specified by # the given argument. # # {:a=>1} & :b # SQL: a = 1 AND b # {:a=>true} & ~:b # SQL: a IS TRUE AND NOT b def &(ce) ::Sequel::SQL::BooleanExpression.new(:AND, self, ce) end # Return a Sequel::SQL::BooleanExpression created from this hash, matching # all of the conditions in this hash or the condition specified by # the given argument. # # {:a=>1} | :b # SQL: a = 1 OR b # {:a=>true} | ~:b # SQL: a IS TRUE OR NOT b def |(ce) ::Sequel::SQL::BooleanExpression.new(:OR, self, ce) end # Return a Sequel::SQL::BooleanExpression created from this hash, not matching all of the # conditions. # # ~{:a=>true} # SQL: a IS NOT TRUE # ~{:a=>1, :b=>[2, 3]} # SQL: a != 1 OR b NOT IN (2, 3) def ~ ::Sequel::SQL::BooleanExpression.from_value_pairs(self, :OR, true) end # Return a Sequel::SQL::CaseExpression with this hash as the conditions and the given # default value. Note that the order of the conditions will be arbitrary on ruby 1.8, so all # conditions should be orthogonal. # # {{:a=>[2,3]}=>1}.case(0) # SQL: CASE WHEN a IN (2, 3) THEN 1 ELSE 0 END # {:a=>1, :b=>2}.case(:d, :c) # SQL: CASE c WHEN a THEN 1 WHEN b THEN 2 ELSE d END # # or: CASE c WHEN b THEN 2 WHEN a THEN 1 ELSE d END def case(*args) ::Sequel::SQL::CaseExpression.new(to_a, *args) end # Return a Sequel::SQL::BooleanExpression created from this hash, matching all of the # conditions. Rarely do you need to call this explicitly, as Sequel generally # assumes that hashes specify this type of condition. # # {:a=>true}.sql_expr # SQL: a IS TRUE # {:a=>1, :b=>[2, 3]}.sql_expr # SQL: a = 1 AND b IN (2, 3) def sql_expr ::Sequel::SQL::BooleanExpression.from_value_pairs(self) end # Return a Sequel::SQL::BooleanExpression created from this hash, matching none # of the conditions. 
# # {:a=>true}.sql_negate # SQL: a IS NOT TRUE # {:a=>1, :b=>[2, 3]}.sql_negate # SQL: a != 1 AND b NOT IN (2, 3) def sql_negate ::Sequel::SQL::BooleanExpression.from_value_pairs(self, :AND, true) end # Return a Sequel::SQL::BooleanExpression created from this hash, matching any of the # conditions. # # {:a=>true}.sql_or # SQL: a IS TRUE # {:a=>1, :b=>[2, 3]}.sql_or # SQL: a = 1 OR b IN (2, 3) def sql_or ::Sequel::SQL::BooleanExpression.from_value_pairs(self, :OR) end end # Sequel extends +String+ to add methods to implement the SQL DSL. class String include Sequel::SQL::AliasMethods include Sequel::SQL::CastMethods # Converts a string into a Sequel::LiteralString, in order to override string # literalization, e.g.: # # DB[:items].filter(:abc => 'def').sql #=> # "SELECT * FROM items WHERE (abc = 'def')" # # DB[:items].filter(:abc => 'def'.lit).sql #=> # "SELECT * FROM items WHERE (abc = def)" # # You can also provide arguments, to create a Sequel::SQL::PlaceholderLiteralString: # # DB[:items].select{|o| o.count('DISTINCT ?'.lit(:a))}.sql #=> # "SELECT count(DISTINCT a) FROM items" def lit(*args) args.empty? ? Sequel::LiteralString.new(self) : Sequel::SQL::PlaceholderLiteralString.new(self, args) end # Returns a Sequel::SQL::Blob that holds the same data as this string. Blobs provide proper # escaping of binary data. def to_sequel_blob ::Sequel::SQL::Blob.new(self) end end # Sequel extends +Symbol+ to add methods to implement the SQL DSL. class Symbol include Sequel::SQL::AliasMethods include Sequel::SQL::CastMethods include Sequel::SQL::OrderMethods include Sequel::SQL::BooleanMethods include Sequel::SQL::NumericMethods include Sequel::SQL::QualifyingMethods include Sequel::SQL::StringMethods include Sequel::SQL::SubscriptMethods include Sequel::SQL::ComplexExpressionMethods # Returns receiver wrapped in a Sequel::SQL::Identifier. Usually used to # prevent splitting the symbol. # # :a__b # SQL: "a"."b" # :a__b.identifier # SQL: "a__b" def identifier Sequel::SQL::Identifier.new(self) end # Returns a Sequel::SQL::Function with this as the function name, # and the given arguments. This is aliased as Symbol#[] if the RUBY_VERSION # is less than 1.9.0. Ruby 1.9 defines Symbol#[], and Sequel # doesn't override methods defined by ruby itself. # # :now.sql_function # SQL: now() # :sum.sql_function(:a) # SQL: sum(a) # :concat.sql_function(:a, :b) # SQL: concat(a, b) def sql_function(*args) Sequel::SQL::Function.new(self, *args) end end ruby-sequel-4.1.1/lib/sequel/extensions/core_refinements.rb000066400000000000000000000207451220156535500241250ustar00rootroot00000000000000# These are refinements to core classes that allow the Sequel # DSL to be used without modifying the core classes directly. # After loading the extension via: # # Sequel.extension :core_refinements # # you can enable the refinements for particular files: # # using Sequel::CoreRefinements raise(Sequel::Error, "Refinements require ruby 2.0.0 or greater") unless RUBY_VERSION >= '2.0.0' module Sequel::CoreRefinements refine Array do # Return a Sequel::SQL::BooleanExpression created from this array, not matching all of the # conditions. # # ~[[:a, true]] # SQL: a IS NOT TRUE # ~[[:a, 1], [:b, [2, 3]]] # SQL: a != 1 OR b NOT IN (2, 3) def ~ Sequel.~(self) end # Return a Sequel::SQL::CaseExpression with this array as the conditions and the given # default value and expression.
# # [[{:a=>[2,3]}, 1]].case(0) # SQL: CASE WHEN a IN (2, 3) THEN 1 ELSE 0 END # [[:a, 1], [:b, 2]].case(:d, :c) # SQL: CASE c WHEN a THEN 1 WHEN b THEN 2 ELSE d END def case(*args) ::Sequel::SQL::CaseExpression.new(self, *args) end # Return a Sequel::SQL::ValueList created from this array. Used if this array contains # all two element arrays and you want it treated as an SQL value list (IN predicate) # instead of as a conditions specifier (similar to a hash). This is not necessary if you are using # this array as a value in a filter, but may be necessary if you are using it as a # value with placeholder SQL: # # DB[:a].filter([:a, :b]=>[[1, 2], [3, 4]]) # SQL: (a, b) IN ((1, 2), (3, 4)) # DB[:a].filter('(a, b) IN ?', [[1, 2], [3, 4]]) # SQL: (a, b) IN ((1 = 2) AND (3 = 4)) # DB[:a].filter('(a, b) IN ?', [[1, 2], [3, 4]].sql_value_list) # SQL: (a, b) IN ((1, 2), (3, 4)) def sql_value_list ::Sequel::SQL::ValueList.new(self) end # Return a Sequel::SQL::BooleanExpression created from this array, matching all of the # conditions. Rarely do you need to call this explicitly, as Sequel generally # assumes that arrays of two element arrays specify this type of condition. One case where # it can be necessary to use this is if you are using the object as a value in a filter hash # and want to use the = operator instead of the IN operator (which is used by default for # arrays of two element arrays). # # [[:a, true]].sql_expr # SQL: a IS TRUE # [[:a, 1], [:b, [2, 3]]].sql_expr # SQL: a = 1 AND b IN (2, 3) def sql_expr Sequel.expr(self) end # Return a Sequel::SQL::BooleanExpression created from this array, matching none # of the conditions. # # [[:a, true]].sql_negate # SQL: a IS NOT TRUE # [[:a, 1], [:b, [2, 3]]].sql_negate # SQL: a != 1 AND b NOT IN (2, 3) def sql_negate Sequel.negate(self) end # Return a Sequel::SQL::BooleanExpression created from this array, matching any of the # conditions. # # [[:a, true]].sql_or # SQL: a IS TRUE # [[:a, 1], [:b, [2, 3]]].sql_or # SQL: a = 1 OR b IN (2, 3) def sql_or Sequel.or(self) end # Return a Sequel::SQL::StringExpression representing an SQL string made up of the # concatenation of this array's elements. If an argument is passed # it is used in between each element of the array in the SQL # concatenation. # # [:a].sql_string_join # SQL: a # [:a, :b].sql_string_join # SQL: a || b # [:a, 'b'].sql_string_join # SQL: a || 'b' # ['a', :b].sql_string_join(' ') # SQL: 'a' || ' ' || b def sql_string_join(joiner=nil) Sequel.join(self, joiner) end end refine Hash do # Return a Sequel::SQL::BooleanExpression created from this hash, matching # all of the conditions in this hash and the condition specified by # the given argument. # # {:a=>1} & :b # SQL: a = 1 AND b # {:a=>true} & ~:b # SQL: a IS TRUE AND NOT b def &(ce) ::Sequel::SQL::BooleanExpression.new(:AND, self, ce) end # Return a Sequel::SQL::BooleanExpression created from this hash, matching # all of the conditions in this hash or the condition specified by # the given argument. # # {:a=>1} | :b # SQL: a = 1 OR b # {:a=>true} | ~:b # SQL: a IS TRUE OR NOT b def |(ce) ::Sequel::SQL::BooleanExpression.new(:OR, self, ce) end # Return a Sequel::SQL::BooleanExpression created from this hash, not matching all of the # conditions. # # ~{:a=>true} # SQL: a IS NOT TRUE # ~{:a=>1, :b=>[2, 3]} # SQL: a != 1 OR b NOT IN (2, 3) def ~ ::Sequel::SQL::BooleanExpression.from_value_pairs(self, :OR, true) end # Return a Sequel::SQL::CaseExpression with this hash as the conditions and the given # default value. 
Note that the order of the conditions will be arbitrary on ruby 1.8, so all # conditions should be orthogonal. # # {{:a=>[2,3]}=>1}.case(0) # SQL: CASE WHEN a IN (2, 3) THEN 1 ELSE 0 END # {:a=>1, :b=>2}.case(:d, :c) # SQL: CASE c WHEN a THEN 1 WHEN b THEN 2 ELSE d END # # or: CASE c WHEN b THEN 2 WHEN a THEN 1 ELSE d END def case(*args) ::Sequel::SQL::CaseExpression.new(to_a, *args) end # Return a Sequel::SQL::BooleanExpression created from this hash, matching all of the # conditions. Rarely do you need to call this explicitly, as Sequel generally # assumes that hashes specify this type of condition. # # {:a=>true}.sql_expr # SQL: a IS TRUE # {:a=>1, :b=>[2, 3]}.sql_expr # SQL: a = 1 AND b IN (2, 3) def sql_expr ::Sequel::SQL::BooleanExpression.from_value_pairs(self) end # Return a Sequel::SQL::BooleanExpression created from this hash, matching none # of the conditions. # # {:a=>true}.sql_negate # SQL: a IS NOT TRUE # {:a=>1, :b=>[2, 3]}.sql_negate # SQL: a != 1 AND b NOT IN (2, 3) def sql_negate ::Sequel::SQL::BooleanExpression.from_value_pairs(self, :AND, true) end # Return a Sequel::SQL::BooleanExpression created from this hash, matching any of the # conditions. # # {:a=>true}.sql_or # SQL: a IS TRUE # {:a=>1, :b=>[2, 3]}.sql_or # SQL: a = 1 OR b IN (2, 3) def sql_or ::Sequel::SQL::BooleanExpression.from_value_pairs(self, :OR) end end refine String do include Sequel::SQL::AliasMethods include Sequel::SQL::CastMethods # Converts a string into a Sequel::LiteralString, in order to override string # literalization, e.g.: # # DB[:items].filter(:abc => 'def').sql #=> # "SELECT * FROM items WHERE (abc = 'def')" # # DB[:items].filter(:abc => 'def'.lit).sql #=> # "SELECT * FROM items WHERE (abc = def)" # # You can also provide arguments, to create a Sequel::SQL::PlaceholderLiteralString: # # DB[:items].select{|o| o.count('DISTINCT ?'.lit(:a))}.sql #=> # "SELECT count(DISTINCT a) FROM items" def lit(*args) args.empty? ? Sequel::LiteralString.new(self) : Sequel::SQL::PlaceholderLiteralString.new(self, args) end # Returns a Sequel::SQL::Blob that holds the same data as this string. Blobs provide proper # escaping of binary data. def to_sequel_blob ::Sequel::SQL::Blob.new(self) end end refine Symbol do include Sequel::SQL::AliasMethods include Sequel::SQL::CastMethods include Sequel::SQL::OrderMethods include Sequel::SQL::BooleanMethods include Sequel::SQL::NumericMethods include Sequel::SQL::QualifyingMethods include Sequel::SQL::StringMethods include Sequel::SQL::SubscriptMethods include Sequel::SQL::ComplexExpressionMethods # Returns receiver wrapped in a Sequel::SQL::Identifier. Usually used to # prevent splitting the symbol. # # :a__b # SQL: "a"."b" # :a__b.identifier # SQL: "a__b" def identifier Sequel::SQL::Identifier.new(self) end # Returns a Sequel::SQL::Function with this as the function name, # and the given arguments. This is aliased as Symbol#[] if the RUBY_VERSION # is less than 1.9.0. Ruby 1.9 defines Symbol#[], and Sequel # doesn't override methods defined by ruby itself. # # :now.sql_function # SQL: now() # :sum.sql_function(:a) # SQL: sum(a) # :concat.sql_function(:a, :b) # SQL: concat(a, b) def sql_function(*args) Sequel::SQL::Function.new(self, *args) end end end ruby-sequel-4.1.1/lib/sequel/extensions/date_arithmetic.rb000066400000000000000000000163161220156535500237230ustar00rootroot00000000000000# The date_arithmetic extension adds the ability to perform database-independent # addition/subtraction of intervals to/from dates and timestamps.
# # First, you need to load the extension into the database: # # DB.extension :date_arithmetic # # Then you can use the Sequel.date_add and Sequel.date_sub methods # to return Sequel expressions: # # add = Sequel.date_add(:date_column, :years=>1, :months=>2, :days=>3) # sub = Sequel.date_sub(:date_column, :hours=>1, :minutes=>2, :seconds=>3) # # In addition to specifying the interval as a hash, there is also # support for specifying the interval as an ActiveSupport::Duration # object: # # require 'active_support/all' # add = Sequel.date_add(:date_column, 1.years + 2.months + 3.days) # sub = Sequel.date_sub(:date_column, 1.hours + 2.minutes + 3.seconds) # # These expressions can be used in your datasets, or anywhere else that # Sequel expressions are allowed: # # DB[:table].select(add.as(:d)).where(sub > Sequel::CURRENT_TIMESTAMP) module Sequel module SQL module Builders # Return a DateAdd expression, adding an interval to the date/timestamp expr. def date_add(expr, interval) DateAdd.new(expr, interval) end # Return a DateAdd expression, adding the negative of the interval to # the date/timestamp expr. def date_sub(expr, interval) interval = if interval.is_a?(Hash) h = {} interval.each{|k,v| h[k] = -v unless v.nil?} h else -interval end DateAdd.new(expr, interval) end end # The DateAdd class represents the addition of an interval to a # date/timestamp expression. class DateAdd < GenericExpression # These methods are added to datasets using the date_arithmetic # extension, for the purposes of correctly literalizing DateAdd # expressions for the appropriate database type. module DatasetMethods DURATION_UNITS = [:years, :months, :days, :hours, :minutes, :seconds].freeze DEF_DURATION_UNITS = DURATION_UNITS.zip(DURATION_UNITS.map{|s| s.to_s.freeze}).freeze MYSQL_DURATION_UNITS = DURATION_UNITS.zip(DURATION_UNITS.map{|s| Sequel.lit(s.to_s.upcase[0...-1]).freeze}).freeze MSSQL_DURATION_UNITS = DURATION_UNITS.zip(DURATION_UNITS.map{|s| Sequel.lit(s.to_s[0...-1]).freeze}).freeze H2_DURATION_UNITS = DURATION_UNITS.zip(DURATION_UNITS.map{|s| s.to_s[0...-1].freeze}).freeze DERBY_DURATION_UNITS = DURATION_UNITS.zip(DURATION_UNITS.map{|s| Sequel.lit("SQL_TSI_#{s.to_s.upcase[0...-1]}").freeze}).freeze ACCESS_DURATION_UNITS = DURATION_UNITS.zip(%w'yyyy m d h n s'.map{|s| s.freeze}).freeze DB2_DURATION_UNITS = DURATION_UNITS.zip(DURATION_UNITS.map{|s| Sequel.lit(s.to_s).freeze}).freeze # Append the SQL fragment for the DateAdd expression to the SQL query. def date_add_sql_append(sql, da) h = da.interval expr = da.expr cast = case db_type = db.database_type when :postgres interval = "" each_valid_interval_unit(h, DEF_DURATION_UNITS) do |value, sql_unit| interval << "#{value} #{sql_unit} " end if interval.empty? 
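# No valid interval parts were given (all values nil or 0), so there
# is nothing to add: just cast the expression to a timestamp instead
# of generating an interval addition.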
return literal_append(sql, Sequel.cast(expr, Time)) else return complex_expression_sql_append(sql, :+, [Sequel.cast(expr, Time), Sequel.cast(interval, :interval)]) end when :sqlite args = [expr] each_valid_interval_unit(h, DEF_DURATION_UNITS) do |value, sql_unit| args << "#{value} #{sql_unit}" end return _function_sql_append(sql, :datetime, args) when :mysql, :hsqldb, :cubrid if db_type == :hsqldb # HSQLDB requires 2.2.9+ for the DATE_ADD function expr = Sequel.cast(expr, Time) end each_valid_interval_unit(h, MYSQL_DURATION_UNITS) do |value, sql_unit| expr = Sequel.function(:DATE_ADD, expr, Sequel.lit(["INTERVAL ", " "], value, sql_unit)) end when :mssql, :h2, :access units = case db_type when :mssql MSSQL_DURATION_UNITS when :h2 H2_DURATION_UNITS when :access ACCESS_DURATION_UNITS end each_valid_interval_unit(h, units) do |value, sql_unit| expr = Sequel.function(:DATEADD, sql_unit, value, expr) end when :derby if expr.is_a?(Date) && !expr.is_a?(DateTime) # Work around for https://issues.apache.org/jira/browse/DERBY-896 expr = Sequel.cast_string(expr) + ' 00:00:00' end each_valid_interval_unit(h, DERBY_DURATION_UNITS) do |value, sql_unit| expr = Sequel.lit(["{fn timestampadd(#{sql_unit}, ", ", timestamp(", "))}"], value, expr) end when :oracle each_valid_interval_unit(h, MYSQL_DURATION_UNITS) do |value, sql_unit| expr = Sequel.+(expr, Sequel.lit(["INTERVAL ", " "], value.to_s, sql_unit)) end when :db2 expr = Sequel.cast(expr, Time) each_valid_interval_unit(h, DB2_DURATION_UNITS) do |value, sql_unit| expr = Sequel.+(expr, Sequel.lit(["", " "], value, sql_unit)) end false else raise Error, "date arithmetic is not implemented on #{db.database_type}" end if cast expr = Sequel.cast(expr, Time) end literal_append(sql, expr) end private # Yield the value in the interval for each of the units # present in the interval, along with the SQL fragment # representing the unit name. Returns false if any # values were yielded, true otherwise def each_valid_interval_unit(interval, units) cast = true units.each do |unit, sql_unit| if (value = interval[unit]) && value != 0 cast = false yield value, sql_unit end end cast end end # The expression that the interval is being added to. attr_reader :expr # The interval added to the expression, as a hash with # symbol keys. attr_reader :interval # Supports two types of intervals: # Hash :: Used directly, but values cannot be plain strings. # ActiveSupport::Duration :: Converted to a hash using the interval's parts. def initialize(expr, interval) @expr = expr @interval = if interval.is_a?(Hash) interval.each_value do |v| # Attempt to prevent SQL injection by users who pass untrusted strings # as interval values. 
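# For example (hypothetical attack), an interval such as
# :days=>"1 day'); DROP TABLE users; --" would otherwise be
# interpolated directly into the SQL built above, so plain String
# values are rejected; LiteralString values are assumed to be
# trusted and are allowed through.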
if v.is_a?(String) && !v.is_a?(LiteralString) raise Sequel::InvalidValue, "cannot provide String value as interval part: #{v.inspect}" end end interval else h = Hash.new(0) interval.parts.each{|unit, value| h[unit] += value} {}.merge(h) end end to_s_method :date_add_sql end end Dataset.register_extension(:date_arithmetic, SQL::DateAdd::DatasetMethods) end ruby-sequel-4.1.1/lib/sequel/extensions/empty_array_ignore_nulls.rb000066400000000000000000000017431220156535500257070ustar00rootroot00000000000000# This changes Sequel's literalization of IN/NOT IN with an empty # array value to not return NULL even if one of the referenced # columns is NULL: # # DB[:test].where(:name=>[]) # # SELECT * FROM test WHERE (1 = 0) # DB[:test].exclude(:name=>[]) # # SELECT * FROM test WHERE (1 = 1) # # The default Sequel behavior is to respect NULLs, so that when # name is NULL, the expression returns NULL. # # You can load this extension into specific datasets: # # ds = DB[:table] # ds.extension(:empty_array_ignore_nulls) # # Or you can load it into all of a database's datasets, which # is probably the desired behavior if you are using this extension: # # DB.extension(:empty_array_ignore_nulls) module Sequel module EmptyArrayIgnoreNulls # Use a simple expression that is always true or false, never NULL. def empty_array_value(op, cols) {1 => ((op == :IN) ? 0 : 1)} end end Dataset.register_extension(:empty_array_ignore_nulls, EmptyArrayIgnoreNulls) end ruby-sequel-4.1.1/lib/sequel/extensions/eval_inspect.rb000066400000000000000000000125311220156535500232440ustar00rootroot00000000000000# The eval_inspect extension changes #inspect for Sequel::SQL::Expression # subclasses to return a string suitable for ruby's eval, such that # # eval(obj.inspect) == obj # # is true. The above code is true for most of ruby's simple classes such # as String, Integer, Float, and Symbol, but it's not true for classes such # as Time, Date, and BigDecimal. Sequel attempts to handle situations where # instances of these classes are a component of a Sequel expression. # # To load the extension: # # Sequel.extension :eval_inspect module Sequel module EvalInspect # Special case objects where inspect does not generally produce input # suitable for eval. Used by Sequel::SQL::Expression#inspect so that # it can produce a string suitable for eval even if components of the # expression have inspect methods that do not produce strings suitable # for eval. 
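# A rough illustration of the problem this addresses (the exact
# output depends on the ruby version and the current time):
#
#   Time.now.inspect              # => "2013-08-01 12:00:00 -0700" (not eval-able)
#   Sequel.eval_inspect(Time.now) # => "Time.parse(\"2013-08-01T12:00:00.000000000-0700\")"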
def eval_inspect(obj) case obj when Sequel::SQL::Blob, Sequel::LiteralString, Sequel::SQL::ValueList "#{obj.class}.new(#{obj.inspect})" when Array "[#{obj.map{|o| eval_inspect(o)}.join(', ')}]" when Hash "{#{obj.map{|k, v| "#{eval_inspect(k)} => #{eval_inspect(v)}"}.join(', ')}}" when Time datepart = "%Y-%m-%dT" unless obj.is_a?(Sequel::SQLTime) if RUBY_VERSION < '1.9' # :nocov: # Time on 1.8 doesn't handle %N (or %z on Windows), manually set the usec value in the string hours, mins = obj.utc_offset.divmod(3600) mins /= 60 "#{obj.class}.parse(#{obj.strftime("#{datepart}%H:%M:%S.#{sprintf('%06i%+03i%02i', obj.usec, hours, mins)}").inspect})#{'.utc' if obj.utc?}" # :nocov: else "#{obj.class}.parse(#{obj.strftime("#{datepart}%T.%N%z").inspect})#{'.utc' if obj.utc?}" end when DateTime # Ignore date of calendar reform "DateTime.parse(#{obj.strftime('%FT%T.%N%z').inspect})" when Date # Ignore offset and date of calendar reform "Date.new(#{obj.year}, #{obj.month}, #{obj.day})" when BigDecimal "BigDecimal.new(#{obj.to_s.inspect})" else obj.inspect end end end extend EvalInspect module SQL class Expression # Attempt to produce a string suitable for eval, such that: # # eval(obj.inspect) == obj def inspect # Assume by default that the object can be recreated by calling # self.class.new with any attr_reader values defined on the class, # in the order they were defined. klass = self.class args = inspect_args.map do |arg| if arg.is_a?(String) && arg =~ /\A\*/ # Special case string arguments starting with *, indicating that # they should return an array to be splatted as the remaining arguments send(arg.sub('*', '')).map{|a| Sequel.eval_inspect(a)}.join(', ') else Sequel.eval_inspect(send(arg)) end end "#{klass}.new(#{args.join(', ')})" end private # Which attribute values to use in the inspect string. def inspect_args self.class.comparison_attrs end end class ComplexExpression private # ComplexExpression's initializer uses a splat for the operator arguments. def inspect_args [:op, "*args"] end end class Constant # Constants to lookup in the Sequel module. INSPECT_LOOKUPS = [:CURRENT_DATE, :CURRENT_TIMESTAMP, :CURRENT_TIME, :SQLTRUE, :SQLFALSE, :NULL, :NOTNULL] # Reference the constant in the Sequel module if there is # one that matches. def inspect INSPECT_LOOKUPS.each do |c| return "Sequel::#{c}" if Sequel.const_get(c) == self end super end end class CaseExpression private # CaseExpression's initializer checks whether an argument was # provided, to differentiate CASE WHEN from CASE NULL WHEN, so # check if an expression was provided, and only include the # expression in the inspect output if so. def inspect_args if expression? [:conditions, :default, :expression] else [:conditions, :default] end end end class Function private # Function's initializer uses a splat for the function arguments. def inspect_args [:f, "*args"] end end class JoinOnClause private # JoinOnClause's initializer takes the on argument as the first argument # instead of the last. def inspect_args [:on, :join_type, :table, :table_alias] end end class JoinUsingClause private # JoinUsingClause's initializer takes the using argument as the first argument # instead of the last. def inspect_args [:using, :join_type, :table, :table_alias] end end class OrderedExpression private # OrderedExpression's initializer takes the :nulls information inside a hash, # so if a NULL order was given, include a hash with that information.
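# For example, Sequel.desc(:a, :nulls=>:first).inspect should
# produce something like:
#
#   Sequel::SQL::OrderedExpression.new(:a, true, {:nulls=>:first})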
def inspect_args if nulls [:expression, :descending, :opts_hash] else [:expression, :descending] end end # A hash of null information suitable for passing to the initializer. def opts_hash {:nulls=>nulls} end end end end ruby-sequel-4.1.1/lib/sequel/extensions/filter_having.rb000066400000000000000000000030161220156535500234070ustar00rootroot00000000000000# The filter_having extension allows Dataset#filter, #and, #or # and #exclude to operate on the HAVING clause if the dataset # already has a HAVING clause, which was the historical behavior # before Sequel 4. It is only recommended to use this for # backwards compatibility. # # You can load this extension into specific datasets: # # ds = DB[:table] # ds.extension(:filter_having) # # Or you can load it into all of a database's datasets, which # is probably the desired behavior if you are using this extension: # # DB.extension(:filter_having) module Sequel module FilterHaving # Operate on HAVING clause if HAVING clause already present. def and(*cond, &block) if @opts[:having] having(*cond, &block) else super end end # Operate on HAVING clause if HAVING clause already present. def exclude(*cond, &block) if @opts[:having] exclude_having(*cond, &block) else super end end # Operate on HAVING clause if HAVING clause already present. def filter(*cond, &block) if @opts[:having] having(*cond, &block) else super end end # Operate on HAVING clause if HAVING clause already present. def or(*cond, &block) if having = @opts[:having] cond = cond.first if cond.size == 1 clone(:having => SQL::BooleanExpression.new(:OR, having, filter_expr(cond, &block))) else super end end end Dataset.register_extension(:filter_having, FilterHaving) end ruby-sequel-4.1.1/lib/sequel/extensions/graph_each.rb000066400000000000000000000046701220156535500226560ustar00rootroot00000000000000# The graph_each extension adds Dataset#graph_each and # makes Dataset#each call #graph_each if the dataset has been graphed. # Dataset#graph_each splits result hashes into subhashes per table: # # DB[:a].graph(:b, :id=>:b_id).all # # => {:a=>{:id=>1, :b_id=>2}, :b=>{:id=>2}} # # You can load this extension into specific datasets: # # ds = DB[:table] # ds.extension(:graph_each) # # Or you can load it into all of a database's datasets, which # is probably the desired behavior if you are using this extension: # # DB.extension(:graph_each) module Sequel module GraphEach # Call graph_each for graphed datasets that are not being eager graphed. def each if @opts[:graph] && !@opts[:eager_graph] graph_each{|r| yield r} else super end end private # Fetch the rows, split them into component table parts, # transform and run the row_proc on each part (if applicable), # and yield a hash of the parts.
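# Note that when every column value for a table's subhash is nil
# (e.g. no matching row was found by a LEFT OUTER JOIN), the subhash
# itself is replaced with nil rather than being passed to the
# row_proc, as shown by the g.values.any? check below.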
def graph_each # Reject tables with nil datasets, as they are excluded from # the result set datasets = @opts[:graph][:table_aliases].to_a.reject{|ta,ds| ds.nil?} # Get just the list of table aliases into a local variable, for speed table_aliases = datasets.collect{|ta,ds| ta} # Get an array of arrays, one for each dataset, with # the necessary information about each dataset, for speed datasets = datasets.collect{|ta, ds| [ta, ds, ds.row_proc]} # Use the manually set graph aliases, if any, otherwise # use the ones automatically created by .graph column_aliases = @opts[:graph_aliases] || @opts[:graph][:column_aliases] fetch_rows(select_sql) do |r| graph = {} # Create the sub hashes, one per table table_aliases.each{|ta| graph[ta]={}} # Split the result set based on the column aliases # If there are columns in the result set that are # not in column_aliases, they are ignored column_aliases.each do |col_alias, tc| ta, column = tc graph[ta][column] = r[col_alias] end # For each dataset run the row_proc if applicable datasets.each do |ta,ds,rp| g = graph[ta] graph[ta] = if g.values.any?{|x| !x.nil?} rp ? rp.call(g) : g else nil end end yield graph end self end end Dataset.register_extension(:graph_each, GraphEach) end ruby-sequel-4.1.1/lib/sequel/extensions/hash_aliases.rb000066400000000000000000000021671220156535500232200ustar00rootroot00000000000000# The hash_aliases extension allows Dataset#select and Dataset#from # to treat a hash argument as an alias specification, with keys # being the expressions and values being the aliases, # which was the historical behavior before Sequel 4. # It is only recommended to use this for backwards compatibility. # # You can load this extension into specific datasets: # # ds = DB[:table] # ds.extension(:hash_aliases) # # Or you can load it into all of a database's datasets, which # is probably the desired behavior if you are using this extension: # # DB.extension(:hash_aliases) module Sequel module HashAliases def from(*source) super(*convert_hash_aliases(source)) end def select(*columns, &block) virtual_row_columns(columns, block) super(*convert_hash_aliases(columns), &nil) end private def convert_hash_aliases(columns) m = [] columns.each do |i| if i.is_a?(Hash) m.concat(i.map{|k, v| SQL::AliasedExpression.new(k,v)}) else m << i end end m end end Dataset.register_extension(:hash_aliases, HashAliases) end ruby-sequel-4.1.1/lib/sequel/extensions/inflector.rb000066400000000000000000000213701220156535500225560ustar00rootroot00000000000000# The inflector extension adds inflection instance methods to String, which allows the easy transformation of # words from singular to plural, class names to table names, modularized class # names to ones without, and class names to foreign keys. It exists for # backwards compatibility with legacy Sequel code. # # To load the extension: # # Sequel.extension :inflector class String # This module acts as a singleton returned/yielded by String.inflections, # which is used to override or specify additional inflection rules. Examples: # # String.inflections do |inflect| # inflect.plural /^(ox)$/i, '\1\2en' # inflect.singular /^(ox)en/i, '\1' # # inflect.irregular 'octopus', 'octopi' # # inflect.uncountable "equipment" # end # # New rules are added at the top. So in the example above, the irregular rule for octopus will now be the first of the # pluralization and singularization rules that is run. This guarantees that your rules run before any of the rules that may # already have been loaded.
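# For example (a sketch, assuming the default inflection rules have
# been loaded):
#
#   String.inflections{|inflect| inflect.irregular 'octopus', 'octopi'}
#   "octopus".pluralize   # => "octopi"
#   "octopi".singularize  # => "octopus"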
module Inflections @plurals, @singulars, @uncountables = [], [], [] class << self # Array of 2 element arrays, first containing a regex, and the second containing a substitution pattern, used for pluralization. attr_reader :plurals # Array of 2 element arrays, first containing a regex, and the second containing a substitution pattern, used for singularization. attr_reader :singulars # Array of strings for words where the singular form is the same as the plural form attr_reader :uncountables end # Clears the loaded inflections within a given scope (default is :all). Give the scope as a symbol of the inflection type, # the options are: :plurals, :singulars, :uncountables # # Examples: # clear :all # clear :plurals def self.clear(scope = :all) case scope when :all @plurals, @singulars, @uncountables = [], [], [] else instance_variable_set("@#{scope}", []) end end # Specifies a new irregular that applies to both pluralization and singularization at the same time. This can only be used # for strings, not regular expressions. You simply pass the irregular in singular and plural form. # # Examples: # irregular 'octopus', 'octopi' # irregular 'person', 'people' def self.irregular(singular, plural) plural(Regexp.new("(#{singular[0,1]})#{singular[1..-1]}$", "i"), '\1' + plural[1..-1]) singular(Regexp.new("(#{plural[0,1]})#{plural[1..-1]}$", "i"), '\1' + singular[1..-1]) end # Specifies a new pluralization rule and its replacement. The rule can either be a string or a regular expression. # The replacement should always be a string that may include references to the matched data from the rule. # # Example: # plural(/(x|ch|ss|sh)$/i, '\1es') def self.plural(rule, replacement) @plurals.insert(0, [rule, replacement]) end # Specifies a new singularization rule and its replacement. The rule can either be a string or a regular expression. # The replacement should always be a string that may include references to the matched data from the rule. # # Example: # singular(/([^aeiouy]|qu)ies$/i, '\1y') def self.singular(rule, replacement) @singulars.insert(0, [rule, replacement]) end # Add uncountable words that shouldn't be inflected. # # Examples: # uncountable "money" # uncountable "money", "information" # uncountable %w( money information rice ) def self.uncountable(*words) (@uncountables << words).flatten! end Sequel.require('default_inflections', 'model') instance_eval(&Sequel::DEFAULT_INFLECTIONS_PROC) end # Yield the Inflections module if a block is given, and return # the Inflections module. def self.inflections yield Inflections if block_given? Inflections end # By default, camelize converts the string to UpperCamelCase. If the argument to camelize # is set to :lower then camelize produces lowerCamelCase. # # camelize will also convert '/' to '::' which is useful for converting paths to namespaces # # Examples # "active_record".camelize #=> "ActiveRecord" # "active_record".camelize(:lower) #=> "activeRecord" # "active_record/errors".camelize #=> "ActiveRecord::Errors" # "active_record/errors".camelize(:lower) #=> "activeRecord::Errors" def camelize(first_letter_in_uppercase = :upper) s = gsub(/\/(.?)/){|x| "::#{x[-1..-1].upcase unless x == '/'}"}.gsub(/(^|_)(.)/){|x| x[-1..-1].upcase} s[0...1] = s[0...1].downcase unless first_letter_in_uppercase == :upper s end alias_method :camelcase, :camelize # Singularizes and camelizes the string. Also strips out all characters preceding # and including a period (".").
# # Examples # "egg_and_hams".classify #=> "EggAndHam" # "post".classify #=> "Post" # "schema.post".classify #=> "Post" def classify sub(/.*\./, '').singularize.camelize end # Constantize tries to find a declared constant with the name specified # in the string. It raises a NameError when the name is not in CamelCase # or is not initialized. # # Examples # "Module".constantize #=> Module # "Class".constantize #=> Class def constantize raise(NameError, "#{inspect} is not a valid constant name!") unless m = /\A(?:::)?([A-Z]\w*(?:::[A-Z]\w*)*)\z/.match(self) Object.module_eval("::#{m[1]}", __FILE__, __LINE__) end # Replaces underscores with dashes in the string. # # Example # "puni_puni".dasherize #=> "puni-puni" def dasherize gsub(/_/, '-') end # Removes the module part from the expression in the string # # Examples # "ActiveRecord::CoreExtensions::String::Inflections".demodulize #=> "Inflections" # "Inflections".demodulize #=> "Inflections" def demodulize gsub(/^.*::/, '') end # Creates a foreign key name from a class name. # +use_underscore+ sets whether the method should put '_' between the name and 'id'. # # Examples # "Message".foreign_key #=> "message_id" # "Message".foreign_key(false) #=> "messageid" # "Admin::Post".foreign_key #=> "post_id" def foreign_key(use_underscore = true) "#{demodulize.underscore}#{'_' if use_underscore}id" end # Capitalizes the first word and turns underscores into spaces and strips _id. # Like titleize, this is meant for creating pretty output. # # Examples # "employee_salary".humanize #=> "Employee salary" # "author_id".humanize #=> "Author" def humanize gsub(/_id$/, "").gsub(/_/, " ").capitalize end # Returns the plural form of the word in the string. # # Examples # "post".pluralize #=> "posts" # "octopus".pluralize #=> "octopi" # "sheep".pluralize #=> "sheep" # "words".pluralize #=> "words" # "the blue mailman".pluralize #=> "the blue mailmen" # "CamelOctopus".pluralize #=> "CamelOctopi" def pluralize result = dup Inflections.plurals.each{|(rule, replacement)| break if result.gsub!(rule, replacement)} unless Inflections.uncountables.include?(downcase) result end # The reverse of pluralize, returns the singular form of a word in a string. # # Examples # "posts".singularize #=> "post" # "octopi".singularize #=> "octopus" # "sheep".singularize #=> "sheep" # "word".singularize #=> "word" # "the blue mailmen".singularize #=> "the blue mailman" # "CamelOctopi".singularize #=> "CamelOctopus" def singularize result = dup Inflections.singulars.each{|(rule, replacement)| break if result.gsub!(rule, replacement)} unless Inflections.uncountables.include?(downcase) result end # Underscores and pluralizes the string. # # Examples # "RawScaledScorer".tableize #=> "raw_scaled_scorers" # "egg_and_ham".tableize #=> "egg_and_hams" # "fancyCategory".tableize #=> "fancy_categories" def tableize underscore.pluralize end # Capitalizes all the words and replaces some characters in the string to create # a nicer looking title. Titleize is meant for creating pretty output. # # titleize is also aliased as titlecase # # Examples # "man from the boondocks".titleize #=> "Man From The Boondocks" # "x-men: the last stand".titleize #=> "X Men: The Last Stand" def titleize underscore.humanize.gsub(/\b([a-z])/){|x| x[-1..-1].upcase} end alias_method :titlecase, :titleize # The reverse of camelize. Makes an underscored form from the expression in the string. # Also changes '::' to '/' to convert namespaces to paths.
# # Examples # "ActiveRecord".underscore #=> "active_record" # "ActiveRecord::Errors".underscore #=> active_record/errors def underscore gsub(/::/, '/').gsub(/([A-Z]+)([A-Z][a-z])/,'\1_\2'). gsub(/([a-z\d])([A-Z])/,'\1_\2').tr("-", "_").downcase end end ruby-sequel-4.1.1/lib/sequel/extensions/looser_typecasting.rb000066400000000000000000000022161220156535500245040ustar00rootroot00000000000000# The LooserTypecasting extension loosens the default database typecasting # for the following types: # # :float :: use to_f instead of Float() # :integer :: use to_i instead of Integer() # :decimal :: don't check string conversion with Float() # :string :: silently allow hash and array conversion to string # # To load the extension into the database: # # DB.extension :looser_typecasting module Sequel module LooserTypecasting # Typecast the value to a Float using to_f instead of Kernel.Float def typecast_value_float(value) value.to_f end # Typecast the value to an Integer using to_i instead of Kernel.Integer def typecast_value_integer(value) value.to_i end # Typecast the value to a String using to_s instead of Kernel.String def typecast_value_string(value) value.to_s end # Typecast the value to a BigDecimal, without checking if strings # have a valid format. def typecast_value_decimal(value) if value.is_a?(String) BigDecimal.new(value) else super end end end Database.register_extension(:looser_typecasting, LooserTypecasting) end ruby-sequel-4.1.1/lib/sequel/extensions/meta_def.rb000066400000000000000000000017361220156535500223410ustar00rootroot00000000000000# The meta_def extension is designed for backwards compatibility # with older Sequel code that uses the meta_def method on # Database, Dataset, and Model classes and/or instances. It is # not recommended for usage in new code. To load this extension: # # Sequel.extension :meta_def module Sequel # Contains meta_def method for adding methods to objects via blocks. # Only recommended for backwards compatibility with existing code. module Metaprogramming # Define a method with the given name and block body on the receiver. # # ds = DB[:items] # ds.meta_def(:x){42} # ds.x # => 42 def meta_def(name, &block) (class << self; self end).send(:define_method, name, &block) end end Database.extend Metaprogramming Database.send(:include, Metaprogramming) Dataset.extend Metaprogramming Dataset.send(:include, Metaprogramming) if defined?(Model) Model.extend Metaprogramming Model.send(:include, Metaprogramming) end end ruby-sequel-4.1.1/lib/sequel/extensions/migration.rb000066400000000000000000000574251220156535500225720ustar00rootroot00000000000000# Adds the Sequel::Migration and Sequel::Migrator classes, which allow # the user to easily group schema changes and migrate the database # to a newer version or revert to a previous version. # # To load the extension: # # Sequel.extension :migration module Sequel # Sequel's older migration class, available for backward compatibility. # Uses subclasses with up and down instance methods for each migration: # # Class.new(Sequel::Migration) do # def up # create_table(:artists) do # primary_key :id # String :name # end # end # # def down # drop_table(:artists) # end # end # # Part of the +migration+ extension. class Migration # Set the database associated with this migration. def initialize(db) @db = db end # Applies the migration to the supplied database in the specified # direction.
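# For example, given the anonymous Migration subclass from the
# example above (migration_class is a placeholder name):
#
#   migration_class.apply(DB, :up)   # equivalent to migration_class.new(DB).up
#   migration_class.apply(DB, :down) # equivalent to migration_class.new(DB).down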
def self.apply(db, direction) raise(ArgumentError, "Invalid migration direction specified (#{direction.inspect})") unless [:up, :down].include?(direction) new(db).send(direction) end # Returns the list of Migration descendants. def self.descendants @descendants ||= [] end # Adds the new migration class to the list of Migration descendants. def self.inherited(base) descendants << base end # Don't allow transaction overriding in old migrations. def self.use_transactions nil end # The default down action does nothing def down end # Intercepts method calls intended for the database and sends them along. def method_missing(method_sym, *args, &block) @db.send(method_sym, *args, &block) end # This object responds to all methods the database responds to. def respond_to_missing?(meth, include_private) @db.respond_to?(meth, include_private) end # The default up action does nothing def up end end # Migration class used by the Sequel.migration DSL, # using instances for each migration, unlike the # +Migration+ class, which uses subclasses for each # migration. Part of the +migration+ extension. class SimpleMigration # Proc used for the down action attr_accessor :down # Proc used for the up action attr_accessor :up # Whether to use transactions for this migration, default depends on the # database. attr_accessor :use_transactions # Don't set transaction use by default. def initialize @use_transactions = nil end # Apply the appropriate block on the +Database+ # instance using instance_eval. def apply(db, direction) raise(ArgumentError, "Invalid migration direction specified (#{direction.inspect})") unless [:up, :down].include?(direction) if prok = send(direction) db.instance_eval(&prok) end end end # Internal class used by the Sequel.migration DSL, part of the +migration+ extension. class MigrationDSL < BasicObject # The underlying Migration instance attr_reader :migration def self.create(&block) new(&block).migration end # Create a new migration class, and instance_eval the block. def initialize(&block) @migration = SimpleMigration.new Migration.descendants << migration instance_eval(&block) end # Defines the migration's down action. def down(&block) migration.down = block end # Disable the use of transactions for the related migration def no_transaction migration.use_transactions = false end # Enable the use of transactions for the related migration def transaction migration.use_transactions = true end # Defines the migration's up action. def up(&block) migration.up = block end # Creates a reversible migration. This is the same as creating # the same block with +up+, but it also calls the block and attempts # to create a +down+ block that will reverse the changes made by # the block. # # There are no guarantees that this will work perfectly # in all cases, but it should work for most common cases. def change(&block) migration.up = block migration.down = MigrationReverser.new.reverse(&block) end end # Handles the reversing of reversible migrations. Basically records # supported methods calls, translates them to reversed calls, and # returns them in reverse order. class MigrationReverser < Sequel::BasicObject def initialize @actions = [] end # Reverse the actions for the given block. Takes the block given # and returns a new block that reverses the actions taken by # the given block. 
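# For example, a change block containing only
#
#   create_table(:artists){primary_key :id}
#
# records a [:drop_table, :artists] action, so the generated down
# block behaves roughly like:
#
#   drop_table(:artists)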
def reverse(&block) begin instance_eval(&block) rescue just_raise = true end if just_raise Proc.new{raise Sequel::Error, 'irreversible migration method used, you may need to write your own down method'} else actions = @actions.reverse Proc.new do actions.each do |a| if a.last.is_a?(Proc) pr = a.pop send(*a, &pr) else send(*a) end end end end end private def add_column(*args) @actions << [:drop_column, args[0], args[1]] end def add_index(*args) @actions << [:drop_index, *args] end def alter_table(table, &block) @actions << [:alter_table, table, MigrationAlterTableReverser.new.reverse(&block)] end def create_join_table(*args) @actions << [:drop_join_table, *args] end def create_table(*args) @actions << [:drop_table, args.first] end def create_view(*args) @actions << [:drop_view, args.first] end def rename_column(table, name, new_name) @actions << [:rename_column, table, new_name, name] end def rename_table(table, new_name) @actions << [:rename_table, new_name, table] end end # Handles reversing an alter_table block in a reversible migration. class MigrationAlterTableReverser < Sequel::BasicObject def initialize @actions = [] end def reverse(&block) instance_eval(&block) actions = @actions.reverse Proc.new{actions.each{|a| send(*a)}} end private def add_column(*args) @actions << [:drop_column, args.first] end def add_constraint(*args) @actions << [:drop_constraint, args.first] end def add_foreign_key(key, table, *args) @actions << [:drop_foreign_key, key, *args] end def add_primary_key(*args) raise if args.first.is_a?(Array) @actions << [:drop_column, args.first] end def add_index(*args) @actions << [:drop_index, *args] end alias add_full_text_index add_index alias add_spatial_index add_index def rename_column(name, new_name) @actions << [:rename_column, new_name, name] end end # The preferred method for writing Sequel migrations, using a DSL: # # Sequel.migration do # up do # create_table(:artists) do # primary_key :id # String :name # end # end # # down do # drop_table(:artists) # end # end # # Designed to be used with the +Migrator+ class, part of the +migration+ extension. def self.migration(&block) MigrationDSL.create(&block) end # The +Migrator+ class performs migrations based on migration files in a # specified directory. The migration files should be named using the # following pattern: # # _.rb # # For example, the following files are considered migration files: # # 001_create_sessions.rb # 002_add_data_column.rb # # You can also use timestamps as version numbers: # # 1273253850_create_sessions.rb # 1273257248_add_data_column.rb # # If any migration filenames use timestamps as version numbers, Sequel # uses the +TimestampMigrator+ to migrate, otherwise it uses the +IntegerMigrator+. # The +TimestampMigrator+ can handle migrations that are run out of order # as well as migrations with the same timestamp, # while the +IntegerMigrator+ is more strict and raises exceptions for missing # or duplicate migration files. # # The migration files should contain either one +Migration+ # subclass or one <tt>Sequel.migration</tt> call. # # Migrations are generally run via the sequel command line tool, # using the -m and -M switches. The -m switch specifies the migration # directory, and the -M switch specifies the version to which to migrate. # # You can apply migrations using the Migrator API, as well (this is necessary # if you want to specify the version from which to migrate in addition to the version # to which to migrate). 
# To apply a migrator, the +apply+ method must be invoked with the database # instance, the directory of migration files and the target version. If # no current version is supplied, it is read from the database. The migrator # automatically creates a table (schema_info for integer migrations and # schema_migrations for timestamped migrations) in the database to keep track # of the current migration version. If no migration version is stored in the # database, the version is considered to be 0. If no target version is # specified, the database is migrated to the latest version available in the # migration directory. # # For example, to migrate the database to the latest version: # # Sequel::Migrator.apply(DB, '.') # # For example, to migrate the database all the way down: # # Sequel::Migrator.apply(DB, '.', 0) # # For example, to migrate the database to version 4: # # Sequel::Migrator.apply(DB, '.', 4) # # To migrate the database from version 1 to version 5: # # Sequel::Migrator.apply(DB, '.', 5, 1) # # Part of the +migration+ extension. class Migrator MIGRATION_FILE_PATTERN = /\A(\d+)_.+\.rb\z/i.freeze MIGRATION_SPLITTER = '_'.freeze MINIMUM_TIMESTAMP = 20000101 # Exception class raised when there is an error with the migrator's # file structure, database, or arguments. class Error < Sequel::Error end # Exception class raised when Migrator.check_current signals that it is # not current. class NotCurrentError < Error end # Wrapper for +run+, maintaining backwards API compatibility def self.apply(db, directory, target = nil, current = nil) run(db, directory, :target => target, :current => current) end # Raise a NotCurrentError unless the migrator is current, takes the same # arguments as #run. def self.check_current(*args) raise(NotCurrentError, 'migrator is not current') unless is_current?(*args) end # Return whether the migrator is current (i.e. it does not need to make # any changes). Takes the same arguments as #run. def self.is_current?(db, directory, opts=OPTS) migrator_class(directory).new(db, directory, opts).is_current? end # Migrates the supplied database using the migration files in the specified directory. Options: # :allow_missing_migration_files :: Don't raise an error if there are missing migration files. # :column :: The column in the :table argument storing the migration version (default: :version). # :current :: The current version of the database. If not given, it is retrieved from the database # using the :table and :column options. # :table :: The table containing the schema version (default: :schema_info). # :target :: The target version to which to migrate. If not given, migrates to the maximum version. # # Examples: # Sequel::Migrator.run(DB, "migrations") # Sequel::Migrator.run(DB, "migrations", :target=>15, :current=>10) # Sequel::Migrator.run(DB, "app1/migrations", :column=> :app1_version) # Sequel::Migrator.run(DB, "app2/migrations", :column => :app2_version, :table=>:schema_info2) def self.run(db, directory, opts=OPTS) migrator_class(directory).new(db, directory, opts).run end # Choose the Migrator subclass to use. Uses the TimestampMigrator # if the version number appears to be a unix time integer for a year # after 2005, otherwise uses the IntegerMigrator.
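# For example, a directory containing 001_create_sessions.rb and
# 002_add_data_column.rb selects the IntegerMigrator, while a
# directory containing 1273253850_create_sessions.rb (a version
# greater than MINIMUM_TIMESTAMP) selects the TimestampMigrator.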
def self.migrator_class(directory) if self.equal?(Migrator) Dir.new(directory).each do |file| next unless MIGRATION_FILE_PATTERN.match(file) return TimestampMigrator if file.split(MIGRATION_SPLITTER, 2).first.to_i > MINIMUM_TIMESTAMP end IntegerMigrator else self end end private_class_method :migrator_class # The column to use to hold the migration version number for integer migrations or # filename for timestamp migrations (defaults to :version for integer migrations and # :filename for timestamp migrations) attr_reader :column # The database related to this migrator attr_reader :db # The directory for this migrator's files attr_reader :directory # The dataset for this migrator, representing the +schema_info+ table for integer # migrations and the +schema_migrations+ table for timestamp migrations attr_reader :ds # All migration files in this migrator's directory attr_reader :files # The table to use to hold the applied migration data (defaults to :schema_info for # integer migrations and :schema_migrations for timestamp migrations) attr_reader :table # The target version for this migrator attr_reader :target # Setup the state for the migrator def initialize(db, directory, opts=OPTS) raise(Error, "Must supply a valid migration path") unless File.directory?(directory) @db = db @directory = directory @allow_missing_migration_files = opts[:allow_missing_migration_files] @files = get_migration_files schema, table = @db.send(:schema_and_table, opts[:table] || self.class.const_get(:DEFAULT_SCHEMA_TABLE)) @table = schema ? Sequel::SQL::QualifiedIdentifier.new(schema, table) : table @column = opts[:column] || self.class.const_get(:DEFAULT_SCHEMA_COLUMN) @ds = schema_dataset @use_transactions = opts[:use_transactions] end private # If transactions should be used for the migration, yield to the block # inside a transaction. Otherwise, just yield to the block. def checked_transaction(migration, &block) use_trans = if @use_transactions.nil? if migration.use_transactions.nil? @db.supports_transactional_ddl? else migration.use_transactions end else @use_transactions end if use_trans db.transaction(&block) else yield end end # Remove all migration classes. Done by the migrator to ensure that # the correct migration classes are picked up. def remove_migration_classes # Remove class definitions Migration.descendants.each do |c| Object.send(:remove_const, c.to_s) rescue nil end Migration.descendants.clear # remove any defined migration classes end # Return the integer migration version based on the filename. def migration_version_from_file(filename) filename.split(MIGRATION_SPLITTER, 2).first.to_i end end # The default migrator, recommended in most cases. Uses a simple incrementing # version number starting with 1, where missing or duplicate migration file # versions are not allowed. Part of the +migration+ extension. 
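# For example, with hypothetical migration files 001_a.rb, 002_b.rb,
# and 003_c.rb and a current version of 1, running with :target=>3
# applies the up actions of 002_b.rb and then 003_c.rb in order,
# while running with :target=>0 applies the down action of 001_a.rb.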
class IntegerMigrator < Migrator DEFAULT_SCHEMA_COLUMN = :version DEFAULT_SCHEMA_TABLE = :schema_info Error = Migrator::Error # The current version for this migrator attr_reader :current # The direction of the migrator, either :up or :down attr_reader :direction # The migrations used by this migrator attr_reader :migrations # Set up all state for the migrator instance def initialize(db, directory, opts=OPTS) super @target = opts[:target] || latest_migration_version @current = opts[:current] || current_migration_version raise(Error, "No current version available") unless current raise(Error, "No target version available, probably because no migration files found or filenames don't follow the migration filename convention") unless target @direction = current < target ? :up : :down @migrations = get_migrations end # The integer migrator is current if the current version is the same as the target version. def is_current? current_migration_version == target end # Apply all migrations on the database def run migrations.zip(version_numbers).each do |m, v| t = Time.now lv = up? ? v : v + 1 db.log_info("Begin applying migration version #{lv}, direction: #{direction}") checked_transaction(m) do m.apply(db, direction) set_migration_version(v) end db.log_info("Finished applying migration version #{lv}, direction: #{direction}, took #{sprintf('%0.6f', Time.now - t)} seconds") end target end private # Gets the current migration version stored in the database. If no version # number is stored, 0 is returned. def current_migration_version ds.get(column) || 0 end # Returns any found migration files in the supplied directory. def get_migration_files files = [] Dir.new(directory).each do |file| next unless MIGRATION_FILE_PATTERN.match(file) version = migration_version_from_file(file) if version >= 20000101 raise Migrator::Error, "Migration number too large, must use TimestampMigrator: #{file}" end raise(Error, "Duplicate migration version: #{version}") if files[version] files[version] = File.join(directory, file) end 1.upto(files.length - 1){|i| raise(Error, "Missing migration version: #{i}") unless files[i]} unless @allow_missing_migration_files files end # Returns a list of migration classes filtered for the migration range and # ordered according to the migration direction. def get_migrations remove_migration_classes # load migration files files[up? ? (current + 1)..target : (target + 1)..current].compact.each{|f| load(f)} # get migration classes classes = Migration.descendants up? ? classes : classes.reverse end # Returns the latest version available in the specified directory. def latest_migration_version l = files.last l ? migration_version_from_file(File.basename(l)) : nil end # Returns the dataset for the schema_info table. If no such table # exists, it is automatically created. def schema_dataset c = column ds = db.from(table) db.create_table?(table){Integer c, :default=>0, :null=>false} unless ds.columns.include?(c) db.alter_table(table){add_column c, Integer, :default=>0, :null=>false} end ds.insert(c=>0) if ds.empty? raise(Error, "More than 1 row in migrator table") if ds.count > 1 ds end # Sets the current migration version stored in the database. def set_migration_version(version) ds.update(column=>version) end # Whether or not this is an up migration def up? direction == :up end # An array of numbers corresponding to the migrations, # so that each number in the array is the migration version # that will be in effect after the migration is run. def version_numbers up? ?
      up? ? ((current+1)..target).to_a : (target..(current - 1)).to_a.reverse
    end
  end

  # The migrator used if any migration file version appears to be a timestamp.
  # Stores filenames of migration files, and can figure out which migrations
  # have not been applied and apply them, even if earlier migrations are added
  # after later migrations. If you plan to do that, the responsibility is on
  # you to make sure the migrations don't conflict. Part of the +migration+ extension.
  class TimestampMigrator < Migrator
    DEFAULT_SCHEMA_COLUMN = :filename
    DEFAULT_SCHEMA_TABLE = :schema_migrations

    Error = Migrator::Error

    # Array of strings of applied migration filenames
    attr_reader :applied_migrations

    # Get tuples of migrations, filenames, and actions for each migration
    attr_reader :migration_tuples

    # Set up all state for the migrator instance
    def initialize(db, directory, opts=OPTS)
      super
      @target = opts[:target]
      @applied_migrations = get_applied_migrations
      @migration_tuples = get_migration_tuples
    end

    # The timestamp migrator is current if there are no migrations to apply
    # in either direction.
    def is_current?
      migration_tuples.empty?
    end

    # Apply all migration tuples on the database
    def run
      migration_tuples.each do |m, f, direction|
        t = Time.now
        db.log_info("Begin applying migration #{f}, direction: #{direction}")
        checked_transaction(m) do
          m.apply(db, direction)
          fi = f.downcase
          direction == :up ? ds.insert(column=>fi) : ds.filter(column=>fi).delete
        end
        db.log_info("Finished applying migration #{f}, direction: #{direction}, took #{sprintf('%0.6f', Time.now - t)} seconds")
      end
      nil
    end

    private

    # Convert the schema_info table to the new schema_migrations table format,
    # using the version of the schema_info table and the current migration files.
    def convert_from_schema_info
      v = db[IntegerMigrator::DEFAULT_SCHEMA_TABLE].get(IntegerMigrator::DEFAULT_SCHEMA_COLUMN)
      ds = db.from(table)
      files.each do |path|
        f = File.basename(path)
        if migration_version_from_file(f) <= v
          ds.insert(column=>f)
        end
      end
    end

    # Returns filenames of all applied migrations
    def get_applied_migrations
      am = ds.select_order_map(column)
      missing_migration_files = am - files.map{|f| File.basename(f).downcase}
      raise(Error, "Applied migration files not in file system: #{missing_migration_files.join(', ')}") if missing_migration_files.length > 0 && !@allow_missing_migration_files
      am
    end

    # Returns any migration files found in the migrator's directory.
    def get_migration_files
      files = []
      Dir.new(directory).each do |file|
        next unless MIGRATION_FILE_PATTERN.match(file)
        files << File.join(directory, file)
      end
      files.sort_by{|f| MIGRATION_FILE_PATTERN.match(File.basename(f))[1].to_i}
    end

    # Returns tuples of migration, filename, and direction
    def get_migration_tuples
      remove_migration_classes
      up_mts = []
      down_mts = []
      ms = Migration.descendants
      files.each do |path|
        f = File.basename(path)
        fi = f.downcase
        if target
          if migration_version_from_file(f) > target
            if applied_migrations.include?(fi)
              load(path)
              down_mts << [ms.last, f, :down]
            end
          elsif !applied_migrations.include?(fi)
            load(path)
            up_mts << [ms.last, f, :up]
          end
        elsif !applied_migrations.include?(fi)
          load(path)
          up_mts << [ms.last, f, :up]
        end
      end
      up_mts + down_mts.reverse
    end

    # Returns the dataset for the schema_migrations table. If no such table
    # exists, it is automatically created.
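    # The table (+schema_migrations+ by default) stores one row per applied
    # migration, with the filename held in a single String primary key column.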
    def schema_dataset
      c = column
      ds = db.from(table)
      if !db.table_exists?(table)
        db.create_table(table){String c, :primary_key=>true}
        if db.table_exists?(:schema_info) and vha = db[:schema_info].all and vha.length == 1 and vha.first.keys == [:version] and vha.first.values.first.is_a?(Integer)
          convert_from_schema_info
        end
      elsif !ds.columns.include?(c)
        raise(Error, "Migrator table #{table} does not contain column #{c}")
      end
      ds
    end
  end
end

ruby-sequel-4.1.1/lib/sequel/extensions/named_timezones.rb

# Allows the use of named timezones via TZInfo (requires tzinfo).
# Forces the use of DateTime as Sequel's datetime_class, since
# ruby's Time class doesn't support timezones other than local
# and UTC.
#
# This allows you to either pass strings or TZInfo::Timezone
# instances to Sequel.database_timezone=, application_timezone=, and
# typecast_timezone=. If a string is passed, it is converted to a
# TZInfo::Timezone using TZInfo::Timezone.get.
#
# To load the extension:
#
#   Sequel.extension :named_timezones
#
# Let's say you have the database server in New York and the
# application server in Los Angeles. For historical reasons, data
# is stored in local New York time, but the application server only
# services clients in Los Angeles, so you want to use New York
# time in the database and Los Angeles time in the application. This
# is easily done via:
#
#   Sequel.database_timezone = 'America/New_York'
#   Sequel.application_timezone = 'America/Los_Angeles'
#
# Then, before data is stored in the database, it is converted to New
# York time. When data is retrieved from the database, it is
# converted to Los Angeles time.
#
# If you are using database specific timezones, you may want to load
# this extension into the database in order to support similar API:
#
#   DB.extension :named_timezones
#   DB.timezone = 'America/New_York'
#
# Note that typecasting from the database timezone to the application
# timezone when fetching rows is dependent on the database adapter,
# and only works on adapters where Sequel itself does the conversion.
# It should work on mysql, postgres, sqlite, ibmdb, and jdbc.

require 'tzinfo'

module Sequel
  self.datetime_class = DateTime

  module NamedTimezones
    module DatabaseMethods
      def timezone=(tz)
        super(Sequel.send(:convert_timezone_setter_arg, tz))
      end
    end

    # Handles TZInfo::AmbiguousTime exceptions automatically by providing a
    # proc called with both the datetime value being converted as well as
    # the array of TZInfo::TimezonePeriod results. Example:
    #
    #   Sequel.tzinfo_disambiguator = proc{|datetime, periods| periods.first}
    attr_accessor :tzinfo_disambiguator

    private

    # Assume the given DateTime has a correct time but a wrong timezone. It is
    # currently in UTC timezone, but it should be converted to the input_timezone.
    # Keep the time the same but convert the timezone to the input_timezone.
    # Expects the input_timezone to be a TZInfo::Timezone instance.
    def convert_input_datetime_other(v, input_timezone)
      local_offset = input_timezone.period_for_local(v, &tzinfo_disambiguator_for(v)).utc_total_offset_rational
      (v - local_offset).new_offset(local_offset)
    end

    # Convert the given DateTime to use the given output_timezone.
    # Expects the output_timezone to be a TZInfo::Timezone instance.
    def convert_output_datetime_other(v, output_timezone)
      # TZInfo converts times, but expects the given DateTime to have an offset
      # of 0 and always leaves the timezone offset as 0
      v = output_timezone.utc_to_local(v.new_offset(0))
      local_offset = output_timezone.period_for_local(v, &tzinfo_disambiguator_for(v)).utc_total_offset_rational
      # Convert timezone offset from UTC to the offset for the output_timezone
      (v - local_offset).new_offset(local_offset)
    end

    # Returns TZInfo::Timezone instance if given a String.
    def convert_timezone_setter_arg(tz)
      tz.is_a?(String) ? TZInfo::Timezone.get(tz) : super
    end

    # Return a disambiguation proc that provides both the datetime value
    # and the periods, in order to allow the choice of period to depend
    # on the datetime value.
    def tzinfo_disambiguator_for(v)
      if pr = @tzinfo_disambiguator
        proc{|periods| pr.call(v, periods)}
      end
    end
  end

  extend NamedTimezones
  Database.register_extension(:named_timezones, NamedTimezones::DatabaseMethods)
end

ruby-sequel-4.1.1/lib/sequel/extensions/null_dataset.rb

# The null_dataset extension adds the Dataset#nullify method, which
# returns a cloned dataset that will never issue a query to the
# database. It implements the null object pattern for datasets.
#
# To load the extension:
#
#   Sequel.extension :null_dataset
#
# The most common usage is probably in a method that must return
# a dataset, where the method knows the dataset shouldn't return
# anything. With standard Sequel, you'd probably just add a
# WHERE condition that is always false, but that still results
# in a query being sent to the database, and can be overridden
# using #unfiltered, the OR operator, or a UNION.
#
# Usage:
#
#   ds = DB[:items].nullify.where(:a=>:b).select(:c)
#   ds.sql # => "SELECT c FROM items WHERE (a = b)"
#   ds.all # => [] # no query sent to the database
#
# Note that there is one case where a null dataset will send
# a query to the database. If you call #columns on a nulled
# dataset and the dataset doesn't have an already cached
# version of the columns, it will create a new dataset with
# the same options to get the columns.
#
# This extension uses Object#extend at runtime, which can hurt performance.

module Sequel
  class Dataset
    module Nullifiable
      # Return a cloned nullified dataset.
      def nullify
        clone.nullify!
      end

      # Nullify the current dataset
      def nullify!
        extend NullDataset
      end
    end

    module NullDataset
      # Create a new dataset from the dataset (which won't
      # be nulled) to get the columns if they aren't already cached.
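      # (This is the one case, noted in the extension documentation above,
      # where a nulled dataset may still send a query to the database.)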
      def columns
        @columns ||= db.dataset.clone(@opts).columns
      end

      # Return 0 without sending a database query.
      def delete
        0
      end

      # Return self without sending a database query, never yielding.
      def each
        self
      end

      # Return nil without sending a database query, never yielding.
      def fetch_rows(sql)
        nil
      end

      # Return nil without sending a database query.
      def insert(*)
        nil
      end

      # Return nil without sending a database query.
      def truncate
        nil
      end

      # Return 0 without sending a database query.
      def update(v=OPTS)
        0
      end

      protected

      # Return nil without sending a database query.
      def _import(columns, values, opts)
        nil
      end

      private

      # Just in case these are called directly by some internal code,
      # make them noops. There's nothing we can do if the db
      # is accessed directly to make a change, though.
      (%w'_ddl _dui _insert' << '').each do |m|
        class_eval("private; def execute#{m}(sql, opts=OPTS) end", __FILE__, __LINE__)
      end
    end
  end

  Dataset.register_extension(:null_dataset, Dataset::Nullifiable)
end

ruby-sequel-4.1.1/lib/sequel/extensions/pagination.rb

# The pagination extension adds the Sequel::Dataset#paginate and #each_page methods,
# which return paginated (limited and offset) datasets with some helpful methods
# that make creating a paginated display easier.
#
# This extension uses Object#extend at runtime, which can hurt performance.
#
# You can load this extension into specific datasets:
#
#   ds = DB[:table]
#   ds.extension(:pagination)
#
# Or you can load it into all of a database's datasets, which
# is probably the desired behavior if you are using this extension:
#
#   DB.extension(:pagination)

module Sequel
  module DatasetPagination
    # Returns a paginated dataset. The returned dataset is limited to
    # the page size at the correct offset, and extended with the Pagination
    # module. If a record count is not provided, does a count of total
    # number of records for this dataset.
    def paginate(page_no, page_size, record_count=nil)
      raise(Error, "You cannot paginate a dataset that already has a limit") if @opts[:limit]
      paginated = limit(page_size, (page_no - 1) * page_size)
      paginated.extend(Dataset::Pagination)
      paginated.set_pagination_info(page_no, page_size, record_count || count)
    end

    # Yields a paginated dataset for each page and returns the receiver. Does
    # a count to find the total number of records for this dataset.
    def each_page(page_size)
      raise(Error, "You cannot paginate a dataset that already has a limit") if @opts[:limit]
      record_count = count
      total_pages = (record_count / page_size.to_f).ceil
      (1..total_pages).each{|page_no| yield paginate(page_no, page_size, record_count)}
      self
    end
  end

  class Dataset
    # Holds methods that only relate to paginated datasets. Paginated datasets
    # have pages starting at 1 (page 1 is offset 0, page 2 is offset page_size).
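    # For example (a sketch; the :items table is an assumption):
    #
    #   page = DB[:items].extension(:pagination).paginate(2, 10)
    #   page.sql          # SELECT * FROM items LIMIT 10 OFFSET 10
    #   page.current_page # => 2
    #   page.first_page?  # => false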
    module Pagination
      # The number of records per page (the final page may have fewer than
      # this number of records).
      attr_accessor :page_size

      # The number of pages in the dataset before pagination, of which
      # this paginated dataset is one.
      attr_accessor :page_count

      # The current page of the dataset, starting at 1 and not 0.
      attr_accessor :current_page

      # The total number of records in the dataset before pagination.
      attr_accessor :pagination_record_count

      # Returns the record range for the current page
      def current_page_record_range
        return (0..0) if @current_page > @page_count

        a = 1 + (@current_page - 1) * @page_size
        b = a + @page_size - 1
        b = @pagination_record_count if b > @pagination_record_count
        a..b
      end

      # Returns the number of records in the current page
      def current_page_record_count
        return 0 if @current_page > @page_count

        a = 1 + (@current_page - 1) * @page_size
        b = a + @page_size - 1
        b = @pagination_record_count if b > @pagination_record_count
        b - a + 1
      end

      # Returns true if the current page is the first page
      def first_page?
        @current_page == 1
      end

      # Returns true if the current page is the last page
      def last_page?
        @current_page == @page_count
      end

      # Returns the next page number or nil if the current page is the last page
      def next_page
        current_page < page_count ? (current_page + 1) : nil
      end

      # Returns the page range
      def page_range
        1..page_count
      end

      # Returns the previous page number or nil if the current page is the first
      def prev_page
        current_page > 1 ? (current_page - 1) : nil
      end

      # Sets the pagination info for this paginated dataset, and returns self.
      def set_pagination_info(page_no, page_size, record_count)
        @current_page = page_no
        @page_size = page_size
        @pagination_record_count = record_count
        @page_count = (record_count / page_size.to_f).ceil
        self
      end
    end
  end

  Dataset.register_extension(:pagination, DatasetPagination)
end

ruby-sequel-4.1.1/lib/sequel/extensions/pg_array.rb

# The pg_array extension adds support for Sequel to handle
# PostgreSQL's array types.
#
# This extension integrates with Sequel's native postgres adapter, so
# that when array fields are retrieved, they are parsed and returned
# as instances of Sequel::Postgres::PGArray. PGArray is
# a DelegateClass of Array, so it mostly acts like an array, but not
# completely (is_a?(Array) is false). If you want the actual array,
# you can call PGArray#to_a. This is done so that Sequel does not
# treat a PGArray like an Array by default, which would cause issues.
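#
# For example:
#
#   a = Sequel.pg_array([1, 2, 3])
#   a.is_a?(Array) # => false
#   a.to_a         # => [1, 2, 3]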
#
# In addition to the parsers, this extension comes with literalizers
# for PGArray using the standard Sequel literalization callbacks, so
# they work on all adapters.
#
# To turn an existing Array into a PGArray:
#
#   Sequel.pg_array(array)
#
# If you have loaded the {core_extensions extension}[link:files/doc/core_extensions_rdoc.html],
# or you have loaded the {core_refinements extension}[link:files/doc/core_refinements_rdoc.html]
# and have activated refinements for the file, you can also use Array#pg_array:
#
#   array.pg_array
#
# You can also provide a type, though in many cases it isn't necessary:
#
#   Sequel.pg_array(array, :varchar) # or :integer, :"double precision", etc.
#   array.pg_array(:varchar) # or :integer, :"double precision", etc.
#
# So if you want to insert an array into an integer[] database column:
#
#   DB[:table].insert(:column=>Sequel.pg_array([1, 2, 3]))
#
# To use this extension, first load it into your Sequel::Database instance:
#
#   DB.extension :pg_array
#
# If you are not using the native postgres adapter and are using array
# types as model column values you probably should use the
# pg_typecast_on_load plugin in the model, and set it to typecast the
# array column(s) on load.
#
# This extension by default includes handlers for array types for
# all scalar types that the native postgres adapter handles. It
# also makes it easy to add support for other array types. In
# general, you just need to make sure that the scalar type is
# handled and has the appropriate converter installed in
# Sequel::Postgres::PG_TYPES under the appropriate type OID.
# Then you can call
# Sequel::Postgres::PGArray::DatabaseMethods#register_array_type
# to automatically set up a handler for the array type. So if you
# want to support the foo[] type (assuming the foo type is already
# supported):
#
#   DB.register_array_type('foo')
#
# You can also register array types on a global basis using
# Sequel::Postgres::PGArray.register. In this case, you'll have
# to specify the type oids:
#
#   Sequel::Postgres::PGArray.register('foo', :oid=>4321, :scalar_oid=>1234)
#
# Both Sequel::Postgres::PGArray::DatabaseMethods#register_array_type
# and Sequel::Postgres::PGArray.register support many options to
# customize the array type handling. See the Sequel::Postgres::PGArray.register
# method documentation.
#
# If you want an easy way to call PostgreSQL array functions and
# operators, look into the pg_array_ops extension.
#
# This extension requires both the json and delegate libraries.
#
# == Additional License
#
# PGArray::Parser code was translated from Javascript code in the
# node-postgres project and has the following additional license:
#
# Copyright (c) 2010 Brian Carlson (brian.m.carlson@gmail.com)
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject
# to the following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT.
# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

require 'delegate'
require 'json'
Sequel.require 'adapters/utils/pg_types'

module Sequel
  module Postgres
    # Represents a PostgreSQL array column value.
    class PGArray < DelegateClass(Array)
      include Sequel::SQL::AliasMethods

      ARRAY = "ARRAY".freeze
      DOUBLE_COLON = '::'.freeze
      EMPTY_BRACKET = '[]'.freeze
      OPEN_BRACKET = '['.freeze
      CLOSE_BRACKET = ']'.freeze
      COMMA = ','.freeze
      BACKSLASH = '\\'.freeze
      EMPTY_STRING = ''.freeze
      OPEN_BRACE = '{'.freeze
      CLOSE_BRACE = '}'.freeze
      NULL = 'NULL'.freeze
      QUOTE = '"'.freeze

      # Global hash of database array type name strings to symbols (e.g. 'double precision' => :float),
      # used by the schema parsing for array types registered globally.
      ARRAY_TYPES = {}

      # Registers an array type that the extension should handle. Makes a Database instance that
      # has been extended with DatabaseMethods recognize the array type given and set up the
      # appropriate typecasting. Also sets up automatic typecasting for the native postgres
      # adapter, so that on retrieval, the values are automatically converted to PGArray instances.
      # The db_type argument should be the exact database type used (as returned by the PostgreSQL
      # format_type database function). Accepts the following options:
      #
      # :array_type :: The type to automatically cast the array to when literalizing the array.
      #                Usually the same as db_type.
      # :converter :: A callable object (e.g. Proc), that is called with each element of the array
      #               (usually a string), and should return the appropriate typecasted object.
      # :oid :: The PostgreSQL OID for the array type. This is used by the Sequel postgres adapter
      #         to set up automatic type conversion on retrieval from the database.
      # :parser :: Can be set to :json to use the faster JSON-based parser. Note that the JSON-based
      #            parser can only handle integer values correctly. It doesn't handle
      #            full precision for numeric types, and doesn't handle NaN/Infinity values for
      #            floating point types.
      # :scalar_oid :: Should be the PostgreSQL OID for the scalar version of this array type. If given,
      #                automatically sets the :converter option by looking up the scalar conversion
      #                proc.
      # :scalar_typecast :: Should be a symbol indicating the typecast method that should be called on
      #                     each element of the array, when a plain array is passed into a database
      #                     typecast method. For example, for an array of integers, this could be set to
      #                     :integer, so that the typecast_value_integer method is called on all of the
      #                     array elements. Defaults to :type_symbol option.
      # :type_procs :: A hash mapping oids to conversion procs, used for looking up the :scalar_oid
      #                value and setting the :oid value. Defaults to the global Sequel::Postgres::PG_TYPES.
      # :type_symbol :: The base of the schema type symbol for this type. For example, if you provide
      #                 :integer, Sequel will recognize this type as :integer_array during schema parsing.
      #                 Defaults to the db_type argument.
      # :typecast_method :: If given, specifies the :type_symbol option, but additionally causes no
      #                     typecasting method to be created in the database. This should only be used
      #                     to alias existing array types. For example, if there is an array type that can be
      #                     treated just like an integer array, you can do :typecast_method=>:integer.
      # :typecast_method_map :: The map in which to place the database type string to type symbol mapping.
      #                         Defaults to ARRAY_TYPES.
      # :typecast_methods_module :: If given, a module object to add the typecasting method to. Defaults
      #                             to DatabaseMethods.
      #
      # If a block is given, it is treated as the :converter option.
      def self.register(db_type, opts=OPTS, &block)
        db_type = db_type.to_s
        typecast_method = opts[:typecast_method]
        type = (typecast_method || opts[:type_symbol] || db_type).to_sym
        type_procs = opts[:type_procs] || PG_TYPES
        mod = opts[:typecast_methods_module] || DatabaseMethods
        typecast_method_map = opts[:typecast_method_map] || ARRAY_TYPES

        if converter = opts[:converter]
          raise Error, "can't provide both a block and :converter option to register" if block
        else
          converter = block
        end

        if soid = opts[:scalar_oid]
          raise Error, "can't provide both a converter and :scalar_oid option to register" if converter
          raise Error, "no conversion proc for :scalar_oid=>#{soid.inspect}" unless converter = type_procs[soid]
        end

        array_type = (opts[:array_type] || db_type).to_s.dup.freeze
        creator = (opts[:parser] == :json ? JSONCreator : Creator).new(array_type, converter)

        typecast_method_map[db_type] = :"#{type}_array"

        define_array_typecast_method(mod, type, creator, opts.fetch(:scalar_typecast, type)) unless typecast_method

        if oid = opts[:oid]
          type_procs[oid] = creator
        end

        nil
      end

      # Define a private array typecasting method in the given module for the given type that uses
      # the creator argument to do the type conversion.
      def self.define_array_typecast_method(mod, type, creator, scalar_typecast)
        mod.class_eval do
          meth = :"typecast_value_#{type}_array"
          scalar_typecast_method = :"typecast_value_#{scalar_typecast}"
          define_method(meth){|v| typecast_value_pg_array(v, creator, scalar_typecast_method)}
          private meth
        end
      end
      private_class_method :define_array_typecast_method

      module DatabaseMethods
        APOS = "'".freeze
        DOUBLE_APOS = "''".freeze
        ESCAPE_RE = /("|\\)/.freeze
        ESCAPE_REPLACEMENT = '\\\\\1'.freeze
        BLOB_RANGE = 1...-1

        # Create the local hash of database type strings to schema type symbols,
        # used for array types local to this database.
        def self.extended(db)
          db.instance_eval do
            @pg_array_schema_types ||= {}
            copy_conversion_procs([1009, 1007, 1016, 1231, 1022, 1000, 1001, 1182, 1183, 1270, 1005, 1028, 1021, 1014, 1015])
            [:string_array, :integer_array, :decimal_array, :float_array, :boolean_array, :blob_array, :date_array, :time_array, :datetime_array].each do |v|
              @schema_type_classes[v] = PGArray
            end
          end

          procs = db.conversion_procs
          procs[1115] = Creator.new("timestamp without time zone", procs[1114])
          procs[1185] = Creator.new("timestamp with time zone", procs[1184])
        end

        # Handle arrays in bound variables
        def bound_variable_arg(arg, conn)
          case arg
          when PGArray
            bound_variable_array(arg.to_a)
          when Array
            bound_variable_array(arg)
          else
            super
          end
        end

        # Register a database specific array type. This can be used to support
        # different array types per Database. Use of this method does not
        # affect global state, unlike PGArray.register. See PGArray.register for
        # possible options.
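        # For example (mirroring the extension header above), to register the
        # hypothetical foo[] type, letting the OIDs be looked up in pg_type:
        #
        #   DB.register_array_type('foo')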
        def register_array_type(db_type, opts=OPTS, &block)
          opts = {:type_procs=>conversion_procs, :typecast_method_map=>@pg_array_schema_types, :typecast_methods_module=>(class << self; self; end)}.merge(opts)
          unless (opts.has_key?(:scalar_oid) || block) && opts.has_key?(:oid)
            array_oid, scalar_oid = from(:pg_type).where(:typname=>db_type.to_s).get([:typarray, :oid])
            opts[:scalar_oid] = scalar_oid unless opts.has_key?(:scalar_oid) || block
            opts[:oid] = array_oid unless opts.has_key?(:oid)
          end
          PGArray.register(db_type, opts, &block)
          @schema_type_classes[:"#{opts[:typecast_method] || opts[:type_symbol] || db_type}_array"] = PGArray
        end

        # Return PGArray if this type matches any supported array type.
        def schema_type_class(type)
          super || (ARRAY_TYPES.each_value{|v| return PGArray if type == v}; nil)
        end

        private

        # Format arrays used in bound variables.
        def bound_variable_array(a)
          case a
          when Array
            "{#{a.map{|i| bound_variable_array(i)}.join(COMMA)}}"
          when Sequel::SQL::Blob
            "\"#{literal(a)[BLOB_RANGE].gsub(DOUBLE_APOS, APOS).gsub(ESCAPE_RE, ESCAPE_REPLACEMENT)}\""
          when Sequel::LiteralString
            a
          when String
            "\"#{a.gsub(ESCAPE_RE, ESCAPE_REPLACEMENT)}\""
          else
            literal(a)
          end
        end

        # Automatically handle array types for the given named types.
        def convert_named_procs_to_procs(named_procs)
          h = super
          unless h.empty?
            from(:pg_type).where(:oid=>h.keys).select_map([:typname, :oid, :typarray]).each do |name, scalar_oid, array_oid|
              register_array_type(name, :type_procs=>h, :oid=>array_oid.to_i, :scalar_oid=>scalar_oid.to_i)
            end
          end
          h
        end

        # Manually override the typecasting for timestamp array types so that
        # they use the database's timezone instead of the global Sequel
        # timezone.
        def get_conversion_procs
          procs = super

          procs[1115] = Creator.new("timestamp without time zone", procs[1114])
          procs[1185] = Creator.new("timestamp with time zone", procs[1184])

          procs
        end

        # Look into both the current database's array schema types and the global
        # array schema types to get the type symbol for the given database type
        # string.
        def pg_array_schema_type(type)
          @pg_array_schema_types[type] || ARRAY_TYPES[type]
        end

        # Make the column type detection handle registered array types.
        def schema_column_type(db_type)
          if (db_type =~ /\A([^(]+)(?:\([^(]+\))?\[\]\z/io) && (type = pg_array_schema_type($1))
            type
          else
            super
          end
        end

        # Given a value to typecast and the type of PGArray subclass:
        # * If given a PGArray with a matching array_type, use it directly.
        # * If given a PGArray with a different array_type, return a PGArray
        #   with the creator's type.
        # * If given an Array, create a new PGArray instance for it. This does not
        #   typecast all members of the array in ruby for performance reasons, but
        #   it will cast the array to the appropriate database type when the array is
        #   literalized.
        # * If given a String, call the parser for the subclass with it.
        def typecast_value_pg_array(value, creator, scalar_typecast_method=nil)
          case value
          when PGArray
            if value.array_type != creator.type
              PGArray.new(value.to_a, creator.type)
            else
              value
            end
          when Array
            if scalar_typecast_method && respond_to?(scalar_typecast_method, true)
              value = Sequel.recursive_map(value, method(scalar_typecast_method))
            end
            PGArray.new(value, creator.type)
          else
            raise Sequel::InvalidValue, "invalid value for array type: #{value.inspect}"
          end
        end
      end

      # PostgreSQL array parser that handles all types of input.
      #
      # This parser is very simple and unoptimized, but should still
      # be O(n) where n is the length of the input string.
      class Parser
        # Current position in the input string.
        attr_reader :pos

        # Set the source for the input, and any converter callable
        # to call with objects to be created. For nested parsers
        # the source may contain text after the end of the current parse,
        # which will be ignored.
        def initialize(source, converter=nil)
          @source = source
          @source_length = source.length
          @converter = converter
          @pos = -1
          @entries = []
          @recorded = ""
          @dimension = 0
        end

        # Return 2 objects, whether the next character in the input
        # was escaped with a backslash, and what the next character is.
        def next_char
          @pos += 1
          if (c = @source[@pos..@pos]) == BACKSLASH
            @pos += 1
            [true, @source[@pos..@pos]]
          else
            [false, c]
          end
        end

        # Add a new character to the buffer of recorded characters.
        def record(c)
          @recorded << c
        end

        # Take the buffer of recorded characters and add it to the array
        # of entries, and use a new buffer for recorded characters.
        def new_entry(include_empty=false)
          if !@recorded.empty? || include_empty
            entry = @recorded
            if entry == NULL && !include_empty
              entry = nil
            elsif @converter
              entry = @converter.call(entry)
            end
            @entries.push(entry)
            @recorded = ""
          end
        end

        # Parse the input character by character, returning an array
        # of parsed (and potentially converted) objects.
        def parse(nested=false)
          # quote sets whether we are inside of a quoted string.
          quote = false
          until @pos >= @source_length
            escaped, char = next_char
            if char == OPEN_BRACE && !quote
              @dimension += 1
              if (@dimension > 1)
                # Multi-dimensional array encountered, use a subparser
                # to parse the next level down.
                subparser = self.class.new(@source[@pos..-1], @converter)
                @entries.push(subparser.parse(true))
                @pos += subparser.pos - 1
              end
            elsif char == CLOSE_BRACE && !quote
              @dimension -= 1
              if (@dimension == 0)
                new_entry
                # Exit early if inside a subparser, since the
                # text after parsing the current level should be
                # ignored as it is handled by the parent parser.
                return @entries if nested
              end
            elsif char == QUOTE && !escaped
              # If already inside the quoted string, this is the
              # ending quote, so add the entry. Otherwise, this
              # is the opening quote, so set the quote flag.
              new_entry(true) if quote
              quote = !quote
            elsif char == COMMA && !quote
              # If not inside a string and a comma occurs, it indicates
              # the end of the entry, so add the entry.
              new_entry
            else
              # Add the character to the recorded character buffer.
              record(char)
            end
          end
          raise Sequel::Error, "array dimensions not balanced" unless @dimension == 0
          @entries
        end
      end unless Sequel::Postgres.respond_to?(:parse_pg_array)

      # Callable object that takes the input string and parses it using Parser.
      class Creator
        # The converter callable that is called on each member of the array
        # to convert it to the correct type.
        attr_reader :converter

        # The database type to set on the PGArray instances returned.
        attr_reader :type

        # Set the type and optional converter callable that will be used.
        def initialize(type, converter=nil)
          @type = type
          @converter = converter
        end

        if Sequel::Postgres.respond_to?(:parse_pg_array)
          # :nocov:
          # Use sequel_pg's C-based parser if it has already been defined.
          def call(string)
            PGArray.new(Sequel::Postgres.parse_pg_array(string, @converter), @type)
          end
          # :nocov:
        else
          # Parse the string using Parser with the appropriate
          # converter, and return a PGArray with the appropriate database
          # type.
          def call(string)
            PGArray.new(Parser.new(string, @converter).parse, @type)
          end
        end
      end

      # Callable object that takes the input string and parses it using
      # a JSON parser. This should be faster than the standard Creator,
      # but only handles integer types correctly.
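      # For example, the input '{1,2,3}' is rewritten to '[1,2,3]' (and
      # NULL to null) before being handed to the JSON parser.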
      class JSONCreator < Creator
        # Character conversion map mapping input strings to JSON replacements
        SUBST = {'{'.freeze=>'['.freeze, '}'.freeze=>']'.freeze, 'NULL'.freeze=>'null'.freeze}

        # Regular expression matching input strings to convert
        SUBST_RE = %r[\{|\}|NULL].freeze

        # Parse the input string by using a gsub to convert non-JSON characters to
        # JSON, running it through a regular JSON parser. If a converter is used, a
        # recursive map of the output is done to make sure that the entries are of
        # the correct type.
        def call(string)
          array = Sequel.parse_json(string.gsub(SUBST_RE){|m| SUBST[m]})
          array = Sequel.recursive_map(array, @converter) if @converter
          PGArray.new(array, @type)
        end
      end

      # The type of this array. May be nil if no type was given. If a type
      # is provided, the array is automatically cast to this type when
      # literalizing. This type is the underlying type, not the array type
      # itself, so for an int4[] database type, it should be :int4 or 'int4'
      attr_accessor :array_type

      # Set the array to delegate to, and a database type.
      def initialize(array, type=nil)
        super(array)
        @array_type = type
      end

      # Append the array SQL to the given sql string.
      # If the receiver has a type, add a cast to the
      # database array type.
      def sql_literal_append(ds, sql)
        sql << ARRAY
        _literal_append(sql, ds, to_a)
        if at = array_type
          sql << DOUBLE_COLON << at.to_s << EMPTY_BRACKET
        end
      end

      private

      # Recursive method that handles multi-dimensional
      # arrays, surrounding each with [] and interspersing
      # entries with ,.
      def _literal_append(sql, ds, array)
        sql << OPEN_BRACKET
        comma = false
        commas = COMMA
        array.each do |i|
          sql << commas if comma
          if i.is_a?(Array)
            _literal_append(sql, ds, i)
          else
            ds.literal_append(sql, i)
          end
          comma = true
        end
        sql << CLOSE_BRACKET
      end

      # Register all array types that this extension handles by default.
      register('text', :oid=>1009, :type_symbol=>:string)
      register('integer', :oid=>1007, :parser=>:json)
      register('bigint', :oid=>1016, :parser=>:json, :scalar_typecast=>:integer)
      register('numeric', :oid=>1231, :scalar_oid=>1700, :type_symbol=>:decimal)
      register('double precision', :oid=>1022, :scalar_oid=>701, :type_symbol=>:float)
      register('boolean', :oid=>1000, :scalar_oid=>16)
      register('bytea', :oid=>1001, :scalar_oid=>17, :type_symbol=>:blob)
      register('date', :oid=>1182, :scalar_oid=>1082)
      register('time without time zone', :oid=>1183, :scalar_oid=>1083, :type_symbol=>:time)
      register('timestamp without time zone', :oid=>1115, :scalar_oid=>1114, :type_symbol=>:datetime)
      register('time with time zone', :oid=>1270, :scalar_oid=>1083, :type_symbol=>:time_timezone, :scalar_typecast=>:time)
      register('timestamp with time zone', :oid=>1185, :scalar_oid=>1184, :type_symbol=>:datetime_timezone, :scalar_typecast=>:datetime)

      register('smallint', :oid=>1005, :parser=>:json, :typecast_method=>:integer)
      register('oid', :oid=>1028, :parser=>:json, :typecast_method=>:integer)
      register('real', :oid=>1021, :scalar_oid=>701, :typecast_method=>:float)
      register('character', :oid=>1014, :array_type=>:text, :typecast_method=>:string)
      register('character varying', :oid=>1015, :typecast_method=>:string)
    end
  end

  module SQL::Builders
    # Return a Postgres::PGArray proxy for the given array and database array type.
    def pg_array(v, array_type=nil)
      case v
      when Postgres::PGArray
        if array_type.nil? || v.array_type == array_type
          v
        else
          Postgres::PGArray.new(v.to_a, array_type)
        end
      when Array
        Postgres::PGArray.new(v, array_type)
      else
        # May not be defined unless the pg_array_ops extension is used
        pg_array_op(v)
      end
    end
  end

  Database.register_extension(:pg_array, Postgres::PGArray::DatabaseMethods)
end

# :nocov:
if Sequel.core_extensions?
  class Array
    # Return a PGArray proxy to the receiver, using a
    # specific database type if given. This is mostly useful
    # as a short cut for creating PGArray objects that didn't
    # come from the database.
    def pg_array(type=nil)
      Sequel::Postgres::PGArray.new(self, type)
    end
  end
end

if defined?(Sequel::CoreRefinements)
  module Sequel::CoreRefinements
    refine Array do
      def pg_array(type=nil)
        Sequel::Postgres::PGArray.new(self, type)
      end
    end
  end
end
# :nocov:

ruby-sequel-4.1.1/lib/sequel/extensions/pg_array_ops.rb

# The pg_array_ops extension adds support to Sequel's DSL to make
# it easier to call PostgreSQL array functions and operators.
#
# To load the extension:
#
#   Sequel.extension :pg_array_ops
#
# The most common usage is passing an expression to Sequel.pg_array_op:
#
#   ia = Sequel.pg_array_op(:int_array_column)
#
# If you have also loaded the pg_array extension, you can use
# Sequel.pg_array as well:
#
#   ia = Sequel.pg_array(:int_array_column)
#
# Also, on most Sequel expression objects, you can call the pg_array
# method:
#
#   ia = Sequel.expr(:int_array_column).pg_array
#
# If you have loaded the {core_extensions extension}[link:files/doc/core_extensions_rdoc.html],
# or you have loaded the {core_refinements extension}[link:files/doc/core_refinements_rdoc.html]
# and have activated refinements for the file, you can also use Symbol#pg_array:
#
#   ia = :int_array_column.pg_array
#
# This creates a Sequel::Postgres::ArrayOp object that can be used
# for easier querying:
#
#   ia[1]     # int_array_column[1]
#   ia[1][2]  # int_array_column[1][2]
#
#   ia.contains(:other_int_array_column)      # @>
#   ia.contained_by(:other_int_array_column)  # <@
#   ia.overlaps(:other_int_array_column)      # &&
#   ia.concat(:other_int_array_column)        # ||
#
#   ia.push(1)    # int_array_column || 1
#   ia.unshift(1) # 1 || int_array_column
#
#   ia.any        # ANY(int_array_column)
#   ia.all        # ALL(int_array_column)
#   ia.dims       # array_dims(int_array_column)
#   ia.length     # array_length(int_array_column, 1)
#   ia.length(2)  # array_length(int_array_column, 2)
#   ia.lower      # array_lower(int_array_column, 1)
#   ia.lower(2)   # array_lower(int_array_column, 2)
#   ia.join       # array_to_string(int_array_column, '', NULL)
#   ia.join(':')  # array_to_string(int_array_column, ':', NULL)
#   ia.join(':', ' ') # array_to_string(int_array_column, ':', ' ')
#   ia.unnest     # unnest(int_array_column)
#
# See the PostgreSQL array function and operator documentation for more
# details on what these functions and operators do.
#
# If you are also using the pg_array extension, you should load it before
# loading this extension. Doing so will allow you to use PGArray#op to get
# an ArrayOp, allowing you to perform array operations on array literals.

module Sequel
  module Postgres
    # The ArrayOp class is a simple container for a single object that
    # defines methods that yield Sequel expression objects representing
    # PostgreSQL array operators and functions.
    #
    # In the method documentation examples, assume that:
    #
    #   array_op = :array.pg_array
    class ArrayOp < Sequel::SQL::Wrapper
      CONCAT = ["(".freeze, " || ".freeze, ")".freeze].freeze
      CONTAINS = ["(".freeze, " @> ".freeze, ")".freeze].freeze
      CONTAINED_BY = ["(".freeze, " <@ ".freeze, ")".freeze].freeze
      OVERLAPS = ["(".freeze, " && ".freeze, ")".freeze].freeze

      # Access a member of the array, returns an SQL::Subscript instance:
      #
      #   array_op[1] # array[1]
      def [](key)
        s = Sequel::SQL::Subscript.new(self, [key])
        s = ArrayOp.new(s) if key.is_a?(Range)
        s
      end

      # Call the ALL function:
      #
      #   array_op.all # ALL(array)
      #
      # Usually used like:
      #
      #   dataset.where(1=>array_op.all)
      #   # WHERE (1 = ALL(array))
      def all
        function(:ALL)
      end

      # Call the ANY function:
      #
      #   array_op.any # ANY(array)
      #
      # Usually used like:
      #
      #   dataset.where(1=>array_op.any)
      #   # WHERE (1 = ANY(array))
      def any
        function(:ANY)
      end

      # Use the contains (@>) operator:
      #
      #   array_op.contains(:a) # (array @> a)
      def contains(other)
        bool_op(CONTAINS, wrap_array(other))
      end

      # Use the contained by (<@) operator:
      #
      #   array_op.contained_by(:a) # (array <@ a)
      def contained_by(other)
        bool_op(CONTAINED_BY, wrap_array(other))
      end

      # Call the array_dims method:
      #
      #   array_op.dims # array_dims(array)
      def dims
        function(:array_dims)
      end

      # Convert the array into an hstore using the hstore function.
      # If given an argument, use the two array form:
      #
      #   array_op.hstore          # hstore(array)
      #   array_op.hstore(:array2) # hstore(array, array2)
      def hstore(arg=(no_arg_given=true; nil))
        v = if no_arg_given
          Sequel.function(:hstore, self)
        else
          Sequel.function(:hstore, self, wrap_array(arg))
        end
        if Sequel.respond_to?(:hstore_op)
          v = Sequel.hstore_op(v)
        end
        v
      end

      # Call the array_length method:
      #
      #   array_op.length    # array_length(array, 1)
      #   array_op.length(2) # array_length(array, 2)
      def length(dimension = 1)
        function(:array_length, dimension)
      end

      # Call the array_lower method:
      #
      #   array_op.lower    # array_lower(array, 1)
      #   array_op.lower(2) # array_lower(array, 2)
      def lower(dimension = 1)
        function(:array_lower, dimension)
      end

      # Use the overlaps (&&) operator:
      #
      #   array_op.overlaps(:a) # (array && a)
      def overlaps(other)
        bool_op(OVERLAPS, wrap_array(other))
      end

      # Use the concatenation (||) operator:
      #
      #   array_op.push(:a)   # (array || a)
      #   array_op.concat(:a) # (array || a)
      def push(other)
        array_op(CONCAT, [self, wrap_array(other)])
      end
      alias concat push

      # Return the receiver.
      def pg_array
        self
      end

      # Remove the given element from the array:
      #
      #   array_op.remove(1) # array_remove(array, 1)
      def remove(element)
        ArrayOp.new(function(:array_remove, element))
      end

      # Replace the given element in the array with another
      # element:
      #
      #   array_op.replace(1, 2) # array_replace(array, 1, 2)
      def replace(element, replacement)
        ArrayOp.new(function(:array_replace, element, replacement))
      end

      # Call the array_to_string method:
      #
      #   array_op.join           # array_to_string(array, '', NULL)
      #   array_op.to_string      # array_to_string(array, '', NULL)
      #   array_op.join(":")      # array_to_string(array, ':', NULL)
      #   array_op.join(":", "*") # array_to_string(array, ':', '*')
      def to_string(joiner="", null=nil)
        function(:array_to_string, joiner, null)
      end
      alias join to_string

      # Call the unnest method:
      #
      #   array_op.unnest # unnest(array)
      def unnest
        function(:unnest)
      end

      # Use the concatenation (||) operator, reversing the order:
      #
      #   array_op.unshift(:a) # (a || array)
      def unshift(other)
        array_op(CONCAT, [wrap_array(other), self])
      end

      private

      # Return a placeholder literal with the given str and args, wrapped
      # in an ArrayOp, used by operators that return arrays.
      def array_op(str, args)
        ArrayOp.new(Sequel::SQL::PlaceholderLiteralString.new(str, args))
      end

      # Return a placeholder literal with the given str and args, wrapped
      # in a boolean expression, used by operators that return booleans.
      def bool_op(str, other)
        Sequel::SQL::BooleanExpression.new(:NOOP, Sequel::SQL::PlaceholderLiteralString.new(str, [value, other]))
      end

      # Return a function with the given name, and the receiver as the first
      # argument, with any additional arguments given.
      def function(name, *args)
        SQL::Function.new(name, self, *args)
      end

      # Automatically wrap argument in a PGArray if it is a plain Array.
      # Requires that the pg_array extension has been loaded to work.
      def wrap_array(arg)
        if arg.instance_of?(Array)
          Sequel.pg_array(arg)
        else
          arg
        end
      end
    end

    module ArrayOpMethods
      # Wrap the receiver in an ArrayOp so you can easily use the PostgreSQL
      # array functions and operators with it.
      def pg_array
        ArrayOp.new(self)
      end
    end

    if defined?(PGArray)
      class PGArray
        # Wrap the PGArray instance in an ArrayOp, allowing you to easily use
        # the PostgreSQL array functions and operators with literal arrays.
        def op
          ArrayOp.new(self)
        end
      end
    end
  end

  module SQL::Builders
    # Return the object wrapped in an Postgres::ArrayOp.
    def pg_array_op(v)
      case v
      when Postgres::ArrayOp
        v
      else
        Postgres::ArrayOp.new(v)
      end
    end
  end

  class SQL::GenericExpression
    include Sequel::Postgres::ArrayOpMethods
  end

  class LiteralString
    include Sequel::Postgres::ArrayOpMethods
  end
end

# :nocov:
if Sequel.core_extensions?
  class Symbol
    include Sequel::Postgres::ArrayOpMethods
  end
end

if defined?(Sequel::CoreRefinements)
  module Sequel::CoreRefinements
    refine Symbol do
      include Sequel::Postgres::ArrayOpMethods
    end
  end
end
# :nocov:

ruby-sequel-4.1.1/lib/sequel/extensions/pg_hstore.rb

# The pg_hstore extension adds support for the PostgreSQL hstore type
# to Sequel. hstore is an extension that ships with PostgreSQL, and
# the hstore type stores an arbitrary key-value table, where the keys
# are strings and the values are strings or NULL.
#
# This extension integrates with Sequel's native postgres adapter, so
# that when hstore fields are retrieved, they are parsed and returned
# as instances of Sequel::Postgres::HStore. HStore is
# a DelegateClass of Hash, so it mostly acts like a hash, but not
# completely (is_a?(Hash) is false). If you want the actual hash,
# you can call HStore#to_hash. This is done so that Sequel does not
# treat a HStore like a Hash by default, which would cause issues.
#
# In addition to the parsers, this extension comes with literalizers
# for HStore using the standard Sequel literalization callbacks, so
# they work on all adapters.
#
# To turn an existing Hash into an HStore, use Sequel.hstore:
#
#   Sequel.hstore(hash)
#
# If you have loaded the {core_extensions extension}[link:files/doc/core_extensions_rdoc.html],
# or you have loaded the {core_refinements extension}[link:files/doc/core_refinements_rdoc.html]
# and have activated refinements for the file, you can also use Hash#hstore:
#
#   hash.hstore
#
# Since the hstore type only supports strings, non-string keys and
# values are converted to strings
#
#   {:foo=>1}.hstore.to_hash # {'foo'=>'1'}
#   v = {}.hstore
#   v[:foo] = 1
#   v # {'foo'=>'1'}
#
# However, to make life easier, lookups by key are converted to
# strings (even when accessing the underlying hash directly):
#
#   {'foo'=>'bar'}.hstore[:foo]         # 'bar'
#   {'foo'=>'bar'}.hstore.to_hash[:foo] # 'bar'
#
# HStore instances mostly just delegate to the underlying hash
# instance, so Hash methods that modify the receiver or returned
# modified copies of the receiver may not do string conversion.
# The following methods will handle string conversion, and more
# can be added later if desired:
#
# * \[\]
# * \[\]=
# * assoc (ruby 1.9 only)
# * delete
# * fetch
# * has_key?
# * has_value?
# * include?
# * key (ruby 1.9 only)
# * key?
# * member?
# * merge
# * merge!
# * rassoc (ruby 1.9 only)
# * replace
# * store
# * update
# * value?
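#
# For example:
#
#   h = Sequel.hstore('a'=>'b')
#   h.fetch(:a)    # 'b' (symbol key converted to a string)
#   h.merge(:c=>1) # HStore with 'c'=>'1'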
#
# If you want to insert a hash into an hstore database column:
#
#   DB[:table].insert(:column=>{'foo'=>'bar'}.hstore)
#
# If you would like to use hstore columns in your model objects, you
# probably want to modify the schema parsing/typecasting so that it
# recognizes and correctly handles the hstore columns, which you can
# do by:
#
#   DB.extension :pg_hstore
#
# If you are not using the native postgres adapter, you probably
# also want to use the pg_typecast_on_load plugin in the model, and
# set it to typecast the hstore column(s) on load.
#
# This extension requires the delegate and strscan libraries.

require 'delegate'
require 'strscan'

module Sequel
  module Postgres
    class HStore < DelegateClass(Hash)
      include Sequel::SQL::AliasMethods

      # Parser for PostgreSQL hstore output format.
      class Parser < StringScanner
        QUOTE_RE = /"/.freeze
        KV_SEP_RE = /"\s*=>\s*/.freeze
        NULL_RE = /NULL/.freeze
        SEP_RE = /,\s*/.freeze
        QUOTED_RE = /(\\"|[^"])*/.freeze
        REPLACE_RE = /\\(.)/.freeze
        REPLACE_WITH = '\1'.freeze

        # Parse the output format that PostgreSQL uses for hstore
        # columns. Note that this does not attempt to parse all
        # input formats that PostgreSQL will accept. For instance,
        # it expects all keys and non-NULL values to be quoted.
        #
        # Return the resulting hash of objects. This can be called
        # multiple times, it will cache the parsed hash on the first
        # call and use it for subsequent calls.
        def parse
          return @result if @result
          hash = {}
          while !eos?
            skip(QUOTE_RE)
            k = parse_quoted
            skip(KV_SEP_RE)
            if skip(QUOTE_RE)
              v = parse_quoted
              skip(QUOTE_RE)
            else
              scan(NULL_RE)
              v = nil
            end
            skip(SEP_RE)
            hash[k] = v
          end
          @result = hash
        end

        private

        # Parse and unescape a quoted key/value.
        def parse_quoted
          scan(QUOTED_RE).gsub(REPLACE_RE, REPLACE_WITH)
        end
      end

      module DatabaseMethods
        def self.extended(db)
          db.instance_eval do
            add_named_conversion_procs(conversion_procs, :hstore=>PG_NAMED_TYPES[:hstore])
            @schema_type_classes[:hstore] = HStore
          end
        end

        # Handle hstores in bound variables
        def bound_variable_arg(arg, conn)
          case arg
          when HStore
            arg.unquoted_literal
          when Hash
            HStore.new(arg).unquoted_literal
          else
            super
          end
        end

        private

        # Recognize the hstore database type.
        def schema_column_type(db_type)
          db_type == 'hstore' ? :hstore : super
        end

        # Typecast value correctly to HStore. If already an
        # HStore instance, return as is. If a hash, return
        # an HStore version of it. If a string, assume it is
        # in PostgreSQL output format and parse it using the
        # parser.
        def typecast_value_hstore(value)
          case value
          when HStore
            value
          when Hash
            HStore.new(value)
          else
            raise Sequel::InvalidValue, "invalid value for hstore: #{value.inspect}"
          end
        end
      end

      # Default proc used for all underlying HStore hashes, so that even
      # if you grab the underlying hash, it will still convert non-string
      # keys to strings during lookup.
      DEFAULT_PROC = lambda{|h, k| h[k.to_s] unless k.is_a?(String)}

      QUOTE = '"'.freeze
      COMMA = ",".freeze
      KV_SEP = "=>".freeze
      NULL = "NULL".freeze
      ESCAPE_RE = /("|\\)/.freeze
      ESCAPE_REPLACE = '\\\\\1'.freeze
      HSTORE_CAST = '::hstore'.freeze

      if RUBY_VERSION >= '1.9'
        # Undef 1.9 marshal_{dump,load} methods in the delegate class,
        # so that ruby 1.9 uses the old style _dump/_load methods defined
        # in the delegate class, instead of the marshal_{dump,load} methods
        # in the Hash class.
        undef_method :marshal_load
        undef_method :marshal_dump
      end

      # Use custom marshal loading, since underlying hash uses a default proc.
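      # (Hashes with default procs cannot be dumped by Marshal directly,
      # so the entries are dumped as an array of pairs instead.)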
      def self._load(args)
        new(Hash[Marshal.load(args)])
      end

      # Parse the given string into an HStore, assuming the str is in PostgreSQL
      # hstore output format.
      def self.parse(str)
        new(Parser.new(str).parse)
      end

      # Override methods that accept key argument to convert to string.
      (%w'[] delete has_key? include? key? member?' + Array((%w'assoc' if RUBY_VERSION >= '1.9.0'))).each do |m|
        class_eval("def #{m}(k) super(k.to_s) end", __FILE__, __LINE__)
      end

      # Override methods that accept value argument to convert to string unless nil.
      (%w'has_value? value?' + Array((%w'key rassoc' if RUBY_VERSION >= '1.9.0'))).each do |m|
        class_eval("def #{m}(v) super(convert_value(v)) end", __FILE__, __LINE__)
      end

      # Override methods that accept key and value arguments to convert to string appropriately.
      %w'[]= store'.each do |m|
        class_eval("def #{m}(k, v) super(k.to_s, convert_value(v)) end", __FILE__, __LINE__)
      end

      # Override methods that take hashes to convert the hashes to using strings for keys and
      # values before using them.
      %w'initialize merge! update replace'.each do |m|
        class_eval("def #{m}(h, &block) super(convert_hash(h), &block) end", __FILE__, __LINE__)
      end

      # Use custom marshal dumping, since underlying hash uses a default proc.
      def _dump(*)
        Marshal.dump(to_a)
      end

      # Override to force the key argument to a string.
      def fetch(key, *args, &block)
        super(key.to_s, *args, &block)
      end

      # Convert the input hash to string keys and values before merging,
      # and return a new HStore instance with the merged hash.
      def merge(hash, &block)
        self.class.new(super(convert_hash(hash), &block))
      end

      # Return the underlying hash used by this HStore instance.
      alias to_hash __getobj__

      # Append a literalized version of the hstore to the sql.
      def sql_literal_append(ds, sql)
        ds.literal_append(sql, unquoted_literal)
        sql << HSTORE_CAST
      end

      # Return a string containing the unquoted, unstring-escaped
      # literal version of the hstore. Separated out for use by
      # the bound argument code.
      def unquoted_literal
        str = ''
        comma = false
        commas = COMMA
        quote = QUOTE
        kv_sep = KV_SEP
        null = NULL
        each do |k, v|
          str << commas if comma
          str << quote << escape_value(k) << quote
          str << kv_sep
          if v.nil?
            str << null
          else
            str << quote << escape_value(v) << quote
          end
          comma = true
        end
        str
      end

      private

      # Return a new hash based on the input hash with string
      # keys and string or nil values.
      def convert_hash(h)
        hash = Hash.new(&DEFAULT_PROC)
        h.each{|k,v| hash[k.to_s] = convert_value(v)}
        hash
      end

      # Return value v as a string unless it is already nil.
      def convert_value(v)
        v.to_s unless v.nil?
      end

      # Escape key/value strings when literalizing to
      # correctly handle backslash and quote characters.
      def escape_value(k)
        k.to_s.gsub(ESCAPE_RE, ESCAPE_REPLACE)
      end
    end

    PG_NAMED_TYPES = {} unless defined?(PG_NAMED_TYPES)

    # Associate the named types by default.
    PG_NAMED_TYPES[:hstore] = HStore.method(:parse)
  end

  module SQL::Builders
    # Return a Postgres::HStore proxy for the given hash.
    def hstore(v)
      case v
      when Postgres::HStore
        v
      when Hash
        Postgres::HStore.new(v)
      else
        # May not be defined unless the pg_hstore_ops extension is used
        hstore_op(v)
      end
    end
  end

  Database.register_extension(:pg_hstore, Postgres::HStore::DatabaseMethods)
end

# :nocov:
if Sequel.core_extensions?
  class Hash
    # Create a new HStore using the receiver as the input
    # hash. Note that the HStore created will not use the
    # receiver as the backing store, since it has to
    # modify the hash.
To get the new backing store, use:
    #
    #   hash.hstore.to_hash
    def hstore
      Sequel::Postgres::HStore.new(self)
    end
  end
end

if defined?(Sequel::CoreRefinements)
  module Sequel::CoreRefinements
    refine Hash do
      def hstore
        Sequel::Postgres::HStore.new(self)
      end
    end
  end
end
# :nocov:

ruby-sequel-4.1.1/lib/sequel/extensions/pg_hstore_ops.rb

# The pg_hstore_ops extension adds support to Sequel's DSL to make
# it easier to call PostgreSQL hstore functions and operators.
#
# To load the extension:
#
#   Sequel.extension :pg_hstore_ops
#
# The most common usage is taking an object that represents an SQL
# expression (such as a :symbol), and calling Sequel.hstore_op with it:
#
#   h = Sequel.hstore_op(:hstore_column)
#
# If you have also loaded the pg_hstore extension, you can use
# Sequel.hstore as well:
#
#   h = Sequel.hstore(:hstore_column)
#
# Also, on most Sequel expression objects, you can call the hstore
# method:
#
#   h = Sequel.expr(:hstore_column).hstore
#
# If you have loaded the {core_extensions extension}[link:files/doc/core_extensions_rdoc.html]),
# or you have loaded the {core_refinements extension}[link:files/doc/core_refinements_rdoc.html])
# and have activated refinements for the file, you can also use Symbol#hstore:
#
#   h = :hstore_column.hstore
#
# This creates a Sequel::Postgres::HStoreOp object that can be used
# for easier querying:
#
#   h - 'a'    # hstore_column - CAST('a' AS text)
#   h['a']     # hstore_column -> 'a'
#
#   h.concat(:other_hstore_column)       # ||
#   h.has_key?('a')                      # ?
#   h.contain_all(:array_column)         # ?&
#   h.contain_any(:array_column)         # ?|
#   h.contains(:other_hstore_column)     # @>
#   h.contained_by(:other_hstore_column) # <@
#
#   h.defined        # defined(hstore_column)
#   h.delete('a')    # delete(hstore_column, 'a')
#   h.each           # each(hstore_column)
#   h.keys           # akeys(hstore_column)
#   h.populate(:a)   # populate_record(a, hstore_column)
#   h.record_set(:a) # (a #= hstore_column)
#   h.skeys          # skeys(hstore_column)
#   h.slice(:a)      # slice(hstore_column, a)
#   h.svals          # svals(hstore_column)
#   h.to_array       # hstore_to_array(hstore_column)
#   h.to_matrix      # hstore_to_matrix(hstore_column)
#   h.values         # avals(hstore_column)
#
# See the PostgreSQL hstore function and operator documentation for more
# details on what these functions and operators do.
#
# If you are also using the pg_hstore extension, you should load it before
# loading this extension.  Doing so will allow you to use HStore#op to get
# an HStoreOp, allowing you to perform hstore operations on hstore literals.
module Sequel
  module Postgres
    # The HStoreOp class is a simple container for a single object that
    # defines methods that yield Sequel expression objects representing
    # PostgreSQL hstore operators and functions.
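    #
    # As a brief sketch (hypothetical +users+ table with an hstore
    # +settings+ column), the operators compose into ordinary queries:
    #
    #   h = Sequel.hstore_op(:settings)
    #   DB[:users].where(h.has_key?('theme')).select(h['theme'])
    #   # SELECT (settings -> 'theme') FROM users WHERE (settings ? 'theme')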
# # In the method documentation examples, assume that: # # hstore_op = :hstore.hstore class HStoreOp < Sequel::SQL::Wrapper CONCAT = ["(".freeze, " || ".freeze, ")".freeze].freeze CONTAIN_ALL = ["(".freeze, " ?& ".freeze, ")".freeze].freeze CONTAIN_ANY = ["(".freeze, " ?| ".freeze, ")".freeze].freeze CONTAINS = ["(".freeze, " @> ".freeze, ")".freeze].freeze CONTAINED_BY = ["(".freeze, " <@ ".freeze, ")".freeze].freeze HAS_KEY = ["(".freeze, " ? ".freeze, ")".freeze].freeze LOOKUP = ["(".freeze, " -> ".freeze, ")".freeze].freeze RECORD_SET = ["(".freeze, " #= ".freeze, ")".freeze].freeze # Delete entries from an hstore using the subtraction operator: # # hstore_op - 'a' # (hstore - 'a') def -(other) other = if other.is_a?(String) && !other.is_a?(Sequel::LiteralString) Sequel.cast_string(other) else wrap_input_array(wrap_input_hash(other)) end HStoreOp.new(super) end # Lookup the value for the given key in an hstore: # # hstore_op['a'] # (hstore -> 'a') def [](key) v = Sequel::SQL::PlaceholderLiteralString.new(LOOKUP, [value, wrap_input_array(key)]) if key.is_a?(Array) || (defined?(Sequel::Postgres::PGArray) && key.is_a?(Sequel::Postgres::PGArray)) || (defined?(Sequel::Postgres::ArrayOp) && key.is_a?(Sequel::Postgres::ArrayOp)) wrap_output_array(v) else Sequel::SQL::StringExpression.new(:NOOP, v) end end # Check if the receiver contains all of the keys in the given array: # # hstore_op.contain_all(:a) # (hstore ?& a) def contain_all(other) bool_op(CONTAIN_ALL, wrap_input_array(other)) end # Check if the receiver contains any of the keys in the given array: # # hstore_op.contain_any(:a) # (hstore ?| a) def contain_any(other) bool_op(CONTAIN_ANY, wrap_input_array(other)) end # Check if the receiver contains all entries in the other hstore: # # hstore_op.contains(:h) # (hstore @> h) def contains(other) bool_op(CONTAINS, wrap_input_hash(other)) end # Check if the other hstore contains all entries in the receiver: # # hstore_op.contained_by(:h) # (hstore <@ h) def contained_by(other) bool_op(CONTAINED_BY, wrap_input_hash(other)) end # Check if the receiver contains a non-NULL value for the given key: # # hstore_op.defined('a') # defined(hstore, 'a') def defined(key) Sequel::SQL::BooleanExpression.new(:NOOP, function(:defined, key)) end # Delete the matching entries from the receiver: # # hstore_op.delete('a') # delete(hstore, 'a') def delete(key) HStoreOp.new(function(:delete, wrap_input_array(wrap_input_hash(key)))) end # Transform the receiver into a set of keys and values: # # hstore_op.each # each(hstore) def each function(:each) end # Check if the receiver contains the given key: # # hstore_op.has_key?('a') # (hstore ? 'a') def has_key?(key) bool_op(HAS_KEY, key) end alias include? has_key? alias key? has_key? alias member? has_key? alias exist? has_key? # Return the receiver. 
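# Since the receiver is returned, #hstore is idempotent and safe to
# call on expressions that may already be wrapped, e.g. (a sketch with
# a hypothetical column name):
#
#   h = Sequel.hstore_op(:some_column)
#   h.hstore.equal?(h) # => true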
def hstore self end # Return the keys as a PostgreSQL array: # # hstore_op.keys # akeys(hstore) def keys wrap_output_array(function(:akeys)) end alias akeys keys # Merge a given hstore into the receiver: # # hstore_op.merge(:a) # (hstore || a) def merge(other) HStoreOp.new(Sequel::SQL::PlaceholderLiteralString.new(CONCAT, [self, wrap_input_hash(other)])) end alias concat merge # Create a new record populated with entries from the receiver: # # hstore_op.populate(:a) # populate_record(a, hstore) def populate(record) SQL::Function.new(:populate_record, record, self) end # Update the values in a record using entries in the receiver: # # hstore_op.record_set(:a) # (a #= hstore) def record_set(record) Sequel::SQL::PlaceholderLiteralString.new(RECORD_SET, [record, value]) end # Return the keys as a PostgreSQL set: # # hstore_op.skeys # skeys(hstore) def skeys function(:skeys) end # Return an hstore with only the keys in the given array: # # hstore_op.slice(:a) # slice(hstore, a) def slice(keys) HStoreOp.new(function(:slice, wrap_input_array(keys))) end # Return the values as a PostgreSQL set: # # hstore_op.svals # svals(hstore) def svals function(:svals) end # Return a flattened array of the receiver with alternating # keys and values: # # hstore_op.to_array # hstore_to_array(hstore) def to_array wrap_output_array(function(:hstore_to_array)) end # Return a nested array of the receiver, with arrays of # 2 element (key/value) arrays: # # hstore_op.to_matrix # hstore_to_matrix(hstore) def to_matrix wrap_output_array(function(:hstore_to_matrix)) end # Return the values as a PostgreSQL array: # # hstore_op.values # avals(hstore) def values wrap_output_array(function(:avals)) end alias avals values private # Return a placeholder literal with the given str and args, wrapped # in a boolean expression, used by operators that return booleans. def bool_op(str, other) Sequel::SQL::BooleanExpression.new(:NOOP, Sequel::SQL::PlaceholderLiteralString.new(str, [value, other])) end # Return a function with the given name, and the receiver as the first # argument, with any additional arguments given. def function(name, *args) SQL::Function.new(name, self, *args) end # Wrap argument in a PGArray if it is an array def wrap_input_array(obj) if obj.is_a?(Array) && Sequel.respond_to?(:pg_array) Sequel.pg_array(obj) else obj end end # Wrap argument in an Hstore if it is a hash def wrap_input_hash(obj) if obj.is_a?(Hash) && Sequel.respond_to?(:hstore) Sequel.hstore(obj) else obj end end # Wrap argument in a PGArrayOp if supported def wrap_output_array(obj) if Sequel.respond_to?(:pg_array_op) Sequel.pg_array_op(obj) else obj end end end module HStoreOpMethods # Wrap the receiver in an HStoreOp so you can easily use the PostgreSQL # hstore functions and operators with it. def hstore HStoreOp.new(self) end end if defined?(HStore) class HStore # Wrap the receiver in an HStoreOp so you can easily use the PostgreSQL # hstore functions and operators with it. def op HStoreOp.new(self) end end end end module SQL::Builders # Return the object wrapped in an Postgres::HStoreOp. def hstore_op(v) case v when Postgres::HStoreOp v else Postgres::HStoreOp.new(v) end end end class SQL::GenericExpression include Sequel::Postgres::HStoreOpMethods end class LiteralString include Sequel::Postgres::HStoreOpMethods end end # :nocov: if Sequel.core_extensions? 
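  # With the core extensions enabled, Symbol#hstore below is a shortcut
  # for Sequel.hstore_op, e.g. (hypothetical column name):
  #
  #   :settings.hstore.keys # akeys(settings)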
  class Symbol
    include Sequel::Postgres::HStoreOpMethods
  end
end

if defined?(Sequel::CoreRefinements)
  module Sequel::CoreRefinements
    refine Symbol do
      include Sequel::Postgres::HStoreOpMethods
    end
  end
end
# :nocov:

ruby-sequel-4.1.1/lib/sequel/extensions/pg_inet.rb

# The pg_inet extension adds support for Sequel to handle
# PostgreSQL's inet and cidr types using ruby's IPAddr class.
#
# This extension integrates with Sequel's native postgres adapter, so
# that when inet/cidr fields are retrieved, they are returned as
# IPAddr instances.
#
# To have the Database recognize and correctly handle the inet/cidr
# types, load the extension into the Database:
#
#   DB.extension :pg_inet
#
# If you are not using the native postgres adapter, you probably
# also want to use the pg_typecast_on_load plugin in the model, and
# set it to typecast the inet/cidr column(s) on load.
#
# This extension integrates with the pg_array extension.  If you plan
# to use the inet[] or cidr[] types, load the pg_array extension before
# the pg_inet extension:
#
#   DB.extension :pg_array, :pg_inet
#
# This extension does not add special support for the macaddr
# type.  Ruby doesn't have a stdlib class that represents mac
# addresses, so these will still be returned as strings.  The exception
# to this is that the pg_array extension integration will recognize
# macaddr[] types and return them as arrays of strings.
require 'ipaddr'
Sequel.require 'adapters/utils/pg_types'

module Sequel
  module Postgres
    # Methods enabling Database object integration with the inet/cidr types.
    module InetDatabaseMethods
      # Reset the conversion procs when extending the Database object, so
      # it will pick up the inet/cidr converter.  Also, extend the datasets
      # with support for literalizing the IPAddr types.
      def self.extended(db)
        db.instance_eval do
          extend_datasets(InetDatasetMethods)
          copy_conversion_procs([869, 650, 1041, 651, 1040])
          @schema_type_classes[:ipaddr] = IPAddr
        end
      end

      # Convert an IPAddr arg to a string.  Probably not necessary, but done
      # for safety.
      def bound_variable_arg(arg, conn)
        case arg
        when IPAddr
          "#{arg.to_s}/#{arg.instance_variable_get(:@mask_addr).to_s(2).count('1')}"
        else
          super
        end
      end

      private

      # Handle inet[]/cidr[] types in bound variables.
      def bound_variable_array(a)
        case a
        when IPAddr
          "\"#{a.to_s}/#{a.instance_variable_get(:@mask_addr).to_s(2).count('1')}\""
        else
          super
        end
      end

      # Make the column type detection recognize the inet and cidr types.
      def schema_column_type(db_type)
        case db_type
        when 'inet', 'cidr'
          :ipaddr
        else
          super
        end
      end

      # Typecast the given value to an IPAddr object.
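      # For example, via the public typecast API (a sketch):
      #
      #   DB.typecast_value(:ipaddr, '127.0.0.1')
      #   # => #<IPAddr: IPv4:127.0.0.1/255.255.255.255>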
      def typecast_value_ipaddr(value)
        case value
        when IPAddr
          value
        when String
          IPAddr.new(value)
        else
          raise Sequel::InvalidValue, "invalid value for inet/cidr: #{value.inspect}"
        end
      end
    end

    module InetDatasetMethods
      private

      # Convert IPAddr value to a string and append a literal version
      # of the string to the sql.
      def literal_other_append(sql, value)
        if value.is_a?(IPAddr)
          literal_string_append(sql, "#{value.to_s}/#{value.instance_variable_get(:@mask_addr).to_s(2).count('1')}")
        else
          super
        end
      end
    end

    PG_TYPES[869] = PG_TYPES[650] = IPAddr.method(:new)
    if defined?(PGArray) && PGArray.respond_to?(:register)
      PGArray.register('inet', :oid=>1041, :scalar_oid=>869)
      PGArray.register('cidr', :oid=>651, :scalar_oid=>650)
      PGArray.register('macaddr', :oid=>1040)
    end
  end

  Database.register_extension(:pg_inet, Postgres::InetDatabaseMethods)
end

ruby-sequel-4.1.1/lib/sequel/extensions/pg_interval.rb

# The pg_interval extension adds support for PostgreSQL's interval type.
#
# This extension integrates with Sequel's native postgres adapter, so
# that when interval type values are retrieved, they are parsed and returned
# as instances of ActiveSupport::Duration.
#
# In addition to the parser, this extension adds literalizers for
# ActiveSupport::Duration that use the standard Sequel literalization
# callbacks, so they work on all adapters.
#
# If you would like to use interval columns in your model objects, you
# probably want to modify the typecasting so that it
# recognizes and correctly handles the interval columns, which you can
# do by:
#
#   DB.extension :pg_interval
#
# If you are not using the native postgres adapter, you probably
# also want to use the pg_typecast_on_load plugin in the model, and
# set it to typecast the interval type column(s) on load.
#
# This extension integrates with the pg_array extension.  If you plan
# to use arrays of interval types, load the pg_array extension before the
# pg_interval extension:
#
#   DB.extension :pg_array, :pg_interval
#
# The parser this extension uses requires that IntervalStyle for PostgreSQL
# is set to postgres (the default setting).  If IntervalStyle is changed from
# the default setting, the parser will probably not work.  The parser used is
# very simple, and is only designed to parse PostgreSQL's default output
# format; it is not designed to support all input formats that PostgreSQL
# supports.
require 'active_support/duration'
Sequel.require 'adapters/utils/pg_types'

module Sequel
  module Postgres
    module IntervalDatabaseMethods
      EMPTY_INTERVAL = '0'.freeze
      DURATION_UNITS = [:years, :months, :days, :minutes, :seconds].freeze

      # Return an unquoted string version of the duration object suitable for
      # use as a bound variable.
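      # For example (a sketch, assuming ActiveSupport is loaded; the exact
      # breakdown into units depends on the ActiveSupport version):
      #
      #   IntervalDatabaseMethods.literal_duration(2.days + 1.hour)
      #   # => something like "2 days 3600 seconds "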
def self.literal_duration(duration) h = Hash.new(0) duration.parts.each{|unit, value| h[unit] += value} s = '' DURATION_UNITS.each do |unit| if (v = h[unit]) != 0 s << "#{v.is_a?(Integer) ? v : sprintf('%0.6f', v)} #{unit} " end end if s.empty? EMPTY_INTERVAL else s end end # Creates callable objects that convert strings into ActiveSupport::Duration instances. class Parser # Regexp that parses the full range of PostgreSQL interval type output. PARSER = /\A([+-]?\d+ years?\s?)?([+-]?\d+ mons?\s?)?([+-]?\d+ days?\s?)?(?:(?:([+-])?(\d\d):(\d\d):(\d\d(\.\d+)?))|([+-]?\d+ hours?\s?)?([+-]?\d+ mins?\s?)?([+-]?\d+(\.\d+)? secs?\s?)?)?\z/o # Parse the interval input string into an ActiveSupport::Duration instance. def call(string) raise(InvalidValue, "invalid or unhandled interval format: #{string.inspect}") unless matches = PARSER.match(string) value = 0 parts = [] if v = matches[1] v = v.to_i value += 31557600 * v parts << [:years, v] end if v = matches[2] v = v.to_i value += 2592000 * v parts << [:months, v] end if v = matches[3] v = v.to_i value += 86400 * v parts << [:days, v] end if matches[5] seconds = matches[5].to_i * 3600 + matches[6].to_i * 60 seconds += matches[8] ? matches[7].to_f : matches[7].to_i seconds *= -1 if matches[4] == '-' value += seconds parts << [:seconds, seconds] elsif matches[9] || matches[10] || matches[11] seconds = 0 if v = matches[9] seconds += v.to_i * 3600 end if v = matches[10] seconds += v.to_i * 60 end if v = matches[11] seconds += matches[12] ? v.to_f : v.to_i end value += seconds parts << [:seconds, seconds] end ActiveSupport::Duration.new(value, parts) end end # Single instance of Parser used for parsing, to save on memory (since the parser has no state). PARSER = Parser.new # Reset the conversion procs if using the native postgres adapter, # and extend the datasets to correctly literalize ActiveSupport::Duration values. def self.extended(db) db.instance_eval do extend_datasets(IntervalDatasetMethods) copy_conversion_procs([1186, 1187]) @schema_type_classes[:interval] = ActiveSupport::Duration end end # Handle ActiveSupport::Duration values in bound variables. def bound_variable_arg(arg, conn) case arg when ActiveSupport::Duration IntervalDatabaseMethods.literal_duration(arg) else super end end private # Handle arrays of interval types in bound variables. def bound_variable_array(a) case a when ActiveSupport::Duration "\"#{IntervalDatabaseMethods.literal_duration(a)}\"" else super end end # Typecast value correctly to an ActiveSupport::Duration instance. # If already an ActiveSupport::Duration, return it. # If a numeric argument is given, assume it represents a number # of seconds, and create a new ActiveSupport::Duration instance # representing that number of seconds. # If a String, assume it is in PostgreSQL interval output format # and attempt to parse it. def typecast_value_interval(value) case value when ActiveSupport::Duration value when Numeric ActiveSupport::Duration.new(value, [[:seconds, value]]) when String PARSER.call(value) else raise Sequel::InvalidValue, "invalid value for interval type: #{value.inspect}" end end end module IntervalDatasetMethods CAST_INTERVAL = '::interval'.freeze # Handle literalization of ActiveSupport::Duration objects, treating them as # PostgreSQL intervals. 
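      # For example (a sketch; the seconds breakdown depends on the
      # ActiveSupport version):
      #
      #   DB.literal(3.hours) # => "'10800 seconds '::interval"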
      def literal_other_append(sql, v)
        case v
        when ActiveSupport::Duration
          literal_append(sql, IntervalDatabaseMethods.literal_duration(v))
          sql << CAST_INTERVAL
        else
          super
        end
      end
    end

    PG_TYPES[1186] = Postgres::IntervalDatabaseMethods::PARSER
    if defined?(PGArray) && PGArray.respond_to?(:register)
      PGArray.register('interval', :oid=>1187, :scalar_oid=>1186)
    end
  end

  Database.register_extension(:pg_interval, Postgres::IntervalDatabaseMethods)
end

ruby-sequel-4.1.1/lib/sequel/extensions/pg_json.rb

# The pg_json extension adds support for Sequel to handle
# PostgreSQL's json type.  It is slightly more strict than the
# PostgreSQL json type in that the object returned must be an
# array or object (PostgreSQL's json type considers plain numbers
# and strings as valid).  This is because Sequel relies completely
# on the ruby JSON library for parsing, and ruby's JSON library
# does not accept such values.
#
# This extension integrates with Sequel's native postgres adapter, so
# that when json fields are retrieved, they are parsed and returned
# as instances of Sequel::Postgres::JSONArray or
# Sequel::Postgres::JSONHash.  JSONArray and JSONHash are
# DelegateClasses of Array and Hash, so they mostly act the same, but
# not completely (json_array.is_a?(Array) is false).  If you want
# the actual array for a JSONArray, call JSONArray#to_a.  If you want
# the actual hash for a JSONHash, call JSONHash#to_hash.
# This is done so that Sequel does not treat JSONArray and JSONHash
# like Array and Hash by default, which would cause issues.
#
# To turn an existing Array or Hash into a JSONArray or JSONHash,
# use Sequel.pg_json:
#
#   Sequel.pg_json(array)
#   Sequel.pg_json(hash)
#
# If you have loaded the {core_extensions extension}[link:files/doc/core_extensions_rdoc.html]),
# or you have loaded the {core_refinements extension}[link:files/doc/core_refinements_rdoc.html])
# and have activated refinements for the file, you can also use Array#pg_json and Hash#pg_json:
#
#   array.pg_json
#   hash.pg_json
#
# So if you want to insert an array or hash into a json database column:
#
#   DB[:table].insert(:column=>Sequel.pg_json([1, 2, 3]))
#   DB[:table].insert(:column=>Sequel.pg_json({'a'=>1, 'b'=>2}))
#
# If you would like to use PostgreSQL json columns in your model
# objects, you probably want to modify the schema parsing/typecasting
# so that it recognizes and correctly handles the json type, which
# you can do by:
#
#   DB.extension :pg_json
#
# If you are not using the native postgres adapter, you probably
# also want to use the pg_typecast_on_load plugin in the model, and
# set it to typecast the json column(s) on load.
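#
# A short usage sketch (hypothetical table and column names):
#
#   DB[:posts].insert(:metadata=>Sequel.pg_json('tags'=>['a', 'b']))
#   DB[:posts].get(:metadata)['tags'] # => ['a', 'b']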
# # This extension integrates with the pg_array extension. If you plan # to use the json[] type, load the pg_array extension before the # pg_json extension: # # DB.extension :pg_array, :pg_json # # This extension requires both the json and delegate libraries. require 'delegate' require 'json' Sequel.require 'adapters/utils/pg_types' module Sequel module Postgres CAST_JSON = '::json'.freeze # Class representating PostgreSQL JSON column array values. class JSONArray < DelegateClass(Array) include Sequel::SQL::AliasMethods # Convert the array to a json string, append a # literalized version of the string to the sql, and explicitly # cast the string to json. def sql_literal_append(ds, sql) ds.literal_append(sql, Sequel.object_to_json(self)) sql << CAST_JSON end end # Class representating PostgreSQL JSON column hash/object values. class JSONHash < DelegateClass(Hash) include Sequel::SQL::AliasMethods # Convert the hash to a json string, append a # literalized version of the string to the sql, and explicitly # cast the string to json. def sql_literal_append(ds, sql) ds.literal_append(sql, Sequel.object_to_json(self)) sql << CAST_JSON end # Return the object being delegated to. alias to_hash __getobj__ end # Methods enabling Database object integration with the json type. module JSONDatabaseMethods def self.extended(db) db.instance_eval do copy_conversion_procs([114, 199]) @schema_type_classes[:json] = [JSONHash, JSONArray] end end # Parse JSON data coming from the database. Since PostgreSQL allows # non JSON data in JSON fields (such as plain numbers and strings), # we don't want to raise an exception for that. def self.db_parse_json(s) parse_json(s) rescue Sequel::InvalidValue raise unless s.is_a?(String) parse_json("[#{s}]").first end # Parse the given string as json, returning either a JSONArray # or JSONHash instance, and raising an error if the JSON # parsing does not yield an array or hash. def self.parse_json(s) begin value = Sequel.parse_json(s) rescue Sequel.json_parser_error_class => e raise Sequel.convert_exception_class(e, Sequel::InvalidValue) end case value when Array JSONArray.new(value) when Hash JSONHash.new(value) else raise Sequel::InvalidValue, "unhandled json value: #{value.inspect} (from #{s.inspect})" end end # Handle JSONArray and JSONHash in bound variables def bound_variable_arg(arg, conn) case arg when JSONArray, JSONHash Sequel.object_to_json(arg) else super end end private # Handle json[] types in bound variables. def bound_variable_array(a) case a when JSONHash, JSONArray "\"#{Sequel.object_to_json(a).gsub('"', '\\"')}\"" else super end end # Make the column type detection recognize the json type. def schema_column_type(db_type) case db_type when 'json' :json else super end end # Given a value to typecast to the json column # * If given a JSONArray or JSONHash, just return the value # * If given an Array, return a JSONArray # * If given a Hash, return a JSONHash # * If given a String, parse it as would be done during # database retrieval. 
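      # For example (a sketch using the public typecast API):
      #
      #   DB.typecast_value(:json, '{"a": 1}') # => JSONHash {'a'=>1}
      #   DB.typecast_value(:json, [1, 2])     # => JSONArray [1, 2]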
      def typecast_value_json(value)
        case value
        when JSONArray, JSONHash
          value
        when Array
          JSONArray.new(value)
        when Hash
          JSONHash.new(value)
        when String
          JSONDatabaseMethods.parse_json(value)
        else
          raise Sequel::InvalidValue, "invalid value for json: #{value.inspect}"
        end
      end
    end

    PG_TYPES[114] = JSONDatabaseMethods.method(:db_parse_json)
    if defined?(PGArray) && PGArray.respond_to?(:register)
      PGArray.register('json', :oid=>199, :scalar_oid=>114)
    end
  end

  module SQL::Builders
    # Wrap the array or hash in a Postgres::JSONArray or Postgres::JSONHash.
    def pg_json(v)
      case v
      when Postgres::JSONArray, Postgres::JSONHash
        v
      when Array
        Postgres::JSONArray.new(v)
      when Hash
        Postgres::JSONHash.new(v)
      else
        Sequel.pg_json_op(v)
      end
    end
  end

  Database.register_extension(:pg_json, Postgres::JSONDatabaseMethods)
end

# :nocov:
if Sequel.core_extensions?
  class Array
    # Return a Sequel::Postgres::JSONArray proxy to the receiver.
    # This is mostly useful as a short cut for creating JSONArray
    # objects that didn't come from the database.
    def pg_json
      Sequel::Postgres::JSONArray.new(self)
    end
  end

  class Hash
    # Return a Sequel::Postgres::JSONHash proxy to the receiver.
    # This is mostly useful as a short cut for creating JSONHash
    # objects that didn't come from the database.
    def pg_json
      Sequel::Postgres::JSONHash.new(self)
    end
  end
end

if defined?(Sequel::CoreRefinements)
  module Sequel::CoreRefinements
    refine Array do
      def pg_json
        Sequel::Postgres::JSONArray.new(self)
      end
    end

    refine Hash do
      def pg_json
        Sequel::Postgres::JSONHash.new(self)
      end
    end
  end
end
# :nocov:

ruby-sequel-4.1.1/lib/sequel/extensions/pg_json_ops.rb

# The pg_json_ops extension adds support to Sequel's DSL to make
# it easier to call PostgreSQL JSON functions and operators (added
# first in PostgreSQL 9.3).
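#
# Since the underlying functions require PostgreSQL 9.3+, a quick
# server-side sanity check (a sketch) may be useful before relying on them:
#
#   DB.server_version >= 90300 # true on PostgreSQL 9.3.0 and later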
# # To load the extension: # # Sequel.extension :pg_json_ops # # The most common usage is passing an expression to Sequel.pg_json_op: # # j = Sequel.pg_json_op(:json_column) # # If you have also loaded the pg_json extension, you can use # Sequel.pg_json as well: # # j = Sequel.pg_json(:json_column) # # Also, on most Sequel expression objects, you can call the pg_json # method: # # j = Sequel.expr(:json_column).pg_json # # If you have loaded the {core_extensions extension}[link:files/doc/core_extensions_rdoc.html]), # or you have loaded the {core_refinements extension}[link:files/doc/core_refinements_rdoc.html]) # and have activated refinements for the file, you can also use Symbol#pg_json: # # j = :json_column.pg_json # # This creates a Sequel::Postgres::JSONOp object that can be used # for easier querying: # # j[1] # (json_column -> 1) # j[%w'a b'] # (json_column #> ARRAY['a','b']) # j.get_text(1) # (json_column ->> 1) # j.get_text(%w'a b') # (json_column #>> ARRAY['a','b']) # j.extract('a', 'b') # json_extract_path(json_column, 'a', 'b') # j.extract_text('a', 'b') # json_extract_path_text(json_column, 'a', 'b') # # j.array_length # json_array_length(json_column) # j.array_elements # json_array_elements(json_column) # j.each # json_each(json_column) # j.each_text # json_each_text(json_column) # j.keys # json_object_keys(json_column) # # j.populate(:a) # json_populate_record(:a, json_column) # j.populate_set(:a) # json_populate_recordset(:a, json_column) # # If you are also using the pg_json extension, you should load it before # loading this extension. Doing so will allow you to use JSONHash#op and # JSONArray#op to get a JSONOp, allowing you to perform json operations # on json literals. module Sequel module Postgres # The JSONOp class is a simple container for a single object that # defines methods that yield Sequel expression objects representing # PostgreSQL json operators and functions. # # In the method documentation examples, assume that: # # json_op = Sequel.pg_json(:json) class JSONOp < Sequel::SQL::Wrapper GET = ["(".freeze, " -> ".freeze, ")".freeze].freeze GET_TEXT = ["(".freeze, " ->> ".freeze, ")".freeze].freeze GET_PATH = ["(".freeze, " #> ".freeze, ")".freeze].freeze GET_PATH_TEXT = ["(".freeze, " #>> ".freeze, ")".freeze].freeze # Get JSON array element or object field as json. If an array is given, # gets the object at the specified path. # # json_op[1] # (json -> 1) # json_op['a'] # (json -> 'a') # json_op[%w'a b'] # (json #> ARRAY['a', 'b']) def [](key) if is_array?(key) json_op(GET_PATH, wrap_array(key)) else json_op(GET, key) end end alias get [] # Returns a set of json values for the elements in the json array. # # json_op.array_elements # json_oarray_elements(json) def array_elements function(:json_array_elements) end # Get the length of the outermost json array. # # json_op.array_length # json_array_length(json) def array_length Sequel::SQL::NumericExpression.new(:NOOP, function(:json_array_length)) end # Returns a set of key and value pairs, where the keys # are text and the values are JSON. # # json_op.each # json_each(json) def each function(:json_each) end # Returns a set of key and value pairs, where the keys # and values are both text. # # json_op.each_text # json_each_text(json) def each_text function(:json_each_text) end # Returns a json value for the object at the given path. 
# # json_op.extract('a') # json_extract_path(json, 'a') # json_op.extract('a', 'b') # json_extract_path(json, 'a', 'b') def extract(*a) JSONOp.new(function(:json_extract_path, *a)) end # Returns a text value for the object at the given path. # # json_op.extract_text('a') # json_extract_path_text(json, 'a') # json_op.extract_text('a', 'b') # json_extract_path_text(json, 'a', 'b') def extract_text(*a) Sequel::SQL::StringExpression.new(:NOOP, function(:json_extract_path_text, *a)) end # Get JSON array element or object field as text. If an array is given, # gets the object at the specified path. # # json_op.get_text(1) # (json ->> 1) # json_op.get_text('a') # (json ->> 'a') # json_op.get_text(%w'a b') # (json #>> ARRAY['a', 'b']) def get_text(key) if is_array?(key) json_op(GET_PATH_TEXT, wrap_array(key)) else json_op(GET_TEXT, key) end end # Returns a set of keys AS text in the json object. # # json_op.keys # json_object_keys(json) def keys function(:json_object_keys) end # Return the receiver, since it is already a JSONOp. def pg_json self end # Expands the given argument using the columns in the json. # # json_op.populate(arg) # json_populate_record(arg, json) def populate(arg) SQL::Function.new(:json_populate_record, arg, self) end # Expands the given argument using the columns in the json. # # json_op.populate_set(arg) # json_populate_recordset(arg, json) def populate_set(arg) SQL::Function.new(:json_populate_recordset, arg, self) end private # Return a placeholder literal with the given str and args, wrapped # in an JSONOp, used by operators that return json. def json_op(str, args) JSONOp.new(Sequel::SQL::PlaceholderLiteralString.new(str, [self, args])) end # Return a function with the given name, and the receiver as the first # argument, with any additional arguments given. def function(name, *args) SQL::Function.new(name, self, *args) end # Whether the given object represents an array in PostgreSQL. def is_array?(a) a.is_a?(Array) || (defined?(PGArray) && a.is_a?(PGArray)) || (defined?(ArrayOp) && a.is_a?(ArrayOp)) end # Automatically wrap argument in a PGArray if it is a plain Array. # Requires that the pg_array extension has been loaded to work. def wrap_array(arg) if arg.instance_of?(Array) && Sequel.respond_to?(:pg_array) Sequel.pg_array(arg) else arg end end end module JSONOpMethods # Wrap the receiver in an JSONOp so you can easily use the PostgreSQL # json functions and operators with it. def pg_json JSONOp.new(self) end end if defined?(JSONArray) class JSONArray # Wrap the JSONHash instance in an JSONOp, allowing you to easily use # the PostgreSQL json functions and operators with literal jsons. def op JSONOp.new(self) end end class JSONHash # Wrap the JSONHash instance in an JSONOp, allowing you to easily use # the PostgreSQL json functions and operators with literal jsons. def op JSONOp.new(self) end end end end module SQL::Builders # Return the object wrapped in an Postgres::JSONOp. def pg_json_op(v) case v when Postgres::JSONOp v else Postgres::JSONOp.new(v) end end end class SQL::GenericExpression include Sequel::Postgres::JSONOpMethods end class LiteralString include Sequel::Postgres::JSONOpMethods end end # :nocov: if Sequel.core_extensions? 
  class Symbol
    include Sequel::Postgres::JSONOpMethods
  end
end

if defined?(Sequel::CoreRefinements)
  module Sequel::CoreRefinements
    refine Symbol do
      include Sequel::Postgres::JSONOpMethods
    end
  end
end
# :nocov:

ruby-sequel-4.1.1/lib/sequel/extensions/pg_range.rb

# The pg_range extension adds support for the PostgreSQL 9.2+ range
# types to Sequel.  PostgreSQL range types are similar to ruby's
# Range class, representing a range of values.  However, they
# are more flexible than ruby's ranges, allowing exclusive beginnings
# and endings (ruby's range only allows exclusive endings), and
# unbounded beginnings and endings (which ruby's range does not
# support).
#
# This extension integrates with Sequel's native postgres adapter, so
# that when range type values are retrieved, they are parsed and returned
# as instances of Sequel::Postgres::PGRange.  PGRange mostly acts
# like a Range, but it's not a Range, as not all PostgreSQL range
# type values would be valid ruby ranges.  If the range type value
# you are using is a valid ruby range, you can call PGRange#to_range
# to get a Range.  However, if you call PGRange#to_range on a range
# type value that uses features ruby's Range does not support, an
# exception will be raised.
#
# In addition to the parser, this extension comes with literalizers
# for both PGRange and Range that use the standard Sequel literalization
# callbacks, so they work on all adapters.
#
# To turn an existing Range into a PGRange, use Sequel.pg_range:
#
#   Sequel.pg_range(range)
#
# If you have loaded the {core_extensions extension}[link:files/doc/core_extensions_rdoc.html]),
# or you have loaded the {core_refinements extension}[link:files/doc/core_refinements_rdoc.html])
# and have activated refinements for the file, you can also use Range#pg_range:
#
#   range.pg_range
#
# You may want to specify a specific range type:
#
#   Sequel.pg_range(range, :daterange)
#   range.pg_range(:daterange)
#
# If you specify the range database type, Sequel will automatically cast
# the value to that type when literalizing.
#
# If you would like to use range columns in your model objects, you
# probably want to modify the schema parsing/typecasting so that it
# recognizes and correctly handles the range type columns, which you can
# do by:
#
#   DB.extension :pg_range
#
# If you are not using the native postgres adapter, you probably
# also want to use the pg_typecast_on_load plugin in the model, and
# set it to typecast the range type column(s) on load.
#
# This extension integrates with the pg_array extension.
If you plan # to use arrays of range types, load the pg_array extension before the # pg_range extension: # # DB.extension :pg_array, :pg_range Sequel.require 'adapters/utils/pg_types' module Sequel module Postgres class PGRange include Sequel::SQL::AliasMethods # Map of string database type names to type symbols (e.g. 'int4range' => :int4range), # used in the schema parsing. RANGE_TYPES = {} EMPTY = 'empty'.freeze EMPTY_STRING = ''.freeze QUOTED_EMPTY_STRING = '""'.freeze OPEN_PAREN = "(".freeze CLOSE_PAREN = ")".freeze OPEN_BRACKET = "[".freeze CLOSE_BRACKET = "]".freeze ESCAPE_RE = /("|,|\\|\[|\]|\(|\))/.freeze ESCAPE_REPLACE = '\\\\\1'.freeze CAST = '::'.freeze # Registers a range type that the extension should handle. Makes a Database instance that # has been extended with DatabaseMethods recognize the range type given and set up the # appropriate typecasting. Also sets up automatic typecasting for the native postgres # adapter, so that on retrieval, the values are automatically converted to PGRange instances. # The db_type argument should be the name of the range type. Accepts the following options: # # :converter :: A callable object (e.g. Proc), that is called with the start or end of the range # (usually a string), and should return the appropriate typecasted object. # :oid :: The PostgreSQL OID for the range type. This is used by the Sequel postgres adapter # to set up automatic type conversion on retrieval from the database. # :subtype_oid :: Should be the PostgreSQL OID for the range's subtype. If given, # automatically sets the :converter option by looking for scalar conversion # proc. # # If a block is given, it is treated as the :converter option. def self.register(db_type, opts=OPTS, &block) db_type = db_type.to_s.dup.freeze if converter = opts[:converter] raise Error, "can't provide both a block and :converter option to register" if block else converter = block end if soid = opts[:subtype_oid] raise Error, "can't provide both a converter and :scalar_oid option to register" if converter raise Error, "no conversion proc for :scalar_oid=>#{soid.inspect} in PG_TYPES" unless converter = PG_TYPES[soid] end parser = Parser.new(db_type, converter) RANGE_TYPES[db_type] = db_type.to_sym DatabaseMethods.define_range_typecast_method(db_type, parser) if oid = opts[:oid] Sequel::Postgres::PG_TYPES[oid] = parser end nil end # Creates callable objects that convert strings into PGRange instances. class Parser # Regexp that parses the full range of PostgreSQL range type output, # except for empty ranges. PARSER = /\A(\[|\()("((?:\\"|[^"])*)"|[^"]*),("((?:\\"|[^"])*)"|[^"]*)(\]|\))\z/o REPLACE_RE = /\\(.)/.freeze REPLACE_WITH = '\1'.freeze # The database range type for this parser (e.g. 'int4range'), # automatically setting the db_type for the returned PGRange instances. attr_reader :db_type # A callable object to convert the beginning and ending of the range into # the appropriate ruby type. attr_reader :converter # Set the db_type and converter on initialization. def initialize(db_type, converter=nil) @db_type = db_type.to_s.dup.freeze if db_type @converter = converter end # Parse the range type input string into a PGRange value. def call(string) if string == EMPTY return PGRange.empty(db_type) end raise(InvalidValue, "invalid or unhandled range format: #{string.inspect}") unless matches = PARSER.match(string) exclude_begin = matches[1] == '(' exclude_end = matches[6] == ')' # If the input is quoted, it needs to be unescaped. 
Also, quoted input isn't # checked for emptiness, since the empty quoted string is considered an # element that happens to be the empty string, while an unquoted empty string # is considered unbounded. # # While PostgreSQL allows pure escaping for input (without quoting), it appears # to always use the quoted output form when characters need to be escaped, so # there isn't a need to unescape unquoted output. if beg = matches[3] beg.gsub!(REPLACE_RE, REPLACE_WITH) else beg = matches[2] unless matches[2].empty? end if en = matches[5] en.gsub!(REPLACE_RE, REPLACE_WITH) else en = matches[4] unless matches[4].empty? end if c = converter beg = c.call(beg) if beg en = c.call(en) if en end PGRange.new(beg, en, :exclude_begin=>exclude_begin, :exclude_end=>exclude_end, :db_type=>db_type) end end module DatabaseMethods # Reset the conversion procs if using the native postgres adapter, # and extend the datasets to correctly literalize ruby Range values. def self.extended(db) db.instance_eval do extend_datasets(DatasetMethods) copy_conversion_procs([3904, 3906, 3912, 3926, 3905, 3907, 3913, 3927]) [:int4range, :numrange, :tsrange, :tstzrange, :daterange, :int8range].each do |v| @schema_type_classes[v] = PGRange end end procs = db.conversion_procs procs[3908] = Parser.new("tsrange", procs[1114]) procs[3910] = Parser.new("tstzrange", procs[1184]) if defined?(PGArray::Creator) procs[3909] = PGArray::Creator.new("tsrange", procs[3908]) procs[3911] = PGArray::Creator.new("tstzrange", procs[3910]) end end # Define a private range typecasting method for the given type that uses # the parser argument to do the type conversion. def self.define_range_typecast_method(type, parser) meth = :"typecast_value_#{type}" define_method(meth){|v| typecast_value_pg_range(v, parser)} private meth end # Handle Range and PGRange values in bound variables def bound_variable_arg(arg, conn) case arg when PGRange arg.unquoted_literal(schema_utility_dataset) when Range PGRange.from_range(arg).unquoted_literal(schema_utility_dataset) else super end end private # Handle arrays of range types in bound variables. def bound_variable_array(a) case a when PGRange, Range "\"#{bound_variable_arg(a, nil)}\"" else super end end # Manually override the typecasting for tsrange and tstzrange types so that # they use the database's timezone instead of the global Sequel # timezone. def get_conversion_procs procs = super procs[3908] = Parser.new("tsrange", procs[1114]) procs[3910] = Parser.new("tstzrange", procs[1184]) if defined?(PGArray::Creator) procs[3909] = PGArray::Creator.new("tsrange", procs[3908]) procs[3911] = PGArray::Creator.new("tstzrange", procs[3910]) end procs end # Recognize the registered database range types. def schema_column_type(db_type) if type = RANGE_TYPES[db_type] type else super end end # Typecast value correctly to a PGRange. If already an # PGRange instance with the same db_type, return as is. # If a PGRange with a different subtype, return a new # PGRange with the same values and the expected subtype. # If a Range object, create a PGRange with the given # db_type. If a string, assume it is in PostgreSQL # output format and parse it using the parser. def typecast_value_pg_range(value, parser) case value when PGRange if value.db_type.to_s == parser.db_type value elsif value.empty? 
PGRange.empty(parser.db_type) else PGRange.new(value.begin, value.end, :exclude_begin=>value.exclude_begin?, :exclude_end=>value.exclude_end?, :db_type=>parser.db_type) end when Range PGRange.from_range(value, parser.db_type) when String parser.call(value) else raise Sequel::InvalidValue, "invalid value for range type: #{value.inspect}" end end end module DatasetMethods # Handle literalization of ruby Range objects, treating them as # PostgreSQL ranges. def literal_other_append(sql, v) case v when Range super(sql, Sequel::Postgres::PGRange.from_range(v)) else super end end end include Enumerable # The beginning of the range. If nil, the range has an unbounded beginning. attr_reader :begin # The end of the range. If nil, the range has an unbounded ending. attr_reader :end # The PostgreSQL database type for the range (e.g. 'int4range'). attr_reader :db_type # Create a new PGRange instance using the beginning and ending of the ruby Range, # with the given db_type. def self.from_range(range, db_type=nil) new(range.begin, range.end, :exclude_end=>range.exclude_end?, :db_type=>db_type) end # Create an empty PGRange with the given database type. def self.empty(db_type=nil) new(nil, nil, :empty=>true, :db_type=>db_type) end # Initialize a new PGRange instance. Accepts the following options: # # :db_type :: The PostgreSQL database type for the range. # :empty :: Whether the range is empty (has no points) # :exclude_begin :: Whether the beginning element is excluded from the range. # :exclude_end :: Whether the ending element is excluded from the range. def initialize(beg, en, opts=OPTS) @begin = beg @end = en @empty = !!opts[:empty] @exclude_begin = !!opts[:exclude_begin] @exclude_end = !!opts[:exclude_end] @db_type = opts[:db_type] if @empty raise(Error, 'cannot have an empty range with either a beginning or ending') unless @begin.nil? && @end.nil? && opts[:exclude_begin].nil? && opts[:exclude_end].nil? end end # Delegate to the ruby range object so that the object mostly acts like a range. range_methods = %w'each last first step' range_methods << 'cover?' if RUBY_VERSION >= '1.9' range_methods.each do |m| class_eval("def #{m}(*a, &block) to_range.#{m}(*a, &block) end", __FILE__, __LINE__) end # Consider the receiver equal to other PGRange instances with the # same beginning, ending, exclusions, and database type. Also consider # it equal to Range instances if this PGRange can be converted to a # a Range and those ranges are equal. def eql?(other) case other when PGRange if db_type == other.db_type if empty? other.empty? elsif other.empty? false else [:@begin, :@end, :@exclude_begin, :@exclude_end].all?{|v| instance_variable_get(v) == other.instance_variable_get(v)} end else false end when Range if valid_ruby_range? to_range.eql?(other) else false end else false end end alias == eql? # Allow PGRange values in case statements, where they return true if they # are equal to each other using eql?, or if this PGRange can be converted # to a Range, delegating to that range. def ===(other) if eql?(other) true else if valid_ruby_range? to_range === other else false end end end # Whether this range is empty (has no points). def empty? @empty end # Whether the beginning element is excluded from the range. def exclude_begin? @exclude_begin end # Whether the ending element is excluded from the range. def exclude_end? @exclude_end end # Append a literalize version of the receiver to the sql. 
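# For example (a sketch; daterange is one of the registered types):
#
#   DB.literal(Sequel.pg_range(Date.new(2013, 1, 1)...Date.new(2013, 2, 1), :daterange))
#   # => "'[2013-01-01,2013-02-01)'::daterange"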
def sql_literal_append(ds, sql) ds.literal_append(sql, unquoted_literal(ds)) if s = @db_type sql << CAST << s.to_s end end # Return a ruby Range object for this instance, if one can be created. def to_range return @range if @range raise(Error, "cannot create ruby range for an empty PostgreSQL range") if empty? raise(Error, "cannot create ruby range when PostgreSQL range excludes beginning element") if exclude_begin? raise(Error, "cannot create ruby range when PostgreSQL range has unbounded beginning") unless self.begin raise(Error, "cannot create ruby range when PostgreSQL range has unbounded ending") unless self.end @range = Range.new(self.begin, self.end, exclude_end?) end # Whether or not this PGRange is a valid ruby range. In order to be a valid ruby range, # it must have a beginning and an ending (no unbounded ranges), and it cannot exclude # the beginning element. def valid_ruby_range? !(empty? || exclude_begin? || !self.begin || !self.end) end # Whether the beginning of the range is unbounded. def unbounded_begin? self.begin.nil? && !empty? end # Whether the end of the range is unbounded. def unbounded_end? self.end.nil? && !empty? end # Return a string containing the unescaped version of the range. # Separated out for use by the bound argument code. def unquoted_literal(ds) if empty? EMPTY else "#{exclude_begin? ? OPEN_PAREN : OPEN_BRACKET}#{escape_value(self.begin, ds)},#{escape_value(self.end, ds)}#{exclude_end? ? CLOSE_PAREN : CLOSE_BRACKET}" end end private # Escape common range types. Instead of quoting, just backslash escape all # special characters. def escape_value(k, ds) case k when nil EMPTY_STRING when Date, Time ds.literal(k)[1...-1] when Integer, Float k.to_s when BigDecimal k.to_s('F') when LiteralString k when String if k.empty? QUOTED_EMPTY_STRING else k.gsub(ESCAPE_RE, ESCAPE_REPLACE) end else ds.literal(k).gsub(ESCAPE_RE, ESCAPE_REPLACE) end end end PGRange.register('int4range', :oid=>3904, :subtype_oid=>23) PGRange.register('numrange', :oid=>3906, :subtype_oid=>1700) PGRange.register('tsrange', :oid=>3908, :subtype_oid=>1114) PGRange.register('tstzrange', :oid=>3910, :subtype_oid=>1184) PGRange.register('daterange', :oid=>3912, :subtype_oid=>1082) PGRange.register('int8range', :oid=>3926, :subtype_oid=>20) if defined?(PGArray) && PGArray.respond_to?(:register) PGArray.register('int4range', :oid=>3905, :scalar_oid=>3904, :scalar_typecast=>:int4range) PGArray.register('numrange', :oid=>3907, :scalar_oid=>3906, :scalar_typecast=>:numrange) PGArray.register('tsrange', :oid=>3909, :scalar_oid=>3908, :scalar_typecast=>:tsrange) PGArray.register('tstzrange', :oid=>3911, :scalar_oid=>3910, :scalar_typecast=>:tstzrange) PGArray.register('daterange', :oid=>3913, :scalar_oid=>3912, :scalar_typecast=>:daterange) PGArray.register('int8range', :oid=>3927, :scalar_oid=>3926, :scalar_typecast=>:int8range) end end module SQL::Builders # Convert the object to a Postgres::PGRange. def pg_range(v, db_type=nil) case v when Postgres::PGRange if db_type.nil? || v.db_type == db_type v else Postgres::PGRange.new(v.begin, v.end, :exclude_begin=>v.exclude_begin?, :exclude_end=>v.exclude_end?, :db_type=>db_type) end when Range Postgres::PGRange.from_range(v, db_type) else # May not be defined unless the pg_range_ops extension is used pg_range_op(v) end end end Database.register_extension(:pg_range, Postgres::PGRange::DatabaseMethods) end # :nocov: if Sequel.core_extensions? class Range # Create a new PGRange using the receiver as the input range, # with the given database type. 
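# For example, with the core extensions loaded (a sketch):
#
#   (1..10).pg_range(:int4range) # same as Sequel.pg_range(1..10, :int4range)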
    def pg_range(db_type=nil)
      Sequel::Postgres::PGRange.from_range(self, db_type)
    end
  end
end

if defined?(Sequel::CoreRefinements)
  module Sequel::CoreRefinements
    refine Range do
      def pg_range(db_type=nil)
        Sequel::Postgres::PGRange.from_range(self, db_type)
      end
    end
  end
end
# :nocov:

ruby-sequel-4.1.1/lib/sequel/extensions/pg_range_ops.rb

# The pg_range_ops extension adds support to Sequel's DSL to make
# it easier to call PostgreSQL range functions and operators.
#
# To load the extension:
#
#   Sequel.extension :pg_range_ops
#
# The most common usage is passing an expression to Sequel.pg_range_op:
#
#   r = Sequel.pg_range_op(:range)
#
# If you have also loaded the pg_range extension, you can use
# Sequel.pg_range as well:
#
#   r = Sequel.pg_range(:range)
#
# Also, on most Sequel expression objects, you can call the pg_range
# method:
#
#   r = Sequel.expr(:range).pg_range
#
# If you have loaded the {core_extensions extension}[link:files/doc/core_extensions_rdoc.html]),
# or you have loaded the {core_refinements extension}[link:files/doc/core_refinements_rdoc.html])
# and have activated refinements for the file, you can also use Symbol#pg_range:
#
#   r = :range.pg_range
#
# This creates a Sequel::Postgres::RangeOp object that can be used
# for easier querying:
#
#   r.contains(:other)      # range @> other
#   r.contained_by(:other)  # range <@ other
#   r.overlaps(:other)      # range && other
#   r.left_of(:other)       # range << other
#   r.right_of(:other)      # range >> other
#   r.starts_after(:other)  # range &> other
#   r.ends_before(:other)   # range &< other
#   r.adjacent_to(:other)   # range -|- other
#
#   r.lower      # lower(range)
#   r.upper      # upper(range)
#   r.isempty    # isempty(range)
#   r.lower_inc  # lower_inc(range)
#   r.upper_inc  # upper_inc(range)
#   r.lower_inf  # lower_inf(range)
#   r.upper_inf  # upper_inf(range)
#
# See the PostgreSQL range function and operator documentation for more
# details on what these functions and operators do.
#
# If you are also using the pg_range extension, you should load it before
# loading this extension.  Doing so will allow you to use PGRange#op to get
# a RangeOp, allowing you to perform range operations on range literals.
module Sequel
  module Postgres
    # The RangeOp class is a simple container for a single object that
    # defines methods that yield Sequel expression objects representing
    # PostgreSQL range operators and functions.
    #
    # Most methods in this class are defined via metaprogramming, see
    # the pg_range_ops extension documentation for details on the API.
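    #
    # As a brief sketch (hypothetical +reservations+ table with a
    # range-typed +during+ column):
    #
    #   r = Sequel.pg_range_op(:during)
    #   DB[:reservations].where(r.overlaps(:requested)) # during && requested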
    class RangeOp < Sequel::SQL::Wrapper
      OPERATORS = {
        :contains => ["(".freeze, " @> ".freeze, ")".freeze].freeze,
        :contained_by => ["(".freeze, " <@ ".freeze, ")".freeze].freeze,
        :left_of => ["(".freeze, " << ".freeze, ")".freeze].freeze,
        :right_of => ["(".freeze, " >> ".freeze, ")".freeze].freeze,
        :ends_before => ["(".freeze, " &< ".freeze, ")".freeze].freeze,
        :starts_after => ["(".freeze, " &> ".freeze, ")".freeze].freeze,
        :adjacent_to => ["(".freeze, " -|- ".freeze, ")".freeze].freeze,
        :overlaps => ["(".freeze, " && ".freeze, ")".freeze].freeze,
      }
      FUNCTIONS = %w'lower upper isempty lower_inc upper_inc lower_inf upper_inf'

      FUNCTIONS.each do |f|
        class_eval("def #{f}; function(:#{f}) end", __FILE__, __LINE__)
      end
      OPERATORS.keys.each do |f|
        class_eval("def #{f}(v); operator(:#{f}, v) end", __FILE__, __LINE__)
      end

      # These operators are already supported by the wrapper, but for ranges they
      # return ranges, so wrap the results in another RangeOp.
      %w'+ * -'.each do |f|
        class_eval("def #{f}(v); RangeOp.new(super) end", __FILE__, __LINE__)
      end

      # Return the receiver.
      def pg_range
        self
      end

      private

      # Create a boolean expression for the given type and argument.
      def operator(type, other)
        Sequel::SQL::BooleanExpression.new(:NOOP, Sequel::SQL::PlaceholderLiteralString.new(OPERATORS[type], [value, other]))
      end

      # Return a function called with the receiver.
      def function(name)
        Sequel::SQL::Function.new(name, self)
      end
    end

    module RangeOpMethods
      # Wrap the receiver in a RangeOp so you can easily use the PostgreSQL
      # range functions and operators with it.
      def pg_range
        RangeOp.new(self)
      end
    end

    if defined?(PGRange)
      class PGRange
        # Wrap the PGRange instance in a RangeOp, allowing you to easily use
        # the PostgreSQL range functions and operators with literal ranges.
        def op
          RangeOp.new(self)
        end
      end
    end
  end

  module SQL::Builders
    # Return the expression wrapped in the Postgres::RangeOp.
    def pg_range_op(v)
      case v
      when Postgres::RangeOp
        v
      else
        Postgres::RangeOp.new(v)
      end
    end
  end

  class SQL::GenericExpression
    include Sequel::Postgres::RangeOpMethods
  end

  class LiteralString
    include Sequel::Postgres::RangeOpMethods
  end
end

# :nocov:
if Sequel.core_extensions?
  class Symbol
    include Sequel::Postgres::RangeOpMethods
  end
end

if defined?(Sequel::CoreRefinements)
  module Sequel::CoreRefinements
    refine Symbol do
      include Sequel::Postgres::RangeOpMethods
    end
  end
end
# :nocov:

ruby-sequel-4.1.1/lib/sequel/extensions/pg_row.rb

# The pg_row extension adds support for Sequel to handle
# PostgreSQL's row-valued/composite types.
# # This extension integrates with Sequel's native postgres adapter, so # that when composite fields are retrieved, they are parsed and returned # as instances of Sequel::Postgres::PGRow::(HashRow|ArrayRow), or # optionally a custom type. HashRow and ArrayRow are DelegateClasses of # of Hash and Array, so they mostly act like a hash or array, but not # completely (is_a?(Hash) and is_a?(Array) are false). If you want the # actual hash for a HashRow, call HashRow#to_hash, and if you want the # actual array for an ArrayRow, call ArrayRow#to_a. This is done so # that Sequel does not treat a values like an Array or Hash by default, # which would cause issues. # # In addition to the parsers, this extension comes with literalizers # for HashRow and ArrayRow using the standard Sequel literalization callbacks, so # they work with on all adapters. # # The first thing you are going to want to do is to load the extension into # your Database object. Make sure you load the :pg_array extension first # if you plan to use composite types in bound variables: # # DB.extension(:pg_array, :pg_row) # # You can create an anonymous row type by calling the Sequel.pg_row with # an array: # # Sequel.pg_row(array) # # If you have loaded the {core_extensions extension}[link:files/doc/core_extensions_rdoc.html]), # or you have loaded the {core_refinements extension}[link:files/doc/core_refinements_rdoc.html]) # and have activated refinements for the file, you can also use Array#pg_row: # # array.pg_row # # However, in most cases you are going to want something beyond anonymous # row types. This extension allows you to register row types on a per # database basis, using Database#register_row_type: # # DB.register_row_type(:foo) # # When you register the row type, Sequel will query the PostgreSQL # system tables to find the related metadata, and will setup # a custom HashRow subclass for that type. This includes looking up # conversion procs for each column in the type, so that when the composite # type is returned from the database, the members of the type have # the correct type. Additionally, if the composite type also has an # array form, Sequel registers an array type for the composite type, # so that array columns of the composite type are converted correctly. # # You can then create values of that type by using Database#row_type: # # DB.row_type(:address, ['123 Sesame St.', 'Some City', '12345']) # # Let's say table address has columns street, city, and zip. This would return # something similar to: # # {:street=>'123 Sesame St.', :city=>'Some City', :zip=>'12345'} # # You can also use a hash: # # DB.row_type(:address, :street=>'123 Sesame St.', :city=>'Some City', :zip=>'12345') # # So if you have a person table that has an address column, here's how you # could insert into the column: # # DB[:table].insert(:address=>DB.row_type(:address, :street=>'123 Sesame St.', :city=>'Some City', :zip=>'12345')) # # Note that registering row types without providing an explicit :converter option # creates anonymous classes. This results in ruby being unable to Marshal such # objects. You can work around this by assigning the anonymous class to a constant. # To get a list of such anonymous classes, you can use the following code: # # DB.conversion_procs.select{|k,v| v.is_a?(Sequel::Postgres::PGRow::Parser) && \ # v.converter && (v.converter.name.nil? 
|| v.converter.name == '') }.map{|k,v| v} # # If you are not using the native postgres adapter, you probably # also want to use the pg_typecast_on_load plugin in the model, and # set it to typecast the composite type column(s) on load. # # This extension requires both the strscan and delegate libraries. require 'delegate' require 'strscan' Sequel.require 'adapters/utils/pg_types' module Sequel module Postgres module PGRow ROW = 'ROW'.freeze CAST = '::'.freeze # Class for row-valued/composite types that are treated as arrays. By default, # this is only used for generic PostgreSQL record types, as registered # types use HashRow by default. class ArrayRow < DelegateClass(Array) include Sequel::SQL::AliasMethods class << self # The database type for this class. May be nil if this class # done not have a specific database type. attr_accessor :db_type # Alias new to call, so that the class itself can be used # directly as a converter. alias call new end # Create a subclass associated with a specific database type. # This is done so that instances of this subclass are # automatically casted to the database type when literalizing. def self.subclass(db_type) Class.new(self) do @db_type = db_type end end # Sets the database type associated with this instance. This is # used to override the class's default database type. attr_writer :db_type # Return the instance's database type, or the class's database # type if the instance has not overridden it. def db_type @db_type || self.class.db_type end # Append SQL fragment related to this object to the sql. def sql_literal_append(ds, sql) sql << ROW ds.literal_append(sql, to_a) if db_type sql << CAST ds.quote_schema_table_append(sql, db_type) end end end # Class for row-valued/composite types that are treated as hashes. # Types registered via Database#register_row_type will use this # class by default. class HashRow < DelegateClass(Hash) include Sequel::SQL::AliasMethods class << self # The columns associated with this class. attr_accessor :columns # The database type for this class. May be nil if this class # done not have a specific database type. attr_accessor :db_type # Alias new to call, so that the class itself can be used # directly as a converter. alias call new end # Create a new subclass of this class with the given database # type and columns. def self.subclass(db_type, columns) Class.new(self) do @db_type = db_type @columns = columns end end # Return the underlying hash for this delegate object. alias to_hash __getobj__ # Sets the columns associated with this instance. This is # used to override the class's default columns. attr_writer :columns # Sets the database type associated with this instance. This is # used to override the class's default database type. attr_writer :db_type # Return the instance's columns, or the class's columns # if the instance has not overridden it. def columns @columns || self.class.columns end # Return the instance's database type, or the class's columns # if the instance has not overridden it. def db_type @db_type || self.class.db_type end # Check that the HashRow has valid columns. This should be used # before all attempts to literalize the object, since literalization # depends on the columns to get the column order. def check_columns! if columns.nil? || columns.empty? raise Error, 'cannot literalize HashRow without columns' end end # Append SQL fragment related to this object to the sql. def sql_literal_append(ds, sql) check_columns! 
sql << ROW ds.literal_append(sql, values_at(*columns)) if db_type sql << CAST ds.quote_schema_table_append(sql, db_type) end end end ROW_TYPE_CLASSES = [HashRow, ArrayRow] # This parser-like class splits the PostgreSQL # row-valued/composite type output string format # into an array of strings. Note this class makes # no attempt to handle all input formats that PostgreSQL # will accept, it only handles the output format that # PostgreSQL uses. class Splitter < StringScanner OPEN_PAREN = /\(/.freeze CLOSE_PAREN = /\)/.freeze UNQUOTED_RE = /[^,)]*/.freeze SEP_RE = /[,)]/.freeze QUOTE_RE = /"/.freeze QUOTE_SEP_RE = /"[,)]/.freeze QUOTED_RE = /(\\.|""|[^"])*/.freeze REPLACE_RE = /\\(.)|"(")/.freeze REPLACE_WITH = '\1\2'.freeze # Split the stored string into an array of strings, handling # the different types of quoting. def parse return @result if @result values = [] skip(OPEN_PAREN) if skip(CLOSE_PAREN) values << nil else until eos? if skip(QUOTE_RE) values << scan(QUOTED_RE).gsub(REPLACE_RE, REPLACE_WITH) skip(QUOTE_SEP_RE) else v = scan(UNQUOTED_RE) values << (v unless v.empty?) skip(SEP_RE) end end end values end end # The Parser is responsible for taking the input string # from PostgreSQL, and returning an appropriate ruby # object that the type represents, such as an ArrayRow or # HashRow. class Parser # The columns for the parser, if any. If the parser has # no columns, it will treat the input as an array. If # it has columns, it will treat the input as a hash. # If present, should be an array of strings. attr_reader :columns # Converters for each member in the composite type. If # not present, no conversion will be done, so values will # remain strings. If present, should be an array of # callable objects. attr_reader :column_converters # The OIDs for each member in the composite type. Not # currently used, but made available for user code. attr_reader :column_oids # A converter for the object as a whole. Used to wrap # the returned array/hash in another object, such as an # ArrayRow or HashRow. If present, should be callable. attr_reader :converter # The oid for the composite type itself. attr_reader :oid # A callable object used for typecasting the object. This # is similar to the converter, but it is called by the # typecasting code, which has different assumptions than # the converter. For instance, the converter should be # called with all of the member values already typecast, # but the typecaster may not be. attr_reader :typecaster # Sets each of the parser's attributes, using options with # the same name (e.g. :columns sets the columns attribute). def initialize(h=OPTS) @columns = h[:columns] @column_converters = h[:column_converters] @column_oids = h[:column_oids] @converter = h[:converter] @typecaster = h[:typecaster] @oid = h[:oid] end # Convert the PostgreSQL composite type input format into # an appropriate ruby object. def call(s) convert(convert_format(convert_columns(Splitter.new(s).parse))) end # Typecast the given object to the appropriate type using the # typecaster. Note that this does not conversion for the members # of the composite type, since those conversion expect strings and # strings may not be provided. 
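      #
      # As a minimal hypothetical sketch (assumes an +address+ row type
      # with street/city/zip columns has already been registered via
      # Database#register_row_type):
      #
      #   parser = DB.row_types[:address][:parser]
      #   parser.typecast(['123 Sesame St.', 'Some City', '12345'])
      #   # => {:street=>'123 Sesame St.', :city=>'Some City', :zip=>'12345'}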
def typecast(obj) case obj when Array _typecast(convert_format(obj)) when Hash unless @columns raise Error, 'PGRow::Parser without columns cannot typecast from a hash' end _typecast(obj) else raise Error, 'PGRow::Parser can only typecast arrays and hashes' end end private # If the parser has a typecaster, call it with # the object, otherwise return the object as is. def _typecast(obj) if t = @typecaster t.call(obj) else obj end end # If the parser has column converters, map the # array of strings input to a array of appropriate # ruby objects, one for each converter. def convert_columns(arr) if ccs = @column_converters arr.zip(ccs).map{|v, pr| (v && pr) ? pr.call(v) : v} else arr end end # If the parser has columns, return a hash assuming # that the array is ordered by the columns. def convert_format(arr) if cs = @columns h = {} arr.zip(cs).each{|v, c| h[c] = v} h else arr end end # If the parser has a converter, call it with the object, # otherwise return the object as is. def convert(obj) if c = @converter c.call(obj) else obj end end end module DatabaseMethods ESCAPE_RE = /("|\\)/.freeze ESCAPE_REPLACEMENT = '\\\\\1'.freeze COMMA = ','.freeze # A hash mapping row type keys (usually symbols), to option # hashes. At the least, the values will contain the :parser # option for the Parser instance that the type will use. attr_reader :row_types # Do some setup for the data structures the module uses. def self.extended(db) # Return right away if row_types has already been set. This # makes things not break if a user extends the database with # this module more than once (since extended is called every # time). return if db.row_types db.instance_eval do @row_types = {} @row_schema_types = {} extend(@row_type_method_module = Module.new) copy_conversion_procs([2249, 2287]) end end # Handle ArrayRow and HashRow values in bound variables. def bound_variable_arg(arg, conn) case arg when ArrayRow "(#{arg.map{|v| bound_variable_array(v) if v}.join(COMMA)})" when HashRow arg.check_columns! "(#{arg.values_at(*arg.columns).map{|v| bound_variable_array(v) if v}.join(COMMA)})" else super end end # Register a new row type for the Database instance. db_type should be the type # symbol. This parses the PostgreSQL system tables to get information the # composite type, and by default has the type return instances of a subclass # of HashRow. # # The following options are supported: # # :converter :: Use a custom converter for the parser. # :typecaster :: Use a custom typecaster for the parser. def register_row_type(db_type, opts=OPTS) procs = @conversion_procs rel_oid = nil array_oid = nil parser_opts = {} # Try to handle schema-qualified types. type_schema, type_name = schema_and_table(db_type) schema_type_string = type_name.to_s # Get basic oid information for the composite type. ds = from(:pg_type). select(:pg_type__oid, :typrelid, :typarray). where([[:typtype, 'c'], [:typname, type_name.to_s]]) if type_schema ds = ds.join(:pg_namespace, [[:oid, :typnamespace], [:nspname, type_schema.to_s]]) schema_type_symbol = :"pg_row_#{type_schema}__#{type_name}" else schema_type_symbol = :"pg_row_#{type_name}" end unless row = ds.first raise Error, "row type #{db_type.inspect} not found in database" end # Manually cast to integer using to_i, because adapter may not cast oid type # correctly (e.g. swift) parser_opts[:oid], rel_oid, array_oid = row.values_at(:oid, :typrelid, :typarray).map{|i| i.to_i} # Get column names and oids for each of the members of the composite type. res = from(:pg_attribute). 
join(:pg_type, :oid=>:atttypid). where(:attrelid=>rel_oid). where{attnum > 0}. exclude(:attisdropped). order(:attnum). select_map([:attname, Sequel.case({0=>:atttypid}, :pg_type__typbasetype, :pg_type__typbasetype).as(:atttypid)]) if res.empty? raise Error, "no columns for row type #{db_type.inspect} in database" end parser_opts[:columns] = res.map{|r| r[0].to_sym} parser_opts[:column_oids] = res.map{|r| r[1].to_i} # Using the conversion_procs, lookup converters for each member of the composite type parser_opts[:column_converters] = parser_opts[:column_oids].map do |oid| if pr = procs[oid] pr elsif !Sequel::Postgres::STRING_TYPES.include?(oid) # It's not a string type, and it's possible a conversion proc for this # oid will be added later, so do a runtime check for it. lambda{|s| (pr = procs[oid]) ? pr.call(s) : s} end end # Setup the converter and typecaster parser_opts[:converter] = opts.fetch(:converter){HashRow.subclass(db_type, parser_opts[:columns])} parser_opts[:typecaster] = opts.fetch(:typecaster, parser_opts[:converter]) parser = Parser.new(parser_opts) @conversion_procs[parser.oid] = parser if defined?(PGArray) && PGArray.respond_to?(:register) && array_oid && array_oid > 0 array_type_name = if type_schema "#{type_schema}.#{type_name}" else type_name end PGArray.register(array_type_name, :oid=>array_oid, :converter=>parser, :type_procs=>@conversion_procs, :scalar_typecast=>schema_type_symbol) end @row_types[db_type] = opts.merge(:parser=>parser) @row_schema_types[schema_type_string] = schema_type_symbol @schema_type_classes[schema_type_symbol] = ROW_TYPE_CLASSES @row_type_method_module.class_eval do meth = :"typecast_value_#{schema_type_symbol}" define_method(meth) do |v| row_type(db_type, v) end private meth end nil end # When reseting conversion procs, reregister all the row types so that # the system tables are introspected again, picking up database changes. def reset_conversion_procs procs = super row_types.each do |db_type, opts| register_row_type(db_type, opts) end procs end # Handle typecasting of the given object to the given database type. # In general, the given database type should already be registered, # but if obj is an array, this will handled unregistered types. def row_type(db_type, obj) (type_hash = @row_types[db_type]) && (parser = type_hash[:parser]) case obj when ArrayRow, HashRow obj when Array if parser parser.typecast(obj) else obj = ArrayRow.new(obj) obj.db_type = db_type obj end when Hash if parser parser.typecast(obj) else raise InvalidValue, "Database#row_type requires the #{db_type.inspect} type have a registered parser and typecaster when called with a hash" end else raise InvalidValue, "cannot convert #{obj.inspect} to row type #{db_type.inspect}" end end private # Format composite types used in bound variable arrays. def bound_variable_array(arg) case arg when ArrayRow "\"(#{arg.map{|v| bound_variable_array(v) if v}.join(COMMA).gsub(ESCAPE_RE, ESCAPE_REPLACEMENT)})\"" when HashRow arg.check_columns! "\"(#{arg.values_at(*arg.columns).map{|v| bound_variable_array(v) if v}.join(COMMA).gsub(ESCAPE_RE, ESCAPE_REPLACEMENT)})\"" else super end end # Make the column type detection handle registered row types. 
def schema_column_type(db_type) if type = @row_schema_types[db_type] type else super end end end end # Register the default anonymous record type PG_TYPES[2249] = PGRow::Parser.new(:converter=>PGRow::ArrayRow) if defined?(PGArray) && PGArray.respond_to?(:register) PGArray.register('record', :oid=>2287, :scalar_oid=>2249) end end module SQL::Builders # Wraps the expr array in an anonymous Postgres::PGRow::ArrayRow instance. def pg_row(expr) case expr when Array Postgres::PGRow::ArrayRow.new(expr) else # Will only work if pg_row_ops extension is loaded pg_row_op(expr) end end end Database.register_extension(:pg_row, Postgres::PGRow::DatabaseMethods) end # :nocov: if Sequel.core_extensions? class Array # Wraps the receiver in an anonymous Sequel::Postgres::PGRow::ArrayRow instance. def pg_row Sequel::Postgres::PGRow::ArrayRow.new(self) end end end if defined?(Sequel::CoreRefinements) module Sequel::CoreRefinements refine Array do def pg_row Sequel::Postgres::PGRow::ArrayRow.new(self) end end end end # :nocov: ��������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/lib/sequel/extensions/pg_row_ops.rb�����������������������������������������������0000664�0000000�0000000�00000013633�12201565355�0022752�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������# The pg_row_ops extension adds support to Sequel's DSL to make # it easier to deal with PostgreSQL row-valued/composite types. # # To load the extension: # # Sequel.extension :pg_row_ops # # The most common usage is passing an expression to Sequel.pg_row_op: # # r = Sequel.pg_row_op(:row_column) # # If you have also loaded the pg_row extension, you can use # Sequel.pg_row as well: # # r = Sequel.pg_row(:row_column) # # Also, on most Sequel expression objects, you can call the pg_row # method: # # r = Sequel.expr(:row_column).pg_row # # If you have loaded the {core_extensions extension}[link:files/doc/core_extensions_rdoc.html]), # or you have loaded the {core_refinements extension}[link:files/doc/core_refinements_rdoc.html]) # and have activated refinements for the file, you can also use Symbol#pg_row: # # r = :row_column.pg_row # # There's only fairly basic support currently. You can use the [] method to access # a member of the composite type: # # r[:a] # (row_column).a # # This can be chained: # # r[:a][:b] # ((row_column).a).b # # If you've loaded the pg_array_ops extension, you there is also support for composite # types that include arrays, or arrays of composite types: # # r[1][:a] # (row_column[1]).a # r[:a][1] # (row_column).a[1] # # The only other support is the splat method: # # r.splat # (row_column.*) # # The splat method is necessary if you are trying to reference a table's type when the # table has the same name as one of it's columns. 
For example: # # DB.create_table(:a){Integer :a; Integer :b} # # Let's say you want to reference the composite type for the table: # # a = Sequel.pg_row_op(:a) # DB[:a].select(a[:b]) # SELECT (a).b FROM a # # Unfortunately, that doesn't work, as it references the integer column, not the table. # The splat method works around this: # # DB[:a].select(a.splat[:b]) # SELECT (a.*).b FROM a # # Splat also takes an argument which is used for casting. This is necessary if you # want to return the composite type itself, instead of the columns in the composite # type. For example: # # DB[:a].select(a.splat).first # SELECT (a.*) FROM a # # => {:a=>1, :b=>2} # # By casting the expression, you can get a composite type returned: # # DB[:a].select(a.splat).first # SELECT (a.*)::a FROM a # # => {:a=>"(1,2)"} # or {:a=>{:a=>1, :b=>2}} if the "a" type has been registered # # with the the pg_row extension # # This feature is mostly useful for a different way to graph tables: # # DB[:a].join(:b, :id=>:b_id).select(Sequel.pg_row_op(:a).splat(:a), # Sequel.pg_row_op(:b).splat(:b)) # # SELECT (a.*)::a, (b.*)::b FROM a INNER JOIN b ON (b.id = a.b_id) # # => {:a=>{:id=>1, :b_id=>2}, :b=>{:id=>2}} module Sequel module Postgres # This class represents a composite type expression reference. class PGRowOp < SQL::PlaceholderLiteralString OPEN = '('.freeze CLOSE_DOT = ').'.freeze CLOSE_STAR = '.*)'.freeze CLOSE_STAR_CAST = '.*)::'.freeze EMPTY = "".freeze ROW = [OPEN, CLOSE_STAR].freeze ROW_CAST = [OPEN, CLOSE_STAR_CAST].freeze QUALIFY = [OPEN, CLOSE_DOT].freeze WRAP = [EMPTY].freeze # Wrap the expression in a PGRowOp, without changing the # SQL it would use. def self.wrap(expr) PGRowOp.new(WRAP, [expr]) end # Access a member of the composite type if given a # symbol or an SQL::Identifier. For all other access, # assuming the pg_array_ops extension is loaded and # that it represents an array access. In either # case, return a PgRowOp so that access can be cascaded. def [](member) case member when Symbol, SQL::Identifier PGRowOp.new(QUALIFY, [self, member]) else PGRowOp.wrap(Sequel.pg_array_op(self)[member]) end end # Use the (identifier).* syntax to reference the members # of the composite type as separate columns. Generally # used when you want to expand the columns of a composite # type to be separate columns in the result set. # # Sequel.pg_row_op(:a).* # (a).* # Sequel.pg_row_op(:a)[:b].* # ((a).b).* def *(ce=(arg=false;nil)) if arg == false Sequel::SQL::ColumnAll.new([self]) else super(ce) end end # Use the (identifier.*) syntax to indicate that this # expression represents the composite type of one # of the tables being referenced, if it has the same # name as one of the columns. If the cast_to argument # is given, also cast the expression to that type # (which should be a symbol representing the composite type). # This is used if you want to return whole table row as a # composite type. # # Sequel.pg_row_op(:a).splat[:b] # (a.*).b # Sequel.pg_row_op(:a).splat(:a) # (a.*)::a def splat(cast_to=nil) if args.length > 1 raise Error, 'cannot splat a PGRowOp with multiple arguments' end if cast_to PGRowOp.new(ROW_CAST, args + [cast_to]) else PGRowOp.new(ROW, args) end end module ExpressionMethods # Return a PGRowOp wrapping the receiver. def pg_row Sequel.pg_row_op(self) end end end end module SQL::Builders # Return a PGRowOp wrapping the given expression. 
    def pg_row_op(expr)
      Postgres::PGRowOp.wrap(expr)
    end
  end

  class SQL::GenericExpression
    include Sequel::Postgres::PGRowOp::ExpressionMethods
  end

  class LiteralString
    include Sequel::Postgres::PGRowOp::ExpressionMethods
  end
end

# :nocov:
if Sequel.core_extensions?
  class Symbol
    include Sequel::Postgres::PGRowOp::ExpressionMethods
  end
end

if defined?(Sequel::CoreRefinements)
  module Sequel::CoreRefinements
    refine Symbol do
      include Sequel::Postgres::PGRowOp::ExpressionMethods
    end
  end
end
# :nocov:

ruby-sequel-4.1.1/lib/sequel/extensions/pretty_table.rb

# The pretty_table extension adds Sequel::Dataset#print and the
# Sequel::PrettyTable class for creating nice-looking plain-text
# tables. Example:
#
#   +--+-------+
#   |id|name   |
#   |--+-------|
#   |1 |fasdfas|
#   |2 |test   |
#   +--+-------+
#
# You can load this extension into specific datasets:
#
#   ds = DB[:table]
#   ds.extension(:pretty_table)
#
# Or you can load it into all of a database's datasets, which
# is probably the desired behavior if you are using this extension:
#
#   DB.extension(:pretty_table)

module Sequel
  extension :_pretty_table

  module DatasetPrinter
    # Pretty prints the records in the dataset as a plain-text table.
    def print(*cols)
      ds = naked
      rows = ds.all
      Sequel::PrettyTable.print(rows, cols.empty? ? ds.columns : cols)
    end
  end

  Dataset.register_extension(:pretty_table, DatasetPrinter)
end

ruby-sequel-4.1.1/lib/sequel/extensions/query.rb

# The query extension adds Sequel::Dataset#query which allows
# a different way to construct queries instead of the usual
# method chaining. See Sequel::Dataset#query for details.
#
# You can load this extension into specific datasets:
#
#   ds = DB[:table]
#   ds.extension(:query)
#
# Or you can load it into all of a database's datasets, which
# is probably the desired behavior if you are using this extension:
#
#   DB.extension(:query)

module Sequel
  module DatabaseQuery
    def self.extended(db)
      db.extend_datasets(DatasetQuery)
    end

    # Return a dataset modified by the query block
    def query(&block)
      dataset.query(&block)
    end
  end

  module DatasetQuery
    # Translates a query block into a dataset. Query blocks are an
    # alternative to Sequel's usual method chaining, by using
    # instance_eval with a proxy object:
    #
    #   dataset = DB[:items].query do
    #     select :x, :y, :z
    #     filter{(x > 1) & (y > 2)}
    #     reverse :z
    #   end
    #
    # Which is the same as:
    #
    #   dataset = DB[:items].select(:x, :y, :z).filter{(x > 1) & (y > 2)}.reverse(:z)
    def query(&block)
      query = Dataset::Query.new(self)
      query.instance_eval(&block)
      query.dataset
    end
  end

  class Dataset
    # Proxy object used by Dataset#query.
    class Query < Sequel::BasicObject
      # The current dataset in the query. This changes on each method call.
      attr_reader :dataset

      def initialize(dataset)
        @dataset = dataset
      end

      # Replace the query's dataset with the dataset returned by the method call.
      def method_missing(method, *args, &block)
        @dataset = @dataset.send(method, *args, &block)
        raise(Sequel::Error, "method #{method.inspect} did not return a dataset") unless @dataset.is_a?(Dataset)
        self
      end
    end
  end

  Dataset.register_extension(:query, DatasetQuery)
  Database.register_extension(:query, DatabaseQuery)
end

ruby-sequel-4.1.1/lib/sequel/extensions/query_literals.rb

# The query_literals extension changes Sequel's default behavior of
# the select, order and group methods so that if the first argument
# is a regular string, it is treated as a literal string, with the
# rest of the arguments (if any) treated as placeholder values. This
# allows you to write code such as:
#
#   DB[:table].select('a, b, ?', 2).group('a, b').order('c')
#
# The default Sequel behavior would literalize that as:
#
#   SELECT 'a, b, ?', 2 FROM table GROUP BY 'a, b' ORDER BY 'c'
#
# Using this extension changes the literalization to:
#
#   SELECT a, b, 2 FROM table GROUP BY a, b ORDER BY c
#
# This extension makes select, group, and order methods operate
# like filter methods, which support the same interface.
#
# There are very few places where Sequel's default behavior is
# desirable in this area, but for backwards compatibility, the
# defaults won't be changed until the next major release.
#
# You can load this extension into specific datasets:
#
#   ds = DB[:table]
#   ds.extension(:query_literals)
#
# Or you can load it into all of a database's datasets, which
# is probably the desired behavior if you are using this extension:
#
#   DB.extension(:query_literals)

module Sequel
  # The QueryLiterals module can be used to make select, group, and
  # order methods operate similar to the filter methods if the first
  # argument is a plain string, treating it like a literal string,
  # with any remaining arguments treated as placeholder values.
  #
  # This adds such support to the following methods: select, select_append,
  # select_group, select_more, group, group_and_count, order, order_append,
  # and order_more.
  #
  # Note that if you pass a block to these methods, it will use the default
  # implementation without the special literal handling.
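  #
  # For example, with this extension loaded (a minimal sketch; the
  # +items+ table is hypothetical):
  #
  #   ds = DB[:items].extension(:query_literals)
  #   ds.order('name, id DESC').sql
  #   # SELECT * FROM items ORDER BY name, id DESC
  #   ds.select('id, ?', 1).sql
  #   # SELECT id, 1 FROM items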
  module QueryLiterals
    %w'select select_append select_group select_more group group_and_count order order_append order_more'.each do |m|
      class_eval(<<-END, __FILE__, __LINE__ + 1)
        def #{m}(*args)
          if !block_given? && (l = query_literal(args))
            super(l)
          else
            super
          end
        end
      END
    end

    private

    # If the first argument is a plain string, return a literal string
    # if there are no additional args or a placeholder literal string with
    # the remaining args. Otherwise, return nil.
    def query_literal(args)
      case (s = args[0])
      when LiteralString, SQL::Blob
        nil
      when String
        if args.length == 1
          LiteralString.new(s)
        else
          SQL::PlaceholderLiteralString.new(s, args[1..-1])
        end
      end
    end
  end

  Dataset.register_extension(:query_literals, QueryLiterals)
end

ruby-sequel-4.1.1/lib/sequel/extensions/ruby18_symbol_extensions.rb

# The ruby18_symbol_extensions extension adds the <, <=, >, >= methods
# to Symbol to reflect the mathematical operators. It also adds the []
# method to Symbol for creating SQL functions.
#
# Usage of this extension is not recommended. This extension will
# only load on ruby 1.8, so you will not be able to upgrade to
# newer ruby versions if you use it. If you still want to use it,
# you can load it via:
#
#   Sequel.extension :ruby18_symbol_extensions

raise(Sequel::Error, "The ruby18_symbol_extensions extension is only available on ruby 1.8.") unless RUBY_VERSION < '1.9.0'

class Symbol
  include Sequel::SQL::InequalityMethods

  # Create an SQL Function with the receiver as the function name
  # and the given arguments.
  def [](*args)
    Sequel::SQL::Function.new(self, *args)
  end
end

ruby-sequel-4.1.1/lib/sequel/extensions/schema_caching.rb

# The schema_caching extension adds a few methods to Sequel::Database
# that make it easy to dump the parsed schema information to a file,
# and load it from that file. Loading the schema information from a
# dumped file is faster than parsing it from the database, so this
# can save bootup time for applications with large numbers of models.
# # Basic usage in application code: # # DB = Sequel.connect('...') # DB.extension :schema_caching # DB.load_schema_cache('/path/to/schema.dump') # # # load model files # # Then, whenever the database schema is modified, write a new cached # file. You can do that with <tt>bin/sequel</tt>'s -S option: # # bin/sequel -S /path/to/schema.dump postgres://... # # Alternatively, if you don't want to dump the schema information for # all tables, and you don't worry about race conditions, you can # choose to use the following in your application code: # # DB = Sequel.connect('...') # DB.extension :schema_caching # DB.load_schema_cache?('/path/to/schema.dump') # # # load model files # # DB.dump_schema_cache?('/path/to/schema.dump') # # With this method, you just have to delete the schema dump file if # the schema is modified, and the application will recreate it for you # using just the tables that your models use. # # Note that it is up to the application to ensure that the dumped # cached schema reflects the current state of the database. Sequel # does no checking to ensure this, as checking would take time and the # purpose of this code is to take a shortcut. # # The cached schema is dumped in Marshal format, since it is the fastest # and it handles all ruby objects used in the schema hash. Because of this, # you should not attempt to load the schema from a untrusted file. module Sequel module SchemaCaching # Dump the cached schema to the filename given in Marshal format. def dump_schema_cache(file) File.open(file, 'wb'){|f| f.write(Marshal.dump(@schemas))} nil end # Dump the cached schema to the filename given unless the file # already exists. def dump_schema_cache?(file) dump_schema_cache(file) unless File.exist?(file) end # Replace the schema cache with the data from the given file, which # should be in Marshal format. def load_schema_cache(file) @schemas = Marshal.load(File.read(file)) nil end # Replace the schema cache with the data from the given file if the # file exists. def load_schema_cache?(file) load_schema_cache(file) if File.exist?(file) end end Database.register_extension(:schema_caching, SchemaCaching) end ������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/lib/sequel/extensions/schema_dumper.rb��������������������������������������������0000664�0000000�0000000�00000046055�12201565355�0023414�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������# The schema_dumper extension supports dumping tables and indexes # in a Sequel::Migration format, so they can be restored on another # database (which can be the same type or a different type than # the current database). The main interface is through # Sequel::Database#dump_schema_migration. 
# # To load the extension: # # DB.extension :schema_dumper Sequel.extension :eval_inspect module Sequel module SchemaDumper # Dump foreign key constraints for all tables as a migration. This complements # the :foreign_keys=>false option to dump_schema_migration. This only dumps # the constraints (not the columns) using alter_table/add_foreign_key with an # array of columns. # # Note that the migration this produces does not have a down # block, so you cannot reverse it. def dump_foreign_key_migration(options=OPTS) ts = tables(options) <<END_MIG Sequel.migration do change do #{ts.sort_by{|t| t.to_s}.map{|t| dump_table_foreign_keys(t)}.reject{|x| x == ''}.join("\n\n").gsub(/^/o, ' ')} end end END_MIG end # Dump indexes for all tables as a migration. This complements # the :indexes=>false option to dump_schema_migration. Options: # :same_db :: Create a dump for the same database type, so # don't ignore errors if the index statements fail. # :index_names :: If set to false, don't record names of indexes. If # set to :namespace, prepend the table name to the index name if the # database does not use a global index namespace. def dump_indexes_migration(options=OPTS) ts = tables(options) <<END_MIG Sequel.migration do change do #{ts.sort_by{|t| t.to_s}.map{|t| dump_table_indexes(t, :add_index, options)}.reject{|x| x == ''}.join("\n\n").gsub(/^/o, ' ')} end end END_MIG end # Return a string that contains a Sequel::Migration subclass that when # run would recreate the database structure. Options: # :same_db :: Don't attempt to translate database types to ruby types. # If this isn't set to true, all database types will be translated to # ruby types, but there is no guarantee that the migration generated # will yield the same type. Without this set, types that aren't # recognized will be translated to a string-like type. # :foreign_keys :: If set to false, don't dump foreign_keys (they can be # added later via #dump_foreign_key_migration) # :indexes :: If set to false, don't dump indexes (they can be added # later via #dump_index_migration). # :index_names :: If set to false, don't record names of indexes. If # set to :namespace, prepend the table name to the index name. def dump_schema_migration(options=OPTS) options = options.dup if options[:indexes] == false && !options.has_key?(:foreign_keys) # Unless foreign_keys option is specifically set, disable if indexes # are disabled, as foreign keys that point to non-primary keys rely # on unique indexes being created first options[:foreign_keys] = false end ts = sort_dumped_tables(tables(options), options) skipped_fks = if sfk = options[:skipped_foreign_keys] # Handle skipped foreign keys by adding them at the end via # alter_table/add_foreign_key. Note that skipped foreign keys # probably result in a broken down migration. sfka = sfk.sort_by{|table, fks| table.to_s}.map{|table, fks| dump_add_fk_constraints(table, fks.values)} sfka.join("\n\n").gsub(/^/o, ' ') unless sfka.empty? end <<END_MIG Sequel.migration do change do #{ts.map{|t| dump_table_schema(t, options)}.join("\n\n").gsub(/^/o, ' ')}#{"\n \n" if skipped_fks}#{skipped_fks} end end END_MIG end # Return a string with a create table block that will recreate the given # table's schema. Takes the same options as dump_schema_migration. 
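    #
    # As a rough illustrative sketch of the expected output (the +items+
    # table and its columns here are hypothetical):
    #
    #   DB.extension :schema_dumper
    #   puts DB.dump_table_schema(:items)
    #   # create_table(:items) do
    #   #   primary_key :id
    #   #   String :name, :size=>255
    #   # end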
def dump_table_schema(table, options=OPTS) table = table.value.to_s if table.is_a?(SQL::Identifier) gen = dump_table_generator(table, options) commands = [gen.dump_columns, gen.dump_constraints, gen.dump_indexes].reject{|x| x == ''}.join("\n\n") "create_table(#{table.inspect}#{', :ignore_index_errors=>true' if !options[:same_db] && options[:indexes] != false && !gen.indexes.empty?}) do\n#{commands.gsub(/^/o, ' ')}\nend" end private # If a database default exists and can't be converted, and we are dumping with :same_db, # return a string with the inspect method modified a literal string is created if the code is evaled. def column_schema_to_ruby_default_fallback(default, options) if default.is_a?(String) && options[:same_db] && use_column_schema_to_ruby_default_fallback? default = default.dup def default.inspect "Sequel::LiteralString.new(#{super})" end default end end # Recreate the column in the passed Schema::Generator from the given name and parsed database schema. def recreate_column(name, schema, gen, options) if options[:single_pk] && schema_autoincrementing_primary_key?(schema) type_hash = options[:same_db] ? {:type=>schema[:db_type]} : column_schema_to_ruby_type(schema) [:table, :key, :on_delete, :on_update, :deferrable].each{|f| type_hash[f] = schema[f] if schema[f]} if type_hash == {:type=>Integer} || type_hash == {:type=>"integer"} gen.primary_key(name) else gen.primary_key(name, type_hash) end else col_opts = options[:same_db] ? {:type=>schema[:db_type]} : column_schema_to_ruby_type(schema) type = col_opts.delete(:type) col_opts.delete(:size) if col_opts[:size].nil? col_opts[:default] = if schema[:ruby_default].nil? column_schema_to_ruby_default_fallback(schema[:default], options) else schema[:ruby_default] end col_opts.delete(:default) if col_opts[:default].nil? col_opts[:null] = false if schema[:allow_null] == false if table = schema[:table] [:key, :on_delete, :on_update, :deferrable].each{|f| col_opts[f] = schema[f] if schema[f]} col_opts[:type] = type unless type == Integer || type == 'integer' gen.foreign_key(name, table, col_opts) else gen.column(name, type, col_opts) if [Integer, Bignum, Float].include?(type) && schema[:db_type] =~ / unsigned\z/io gen.check(Sequel::SQL::Identifier.new(name) >= 0) end end end end # Convert the column schema information to a hash of column options, one of which must # be :type. The other options added should modify that type (e.g. :size). If a # database type is not recognized, return it as a String type. def column_schema_to_ruby_type(schema) case schema[:db_type].downcase when /\A(medium|small)?int(?:eger)?(?:\((\d+)\))?( unsigned)?\z/o if !$1 && $2 && $2.to_i >= 10 && $3 # Unsigned integer type with 10 digits can potentially contain values which # don't fit signed integer type, so use bigint type in target database. {:type=>Bignum} else {:type=>Integer} end when /\Atinyint(?:\((\d+)\))?(?: unsigned)?\z/o {:type =>schema[:type] == :boolean ? TrueClass : Integer} when /\Abigint(?:\((?:\d+)\))?(?: unsigned)?\z/o {:type=>Bignum} when /\A(?:real|float|double(?: precision)?|double\(\d+,\d+\)(?: unsigned)?)\z/o {:type=>Float} when 'boolean' {:type=>TrueClass} when /\A(?:(?:tiny|medium|long|n)?text|clob)\z/o {:type=>String, :text=>true} when 'date' {:type=>Date} when /\A(?:small)?datetime\z/o {:type=>DateTime} when /\Atimestamp(?:\((\d+)\))?(?: with(?:out)? time zone)?\z/o {:type=>DateTime, :size=>($1.to_i if $1)} when /\Atime(?: with(?:out)? 
time zone)?\z/o {:type=>Time, :only_time=>true} when /\An?char(?:acter)?(?:\((\d+)\))?\z/o {:type=>String, :size=>($1.to_i if $1), :fixed=>true} when /\A(?:n?varchar|character varying|bpchar|string)(?:\((\d+)\))?\z/o {:type=>String, :size=>($1.to_i if $1)} when /\A(?:small)?money\z/o {:type=>BigDecimal, :size=>[19,2]} when /\A(?:decimal|numeric|number)(?:\((\d+)(?:,\s*(\d+))?\))?\z/o s = [($1.to_i if $1), ($2.to_i if $2)].compact {:type=>BigDecimal, :size=>(s.empty? ? nil : s)} when /\A(?:bytea|(?:tiny|medium|long)?blob|(?:var)?binary)(?:\((\d+)\))?\z/o {:type=>File, :size=>($1.to_i if $1)} when /\A(?:year|(?:int )?identity)\z/o {:type=>Integer} else {:type=>String} end end # For the table and foreign key metadata array, return an alter_table # string that would add the foreign keys if run in a migration. def dump_add_fk_constraints(table, fks) sfks = "alter_table(#{table.inspect}) do\n" sfks << create_table_generator do fks.sort_by{|fk| fk[:columns].map{|c| c.to_s}}.each do |fk| foreign_key fk[:columns], fk end end.dump_constraints.gsub(/^foreign_key /, ' add_foreign_key ') sfks << "\nend" end # For the table given, get the list of foreign keys and return an alter_table # string that would add the foreign keys if run in a migration. def dump_table_foreign_keys(table, options=OPTS) if supports_foreign_key_parsing? fks = foreign_key_list(table, options).sort_by{|fk| fk[:columns].map{|c| c.to_s}} end if fks.nil? || fks.empty? '' else dump_add_fk_constraints(table, fks) end end # Return a Schema::Generator object that will recreate the # table's schema. Takes the same options as dump_schema_migration. def dump_table_generator(table, options=OPTS) table = table.value.to_s if table.is_a?(SQL::Identifier) raise(Error, "must provide table as a Symbol, String, or Sequel::SQL::Identifier") unless [String, Symbol].any?{|c| table.is_a?(c)} s = schema(table).dup pks = s.find_all{|x| x.last[:primary_key] == true}.map{|x| x.first} options = options.merge(:single_pk=>true) if pks.length == 1 m = method(:recreate_column) im = method(:index_to_generator_opts) if options[:indexes] != false && supports_index_parsing? indexes = indexes(table).sort_by{|k,v| k.to_s} end if options[:foreign_keys] != false && supports_foreign_key_parsing? fk_list = foreign_key_list(table) if (sfk = options[:skipped_foreign_keys]) && (sfkt = sfk[table]) fk_list.delete_if{|fk| sfkt.has_key?(fk[:columns])} end composite_fks, single_fks = fk_list.partition{|h| h[:columns].length > 1} fk_hash = {} single_fks.each do |fk| column = fk.delete(:columns).first fk.delete(:name) fk_hash[column] = fk end s = s.map do |name, info| if fk_info = fk_hash[name] [name, fk_info.merge(info)] else [name, info] end end end create_table_generator do s.each{|name, info| m.call(name, info, self, options)} primary_key(pks) if !@primary_key && pks.length > 0 indexes.each{|iname, iopts| send(:index, iopts[:columns], im.call(table, iname, iopts, options))} if indexes composite_fks.each{|fk| send(:foreign_key, fk[:columns], fk)} if composite_fks end end # Return a string that containing add_index/drop_index method calls for # creating the index migration. def dump_table_indexes(table, meth, options=OPTS) if supports_index_parsing? 
indexes = indexes(table).sort_by{|k,v| k.to_s} else return '' end im = method(:index_to_generator_opts) gen = create_table_generator do indexes.each{|iname, iopts| send(:index, iopts[:columns], im.call(table, iname, iopts, options))} end gen.dump_indexes(meth=>table, :ignore_errors=>!options[:same_db]) end # Convert the parsed index information into options to the Generators index method. def index_to_generator_opts(table, name, index_opts, options=OPTS) h = {} if options[:index_names] != false && default_index_name(table, index_opts[:columns]) != name.to_s if options[:index_names] == :namespace && !global_index_namespace? h[:name] = "#{table}_#{name}".to_sym else h[:name] = name end end h[:unique] = true if index_opts[:unique] h[:deferrable] = true if index_opts[:deferrable] h end # Sort the tables so that referenced tables are created before tables that # reference them, and then by name. If foreign keys are disabled, just sort by name. def sort_dumped_tables(tables, options=OPTS) if options[:foreign_keys] != false && supports_foreign_key_parsing? table_fks = {} tables.each{|t| table_fks[t] = foreign_key_list(t)} # Remove self referential foreign keys, not important when sorting. table_fks.each{|t, fks| fks.delete_if{|fk| fk[:table] == t}} tables, skipped_foreign_keys = sort_dumped_tables_topologically(table_fks, []) options[:skipped_foreign_keys] = skipped_foreign_keys tables else tables.sort_by{|t| t.to_s} end end # Do a topological sort of tables, so that referenced tables # come before referencing tables. Returns an array of sorted # tables and a hash of skipped foreign keys. The hash will be # empty unless there are circular dependencies. def sort_dumped_tables_topologically(table_fks, sorted_tables) skipped_foreign_keys = {} until table_fks.empty? this_loop = [] table_fks.each do |table, fks| fks.delete_if{|fk| !table_fks.has_key?(fk[:table])} this_loop << table if fks.empty? end if this_loop.empty? # No tables were changed this round, there must be a circular dependency. # Break circular dependency by picking the table with the least number of # outstanding foreign keys and skipping those foreign keys. # The skipped foreign keys will be added at the end of the # migration. skip_table, skip_fks = table_fks.sort_by{|table, fks| [fks.length, table.to_s]}.first skip_fks_hash = skipped_foreign_keys[skip_table] = {} skip_fks.each{|fk| skip_fks_hash[fk[:columns]] = fk} this_loop << skip_table end # Add sorted tables from this loop to the final list sorted_tables.concat(this_loop.sort_by{|t| t.to_s}) # Remove tables that were handled this loop this_loop.each{|t| table_fks.delete(t)} end [sorted_tables, skipped_foreign_keys] end # Don't use a literal string fallback on MySQL, since the defaults it uses aren't # valid literal SQL values. def use_column_schema_to_ruby_default_fallback? 
database_type != :mysql end end module Schema class Generator # Dump this generator's columns to a string that could be evaled inside # another instance to represent the same columns def dump_columns strings = [] cols = columns.dup cols.each do |x| x.delete(:on_delete) if x[:on_delete] == :no_action x.delete(:on_update) if x[:on_update] == :no_action end if pkn = primary_key_name cols.delete_if{|x| x[:name] == pkn} pk = @primary_key.dup pkname = pk.delete(:name) @db.serial_primary_key_options.each{|k,v| pk.delete(k) if v == pk[k]} strings << "primary_key #{pkname.inspect}#{opts_inspect(pk)}" end cols.each do |c| c = c.dup name = c.delete(:name) strings << if table = c.delete(:table) c.delete(:type) if c[:type] == Integer || c[:type] == 'integer' "foreign_key #{name.inspect}, #{table.inspect}#{opts_inspect(c)}" else type = c.delete(:type) opts = opts_inspect(c) if type.is_a?(Class) "#{type.name} #{name.inspect}#{opts}" else "column #{name.inspect}, #{type.inspect}#{opts}" end end end strings.join("\n") end # Dump this generator's constraints to a string that could be evaled inside # another instance to represent the same constraints def dump_constraints cs = constraints.map do |c| c = c.dup type = c.delete(:type) case type when :check raise(Error, "can't dump check/constraint specified with Proc") if c[:check].is_a?(Proc) name = c.delete(:name) if !name and c[:check].length == 1 and c[:check].first.is_a?(Hash) "check #{c[:check].first.inspect[1...-1]}" else "#{name ? "constraint #{name.inspect}," : 'check'} #{c[:check].map{|x| x.inspect}.join(', ')}" end when :foreign_key c.delete(:on_delete) if c[:on_delete] == :no_action c.delete(:on_update) if c[:on_update] == :no_action c.delete(:deferrable) unless c[:deferrable] cols = c.delete(:columns) table = c.delete(:table) "#{type} #{cols.inspect}, #{table.inspect}#{opts_inspect(c)}" else cols = c.delete(:columns) "#{type} #{cols.inspect}#{opts_inspect(c)}" end end cs.join("\n") end # Dump this generator's indexes to a string that could be evaled inside # another instance to represent the same indexes. Options: # * :add_index - Use add_index instead of index, so the methods # can be called outside of a generator but inside a migration. # The value of this option should be the table name to use. # * :drop_index - Same as add_index, but create drop_index statements. # * :ignore_errors - Add the ignore_errors option to the outputted indexes def dump_indexes(options=OPTS) is = indexes.map do |c| c = c.dup cols = c.delete(:columns) if table = options[:add_index] || options[:drop_index] "#{options[:drop_index] ? 'drop' : 'add'}_index #{table.inspect}, #{cols.inspect}#{', :ignore_errors=>true' if options[:ignore_errors]}#{opts_inspect(c)}" else "index #{cols.inspect}#{opts_inspect(c)}" end end is = is.reverse if options[:drop_index] is.join("\n") end private # Return a string that converts the given options into one # suitable for literal ruby code, handling default values # that don't default to a literal interpretation. 
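      #
      # For example, a minimal sketch of the expected return value:
      #
      #   opts_inspect(:default=>1, :null=>false)
      #   # => ", :default=>1, :null=>false"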
def opts_inspect(opts) if opts[:default] opts = opts.dup de = Sequel.eval_inspect(opts.delete(:default)) ", :default=>#{de}#{", #{opts.inspect[1...-1]}" if opts.length > 0}" else ", #{opts.inspect[1...-1]}" if opts.length > 0 end end end end Database.register_extension(:schema_dumper, SchemaDumper) end �����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/lib/sequel/extensions/select_remove.rb��������������������������������������������0000664�0000000�0000000�00000004166�12201565355�0023431�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������# The select_remove extension adds Sequel::Dataset#select_remove for removing existing selected # columns from a dataset. It's not part of Sequel core as it is rarely needed and has # some corner cases where it can't work correctly. # # You can load this extension into specific datasets: # # ds = DB[:table] # ds.extension(:select_remove) # # Or you can load it into all of a database's datasets, which # is probably the desired behavior if you are using this extension: # # DB.extension(:select_remove) module Sequel module SelectRemove # Remove columns from the list of selected columns. If any of the currently selected # columns use expressions/aliases, this will remove selected columns with the given # aliases. It will also remove entries from the selection that match exactly: # # # Assume columns a, b, and c in items table # DB[:items] # SELECT * FROM items # DB[:items].select_remove(:c) # SELECT a, b FROM items # DB[:items].select(:a, :b___c, :c___b).select_remove(:c) # SELECT a, c AS b FROM items # DB[:items].select(:a, :b___c, :c___b).select_remove(:c___b) # SELECT a, b AS c FROM items # # Note that there are a few cases where this method may not work correctly: # # * This dataset joins multiple tables and does not have an existing explicit selection. # In this case, the code will currently use unqualified column names for all columns # the dataset returns, except for the columns given. # * This dataset has an existing explicit selection containing an item that returns # multiple database columns (e.g. Sequel.expr(:table).*, Sequel.lit('column1, column2')). In this case, # the behavior is undefined and this method should not be used. # # There may be other cases where this method does not work correctly, use it with caution. def select_remove(*cols) if (sel = @opts[:select]) && !sel.empty? 
select(*(columns.zip(sel).reject{|c, s| cols.include?(c)}.map{|c, s| s} - cols)) else select(*(columns - cols)) end end end Dataset.register_extension(:select_remove, SelectRemove) end ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/lib/sequel/extensions/sequel_3_dataset_methods.rb���������������������������������0000664�0000000�0000000�00000010063�12201565355�0025536�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������# This adds the following dataset methods: # # []= :: filter with the first argument, update with the second # insert_multiple :: insert multiple rows at once # set :: alias for update # to_csv :: return string in csv format for the dataset # db= :: change the dataset's database # opts= :: change the dataset's opts # # It is only recommended to use this for backwards compatibility. # # You can load this extension into specific datasets: # # ds = DB[:table] # ds.extension(:sequel_3_dataset_methods) # # Or you can load it into all of a database's datasets, which # is probably the desired behavior if you are using this extension: # # DB.extension(:sequel_3_dataset_methods) module Sequel module Sequel3DatasetMethods COMMA = Dataset::COMMA # The database related to this dataset. This is the Database instance that # will execute all of this dataset's queries. attr_writer :db # The hash of options for this dataset, keys are symbols. attr_writer :opts # Update all records matching the conditions with the values specified. # Returns the number of rows affected. # # DB[:table][:id=>1] = {:id=>2} # UPDATE table SET id = 2 WHERE id = 1 # # => 1 # number of rows affected def []=(conditions, values) filter(conditions).update(values) end # Inserts multiple values. If a block is given it is invoked for each # item in the given array before inserting it. See +multi_insert+ as # a possibly faster version that may be able to insert multiple # records in one SQL statement (if supported by the database). # Returns an array of primary keys of inserted rows. # # DB[:table].insert_multiple([{:x=>1}, {:x=>2}]) # # => [4, 5] # # INSERT INTO table (x) VALUES (1) # # INSERT INTO table (x) VALUES (2) # # DB[:table].insert_multiple([{:x=>1}, {:x=>2}]){|row| row[:y] = row[:x] * 2; row } # # => [6, 7] # # INSERT INTO table (x, y) VALUES (1, 2) # # INSERT INTO table (x, y) VALUES (2, 4) def insert_multiple(array, &block) if block array.map{|i| insert(block.call(i))} else array.map{|i| insert(i)} end end # Return a copy of the dataset with unqualified identifiers in the # SELECT, WHERE, GROUP, HAVING, and ORDER clauses qualified by the # given table. If no columns are currently selected, select all # columns of the given table. # # DB[:items].filter(:id=>1).qualify_to(:i) # # SELECT i.* FROM items WHERE (i.id = 1) def qualify_to(table) qualify(table) end # Qualify the dataset to its current first source. 
    # This is useful if you have unqualified identifiers in the query that all refer to
    # the first source, and you want to join to another table which
    # has columns with the same name as columns in the current dataset.
    # See +qualify_to+.
    #
    #   DB[:items].filter(:id=>1).qualify_to_first_source
    #   # SELECT items.* FROM items WHERE (items.id = 1)
    def qualify_to_first_source
      qualify
    end

    # Alias for update, but not aliased directly so subclasses
    # don't have to override both methods.
    def set(*args)
      update(*args)
    end

    # Returns a string in CSV format containing the dataset records.  By
    # default the CSV representation includes the column titles in the
    # first line.  You can turn that off by passing false as the
    # include_column_titles argument.
    #
    # This does not use a CSV library or handle quoting of values in
    # any way.  If any values in any of the rows could include commas or line
    # endings, you shouldn't use this.
    #
    #   puts DB[:table].to_csv # SELECT * FROM table
    #   # id,name
    #   # 1,Jim
    #   # 2,Bob
    def to_csv(include_column_titles = true)
      n = naked
      cols = n.columns
      csv = ''
      csv << "#{cols.join(COMMA)}\r\n" if include_column_titles
      n.each{|r| csv << "#{cols.collect{|c| r[c]}.join(COMMA)}\r\n"}
      csv
    end
  end

  Dataset.register_extension(:sequel_3_dataset_methods, Sequel3DatasetMethods)
end
ruby-sequel-4.1.1/lib/sequel/extensions/server_block.rb
# The server_block extension adds the Database#with_server method, which takes a shard
# argument and a block, and makes it so that access inside the block will use the
# specified shard by default.
#
# First, you need to enable it on the database object:
#
#   DB.extension :server_block
#
# Then you can call with_server:
#
#   DB.with_server(:shard1) do
#     DB[:a].all # Uses shard1
#     DB[:a].server(:shard2).all # Uses shard2
#   end
#   DB[:a].all # Uses default
#
# You can even nest calls to with_server:
#
#   DB.with_server(:shard1) do
#     DB[:a].all # Uses shard1
#     DB.with_server(:shard2) do
#       DB[:a].all # Uses shard2
#     end
#     DB[:a].all # Uses shard1
#   end
#   DB[:a].all # Uses default
#
# Note that this extension assumes the following shard names should use the
# server/shard passed to with_server: :default, nil, :read_only.  All other
# shard names will cause the standard behavior to be used.

module Sequel
  module ServerBlock
    # Enable the server block on the connection pool, choosing the correct
    # extension depending on whether the connection pool is threaded or not.
    # Also defines the with_server method on the receiver for easy use.
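    #
    # As a rough usage sketch (the shard name :shard1 and the :users table
    # here are hypothetical, and would be configured via the Database
    # :servers option and your schema respectively):
    #
    #   DB.extension :server_block
    #   DB.with_server(:shard1) do
    #     DB[:users].count # query runs against :shard1
    #   end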
    def self.extended(db)
      pool = db.pool
      if defined?(ShardedThreadedConnectionPool) && pool.is_a?(ShardedThreadedConnectionPool)
        pool.extend(ThreadedServerBlock)
        pool.instance_variable_set(:@default_servers, {})
      else
        pool.extend(UnthreadedServerBlock)
        pool.instance_variable_set(:@default_servers, [])
      end
    end

    # Delegate to the connection pool
    def with_server(server, &block)
      pool.with_server(server, &block)
    end
  end

  # Adds with_server support for the sharded single connection pool.
  module UnthreadedServerBlock
    # Set a default server/shard to use inside the block.
    def with_server(server)
      begin
        set_default_server(server)
        yield
      ensure
        clear_default_server
      end
    end

    private

    # Make the given server the new default server.
    def set_default_server(server)
      @default_servers << server
    end

    # Remove the current default server, restoring the
    # previous default server.
    def clear_default_server
      @default_servers.pop
    end

    # Use the server given to with_server if appropriate.
    def pick_server(server)
      if @default_servers.empty?
        super
      else
        case server
        when :default, nil, :read_only
          @default_servers.last
        else
          super
        end
      end
    end
  end

  # Adds with_server support for the sharded threaded connection pool.
  module ThreadedServerBlock
    # Set a default server/shard to use inside the block for the current
    # thread.
    def with_server(server)
      begin
        set_default_server(server)
        yield
      ensure
        clear_default_server
      end
    end

    private

    # Make the given server the new default server for the current thread.
    def set_default_server(server)
      sync{(@default_servers[Thread.current] ||= [])} << server
    end

    # Remove the current default server for the current thread, restoring the
    # previous default server.
    def clear_default_server
      t = Thread.current
      a = sync{@default_servers[t]}
      a.pop
      sync{@default_servers.delete(t)} if a.empty?
    end

    # Use the server given to with_server for the given thread, if appropriate.
    def pick_server(server)
      a = sync{@default_servers[Thread.current]}
      if !a || a.empty?
        super
      else
        case server
        when :default, nil, :read_only
          a.last
        else
          super
        end
      end
    end
  end

  Database.register_extension(:server_block, ServerBlock)
end
ruby-sequel-4.1.1/lib/sequel/extensions/set_overrides.rb
# The set_overrides extension adds the Dataset#set_overrides and
# Dataset#set_defaults methods which provide a crude way to
# control the values used in INSERT/UPDATE statements if a hash
# of values is passed to Dataset#insert or Dataset#update.
# It is only recommended to use this for backwards compatibility.
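#
# As a rough sketch of how the two methods interact (the +items+ table and
# its +a+ and +b+ columns are assumptions for illustration): values passed
# to insert win over set_defaults, while set_overrides wins over values
# passed to insert:
#
#   ds = DB[:items].set_defaults(:a=>1).set_overrides(:b=>2)
#   ds.insert(:a=>3, :b=>4) # INSERT INTO items (a, b) VALUES (3, 2)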
#
# You can load this extension into specific datasets:
#
#   ds = DB[:table]
#   ds.extension(:set_overrides)
#
# Or you can load it into all of a database's datasets, which
# is probably the desired behavior if you are using this extension:
#
#   DB.extension(:set_overrides)

module Sequel
  module SetOverrides
    Dataset::NON_SQL_OPTIONS.concat([:defaults, :overrides])
    Dataset.def_mutation_method(:set_defaults, :set_overrides, :module=>self)

    # Set overrides/defaults for insert hashes
    def insert_sql(*values)
      if values.size == 1 && (vals = values.first).is_a?(Hash)
        super(merge_defaults_overrides(vals))
      else
        super
      end
    end

    # Set the default values for insert and update statements.  The values hash passed
    # to insert or update are merged into this hash, so any values in the hash passed
    # to insert or update will override values passed to this method.
    #
    #   DB[:items].set_defaults(:a=>'a', :c=>'c').insert(:a=>'d', :b=>'b')
    #   # INSERT INTO items (a, c, b) VALUES ('d', 'c', 'b')
    def set_defaults(hash)
      clone(:defaults=>(@opts[:defaults]||{}).merge(hash))
    end

    # Set values that override hash arguments given to insert and update statements.
    # This hash is merged into the hash provided to insert or update, so values
    # will override any values given in the insert/update hashes.
    #
    #   DB[:items].set_overrides(:a=>'a', :c=>'c').insert(:a=>'d', :b=>'b')
    #   # INSERT INTO items (a, c, b) VALUES ('a', 'c', 'b')
    def set_overrides(hash)
      clone(:overrides=>hash.merge(@opts[:overrides]||{}))
    end

    # Set overrides/defaults for update hashes
    def update_sql(values = {})
      if values.is_a?(Hash)
        super(merge_defaults_overrides(values))
      else
        super
      end
    end

    private

    # Return a new hash with merged defaults and overrides.
    def merge_defaults_overrides(vals)
      vals = @opts[:defaults].merge(vals) if @opts[:defaults]
      vals = vals.merge(@opts[:overrides]) if @opts[:overrides]
      vals
    end
  end

  Dataset.register_extension(:set_overrides, SetOverrides)
end
ruby-sequel-4.1.1/lib/sequel/extensions/split_array_nil.rb
# The split_array_nil extension overrides Sequel's default handling of
# IN/NOT IN with arrays of values to do specific nil checking.  For example,
#
#   ds = DB[:table].where(:column=>[1, nil])
#
# By default, that produces the following SQL:
#
#   SELECT * FROM table WHERE (column IN (1, NULL))
#
# However, because NULL = NULL is not true in SQL (it is NULL), this
# will not return rows in the table where the column is NULL.
# This extension allows for an alternative behavior more similar to ruby,
# which will return rows in the table where the column is NULL, using
# a query like:
#
#   SELECT * FROM table WHERE ((column IN (1)) OR (column IS NULL)))
#
# Similarly, for NOT IN queries:
#
#   ds = DB[:table].exclude(:column=>[1, nil])
#   # Default:
#   # SELECT * FROM table WHERE (column NOT IN (1, NULL))
#   # with split_array_nils extension:
#   # SELECT * FROM table WHERE ((column NOT IN (1)) AND (column IS NOT NULL)))
#
# To use this extension with a single dataset:
#
#   ds = ds.extension(:split_array_nil)
#
# To use this extension for all of a database's datasets:
#
#   DB.extension(:split_array_nil)

module Sequel
  class Dataset
    module SplitArrayNil
      # Override the IN/NOT IN handling with an array of values where one of the
      # values in the array is nil, by removing nils from the array of values,
      # and using a separate OR IS NULL clause for IN or AND IS NOT NULL clause
      # for NOT IN.
      def complex_expression_sql_append(sql, op, args)
        case op
        when :IN, :"NOT IN"
          vals = args.at(1)
          if vals.is_a?(Array) && vals.any?{|v| v.nil?}
            cols = args.at(0)
            vals = vals.compact
            c = Sequel::SQL::BooleanExpression
            if op == :IN
              literal_append(sql, c.new(:OR, c.new(:IN, cols, vals), c.new(:IS, cols, nil)))
            else
              literal_append(sql, c.new(:AND, c.new(:"NOT IN", cols, vals), c.new(:"IS NOT", cols, nil)))
            end
          else
            super
          end
        else
          super
        end
      end
    end
  end

  Dataset.register_extension(:split_array_nil, Dataset::SplitArrayNil)
end
ruby-sequel-4.1.1/lib/sequel/extensions/sql_expr.rb
# The sql_expr extension adds the sql_expr method to every object, which
# returns a wrapped object that works nicely with Sequel's DSL by calling
# Sequel.expr:
#
#   1.sql_expr < :a     # 1 < a
#   false.sql_expr & :a # FALSE AND a
#   true.sql_expr | :a  # TRUE OR a
#   ~nil.sql_expr       # NOT NULL
#   "a".sql_expr + "b"  # 'a' || 'b'
#
# To load the extension:
#
#   Sequel.extension :sql_expr

class Object
  # Return the object wrapped in an appropriate Sequel expression object.
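  #
  # A short sketch of how this is typically used in a query (the table
  # name is hypothetical):
  #
  #   DB[:items].where(1.sql_expr < :a).sql
  #   # SELECT * FROM items WHERE (1 < a)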
  def sql_expr
    Sequel.expr(self)
  end
end
ruby-sequel-4.1.1/lib/sequel/extensions/string_date_time.rb
# The string_date_time extension provides String instance methods
# for converting the strings to a date (e.g. String#to_date), allowing
# for backwards compatibility with legacy Sequel code.
#
# To load the extension:
#
#   Sequel.extension :string_date_time

class String
  # Converts a string into a Date object.
  def to_date
    begin
      Date.parse(self, Sequel.convert_two_digit_years)
    rescue => e
      raise Sequel.convert_exception_class(e, Sequel::InvalidValue)
    end
  end

  # Converts a string into a DateTime object.
  def to_datetime
    begin
      DateTime.parse(self, Sequel.convert_two_digit_years)
    rescue => e
      raise Sequel.convert_exception_class(e, Sequel::InvalidValue)
    end
  end

  # Converts a string into a Time or DateTime object, depending on the
  # value of Sequel.datetime_class
  def to_sequel_time
    begin
      if Sequel.datetime_class == DateTime
        DateTime.parse(self, Sequel.convert_two_digit_years)
      else
        Sequel.datetime_class.parse(self)
      end
    rescue => e
      raise Sequel.convert_exception_class(e, Sequel::InvalidValue)
    end
  end

  # Converts a string into a Time object.
  def to_time
    begin
      Time.parse(self)
    rescue => e
      raise Sequel.convert_exception_class(e, Sequel::InvalidValue)
    end
  end
end
ruby-sequel-4.1.1/lib/sequel/extensions/thread_local_timezones.rb
# The thread_local_timezones extension allows you to set a per-thread timezone that
# will override the default global timezone while the thread is executing.  The
# main use case is for web applications that execute each request in its own thread,
# and want to set the timezones based on the request.
#
# To load the extension:
#
#   Sequel.extension :thread_local_timezones
#
# The most common example is having the database always store time in
# UTC, but have the application deal with the timezone of the current
# user.
# That can be done with:
#
#   Sequel.database_timezone = :utc
#   # In each thread:
#   Sequel.thread_application_timezone = current_user.timezone
#
# This extension is designed to work with the named_timezones extension.
#
# This extension adds the thread_application_timezone=, thread_database_timezone=,
# and thread_typecast_timezone= methods to the Sequel module.  It overrides
# the application_timezone, database_timezone, and typecast_timezone
# methods to check the related thread local timezone first, and use it if present.
# If the related thread local timezone is not present, it falls back to the
# default global timezone.
#
# There is one special case of note.  If you have a default global timezone
# and you want to have a nil thread local timezone, you have to set the thread
# local value to :nil instead of nil:
#
#   Sequel.application_timezone = :utc
#   Sequel.thread_application_timezone = nil
#   Sequel.application_timezone # => :utc
#   Sequel.thread_application_timezone = :nil
#   Sequel.application_timezone # => nil

module Sequel
  module ThreadLocalTimezones
    %w'application database typecast'.each do |t|
      class_eval("def thread_#{t}_timezone=(tz); Thread.current[:#{t}_timezone] = convert_timezone_setter_arg(tz); end", __FILE__, __LINE__)
      class_eval(<<END, __FILE__, __LINE__ + 1)
        def #{t}_timezone
          if tz = Thread.current[:#{t}_timezone]
            tz unless tz == :nil
          else
            super
          end
        end
END
    end
  end

  extend ThreadLocalTimezones
end
ruby-sequel-4.1.1/lib/sequel/extensions/to_dot.rb
# This adds a <tt>Sequel::Dataset#to_dot</tt> method.  The +to_dot+ method
# returns a string that can be processed by graphviz's +dot+ program in
# order to get a visualization of the dataset.  Basically, it shows a version
# of the dataset's abstract syntax tree.
#
# To load the extension:
#
#   Sequel.extension :to_dot

module Sequel
  class ToDot
    module DatasetMethods
      # Return a string that can be processed by the +dot+ program (included
      # with graphviz) in order to see a visualization of the dataset's
      # abstract syntax tree.
      def to_dot
        ToDot.output(self)
      end
    end

    # The option keys that should be included in the dot output.
    TO_DOT_OPTIONS = [:with, :distinct, :select, :from, :join, :where, :group, :having, :compounds, :order, :limit, :offset, :lock].freeze

    # Given a +Dataset+, return a string in +dot+ format that will
    # generate a visualization of the dataset.
    def self.output(ds)
      new(ds).output
    end

    # Given a +Dataset+, parse the internal structure to generate
    # a dataset visualization.
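    #
    # A possible workflow sketch (assumes graphviz's +dot+ binary is
    # installed and /tmp is writable):
    #
    #   ds = DB[:items].where(:id=>1).extension(:to_dot)
    #   File.open('/tmp/ds.dot', 'w'){|f| f.write(ds.to_dot)}
    #   # then: dot -Tpng -o /tmp/ds.png /tmp/ds.dot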
    def initialize(ds)
      @i = 0
      @stack = [@i]
      @dot = ["digraph G {", "0 [label=\"self\"];"]
      v(ds, "")
      @dot << "}"
    end

    # Output the dataset visualization as a string in +dot+ format.
    def output
      @dot.join("\n")
    end

    private

    # Add an entry to the +dot+ output with the given label.  If +j+
    # is given, it is used directly as the node or transition.  Otherwise
    # a node is created for the current object.
    def dot(label, j=nil)
      @dot << "#{j||@i} [label=#{label.to_s.inspect}];"
    end

    # Recursive method that parses all of Sequel's internal datastructures,
    # adding the appropriate nodes and transitions to the internal +dot+
    # structure.
    def v(e, l)
      @i += 1
      dot(l, "#{@stack.last} -> #{@i}") if l
      @stack.push(@i)
      case e
      when LiteralString
        dot "#{e.inspect}.lit" # core_sql use
      when Symbol, Numeric, String, Class, TrueClass, FalseClass, NilClass
        dot e.inspect
      when Array
        dot "Array"
        e.each_with_index do |val, j|
          v(val, j)
        end
      when Hash
        dot "Hash"
        e.each do |k, val|
          v(val, k)
        end
      when SQL::ComplexExpression
        dot "ComplexExpression: #{e.op}"
        e.args.each_with_index do |val, j|
          v(val, j)
        end
      when SQL::Identifier
        dot "Identifier"
        v(e.value, :value)
      when SQL::QualifiedIdentifier
        dot "QualifiedIdentifier"
        v(e.table, :table)
        v(e.column, :column)
      when SQL::OrderedExpression
        dot "OrderedExpression: #{e.descending ? :DESC : :ASC}#{" NULLS #{e.nulls.to_s.upcase}" if e.nulls}"
        v(e.expression, :expression)
      when SQL::AliasedExpression
        dot "AliasedExpression"
        v(e.expression, :expression)
        v(e.aliaz, :alias)
      when SQL::CaseExpression
        dot "CaseExpression"
        v(e.expression, :expression) if e.expression
        v(e.conditions, :conditions)
        v(e.default, :default)
      when SQL::Cast
        dot "Cast"
        v(e.expr, :expr)
        v(e.type, :type)
      when SQL::Function
        dot "Function: #{e.f}"
        e.args.each_with_index do |val, j|
          v(val, j)
        end
      when SQL::Subscript
        dot "Subscript"
        v(e.f, :f)
        v(e.sub, :sub)
      when SQL::WindowFunction
        dot "WindowFunction"
        v(e.function, :function)
        v(e.window, :window)
      when SQL::Window
        dot "Window"
        v(e.opts, :opts)
      when SQL::PlaceholderLiteralString
        str = e.str
        str = "(#{str})" if e.parens
        dot "PlaceholderLiteralString: #{str.inspect}"
        v(e.args, :args)
      when SQL::JoinClause
        str = "#{e.join_type.to_s.upcase} JOIN"
        if e.is_a?(SQL::JoinOnClause)
          str << " ON"
        elsif e.is_a?(SQL::JoinUsingClause)
          str << " USING"
        end
        dot str
        v(e.table, :table)
        v(e.table_alias, :alias) if e.table_alias
        if e.is_a?(SQL::JoinOnClause)
          v(e.on, :on)
        elsif e.is_a?(SQL::JoinUsingClause)
          v(e.using, :using)
        end
      when Dataset
        dot "Dataset"
        TO_DOT_OPTIONS.each do |k|
          if val = e.opts[k]
            v(val, k.to_s)
          end
        end
      else
        dot "Unhandled: #{e.inspect}"
      end
      @stack.pop
    end
  end

  Dataset.register_extension(:to_dot, ToDot::DatasetMethods)
end
ruby-sequel-4.1.1/lib/sequel/model.rb
require 'sequel/core'

module Sequel
  # Lets you create a Model subclass with its dataset already set.
  # +source+ should be an instance of one of the following classes:
  #
  # Database :: Sets the database for this model to +source+.
  #             Generally only useful when subclassing directly
  #             from the returned class, where the name of the
  #             subclass sets the table name (which is combined
  #             with the +Database+ in +source+ to create the
  #             dataset to use)
  # Dataset :: Sets the dataset for this model to +source+.
  # other :: Sets the table name for this model to +source+.  The
  #          class will use the default database for model
  #          classes in order to create the dataset.
  #
  # The purpose of this method is to set the dataset/database automatically
  # for a model class, if the table name doesn't match the implicit
  # name.  This is neater than using set_dataset inside the class,
  # and doesn't require a bogus query for the schema.
  #
  #   # Using a symbol
  #   class Comment < Sequel::Model(:something)
  #     table_name # => :something
  #   end
  #
  #   # Using a dataset
  #   class Comment < Sequel::Model(DB1[:something])
  #     dataset # => DB1[:something]
  #   end
  #
  #   # Using a database
  #   class Comment < Sequel::Model(DB1)
  #     dataset # => DB1[:comments]
  #   end
  def self.Model(source)
    if cache_anonymous_models && (klass = Sequel.synchronize{Model::ANONYMOUS_MODEL_CLASSES[source]})
      return klass
    end
    klass = if source.is_a?(Database)
      c = Class.new(Model)
      c.db = source
      c
    else
      Class.new(Model).set_dataset(source)
    end
    Sequel.synchronize{Model::ANONYMOUS_MODEL_CLASSES[source] = klass} if cache_anonymous_models
    klass
  end

  @cache_anonymous_models = true

  class << self
    # Whether to cache the anonymous models created by Sequel::Model().  This is
    # required for reloading them correctly (avoiding the superclass mismatch).  True
    # by default for backwards compatibility.
    attr_accessor :cache_anonymous_models
  end

  # <tt>Sequel::Model</tt> is an object relational mapper built on top of Sequel core.  Each
  # model class is backed by a dataset instance, and many dataset methods can be
  # called directly on the class.  Model datasets return rows as model instances,
  # which have fairly standard ORM instance behavior.
  #
  # <tt>Sequel::Model</tt> is built completely out of plugins.  Plugins can override any class,
  # instance, or dataset method defined by a previous plugin and call super to get the default
  # behavior.  By default, <tt>Sequel::Model</tt> loads two plugins, <tt>Sequel::Model</tt>
  # (which is itself a plugin) for the base support, and <tt>Sequel::Model::Associations</tt>
  # for the associations support.
  #
  # You can set the +SEQUEL_NO_ASSOCIATIONS+ constant or environment variable to
  # make Sequel not load the associations plugin by default.
  class Model
    OPTS = Sequel::OPTS

    # Map that stores model classes created with <tt>Sequel::Model()</tt>, to allow the reopening
    # of classes when dealing with code reloading.
    ANONYMOUS_MODEL_CLASSES = {}

    # Class methods added to model that call the method of the same name on the dataset
    DATASET_METHODS = (Dataset::ACTION_METHODS + Dataset::QUERY_METHODS + [:each_server]) - [:and, :or, :[], :[]=, :columns, :columns!, :delete, :update, :add_graph_aliases]

    # Boolean settings that can be modified at the global, class, or instance level.
    BOOLEAN_SETTINGS = [:typecast_empty_string_to_nil, :typecast_on_assignment, :strict_param_setting, \
      :raise_on_save_failure, :raise_on_typecast_failure, :require_modification, :use_after_commit_rollback, :use_transactions]

    # Hooks that are called before an action.  Can return false to not do the action.  When
    # overriding these, it is recommended to call +super+ as the last line of your method,
    # so later hooks are called before earlier hooks.
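    #
    # A sketch of a typical override (the +updated_at+ column here is an
    # assumption for illustration):
    #
    #   class Album < Sequel::Model
    #     def before_save
    #       self.updated_at = Time.now
    #       super
    #     end
    #   end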
    BEFORE_HOOKS = [:before_create, :before_update, :before_save, :before_destroy, :before_validation]

    # Hooks that are called after an action.  When overriding these, it is recommended to call
    # +super+ on the first line of your method, so later hooks are called after earlier hooks.
    AFTER_HOOKS = [:after_create, :after_update, :after_save, :after_destroy,
      :after_validation, :after_commit, :after_rollback, :after_destroy_commit, :after_destroy_rollback]

    # Hooks that are called around an action.  If overridden, these methods must call super
    # exactly once if the behavior they wrap is desired.  They can be used to rescue exceptions
    # raised by the code they wrap or ensure that some behavior is executed no matter what.
    AROUND_HOOKS = [:around_create, :around_update, :around_save, :around_destroy, :around_validation]

    # Empty instance methods to create that the user can override to get hook/callback behavior.
    # Just like any other method defined by Sequel, if you override one of these, you should
    # call +super+ to get the default behavior (while empty by default, they can also be defined
    # by plugins).  See the {"Model Hooks" guide}[link:files/doc/model_hooks_rdoc.html] for
    # more detail on hooks.
    HOOKS = BEFORE_HOOKS + AFTER_HOOKS

    # Class instance variables that are inherited in subclasses.  If the value is <tt>:dup</tt>, dup is called
    # on the superclass's instance variable when creating the instance variable in the subclass.
    # If the value is +nil+, the superclass's instance variable is used directly in the subclass.
    INHERITED_INSTANCE_VARIABLES = {:@allowed_columns=>:dup, :@dataset_method_modules=>:dup,
      :@primary_key=>nil, :@use_transactions=>nil, :@raise_on_save_failure=>nil,
      :@require_modification=>nil, :@restricted_columns=>:dup, :@restrict_primary_key=>nil,
      :@simple_pk=>nil, :@simple_table=>nil, :@strict_param_setting=>nil,
      :@typecast_empty_string_to_nil=>nil, :@typecast_on_assignment=>nil,
      :@raise_on_typecast_failure=>nil, :@plugins=>:dup, :@setter_methods=>nil,
      :@use_after_commit_rollback=>nil, :@fast_pk_lookup_sql=>nil,
      :@fast_instance_delete_sql=>nil, :@db=>nil, :@default_set_fields_options=>:dup}

    # Regular expression that determines if a method name is normal in the sense that
    # it could be used literally in ruby code without using send.  Used to
    # avoid problems when using eval with a string to define methods.
    NORMAL_METHOD_NAME_REGEXP = /\A[A-Za-z_][A-Za-z0-9_]*\z/

    # Regular expression that determines if the method is a valid setter name
    # (i.e. it ends with =).
    SETTER_METHOD_REGEXP = /=\z/

    @allowed_columns = nil
    @db = nil
    @db_schema = nil
    @dataset = nil
    @dataset_method_modules = []
    @default_eager_limit_strategy = true
    @default_set_fields_options = {}
    @overridable_methods_module = nil
    @fast_pk_lookup_sql = nil
    @fast_instance_delete_sql = nil
    @plugins = []
    @primary_key = :id
    @raise_on_save_failure = true
    @raise_on_typecast_failure = false
    @require_modification = nil
    @restrict_primary_key = true
    @restricted_columns = nil
    @setter_methods = nil
    @simple_pk = nil
    @simple_table = nil
    @strict_param_setting = true
    @typecast_empty_string_to_nil = true
    @typecast_on_assignment = true
    @use_after_commit_rollback = true
    @use_transactions = true

    Sequel.require %w"default_inflections inflections plugins dataset_module base exceptions errors", "model"
    if !defined?(::SEQUEL_NO_ASSOCIATIONS) && !ENV.has_key?('SEQUEL_NO_ASSOCIATIONS')
      Sequel.require 'associations', 'model'
      plugin Model::Associations
    end

    # The setter methods (methods ending with =) that are never allowed
    # to be called automatically via +set+/+update+/+new+/etc..
    RESTRICTED_SETTER_METHODS = instance_methods.map{|x| x.to_s}.grep(SETTER_METHOD_REGEXP)
  end
end
ruby-sequel-4.1.1/lib/sequel/model/associations.rb
module Sequel
  class Model
    # Associations are used in order to specify relationships between model classes
    # that reflect relations between tables in the database using foreign keys.
    module Associations
      # Map of association type symbols to association reflection classes.
      ASSOCIATION_TYPES = {}

      # Set an empty association reflection hash in the model
      def self.apply(model)
        model.instance_variable_set(:@association_reflections, {})
        model.instance_variable_set(:@autoreloading_associations, {})
      end

      # AssociationReflection is a Hash subclass that keeps information on Sequel::Model associations.  It
      # provides methods to reduce internal code duplication.  It should not
      # be instantiated by the user.
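      #
      # Reflections are typically obtained through the model class, e.g.
      # (the Album model and its many_to_one :artist association here are
      # hypothetical):
      #
      #   ref = Album.association_reflection(:artist)
      #   ref[:type]           # => :many_to_one
      #   ref.associated_class # => Artist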
      class AssociationReflection < Hash
        include Sequel::Inflections

        # Name symbol for the _add internal association method
        def _add_method
          :"_add_#{singularize(self[:name])}"
        end

        # Name symbol for the _remove_all internal association method
        def _remove_all_method
          :"_remove_all_#{self[:name]}"
        end

        # Name symbol for the _remove internal association method
        def _remove_method
          :"_remove_#{singularize(self[:name])}"
        end

        # Name symbol for the _setter association method
        def _setter_method
          :"_#{self[:name]}="
        end

        # Name symbol for the add association method
        def add_method
          :"add_#{singularize(self[:name])}"
        end

        # Name symbol for association method, the same as the name of the association.
        def association_method
          self[:name]
        end

        # The class associated to the current model class via this association
        def associated_class
          cached_fetch(:class){constantize(self[:class_name])}
        end

        # The dataset associated via this association, with the non-instance specific
        # changes already applied.
        def associated_dataset
          cached_fetch(:_dataset){apply_dataset_changes(associated_class.dataset.clone)}
        end

        # Apply all non-instance specific changes to the given dataset and return it.
        def apply_dataset_changes(ds)
          ds.extend(AssociationDatasetMethods)
          ds.association_reflection = self
          self[:extend].each{|m| ds.extend(m)}
          ds = ds.select(*select) if select
          if c = self[:conditions]
            ds = (c.is_a?(Array) && !Sequel.condition_specifier?(c)) ? ds.where(*c) : ds.where(c)
          end
          ds = ds.order(*self[:order]) if self[:order]
          ds = ds.limit(*self[:limit]) if self[:limit]
          ds = ds.limit(1) if !returns_array? && self[:key]
          ds = ds.eager(*self[:eager]) if self[:eager]
          ds = ds.distinct if self[:distinct]
          ds
        end

        # Whether this association can have associated objects, given the current
        # object.  Should be false if obj cannot have associated objects because
        # the necessary key columns are NULL.
        def can_have_associated_objects?(obj)
          true
        end

        # Name symbol for the dataset association method
        def dataset_method
          :"#{self[:name]}_dataset"
        end

        # Whether the dataset needs a primary key to function, true by default.
        def dataset_need_primary_key?
          true
        end

        # The eager limit strategy to use for this dataset.
        def eager_limit_strategy
          cached_fetch(:_eager_limit_strategy) do
            if self[:limit]
              case s = cached_fetch(:eager_limit_strategy){self[:model].default_eager_limit_strategy || :ruby}
              when true
                ds = associated_class.dataset
                if ds.supports_window_functions?
                  :window_function
                else
                  :ruby
                end
              else
                s
              end
            else
              nil
            end
          end
        end

        # The key to use for the key hash when eager loading
        def eager_loader_key
          self[:eager_loader_key]
        end

        # By default associations do not need to select a key in an associated table
        # to eagerly load.
        def eager_loading_use_associated_key?
          false
        end

        # Alias of predicate_key, only for backwards compatibility.
        def eager_loading_predicate_key
          predicate_key
        end

        # Whether to eagerly graph a lazy dataset, true by default.  If this
        # is false, the association won't respect the :eager_graph option
        # when loading the association for a single record.
        def eager_graph_lazy_dataset?
          true
        end

        # The limit and offset for this association (returned as a two element array).
        def limit_and_offset
          if (v = self[:limit]).is_a?(Array)
            v
          else
            [v, nil]
          end
        end

        # Whether the associated object needs a primary key to be added/removed,
        # false by default.
        def need_associated_primary_key?
          false
        end

        # The keys to use for loading of the regular dataset, as an array.
        def predicate_keys
          cached_fetch(:predicate_keys){Array(predicate_key)}
        end

        # Qualify +col+ with the given table name.
        # If +col+ is an array of columns,
        # return an array of qualified columns.  Only qualifies Symbols and SQL::Identifier
        # values, other values are not modified.
        def qualify(table, col)
          transform(col) do |k|
            case k
            when Symbol, SQL::Identifier
              SQL::QualifiedIdentifier.new(table, k)
            else
              Sequel::Qualifier.new(self[:model].dataset, table).transform(k)
            end
          end
        end

        # Qualify col with the associated model's table name.
        def qualify_assoc(col)
          qualify(associated_class.table_name, col)
        end

        # Qualify col with the current model's table name.
        def qualify_cur(col)
          qualify(self[:model].table_name, col)
        end

        # Returns the reciprocal association variable, if one exists.  The reciprocal
        # association is the association in the associated class that is the opposite
        # of the current association.  For example, Album.many_to_one :artist and
        # Artist.one_to_many :albums are reciprocal associations.  This information is
        # used to populate reciprocal associations.  For example, when you do this_artist.add_album(album)
        # it sets album.artist to this_artist.
        def reciprocal
          cached_fetch(:reciprocal) do
            possible_recips = []

            associated_class.all_association_reflections.each do |assoc_reflect|
              if reciprocal_association?(assoc_reflect)
                possible_recips << assoc_reflect
              end
            end

            if possible_recips.length == 1
              cached_set(:reciprocal_type, possible_recips.first[:type]) if reciprocal_type.is_a?(Array)
              possible_recips.first[:name]
            end
          end
        end

        # Whether the reciprocal of this association returns an array of objects instead of a single object,
        # true by default.
        def reciprocal_array?
          true
        end

        # Name symbol for the remove_all_ association method
        def remove_all_method
          :"remove_all_#{self[:name]}"
        end

        # Whether associated objects need to be removed from the association before
        # being destroyed in order to preserve referential integrity.
        def remove_before_destroy?
          true
        end

        # Name symbol for the remove_ association method
        def remove_method
          :"remove_#{singularize(self[:name])}"
        end

        # Whether to check that an object to be disassociated is already associated to this object, false by default.
        def remove_should_check_existing?
          false
        end

        # Whether this association returns an array of objects instead of a single object,
        # true by default.
        def returns_array?
          true
        end

        # The columns to select when loading the association.
        def select
          self[:select]
        end

        # Whether to set the reciprocal association to self when loading associated
        # records, false by default.
        def set_reciprocal_to_self?
          false
        end

        # Name symbol for the setter association method
        def setter_method
          :"#{self[:name]}="
        end

        # The range used for slicing when using the :ruby eager limit strategy.
        def slice_range
          limit, offset = limit_and_offset
          if limit || offset
            (offset||0)..(limit ? (offset||0)+limit-1 : -1)
          end
        end

        private

        if defined?(RUBY_ENGINE) && RUBY_ENGINE != 'ruby'
          # :nocov:

          # On non-GVL rubies, assume the need to synchronize access.  Store the key
          # in a special sub-hash that always uses this method to synchronize access.
          def cached_fetch(key)
            fetch(key) do
              h = self[:cache]
              Sequel.synchronize{return h[key] if h.has_key?(key)}
              value = yield
              Sequel.synchronize{h[key] = value}
            end
          end

          # Cache the value at the given key, synchronizing access.
          def cached_set(key, value)
            h = self[:cache]
            Sequel.synchronize{h[key] = value}
          end
          # :nocov:
        else
          # On MRI, use a plain fetch, since the GVL will synchronize access.
          def cached_fetch(key)
            fetch(key) do
              h = self[:cache]
              h.fetch(key){h[key] = yield}
            end
          end

          # On MRI, just set the value at the key in the cache, since the GVL
          # will synchronize access.
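          # (A sketch of the memoization pattern these helpers support, as
          # used by the reflection methods above:
          #
          #   def predicate_keys
          #     cached_fetch(:predicate_keys){Array(predicate_key)}
          #   end
          #
          # the block is evaluated at most once, and the result is stored in
          # the reflection's cache hash.)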
          def cached_set(key, value)
            self[:cache][key] = value
          end
        end

        def reciprocal_association?(assoc_reflect)
          Array(reciprocal_type).include?(assoc_reflect[:type]) && assoc_reflect.associated_class == self[:model] && assoc_reflect[:conditions].nil? && assoc_reflect[:block].nil?
        end

        # If +s+ is an array, map +s+ over the block.  Otherwise, just call the
        # block with +s+.
        def transform(s)
          s.is_a?(Array) ? s.map(&Proc.new) : (yield s)
        end
      end

      class ManyToOneAssociationReflection < AssociationReflection
        ASSOCIATION_TYPES[:many_to_one] = self

        # many_to_one associations can only have associated objects if none of
        # the :keys options have a nil value.
        def can_have_associated_objects?(obj)
          !self[:keys].any?{|k| obj.send(k).nil?}
        end

        # Whether the dataset needs a primary key to function, false for many_to_one associations.
        def dataset_need_primary_key?
          false
        end

        # Default foreign key name symbol for foreign key in current model's table that points to
        # the given association's table's primary key.
        def default_key
          :"#{self[:name]}_id"
        end

        # Whether to eagerly graph a lazy dataset, true for many_to_one associations
        # only if the key is nil.
        def eager_graph_lazy_dataset?
          self[:key].nil?
        end

        # many_to_one associations don't need an eager limit strategy
        def eager_limit_strategy
          nil
        end

        # The expression to use on the left hand side of the IN lookup when eager loading
        def predicate_key
          cached_fetch(:predicate_key){qualified_primary_key}
        end

        # The column(s) in the associated table that the key in the current table references (either a symbol or an array).
        def primary_key
          cached_fetch(:primary_key){associated_class.primary_key}
        end

        # The columns in the associated table that the key in the current table references (always an array).
        def primary_keys
          cached_fetch(:primary_keys){Array(primary_key)}
        end
        alias associated_object_keys primary_keys

        # The method symbol or array of method symbols to call on the associated object
        # to get the value to use for the foreign keys.
        def primary_key_method
          cached_fetch(:primary_key_method){primary_key}
        end

        # The array of method symbols to call on the associated object
        # to get the value to use for the foreign keys.
        def primary_key_methods
          cached_fetch(:primary_key_methods){Array(primary_key_method)}
        end

        # #primary_key qualified by the associated table
        def qualified_primary_key
          cached_fetch(:qualified_primary_key){self[:qualify] == false ? primary_key : qualify_assoc(primary_key)}
        end

        # True only if the reciprocal is a one_to_many association.
        def reciprocal_array?
          !set_reciprocal_to_self?
        end

        # Whether this association returns an array of objects instead of a single object,
        # false for a many_to_one association.
        def returns_array?
          false
        end

        # True only if the reciprocal is a one_to_one association.
        def set_reciprocal_to_self?
          reciprocal
          reciprocal_type == :one_to_one
        end

        private

        def reciprocal_association?(assoc_reflect)
          super && self[:keys] == assoc_reflect[:keys] && primary_key == assoc_reflect.primary_key
        end

        # The reciprocal type of a many_to_one association is either
        # a one_to_many or a one_to_one association.
        def reciprocal_type
          cached_fetch(:reciprocal_type){[:one_to_many, :one_to_one]}
        end
      end

      class OneToManyAssociationReflection < AssociationReflection
        ASSOCIATION_TYPES[:one_to_many] = self

        # The keys in the associated model's table related to this association
        def associated_object_keys
          self[:keys]
        end

        # one_to_many associations can only have associated objects if none of
        # the :primary_keys options have a nil value.
        def can_have_associated_objects?(obj)
          !self[:primary_keys].any?{|k| obj.send(k).nil?}
        end

        # Default foreign key name symbol for key in associated table that points to
        # current table's primary key.
        def default_key
          :"#{underscore(demodulize(self[:model].name))}_id"
        end

        # The hash key to use for the eager loading predicate (left side of IN (1, 2, 3))
        def predicate_key
          cached_fetch(:predicate_key){qualify_assoc(self[:key])}
        end
        alias qualified_key predicate_key

        # The column in the current table that the key in the associated table references.
        def primary_key
          self[:primary_key]
        end

        # #primary_key qualified by the current table
        def qualified_primary_key
          cached_fetch(:qualified_primary_key){qualify_cur(primary_key)}
        end

        # Whether the reciprocal of this association returns an array of objects instead of a single object,
        # false for a one_to_many association.
        def reciprocal_array?
          false
        end

        # Destroying one_to_many associated objects automatically deletes the foreign key.
        def remove_before_destroy?
          false
        end

        # The one_to_many association needs to check that an object to be removed already is associated.
        def remove_should_check_existing?
          true
        end

        # One to many associations set the reciprocal to self when loading associated records.
        def set_reciprocal_to_self?
          true
        end

        private

        def reciprocal_association?(assoc_reflect)
          super && self[:keys] == assoc_reflect[:keys] && primary_key == assoc_reflect.primary_key
        end

        # The reciprocal type of a one_to_many association is a many_to_one association.
        def reciprocal_type
          :many_to_one
        end
      end

      class OneToOneAssociationReflection < OneToManyAssociationReflection
        ASSOCIATION_TYPES[:one_to_one] = self

        # one_to_one associations don't use an eager limit strategy by default, but
        # support both DISTINCT ON and window functions as strategies.
        def eager_limit_strategy
          cached_fetch(:_eager_limit_strategy) do
            offset = limit_and_offset.last
            case s = self.fetch(:eager_limit_strategy){(self[:model].default_eager_limit_strategy || :ruby) if offset}
            when Symbol
              s
            when true
              ds = associated_class.dataset
              if ds.supports_ordered_distinct_on? && offset.nil?
                :distinct_on
              elsif ds.supports_window_functions?
                :window_function
              else
                :ruby
              end
            else
              nil
            end
          end
        end

        # The limit and offset for this association (returned as a two element array).
        def limit_and_offset
          if (v = self[:limit]).is_a?(Array)
            v
          else
            [v, nil]
          end
        end

        # one_to_one associations return a single object, not an array
        def returns_array?
          false
        end
      end

      class ManyToManyAssociationReflection < AssociationReflection
        ASSOCIATION_TYPES[:many_to_many] = self

        # The alias to use for the associated key when eagerly loading
        def associated_key_alias
          self[:left_key_alias]
        end

        # The column to use for the associated key when eagerly loading
        def associated_key_column
          self[:left_key]
        end

        # Alias of right_primary_keys
        def associated_object_keys
          right_primary_keys
        end

        # many_to_many associations can only have associated objects if none of
        # the :left_primary_keys options have a nil value.
        def can_have_associated_objects?(obj)
          !self[:left_primary_keys].any?{|k| obj.send(k).nil?}
        end

        # The default associated key alias(es) to use when eager loading
        # associations via eager.
        def default_associated_key_alias
          self[:uses_left_composite_keys] ? (0...self[:left_keys].length).map{|i| :"x_foreign_key_#{i}_x"} : :x_foreign_key_x
        end

        # Default name symbol for the join table.
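        # For example, for a many_to_many association between hypothetical
        # Album and Artist models, this would default to :albums_artists.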
        def default_join_table
          [self[:class_name], self[:model].name].map{|i| underscore(pluralize(demodulize(i)))}.sort.join('_').to_sym
        end

        # Default foreign key name symbol for key in join table that points to
        # current table's primary key (or :left_primary_key column).
        def default_left_key
          :"#{underscore(demodulize(self[:model].name))}_id"
        end

        # Default foreign key name symbol for foreign key in join table that points to
        # the association's table's primary key (or :right_primary_key column).
        def default_right_key
          :"#{singularize(self[:name])}_id"
        end

        # The hash key to use for the eager loading predicate (left side of IN (1, 2, 3)).
        # The left key qualified by the join table.
        def predicate_key
          cached_fetch(:predicate_key){qualify(join_table_alias, self[:left_key])}
        end
        alias qualified_left_key predicate_key

        # The right key qualified by the join table.
        def qualified_right_key
          cached_fetch(:qualified_right_key){qualify(join_table_alias, self[:right_key])}
        end

        # many_to_many associations need to select a key in an associated table to eagerly load
        def eager_loading_use_associated_key?
          true
        end

        # The source of the join table.  This is the join table itself, unless it
        # is aliased, in which case it is the unaliased part.
        def join_table_source
          cached_fetch(:join_table_source){split_join_table_alias[0]}
        end

        # The join table itself, unless it is aliased, in which case this
        # is the alias.
        def join_table_alias
          cached_fetch(:join_table_alias) do
            s, a = split_join_table_alias
            a || s
          end
        end
        alias associated_key_table join_table_alias

        # Whether the associated object needs a primary key to be added/removed,
        # true for many_to_many associations.
        def need_associated_primary_key?
          true
        end

        # #right_primary_key qualified by the associated table
        def qualified_right_primary_key
          cached_fetch(:qualified_right_primary_key){qualify_assoc(right_primary_key)}
        end

        # The primary key column(s) to use in the associated table (can be symbol or array).
        def right_primary_key
          cached_fetch(:right_primary_key){associated_class.primary_key}
        end

        # The primary key columns to use in the associated table (always array).
        def right_primary_keys
          cached_fetch(:right_primary_keys){Array(right_primary_key)}
        end

        # The method symbol or array of method symbols to call on the associated objects
        # to get the foreign key values for the join table.
        def right_primary_key_method
          cached_fetch(:right_primary_key_method){right_primary_key}
        end

        # The array of method symbols to call on the associated objects
        # to get the foreign key values for the join table.
        def right_primary_key_methods
          cached_fetch(:right_primary_key_methods){Array(right_primary_key_method)}
        end

        # The columns to select when loading the association, associated_class.table_name.* by default.
        def select
          cached_fetch(:select){Sequel::SQL::ColumnAll.new(associated_class.table_name)}
        end

        private

        def reciprocal_association?(assoc_reflect)
          super && assoc_reflect[:left_keys] == self[:right_keys] &&
            assoc_reflect[:right_keys] == self[:left_keys] &&
            assoc_reflect[:join_table] == self[:join_table] &&
            right_primary_keys == assoc_reflect[:left_primary_key_columns] &&
            self[:left_primary_key_columns] == assoc_reflect.right_primary_keys
        end

        def reciprocal_type
          :many_to_many
        end

        # Split the join table into source and alias parts.
        def split_join_table_alias
          associated_class.dataset.split_alias(self[:join_table])
        end
      end

      # This module contains methods added to all association datasets
      module AssociationDatasetMethods
        # The model object that created the association dataset
        attr_accessor :model_object

        # The association reflection related to the association dataset
        attr_accessor :association_reflection
      end

      # Each kind of association adds a number of instance methods to the model class which
      # are specialized according to the association type and optional parameters
      # given in the definition.  Example:
      #
      #   class Project < Sequel::Model
      #     many_to_one :portfolio
      #     # or: one_to_one :portfolio
      #     one_to_many :milestones
      #     # or: many_to_many :milestones
      #   end
      #
      # The project class now has the following instance methods:
      # portfolio :: Returns the associated portfolio.
      # portfolio=(obj) :: Sets the associated portfolio to the object,
      #                    but the change is not persisted until you save the record (for many_to_one associations).
      # portfolio_dataset :: Returns a dataset that would return the associated
      #                      portfolio, only useful in fairly specific circumstances.
      # milestones :: Returns an array of associated milestones
      # add_milestone(obj) :: Associates the passed milestone with this object.
      # remove_milestone(obj) :: Removes the association with the passed milestone.
      # remove_all_milestones :: Removes associations with all associated milestones.
      # milestones_dataset :: Returns a dataset that would return the associated
      #                       milestones, allowing for further filtering/limiting/etc.
      #
      # If you want to override the behavior of the add_/remove_/remove_all_/ methods
      # or the association setter method, use the :adder, :remover, :clearer, and/or :setter
      # options.  These options override the default behavior.
      #
      # By default the classes for the associations are inferred from the association
      # name, so for example the Project#portfolio will return an instance of
      # Portfolio, and Project#milestones will return an array of Milestone
      # instances.  You can use the :class option to change which class is used.
      #
      # Association definitions are also reflected by the class, e.g.:
      #
      #   Project.associations
      #   => [:portfolio, :milestones]
      #   Project.association_reflection(:portfolio)
      #   => {:type => :many_to_one, :name => :portfolio, ...}
      #
      # Associations should not have the same names as any of the columns in the
      # model's current table they reference.  If you are dealing with an existing schema that
      # has a column named status, you can't name the association status, you'd
      # have to name it foo_status or something else.  If you give an association the same name
      # as a column, you will probably end up with an association that doesn't work, or a SystemStackError.
      #
      # For a more in depth general overview, as well as a reference guide,
      # see the {Association Basics guide}[link:files/doc/association_basics_rdoc.html].
      # For examples of advanced usage, see the {Advanced Associations guide}[link:files/doc/advanced_associations_rdoc.html].
      module ClassMethods
        # All association reflections defined for this model (default: {}).
        attr_reader :association_reflections

        # Hash with column symbol keys and arrays of many_to_one
        # association symbols that should be cleared when the column
        # value changes.
        attr_reader :autoreloading_associations

        # The default :eager_limit_strategy option to use for limited or offset associations (default: true, causing Sequel
        # to use what it considers the most appropriate strategy).
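        #
        # A sketch of choosing a strategy explicitly on a per-model basis
        # (the Album model is hypothetical, and assumes its database
        # supports window functions):
        #
        #   Album.default_eager_limit_strategy = :window_function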
        attr_accessor :default_eager_limit_strategy

        # Array of all association reflections for this model class
        def all_association_reflections
          association_reflections.values
        end

        # Given an association reflection and a dataset, apply the
        # :select, :conditions, :order, :eager, :distinct, and :eager_block
        # association options to the given dataset and return the dataset
        # or a modified copy of it.
        def apply_association_dataset_opts(opts, ds)
          ds = ds.select(*opts.select) if opts.select
          if c = opts[:conditions]
            ds = (c.is_a?(Array) && !Sequel.condition_specifier?(c)) ? ds.where(*c) : ds.where(c)
          end
          ds = ds.order(*opts[:order]) if opts[:order]
          ds = ds.eager(opts[:eager]) if opts[:eager]
          ds = ds.distinct if opts[:distinct]
          ds = opts[:eager_block].call(ds) if opts[:eager_block]
          ds
        end

        # Associates a related model with the current model.  The following types are
        # supported:
        #
        # :many_to_one :: Foreign key in current model's table points to
        #                 associated model's primary key.  Each associated model object can
        #                 be associated with more than one current model objects.  Each current
        #                 model object can be associated with only one associated model object.
        # :one_to_many :: Foreign key in associated model's table points to this
        #                 model's primary key.  Each current model object can be associated with
        #                 more than one associated model objects.  Each associated model object
        #                 can be associated with only one current model object.
        # :one_to_one :: Similar to one_to_many in terms of foreign keys, but
        #                only one object is associated to the current object through the
        #                association.  The methods created are similar to many_to_one, except
        #                that the one_to_one setter method saves the passed object.
        # :many_to_many :: A join table is used that has a foreign key that points
        #                  to this model's primary key and a foreign key that points to the
        #                  associated model's primary key.  Each current model object can be
        #                  associated with many associated model objects, and each associated
        #                  model object can be associated with many current model objects.
        #
        # The following options can be supplied:
        # === Multiple Types
        # :adder :: Proc used to define the private _add_* method for doing the database work
        #           to associate the given object to the current object (*_to_many associations).
        # :after_add :: Symbol, Proc, or array of both/either specifying a callback to call
        #               after a new item is added to the association.
        # :after_load :: Symbol, Proc, or array of both/either specifying a callback to call
        #                after the associated record(s) have been retrieved from the database.
        # :after_remove :: Symbol, Proc, or array of both/either specifying a callback to call
        #                  after an item is removed from the association.
        # :after_set :: Symbol, Proc, or array of both/either specifying a callback to call
        #               after an item is set using the association setter method.
        # :allow_eager :: If set to false, you cannot load the association eagerly
        #                 via eager or eager_graph
        # :before_add :: Symbol, Proc, or array of both/either specifying a callback to call
        #                before a new item is added to the association.
        # :before_remove :: Symbol, Proc, or array of both/either specifying a callback to call
        #                   before an item is removed from the association.
        # :before_set :: Symbol, Proc, or array of both/either specifying a callback to call
        #                before an item is set using the association setter method.
        # :cartesian_product_number :: the number of joins completed by this association that could cause more
        #                              than one row for each row in the current table (default: 0 for
        #                              many_to_one and one_to_one associations, 1 for one_to_many and
        #                              many_to_many associations).
        # :class :: The associated class or its name as a string or symbol.  If not
        #           given, uses the association's name, which is camelized (and
        #           singularized unless the type is :many_to_one or :one_to_one).  If this is specified
        #           as a string or symbol, you must specify the full class name (e.g. "SomeModule::MyModel").
        # :clearer :: Proc used to define the private _remove_all_* method for doing the database work
        #             to remove all objects associated to the current object (*_to_many associations).
        # :clone :: Merge the current options and block into the options and block used in defining
        #           the given association.  Can be used to DRY up a bunch of similar associations that
        #           all share the same options such as :class and :key, while changing the order and block used.
        # :conditions :: The conditions to use to filter the association, can be any argument passed to where.
        # :dataset :: A proc that is instance_execed to get the base dataset to use (before the other
        #             options are applied).  If the proc accepts an argument, it is passed the related
        #             association reflection.
        # :distinct :: Use the DISTINCT clause when selecting associating object, both when
        #              lazy loading and eager loading via .eager (but not when using .eager_graph).
        # :eager :: The associations to eagerly load via +eager+ when loading the associated object(s).
        # :eager_block :: If given, use the block instead of the default block when
        #                 eagerly loading.  To not use a block when eager loading (when one is used normally),
        #                 set to nil.
        # :eager_graph :: The associations to eagerly load via +eager_graph+ when loading the associated object(s).
        #                 many_to_many associations with this option cannot be eagerly loaded via +eager+.
        # :eager_grapher :: A proc to use to implement eager loading via +eager_graph+, overriding the default.
        #                   Takes an options hash with the entries :self (the receiver of the eager_graph call),
        #                   :table_alias (the alias to use for table to graph into the association), :implicit_qualifier
        #                   (the alias that was used for the current table), and possibly :eager_block (a callback
        #                   proc accepting the associated dataset, for per-call customization).
        #                   Should return a copy of the dataset with the association graphed into it.
        # :eager_limit_strategy :: Determines the strategy used for enforcing limits and offsets when eager loading
        #                          associations via the +eager+ method.
        # :eager_loader :: A proc to use to implement eager loading, overriding the default.  Takes a single hash argument,
        #                  with at least the keys: :rows, which is an array of current model instances, :associations,
        #                  which is a hash of dependent associations, :self, which is the dataset doing the eager loading,
        #                  :eager_block, which is a dynamic callback that should be called with the dataset, and :id_map,
        #                  which is a mapping of key values to arrays of current model instances.  In the proc, the
        #                  associated records should be queried from the database and the associations cache for each
        #                  record should be populated.
        # :eager_loader_key :: A symbol for the key column to use to populate the key_hash
        #                      for the eager loader.  Can be set to nil to not populate the key_hash.
        # :extend :: A module or array of modules to extend the dataset with.
        # :graph_alias_base :: The base name to use for the table alias when eager graphing.
Defaults to the name # of the association. If the alias name has already been used in the query, Sequel will create # a unique alias by appending a numeric suffix (e.g. alias_0, alias_1, ...) until the alias is # unique. # :graph_block :: The block to pass to join_table when eagerly loading # the association via +eager_graph+. # :graph_conditions :: The additional conditions to use on the SQL join when eagerly loading # the association via +eager_graph+. Should be a hash or an array of two element arrays. If not # specified, the :conditions option is used if it is a hash or array of two element arrays. # :graph_join_type :: The type of SQL join to use when eagerly loading the association via # eager_graph. Defaults to :left_outer. # :graph_only_conditions :: The conditions to use on the SQL join when eagerly loading # the association via +eager_graph+, instead of the default conditions specified by the # foreign/primary keys. This option causes the :graph_conditions option to be ignored. # :graph_select :: A column or array of columns to select from the associated table # when eagerly loading the association via +eager_graph+. Defaults to all # columns in the associated table. # :limit :: Limit the number of records to the provided value. Use # an array with two elements for the value to specify a # limit (first element) and an offset (second element). # :methods_module :: The module that methods the association creates will be placed into. Defaults # to the module containing the model's columns. # :order :: the column(s) by which to order the association dataset. Can be a # singular column symbol or an array of column symbols. # :order_eager_graph :: Whether to add the association's order to the graphed dataset's order when graphing # via +eager_graph+. Defaults to true, so set to false to disable. # :read_only :: Do not add a setter method (for many_to_one or one_to_one associations), # or add_/remove_/remove_all_ methods (for one_to_many and many_to_many associations). # :reciprocal :: the symbol name of the reciprocal association, # if it exists. By default, Sequel will try to determine it by looking at the # associated model's assocations for a association that matches # the current association's key(s). Set to nil to not use a reciprocal. # :remover :: Proc used to define the private _remove_* method for doing the database work # to remove the association between the given object and the current object (*_to_many assocations). # :select :: the columns to select. Defaults to the associated class's # table_name.* in a many_to_many association, which means it doesn't include the attributes from the # join table. If you want to include the join table attributes, you can # use this option, but beware that the join table attributes can clash with # attributes from the model table, so you should alias any attributes that have # the same name in both the join table and the associated table. # :setter :: Proc used to define the private _*= method for doing the work to setup the assocation # between the given object and the current object (*_to_one associations). # :validate :: Set to false to not validate when implicitly saving any associated object. # === :many_to_one # :key :: foreign key in current model's table that references # associated model's primary key, as a symbol. Defaults to :"#{name}_id". Can use an # array of symbols for a composite key association. 
# :key_column :: Similar to, and usually identical to, :key, but :key refers to the model method # to call, where :key_column refers to the underlying column. Should only be # used if the the model method differs from the foreign key column, in conjunction # with defining a model alias method for the key column. # :primary_key :: column in the associated table that :key option references, as a symbol. # Defaults to the primary key of the associated table. Can use an # array of symbols for a composite key association. # :primary_key_method :: the method symbol or array of method symbols to call on the associated # object to get the foreign key values. Defaults to :primary_key option. # :qualify :: Whether to use qualifier primary keys when loading the association. The default # is true, so you must set to false to not qualify. Qualification rarely causes # problems, but it's necessary to disable in some cases, such as when you are doing # a JOIN USING operation on the column on Oracle. # === :one_to_many and :one_to_one # :key :: foreign key in associated model's table that references # current model's primary key, as a symbol. Defaults to # :"#{self.name.underscore}_id". Can use an # array of symbols for a composite key association. # :key_method :: the method symbol or array of method symbols to call on the associated # object to get the foreign key values. Defaults to :key option. # :primary_key :: column in the current table that :key option references, as a symbol. # Defaults to primary key of the current table. Can use an # array of symbols for a composite key association. # :primary_key_column :: Similar to, and usually identical to, :primary_key, but :primary_key refers # to the model method call, where :primary_key_column refers to the underlying column. # Should only be used if the the model method differs from the primary key column, in # conjunction with defining a model alias method for the primary key column. # === :many_to_many # :graph_join_table_block :: The block to pass to +join_table+ for # the join table when eagerly loading the association via +eager_graph+. # :graph_join_table_conditions :: The additional conditions to use on the SQL join for # the join table when eagerly loading the association via +eager_graph+. # Should be a hash or an array of two element arrays. # :graph_join_table_join_type :: The type of SQL join to use for the join table when eagerly # loading the association via +eager_graph+. Defaults to the # :graph_join_type option or :left_outer. # :graph_join_table_only_conditions :: The conditions to use on the SQL join for the join # table when eagerly loading the association via +eager_graph+, # instead of the default conditions specified by the # foreign/primary keys. This option causes the # :graph_join_table_conditions option to be ignored. # :join_table :: name of table that includes the foreign keys to both # the current model and the associated model, as a symbol. Defaults to the name # of current model and name of associated model, pluralized, # underscored, sorted, and joined with '_'. # :join_table_block :: proc that can be used to modify the dataset used in the add/remove/remove_all # methods. Should accept a dataset argument and return a modified dataset if present. # :left_key :: foreign key in join table that points to current model's # primary key, as a symbol. Defaults to :"#{self.name.underscore}_id". # Can use an array of symbols for a composite key association. 
# :left_primary_key :: column in current table that :left_key points to, as a symbol. # Defaults to primary key of current table. Can use an # array of symbols for a composite key association. # :left_primary_key_column :: Similar to, and usually identical to, :left_primary_key, but :left_primary_key refers to # the model method to call, where :left_primary_key_column refers to the underlying column. Should only # be used if the model method differs from the left primary key column, in conjunction # with defining a model alias method for the left primary key column. # :right_key :: foreign key in join table that points to associated # model's primary key, as a symbol. Defaults to :"#{name.to_s.singularize}_id". # Can use an array of symbols for a composite key association. # :right_primary_key :: column in associated table that :right_key points to, as a symbol. # Defaults to primary key of the associated table. Can use an # array of symbols for a composite key association. # :right_primary_key_method :: the method symbol or array of method symbols to call on the associated # object to get the foreign key values for the join table. # Defaults to :right_primary_key option. # :uniq :: Adds a after_load callback that makes the array of objects unique. def associate(type, name, opts = OPTS, &block) raise(Error, 'invalid association type') unless assoc_class = ASSOCIATION_TYPES[type] raise(Error, 'Model.associate name argument must be a symbol') unless name.is_a?(Symbol) # dup early so we don't modify opts orig_opts = opts.dup if opts[:clone] cloned_assoc = association_reflection(opts[:clone]) raise(Error, "cannot clone an association to an association of different type (association #{name} with type #{type} cloning #{opts[:clone]} with type #{cloned_assoc[:type]})") unless cloned_assoc[:type] == type || [cloned_assoc[:type], type].all?{|t| [:one_to_many, :one_to_one].include?(t)} orig_opts = cloned_assoc[:orig_opts].merge(orig_opts) end opts = orig_opts.merge(:type => type, :name => name, :cache=>{}, :model => self) opts[:block] = block if block opts = assoc_class.new.merge!(opts) opts[:eager_block] = block unless opts.include?(:eager_block) if !opts.has_key?(:predicate_key) && opts.has_key?(:eager_loading_predicate_key) opts[:predicate_key] = opts[:eager_loading_predicate_key] end opts[:graph_join_type] ||= :left_outer opts[:order_eager_graph] = true unless opts.include?(:order_eager_graph) conds = opts[:conditions] opts[:graph_alias_base] ||= name opts[:graph_conditions] = conds if !opts.include?(:graph_conditions) and Sequel.condition_specifier?(conds) opts[:graph_conditions] = opts.fetch(:graph_conditions, []).to_a opts[:graph_select] = Array(opts[:graph_select]) if opts[:graph_select] [:before_add, :before_remove, :after_add, :after_remove, :after_load, :before_set, :after_set, :extend].each do |cb_type| opts[cb_type] = Array(opts[cb_type]) end late_binding_class_option(opts, opts.returns_array? ? singularize(name) : name) # Remove :class entry if it exists and is nil, to work with cached_fetch opts.delete(:class) unless opts[:class] send(:"def_#{type}", opts) orig_opts.delete(:clone) orig_opts.merge!(:class_name=>opts[:class_name], :class=>opts[:class], :block=>block) opts[:orig_opts] = orig_opts # don't add to association_reflections until we are sure there are no errors association_reflections[name] = opts end # The association reflection hash for the association of the given name. 
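# An illustrative example of defining an association and reading back its
# reflection (the Album/Artist schema and the values shown are assumptions
# for the example, not something defined in this file):
#
#   class Album < Sequel::Model
#     many_to_one :artist
#   end
#
#   ref = Album.association_reflection(:artist)
#   ref[:type] # => :many_to_one
#   ref[:key]  # => :artist_id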
def association_reflection(name)
  association_reflections[name]
end

# Array of association name symbols
def associations
  association_reflections.keys
end

# Modify and return eager loading dataset based on association options.
def eager_loading_dataset(opts, ds, select, associations, eager_options=OPTS)
  ds = apply_association_dataset_opts(opts, ds)
  ds = ds.select(*select) if select
  if opts[:eager_graph]
    raise(Error, "cannot eagerly load a #{opts[:type]} association that uses :eager_graph") if opts.eager_loading_use_associated_key?
    ds = ds.eager_graph(opts[:eager_graph])
  end
  ds = ds.eager(associations) unless Array(associations).empty?
  ds = eager_options[:eager_block].call(ds) if eager_options[:eager_block]
  if opts.eager_loading_use_associated_key?
    ds = if opts[:uses_left_composite_keys]
      ds.select_append(*opts.associated_key_alias.zip(opts.predicate_keys).map{|a, k| SQL::AliasedExpression.new(k, a)})
    else
      ds.select_append(SQL::AliasedExpression.new(opts.predicate_key, opts.associated_key_alias))
    end
  end
  ds
end

# Shortcut for adding a many_to_many association, see #associate
def many_to_many(name, opts=OPTS, &block)
  associate(:many_to_many, name, opts, &block)
end

# Shortcut for adding a many_to_one association, see #associate
def many_to_one(name, opts=OPTS, &block)
  associate(:many_to_one, name, opts, &block)
end

# Shortcut for adding a one_to_many association, see #associate
def one_to_many(name, opts=OPTS, &block)
  associate(:one_to_many, name, opts, &block)
end

# Shortcut for adding a one_to_one association, see #associate.
def one_to_one(name, opts=OPTS, &block)
  associate(:one_to_one, name, opts, &block)
end

Plugins.inherited_instance_variables(self, :@association_reflections=>:dup, :@autoreloading_associations=>:hash_dup, :@default_eager_limit_strategy=>nil)
Plugins.def_dataset_methods(self, [:eager, :eager_graph])

private

# Use a window function to limit the results of the eager loading dataset.
def apply_window_function_eager_limit_strategy(ds, opts)
  rn = ds.row_number_column
  limit, offset = opts.limit_and_offset
  ds = ds.unordered.select_append{row_number(:over, :partition=>opts.predicate_key, :order=>ds.opts[:order]){}.as(rn)}.from_self
  ds = if opts[:type] == :one_to_one
    ds.where(rn => offset ? offset+1 : 1)
  elsif offset
    offset += 1
    if limit
      ds.where(rn => (offset...(offset+limit)))
    else
      ds.where{SQL::Identifier.new(rn) >= offset}
    end
  else
    ds.where{SQL::Identifier.new(rn) <= limit}
  end
end

# The module to use for the association's methods. Defaults to
# the overridable_methods_module.
def association_module(opts=OPTS)
  opts.fetch(:methods_module, overridable_methods_module)
end

# Add a method to the module included in the class, so the method
# can be easily overridden in the class itself while allowing for
# super to be called.
def association_module_def(name, opts=OPTS, &block)
  association_module(opts).module_eval{define_method(name, &block)}
end

# Add a private method to the module included in the class.
def association_module_private_def(name, opts=OPTS, &block)
  association_module_def(name, opts, &block)
  association_module(opts).send(:private, name)
end

# Add the add_ instance method
def def_add_method(opts)
  association_module_def(opts.add_method, opts){|o,*args| add_associated_object(opts, o, *args)}
end

# Adds the association dataset methods to the association methods module.
def def_association_dataset_methods(opts)
  association_module_def(opts.dataset_method, opts){_dataset(opts)}
  def_association_method(opts)
end

# Adds the association method to the association methods module.
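# As an illustrative sketch (assuming a hypothetical Album model with
# many_to_one :artist), the method defined here is what makes these work:
#
#   album.artist                         # cached associated object
#   album.artist(true)                   # reload from the database
#   album.artist{|ds| ds.select(:name)}  # dynamic callback on the dataset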
def def_association_method(opts)
  association_module_def(opts.association_method, opts){|*dynamic_opts, &block| load_associated_objects(opts, dynamic_opts[0], &block)}
end

# Configures many_to_many association reflection and adds the related association methods
def def_many_to_many(opts)
  name = opts[:name]
  model = self
  left = (opts[:left_key] ||= opts.default_left_key)
  lcks = opts[:left_keys] = Array(left)
  right = (opts[:right_key] ||= opts.default_right_key)
  rcks = opts[:right_keys] = Array(right)
  left_pk = (opts[:left_primary_key] ||= self.primary_key)
  opts[:eager_loader_key] = left_pk unless opts.has_key?(:eager_loader_key)
  lcpks = opts[:left_primary_keys] = Array(left_pk)
  lpkc = opts[:left_primary_key_column] ||= left_pk
  lpkcs = opts[:left_primary_key_columns] ||= Array(lpkc)
  raise(Error, "mismatched number of left keys: #{lcks.inspect} vs #{lcpks.inspect}") unless lcks.length == lcpks.length
  if opts[:right_primary_key]
    rcpks = Array(opts[:right_primary_key])
    raise(Error, "mismatched number of right keys: #{rcks.inspect} vs #{rcpks.inspect}") unless rcks.length == rcpks.length
  end
  uses_lcks = opts[:uses_left_composite_keys] = lcks.length > 1
  opts[:uses_right_composite_keys] = rcks.length > 1
  opts[:cartesian_product_number] ||= 1
  join_table = (opts[:join_table] ||= opts.default_join_table)
  left_key_alias = opts[:left_key_alias] ||= opts.default_associated_key_alias
  graph_jt_conds = opts[:graph_join_table_conditions] = opts.fetch(:graph_join_table_conditions, []).to_a
  opts[:graph_join_table_join_type] ||= opts[:graph_join_type]
  opts[:after_load].unshift(:array_uniq!) if opts[:uniq]
  slice_range = opts.slice_range
  opts[:dataset] ||= proc{opts.associated_dataset.inner_join(join_table, rcks.zip(opts.right_primary_keys) + opts.predicate_keys.zip(lcpks.map{|k| send(k)}), :qualify=>:deep)}

  opts[:eager_loader] ||= proc do |eo|
    h = eo[:id_map]
    rows = eo[:rows]
    rows.each{|object| object.associations[name] = []}
    r = rcks.zip(opts.right_primary_keys)
    l = [[opts.predicate_key, h.keys]]
    ds = model.eager_loading_dataset(opts, opts.associated_class.inner_join(join_table, r + l, :qualify=>:deep), nil, eo[:associations], eo)
    if opts.eager_limit_strategy == :window_function
      delete_rn = true
      rn = ds.row_number_column
      ds = apply_window_function_eager_limit_strategy(ds, opts)
    end
    ds.all do |assoc_record|
      assoc_record.values.delete(rn) if delete_rn
      hash_key = if uses_lcks
        left_key_alias.map{|k| assoc_record.values.delete(k)}
      else
        assoc_record.values.delete(left_key_alias)
      end
      next unless objects = h[hash_key]
      objects.each{|object| object.associations[name].push(assoc_record)}
    end
    if opts.eager_limit_strategy == :ruby
      rows.each{|o| o.associations[name] = o.associations[name][slice_range] || []}
    end
  end

  join_type = opts[:graph_join_type]
  select = opts[:graph_select]
  use_only_conditions = opts.include?(:graph_only_conditions)
  only_conditions = opts[:graph_only_conditions]
  conditions = opts[:graph_conditions]
  graph_block = opts[:graph_block]
  use_jt_only_conditions = opts.include?(:graph_join_table_only_conditions)
  jt_only_conditions = opts[:graph_join_table_only_conditions]
  jt_join_type = opts[:graph_join_table_join_type]
  jt_graph_block = opts[:graph_join_table_block]
  opts[:eager_grapher] ||= proc do |eo|
    ds = eo[:self]
    ds = ds.graph(join_table, use_jt_only_conditions ? jt_only_conditions : lcks.zip(lpkcs) + graph_jt_conds, :select=>false, :table_alias=>ds.unused_table_alias(join_table, [eo[:table_alias]]), :join_type=>jt_join_type, :implicit_qualifier=>eo[:implicit_qualifier], :qualify=>:deep, :from_self_alias=>ds.opts[:eager_graph][:master], &jt_graph_block)
    ds.graph(eager_graph_dataset(opts, eo), use_only_conditions ? only_conditions : opts.right_primary_keys.zip(rcks) + conditions, :select=>select, :table_alias=>eo[:table_alias], :qualify=>:deep, :join_type=>join_type, &graph_block)
  end

  def_association_dataset_methods(opts)

  return if opts[:read_only]

  adder = opts[:adder] || proc do |o|
    h = {}
    lcks.zip(lcpks).each{|k, pk| h[k] = send(pk)}
    rcks.zip(opts.right_primary_key_methods).each{|k, pk| h[k] = o.send(pk)}
    _join_table_dataset(opts).insert(h)
  end
  association_module_private_def(opts._add_method, opts, &adder)

  remover = opts[:remover] || proc do |o|
    _join_table_dataset(opts).where(lcks.zip(lcpks.map{|k| send(k)}) + rcks.zip(opts.right_primary_key_methods.map{|k| o.send(k)})).delete
  end
  association_module_private_def(opts._remove_method, opts, &remover)

  clearer = opts[:clearer] || proc do
    _join_table_dataset(opts).where(lcks.zip(lcpks.map{|k| send(k)})).delete
  end
  association_module_private_def(opts._remove_all_method, opts, &clearer)

  def_add_method(opts)
  def_remove_methods(opts)
end

# Configures many_to_one association reflection and adds the related association methods
def def_many_to_one(opts)
  name = opts[:name]
  model = self
  opts[:key] = opts.default_key unless opts.has_key?(:key)
  key = opts[:key]
  opts[:eager_loader_key] = key unless opts.has_key?(:eager_loader_key)
  cks = opts[:graph_keys] = opts[:keys] = Array(key)
  opts[:key_column] ||= key
  opts[:graph_keys] = opts[:key_columns] = Array(opts[:key_column])
  opts[:qualified_key] = opts.qualify_cur(key)
  if opts[:primary_key]
    cpks = Array(opts[:primary_key])
    raise(Error, "mismatched number of keys: #{cks.inspect} vs #{cpks.inspect}") unless cks.length == cpks.length
  end
  uses_cks = opts[:uses_composite_keys] = cks.length > 1
  opts[:cartesian_product_number] ||= 0
  if !opts.has_key?(:many_to_one_pk_lookup) &&
     (opts[:dataset] || opts[:conditions] || opts[:block] || opts[:select] ||
      (opts.has_key?(:key) && opts[:key] == nil))
    opts[:many_to_one_pk_lookup] = false
  end
  auto_assocs = @autoreloading_associations
  cks.each do |k|
    (auto_assocs[k] ||= []) << name
  end

  opts[:dataset] ||= proc do
    opts.associated_dataset.where(opts.predicate_keys.zip(cks.map{|k| send(k)}))
  end
  opts[:eager_loader] ||= proc do |eo|
    h = eo[:id_map]
    keys = h.keys
    # Default the cached association to nil, so any object that doesn't have it
    # populated will have cached the negative lookup.
    eo[:rows].each{|object| object.associations[name] = nil}
    # Skip eager loading if no objects have a foreign key for this association
    unless keys.empty?
      klass = opts.associated_class
      model.eager_loading_dataset(opts, klass.where(opts.predicate_key=>keys), nil, eo[:associations], eo).all do |assoc_record|
        hash_key = uses_cks ? opts.primary_key_methods.map{|k| assoc_record.send(k)} : assoc_record.send(opts.primary_key_method)
        next unless objects = h[hash_key]
        objects.each{|object| object.associations[name] = assoc_record}
      end
    end
  end

  join_type = opts[:graph_join_type]
  select = opts[:graph_select]
  use_only_conditions = opts.include?(:graph_only_conditions)
  only_conditions = opts[:graph_only_conditions]
  conditions = opts[:graph_conditions]
  graph_block = opts[:graph_block]
  graph_cks = opts[:graph_keys]
  opts[:eager_grapher] ||= proc do |eo|
    ds = eo[:self]
    ds.graph(eager_graph_dataset(opts, eo), use_only_conditions ? only_conditions : opts.primary_keys.zip(graph_cks) + conditions, eo.merge(:select=>select, :join_type=>join_type, :qualify=>:deep, :from_self_alias=>ds.opts[:eager_graph][:master]), &graph_block)
  end

  def_association_dataset_methods(opts)

  return if opts[:read_only]

  setter = opts[:setter] || proc{|o| cks.zip(opts.primary_key_methods).each{|k, pk| send(:"#{k}=", (o.send(pk) if o))}}
  association_module_private_def(opts._setter_method, opts, &setter)
  association_module_def(opts.setter_method, opts){|o| set_associated_object(opts, o)}
end

# Configures one_to_many and one_to_one association reflections and adds the related association methods
def def_one_to_many(opts)
  one_to_one = opts[:type] == :one_to_one
  name = opts[:name]
  model = self
  key = (opts[:key] ||= opts.default_key)
  km = opts[:key_method] ||= opts[:key]
  cks = opts[:keys] = Array(key)
  opts[:key_methods] = Array(opts[:key_method])
  primary_key = (opts[:primary_key] ||= self.primary_key)
  opts[:eager_loader_key] = primary_key unless opts.has_key?(:eager_loader_key)
  cpks = opts[:primary_keys] = Array(primary_key)
  pkc = opts[:primary_key_column] ||= primary_key
  pkcs = opts[:primary_key_columns] ||= Array(pkc)
  raise(Error, "mismatched number of keys: #{cks.inspect} vs #{cpks.inspect}") unless cks.length == cpks.length
  uses_cks = opts[:uses_composite_keys] = cks.length > 1
  slice_range = opts.slice_range
  opts[:dataset] ||= proc do
    opts.associated_dataset.where(opts.predicate_keys.zip(cpks.map{|k| send(k)}))
  end
  opts[:eager_loader] ||= proc do |eo|
    h = eo[:id_map]
    rows = eo[:rows]
    reciprocal = opts.reciprocal
    klass = opts.associated_class
    filter_keys = opts.predicate_key
    ds = model.eager_loading_dataset(opts, klass.where(filter_keys=>h.keys), nil, eo[:associations], eo)
    assign_singular = true if one_to_one
    case opts.eager_limit_strategy
    when :distinct_on
      ds = ds.distinct(*filter_keys).order_prepend(*filter_keys)
    when :window_function
      delete_rn = true
      rn = ds.row_number_column
      ds = apply_window_function_eager_limit_strategy(ds, opts)
    when :ruby
      assign_singular = false if one_to_one && slice_range
    end
    if assign_singular
      rows.each{|object| object.associations[name] = nil}
    else
      rows.each{|object| object.associations[name] = []}
    end
    ds.all do |assoc_record|
      assoc_record.values.delete(rn) if delete_rn
      hash_key = uses_cks ? km.map{|k| assoc_record.send(k)} : assoc_record.send(km)
      next unless objects = h[hash_key]
      if assign_singular
        objects.each do |object|
          unless object.associations[name]
            object.associations[name] = assoc_record
            assoc_record.associations[reciprocal] = object if reciprocal
          end
        end
      else
        objects.each do |object|
          object.associations[name].push(assoc_record)
          assoc_record.associations[reciprocal] = object if reciprocal
        end
      end
    end
    if opts.eager_limit_strategy == :ruby
      if one_to_one
        if slice_range
          rows.each{|o| o.associations[name] = o.associations[name][slice_range.begin]}
        end
      else
        rows.each{|o| o.associations[name] = o.associations[name][slice_range] || []}
      end
    end
  end

  join_type = opts[:graph_join_type]
  select = opts[:graph_select]
  use_only_conditions = opts.include?(:graph_only_conditions)
  only_conditions = opts[:graph_only_conditions]
  conditions = opts[:graph_conditions]
  opts[:cartesian_product_number] ||= one_to_one ? 0 : 1
  graph_block = opts[:graph_block]
  opts[:eager_grapher] ||= proc do |eo|
    ds = eo[:self]
    ds = ds.graph(eager_graph_dataset(opts, eo), use_only_conditions ? only_conditions : cks.zip(pkcs) + conditions, eo.merge(:select=>select, :join_type=>join_type, :qualify=>:deep, :from_self_alias=>ds.opts[:eager_graph][:master]), &graph_block)
    # We only load reciprocals for one_to_many associations, as other reciprocals don't make sense
    ds.opts[:eager_graph][:reciprocals][eo[:table_alias]] = opts.reciprocal
    ds
  end

  def_association_dataset_methods(opts)

  ck_nil_hash = {}
  cks.each{|k| ck_nil_hash[k] = nil}

  unless opts[:read_only]
    validate = opts[:validate]

    if one_to_one
      setter = opts[:setter] || proc do |o|
        up_ds = _apply_association_options(opts, opts.associated_dataset.where(cks.zip(cpks.map{|k| send(k)})))
        if o
          up_ds = up_ds.exclude(o.pk_hash) unless o.new?
          cks.zip(cpks).each{|k, pk| o.send(:"#{k}=", send(pk))}
        end
        checked_transaction do
          up_ds.update(ck_nil_hash)
          o.save(:validate=>validate) || raise(Sequel::Error, "invalid associated object, cannot save") if o
        end
      end
      association_module_private_def(opts._setter_method, opts, &setter)
      association_module_def(opts.setter_method, opts){|o| set_one_to_one_associated_object(opts, o)}
    else
      adder = opts[:adder] || proc do |o|
        cks.zip(cpks).each{|k, pk| o.send(:"#{k}=", send(pk))}
        o.save(:validate=>validate) || raise(Sequel::Error, "invalid associated object, cannot save")
      end
      association_module_private_def(opts._add_method, opts, &adder)

      remover = opts[:remover] || proc do |o|
        cks.each{|k| o.send(:"#{k}=", nil)}
        o.save(:validate=>validate) || raise(Sequel::Error, "invalid associated object, cannot save")
      end
      association_module_private_def(opts._remove_method, opts, &remover)

      clearer = opts[:clearer] || proc do
        _apply_association_options(opts, opts.associated_dataset.where(cks.zip(cpks.map{|k| send(k)}))).update(ck_nil_hash)
      end
      association_module_private_def(opts._remove_all_method, opts, &clearer)

      def_add_method(opts)
      def_remove_methods(opts)
    end
  end
end

# Alias of def_one_to_many, since they share pretty much the same code.
def def_one_to_one(opts)
  def_one_to_many(opts)
end

# Add the remove_ and remove_all instance methods
def def_remove_methods(opts)
  association_module_def(opts.remove_method, opts){|o,*args| remove_associated_object(opts, o, *args)}
  association_module_def(opts.remove_all_method, opts){|*args| remove_all_associated_objects(opts, *args)}
end

# Return dataset to graph into given the association reflection, applying the :callback option if set.
def eager_graph_dataset(opts, eager_options)
  ds = opts.associated_class.dataset
  if cb = eager_options[:callback]
    ds = cb.call(ds)
  end
  ds
end
end

# Instance methods used to implement the associations support.
module InstanceMethods
  # The currently cached associations. A hash with the keys being the
  # association name symbols and the values being the associated object
  # or nil (many_to_one), or the array of associated objects (*_to_many).
  def associations
    @associations ||= {}
  end

  # Freeze the associations cache when freezing the object. Note that
  # retrieving associations after freezing will still work in most cases,
  # but the associations will not be cached in the association cache.
  def freeze
    associations.freeze
    super
  end

  private

  # Apply the association options such as :order and :limit to the given dataset, returning a modified dataset.
  def _apply_association_options(opts, ds)
    unless ds.kind_of?(AssociationDatasetMethods)
      ds = opts.apply_dataset_changes(ds)
    end
    ds.model_object = self
    ds = ds.eager_graph(opts[:eager_graph]) if opts[:eager_graph] && opts.eager_graph_lazy_dataset?
    ds = instance_exec(ds, &opts[:block]) if opts[:block]
    ds
  end

  # Return a dataset for the association after applying any dynamic callback.
  def _associated_dataset(opts, dynamic_opts)
    ds = send(opts.dataset_method)
    if callback = dynamic_opts[:callback]
      ds = callback.call(ds)
    end
    ds
  end

  # Return an association dataset for the given association reflection
  def _dataset(opts)
    raise(Sequel::Error, "model object #{inspect} does not have a primary key") if opts.dataset_need_primary_key? && !pk
    ds = if opts[:dataset].arity == 1
      instance_exec(opts, &opts[:dataset])
    else
      instance_exec(&opts[:dataset])
    end
    _apply_association_options(opts, ds)
  end

  # Dataset for the join table of the given many to many association reflection
  def _join_table_dataset(opts)
    ds = model.db.from(opts.join_table_source)
    opts[:join_table_block] ? opts[:join_table_block].call(ds) : ds
  end

  # Return the associated single object for the given association reflection and dynamic options
  # (or nil if no associated object).
  def _load_associated_object(opts, dynamic_opts)
    _load_associated_object_array(opts, dynamic_opts).first
  end

  # Load the associated objects for the given association reflection and dynamic options
  # as an array.
  def _load_associated_object_array(opts, dynamic_opts)
    _associated_dataset(opts, dynamic_opts).all
  end

  # Return the associated objects from the dataset, without association callbacks, reciprocals, and caching.
  # Still apply the dynamic callback if present.
  def _load_associated_objects(opts, dynamic_opts=OPTS)
    if opts.can_have_associated_objects?(self)
      if opts.returns_array?
        _load_associated_object_array(opts, dynamic_opts)
      elsif load_with_primary_key_lookup?(opts, dynamic_opts)
        opts.associated_class.send(:primary_key_lookup, ((fk = opts[:key]).is_a?(Array) ? fk.map{|c| send(c)} : send(fk)))
      else
        _load_associated_object(opts, dynamic_opts)
      end
    elsif opts.returns_array?
      []
    end
  end

  # Clear the associations cache when refreshing
  def _refresh_set_values(hash)
    @associations.clear if @associations
    super
  end

  # Add the given associated object to the given association
  def add_associated_object(opts, o, *args)
    klass = opts.associated_class
    if o.is_a?(Hash)
      o = klass.new(o)
    elsif o.is_a?(Integer) || o.is_a?(String) || o.is_a?(Array)
      o = klass.with_pk!(o)
    elsif !o.is_a?(klass)
      raise(Sequel::Error, "associated object #{o.inspect} not of correct type #{klass}")
    end
    raise(Sequel::Error, "model object #{inspect} does not have a primary key") unless pk
    ensure_associated_primary_key(opts, o, *args)
    return if run_association_callbacks(opts, :before_add, o) == false
    send(opts._add_method, o, *args)
    if array = associations[opts[:name]] and !array.include?(o)
      array.push(o)
    end
    add_reciprocal_object(opts, o)
    run_association_callbacks(opts, :after_add, o)
    o
  end

  # Add/Set the current object to/as the given object's reciprocal association.
  def add_reciprocal_object(opts, o)
    return if o.frozen?
    return unless reciprocal = opts.reciprocal
    if opts.reciprocal_array?
      if array = o.associations[reciprocal] and !array.include?(self)
        array.push(self)
      end
    else
      o.associations[reciprocal] = self
    end
  end

  # Call uniq! on the given array. This is used by the :uniq option,
  # and is an actual method for memory reasons.
  def array_uniq!(a)
    a.uniq!
  end

  # If a foreign key column value changes, clear the related
  # cached associations.
  def change_column_value(column, value)
    if assocs = model.autoreloading_associations[column]
      assocs.each{|a| associations.delete(a)}
    end
    super
  end

  # Save the associated object if the associated object needs a primary key
  # and the associated object is new and does not have one. Raise an error if
  # the object still does not have a primary key
  def ensure_associated_primary_key(opts, o, *args)
    if opts.need_associated_primary_key?
      o.save(:validate=>opts[:validate]) if o.new?
      raise(Sequel::Error, "associated object #{o.inspect} does not have a primary key") unless o.pk
    end
  end

  # Load the associated objects using the dataset, handling callbacks, reciprocals, and caching.
  def load_associated_objects(opts, dynamic_opts=nil)
    if dynamic_opts == true or dynamic_opts == false or dynamic_opts == nil
      dynamic_opts = {:reload=>dynamic_opts}
    elsif dynamic_opts.respond_to?(:call)
      dynamic_opts = {:callback=>dynamic_opts}
    end
    if block_given?
      dynamic_opts = dynamic_opts.merge(:callback=>Proc.new)
    end
    name = opts[:name]
    if associations.include?(name) and !dynamic_opts[:callback] and !dynamic_opts[:reload]
      associations[name]
    else
      objs = _load_associated_objects(opts, dynamic_opts)
      if opts.set_reciprocal_to_self?
        if opts.returns_array?
          objs.each{|o| add_reciprocal_object(opts, o)}
        elsif objs
          add_reciprocal_object(opts, objs)
        end
      end

      # If the current object is frozen, you can't update the associations
      # cache. This can cause issues for after_load procs that expect
      # the objects to be already cached in the associations, but
      # unfortunately that case cannot be handled.
      associations[name] = objs unless frozen?
      run_association_callbacks(opts, :after_load, objs)
      frozen? ? objs : associations[name]
    end
  end

  # Whether to use a simple primary key lookup on the associated class when loading.
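  # For example (hypothetical Album/Artist models with default keys):
  #
  #   Album.many_to_one :artist
  #   Album.first.artist   # can use a direct primary key lookup on Artist
  #
  #   Album.many_to_one :artist, :select=>[:id, :name]
  #   Album.first.artist   # falls back to a regular filtered dataset query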
  def load_with_primary_key_lookup?(opts, dynamic_opts)
    opts[:type] == :many_to_one &&
      !dynamic_opts[:callback] &&
      opts.send(:cached_fetch, :many_to_one_pk_lookup){opts.primary_key == opts.associated_class.primary_key}
  end

  # Remove all associated objects from the given association
  def remove_all_associated_objects(opts, *args)
    raise(Sequel::Error, "model object #{inspect} does not have a primary key") unless pk
    send(opts._remove_all_method, *args)
    ret = associations[opts[:name]].each{|o| remove_reciprocal_object(opts, o)} if associations.include?(opts[:name])
    associations[opts[:name]] = []
    ret
  end

  # Remove the given associated object from the given association
  def remove_associated_object(opts, o, *args)
    klass = opts.associated_class
    if o.is_a?(Integer) || o.is_a?(String) || o.is_a?(Array)
      o = remove_check_existing_object_from_pk(opts, o, *args)
    elsif !o.is_a?(klass)
      raise(Sequel::Error, "associated object #{o.inspect} not of correct type #{klass}")
    elsif opts.remove_should_check_existing? && send(opts.dataset_method).where(o.pk_hash).empty?
      raise(Sequel::Error, "associated object #{o.inspect} is not currently associated to #{inspect}")
    end
    raise(Sequel::Error, "model object #{inspect} does not have a primary key") unless pk
    raise(Sequel::Error, "associated object #{o.inspect} does not have a primary key") if opts.need_associated_primary_key? && !o.pk
    return if run_association_callbacks(opts, :before_remove, o) == false
    send(opts._remove_method, o, *args)
    associations[opts[:name]].delete_if{|x| o === x} if associations.include?(opts[:name])
    remove_reciprocal_object(opts, o)
    run_association_callbacks(opts, :after_remove, o)
    o
  end

  # Check that the object from the associated table specified by the primary key
  # is currently associated to the receiver. If it is associated, return the object, otherwise
  # raise an error.
  def remove_check_existing_object_from_pk(opts, o, *args)
    key = o
    pkh = opts.associated_class.qualified_primary_key_hash(key)
    raise(Sequel::Error, "no object with key(s) #{key.inspect} is currently associated to #{inspect}") unless o = send(opts.dataset_method).first(pkh)
    o
  end

  # Remove/unset the current object from/as the given object's reciprocal association.
  def remove_reciprocal_object(opts, o)
    return unless reciprocal = opts.reciprocal
    if opts.reciprocal_array?
      if array = o.associations[reciprocal]
        array.delete_if{|x| self === x}
      end
    else
      o.associations[reciprocal] = nil
    end
  end

  # Run the callback for the association with the object.
  def run_association_callbacks(reflection, callback_type, object)
    raise_error = raise_on_save_failure || !reflection.returns_array?
    stop_on_false = [:before_add, :before_remove, :before_set].include?(callback_type)
    reflection[callback_type].each do |cb|
      res = case cb
      when Symbol
        send(cb, object)
      when Proc
        cb.call(self, object)
      else
        raise Error, "callbacks should either be Procs or Symbols"
      end
      if res == false and stop_on_false
        raise(BeforeHookFailed, "Unable to modify association for #{inspect}: one of the #{callback_type} hooks returned false") if raise_error
        return false
      end
    end
  end

  # Set the given object as the associated object for the given *_to_one association reflection
  def _set_associated_object(opts, o)
    a = associations[opts[:name]]
    return if a && a == o && !set_associated_object_if_same?
    run_association_callbacks(opts, :before_set, o)
    remove_reciprocal_object(opts, a) if a
    send(opts._setter_method, o)
    associations[opts[:name]] = o
    add_reciprocal_object(opts, o) if o
    run_association_callbacks(opts, :after_set, o)
    o
  end

  # Whether to run the associated object setter code if passed the same object as the one already
  # cached in the association. Usually not set (so nil), can be set on a per-object basis
  # if necessary.
  def set_associated_object_if_same?
    @set_associated_object_if_same
  end

  # Set the given object as the associated object for the given many_to_one association reflection
  def set_associated_object(opts, o)
    raise(Error, "associated object #{o.inspect} does not have a primary key") if o && !o.pk
    _set_associated_object(opts, o)
  end

  # Set the given object as the associated object for the given one_to_one association reflection
  def set_one_to_one_associated_object(opts, o)
    raise(Error, "object #{inspect} does not have a primary key") unless pk
    _set_associated_object(opts, o)
  end
end

# Eager loading makes it so that you can load all associated records for a
# set of objects in a single query, instead of a separate query for each object.
#
# Two separate implementations are provided. +eager+ should be used most of the
# time, as it loads associated records using one query per association. However,
# it does not allow you to filter or order based on columns in associated tables. +eager_graph+ loads
# all records in a single query using JOINs, allowing you to filter or order based on columns in associated
# tables. However, +eager_graph+ is usually slower than +eager+, especially if multiple
# one_to_many or many_to_many associations are joined.
#
# You can cascade the eager loading (loading associations on associated objects)
# with no limit to the depth of the cascades. You do this by passing a hash to +eager+ or +eager_graph+
# with the keys being associations of the current model and values being
# associations of the model associated with the current model via the key.
#
# The arguments can be symbols or hashes with symbol keys (for cascaded
# eager loading). Examples:
#
#   Album.eager(:artist).all
#   Album.eager_graph(:artist).all
#   Album.eager(:artist, :genre).all
#   Album.eager_graph(:artist, :genre).all
#   Album.eager(:artist).eager(:genre).all
#   Album.eager_graph(:artist).eager(:genre).all
#   Artist.eager(:albums=>:tracks).all
#   Artist.eager_graph(:albums=>:tracks).all
#   Artist.eager(:albums=>{:tracks=>:genre}).all
#   Artist.eager_graph(:albums=>{:tracks=>:genre}).all
#
# You can also pass a callback as a hash value in order to customize the dataset being
# eager loaded at query time, analogous to the way the :eager_block association option
# allows you to customize it at association definition time. For example,
# if you wanted artists with their albums since 1990:
#
#   Artist.eager(:albums => proc{|ds| ds.where{year > 1990}})
#
# Or if you needed albums and their artist's name only, using a single query:
#
#   Album.eager_graph(:artist => proc{|ds| ds.select(:name)})
#
# To cascade eager loading while using a callback, you substitute the cascaded
# associations with a single entry hash that has the proc callback as the key and
# the cascaded associations as the value.
# This will load artists with their albums
# since 1990, and also the tracks on those albums and the genre for those tracks:
#
#   Artist.eager(:albums => {proc{|ds| ds.where{year > 1990}}=>{:tracks => :genre}})
module DatasetMethods
  Sequel::Dataset.def_mutation_method(:eager, :eager_graph, :module=>self)

  # If the expression is in the form <tt>x = y</tt> where +y+ is a <tt>Sequel::Model</tt>
  # instance, array of <tt>Sequel::Model</tt> instances, or a <tt>Sequel::Model</tt> dataset,
  # assume +x+ is an association symbol and look up the association reflection
  # via the dataset's model. From there, return the appropriate SQL based on the type of
  # association and the values of the foreign/primary keys of +y+. For most association
  # types, this is a simple transformation, but for +many_to_many+ associations this
  # creates a subquery to the join table.
  def complex_expression_sql_append(sql, op, args)
    r = args.at(1)
    if (((op == :'=' || op == :'!=') and r.is_a?(Sequel::Model)) ||
        (multiple = ((op == :IN || op == :'NOT IN') and ((is_ds = r.is_a?(Sequel::Dataset)) or r.all?{|x| x.is_a?(Sequel::Model)}))))
      l = args.at(0)
      if ar = model.association_reflections[l]
        if multiple
          klass = ar.associated_class
          if is_ds
            if r.respond_to?(:model)
              unless r.model <= klass
                # A dataset for a different model class, could be a valid regular query
                return super
              end
            else
              # Not a model dataset, could be a valid regular query
              return super
            end
          else
            unless r.all?{|x| x.is_a?(klass)}
              raise Sequel::Error, "invalid association class for one object for association #{l.inspect} used in dataset filter for model #{model.inspect}, expected class #{klass.inspect}"
            end
          end
        elsif !r.is_a?(ar.associated_class)
          raise Sequel::Error, "invalid association class #{r.class.inspect} for association #{l.inspect} used in dataset filter for model #{model.inspect}, expected class #{ar.associated_class.inspect}"
        end
        if exp = association_filter_expression(op, ar, r)
          literal_append(sql, exp)
        else
          raise Sequel::Error, "invalid association type #{ar[:type].inspect} for association #{l.inspect} used in dataset filter for model #{model.inspect}"
        end
      elsif multiple && (is_ds || r.empty?)
        # Not a query designed for this support, could be a valid regular query
        super
      else
        raise Sequel::Error, "invalid association #{l.inspect} used in dataset filter for model #{model.inspect}"
      end
    else
      super
    end
  end

  # The preferred eager loading method. Loads all associated records using one
  # query for each association.
  #
  # The basic idea for how it works is that the dataset is first loaded normally.
  # Then it goes through all associations that have been specified via +eager+.
  # It loads each of those associations separately, then associates them back
  # to the original dataset via primary/foreign keys. Due to the necessity of
  # all objects being present, you need to use +all+ to use eager loading, as it
  # can't work with +each+.
  #
  # This implementation avoids the complexity of extracting an object graph out
  # of a single dataset, by building the object graph out of multiple datasets,
  # one for each association. By using a separate dataset for each association,
  # it avoids problems such as aliasing conflicts and creating cartesian product
  # result sets if multiple one_to_many or many_to_many eager associations are requested.
  #
  # One limitation of using this method is that you cannot filter the dataset
  # based on values of columns in an associated table, since the associations are loaded
  # in separate queries.
  # To do that you need to load all associations in the
  # same query, and extract an object graph from the results of that query. If you
  # need to filter based on columns in associated tables, look at +eager_graph+
  # or join the tables you need to filter on manually.
  #
  # Each association's order, if defined, is respected.
  # If the association uses a block or has an :eager_block argument, it is used.
  def eager(*associations)
    opt = @opts[:eager]
    opt = opt ? opt.dup : {}
    associations.flatten.each do |association|
      case association
      when Symbol
        check_association(model, association)
        opt[association] = nil
      when Hash
        association.keys.each{|assoc| check_association(model, assoc)}
        opt.merge!(association)
      else
        raise(Sequel::Error, 'Associations must be in the form of a symbol or hash')
      end
    end
    clone(:eager=>opt)
  end

  # The secondary eager loading method. Loads all associations in a single query. This
  # method should only be used if you need to filter or order based on columns in associated tables.
  #
  # This method uses <tt>Dataset#graph</tt> to create appropriate aliases for columns in all the
  # tables. Then it uses the graph's metadata to build the associations from the single hash, and
  # finally replaces the array of hashes with an array of model objects inside all.
  #
  # Be very careful when using this with multiple one_to_many or many_to_many associations, as you can
  # create large cartesian products. If you must graph multiple one_to_many and many_to_many associations,
  # make sure your filters are narrow if you have a large database.
  #
  # Each association's order, if defined, is respected. +eager_graph+ probably
  # won't work correctly on a limited dataset, unless you are
  # only graphing many_to_one and one_to_one associations.
  #
  # Does not use the block defined for the association, since it does a single query for
  # all objects. You can use the :graph_* association options to modify the SQL query.
  #
  # Like +eager+, you need to call +all+ on the dataset for the eager loading to work. If you just
  # call +each+, it will yield plain hashes, each containing all columns from all the tables.
  def eager_graph(*associations)
    if eg = @opts[:eager_graph]
      eg = eg.dup
      [:requirements, :reflections, :reciprocals].each{|k| eg[k] = eg[k].dup}
      ds = clone(:eager_graph=>eg)
      ds.eager_graph_associations(ds, model, ds.opts[:eager_graph][:master], [], *associations)
    else
      # Each of the following have a symbol key for the table alias, with the following values:
      # :reciprocals - the reciprocal instance variable to use for this association
      # :reflections - AssociationReflection instance related to this association
      # :requirements - array of requirements for this association
      ds = clone(:eager_graph=>{:requirements=>{}, :master=>alias_symbol(first_source), :reflections=>{}, :reciprocals=>{}, :cartesian_product_number=>0, :row_proc=>row_proc})
      ds.eager_graph_associations(ds, model, ds.opts[:eager_graph][:master], [], *associations).naked
    end
  end

  # Do not attempt to split the result set into associations,
  # just return results as simple objects. This is useful if you
  # want to use eager_graph as a shortcut to have all of the joins
  # and aliasing set up, but want to do something else with the dataset.
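  # A minimal illustrative sketch (model and column names are assumptions):
  #
  #   # Reuse eager_graph's joins and aliases, but aggregate instead of
  #   # building model objects:
  #   Album.eager_graph(:artist).ungraphed.
  #     group_and_count(Sequel.qualify(:artist, :name)).all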
  def ungraphed
    ds = super.clone(:eager_graph=>nil)
    if (eg = @opts[:eager_graph]) && (rp = eg[:row_proc])
      ds.row_proc = rp
    end
    ds
  end

  protected

  # Call graph on the association with the correct arguments,
  # update the eager_graph data structure, and recurse into
  # eager_graph_associations if there are any passed in associations
  # (which would be dependencies of the current association)
  #
  # Arguments:
  # ds :: Current dataset
  # model :: Current Model
  # ta :: table_alias used for the parent association
  # requirements :: an array, used as a stack for requirements
  # r :: association reflection for the current association, or an SQL::AliasedExpression
  #      with the reflection as the expression and the alias base as the aliaz.
  # *associations :: any associations dependent on this one
  def eager_graph_association(ds, model, ta, requirements, r, *associations)
    if r.is_a?(SQL::AliasedExpression)
      alias_base = r.aliaz
      r = r.expression
    else
      alias_base = r[:graph_alias_base]
    end
    assoc_table_alias = ds.unused_table_alias(alias_base)
    loader = r[:eager_grapher]
    if !associations.empty?
      if associations.first.respond_to?(:call)
        callback = associations.first
        associations = {}
      elsif associations.length == 1 && (assocs = associations.first).is_a?(Hash) && assocs.length == 1 && (pr_assoc = assocs.to_a.first) && pr_assoc.first.respond_to?(:call)
        callback, assoc = pr_assoc
        associations = assoc.is_a?(Array) ? assoc : [assoc]
      end
    end
    ds = loader.call(:self=>ds, :table_alias=>assoc_table_alias, :implicit_qualifier=>ta, :callback=>callback)
    ds = ds.order_more(*qualified_expression(r[:order], assoc_table_alias)) if r[:order] and r[:order_eager_graph]
    eager_graph = ds.opts[:eager_graph]
    eager_graph[:requirements][assoc_table_alias] = requirements.dup
    eager_graph[:reflections][assoc_table_alias] = r
    eager_graph[:cartesian_product_number] += r[:cartesian_product_number] || 2
    ds = ds.eager_graph_associations(ds, r.associated_class, assoc_table_alias, requirements + [assoc_table_alias], *associations) unless associations.empty?
    ds
  end

  # Check the associations are valid for the given model.
  # Call eager_graph_association on each association.
  #
  # Arguments:
  # ds :: Current dataset
  # model :: Current Model
  # ta :: table_alias used for the parent association
  # requirements :: an array, used as a stack for requirements
  # *associations :: the associations to add to the graph
  def eager_graph_associations(ds, model, ta, requirements, *associations)
    return ds if associations.empty?
    associations.flatten.each do |association|
      ds = case association
      when Symbol, SQL::AliasedExpression
        ds.eager_graph_association(ds, model, ta, requirements, eager_graph_check_association(model, association))
      when Hash
        association.each do |assoc, assoc_assocs|
          ds = ds.eager_graph_association(ds, model, ta, requirements, eager_graph_check_association(model, assoc), assoc_assocs)
        end
        ds
      else
        raise(Sequel::Error, 'Associations must be in the form of a symbol or hash')
      end
    end
    ds
  end

  # Replace the array of plain hashes with an array of model objects with all eager_graphed
  # associations set in the associations cache for each object.
  def eager_graph_build_associations(hashes)
    hashes.replace(EagerGraphLoader.new(self).load(hashes))
  end

  private

  # Return an expression for filtering by the given association reflection and associated object.
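  # Illustrative examples (hypothetical Album/Artist models):
  #
  #   Album.where(:artist=>Artist[1])              # filter on the foreign key columns
  #   Album.exclude(:artist=>Artist[1])            # inverted, NULL-aware filter
  #   Album.where(:artist=>Artist.where{id > 10})  # subquery on the artist primary key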
  def association_filter_expression(op, ref, obj)
    meth = :"#{ref[:type]}_association_filter_expression"
    send(meth, op, ref, obj) if respond_to?(meth, true)
  end

  # Handle inversion for association filters by returning an inverted expression,
  # plus also handling cases where the referenced columns are NULL.
  def association_filter_handle_inversion(op, exp, cols)
    if op == :'!=' || op == :'NOT IN'
      if exp == SQL::Constants::FALSE
        ~exp
      else
        ~exp | Sequel::SQL::BooleanExpression.from_value_pairs(cols.zip([]), :OR)
      end
    else
      exp
    end
  end

  # Return an expression for making sure that the given keys match the value of
  # the given methods for either the single object given or for any of the objects
  # given if +obj+ is an array.
  def association_filter_key_expression(keys, meths, obj)
    vals = if obj.is_a?(Sequel::Dataset)
      {(keys.length == 1 ? keys.first : keys)=>obj.select(*meths).exclude(Sequel::SQL::BooleanExpression.from_value_pairs(meths.zip([]), :OR))}
    else
      vals = Array(obj).reject{|o| !meths.all?{|m| o.send(m)}}
      return SQL::Constants::FALSE if vals.empty?
      if obj.is_a?(Array)
        if keys.length == 1
          meth = meths.first
          {keys.first=>vals.map{|o| o.send(meth)}}
        else
          {keys=>vals.map{|o| meths.map{|m| o.send(m)}}}
        end
      else
        keys.zip(meths.map{|k| obj.send(k)})
      end
    end
    SQL::BooleanExpression.from_value_pairs(vals)
  end

  # Make sure the association is valid for this model, and return the related AssociationReflection.
  def check_association(model, association)
    raise(Sequel::UndefinedAssociation, "Invalid association #{association} for #{model.name}") unless reflection = model.association_reflection(association)
    raise(Sequel::Error, "Eager loading is not allowed for #{model.name} association #{association}") if reflection[:allow_eager] == false
    reflection
  end

  # Allow associations that are eagerly graphed to be specified as an SQL::AliasedExpression, for
  # per-call determining of the alias base.
  def eager_graph_check_association(model, association)
    if association.is_a?(SQL::AliasedExpression)
      SQL::AliasedExpression.new(check_association(model, association.expression), association.aliaz)
    else
      check_association(model, association)
    end
  end

  # Eagerly load all specified associations
  def eager_load(a, eager_assoc=@opts[:eager])
    return if a.empty?
    # Key is foreign/primary key name symbol
    # Value is hash with keys being foreign/primary key values (generally integers)
    # and values being an array of current model objects with that
    # specific foreign/primary key
    key_hash = {}
    # Reflections for all associations to eager load
    reflections = eager_assoc.keys.collect{|assoc| model.association_reflection(assoc) || (raise Sequel::UndefinedAssociation, "Model: #{self}, Association: #{assoc}")}

    # Populate the key_hash entry for each association being eagerly loaded
    reflections.each do |r|
      if key = r.eager_loader_key
        # key_hash for this key has already been populated,
        # skip populating again so that duplicate values
        # aren't added.
        unless id_map = key_hash[key]
          id_map = key_hash[key] = Hash.new{|h,k| h[k] = []}

          # Supporting both single (Symbol) and composite (Array) keys.
          a.each do |rec|
            case key
            when Array
              if (k = key.map{|k2| rec.send(k2)}) && k.all?
id_map[k] << rec end when Symbol if k = rec.send(key) id_map[k] << rec end else raise Error, "unhandled eager_loader_key #{key.inspect} for association #{r[:name]}" end end end else id_map = nil end loader = r[:eager_loader] associations = eager_assoc[r[:name]] if associations.respond_to?(:call) eager_block = associations associations = {} elsif associations.is_a?(Hash) && associations.length == 1 && (pr_assoc = associations.to_a.first) && pr_assoc.first.respond_to?(:call) eager_block, associations = pr_assoc end loader.call(:key_hash=>key_hash, :rows=>a, :associations=>associations, :self=>self, :eager_block=>eager_block, :id_map=>id_map) a.each{|object| object.send(:run_association_callbacks, r, :after_load, object.associations[r[:name]])} unless r[:after_load].empty? end end # Return a subquery expression for filering by a many_to_many association def many_to_many_association_filter_expression(op, ref, obj) lpks, lks, rks = ref.values_at(:left_primary_key_columns, :left_keys, :right_keys) jt = ref.join_table_alias lpks = lpks.first if lpks.length == 1 lpks = ref.qualify(model.table_name, lpks) meths = if obj.is_a?(Sequel::Dataset) ref.qualify(obj.model.table_name, ref.right_primary_keys) else ref.right_primary_key_methods end exp = association_filter_key_expression(ref.qualify(jt, rks), meths, obj) if exp == SQL::Constants::FALSE association_filter_handle_inversion(op, exp, Array(lpks)) else association_filter_handle_inversion(op, SQL::BooleanExpression.from_value_pairs(lpks=>model.db.from(ref[:join_table]).select(*ref.qualify(jt, lks)).where(exp).exclude(SQL::BooleanExpression.from_value_pairs(ref.qualify(jt, lks).zip([]), :OR))), Array(lpks)) end end # Return a simple equality expression for filering by a many_to_one association def many_to_one_association_filter_expression(op, ref, obj) keys = ref.qualify(model.table_name, ref[:key_columns]) meths = if obj.is_a?(Sequel::Dataset) ref.qualify(obj.model.table_name, ref.primary_keys) else ref.primary_key_methods end association_filter_handle_inversion(op, association_filter_key_expression(keys, meths, obj), keys) end # Return a simple equality expression for filering by a one_to_* association def one_to_many_association_filter_expression(op, ref, obj) keys = ref.qualify(model.table_name, ref[:primary_key_columns]) meths = if obj.is_a?(Sequel::Dataset) ref.qualify(obj.model.table_name, ref[:keys]) else ref[:key_methods] end association_filter_handle_inversion(op, association_filter_key_expression(keys, meths, obj), keys) end alias one_to_one_association_filter_expression one_to_many_association_filter_expression # Build associations from the graph if #eager_graph was used, # and/or load other associations if #eager was used. def post_load(all_records) eager_graph_build_associations(all_records) if @opts[:eager_graph] eager_load(all_records) if @opts[:eager] super end end # This class is the internal implementation of eager_graph. It is responsible for taking an array of plain # hashes and returning an array of model objects with all eager_graphed associations already set in the # association cache. 
class EagerGraphLoader # Hash with table alias symbol keys and after_load hook values attr_reader :after_load_map # Hash with table alias symbol keys and association name values attr_reader :alias_map # Hash with table alias symbol keys and subhash values mapping column_alias symbols to the # symbol of the real name of the column attr_reader :column_maps # Recursive hash with table alias symbol keys mapping to hashes with dependent table alias symbol keys. attr_reader :dependency_map # Hash with table alias symbol keys and [limit, offset] values attr_reader :limit_map # Hash with table alias symbol keys and callable values used to create model instances # The table alias symbol for the primary model attr_reader :master # Hash with table alias symbol keys and primary key symbol values (or arrays of primary key symbols for # composite key tables) attr_reader :primary_keys # Hash with table alias symbol keys and reciprocal association symbol values, # used for setting reciprocals for one_to_many associations. attr_reader :reciprocal_map # Hash with table alias symbol keys and subhash values mapping primary key symbols (or array of symbols) # to model instances. Used so that only a single model instance is created for each object. attr_reader :records_map # Hash with table alias symbol keys and AssociationReflection values attr_reader :reflection_map # Hash with table alias symbol keys and callable values used to create model instances attr_reader :row_procs # Hash with table alias symbol keys and true/false values, where true means the # association represented by the table alias uses an array of values instead of # a single value (i.e. true => *_many, false => *_to_one). attr_reader :type_map # Initialize all of the data structures used during loading. def initialize(dataset) opts = dataset.opts eager_graph = opts[:eager_graph] @master = eager_graph[:master] requirements = eager_graph[:requirements] reflection_map = @reflection_map = eager_graph[:reflections] reciprocal_map = @reciprocal_map = eager_graph[:reciprocals] @unique = eager_graph[:cartesian_product_number] > 1 alias_map = @alias_map = {} type_map = @type_map = {} after_load_map = @after_load_map = {} limit_map = @limit_map = {} reflection_map.each do |k, v| alias_map[k] = v[:name] type_map[k] = v.returns_array? after_load_map[k] = v[:after_load] unless v[:after_load].empty? limit_map[k] = v.limit_and_offset if v[:limit] end # Make dependency map hash out of requirements array for each association. # This builds a tree of dependencies that will be used for recursion # to ensure that all parts of the object graph are loaded into the # appropriate subordinate association. @dependency_map = {} # Sort the associations by requirements length, so that # requirements are added to the dependency hash before their # dependencies. requirements.sort_by{|a| a[1].length}.each do |ta, deps| if deps.empty? dependency_map[ta] = {} else deps = deps.dup hash = dependency_map[deps.shift] deps.each do |dep| hash = hash[dep] end hash[ta] = {} end end # This mapping is used to make sure that duplicate entries in the # result set are mapped to a single record. For example, using a # single one_to_many association with 10 associated records, # the main object column values appear in the object graph 10 times. # We map by primary key, if available, or by the object's entire values, # if not. The mapping must be per table, so create sub maps for each table # alias. 
        records_map = {@master=>{}}
        alias_map.keys.each{|ta| records_map[ta] = {}}
        @records_map = records_map

        datasets = opts[:graph][:table_aliases].to_a.reject{|ta,ds| ds.nil?}
        column_aliases = opts[:graph_aliases] || opts[:graph][:column_aliases]
        primary_keys = {}
        column_maps = {}
        models = {}
        row_procs = {}
        datasets.each do |ta, ds|
          models[ta] = ds.model
          primary_keys[ta] = []
          column_maps[ta] = {}
          row_procs[ta] = ds.row_proc
        end
        column_aliases.each do |col_alias, tc|
          ta, column = tc
          column_maps[ta][col_alias] = column
        end
        column_maps.each do |ta, h|
          pk = models[ta].primary_key
          if pk.is_a?(Array)
            primary_keys[ta] = []
            h.select{|ca, c| primary_keys[ta] << ca if pk.include?(c)}
          else
            h.select{|ca, c| primary_keys[ta] = ca if pk == c}
          end
        end
        @column_maps = column_maps
        @primary_keys = primary_keys
        @row_procs = row_procs

        # For performance, create two special maps for the master table,
        # so you can skip a hash lookup.
        @master_column_map = column_maps[master]
        @master_primary_keys = primary_keys[master]

        # Add a special hash mapping table alias symbols to 5 element arrays that just
        # contain the data in other data structures for that table alias.  This is
        # used for performance, to get all values in one hash lookup instead of
        # separate hash lookups for each data structure.
        ta_map = {}
        alias_map.keys.each do |ta|
          ta_map[ta] = [records_map[ta], row_procs[ta], alias_map[ta], type_map[ta], reciprocal_map[ta]]
        end
        @ta_map = ta_map
      end

      # Return an array of primary model instances with the associations cache prepopulated
      # for all model objects (both primary and associated).
      def load(hashes)
        master = master()

        # Assign to local variables for speed increase
        rp = row_procs[master]
        rm = records_map[master]
        dm = dependency_map

        # This will hold the final record set that we will be replacing the object graph with.
        records = []

        hashes.each do |h|
          unless key = master_pk(h)
            key = hkey(master_hfor(h))
          end
          unless primary_record = rm[key]
            primary_record = rm[key] = rp.call(master_hfor(h))
            # Only add it to the list of records to return if it is a new record
            records.push(primary_record)
          end
          # Build all associations for the current object and its dependencies
          _load(dm, primary_record, h)
        end

        # Remove duplicate records from all associations if this graph could possibly be a cartesian product
        # Run after_load procs if there are any
        post_process(records, dm) if @unique || !after_load_map.empty? || !limit_map.empty?

        records
      end

      private

      # Recursive method that creates associated model objects and associates them to the current model object.
      def _load(dependency_map, current, h)
        dependency_map.each do |ta, deps|
          unless key = pk(ta, h)
            ta_h = hfor(ta, h)
            unless ta_h.values.any?
              assoc_name = alias_map[ta]
              unless (assoc = current.associations).has_key?(assoc_name)
                assoc[assoc_name] = type_map[ta] ? [] : nil
              end
              next
            end
            key = hkey(ta_h)
          end
          rm, rp, assoc_name, tm, rcm = @ta_map[ta]
          unless rec = rm[key]
            rec = rm[key] = rp.call(hfor(ta, h))
          end

          if tm
            unless (assoc = current.associations).has_key?(assoc_name)
              assoc[assoc_name] = []
            end
            assoc[assoc_name].push(rec)
            rec.associations[rcm] = current if rcm
          else
            current.associations[assoc_name] ||= rec
          end
          # Recurse into dependencies of the current object
          _load(deps, rec, h) unless deps.empty?
        end
      end

      # Return the subhash for the specific table alias +ta+ by parsing the values out of the main hash +h+
      def hfor(ta, h)
        out = {}
        @column_maps[ta].each{|ca, c| out[c] = h[ca]}
        out
      end

      # Return a suitable hash key for any subhash +h+, which is an array of values by column order.
      # This is only used if the primary key cannot be used.
      def hkey(h)
        h.sort_by{|x| x[0].to_s}
      end

      # Return the subhash for the master table by parsing the values out of the main hash +h+
      def master_hfor(h)
        out = {}
        @master_column_map.each{|ca, c| out[c] = h[ca]}
        out
      end

      # Return a primary key value for the master table by parsing it out of the main hash +h+.
      def master_pk(h)
        x = @master_primary_keys
        if x.is_a?(Array)
          unless x == []
            x = x.map{|ca| h[ca]}
            x if x.all?
          end
        else
          h[x]
        end
      end

      # Return a primary key value for the given table alias by parsing it out of the main hash +h+.
      def pk(ta, h)
        x = primary_keys[ta]
        if x.is_a?(Array)
          unless x == []
            x = x.map{|ca| h[ca]}
            x if x.all?
          end
        else
          h[x]
        end
      end

      # If the result set is the result of a cartesian product, then it is possible that
      # there are multiple records for each association when there should only be one.
      # In that case, for each object in all associations loaded via +eager_graph+, run
      # uniq! on the association to make sure no duplicate records show up.
      # Note that this can cause legitimate duplicate records to be removed.
      def post_process(records, dependency_map)
        records.each do |record|
          dependency_map.each do |ta, deps|
            assoc_name = alias_map[ta]
            list = record.send(assoc_name)
            rec_list = if type_map[ta]
              list.uniq!
              if lo = limit_map[ta]
                limit, offset = lo
                offset ||= 0
                list.replace(list[(offset)..(limit ? (offset)+limit-1 : -1)])
              end
              list
            elsif list
              [list]
            else
              []
            end
            record.send(:run_association_callbacks, reflection_map[ta], :after_load, list) if after_load_map[ta]
            post_process(rec_list, deps) if !rec_list.empty? && !deps.empty?
          end
        end
      end
    end
  end
end
end

ruby-sequel-4.1.1/lib/sequel/model/base.rb

module Sequel
  class Model
    extend Enumerable
    extend Inflections

    # Class methods for Sequel::Model that implement basic model functionality.
    #
    # * All of the method names in Model::DATASET_METHODS have class methods created that call
    #   the Model's dataset with the method of the same name with the given arguments.
    module ClassMethods
      # Which columns should be the only columns allowed in a call to a mass assignment method (e.g. set)
      # (default: not set, so all columns not otherwise restricted are allowed).
      attr_reader :allowed_columns

      # Array of modules that extend this model's dataset.  Stored
      # so that if the model's dataset is changed, it will be extended
      # with all of these modules.
      attr_reader :dataset_method_modules

      # The default options to use for Model#set_fields.  These are merged with
      # the options given to set_fields.
      attr_accessor :default_set_fields_options

      # SQL string fragment used for faster DELETE statement creation when deleting/destroying
      # model instances, or nil if the optimization should not be used. For internal use only.
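      # (Sketch: for a model with a simple table and primary key this holds
      # something like "DELETE FROM artists WHERE id = "; the literalized pk is
      # appended at delete time.  See reset_fast_pk_lookup_sql below.)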
attr_reader :fast_instance_delete_sql # The dataset that instance datasets (#this) are based on. Generally a naked version of # the model's dataset limited to one row. For internal use only. attr_reader :instance_dataset # Array of plugin modules loaded by this class # # Sequel::Model.plugins # # => [Sequel::Model, Sequel::Model::Associations] attr_reader :plugins # The primary key for the class. Sequel can determine this automatically for # many databases, but not all, so you may need to set it manually. If not # determined automatically, the default is :id. attr_reader :primary_key # Whether to raise an error instead of returning nil on a failure # to save/create/save_changes/etc due to a validation failure or # a before_* hook returning false. attr_accessor :raise_on_save_failure # Whether to raise an error when unable to typecast data for a column # (default: true). This should be set to false if you want to use # validations to display nice error messages to the user (e.g. most # web applications). You can use the validates_schema_types validation # (from the validation_helpers plugin) in connection with this setting to # check for typecast failures during validation. attr_accessor :raise_on_typecast_failure # Whether to raise an error if an UPDATE or DELETE query related to # a model instance does not modify exactly 1 row. If set to false, # Sequel will not check the number of rows modified (default: true). attr_accessor :require_modification # Should be the literal primary key column name if this Model's table has a simple primary key, or # nil if the model has a compound primary key or no primary key. attr_reader :simple_pk # Should be the literal table name if this Model's dataset is a simple table (no select, order, join, etc.), # or nil otherwise. This and simple_pk are used for an optimization in Model.[]. attr_reader :simple_table # Whether new/set/update and their variants should raise an error # if an invalid key is used. A key is invalid if no setter method exists # for that key or the access to the setter method is restricted (e.g. due to it # being a primary key field). If set to false, silently skip # any key where the setter method doesn't exist or access to it is restricted. attr_accessor :strict_param_setting # Whether to typecast the empty string ('') to nil for columns that # are not string or blob. In most cases the empty string would be the # way to specify a NULL SQL value in string form (nil.to_s == ''), # and an empty string would not usually be typecast correctly for other # types, so the default is true. attr_accessor :typecast_empty_string_to_nil # Whether to typecast attribute values on assignment (default: true). # If set to false, no typecasting is done, so it will be left up to the # database to typecast the value correctly. attr_accessor :typecast_on_assignment # Whether to enable the after_commit and after_rollback hooks when saving/destroying # instances. On by default, can be turned off for performance reasons or when using # prepared transactions (which aren't compatible with after commit/rollback). attr_accessor :use_after_commit_rollback # Whether to use a transaction by default when saving/deleting records (default: true). # If you are sending database queries in before_* or after_* hooks, you shouldn't change # the default setting without a good reason. attr_accessor :use_transactions # Returns the first record from the database matching the conditions. # If a hash is given, it is used as the conditions. 
If another # object is given, it finds the first record whose primary key(s) match # the given argument(s). If no object is returned by the dataset, returns nil. # # Artist[1] # SELECT * FROM artists WHERE id = 1 # # => #<Artist {:id=>1, ...}> # # Artist[:name=>'Bob'] # SELECT * FROM artists WHERE (name = 'Bob') LIMIT 1 # # => #<Artist {:name=>'Bob', ...}> def [](*args) args = args.first if args.size <= 1 args.is_a?(Hash) ? dataset[args] : (primary_key_lookup(args) unless args.nil?) end # Initializes a model instance as an existing record. This constructor is # used by Sequel to initialize model instances when fetching records. # Requires that values be a hash where all keys are symbols. It # probably should not be used by external code. def call(values) o = allocate o.instance_variable_set(:@values, values) o end # Clear the setter_methods cache def clear_setter_methods_cache @setter_methods = nil end # Returns the columns in the result set in their original order. # Generally, this will use the columns determined via the database # schema, but in certain cases (e.g. models that are based on a joined # dataset) it will use <tt>Dataset#columns</tt> to find the columns. # # Artist.columns # # => [:id, :name] def columns @columns || set_columns(dataset.naked.columns) end # Creates instance using new with the given values and block, and saves it. # # Artist.create(:name=>'Bob') # # INSERT INTO artists (name) VALUES ('Bob') # # Artist.create do |a| # a.name = 'Jim' # end # INSERT INTO artists (name) VALUES ('Jim') def create(values = {}, &block) new(values, &block).save end # Returns the dataset associated with the Model class. Raises # an +Error+ if there is no associated dataset for this class. # In most cases, you don't need to call this directly, as Model # proxies many dataset methods to the underlying dataset. # # Artist.dataset.all # SELECT * FROM artists def dataset @dataset || raise(Error, "No dataset associated with #{self}") end # Alias of set_dataset def dataset=(ds) set_dataset(ds) end # Extend the dataset with a module, similar to adding # a plugin with the methods defined in DatasetMethods. # This is the recommended way to add methods to model datasets. # # If an argument, it should be a module, and is used to extend # the underlying dataset. Otherwise an anonymous module is created, and # if a block is given, it is module_evaled, allowing you do define # dataset methods directly using the standard ruby def syntax. # Returns the module given or the anonymous module created. # # # Usage with existing module # Artist.dataset_module Sequel::ColumnsIntrospection # # # Usage with anonymous module # Artist.dataset_module do # def foo # :bar # end # end # Artist.dataset.foo # # => :bar # Artist.foo # # => :bar # # Any anonymous modules created are actually instances of Sequel::Model::DatasetModule # (a Module subclass), which allows you to call the subset method on them: # # Artist.dataset_module do # subset :released, Sequel.identifier(release_date) > Sequel::CURRENT_DATE # end # # Any public methods in the dataset module will have class methods created that # call the method on the dataset, assuming that the class method is not already # defined. def dataset_module(mod = nil) if mod raise Error, "can't provide both argument and block to Model.dataset_module" if block_given? dataset_extend(mod) mod else @dataset_module ||= DatasetModule.new(self) @dataset_module.module_eval(&Proc.new) if block_given? 
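          # (Sketch: the block is module_evaled on the DatasetModule instance,
          # so a plain `def` inside it defines a dataset method.)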
dataset_extend(@dataset_module) @dataset_module end end # Returns the database associated with the Model class. # If this model doesn't have a database associated with it, # assumes the superclass's database, or the first object in # Sequel::DATABASES. If no Sequel::Database object has # been created, raises an error. # # Artist.db.transaction do # BEGIN # Artist.create(:name=>'Bob') # # INSERT INTO artists (name) VALUES ('Bob') # end # COMMIT def db return @db if @db @db = self == Model ? Sequel.synchronize{DATABASES.first} : superclass.db raise(Error, "No database associated with #{self}: have you called Sequel.connect or #{self}.db= ?") unless @db @db end # Sets the database associated with the Model class. If the # model has an associated dataset, sets the model's dataset # to a dataset on the new database with the same options # used by the current dataset. This can be used directly on # Sequel::Model to set the default database to be used # by subclasses, or to override the database used for specific # models: # # Sequel::Model.db = DB1 # Artist.db = DB2 # # Note that you should not use this to change the model's database # at runtime. If you have that need, you should look into Sequel's # sharding support. def db=(db) @db = db set_dataset(db.dataset.clone(@dataset.opts)) if @dataset end # Returns the cached schema information if available or gets it # from the database. This is a hash where keys are column symbols # and values are hashes of information related to the column. See # <tt>Database#schema</tt>. # # Artist.db_schema # # {:id=>{:type=>:integer, :primary_key=>true, ...}, # # :name=>{:type=>:string, :primary_key=>false, ...}} def db_schema @db_schema ||= get_db_schema end # Create a column alias, where the column methods have one name, but the underlying storage uses a # different name. def def_column_alias(meth, column) clear_setter_methods_cache overridable_methods_module.module_eval do define_method(meth){self[column]} define_method("#{meth}="){|v| self[column] = v} end end # If a block is given, define a method on the dataset (if the model currently has an dataset) with the given argument name using # the given block. Also define a class method on the model that calls the # dataset method. Stores the method name and block so that it can be reapplied if the model's # dataset changes. # # If a block is not given, just define a class method on the model for each argument # that calls the dataset method of the same argument name. # # It is recommended that you define methods inside a block passed to #dataset_module # instead of using this method, as #dataset_module allows you to use normal # ruby def syntax. # # # Add new dataset method and class method that calls it # Artist.def_dataset_method(:by_name){order(:name)} # Artist.filter(:name.like('A%')).by_name # Artist.by_name.filter(:name.like('A%')) # # # Just add a class method that calls an existing dataset method # Artist.def_dataset_method(:server!) # Artist.server!(:server1) def def_dataset_method(*args, &block) raise(Error, "No arguments given") if args.empty? if block raise(Error, "Defining a dataset method using a block requires only one argument") if args.length > 1 dataset_module{define_method(args.first, &block)} else args.each{|arg| def_model_dataset_method(arg)} end end # Finds a single record according to the supplied filter. # You are encouraged to use Model.[] or Model.first instead of this method. 
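      # Since find is implemented as <tt>filter(*args, &block).first</tt>, a
      # block is treated as a virtual row filter; sketch:
      #
      #   Artist.find{name > 'M'} # same as Artist.filter{name > 'M'}.first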
# # Artist.find(:name=>'Bob') # # SELECT * FROM artists WHERE (name = 'Bob') LIMIT 1 # # Artist.find{name > 'M'} # # SELECT * FROM artists WHERE (name > 'M') LIMIT 1 def find(*args, &block) filter(*args, &block).first end # Like +find+ but invokes create with given conditions when record does not # exist. Unlike +find+ in that the block used in this method is not passed # to +find+, but instead is passed to +create+ only if +find+ does not # return an object. # # Artist.find_or_create(:name=>'Bob') # # SELECT * FROM artists WHERE (name = 'Bob') LIMIT 1 # # INSERT INTO artists (name) VALUES ('Bob') # # Artist.find_or_create(:name=>'Jim'){|a| a.hometown = 'Sactown'} # # SELECT * FROM artists WHERE (name = 'Jim') LIMIT 1 # # INSERT INTO artists (name, hometown) VALUES ('Jim', 'Sactown') def find_or_create(cond, &block) find(cond) || create(cond, &block) end # Clear the setter_methods cache when a module is included, as it # may contain setter methods. def include(mod) clear_setter_methods_cache super end # If possible, set the dataset for the model subclass as soon as it # is created. Also, make sure the inherited class instance variables # are copied into the subclass. # # Sequel queries the database to get schema information as soon as # a model class is created: # # class Artist < Sequel::Model # Causes schema query # end def inherited(subclass) super ivs = subclass.instance_variables.collect{|x| x.to_s} inherited_instance_variables.each do |iv, dup| next if ivs.include?(iv.to_s) if (sup_class_value = instance_variable_get(iv)) && dup sup_class_value = case dup when :dup sup_class_value.dup when :hash_dup h = {} sup_class_value.each{|k,v| h[k] = v.dup} h when Proc dup.call(sup_class_value) else raise Error, "bad inherited instance variable type: #{dup.inspect}" end end subclass.instance_variable_set(iv, sup_class_value) end unless ivs.include?("@dataset") if self == Model || !@dataset n = subclass.name unless n.nil? || n.empty? db subclass.set_dataset(subclass.implicit_table_name) rescue nil end elsif @dataset subclass.set_dataset(@dataset.clone, :inherited=>true) rescue nil end end end # Returns the implicit table name for the model class, which is the demodulized, # underscored, pluralized name of the class. # # Artist.implicit_table_name # => :artists # Foo::ArtistAlias.implicit_table_name # => :artist_aliases def implicit_table_name pluralize(underscore(demodulize(name))).to_sym end # Calls #call with the values hash. Only for backwards compatibility. def load(values) call(values) end # Clear the setter_methods cache when a setter method is added def method_added(meth) clear_setter_methods_cache if meth.to_s =~ SETTER_METHOD_REGEXP super end # Mark the model as not having a primary key. Not having a primary key # can cause issues, among which is that you won't be able to update records. # # Artist.primary_key # => :id # Artist.no_primary_key # Artist.primary_key # => nil def no_primary_key clear_setter_methods_cache self.simple_pk = @primary_key = nil end # Loads a plugin for use with the model class, passing optional arguments # to the plugin. If the plugin is a module, load it directly. Otherwise, # require the plugin from either sequel/plugins/#{plugin} or # sequel_#{plugin}, and then attempt to load the module using a # the camelized plugin name under Sequel::Plugins. def plugin(plugin, *args, &block) m = plugin.is_a?(Module) ? 
          plugin : plugin_module(plugin)

        unless @plugins.include?(m)
          @plugins << m
          m.apply(self, *args, &block) if m.respond_to?(:apply)
          extend(m::ClassMethods) if plugin_module_defined?(m, :ClassMethods)
          include(m::InstanceMethods) if plugin_module_defined?(m, :InstanceMethods)
          if plugin_module_defined?(m, :DatasetMethods)
            dataset_extend(m::DatasetMethods, :create_class_methods=>false)
          end
        end
        m.configure(self, *args, &block) if m.respond_to?(:configure)
      end

      # Returns primary key attribute hash.  If using a composite primary key,
      # the value should be an array with values for each primary key in the correct
      # order.  For a standard primary key, value should be an object with a
      # compatible type for the key.  If the model does not have a primary key,
      # raises an +Error+.
      #
      #   Artist.primary_key_hash(1) # => {:id=>1}
      #   Artist.primary_key_hash([1, 2]) # => {:id1=>1, :id2=>2}
      def primary_key_hash(value)
        raise(Error, "#{self} does not have a primary key") unless key = @primary_key
        case key
        when Array
          hash = {}
          key.each_with_index{|k,i| hash[k] = value[i]}
          hash
        else
          {key => value}
        end
      end

      # Return a hash where the keys are qualified column references.  Uses the given
      # qualifier if provided, or the table_name otherwise.  This is useful if you
      # plan to join other tables to this table and you want the column references
      # to be qualified.
      #
      #   Artist.filter(Artist.qualified_primary_key_hash(1))
      #   # SELECT * FROM artists WHERE (artists.id = 1)
      def qualified_primary_key_hash(value, qualifier=table_name)
        h = primary_key_hash(value)
        h.to_a.each{|k,v| h[SQL::QualifiedIdentifier.new(qualifier, k)] = h.delete(k)}
        h
      end

      # Restrict the setting of the primary key(s) when using mass assignment (e.g. +set+).  Because
      # this is the default, this only makes sense to use in a subclass where the
      # parent class has used +unrestrict_primary_key+.
      def restrict_primary_key
        clear_setter_methods_cache
        @restrict_primary_key = true
      end

      # Whether or not setting the primary key(s) when using mass assignment (e.g. +set+) is
      # restricted, true by default.
      def restrict_primary_key?
        @restrict_primary_key
      end

      # Set the columns to allow when using mass assignment (e.g. +set+).  Using this means that
      # any columns not listed here will not be modified.  If you have any virtual
      # setter methods (methods that end in =) that you want to be used during
      # mass assignment, they need to be listed here as well (without the =).
      #
      # It may be better to use a method such as +set_only+ or +set_fields+ that lets you specify
      # the allowed fields per call.
      #
      #   Artist.set_allowed_columns(:name, :hometown)
      #   Artist.set(:name=>'Bob', :hometown=>'Sactown') # No Error
      #   Artist.set(:name=>'Bob', :records_sold=>30000) # Error
      def set_allowed_columns(*cols)
        clear_setter_methods_cache
        @allowed_columns = cols
      end

      # Sets the dataset associated with the Model class. +ds+ can be a +Symbol+,
      # +LiteralString+, <tt>SQL::Identifier</tt>, <tt>SQL::QualifiedIdentifier</tt>,
      # <tt>SQL::AliasedExpression</tt>
      # (all specifying a table name in the current database), or a +Dataset+.
      # If a dataset is used, the model's database is changed to the database of the given
      # dataset.  If a dataset is not used, a dataset is created from the current
      # database with the table name given. Other arguments raise an +Error+.
      # Returns self.
      #
      # This changes the row_proc of the dataset to return
      # model objects and extends the dataset with the dataset_method_modules.
      # It also attempts to determine the database schema for the model,
      # based on the given dataset.
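      # As a sketch, a filtered dataset also works (assumes an active column):
      #
      #   Artist.set_dataset(DB[:artists].where(:active=>true))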
# # Artist.set_dataset(:tbl_artists) # Artist.set_dataset(DB[:artists]) # # Note that you should not use this to change the model's dataset # at runtime. If you have that need, you should look into Sequel's # sharding support. def set_dataset(ds, opts=OPTS) inherited = opts[:inherited] case ds when Symbol, SQL::Identifier, SQL::QualifiedIdentifier, SQL::AliasedExpression, LiteralString self.simple_table = db.literal(ds) ds = db.from(ds) when Dataset self.simple_table = if ds.send(:simple_select_all?) ds.literal(ds.first_source_table) else nil end @db = ds.db else raise(Error, "Model.set_dataset takes one of the following classes as an argument: Symbol, LiteralString, SQL::Identifier, SQL::QualifiedIdentifier, SQL::AliasedExpression, Dataset") end set_dataset_row_proc(ds) @dataset = ds @require_modification = Sequel::Model.require_modification.nil? ? @dataset.provides_accurate_rows_matched? : Sequel::Model.require_modification if inherited self.simple_table = superclass.simple_table @columns = @dataset.columns rescue nil else @dataset_method_modules.each{|m| @dataset.extend(m)} if @dataset_method_modules end @dataset.model = self if @dataset.respond_to?(:model=) check_non_connection_error{@db_schema = (inherited ? superclass.db_schema : get_db_schema)} reset_instance_dataset self end # Sets the primary key for this model. You can use either a regular # or a composite primary key. To not use a primary key, set to nil # or use +no_primary_key+. On most adapters, Sequel can automatically # determine the primary key to use, so this method is not needed often. # # class Person < Sequel::Model # # regular key # set_primary_key :person_id # end # # class Tagging < Sequel::Model # # composite key # set_primary_key [:taggable_id, :tag_id] # end def set_primary_key(key) clear_setter_methods_cache if key.is_a?(Array) && key.length < 2 key = key.first end self.simple_pk = if key && !key.is_a?(Array) (@dataset || db).literal(key) end @primary_key = key end # Cache of setter methods to allow by default, in order to speed up new/set/update instance methods. def setter_methods @setter_methods ||= get_setter_methods end # Sets up a dataset method that returns a filtered dataset. # Sometimes thought of as a scope, and like most dataset methods, # they can be chained. # For example: # # Topic.subset(:joes, :username.like('%joe%')) # Topic.subset(:popular){num_posts > 100} # Topic.subset(:recent){created_on > Date.today - 7} # # Allows you to do: # # Topic.joes.recent.popular # # to get topics with a username that includes joe that # have more than 100 posts and were created less than # 7 days ago. # # Both the args given and the block are passed to <tt>Dataset#filter</tt>. # # This method creates dataset methods that do not accept arguments. To create # dataset methods that accept arguments, you should use define a # method directly inside a #dataset_module block. def subset(name, *args, &block) dataset_module.subset(name, *args, &block) end # Returns name of primary table for the dataset. If the table for the dataset # is aliased, returns the aliased name. # # Artist.table_name # => :artists # Sequel::Model(:foo).table_name # => :foo # Sequel::Model(:foo___bar).table_name # => :bar def table_name dataset.first_source_alias end # Allow the setting of the primary key(s) when using the mass assignment methods. # Using this method can open up security issues, be very careful before using it. 
# # Artist.set(:id=>1) # Error # Artist.unrestrict_primary_key # Artist.set(:id=>1) # No Error def unrestrict_primary_key clear_setter_methods_cache @restrict_primary_key = false end # Return the model instance with the primary key, or nil if there is no matching record. def with_pk(pk) primary_key_lookup(pk) end # Return the model instance with the primary key, or raise NoMatchingRow if there is no matching record. def with_pk!(pk) with_pk(pk) || raise(NoMatchingRow) end # Add model methods that call dataset methods Plugins.def_dataset_methods(self, DATASET_METHODS) private # Yield to the passed block and swallow all errors other than DatabaseConnectionErrors. def check_non_connection_error begin yield rescue Sequel::DatabaseConnectionError raise rescue nil end end # Add the module to the class's dataset_method_modules. Extend the dataset with the # module if the model has a dataset. Add dataset methods to the class for all # public dataset methods. def dataset_extend(mod, opts=OPTS) @dataset.extend(mod) if @dataset reset_instance_dataset dataset_method_modules << mod unless opts[:create_class_methods] == false mod.public_instance_methods.each{|meth| def_model_dataset_method(meth)} end end # Create a column accessor for a column with a method name that is hard to use in ruby code. def def_bad_column_accessor(column) overridable_methods_module.module_eval do define_method(column){self[column]} define_method("#{column}="){|v| self[column] = v} end end # Create the column accessors. For columns that can be used as method names directly in ruby code, # use a string to define the method for speed. For other columns names, use a block. def def_column_accessor(*columns) clear_setter_methods_cache columns, bad_columns = columns.partition{|x| NORMAL_METHOD_NAME_REGEXP.match(x.to_s)} bad_columns.each{|x| def_bad_column_accessor(x)} im = instance_methods.collect{|x| x.to_s} columns.each do |column| meth = "#{column}=" overridable_methods_module.module_eval("def #{column}; self[:#{column}] end", __FILE__, __LINE__) unless im.include?(column.to_s) overridable_methods_module.module_eval("def #{meth}(v); self[:#{column}] = v end", __FILE__, __LINE__) unless im.include?(meth) end end # Define a model method that calls the dataset method with the same name, # only used for methods with names that can't be presented directly in # ruby code. def def_model_dataset_method(meth) return if respond_to?(meth, true) if meth.to_s =~ NORMAL_METHOD_NAME_REGEXP instance_eval("def #{meth}(*args, &block); dataset.#{meth}(*args, &block) end", __FILE__, __LINE__) else (class << self; self; end).send(:define_method, meth){|*args, &block| dataset.send(meth, *args, &block)} end end # Get the schema from the database, fall back on checking the columns # via the database if that will return inaccurate results or if # it raises an error. def get_db_schema(reload = false) set_columns(nil) return nil unless @dataset schema_hash = {} ds_opts = dataset.opts get_columns = proc{check_non_connection_error{columns} || []} schema_array = check_non_connection_error{db.schema(dataset, :reload=>reload)} if db.supports_schema_parsing? if schema_array schema_array.each{|k,v| schema_hash[k] = v} if ds_opts.include?(:select) # We don't remove the columns from the schema_hash, # as it's possible they will be used for typecasting # even if they are not selected. 
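          # (Sketch: a model based on DB[:artists].select(:name) keeps :id's
          # schema entry here for typecasting; the loop below only adds empty
          # entries for selected expressions lacking schema information.)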
cols = get_columns.call cols.each{|c| schema_hash[c] ||= {}} def_column_accessor(*schema_hash.keys) else # Dataset is for a single table with all columns, # so set the columns based on the order they were # returned by the schema. cols = schema_array.collect{|k,v| k} set_columns(cols) # Set the primary key(s) based on the schema information, # if the schema information includes primary key information if schema_array.all?{|k,v| v.has_key?(:primary_key)} pks = schema_array.collect{|k,v| k if v[:primary_key]}.compact pks.length > 0 ? set_primary_key(pks) : no_primary_key end # Also set the columns for the dataset, so the dataset # doesn't have to do a query to get them. dataset.instance_variable_set(:@columns, cols) end else # If the dataset uses multiple tables or custom sql or getting # the schema raised an error, just get the columns and # create an empty schema hash for it. get_columns.call.each{|c| schema_hash[c] = {}} end schema_hash end # Uncached version of setter_methods, to be overridden by plugins # that want to modify the methods used. def get_setter_methods if allowed_columns allowed_columns.map{|x| "#{x}="} else meths = instance_methods.collect{|x| x.to_s}.grep(SETTER_METHOD_REGEXP) - RESTRICTED_SETTER_METHODS meths -= Array(primary_key).map{|x| "#{x}="} if primary_key && restrict_primary_key? meths end end # A hash of instance variables to automatically set up in subclasses. # See Sequel::Model::INHERITED_INSTANCE_VARIABLES. It is safe to modify # the hash returned by this method, though it may not be safe to modify # values of the hash. def inherited_instance_variables INHERITED_INSTANCE_VARIABLES.dup end # For the given opts hash and default name or :class option, add a # :class_name option unless already present which contains the name # of the class to use as a string. The purpose is to allow late # binding to the class later using constantize. def late_binding_class_option(opts, default) case opts[:class] when String, Symbol # Delete :class to allow late binding opts[:class_name] ||= opts.delete(:class).to_s when Class opts[:class_name] ||= opts[:class].name end opts[:class_name] ||= ((name || '').split("::")[0..-2] + [camelize(default)]).join('::') end # Module that the class includes that holds methods the class adds for column accessors and # associations so that the methods can be overridden with +super+. def overridable_methods_module include(@overridable_methods_module = Module.new) unless @overridable_methods_module @overridable_methods_module end # Returns the module for the specified plugin. If the module is not # defined, the corresponding plugin required. def plugin_module(plugin) module_name = plugin.to_s.gsub(/(^|_)(.)/){|x| x[-1..-1].upcase} if !Sequel::Plugins.const_defined?(module_name) || (Sequel.const_defined?(module_name) && Sequel::Plugins.const_get(module_name) == Sequel.const_get(module_name)) begin require "sequel/plugins/#{plugin}" rescue LoadError => e begin require "sequel_#{plugin}" rescue LoadError => e2 e.message << "; #{e2.message}" raise e end end end Sequel::Plugins.const_get(module_name) end # Check if the plugin module +plugin+ defines the constant named by +submod+. def plugin_module_defined?(plugin, submod) if RUBY_VERSION >= '1.9' plugin.const_defined?(submod, false) else # :nocov: plugin.const_defined?(submod) # :nocov: end end # Find the row in the dataset that matches the primary key. Uses # a static SQL optimization if the table and primary key are simple. 
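      # (Sketch: Model.[] with a non-hash argument lands here, so with a simple
      # table and primary key, Artist[1] appends the literalized pk to the
      # cached "SELECT * FROM artists WHERE id = " fragment.)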
# # This method should not be called with a nil primary key, in case # it is overridden by plugins which assume that the passed argument # is valid. def primary_key_lookup(pk) if sql = @fast_pk_lookup_sql sql = sql.dup ds = dataset ds.literal_append(sql, pk) ds.fetch_rows(sql){|r| return ds.row_proc.call(r)} nil else dataset[primary_key_hash(pk)] end end # Reset the cached fast primary lookup SQL if a simple table and primary key # are used, or set it to nil if not used. def reset_fast_pk_lookup_sql @fast_pk_lookup_sql = if @simple_table && @simple_pk "SELECT * FROM #@simple_table WHERE #@simple_pk = ".freeze end @fast_instance_delete_sql = if @simple_table && @simple_pk "DELETE FROM #@simple_table WHERE #@simple_pk = ".freeze end end # Reset the instance dataset to a modified copy of the current dataset, # should be used whenever the model's dataset is modified. def reset_instance_dataset @instance_dataset = @dataset.limit(1).naked if @dataset end # Set the columns for this model and create accessor methods for each column. def set_columns(new_columns) @columns = new_columns def_column_accessor(*new_columns) if new_columns @columns end # Set the dataset's row_proc to the current model. def set_dataset_row_proc(ds) ds.row_proc = self end # Reset the fast primary key lookup SQL when the simple_pk value changes. def simple_pk=(pk) @simple_pk = pk reset_fast_pk_lookup_sql end # Reset the fast primary key lookup SQL when the simple_table value changes. def simple_table=(t) @simple_table = t reset_fast_pk_lookup_sql end # Returns a copy of the model's dataset with custom SQL # # Artist.fetch("SELECT * FROM artists WHERE name LIKE 'A%'") # Artist.fetch("SELECT * FROM artists WHERE id = ?", 1) alias fetch with_sql end # Sequel::Model instance methods that implement basic model functionality. # # * All of the methods in +HOOKS+ and +AROUND_HOOKS+ create instance methods that are called # by Sequel when the appropriate action occurs. For example, when destroying # a model object, Sequel will call +around_destroy+, which will call +before_destroy+, do # the destroy, and then call +after_destroy+. # * The following instance_methods all call the class method of the same # name: columns, db, primary_key, db_schema. # * All of the methods in +BOOLEAN_SETTINGS+ create attr_writers allowing you # to set values for the attribute. It also creates instance getters returning # the value of the setting. If the value has not yet been set, it # gets the default value from the class by calling the class method of the same name. module InstanceMethods HOOKS.each{|h| class_eval("def #{h}; end", __FILE__, __LINE__)} AROUND_HOOKS.each{|h| class_eval("def #{h}; yield end", __FILE__, __LINE__)} # Define instance method(s) that calls class method(s) of the # same name. Replaces the construct: # # define_method(meth){self.class.send(meth)} [:columns, :db, :primary_key, :db_schema].each{|meth| class_eval("def #{meth}; self.class.#{meth} end", __FILE__, __LINE__)} # Define instance method(s) that calls class method(s) of the # same name, caching the result in an instance variable. Define # standard attr_writer method for modifying that instance variable. BOOLEAN_SETTINGS.each{|meth| class_eval("def #{meth}; !defined?(@#{meth}) ? (frozen? ? self.class.#{meth} : (@#{meth} = self.class.#{meth})) : @#{meth} end", __FILE__, __LINE__)} attr_writer(*BOOLEAN_SETTINGS) # The hash of attribute values. Keys are symbols with the names of the # underlying database columns. 
# # Artist.new(:name=>'Bob').values # => {:name=>'Bob'} # Artist[1].values # => {:id=>1, :name=>'Jim', ...} attr_reader :values alias to_hash values # Creates new instance and passes the given values to set. # If a block is given, yield the instance to the block unless # from_db is true. # # Arguments: # values :: should be a hash to pass to set. # from_db :: only for backwards compatibility, forget it exists. # # Artist.new(:name=>'Bob') # # Artist.new do |a| # a.name = 'Bob' # end def initialize(values = {}) @values = {} @new = true @modified = true initialize_set(values) changed_columns.clear yield self if block_given? end # Returns value of the column's attribute. # # Artist[1][:id] #=> 1 def [](column) @values[column] end # Sets the value for the given column. If typecasting is enabled for # this object, typecast the value based on the column's type. # If this is a new record or the typecasted value isn't the same # as the current value for the column, mark the column as changed. # # a = Artist.new # a[:name] = 'Bob' # a.values #=> {:name=>'Bob'} def []=(column, value) # If it is new, it doesn't have a value yet, so we should # definitely set the new value. # If the column isn't in @values, we can't assume it is # NULL in the database, so assume it has changed. v = typecast_value(column, value) vals = @values if new? || !vals.include?(column) || v != (c = vals[column]) || v.class != c.class change_column_value(column, v) end end # Alias of eql? def ==(obj) eql?(obj) end # If pk is not nil, true only if the objects have the same class and pk. # If pk is nil, false. # # Artist[1] === Artist[1] # true # Artist.new === Artist.new # false # Artist[1].set(:name=>'Bob') == Artist[1] # => true def ===(obj) pk.nil? ? false : (obj.class == model) && (obj.pk == pk) end # class is defined in Object, but it is also a keyword, # and since a lot of instance methods call class methods, # this alias makes it so you can use model instead of # self.class. # # Artist.new.model # => Artist alias_method :model, :class # The autoincrementing primary key for this model object. Should be # overridden if you have a composite primary key with one part of it # being autoincrementing. def autoincrementing_primary_key primary_key end # The columns that have been updated. This isn't completely accurate, # as it could contain columns whose values have not changed. # # a = Artist[1] # a.changed_columns # => [] # a.name = 'Bob' # a.changed_columns # => [:name] def changed_columns @changed_columns ||= [] end # Deletes and returns +self+. Does not run destroy hooks. # Look into using +destroy+ instead. # # Artist[1].delete # DELETE FROM artists WHERE (id = 1) # # => #<Artist {:id=>1, ...}> def delete raise Sequel::Error, "can't delete frozen object" if frozen? _delete self end # Like delete but runs hooks before and after delete. # If before_destroy returns false, returns false without # deleting the object the the database. Otherwise, deletes # the item from the database and returns self. Uses a transaction # if use_transactions is true or if the :transaction option is given and # true. # # Artist[1].destroy # BEGIN; DELETE FROM artists WHERE (id = 1); COMMIT; # # => #<Artist {:id=>1, ...}> def destroy(opts = OPTS) raise Sequel::Error, "can't destroy frozen object" if frozen? checked_save_failure(opts){checked_transaction(opts){_destroy(opts)}} end # Iterates through all of the current values using each. 
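      # (It simply delegates to the +values+ hash, yielding key/value pairs;
      # sketch: Artist.new(:name=>'Bob').each{|k, v| p [k, v]} prints [:name, "Bob"].)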
      #
      #   Album[1].each{|k, v| puts "#{k} => #{v}"}
      #   # id => 1
      #   # name => 'Bob'
      def each(&block)
        @values.each(&block)
      end

      # Compares model instances by values.
      #
      #   Artist[1] == Artist[1] # => true
      #   Artist.new == Artist.new # => true
      #   Artist[1].set(:name=>'Bob') == Artist[1] # => false
      def eql?(obj)
        (obj.class == model) && (obj.values == @values)
      end

      # Returns the validation errors associated with this object.
      # See +Errors+.
      def errors
        @errors ||= errors_class.new
      end

      # Returns true when current instance exists, false otherwise.
      # Generally an object that isn't new will exist unless it has
      # been deleted.  Uses a database query to check for existence,
      # unless the model object is new, in which case this is always
      # false.
      #
      #   Artist[1].exists? # SELECT 1 FROM artists WHERE (id = 1)
      #   # => true
      #   Artist.new.exists?
      #   # => false
      def exists?
        new? ? false : !this.get(SQL::AliasedExpression.new(1, :one)).nil?
      end

      # Ignore the model's setter method cache when this instance extends a module, as the
      # module may contain setter methods.
      def extend(mod)
        @singleton_setter_added = true
        super
      end

      # Freeze the object in such a way that it is still usable but not modifiable.
      # Once an object is frozen, you cannot modify its values, changed_columns,
      # errors, or dataset.
      def freeze
        values.freeze
        changed_columns.freeze
        errors
        validate
        errors.freeze
        this.freeze if !new? && model.primary_key
        super
      end

      # Value that should be unique for objects with the same class and pk (if pk is not nil), or
      # the same class and values (if pk is nil).
      #
      #   Artist[1].hash == Artist[1].hash # true
      #   Artist[1].set(:name=>'Bob').hash == Artist[1].hash # true
      #   Artist.new.hash == Artist.new.hash # true
      #   Artist.new(:name=>'Bob').hash == Artist.new.hash # false
      def hash
        case primary_key
        when Array
          [model, !pk.all? ? @values : pk].hash
        when Symbol
          [model, pk.nil? ? @values : pk].hash
        else
          [model, @values].hash
        end
      end

      # Returns value for the :id attribute, even if the primary key is
      # not id. To get the primary key value, use +pk+.
      #
      #   Artist[1].id # => 1
      def id
        @values[:id]
      end

      # Returns a string representation of the model instance including
      # the class name and values.
      def inspect
        "#<#{model.name} @values=#{inspect_values}>"
      end

      # Returns the keys in +values+.  May not include all column names.
      #
      #   Artist.new.keys # => []
      #   Artist.new(:name=>'Bob').keys # => [:name]
      #   Artist[1].keys # => [:id, :name]
      def keys
        @values.keys
      end

      # Refresh this record using +for_update+ unless this is a new record.  Returns self.
      # This can be used to make sure no other process is updating the record at the
      # same time.
      #
      #   a = Artist[1]
      #   Artist.db.transaction do
      #     a.lock!
      #     a.update(...)
      #   end
      def lock!
        _refresh(this.for_update) unless new?
        self
      end

      # Remove elements of the model object that make marshalling fail. Returns self.
      #
      #   a = Artist[1]
      #   a.marshallable!
      #   Marshal.dump(a)
      def marshallable!
        @this = nil
        self
      end

      # Explicitly mark the object as modified, so +save_changes+/+update+ will
      # run callbacks even if no columns have changed.
      #
      #   a = Artist[1]
      #   a.save_changes # No callbacks run, as no changes
      #   a.modified!
      #   a.save_changes # Callbacks run, even though no changes made
      #
      # If a column is given, specifically mark that column as modified,
      # so that +save_changes+/+update+ will include that column in the
      # update.
This should be used if you plan on mutating the column # value instead of assigning a new column value: # # a.modified!(:name) # a.name.gsub!(/[aeou]/, 'i') def modified!(column=nil) if column && !changed_columns.include?(column) changed_columns << column end @modified = true end # Whether this object has been modified since last saved, used by # save_changes to determine whether changes should be saved. New # values are always considered modified. # # a = Artist[1] # a.modified? # => false # a.set(:name=>'Jim') # a.modified? # => true # # If a column is given, specifically check if the given column has # been modified: # # a.modified?(:num_albums) # => false # a.num_albums = 10 # a.modified?(:num_albums) # => true def modified?(column=nil) if column changed_columns.include?(column) else @modified || !changed_columns.empty? end end # Returns true if the current instance represents a new record. # # Artist.new.new? # => true # Artist[1].new? # => false def new? defined?(@new) ? @new : (@new = false) end # Returns the primary key value identifying the model instance. # Raises an +Error+ if this model does not have a primary key. # If the model has a composite primary key, returns an array of values. # # Artist[1].pk # => 1 # Artist[[1, 2]].pk # => [1, 2] def pk raise(Error, "No primary key is associated with this model") unless key = primary_key if key.is_a?(Array) vals = @values key.map{|k| vals[k]} else @values[key] end end # Returns a hash mapping the receivers primary key column(s) to their values. # # Artist[1].pk_hash # => {:id=>1} # Artist[[1, 2]].pk_hash # => {:id1=>1, :id2=>2} def pk_hash model.primary_key_hash(pk) end # Reloads attributes from database and returns self. Also clears all # changed_columns information. Raises an +Error+ if the record no longer # exists in the database. # # a = Artist[1] # a.name = 'Jim' # a.refresh # a.name # => 'Bob' def refresh raise Sequel::Error, "can't refresh frozen object" if frozen? _refresh(this) self end # Alias of refresh, but not aliased directly to make overriding in a plugin easier. def reload refresh end # Creates or updates the record, after making sure the record # is valid and before hooks execute successfully. Fails if: # # * the record is not valid, or # * before_save returns false, or # * the record is new and before_create returns false, or # * the record is not new and before_update returns false. # # If +save+ fails and either raise_on_save_failure or the # :raise_on_failure option is true, it raises ValidationFailed # or HookFailed. Otherwise it returns nil. # # If it succeeds, it returns self. # # You can provide an optional list of columns to update, in which # case it only updates those columns, or a options hash. # # Takes the following options: # # :changed :: save all changed columns, instead of all columns or the columns given # :columns :: array of specific columns that should be saved. # :raise_on_failure :: set to true or false to override the current # +raise_on_save_failure+ setting # :server :: set the server/shard on the object before saving, and use that # server/shard in any transaction. # :transaction :: set to true or false to override the current # +use_transactions+ setting # :validate :: set to false to skip validation def save(opts=OPTS) raise Sequel::Error, "can't save frozen object" if frozen? 
        set_server(opts[:server]) if opts[:server]
        if opts[:validate] != false
          unless checked_save_failure(opts){_valid?(true, opts)}
            raise(ValidationFailed.new(self)) if raise_on_failure?(opts)
            return
          end
        end
        checked_save_failure(opts){checked_transaction(opts){_save(opts)}}
      end

      # Saves only changed columns if the object has been modified.
      # If the object has not been modified, returns nil.  If unable to
      # save, returns false unless +raise_on_save_failure+ is true.
      #
      #   a = Artist[1]
      #   a.save_changes # => nil
      #   a.name = 'Jim'
      #   a.save_changes # UPDATE artists SET name = 'Jim' WHERE (id = 1)
      #   # => #<Artist {:id=>1, :name=>'Jim', ...}>
      def save_changes(opts=OPTS)
        save(opts.merge(:changed=>true)) || false if modified?
      end

      # Updates the instance with the supplied values with support for virtual
      # attributes, raising an exception if a value is used that doesn't have
      # a setter method (or ignoring it if <tt>strict_param_setting = false</tt>).
      # Does not save the record.
      #
      #   artist.set(:name=>'Jim')
      #   artist.name # => 'Jim'
      def set(hash)
        set_restricted(hash, :default)
      end

      # Set all values using the entries in the hash, ignoring any setting of
      # allowed_columns in the model.
      #
      #   Artist.set_allowed_columns(:num_albums)
      #   artist.set_all(:name=>'Jim')
      #   artist.name # => 'Jim'
      def set_all(hash)
        set_restricted(hash, :all)
      end

      # For each of the fields in the given array +fields+, call the setter
      # method with the value of that +hash+ entry for the field. Returns self.
      #
      # You can provide an options hash, with the following options currently respected:
      # :missing :: Can be set to :skip to skip missing entries or :raise to raise an
      #             Error for missing entries.  The default behavior is not to check for
      #             missing entries, in which case the default value is used.  To be
      #             friendly with most web frameworks, the missing check will also check
      #             for the string version of the argument in the hash if given a symbol.
      #
      # Examples:
      #
      #   artist.set_fields({:name=>'Jim'}, [:name])
      #   artist.name # => 'Jim'
      #
      #   artist.set_fields({:hometown=>'LA'}, [:name])
      #   artist.name # => nil
      #   artist.hometown # => 'Sac'
      #
      #   artist.name # => 'Jim'
      #   artist.set_fields({}, [:name], :missing=>:skip)
      #   artist.name # => 'Jim'
      #
      #   artist.name # => 'Jim'
      #   artist.set_fields({}, [:name], :missing=>:raise)
      #   # Sequel::Error raised
      def set_fields(hash, fields, opts=nil)
        opts = if opts
          model.default_set_fields_options.merge(opts)
        else
          model.default_set_fields_options
        end

        case opts[:missing]
        when :skip
          fields.each do |f|
            if hash.has_key?(f)
              send("#{f}=", hash[f])
            elsif f.is_a?(Symbol) && hash.has_key?(sf = f.to_s)
              send("#{sf}=", hash[sf])
            end
          end
        when :raise
          fields.each do |f|
            if hash.has_key?(f)
              send("#{f}=", hash[f])
            elsif f.is_a?(Symbol) && hash.has_key?(sf = f.to_s)
              send("#{sf}=", hash[sf])
            else
              raise(Sequel::Error, "missing field in hash: #{f.inspect} not in #{hash.inspect}")
            end
          end
        else
          fields.each{|f| send("#{f}=", hash[f])}
        end
        self
      end

      # Set the values using the entries in the hash, only if the key
      # is included in only.  It may be a better idea to use +set_fields+
      # instead of this method.
      #
      #   artist.set_only({:name=>'Jim'}, :name)
      #   artist.name # => 'Jim'
      #
      #   artist.set_only({:hometown=>'LA'}, :name) # Raise Error
      def set_only(hash, *only)
        set_restricted(hash, only.flatten)
      end

      # Set the shard that this object is tied to.  Returns self.
      def set_server(s)
        @server = s
        @this.opts[:server] = s if @this
        self
      end

      # REMOVE41
      def set_values(hash)
        Sequel::Deprecation.deprecate('Model#set_values is deprecated and will be removed in Sequel 4.1.
Please use _refresh_set_values or _save_set_values or set the values directly.') @values = hash end # Clear the setter_methods cache when a method is added def singleton_method_added(meth) @singleton_setter_added = true if meth.to_s =~ SETTER_METHOD_REGEXP super end # Returns (naked) dataset that should return only this instance. # # Artist[1].this # # SELECT * FROM artists WHERE (id = 1) LIMIT 1 def this @this ||= use_server(model.instance_dataset.filter(pk_hash)) end # Runs #set with the passed hash and then runs save_changes. # # artist.update(:name=>'Jim') # UPDATE artists SET name = 'Jim' WHERE (id = 1) def update(hash) update_restricted(hash, :default) end # Update all values using the entries in the hash, ignoring any setting of # +allowed_columns+ in the model. # # Artist.set_allowed_columns(:num_albums) # artist.update_all(:name=>'Jim') # UPDATE artists SET name = 'Jim' WHERE (id = 1) def update_all(hash) update_restricted(hash, :all) end # Update the instances values by calling +set_fields+ with the arguments, then # saves any changes to the record. Returns self. # # artist.update_fields({:name=>'Jim'}, [:name]) # # UPDATE artists SET name = 'Jim' WHERE (id = 1) # # artist.update_fields({:hometown=>'LA'}, [:name]) # # UPDATE artists SET name = NULL WHERE (id = 1) def update_fields(hash, fields, opts=nil) set_fields(hash, fields, opts) save_changes end # Update the values using the entries in the hash, only if the key # is included in only. It may be a better idea to use +update_fields+ # instead of this method. # # artist.update_only({:name=>'Jim'}, :name) # # UPDATE artists SET name = 'Jim' WHERE (id = 1) # # artist.update_only({:hometown=>'LA'}, :name) # Raise Error def update_only(hash, *only) update_restricted(hash, only.flatten) end # Validates the object. If the object is invalid, errors should be added # to the errors attribute. By default, does nothing, as all models # are valid by default. See the {"Model Validations" guide}[link:files/doc/validations_rdoc.html]. # for details about validation. Should not be called directly by # user code, call <tt>valid?</tt> instead to check if an object # is valid. def validate end # Validates the object and returns true if no errors are reported. # # artist(:name=>'Valid').valid? # => true # artist(:name=>'Invalid').valid? # => false # artist.errors.full_messages # => ['name cannot be Invalid'] def valid?(opts = OPTS) _valid?(false, opts) end private # Do the deletion of the object's dataset, and check that the row # was actually deleted. def _delete n = _delete_without_checking raise(NoExistingObject, "Attempt to delete object did not result in a single row modification (Rows Deleted: #{n}, SQL: #{_delete_dataset.delete_sql})") if require_modification && n != 1 n end # The dataset to use when deleting the object. The same as the object's # dataset by default. def _delete_dataset this end # Actually do the deletion of the object's dataset. Return the # number of rows modified. 
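      # (Sketch: when the fast SQL fragment is available this issues something
      # like "DELETE FROM artists WHERE id = 1" via with_sql_delete, skipping
      # dataset cloning; otherwise it falls back to _delete_dataset.delete.)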
      def _delete_without_checking
        if sql = (m = model).fast_instance_delete_sql
          sql = sql.dup
          (ds = m.dataset).literal_append(sql, pk)
          ds.with_sql_delete(sql)
        else
          _delete_dataset.delete
        end
      end

      # Internal destroy method, separated from destroy to
      # allow running inside a transaction
      def _destroy(opts)
        sh = {:server=>this_server}
        db.after_rollback(sh){after_destroy_rollback} if uacr = use_after_commit_rollback
        called = false
        around_destroy do
          called = true
          raise_hook_failure(:before_destroy) if before_destroy == false
          _destroy_delete
          after_destroy
          true
        end
        raise_hook_failure(:around_destroy) unless called
        db.after_commit(sh){after_destroy_commit} if uacr
        self
      end

      # Internal delete method to call when destroying an object,
      # separated from delete to allow you to override destroy's version
      # without affecting delete.
      def _destroy_delete
        delete
      end

      # Insert the record into the database, returning the primary key if
      # the record should be refreshed from the database.
      def _insert
        ds = _insert_dataset
        if !ds.opts[:select] and ds.supports_insert_select? and h = _insert_select_raw(ds)
          _save_set_values(h)
          nil
        else
          iid = _insert_raw(ds)
          # if we have a regular primary key and it's not set in @values,
          # we assume it's the last inserted id
          if (pk = autoincrementing_primary_key) && pk.is_a?(Symbol) && !(vals = @values)[pk]
            vals[pk] = iid
          end
          pk
        end
      end

      # The dataset to use when inserting a new object.  The same as the model's
      # dataset by default.
      def _insert_dataset
        use_server(model.instance_dataset)
      end

      # Insert into the given dataset and return the primary key created (if any).
      def _insert_raw(ds)
        ds.insert(@values)
      end

      # Insert into the given dataset and return the hash of column values.
      def _insert_select_raw(ds)
        ds.insert_select(@values)
      end

      # Refresh using a particular dataset, used inside save to make sure the same server
      # is used for reading newly inserted values from the database
      def _refresh(dataset)
        _refresh_set_values(_refresh_get(dataset) || raise(Error, "Record not found"))
        changed_columns.clear
      end

      # Get the row of column data from the database.
      def _refresh_get(dataset)
        dataset.first
      end

      # Set the values to the given hash after a refresh.
      def _refresh_set_values(h)
        @values = h
      end

      # Internal version of save, split from save to allow running inside
      # its own transaction.
      def _save(opts)
        sh = {:server=>this_server}
        db.after_rollback(sh){after_rollback} if uacr = use_after_commit_rollback
        was_new = false
        pk = nil
        called_save = false
        called_cu = false
        around_save do
          called_save = true
          raise_hook_failure(:before_save) if before_save == false
          if new?
            was_new = true
            around_create do
              called_cu = true
              raise_hook_failure(:before_create) if before_create == false
              pk = _insert
              @this = nil
              @new = false
              @was_new = true
              after_create
              true
            end
            raise_hook_failure(:around_create) unless called_cu
          else
            around_update do
              called_cu = true
              raise_hook_failure(:before_update) if before_update == false
              columns = opts[:columns]
              if columns.nil?
                @columns_updated = if opts[:changed]
                  @values.reject{|k,v| !changed_columns.include?(k)}
                else
                  _save_update_all_columns_hash
                end
                changed_columns.clear
              else # update only the specified columns
                columns = Array(columns)
                @columns_updated = @values.reject{|k, v| !columns.include?(k)}
                changed_columns.reject!{|c| columns.include?(c)}
              end
              _update_columns(@columns_updated)
              @this = nil
              after_update
              true
            end
            raise_hook_failure(:around_update) unless called_cu
          end
          after_save
          true
        end
        raise_hook_failure(:around_save) unless called_save
        if was_new
          @was_new = nil
          pk ?
_save_refresh : changed_columns.clear else @columns_updated = nil end @modified = false db.after_commit(sh){after_commit} if uacr self end # Refresh the object after saving it, used to get # default values of all columns. Separated from _save so it # can be overridden to avoid the refresh. def _save_refresh _save_set_values(_refresh_get(this.opts[:server] ? this : this.server(:default)) || raise(Error, "Record not found")) changed_columns.clear end # Set values to the provided hash. Called after a create, # to set the full values from the database in the model instance. def _save_set_values(h) @values = h end # Return a hash of values used when saving all columns of an # existing object (i.e. not passing specific columns to save # or using update/save_changes). Defaults to all of the # object's values except unmodified primary key columns, as some # databases don't like you setting primary key values even # to their existing values. def _save_update_all_columns_hash v = @values.dup Array(primary_key).each{|x| v.delete(x) unless changed_columns.include?(x)} v end # Call _update with the given columns, if any are present. # Plugins can override this method in order to update with # additional columns, even when the column hash is initially empty. def _update_columns(columns) _update(columns) unless columns.empty? end # Update this instance's dataset with the supplied column hash, # checking that only a single row was modified. def _update(columns) n = _update_without_checking(columns) raise(NoExistingObject, "Attempt to update object did not result in a single row modification (SQL: #{_update_dataset.update_sql(columns)})") if require_modification && n != 1 n end # The dataset to use when updating an object. The same as the object's # dataset by default. def _update_dataset this end # Update this instances dataset with the supplied column hash. def _update_without_checking(columns) _update_dataset.update(columns) end # Internal validation method. If +raise_errors+ is +true+, hook # failures will be raised as HookFailure exceptions. If it is # +false+, +false+ will be returned instead. def _valid?(raise_errors, opts) return errors.empty? if frozen? errors.clear called = false error = false around_validation do called = true if before_validation == false if raise_errors raise_hook_failure(:before_validation) else error = true end false else validate after_validation errors.empty? end end error = true unless called if error if raise_errors raise_hook_failure(:around_validation) else false end else errors.empty? end end # If not raising on failure, check for HookFailed # being raised by yielding and swallow it. def checked_save_failure(opts) if raise_on_failure?(opts) yield else begin yield rescue HookFailed nil end end end # If transactions should be used, wrap the yield in a transaction block. def checked_transaction(opts=OPTS) use_transaction?(opts) ? db.transaction({:server=>this_server}.merge(opts)){yield} : yield end # Change the value of the column to given value, recording the change. def change_column_value(column, value) cc = changed_columns cc << column unless cc.include?(column) @values[column] = value end # Default error class used for errors. def errors_class Errors end # Set the columns with the given hash. By default, the same as +set+, but # exists so it can be overridden. This is called only for new records, before # changed_columns is cleared. def initialize_set(h) set(h) unless h.empty? 
end # Default inspection output for the values hash, overwrite to change what #inspect displays. def inspect_values @values.inspect end # Whether to raise or return false if this action fails. If the # :raise_on_failure option is present in the hash, use that, otherwise, # fallback to the object's raise_on_save_failure (if set), or # class's default (if not). def raise_on_failure?(opts) opts.fetch(:raise_on_failure, raise_on_save_failure) end # Raise an error appropriate to the hook type. May be swallowed by # checked_save_failure depending on the raise_on_failure? setting. def raise_hook_failure(type) raise HookFailed.new("the #{type} hook failed", self) end # Get the ruby class or classes related to the given column's type. def schema_type_class(column) if (sch = db_schema[column]) && (type = sch[:type]) db.schema_type_class(type) end end # Call setter methods based on keys in hash, with the appropriate values. # Restrict which methods can be called based on the provided type. def set_restricted(hash, type) return self if hash.empty? meths = setter_methods(type) strict = strict_param_setting hash.each do |k,v| m = "#{k}=" if meths.include?(m) send(m, v) elsif strict # Avoid using respond_to? or creating symbols from user input if public_methods.map{|s| s.to_s}.include?(m) if Array(model.primary_key).map{|s| s.to_s}.member?(k.to_s) && model.restrict_primary_key? raise Error, "#{k} is a restricted primary key" else raise Error, "#{k} is a restricted column" end else raise Error, "method #{m} doesn't exist" end end end self end # Returns all methods that can be used for attribute assignment (those that end with =), # depending on the type: # # :default :: Use the default methods allowed in th model class. # :all :: Allow setting all setters, except those specifically restricted (such as ==). # Array :: Only allow setting of columns in the given array. def setter_methods(type) if type == :default if !@singleton_setter_added || model.allowed_columns return model.setter_methods end end if type.is_a?(Array) type.map{|x| "#{x}="} else meths = methods.collect{|x| x.to_s}.grep(SETTER_METHOD_REGEXP) - RESTRICTED_SETTER_METHODS meths -= Array(primary_key).map{|x| "#{x}="} if type != :all && primary_key && model.restrict_primary_key? meths end end # The server/shard that the model object's dataset uses, or :default if the # model object's dataset does not have an associated shard. def this_server if (s = @server) s elsif (t = @this) t.opts[:server] || :default else model.dataset.opts[:server] || :default end end # Typecast the value to the column's type if typecasting. Calls the database's # typecast_value method, so database adapters can override/augment the handling # for database specific column types. def typecast_value(column, value) return value unless typecast_on_assignment && db_schema && (col_schema = db_schema[column]) value = nil if '' == value and typecast_empty_string_to_nil and col_schema[:type] and ![:string, :blob].include?(col_schema[:type]) raise(InvalidValue, "nil/NULL is not allowed for the #{column} column") if raise_on_typecast_failure && value.nil? && (col_schema[:allow_null] == false) begin model.db.typecast_value(col_schema[:type], value) rescue InvalidValue raise_on_typecast_failure ? raise : value end end # Set the columns, filtered by the only and except arrays. def update_restricted(hash, type) set_restricted(hash, type) save_changes end # Set the given dataset to use the current object's shard. def use_server(ds) @server ? 
ds.server(@server) : ds end # Whether to use a transaction for this action. If the :transaction # option is present in the hash, use that, otherwise, fallback to the # object's default (if set), or class's default (if not). def use_transaction?(opts = OPTS) opts.fetch(:transaction, use_transactions) end end # Dataset methods are methods that the model class extends its dataset with in # the call to set_dataset. module DatasetMethods # The model class associated with this dataset # # Artist.dataset.model # => Artist attr_accessor :model # Assume if a single integer is given that it is a lookup by primary # key, and call with_pk with the argument. # # Artist.dataset[1] # SELECT * FROM artists WHERE (id = 1) LIMIT 1 def [](*args) if args.length == 1 && (i = args.at(0)) && i.is_a?(Integer) with_pk(i) else super end end # Destroy each row in the dataset by instantiating it and then calling # destroy on the resulting model object. This isn't as fast as deleting # the dataset, which does a single SQL call, but this runs any destroy # hooks on each object in the dataset. # # Artist.dataset.destroy # # DELETE FROM artists WHERE (id = 1) # # DELETE FROM artists WHERE (id = 2) # # ... def destroy pr = proc{all{|r| r.destroy}.length} model.use_transactions ? @db.transaction(:server=>opts[:server], &pr) : pr.call end # Allow Sequel::Model classes to be used as dataset arguments when graphing: # # Artist.graph(Album, :artist_id=>id) # # SELECT artists.id, artists.name, albums.id AS albums_id, albums.artist_id, albums.name AS albums_name # # FROM artists LEFT OUTER JOIN albums ON (albums.artist_id = artists.id) def graph(table, *args, &block) if table.is_a?(Class) && table < Sequel::Model super(table.dataset, *args, &block) else super end end # Handle Sequel::Model instances when inserting, using the model instance's # values for the insert, unless the model instance can be used directly in # SQL. # # Album.insert(Album.load(:name=>'A')) # # INSERT INTO albums (name) VALUES ('A') def insert_sql(*values) if values.size == 1 && (v = values.at(0)).is_a?(Sequel::Model) && !v.respond_to?(:sql_literal_append) super(v.to_hash) else super end end # Allow Sequel::Model classes to be used as table name arguments in dataset # join methods: # # Artist.join(Album, :artist_id=>id) # # SELECT * FROM artists INNER JOIN albums ON (albums.artist_id = artists.id) def join_table(type, table, *args, &block) if table.is_a?(Class) && table < Sequel::Model if table.dataset.simple_select_all? super(type, table.table_name, *args, &block) else super(type, table.dataset, *args, &block) end else super end end # If there is no order already defined on this dataset, order it by # the primary key and call last. # # Album.last # # SELECT * FROM albums ORDER BY id DESC LIMIT 1 def last(*a, &block) if opts[:order].nil? && model && (pk = model.primary_key) order(*pk).last(*a, &block) else super end end # If there is no order already defined on this dataset, order it by # the primary key and call paged_each. # # Album.paged_each{|row| ...} # # SELECT * FROM albums ORDER BY id LIMIT 1000 OFFSET 0 # # SELECT * FROM albums ORDER BY id LIMIT 1000 OFFSET 1000 # # SELECT * FROM albums ORDER BY id LIMIT 1000 OFFSET 2000 # # ... def paged_each(*a, &block) if opts[:order].nil? && model && (pk = model.primary_key) order(*pk).paged_each(*a, &block) else super end end # This allows you to call +to_hash+ without any arguments, which will # result in a hash with the primary key value being the key and the # model object being the value. 
      #
      #   Artist.dataset.to_hash # SELECT * FROM artists
      #   # => {1=>#<Artist {:id=>1, ...}>,
      #   #     2=>#<Artist {:id=>2, ...}>,
      #   #     ...}
      def to_hash(key_column=nil, value_column=nil)
        if key_column
          super
        else
          raise(Sequel::Error, "No primary key for model") unless model && (pk = model.primary_key)
          super(pk, value_column)
        end
      end

      # Given a primary key value, return the first record in the dataset with that primary key
      # value. If no record matches, returns nil.
      #
      #   # Single primary key
      #   Artist.dataset.with_pk(1) # SELECT * FROM artists WHERE (id = 1) LIMIT 1
      #
      #   # Composite primary key
      #   Artist.dataset.with_pk([1, 2]) # SELECT * FROM artists
      #                                  # WHERE ((id1 = 1) AND (id2 = 2)) LIMIT 1
      def with_pk(pk)
        first(model.qualified_primary_key_hash(pk))
      end

      # Same as with_pk, but raises NoMatchingRow instead of returning nil if no
      # row matches.
      def with_pk!(pk)
        with_pk(pk) || raise(NoMatchingRow)
      end
    end

    extend ClassMethods
    plugin self
  end
end

ruby-sequel-4.1.1/lib/sequel/model/dataset_module.rb

module Sequel
  class Model
    # This Module subclass is used by Model.dataset_module
    # to add dataset methods to classes. It adds a couple
    # of features to standard Modules, allowing you to use
    # the same subset method you can call on Model, as well
    # as making sure that public methods added to the module
    # automatically have class methods created for them.
    class DatasetModule < ::Module
      # Store the model related to this dataset module.
      def initialize(model)
        @model = model
      end

      # Define a named filter for this dataset, see
      # Model.subset for details.
      def subset(name, *args, &block)
        define_method(name){filter(*args, &block)}
      end

      private

      # Add a class method to the related model that
      # calls the dataset method of the same name.
      def method_added(meth)
        @model.send(:def_model_dataset_method, meth)
      end
    end
  end
end

ruby-sequel-4.1.1/lib/sequel/model/default_inflections.rb

module Sequel
  # Proc that is instance evaled to create the default inflections for both the
  # model inflector and the inflector extension.
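  # The inflector loads these rules by instance evaling this proc, as in the
  # following sketch (this mirrors how lib/sequel/model/inflections.rb uses it):
  #
  #   Sequel::Inflections.instance_eval(&Sequel::DEFAULT_INFLECTIONS_PROC)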
  DEFAULT_INFLECTIONS_PROC = proc do
    plural(/$/, 's')
    plural(/s$/i, 's')
    plural(/(alias|(?:stat|octop|vir|b)us)$/i, '\1es')
    plural(/(buffal|tomat)o$/i, '\1oes')
    plural(/([ti])um$/i, '\1a')
    plural(/sis$/i, 'ses')
    plural(/(?:([^f])fe|([lr])f)$/i, '\1\2ves')
    plural(/(hive)$/i, '\1s')
    plural(/([^aeiouy]|qu)y$/i, '\1ies')
    plural(/(x|ch|ss|sh)$/i, '\1es')
    plural(/(matr|vert|ind)ix|ex$/i, '\1ices')
    plural(/([m|l])ouse$/i, '\1ice')

    singular(/s$/i, '')
    singular(/([ti])a$/i, '\1um')
    singular(/(analy|ba|cri|diagno|parenthe|progno|synop|the)ses$/i, '\1sis')
    singular(/([^f])ves$/i, '\1fe')
    singular(/([h|t]ive)s$/i, '\1')
    singular(/([lr])ves$/i, '\1f')
    singular(/([^aeiouy]|qu)ies$/i, '\1y')
    singular(/(m)ovies$/i, '\1ovie')
    singular(/(x|ch|ss|sh)es$/i, '\1')
    singular(/([m|l])ice$/i, '\1ouse')
    singular(/buses$/i, 'bus')
    singular(/oes$/i, 'o')
    singular(/shoes$/i, 'shoe')
    singular(/(alias|(?:stat|octop|vir|b)us)es$/i, '\1')
    singular(/(vert|ind)ices$/i, '\1ex')
    singular(/matrices$/i, 'matrix')

    irregular('person', 'people')
    irregular('man', 'men')
    irregular('child', 'children')
    irregular('sex', 'sexes')
    irregular('move', 'moves')
    irregular('quiz', 'quizzes')
    irregular('testis', 'testes')

    uncountable(%w(equipment information rice money species series fish sheep news))
  end
end

ruby-sequel-4.1.1/lib/sequel/model/errors.rb

module Sequel
  class Model
    # Errors represents validation errors, a simple hash subclass
    # with a few convenience methods.
    class Errors < ::Hash
      ATTRIBUTE_JOINER = ' and '.freeze

      # Adds an error for the given attribute.
      #
      #   errors.add(:name, 'is not valid') if name == 'invalid'
      def add(att, msg)
        fetch(att){self[att] = []} << msg
      end

      # Return the total number of error messages.
      #
      #   errors.count # => 3
      def count
        values.inject(0){|m, v| m + v.length}
      end

      # Return true if there are no error messages, false otherwise.
      def empty?
        count == 0
      end

      # Returns an array of fully-formatted error messages.
      #
      #   errors.full_messages
      #   # => ['name is not valid',
      #   #     'hometown is not at least 2 letters']
      def full_messages
        inject([]) do |m, kv|
          att, errors = *kv
          errors.each {|e| m << (e.is_a?(LiteralString) ? e : "#{Array(att).join(ATTRIBUTE_JOINER)} #{e}")}
          m
        end
      end

      # Returns the array of errors for the given attribute, or nil
      # if there are no errors for the attribute.
      #
      #   errors.on(:name) # => ['name is not valid']
      #   errors.on(:id)   # => nil
      def on(att)
        if v = fetch(att, nil) and !v.empty?
          v
        end
      end
    end
  end
end

ruby-sequel-4.1.1/lib/sequel/model/exceptions.rb

module Sequel
  # Exception class raised when +raise_on_save_failure+ is set and a before hook returns false
  # or an around hook doesn't call super or yield.
  class HookFailed < Error
    # The Sequel::Model instance related to this error.
    attr_reader :model

    def initialize(message, model=nil)
      @model = model
      super(message)
    end
  end

  # Deprecated alias for HookFailed, kept for backwards compatibility
  BeforeHookFailed = HookFailed

  # Exception class raised when +require_modification+ is set and an UPDATE or DELETE statement to modify the dataset doesn't
  # modify a single row.
  class NoExistingObject < Error; end

  # Raised when an undefined association is used when eager loading.
  class UndefinedAssociation < Error; end

  # Exception class raised when +raise_on_save_failure+ is set and validation fails
  class ValidationFailed < Error
    # The Sequel::Model object related to this exception.
    attr_reader :model

    # The Sequel::Model::Errors object related to this exception.
    attr_reader :errors

    def initialize(errors)
      if errors.is_a?(Sequel::Model)
        @model = errors
        errors = @model.errors
      end

      if errors.respond_to?(:full_messages)
        @errors = errors
        super(errors.full_messages.join(', '))
      else
        super
      end
    end
  end
end

ruby-sequel-4.1.1/lib/sequel/model/inflections.rb

module Sequel
  # Yield the Inflections module if a block is given, and return
  # the Inflections module.
  def self.inflections
    yield Inflections if block_given?
    Inflections
  end

  # This module acts as a singleton returned/yielded by Sequel.inflections,
  # which is used to override or specify additional inflection rules
  # for Sequel. Examples:
  #
  #   Sequel.inflections do |inflect|
  #     inflect.plural /^(ox)$/i, '\1\2en'
  #     inflect.singular /^(ox)en/i, '\1'
  #
  #     inflect.irregular 'octopus', 'octopi'
  #
  #     inflect.uncountable "equipment"
  #   end
  #
  # New rules are added at the top. So in the example above, the irregular rule for octopus will now be the first of the
  # pluralization and singularization rules that is run. This guarantees that your rules run before any of the rules that may
  # already have been loaded.
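  #
  #   # A hypothetical check of the effect of the rules above:
  #   # singularize("octopi") # => "octopus"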
  module Inflections
    CAMELIZE_CONVERT_REGEXP = /(^|_)(.)/.freeze
    CAMELIZE_MODULE_REGEXP = /\/(.?)/.freeze
    DASH = '-'.freeze
    DEMODULIZE_CONVERT_REGEXP = /^.*::/.freeze
    EMPTY_STRING = ''.freeze
    SLASH = '/'.freeze
    VALID_CONSTANT_NAME_REGEXP = /\A(?:::)?([A-Z]\w*(?:::[A-Z]\w*)*)\z/.freeze
    UNDERSCORE = '_'.freeze
    UNDERSCORE_CONVERT_REGEXP1 = /([A-Z]+)([A-Z][a-z])/.freeze
    UNDERSCORE_CONVERT_REGEXP2 = /([a-z\d])([A-Z])/.freeze
    UNDERSCORE_CONVERT_REPLACE = '\1_\2'.freeze
    UNDERSCORE_MODULE_REGEXP = /::/.freeze

    @plurals, @singulars, @uncountables = [], [], []

    class << self
      # Array of two element arrays, first containing a regex, and the second containing a substitution pattern, used for pluralization.
      attr_reader :plurals

      # Array of two element arrays, first containing a regex, and the second containing a substitution pattern, used for singularization.
      attr_reader :singulars

      # Array of strings for words where the singular form is the same as the plural form
      attr_reader :uncountables
    end

    # Clears the loaded inflections within a given scope (default is :all). Give the scope as a symbol of the inflection type,
    # the options are: :plurals, :singulars, :uncountables
    #
    # Examples:
    #   clear :all
    #   clear :plurals
    def self.clear(scope = :all)
      case scope
      when :all
        @plurals, @singulars, @uncountables = [], [], []
      else
        instance_variable_set("@#{scope}", [])
      end
    end

    # Specifies a new irregular that applies to both pluralization and singularization at the same time. This can only be used
    # for strings, not regular expressions. You simply pass the irregular in singular and plural form.
    #
    # Examples:
    #   irregular 'octopus', 'octopi'
    #   irregular 'person', 'people'
    def self.irregular(singular, plural)
      plural(Regexp.new("(#{singular[0,1]})#{singular[1..-1]}$", "i"), '\1' + plural[1..-1])
      singular(Regexp.new("(#{plural[0,1]})#{plural[1..-1]}$", "i"), '\1' + singular[1..-1])
    end

    # Specifies a new pluralization rule and its replacement. The rule can either be a string or a regular expression.
    # The replacement should always be a string that may include references to the matched data from the rule.
    #
    # Example:
    #   plural(/(x|ch|ss|sh)$/i, '\1es')
    def self.plural(rule, replacement)
      @plurals.insert(0, [rule, replacement])
    end

    # Specifies a new singularization rule and its replacement. The rule can either be a string or a regular expression.
    # The replacement should always be a string that may include references to the matched data from the rule.
    #
    # Example:
    #   singular(/([^aeiouy]|qu)ies$/i, '\1y')
    def self.singular(rule, replacement)
      @singulars.insert(0, [rule, replacement])
    end

    # Add uncountable words that shouldn't be inflected.
    #
    # Examples:
    #   uncountable "money"
    #   uncountable "money", "information"
    #   uncountable %w( money information rice )
    def self.uncountable(*words)
      (@uncountables << words).flatten!
    end

    instance_eval(&DEFAULT_INFLECTIONS_PROC)

    private

    # Convert the given string to CamelCase. Will also convert '/' to '::' which is useful for converting paths to namespaces.
    def camelize(s)
      s = s.to_s
      return s.camelize if s.respond_to?(:camelize)
      s = s.gsub(CAMELIZE_MODULE_REGEXP){|x| "::#{x[-1..-1].upcase unless x == SLASH}"}.gsub(CAMELIZE_CONVERT_REGEXP){|x| x[-1..-1].upcase}
      s
    end

    # Tries to find a declared constant with the name specified
    # in the string. It raises a NameError when the name is not in CamelCase
    # or is not initialized.
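    #
    # A hypothetical example:
    #
    #   constantize("Sequel::Model") # => Sequel::Model
    #   constantize("model")         # raises NameError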
    def constantize(s)
      s = s.to_s
      return s.constantize if s.respond_to?(:constantize)
      raise(NameError, "#{s.inspect} is not a valid constant name!") unless m = VALID_CONSTANT_NAME_REGEXP.match(s)
      Object.module_eval("::#{m[1]}", __FILE__, __LINE__)
    end

    # Removes the module part from the expression in the string
    def demodulize(s)
      s = s.to_s
      return s.demodulize if s.respond_to?(:demodulize)
      s.gsub(DEMODULIZE_CONVERT_REGEXP, EMPTY_STRING)
    end

    # Returns the plural form of the word in the string.
    def pluralize(s)
      s = s.to_s
      return s.pluralize if s.respond_to?(:pluralize)
      result = s.dup
      Inflections.plurals.each{|(rule, replacement)| break if result.gsub!(rule, replacement)} unless Inflections.uncountables.include?(s.downcase)
      result
    end

    # The reverse of pluralize, returns the singular form of a word in a string.
    def singularize(s)
      s = s.to_s
      return s.singularize if s.respond_to?(:singularize)
      result = s.dup
      Inflections.singulars.each{|(rule, replacement)| break if result.gsub!(rule, replacement)} unless Inflections.uncountables.include?(s.downcase)
      result
    end

    # The reverse of camelize. Makes an underscored form from the expression in the string.
    # Also changes '::' to '/' to convert namespaces to paths.
    def underscore(s)
      s = s.to_s
      return s.underscore if s.respond_to?(:underscore)
      s.gsub(UNDERSCORE_MODULE_REGEXP, SLASH).gsub(UNDERSCORE_CONVERT_REGEXP1, UNDERSCORE_CONVERT_REPLACE).
        gsub(UNDERSCORE_CONVERT_REGEXP2, UNDERSCORE_CONVERT_REPLACE).tr(DASH, UNDERSCORE).downcase
    end
  end
end

ruby-sequel-4.1.1/lib/sequel/model/plugins.rb

module Sequel
  # Empty namespace that plugins should use to store themselves,
  # so they can be loaded via Model.plugin.
  #
  # Plugins should be modules with one of the following conditions:
  # * A singleton method named apply, which takes a model,
  #   additional arguments, and an optional block. This is called
  #   the first time the plugin is loaded for this model (unless it was
  #   already loaded by an ancestor class), before including/extending
  #   any modules, with the arguments
  #   and block provided to the call to Model.plugin.
  # * A module inside the plugin module named ClassMethods,
  #   which will extend the model class.
  # * A module inside the plugin module named InstanceMethods,
  #   which will be included in the model class.
  # * A module inside the plugin module named DatasetMethods,
  #   which will extend the model's dataset.
  # * A singleton method named configure, which takes a model,
  #   additional arguments, and an optional block. This is called
  #   every time the Model.plugin method is called, after including/extending
  #   any modules.
  module Plugins
    # In the given module +mod+, define methods that call the same method
    # on the dataset. This is designed for plugins to define dataset methods
    # inside ClassMethods that call the implementations in DatasetMethods.
    def self.def_dataset_methods(mod, meths)
      Array(meths).each do |meth|
        mod.class_eval("def #{meth}(*args, &block); dataset.#{meth}(*args, &block) end", __FILE__, __LINE__)
      end
    end

    # Add method to +mod+ that overrides inherited_instance_variables to include the
    # values in this hash.
    def self.inherited_instance_variables(mod, hash)
      mod.send(:define_method, :inherited_instance_variables) do ||
        super().merge!(hash)
      end
    end

    # Add method to +mod+ that overrides set_dataset to call the method afterward.
    def self.after_set_dataset(mod, meth)
      mod.send(:define_method, :set_dataset) do |*a|
        r = super(*a)
        send(meth)
        r
      end
    end
  end
end

ruby-sequel-4.1.1/lib/sequel/no_core_ext.rb

require 'sequel'

ruby-sequel-4.1.1/lib/sequel/plugins/

ruby-sequel-4.1.1/lib/sequel/plugins/active_model.rb

require 'active_model'
module Sequel
  module Plugins
    # The ActiveModel plugin makes Sequel::Model objects
    # pass the ActiveModel::Lint tests, which should
    # hopefully mean full ActiveModel compliance. This should
    # allow the full support of Sequel::Model objects in Rails 3.
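    # For example, a compliant model passes basic lint-style checks
    # (a hypothetical sketch):
    #
    #   Album.plugin :active_model
    #   Album.load(:id=>1).persisted? # => true
    #   Album.new.to_key              # => nil
    #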
    # This plugin requires active_model in order to use
    # ActiveModel::Naming.
    #
    # Usage:
    #
    #   # Make all subclasses active_model compliant (called before loading subclasses)
    #   Sequel::Model.plugin :active_model
    #
    #   # Make the Album class active_model compliant
    #   Album.plugin :active_model
    module ActiveModel
      # ActiveModel compliant error class
      class Errors < Sequel::Model::Errors
        # Add autovivification so that #[] always returns an array.
        def [](k)
          fetch(k){self[k] = []}
        end
      end

      module ClassMethods
        include ::ActiveModel::Naming

        # Class level cache for to_partial_path.
        def _to_partial_path
          @_to_partial_path ||= "#{underscore(pluralize(to_s))}/#{underscore(demodulize(to_s))}".freeze
        end
      end

      module InstanceMethods
        # The default string to join composite primary keys with in to_param.
        DEFAULT_TO_PARAM_JOINER = '-'.freeze

        # Record that an object was destroyed, for later use by
        # destroyed?
        def after_destroy
          super
          @destroyed = true
        end

        # False if the object is new? or has been destroyed, true otherwise.
        def persisted?
          !new? && @destroyed != true
        end

        # An array of primary key values, or nil if the object is not persisted.
        def to_key
          if primary_key.is_a?(Symbol)
            [pk] if pk
          else
            pk if pk.all?
          end
        end

        # With the ActiveModel plugin, Sequel model objects are already
        # compliant, so this returns self.
        def to_model
          self
        end

        # A string representing the object's primary key. For composite
        # primary keys, joins them with to_param_joiner.
        def to_param
          if persisted? and k = to_key
            k.join(to_param_joiner)
          end
        end

        # Returns a string identifying the path associated with the object.
        def to_partial_path
          model._to_partial_path
        end

        private

        # Use ActiveModel compliant errors class.
        def errors_class
          Errors
        end

        # The string to use to join composite primary key param strings.
        def to_param_joiner
          DEFAULT_TO_PARAM_JOINER
        end
      end
    end
  end
end

ruby-sequel-4.1.1/lib/sequel/plugins/after_initialize.rb

module Sequel
  module Plugins
    # Adds an after_initialize hook to models, called after initializing
    # both new objects and ones loaded from the database.
    #
    # Usage:
    #
    #   # Make all model subclasses support the after_initialize hook
    #   Sequel::Model.plugin :after_initialize
    #
    #   # Make the Album class support the after_initialize hook
    #   Album.plugin :after_initialize
    module AfterInitialize
      module ClassMethods
        # Call after_initialize for model objects loaded from the database.
        def call(h={})
          v = super
          v.after_initialize
          v
        end
      end

      module InstanceMethods
        # Call after_initialize for new model objects.
        def initialize(h={})
          super
          after_initialize
        end

        # An empty after_initialize hook, so that plugins that use this
        # can always call super to get the default behavior.
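        #
        # A hypothetical override in a model subclass:
        #
        #   class Album < Sequel::Model
        #     plugin :after_initialize
        #     def after_initialize
        #       super
        #       self.name ||= 'Untitled'
        #     end
        #   end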
        def after_initialize
        end
      end
    end
  end
end

ruby-sequel-4.1.1/lib/sequel/plugins/association_autoreloading.rb

module Sequel
  module Plugins
    # Empty plugin module for backwards compatibility
    module AssociationAutoreloading
    end
  end
end

ruby-sequel-4.1.1/lib/sequel/plugins/association_dependencies.rb

module Sequel
  module Plugins
    # The AssociationDependencies plugin allows you to easily set up before and/or after destroy hooks
    # for destroying, deleting, or nullifying associated model objects. The following
    # association types support the following dependency actions:
    #
    # * :many_to_many - :nullify (removes all related entries in join table)
    # * :many_to_one - :delete, :destroy
    # * :one_to_many, one_to_one - :delete, :destroy, :nullify (sets foreign key to NULL for all associated objects)
    #
    # This plugin works directly with the association datasets and does not use any cached association values.
    # The :delete action will delete all associated objects from the database in a single SQL call.
    # The :destroy action will load each associated object from the database and call the destroy method on it.
    #
    # To set up an association dependency, you must provide a hash with association name symbols
    # and dependency action values. You can provide the hash to the plugin call itself or
    # to the add_association_dependencies method:
    #
    #   Business.plugin :association_dependencies, :address=>:delete
    #   # or:
    #   Artist.plugin :association_dependencies
    #   Artist.add_association_dependencies :albums=>:destroy, :reviews=>:delete, :tags=>:nullify
    module AssociationDependencies
      # Mapping of association types to when the dependency calls should be made
      # (either :before to run in before_destroy or :after to run in after_destroy)
      ASSOCIATION_MAPPING = {:one_to_many=>:before, :many_to_one=>:after, :many_to_many=>:before, :one_to_one=>:before}

      # The valid dependence actions
      DEPENDENCE_ACTIONS = [:delete, :destroy, :nullify]

      # Initialize the association_dependencies hash for this model.
      def self.apply(model, hash=OPTS)
        model.instance_eval{@association_dependencies = {:before_delete=>[], :before_destroy=>[], :before_nullify=>[], :after_delete=>[], :after_destroy=>[]}}
      end

      # Call add_association_dependencies with any dependencies given in the plugin call.
      def self.configure(model, hash=OPTS)
        model.add_association_dependencies(hash) unless hash.empty?
      end

      module ClassMethods
        # A hash specifying the association dependencies for each model. The keys
        # are symbols indicating the type of action and when it should be executed
        # (e.g. :before_delete). Values are an array of method symbols.
        # For before_nullify, the symbols are remove_all_association methods. For other
        # types, the symbols are association_dataset methods, on which delete or
        # destroy is called.
        attr_reader :association_dependencies

        # Add association dependencies to this model. The hash should have association name
        # symbol keys and dependency action symbol values (e.g. :albums=>:destroy).
        def add_association_dependencies(hash)
          hash.each do |association, action|
            raise(Error, "Nonexistent association: #{association}") unless r = association_reflection(association)
            type = r[:type]
            raise(Error, "Invalid dependence action type: association: #{association}, dependence action: #{action}") unless DEPENDENCE_ACTIONS.include?(action)
            raise(Error, "Invalid association type: association: #{association}, type: #{type}") unless time = ASSOCIATION_MAPPING[type]
            association_dependencies[:"#{time}_#{action}"] << if action == :nullify
              case type
              when :one_to_many, :many_to_many
                proc{send(r.remove_all_method)}
              when :one_to_one
                proc{send(r.setter_method, nil)}
              else
                raise(Error, "Can't nullify many_to_one associated objects: association: #{association}")
              end
            else
              raise(Error, "Can only nullify many_to_many associations: association: #{association}") if type == :many_to_many
              r.dataset_method
            end
          end
        end

        Plugins.inherited_instance_variables(self, :@association_dependencies=>:hash_dup)
      end

      module InstanceMethods
        # Run the delete and destroy association dependency actions for
        # many_to_one associations.
        def after_destroy
          super
          model.association_dependencies[:after_delete].each{|m| send(m).delete}
          model.association_dependencies[:after_destroy].each{|m| send(m).destroy}
        end

        # Run the delete, destroy, and nullify association dependency actions for
        # *_to_many associations.
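        #
        # With the Artist configuration shown in the plugin documentation above,
        # a destroy would proceed roughly as follows (a hypothetical trace,
        # assuming an artist_id foreign key):
        #
        #   Artist[1].destroy
        #   # DELETE FROM reviews WHERE (artist_id = 1)   -- :reviews=>:delete
        #   # albums loaded and destroyed one by one      -- :albums=>:destroy
        #   # tags association nullified via remove_all   -- :tags=>:nullify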
        def before_destroy
          model.association_dependencies[:before_delete].each{|m| send(m).delete}
          model.association_dependencies[:before_destroy].each{|m| send(m).destroy}
          model.association_dependencies[:before_nullify].each{|p| instance_eval(&p)}
          super
        end
      end
    end
  end
end

ruby-sequel-4.1.1/lib/sequel/plugins/association_pks.rb

module Sequel
  module Plugins
    # The association_pks plugin adds the association_pks and association_pks=
    # instance methods to the model class for each association added. These
    # methods allow for easily returning the primary keys of the associated
    # objects, and easily modifying the associated objects to set the primary
    # keys to just the ones given:
    #
    #   Artist.one_to_many :albums
    #   artist = Artist[1]
    #   artist.album_pks # [1, 2, 3]
    #   artist.album_pks = [2, 4]
    #   artist.album_pks # [2, 4]
    #
    # Note that it uses the singular form of the association name. Also note
    # that the setter both associates to new primary keys not in the association
    # and disassociates from primary keys not provided to the method.
    #
    # This plugin makes modifications directly to the underlying tables,
    # it does not create or return any model objects, and therefore does
    # not call any callbacks. If you have any association callbacks,
    # you probably should not use the setter methods.
    #
    # Usage:
    #
    #   # Make all model subclasses' *_to_many associations have association_pks
    #   # methods (called before loading subclasses)
    #   Sequel::Model.plugin :association_pks
    #
    #   # Make the Album *_to_many associations have association_pks
    #   # methods (called before the association methods)
    #   Album.plugin :association_pks
    module AssociationPks
      module ClassMethods
        private

        # Define an association_pks method using the block for the association reflection
        def def_association_pks_getter(opts, &block)
          association_module_def(:"#{singularize(opts[:name])}_pks", opts, &block)
        end

        # Define an association_pks= method using the block for the association reflection,
        # if the association is not read only.
        def def_association_pks_setter(opts, &block)
          association_module_def(:"#{singularize(opts[:name])}_pks=", opts, &block) unless opts[:read_only]
        end

        # Add a getter that checks the join table for matching records and
        # a setter that deletes from or inserts into the join table.
        def def_many_to_many(opts)
          super

          # Grab values from the reflection so that the hash lookup only needs to be
          # done once instead of inside every method call.
          lk, lpk, rk = opts.values_at(:left_key, :left_primary_key, :right_key)

          # Add 2 separate implementations of the getter method optimized for the
          # composite and singular left key cases, and 4 separate implementations of the setter
          # method optimized for each combination of composite and singular keys for both
          # the left and right keys.
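          #
          # In the simplest (singular/singular) case below, the generated getter
          # runs SQL like this hypothetical example for an Album many_to_many
          # :tags association:
          #
          #   album.tag_pks
          #   # SELECT tag_id FROM albums_tags WHERE (album_id = 1)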
if lpk.is_a?(Array) def_association_pks_getter(opts) do h = {} lk.zip(lpk).each{|k, pk| h[k] = send(pk)} _join_table_dataset(opts).filter(h).select_map(rk) end if rk.is_a?(Array) def_association_pks_setter(opts) do |pks| pks = convert_cpk_array(opts, pks) checked_transaction do lpkv = lpk.map{|k| send(k)} ds = _join_table_dataset(opts).filter(lk.zip(lpkv)) ds.exclude(rk=>pks).delete pks -= ds.select_map(rk) h = {} lk.zip(lpkv).each{|k, v| h[k] = v} pks.each do |pk| ih = h.dup rk.zip(pk).each{|k, v| ih[k] = v} ds.insert(ih) end end end else def_association_pks_setter(opts) do |pks| pks = convert_pk_array(opts, pks) checked_transaction do lpkv = lpk.map{|k| send(k)} ds = _join_table_dataset(opts).filter(lk.zip(lpkv)) ds.exclude(rk=>pks).delete pks -= ds.select_map(rk) h = {} lk.zip(lpkv).each{|k, v| h[k] = v} pks.each do |pk| ds.insert(h.merge(rk=>pk)) end end end end else def_association_pks_getter(opts) do _join_table_dataset(opts).filter(lk=>send(lpk)).select_map(rk) end if rk.is_a?(Array) def_association_pks_setter(opts) do |pks| pks = convert_cpk_array(opts, pks) checked_transaction do lpkv = send(lpk) ds = _join_table_dataset(opts).filter(lk=>lpkv) ds.exclude(rk=>pks).delete pks -= ds.select_map(rk) pks.each do |pk| h = {lk=>lpkv} rk.zip(pk).each{|k, v| h[k] = v} ds.insert(h) end end end else def_association_pks_setter(opts) do |pks| pks = convert_pk_array(opts, pks) checked_transaction do lpkv = send(lpk) ds = _join_table_dataset(opts).filter(lk=>lpkv) ds.exclude(rk=>pks).delete pks -= ds.select_map(rk) pks.each{|pk| ds.insert(lk=>lpkv, rk=>pk)} end end end end end # Add a getter that checks the association dataset and a setter # that updates the associated table. def def_one_to_many(opts) super return if opts[:type] == :one_to_one key = opts[:key] def_association_pks_getter(opts) do send(opts.dataset_method).select_map(opts.associated_class.primary_key) end def_association_pks_setter(opts) do |pks| primary_key = opts.associated_class.primary_key pks = if primary_key.is_a?(Array) convert_cpk_array(opts, pks) else convert_pk_array(opts, pks) end pkh = {primary_key=>pks} if key.is_a?(Array) h = {} nh = {} key.zip(pk).each do|k, v| h[k] = v nh[k] = nil end else h = {key=>pk} nh = {key=>nil} end checked_transaction do ds = send(opts.dataset_method) ds.unfiltered.filter(pkh).update(h) ds.exclude(pkh).update(nh) end end end end module InstanceMethods private # If any of associated class's composite primary key column types is integer, # typecast the appropriate values to integer before using them. def convert_cpk_array(opts, cpks) if klass = opts.associated_class and sch = klass.db_schema and (cols = sch.values_at(*klass.primary_key)).all? and (convs = cols.map{|c| c[:type] == :integer}).any? cpks.map do |cpk| cpk.zip(convs).map do |pk, conv| conv ? model.db.typecast_value(:integer, pk) : pk end end else cpks end end # If the associated class's primary key column type is integer, # typecast all provided values to integer before using them. 
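      # A hypothetical example, assuming the associated class has an
      # integer primary key:
      #
      #   convert_pk_array(opts, %w'1 2 3') # => [1, 2, 3]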
def convert_pk_array(opts, pks) if klass = opts.associated_class and sch = klass.db_schema and col = sch[klass.primary_key] and col[:type] == :integer pks.map{|pk| model.db.typecast_value(:integer, pk)} else pks end end end end end end ��������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/lib/sequel/plugins/association_proxies.rb�����������������������������������������0000664�0000000�0000000�00000011006�12201565355�0024133�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������module Sequel module Plugins # Sequel by default does not use proxies for associations. The association # method for *_to_many associations returns an array, and the association_dataset # method returns a dataset. This plugin makes the association method return a proxy # that will load the association and call a method on the association array if sent # an array method, and otherwise send the method to the association's dataset. # # You can override which methods to forward to the dataset by passing a block to the plugin: # # plugin :association_proxies do |opts| # [:find, :where, :create].include?(opts[:method]) # end # # If the block returns false or nil, the method is sent to the array of associated # objects. Otherwise, the method is sent to the association dataset. Here are the entries # in the hash passed to the block: # # :method :: The name of the method # :arguments :: The arguments to the method # :block :: The block given to the method # :instance :: The model instance related to the call # :reflection :: The reflection for the association related to the call # :proxy_argument :: The argument given to the association method call # :proxy_block :: The block given to the association method call # # For example, in a call like: # # artist.albums(1){|ds| ds}.foo(2){|x| 3} # # The opts passed to the block would be: # # { # :method => :foo, # :arguments => [2], # :block => {|x| 3}, # :instance => artist, # :reflection => {:name=>:albums, ...}, # :proxy_argument => 1, # :proxy_block => {|ds| ds} # } # # Usage: # # # Use association proxies in all model subclasses (called before loading subclasses) # Sequel::Model.plugin :association_proxies # # # Use association proxies in a specific model subclass # Album.plugin :association_proxies module AssociationProxies def self.configure(model, &block) model.instance_eval do @association_proxy_to_dataset = block if block @association_proxy_to_dataset ||= AssociationProxy::DEFAULT_PROXY_TO_DATASET end end # A proxy for the association. Calling an array method will load the # associated objects and call the method on the associated object array. # Calling any other method will call that method on the association's dataset. class AssociationProxy < BasicObject array = [] # Default proc used to determine whether to sent the method to the dataset. # If the array would respond to it, sends it to the array instead of the dataset. 
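      # Under this default, a hypothetical artist.albums.where(:name=>'A')
      # call is forwarded to the association dataset (arrays don't respond
      # to +where+), while artist.albums.size loads the associated objects
      # and calls +size+ on the resulting array.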
DEFAULT_PROXY_TO_DATASET = proc{|opts| !array.respond_to?(opts[:method])} # Set the association reflection to use, and whether the association should be # reloaded if an array method is called. def initialize(instance, reflection, proxy_argument, &proxy_block) @instance = instance @reflection = reflection @proxy_argument = proxy_argument @proxy_block = proxy_block end # Call the method given on the array of associated objects if the method # is an array method, otherwise call the method on the association's dataset. def method_missing(meth, *args, &block) v = if @instance.model.association_proxy_to_dataset.call(:method=>meth, :arguments=>args, :block=>block, :instance=>@instance, :reflection=>@reflection, :proxy_argument=>@proxy_argument, :proxy_block=>@proxy_block) @instance.send(@reflection.dataset_method) else @instance.send(:load_associated_objects, @reflection, @proxy_argument, &@proxy_block) end v.send(meth, *args, &block) end end module ClassMethods # Proc that accepts a method name, array of arguments, and block and # should return a truthy value to send the method to the dataset instead of the # array of associated objects. attr_reader :association_proxy_to_dataset Plugins.inherited_instance_variables(self, :@association_proxy_to_dataset=>nil) # Changes the association method to return a proxy instead of the associated objects # directly. def def_association_method(opts) opts.returns_array? ? association_module_def(opts.association_method, opts){|*r, &block| AssociationProxy.new(self, opts, r[0], &block)} : super end end end end end ��������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/lib/sequel/plugins/auto_validations.rb��������������������������������������������0000664�0000000�0000000�00000013407�12201565355�0023422�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������module Sequel module Plugins # The auto_validations plugin automatically sets up three types of validations # for your model columns: # # 1. type validations for all columns # 2. not_null validations on NOT NULL columns (optionally, presence validations) # 3. unique validations on columns or sets of columns with unique indexes # # To determine the columns to use for the not_null validations and the types for the type validations, # the plugin looks at the database schema for the model's table. To determine # the unique validations, Sequel looks at the indexes on the table. In order # for this plugin to be fully functional, the underlying database adapter needs # to support both schema and index parsing. # # This plugin uses the validation_helpers plugin underneath to implement the # validations. 
It does not allow for any per-column validation message # customization, but you can alter the messages for the given type of validation # on a per-model basis (see the validation_helpers documentation). # # You can skip certain types of validations from being automatically added via: # # Model.skip_auto_validations(:not_null) # # If you want to skip all auto validations (only useful if loading the plugin # in a superclass): # # Model.skip_auto_validations(:all) # # By default, the plugin uses a not_null validation for NOT NULL columns, but that # can be changed to a presence validation using an option: # # Model.plugin :auto_validations, :not_null=>:presence # # This is useful if you want to enforce that NOT NULL string columns do not # allow empty values. # # Usage: # # # Make all model subclass use auto validations (called before loading subclasses) # Sequel::Model.plugin :auto_validations # # # Make the Album class use auto validations # Album.plugin :auto_validations module AutoValidations def self.apply(model, opts=OPTS) model.instance_eval do plugin :validation_helpers @auto_validate_presence = false @auto_validate_not_null_columns = [] @auto_validate_explicit_not_null_columns = [] @auto_validate_unique_columns = [] @auto_validate_types = true end end # Setup auto validations for the model if it has a dataset. def self.configure(model, opts=OPTS) model.instance_eval do setup_auto_validations if @dataset if opts[:not_null] == :presence @auto_validate_presence = true end end end module ClassMethods # The columns with automatic not_null validations attr_reader :auto_validate_not_null_columns # The columns with automatic not_null validations for columns present in the values. attr_reader :auto_validate_explicit_not_null_columns # The columns or sets of columns with automatic unique validations attr_reader :auto_validate_unique_columns Plugins.inherited_instance_variables(self, :@auto_validate_presence=>nil, :@auto_validate_types=>nil, :@auto_validate_not_null_columns=>:dup, :@auto_validate_explicit_not_null_columns=>:dup, :@auto_validate_unique_columns=>:dup) Plugins.after_set_dataset(self, :setup_auto_validations) # Whether to use a presence validation for not null columns def auto_validate_presence? @auto_validate_presence end # Whether to automatically validate schema types for all columns def auto_validate_types? @auto_validate_types end # Skip automatic validations for the given validation type (:not_null, :types, :unique). # If :all is given as the type, skip all auto validations. def skip_auto_validations(type) if type == :all [:not_null, :types, :unique].each{|v| skip_auto_validations(v)} elsif type == :types @auto_validate_types = false else send("auto_validate_#{type}_columns").clear end end private # Parse the database schema and indexes and record the columns to automatically validate. def setup_auto_validations not_null_cols, explicit_not_null_cols = db_schema.select{|col, sch| sch[:allow_null] == false}.partition{|col, sch| sch[:ruby_default].nil?}.map{|cs| cs.map{|col, sch| col}} @auto_validate_not_null_columns = not_null_cols - Array(primary_key) explicit_not_null_cols += Array(primary_key) @auto_validate_explicit_not_null_columns = explicit_not_null_cols.uniq @auto_validate_unique_columns = if db.supports_index_parsing? 
        # Parse the database schema and indexes and record the columns to automatically validate.
        def setup_auto_validations
          not_null_cols, explicit_not_null_cols = db_schema.select{|col, sch| sch[:allow_null] == false}.partition{|col, sch| sch[:ruby_default].nil?}.map{|cs| cs.map{|col, sch| col}}
          @auto_validate_not_null_columns = not_null_cols - Array(primary_key)
          explicit_not_null_cols += Array(primary_key)
          @auto_validate_explicit_not_null_columns = explicit_not_null_cols.uniq
          @auto_validate_unique_columns = if db.supports_index_parsing?
            db.indexes(dataset.first_source_table).select{|name, idx| idx[:unique] == true}.map{|name, idx| idx[:columns]}
          else
            []
          end
        end
      end

      module InstanceMethods
        # Validate the model's auto validations columns
        def validate
          super
          unless (not_null_columns = model.auto_validate_not_null_columns).empty?
            if model.auto_validate_presence?
              validates_presence(not_null_columns)
            else
              validates_not_null(not_null_columns)
            end
          end
          unless (not_null_columns = model.auto_validate_explicit_not_null_columns).empty?
            if model.auto_validate_presence?
              validates_presence(not_null_columns, :allow_missing=>true)
            else
              validates_not_null(not_null_columns, :allow_missing=>true)
            end
          end

          validates_schema_types if model.auto_validate_types?

          model.auto_validate_unique_columns.each{|cols| validates_unique(cols)}
        end
      end
    end
  end
end
ruby-sequel-4.1.1/lib/sequel/plugins/blacklist_security.rb
module Sequel
  module Plugins
    # The blacklist_security plugin contains blacklist-based support for
    # mass assignment, specifying which columns to not allow mass assignment for,
    # implicitly allowing mass assignment for columns not listed.  This is only
    # for backwards compatibility, it should not be used by new code.
    #
    # Usage:
    #
    #   # Make all model subclasses support the blacklist security features.
    #   Sequel::Model.plugin :blacklist_security
    #
    #   # Make the Album class support the blacklist security features.
    #   Album.plugin :blacklist_security
    module BlacklistSecurity
      module ClassMethods
        # Which columns are specifically restricted in a call to set/update/new/etc.
        # (default: not set).  Some columns are restricted regardless of
        # this setting, such as the primary key column and columns in Model::RESTRICTED_SETTER_METHODS.
        attr_reader :restricted_columns

        # Set the columns to restrict when using mass assignment (e.g. +set+).  Using this means that
        # attempts to call setter methods for the columns listed here will cause an
        # exception or be silently skipped (based on the +strict_param_setting+ setting).
        # If you have any virtual setter methods (methods that end in =) that you
        # want not to be used during mass assignment, they need to be listed here as well (without the =).
        #
        # It's generally a bad idea to rely on a blacklist approach for security.  Using a whitelist
        # approach such as set_allowed_columns or the instance level set_only or set_fields methods
        # is usually a better choice.  So use of this method is generally a bad idea.
        #
        #   Artist.set_restricted_columns(:records_sold)
        #   Artist.set(:name=>'Bob', :hometown=>'Sactown') # No Error
        #   Artist.set(:name=>'Bob', :records_sold=>30000) # Error
        def set_restricted_columns(*cols)
          clear_setter_methods_cache
          @restricted_columns = cols
        end

        private

        # If allowed_columns is not set but restricted_columns is, remove the
        # restricted_columns.
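        # For example (illustrative, reusing the Artist example above), after
        # Artist.set_restricted_columns(:records_sold), the setter methods
        # used for mass assignment no longer include "records_sold=".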
        def get_setter_methods
          meths = super
          if !allowed_columns && restricted_columns
            meths -= restricted_columns.map{|x| "#{x}="}
          end
          meths
        end
      end

      module InstanceMethods
        # Special array subclass used for marking methods to be removed.
        class ExceptionList < Array
        end

        # Set all values using the entries in the hash, except for the keys
        # given in except.  You should probably use +set_fields+ or +set_only+
        # instead of this method, as blacklist approaches to security are a bad idea.
        #
        #   artist.set_except({:name=>'Jim'}, :hometown)
        #   artist.name # => 'Jim'
        def set_except(hash, *except)
          set_restricted(hash, ExceptionList.new(except.flatten))
        end

        # Update all values using the entries in the hash, except for the keys
        # given in except.  You should probably use +update_fields+ or +update_only+
        # instead of this method, as blacklist approaches to security are a bad idea.
        #
        #   artist.update_except({:name=>'Jim'}, :hometown) # UPDATE artists SET name = 'Jim' WHERE (id = 1)
        def update_except(hash, *except)
          update_restricted(hash, ExceptionList.new(except.flatten))
        end

        private

        # If set_except or update_except was used, remove the related methods from the list.
        def setter_methods(type)
          if type.is_a?(ExceptionList)
            meths = super(:all)
            meths -= Array(primary_key).map{|x| "#{x}="} if primary_key && model.restrict_primary_key?
            meths -= type.map{|x| "#{x}="}
            meths
          else
            super
          end
        end
      end
    end
  end
end
ruby-sequel-4.1.1/lib/sequel/plugins/boolean_readers.rb
module Sequel
  module Plugins
    # The BooleanReaders plugin allows for the creation of attribute? methods
    # for boolean columns, which provides a nicer API.  By default, the accessors
    # are created for all columns of type :boolean.  However, you can provide a
    # block to the plugin to change the criteria used to determine if a
    # column is boolean.  The block is yielded with the column symbol for each
    # column in the model's dataset.
    #
    # Usage:
    #
    #   # Add boolean attribute? methods for all columns of type :boolean
    #   # in all model subclasses (called before loading subclasses)
    #   Sequel::Model.plugin :boolean_readers
    #
    #   # Add boolean readers for all tinyint columns in the Album class
    #   Album.plugin(:boolean_readers){|c| db_schema[c][:db_type] =~ /\Atinyint/}
    #
    #   # Add a boolean reader for specific columns in the Artist class
    #   Artist.plugin(:boolean_readers){|c| [:column1, :column2, :column3].include?(c)}
    module BooleanReaders
      # Default proc for determining if given column is a boolean, which
      # just checks that the :type is boolean.
      DEFAULT_BOOLEAN_ATTRIBUTE_PROC = lambda{|c| s = db_schema[c] and s[:type] == :boolean}

      # Add the boolean_attribute? class method to the model, and create
      # attribute? boolean reader methods for the class's columns if the class has a dataset.
      def self.configure(model, &block)
        model.instance_eval do
          (class << self; self; end).send(:define_method, :boolean_attribute?, &(block || DEFAULT_BOOLEAN_ATTRIBUTE_PROC))
          send(:create_boolean_readers) if @dataset
        end
      end

      module ClassMethods
        Plugins.after_set_dataset(self, :create_boolean_readers)

        private

        # Add an attribute? method for the column to a module included in the class.
        def create_boolean_reader(column)
          overridable_methods_module.module_eval do
            define_method("#{column}?"){model.db.typecast_value(:boolean, send(column))}
          end
        end

        # Add attribute? methods for all of the boolean attributes for this model.
        def create_boolean_readers
          im = instance_methods.collect{|x| x.to_s}
          cs = columns rescue return
          cs.each{|c| create_boolean_reader(c) if boolean_attribute?(c) && !im.include?("#{c}?")}
        end
      end
    end
  end
end
ruby-sequel-4.1.1/lib/sequel/plugins/caching.rb
module Sequel
  module Plugins
    # Sequel's built-in caching plugin supports caching to any object that
    # implements the Ruby-Memcache API (or memcached API with the :ignore_exceptions
    # option):
    #
    #    cache_store.set(key, obj, time) # Associate the obj with the given key
    #                                    # in the cache for the time (specified
    #                                    # in seconds).
    #    cache_store.get(key) => obj     # Returns object set with same key.
    #    cache_store.get(key2) => nil    # nil returned if there isn't an object
    #                                    # currently in the cache with that key.
    #    cache_store.delete(key)         # Remove key from cache
    #
    # If the :ignore_exceptions option is true, exceptions raised by cache_store.get
    # are ignored and nil is returned instead.  The memcached API is to
    # raise an exception for a missing record, so if you use memcached, you will
    # want to use this option.
    #
    # Note that only Model.[] method calls with a primary key argument are cached
    # using this plugin.
    #
    # Usage:
    #
    #   # Make all subclasses use the same cache (called before loading subclasses)
    #   # using the Ruby-Memcache API, with the cache stored in the CACHE constant
    #   Sequel::Model.plugin :caching, CACHE
    #
    #   # Make the Album class use the cache with a 30 minute time-to-live
    #   Album.plugin :caching, CACHE, :ttl=>1800
    #
    #   # Make the Artist class use a cache with the memcached protocol
    #   Artist.plugin :caching, MEMCACHED_CACHE, :ignore_exceptions=>true
    module Caching
      # Set the cache_store and cache_ttl attributes for the given model.
      # If the :ttl option is not given, 3600 seconds is the default.
      def self.configure(model, store, opts=OPTS)
        model.instance_eval do
          @cache_store = store
          @cache_ttl = opts[:ttl] || 3600
          @cache_ignore_exceptions = opts[:ignore_exceptions]
        end
      end

      module ClassMethods
        # If true, ignores exceptions when getting cached records (the memcached API).
        attr_reader :cache_ignore_exceptions

        # The cache store object for the model, which should implement the
        # Ruby-Memcache (or memcached) API
        attr_reader :cache_store

        # The time to live for the cache store, in seconds.
        attr_reader :cache_ttl
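        # A hedged sketch of how the primary-key-based cache methods below
        # behave (the Album model and key value are hypothetical):
        #
        #   Album.cache_key(1)       # => "Album:1"
        #   Album.cache_get_pk(1)    # cache_store.get("Album:1")
        #   Album.cache_delete_pk(1) # removes "Album:1" from the cache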
        # Delete the cached object with the given primary key.
        def cache_delete_pk(pk)
          cache_delete(cache_key(pk))
        end

        # Return the cached object with the given primary key,
        # or nil if no such object is in the cache.
        def cache_get_pk(pk)
          cache_get(cache_key(pk))
        end

        # Return a key string for the given primary key.
        def cache_key(pk)
          raise(Error, 'no primary key for this record') unless pk.is_a?(Array) ? pk.all? : pk
          "#{self}:#{Array(pk).join(',')}"
        end

        Plugins.inherited_instance_variables(self, :@cache_store=>nil, :@cache_ttl=>nil, :@cache_ignore_exceptions=>nil)

        # Set the time to live for the cache store, in seconds (default is 3600,
        # so 1 hour).
        def set_cache_ttl(ttl)
          @cache_ttl = ttl
        end

        private

        # Delete the entry with the matching key from the cache
        def cache_delete(ck)
          if @cache_ignore_exceptions
            @cache_store.delete(ck) rescue nil
          else
            @cache_store.delete(ck)
          end
          nil
        end

        # Return the cached object, or nil if the object was not
        # in the cache
        def cache_get(ck)
          if @cache_ignore_exceptions
            @cache_store.get(ck) rescue nil
          else
            @cache_store.get(ck)
          end
        end

        # Set the object in the cache_store with the given key for cache_ttl seconds.
        def cache_set(ck, obj)
          @cache_store.set(ck, obj, @cache_ttl)
        end

        # Check the cache before a database lookup unless a hash is supplied.
        def primary_key_lookup(pk)
          ck = cache_key(pk)
          unless obj = cache_get(ck)
            if obj = super(pk)
              cache_set(ck, obj)
            end
          end
          obj
        end
      end

      module InstanceMethods
        # Remove the object from the cache when updating
        def before_update
          cache_delete
          super
        end

        # Return a key unique to the underlying record for caching, based on the
        # primary key value(s) for the object.  If the model does not have a primary
        # key, raise an Error.
        def cache_key
          model.cache_key(pk)
        end

        # Remove the object from the cache when deleting
        def delete
          cache_delete
          super
        end

        private

        # Delete this object from the cache
        def cache_delete
          model.cache_delete_pk(pk)
        end
      end
    end
  end
end
ruby-sequel-4.1.1/lib/sequel/plugins/class_table_inheritance.rb
module Sequel
  module Plugins
    # The class_table_inheritance plugin allows you to model inheritance in the
    # database using a table per model class in the hierarchy, with only columns
    # unique to that model class (or subclass hierarchy) being stored in the related
    # table.  For example, with this hierarchy:
    #
    #        Employee
    #       /        \
    #    Staff     Manager
    #                 |
    #             Executive
    #
    # the following database schema may be used (table - columns):
    #
    # * employees - id, name, kind
    # * staff - id, manager_id
    # * managers - id, num_staff
    # * executives - id, num_managers
    #
    # The class_table_inheritance plugin assumes that the main table
    # (e.g. employees) has a primary key field (usually autoincrementing),
    # and all other tables have a foreign key of the same name that points
    # to the same key in their superclass's table.  For example:
    #
    # * employees.id - primary key, autoincrementing
    # * staff.id - foreign key referencing employees(id)
    # * managers.id - foreign key referencing employees(id)
    # * executives.id - foreign key referencing managers(id)
    #
    # When using the class_table_inheritance plugin, subclasses use joined
    # datasets:
    #
    #   Employee.dataset.sql  # SELECT * FROM employees
    #   Manager.dataset.sql   # SELECT * FROM employees
    #                         # INNER JOIN managers USING (id)
    #   Executive.dataset.sql # SELECT * FROM employees
    #                         # INNER JOIN managers USING (id)
    #                         # INNER JOIN executives USING (id)
    #
    # This allows Executive.all to return instances with all attributes
    # loaded.  The plugin overrides the deleting, inserting, and updating
    # in the model to work with multiple tables, by handling each table
    # individually.
    #
    # This plugin allows the use of a :key option when loading to mark
    # a column holding a class name.  This allows methods on the
    # superclass to return instances of specific subclasses.
    # This plugin also requires the lazy_attributes plugin and uses it to
    # return subclass specific attributes that would not be loaded
    # when calling superclass methods (since those wouldn't join
    # to the subclass tables).  For example:
    #
    #   a = Employee.all # [<#Staff>, <#Manager>, <#Executive>]
    #   a.first.values # {:id=>1, :name=>'S', :kind=>'Staff'}
    #   a.first.manager_id # Loads the manager_id attribute from the database
    #
    # Usage:
    #
    #   # Set up class table inheritance in the parent class
    #   # (Not in the subclasses)
    #   Employee.plugin :class_table_inheritance
    #
    #   # Set the +kind+ column to hold the class name, and
    #   # set the subclass table to map to for each subclass
    #   Employee.plugin :class_table_inheritance, :key=>:kind, :table_map=>{:Staff=>:staff}
    module ClassTableInheritance
      # The class_table_inheritance plugin requires the lazy_attributes plugin
      # to handle lazily-loaded attributes for subclass instances returned
      # by superclass methods.
      def self.apply(model, opts=OPTS)
        model.plugin :lazy_attributes
      end

      # Initialize the per-model data structures and set the dataset's row_proc
      # to check for the :key option column for the type of class when loading objects.
      # Options:
      # * :key - The column symbol holding the name of the model class this
      #   is an instance of.  Necessary if you want to call model methods
      #   using the superclass, but have them return subclass instances.
      # * :table_map - Hash with class name symbol keys and table name symbol
      #   values.  Necessary if the implicit table name for the model class
      #   does not match the database table name
      def self.configure(model, opts=OPTS)
        model.instance_eval do
          m = method(:constantize)
          @cti_base_model = self
          @cti_key = key = opts[:key]
          @cti_tables = [table_name]
          @cti_columns = {table_name=>columns}
          @cti_table_map = opts[:table_map] || {}
          dataset.row_proc = if key
            lambda{|r| (m.call(r[key]) rescue model).call(r)}
          else
            model
          end
        end
      end

      module ClassMethods
        # The parent/root/base model for this class table inheritance hierarchy.
        # This is the only model in the hierarchy that loads the
        # class_table_inheritance plugin.
        attr_reader :cti_base_model

        # Hash with table name symbol keys and arrays of column symbol values,
        # giving the columns to update in each backing database table.
        attr_reader :cti_columns

        # The column containing the class name as a string.  Used to
        # return instances of subclasses when calling the superclass's
        # load method.
        attr_reader :cti_key

        # An array of table symbols that back this model.  The first is the
        # cti_base_model table symbol, and the last is the current model
        # table symbol.
        attr_reader :cti_tables

        # A hash with class name symbol keys and table name symbol values.
        # Specified with the :table_map option to the plugin, and used if
        # the implicit naming is incorrect.
        attr_reader :cti_table_map

        # Add the appropriate data structures to the subclass.  Does not
        # allow anonymous subclasses to be created, since they would not
        # be mappable to a table.
        def inherited(subclass)
          cc = cti_columns
          ck = cti_key
          ct = cti_tables.dup
          ctm = cti_table_map.dup
          cbm = cti_base_model
          pk = primary_key
          ds = dataset
          subclass.instance_eval do
            raise(Error, "cannot create anonymous subclass for model class using class_table_inheritance") if !(n = name) || n.empty?
            table = ctm[n.to_sym] || implicit_table_name
            columns = db.from(table).columns
            @cti_key = ck
            @cti_tables = ct + [table]
            @cti_columns = cc.merge(table=>columns)
            @cti_table_map = ctm
            @cti_base_model = cbm
            # Need to set dataset and columns before calling super so that
            # the main column accessor module is included in the class before any
            # plugin accessor modules (such as the lazy attributes accessor module).
            set_dataset(ds.join(table, [pk]))
            set_columns(self.columns)
          end
          super
          subclass.instance_eval do
            m = method(:constantize)
            dataset.row_proc = if cti_key
              lambda{|r| (m.call(r[ck]) rescue subclass).call(r)}
            else
              subclass
            end
            (columns - [cbm.primary_key]).each{|a| define_lazy_attribute_getter(a)}
            cti_tables.reverse.each do |table|
              db.schema(table).each{|k,v| db_schema[k] = v}
            end
          end
        end

        # The primary key in the parent/base/root model, which should have a
        # foreign key with the same name referencing it in each model subclass.
        def primary_key
          return super if self == cti_base_model
          cti_base_model.primary_key
        end

        # The table name for the current model class's main table (not used
        # by any superclasses).
        def table_name
          self == cti_base_model ? super : cti_tables.last
        end

        private

        # If calling set_dataset manually, make sure to set the dataset
        # row proc to one that handles inheritance correctly.
        def set_dataset_row_proc(ds)
          ds.row_proc = @dataset.row_proc if @dataset
        end
      end

      module InstanceMethods
        # Set the cti_key column to the name of the model.
        def before_create
          send("#{model.cti_key}=", model.name.to_s) if model.cti_key
          super
        end

        # Delete the row from all backing tables, starting from the
        # most recent table and going through all superclasses.
        def delete
          raise Sequel::Error, "can't delete frozen object" if frozen?
          m = model
          m.cti_tables.reverse.each do |table|
            m.db.from(table).filter(m.primary_key=>pk).delete
          end
          self
        end

        private

        # Insert rows into all backing tables, using the columns
        # in each table.
        def _insert
          return super if model == model.cti_base_model
          iid = @values[primary_key]
          m = model
          m.cti_tables.each do |table|
            h = {}
            h[m.primary_key] ||= iid if iid
            m.cti_columns[table].each{|c| h[c] = @values[c] if @values.include?(c)}
            nid = m.db.from(table).insert(h)
            iid ||= nid
          end
          @values[primary_key] = iid
        end
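        # As a rough illustration (values assumed, tables from the example in
        # the plugin documentation above), the update counterpart below issues
        # one UPDATE per backing table that has modified columns, e.g.:
        #
        #   UPDATE employees SET name = 'E' WHERE (id = 1)
        #   UPDATE executives SET num_managers = 2 WHERE (id = 1)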
        # Update rows in all backing tables, using the columns in each table.
        def _update(columns)
          pkh = pk_hash
          m = model
          m.cti_tables.each do |table|
            h = {}
            m.cti_columns[table].each{|c| h[c] = columns[c] if columns.include?(c)}
            m.db.from(table).filter(pkh).update(h) unless h.empty?
          end
        end
      end
    end
  end
end
ruby-sequel-4.1.1/lib/sequel/plugins/composition.rb
module Sequel
  module Plugins
    # The composition plugin allows you to easily define a virtual
    # attribute where the backing data is composed of other columns.
    #
    # There are two ways to use the plugin.  One way is with the
    # :mapping option.  A simple example of this is when you have a
    # database table with separate columns for year, month, and day,
    # but where you want to deal with Date objects in your ruby code.
    # This can be handled with:
    #
    #   Album.plugin :composition
    #   Album.composition :date, :mapping=>[:year, :month, :day]
    #
    # With the :mapping option, you can provide a :class option
    # that gives the class to use, but if that is not provided, it
    # is inferred from the name of the composition (e.g. :date -> Date).
    # When the <tt>date</tt> method is called, it will return a
    # Date object by calling:
    #
    #   Date.new(year, month, day)
    #
    # When saving the object, if the date composition has been used
    # (by calling either the getter or setter method), it will
    # populate the related columns of the object before saving:
    #
    #   self.year = date.year
    #   self.month = date.month
    #   self.day = date.day
    #
    # The :mapping option is just a shortcut that works in particular
    # cases.  To handle any case, you can define custom :composer
    # and :decomposer procs.  The :composer proc will be instance_evaled
    # the first time the getter is called, and the :decomposer proc
    # will be instance_evaled before saving.  The above example could
    # also be implemented as:
    #
    #   Album.composition :date,
    #     :composer=>proc{Date.new(year, month, day) if year || month || day},
    #     :decomposer=>(proc do
    #       if d = compositions[:date]
    #         self.year = d.year
    #         self.month = d.month
    #         self.day = d.day
    #       else
    #         self.year = nil
    #         self.month = nil
    #         self.day = nil
    #       end
    #     end)
    #
    # Note that when using the composition object, you should not
    # modify the underlying columns if you are also instantiating
    # the composition, as otherwise the composition object values
    # will override any underlying columns when the object is saved.
    module Composition
      # Define the necessary class instance variables.
      def self.apply(model)
        model.instance_eval{@compositions = {}}
      end

      module ClassMethods
        # A hash with composition name keys and composition reflection
        # hash values.
        attr_reader :compositions

        # A module included in the class holding the composition
        # getter and setter methods.
        attr_reader :composition_module

        # Define a composition for this model, with name being the name of the composition.
        # You must provide either a :mapping option or both the :composer and :decomposer options.
        #
        # Options:
        # * :class - if using the :mapping option, the class to use, as a Class, String or Symbol.
        # * :composer - A proc that is instance evaled when the composition getter method is called
        #   to create the composition.
        # * :decomposer - A proc that is instance evaled before saving the model object,
        #   if the composition object exists, which sets the columns in the model object
        #   based on the value of the composition object.
        # * :mapping - An array where each element is either a symbol or an array of two symbols.
        #   A symbol is treated like an array of two symbols where both symbols are the same.
        #   The first symbol represents the getter method in the model, and the second symbol
        #   represents the getter method in the composition object.  Example:
        #     # Uses columns year, month, and day in the current model
        #     # Uses year, month, and day methods in the composition object
        #     :mapping=>[:year, :month, :day]
        #     # Uses columns year, month, and day in the current model
        #     # Uses y, m, and d methods in the composition object where
        #     # for example y in the composition object represents year
        #     # in the model object.
        #     :mapping=>[[:year, :y], [:month, :m], [:day, :d]]
        def composition(name, opts=OPTS)
          opts = opts.dup
          compositions[name] = opts
          if mapping = opts[:mapping]
            keys = mapping.map{|k| k.is_a?(Array) ? k.first : k}
            if !opts[:composer]
              late_binding_class_option(opts, name)
              klass = opts[:class]
              class_proc = proc{klass || constantize(opts[:class_name])}
              opts[:composer] = proc do
                if values = keys.map{|k| send(k)} and values.any?{|v| !v.nil?}
                  class_proc.call.new(*values)
                else
                  nil
                end
              end
            end
            if !opts[:decomposer]
              setter_meths = keys.map{|k| :"#{k}="}
              cov_methods = mapping.map{|k| k.is_a?(Array) ? k.last : k}
              setters = setter_meths.zip(cov_methods)
              opts[:decomposer] = proc do
                if (o = compositions[name]).nil?
                  setter_meths.each{|sm| send(sm, nil)}
                else
                  setters.each{|sm, cm| send(sm, o.send(cm))}
                end
              end
            end
          end
          raise(Error, "Must provide :composer and :decomposer options, or :mapping option") unless opts[:composer] && opts[:decomposer]
          define_composition_accessor(name, opts)
        end

        Plugins.inherited_instance_variables(self, :@compositions=>:dup)

        # Define getter and setter methods for the composition object.
        def define_composition_accessor(name, opts=OPTS)
          include(@composition_module ||= Module.new) unless composition_module
          composer = opts[:composer]
          composition_module.class_eval do
            define_method(name) do
              if compositions.has_key?(name)
                compositions[name]
              elsif frozen?
                instance_eval(&composer)
              else
                compositions[name] = instance_eval(&composer)
              end
            end
            define_method("#{name}=") do |v|
              modified!
              compositions[name] = v
            end
          end
        end
      end

      module InstanceMethods
        # For each composition, set the columns in the model class based
        # on the composition object.
        def before_save
          @compositions.keys.each{|n| instance_eval(&model.compositions[n][:decomposer])} if @compositions
          super
        end
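        # For example (a sketch reusing the hypothetical Album date
        # composition from the plugin documentation above):
        #
        #   album.date = Date.new(2010, 1, 2)
        #   album.save # the decomposer sets year, month, and day first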
        # Cache of composition objects for this class.
        def compositions
          @compositions ||= {}
        end

        # Freeze compositions hash when freezing model instance.
        def freeze
          compositions.freeze
          super
        end

        private

        # Clear the cached compositions when manually refreshing.
        def _refresh_set_values(hash)
          @compositions.clear if @compositions
          super
        end
      end
    end
  end
end
ruby-sequel-4.1.1/lib/sequel/plugins/constraint_validations.rb
module Sequel
  module Plugins
    # The constraint_validations plugin is designed to be used with databases
    # that used the constraint_validations extension when creating their
    # tables.  The extension adds validation metadata for constraints created,
    # and this plugin reads that metadata and automatically creates validations
    # for all of the constraints.  For example, if you used the extension
    # and created your albums table like this:
    #
    #   DB.create_table(:albums) do
    #     primary_key :id
    #     String :name
    #     validate do
    #       min_length 5, :name
    #     end
    #   end
    #
    # Then when you went to save an album that uses this plugin:
    #
    #   Album.create(:name=>'abc')
    #   # raises Sequel::ValidationFailed: name is shorter than 5 characters
    #
    # Usage:
    #
    #   # Make all model subclasses use constraint validations (called before loading subclasses)
    #   Sequel::Model.plugin :constraint_validations
    #
    #   # Make the Album class use constraint validations
    #   Album.plugin :constraint_validations
    module ConstraintValidations
      # The default constraint validation metadata table name.
      DEFAULT_CONSTRAINT_VALIDATIONS_TABLE = :sequel_constraint_validations

      # Automatically load the validation_helpers plugin to run the actual validations.
      def self.apply(model, opts=OPTS)
        model.instance_eval do
          plugin :validation_helpers
          @constraint_validations_table = DEFAULT_CONSTRAINT_VALIDATIONS_TABLE
          @constraint_validation_options = {}
        end
      end

      # Parse the constraint validations metadata from the database.  Options:
      # :constraint_validations_table :: Override the name of the constraint validations
      #                                  metadata table.  Should only be used if the table
      #                                  name was overridden when creating the constraint
      #                                  validations.
      # :validation_options :: Override/augment the options stored in the database with the
      #                        given options.  Keys should be validation type symbols (e.g.
      #                        :presence) and values should be hashes of options specific
      #                        to that validation type.
      def self.configure(model, opts=OPTS)
        model.instance_eval do
          if table = opts[:constraint_validations_table]
            @constraint_validations_table = table
          end
          if vos = opts[:validation_options]
            vos.each do |k, v|
              if existing_options = @constraint_validation_options[k]
                v = existing_options.merge(v)
              end
              @constraint_validation_options[k] = v
            end
          end
          parse_constraint_validations
        end
      end

      module DatabaseMethods
        # A hash of validation method call metadata for all tables in the database.
        # The hash is keyed by table name string and contains arrays of validation
        # method call arrays.
        attr_accessor :constraint_validations
      end
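      # As an illustrative sketch (table and row contents assumed), the hash
      # stored on the Database object might look like:
      #
      #   DB.constraint_validations
      #   # => {"albums"=>[{:table=>"albums", :column=>"name",
      #   #       :validation_type=>"min_length", :argument=>"5", ...}]}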
      module ClassMethods
        # An array of validation method call arrays.  Each array is an array that
        # is splatted to send to perform a validation via validation_helpers.
        attr_reader :constraint_validations

        # A hash of reflections of constraint validations.  Keys are type name
        # symbols.  Each value is an array of pairs, with the first element being
        # the validation type symbol (e.g. :presence) and the second element being
        # options for the validation.  If the validation takes an argument, it appears
        # as the :argument entry in the validation option hash.
        attr_reader :constraint_validation_reflections

        # The name of the table containing the constraint validations metadata.
        attr_reader :constraint_validations_table

        Plugins.inherited_instance_variables(self, :@constraint_validations_table=>nil, :@constraint_validation_options=>:hash_dup)
        Plugins.after_set_dataset(self, :parse_constraint_validations)

        private

        # If the database has not already parsed constraint validation
        # metadata, then run a query to get the metadata and transform it
        # into arrays of validation method calls.
        #
        # If this model has an associated dataset, use the model's table name
        # to get the validations for just this model.
        def parse_constraint_validations
          db.extend(DatabaseMethods)

          unless hash = Sequel.synchronize{db.constraint_validations}
            hash = {}
            db.from(constraint_validations_table).each do |r|
              (hash[r[:table]] ||= []) << r
            end
            Sequel.synchronize{db.constraint_validations = hash}
          end

          if @dataset
            ds = @dataset.clone
            ds.quote_identifiers = false
            table_name = ds.literal(ds.first_source_table)
            reflections = {}
            @constraint_validations = (Sequel.synchronize{hash[table_name]} || []).map{|r| constraint_validation_array(r, reflections)}
            @constraint_validation_reflections = reflections
          end
        end

        # Given a specific database constraint validation metadata row hash, transform
        # it into a validation method call array suitable for splatting to send.
        def constraint_validation_array(r, reflections)
          opts = {}
          opts[:message] = r[:message] if r[:message]
          opts[:allow_nil] = true if db.typecast_value(:boolean, r[:allow_nil])
          type = r[:validation_type].to_sym
          arg = r[:argument]
          column = r[:column]

          case type
          when :like, :ilike
            arg = constraint_validation_like_to_regexp(arg, type == :ilike)
            type = :format
          when :exact_length, :min_length, :max_length
            arg = arg.to_i
          when :length_range
            arg = constraint_validation_int_range(arg)
          when :format
            arg = Regexp.new(arg)
          when :iformat
            arg = Regexp.new(arg, Regexp::IGNORECASE)
            type = :format
          when :includes_str_array
            arg = arg.split(',')
            type = :includes
          when :includes_int_array
            arg = arg.split(',').map{|x| x.to_i}
            type = :includes
          when :includes_int_range
            arg = constraint_validation_int_range(arg)
            type = :includes
          end

          column = if type == :unique
            column.split(',').map{|c| c.to_sym}
          else
            column.to_sym
          end

          if type_opts = @constraint_validation_options[type]
            opts = opts.merge(type_opts)
          end

          reflection_opts = opts
          a = [:"validates_#{type}"]

          if arg
            a << arg
            reflection_opts = reflection_opts.merge(:argument=>arg)
          end
          a << column
          unless opts.empty?
            a << opts
          end

          if column.is_a?(Array) && column.length == 1
            column = column.first
          end
          (reflections[column] ||= []) << [type, reflection_opts]

          a
        end

        # Return a range of integers assuming the argument is in
        # 1..2 or 1...2 format.
        def constraint_validation_int_range(arg)
          arg =~ /(\d+)\.\.(\.)?(\d+)/
          Range.new($1.to_i, $3.to_i, $2 == '.')
        end

        # Transform the LIKE pattern string argument into a
        # Regexp argument suitable for use with validates_format.
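        # For example (illustrative patterns, not taken from the source):
        #
        #   constraint_validation_like_to_regexp('_foo%', false)
        #   # => /\A.foo.*\z/
        #   constraint_validation_like_to_regexp('100%%', true)
        #   # => /\A100%\z/i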
        def constraint_validation_like_to_regexp(arg, case_insensitive)
          arg = Regexp.escape(arg).gsub(/%%|%|_/) do |s|
            case s
            when '%%'
              '%'
            when '%'
              '.*'
            when '_'
              '.'
            end
          end
          arg = "\\A#{arg}\\z"

          if case_insensitive
            Regexp.new(arg, Regexp::IGNORECASE)
          else
            Regexp.new(arg)
          end
        end
      end

      module InstanceMethods
        # Run all of the constraint validations parsed from the database
        # when validating the instance.
        def validate
          super
          model.constraint_validations.each do |v|
            send(*v)
          end
        end
      end
    end
  end
end
ruby-sequel-4.1.1/lib/sequel/plugins/dataset_associations.rb
module Sequel
  module Plugins
    # DatasetAssociations allows you to easily use your model associations
    # via datasets.  For each association you define, it creates a dataset
    # method for that association that returns a dataset of all objects
    # that are associated to objects in the current dataset.  Here's a simple
    # example:
    #
    #   class Artist < Sequel::Model
    #     plugin :dataset_associations
    #     one_to_many :albums
    #   end
    #   Artist.filter(:id=>1..100).albums
    #   # SELECT * FROM albums
    #   # WHERE (albums.artist_id IN (
    #   #   SELECT id FROM artists
    #   #   WHERE ((id >= 1) AND (id <= 100))))
    #
    # This works for all of the association types that ship with Sequel,
    # including the many_through_many type.  Most association options that
    # are supported when eager loading are supported when using a
    # dataset association.  However, associations that use :limit or
    # one_to_one associations that are really one_to_many relationships
    # in the database will not work correctly, returning all associated
    # objects.
    #
    # As the dataset methods return datasets, you can easily chain the
    # methods to get associated datasets of associated datasets:
    #
    #   Artist.filter(:id=>1..100).albums.filter{name < 'M'}.tags
    #   # SELECT tags.* FROM tags
    #   # WHERE (tags.id IN (
    #   #   SELECT albums_tags.tag_id FROM albums
    #   #   INNER JOIN albums_tags
    #   #   ON (albums_tags.album_id = albums.id)
    #   #   WHERE
    #   #     ((albums.artist_id IN (
    #   #       SELECT id FROM artists
    #   #       WHERE ((id >= 1) AND (id <= 100)))
    #   #     AND
    #   #     (name < 'M')))))
    #
    # Usage:
    #
    #   # Make all model subclasses create association methods for datasets
    #   Sequel::Model.plugin :dataset_associations
    #
    #   # Make the Album class create association methods for datasets
    #   Album.plugin :dataset_associations
    module DatasetAssociations
      module ClassMethods
        # Set up a dataset method for each association to return an associated dataset
        def associate(type, name, *)
          ret = super
          r = association_reflection(name)
          meth = r.returns_array? ? name : pluralize(name).to_sym
          def_dataset_method(meth){associated(name)}
          ret
        end

        Plugins.def_dataset_methods(self, :associated)
      end

      module DatasetMethods
        # For the association given by +name+, return a dataset of associated objects
        # such that it would return the union of calling the association method on
        # all objects returned by the current dataset.
        #
        # This supports most options that are supported when eager loading.  It doesn't
        # support limits on the associations, or one_to_one associations that are really
        # one_to_many and use an order to select the first matching object.  In both of
        # those cases, this will return an array of all matching objects.
        def associated(name)
          raise Error, "unrecognized association name: #{name.inspect}" unless r = model.association_reflection(name)
          ds = r.associated_class.dataset
          sds = opts[:limit] ? self : unordered
          ds = case r[:type]
          when :many_to_one
            ds.filter(r.qualified_primary_key=>sds.select(*Array(r[:qualified_key])))
          when :one_to_one, :one_to_many
            ds.filter(r.qualified_key=>sds.select(*Array(r.qualified_primary_key)))
          when :many_to_many
            ds.filter(r.qualified_right_primary_key=>sds.select(*Array(r.qualified_right_key)).
              join(r[:join_table], r[:left_keys].zip(r[:left_primary_keys]), :implicit_qualifier=>model.table_name))
          when :many_through_many
            fre = r.reverse_edges.first
            fe, *edges = r.edges
            sds = sds.select(*Array(r.qualify(fre[:table], fre[:left]))).
              join(fe[:table], Array(fe[:right]).zip(Array(fe[:left])), :implicit_qualifier=>model.table_name)
            edges.each{|e| sds = sds.join(e[:table], Array(e[:right]).zip(Array(e[:left])))}
            ds.filter(r.qualified_right_primary_key=>sds)
          when :pg_array_to_many
            ds.filter(Sequel.expr(r.primary_key=>sds.select{Sequel.pg_array_op(r.qualify(r[:model].table_name, r[:key])).unnest}))
          when :many_to_pg_array
            ds.filter(Sequel.function(:coalesce, Sequel.pg_array_op(r[:key]).overlaps(sds.select{array_agg(r.qualify(r[:model].table_name, r.primary_key))}), false))
          else
            raise Error, "unrecognized association type for association #{name.inspect}: #{r[:type].inspect}"
          end
          ds = model.apply_association_dataset_opts(r, ds)
          r[:extend].each{|m| ds.extend(m)}
          ds
        end
      end
    end
  end
end
ruby-sequel-4.1.1/lib/sequel/plugins/defaults_setter.rb
module Sequel
  module Plugins
    # DefaultsSetter is a simple plugin that sets non-nil/NULL default values upon
    # initialize:
    #
    #   # column a default NULL
    #   # column b default 2
    #   album = Album.new.values # {:b => 2}
    #   album = Album.new(:a=>1, :b=>3).values # {:a => 1, :b => 3}
    #
    # Usage:
    #
    #   # Make all model subclass instances set defaults (called before loading subclasses)
    #   Sequel::Model.plugin :defaults_setter
    #
    #   # Make the Album class set defaults
    #   Album.plugin :defaults_setter
    module DefaultsSetter
      # Set the default values based on the model schema
      def self.configure(model)
        model.send(:set_default_values)
      end

      module ClassMethods
        # The default values to set in initialize for this model.  A hash with column symbol
        # keys and default values.  If the default values respond to +call+, it will be called
        # to get the value, otherwise the value will be used directly.  You can manually modify
        # this hash to set specific default values; by default, the values are parsed from the
        # database schema.
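        # For example (an illustrative sketch, with a hypothetical column):
        #
        #   Album.default_values[:created_at] = lambda{Time.now}
        #   Album.new.created_at # the current time, computed per instance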
        attr_reader :default_values

        Plugins.after_set_dataset(self, :set_default_values)

        private

        # Parse the cached database schema for this model and set the default values appropriately.
        def set_default_values
          h = {}
          @db_schema.each{|k, v| h[k] = convert_default_value(v[:ruby_default]) unless v[:ruby_default].nil?} if @db_schema
          @default_values = h
        end

        # Handle the CURRENT_DATE and CURRENT_TIMESTAMP values specially by returning an appropriate Date or
        # Time/DateTime value.
        def convert_default_value(v)
          case v
          when Sequel::CURRENT_DATE
            lambda{Date.today}
          when Sequel::CURRENT_TIMESTAMP
            lambda{Sequel.datetime_class.now}
          else
            v
          end
        end
      end

      module InstanceMethods
        # Use default value for a new record if values doesn't already contain an entry for it.
        def [](k)
          if new? && !values.has_key?(k)
            v = model.default_values[k]
            v.respond_to?(:call) ? v.call : v
          else
            super
          end
        end
      end
    end
  end
end
ruby-sequel-4.1.1/lib/sequel/plugins/dirty.rb
module Sequel
  module Plugins
    # The dirty plugin makes Sequel save the initial value of
    # a column when setting a new value for the column.  This
    # makes it easier to see what changes were made to the object:
    #
    #   artist.name                   # => 'Foo'
    #   artist.name = 'Bar'
    #   artist.initial_value(:name)   # 'Foo'
    #   artist.column_change(:name)   # ['Foo', 'Bar']
    #   artist.column_changes         # {:name => ['Foo', 'Bar']}
    #   artist.column_changed?(:name) # true
    #   artist.reset_column(:name)
    #   artist.name                   # => 'Foo'
    #   artist.column_changed?(:name) # false
    #
    # It also makes changed_columns more accurate in that it
    # can detect when the column value is changed and then
    # changed back:
    #
    #   artist.name            # => 'Foo'
    #   artist.name = 'Bar'
    #   artist.changed_columns # => [:name]
    #   artist.name = 'Foo'
    #   artist.changed_columns # => []
    #
    # It can handle situations where a column value is
    # modified in place:
    #
    #   artist.will_change_column(:name)
    #   artist.name.gsub!(/o/, 'u')
    #   artist.changed_columns      # => [:name]
    #   artist.initial_value(:name) # => 'Foo'
    #   artist.column_change(:name) # => ['Foo', 'Fuu']
    #
    # It also saves the previously changed values after an update:
    #
    #   artist.update(:name=>'Bar')
    #   artist.column_changes   # => {}
    #   artist.previous_changes # => {:name=>['Foo', 'Bar']}
    #
    # Usage:
    #
    #   # Make all model subclass instances record previous values (called before loading subclasses)
    #   Sequel::Model.plugin :dirty
    #
    #   # Make the Album class record previous values
    #   Album.plugin :dirty
    module Dirty
      module InstanceMethods
        # A hash of previous changes before the object was
        # saved, in the same format as #column_changes.
        # Note that this is not necessarily the same as the columns
        # that were used in the update statement.
        attr_reader :previous_changes

        # An array with the initial value and the current value
        # of the column, if the column has been changed.  If the
        # column has not been changed, returns nil.
        #
        #   column_change(:name) # => ['Initial', 'Current']
        def column_change(column)
          [initial_value(column), send(column)] if column_changed?(column)
        end

        # A hash with column symbol keys and pairs of initial and
        # current values for all changed columns.
        #
        #   column_changes # => {:name => ['Initial', 'Current']}
        def column_changes
          h = {}
          initial_values.each do |column, value|
            h[column] = [value, send(column)]
          end
          h
        end

        # Either true or false depending on whether the column has
        # changed.  Note that this is not exactly the same as checking if
        # the column is in changed_columns, if the column was not set
        # initially.
        #
        #   column_changed?(:name) # => true
        def column_changed?(column)
          initial_values.has_key?(column)
        end

        # The initial value of the given column.  If the column value has
        # not changed, this will be the same as the current value of the
        # column.
        #
        #   initial_value(:name) # => 'Initial'
        def initial_value(column)
          initial_values.fetch(column){send(column)}
        end

        # A hash with column symbol keys and initial values.
        #
        #   initial_values # {:name => 'Initial'}
        def initial_values
          @initial_values ||= {}
        end

        # Freeze internal data structures
        def freeze
          initial_values.freeze
          missing_initial_values.freeze
          @previous_changes.freeze if @previous_changes
          super
        end

        # Reset the column to its initial value.  If the column was not set
        # initially, removes it from the values.
        #
        #   reset_column(:name)
        #   name # => 'Initial'
        def reset_column(column)
          if initial_values.has_key?(column)
            send(:"#{column}=", initial_values[column])
          end
          if missing_initial_values.include?(column)
            values.delete(column)
          end
        end

        # Manually specify that a column will change.  This should only be used
        # if you plan to modify a column value in place, which is not recommended.
        #
        #   will_change_column(:name)
        #   name.gsub!(/i/i, 'o')
        #   column_change(:name) # => ['Initial', 'onotoal']
        def will_change_column(column)
          changed_columns << column unless changed_columns.include?(column)
          check_missing_initial_value(column)

          value = if initial_values.has_key?(column)
            initial_values[column]
          else
            send(column)
          end

          initial_values[column] = if value && value != true && value.respond_to?(:clone)
            begin
              value.clone
            rescue TypeError
              value
            end
          else
            value
          end
        end

        private

        # Reset the initial values when setting values.
        def _refresh_set_values(hash)
          reset_initial_values
          super
        end

        # Reset the initial values after saving.
        def after_save
          super
          reset_initial_values
        end

        # Save the current changes so they are available after updating.  This happens
        # before after_save resets them.
        def after_update
          super
          @previous_changes = column_changes
        end

        # When changing the column value, save the initial column value.  If the column
        # value is changed back to the initial value, update changed columns to remove
        # the column.
        def change_column_value(column, value)
          if (iv = initial_values).has_key?(column)
            initial = iv[column]
            super
            if value == initial
              changed_columns.delete(column) unless missing_initial_values.include?(column)
              iv.delete(column)
            end
          else
            check_missing_initial_value(column)
            iv[column] = send(column)
            super
          end
        end

        # If the values hash does not contain the column, make sure missing_initial_values
        # does so that it doesn't get deleted from changed_columns if changed back,
        # and so that resetting the column value can be handled correctly.
        def check_missing_initial_value(column)
          unless values.has_key?(column) || (miv = missing_initial_values).include?(column)
            miv << column
          end
        end

        # Reset the initial values when initializing.
        def initialize_set(h)
          super
          reset_initial_values
        end

        # Array holding column symbols that were not present initially.  This is necessary
        # to differentiate between values that were not present and values that were
        # present but equal to nil.
        def missing_initial_values
          @missing_initial_values ||= []
        end

        # Clear the data structures that store the initial values.
        def reset_initial_values
          @initial_values.clear if @initial_values
          @missing_initial_values.clear if @missing_initial_values
        end
      end
    end
  end
end
ruby-sequel-4.1.1/lib/sequel/plugins/eager_each.rb
module Sequel
  module Plugins
    # The eager_each plugin makes calling each on an eager loaded dataset do eager loading.
    # By default, each does not work on an eager loaded dataset, because each iterates
    # over rows of the dataset as they come in, and to eagerly load you need to have all
    # values up front.  With the default associations code, you must call #all on an eagerly
    # loaded dataset, as calling #each on an #eager dataset skips the eager loading, and calling
    # #each on an #eager_graph dataset makes it yield plain hashes with columns from all
    # tables, instead of yielding the instances of the main model.
    #
    # This plugin makes #each call #all for eagerly loaded datasets.  As #all usually calls
    # #each, this is a bit of an issue, but this plugin resolves the issue by cloning the dataset
    # and setting a new flag in the cloned dataset, so that each can check with the flag to
    # determine whether it should call all.
    #
    # Usage:
    #
    #   # Make all model subclass instances eagerly load for each (called before loading subclasses)
    #   Sequel::Model.plugin :eager_each
    #
    #   # Make the Album class eagerly load for each
    #   Album.plugin :eager_each
    module EagerEach
      module DatasetMethods
        # Call #all instead of #each if eager loading,
        # unless #each is being called by #all.
        def each(&block)
          if use_eager_all?
            all(&block)
          else
            super
          end
        end

        # If eager loading, clone the dataset and set a flag to let #each know not to call #all,
        # to avoid the infinite loop.
        def all(&block)
          if use_eager_all?
            clone(:all_called=>true).all(&block)
          else
            super
          end
        end

        private
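        # Sketch of the flow the :all_called flag enables (the Album model
        # and its artist association are hypothetical):
        #
        #   Album.eager(:artist).each{|a| p a.artist}
        #   # each -> all -> clone(:all_called=>true).all -> each -> super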
        # Whether to use all when each is called, true when eager loading
        # unless the flag has already been set.
        def use_eager_all?
          (opts[:eager] || opts[:eager_graph]) && !opts[:all_called]
        end
      end
    end
  end
end
ruby-sequel-4.1.1/lib/sequel/plugins/error_splitter.rb
module Sequel
  module Plugins
    # The error_splitter plugin automatically splits errors entries related to
    # multiple columns to have separate error entries, one per column.  For example,
    # a multiple column uniqueness entry:
    #
    #   validates_unique([:artist_id, :name])
    #
    # would by default result in errors entries such as:
    #
    #   {[:artist_id, :name]=>'is already taken'}
    #
    # This plugin transforms those errors into:
    #
    #   {:artist_id=>'is already taken', :name=>'is already taken'}
    #
    # The main reason to split errors is if you have a list of fields that you
    # are checking for validation errors.  If you don't split the errors, then:
    #
    #   errors.on(:artist_id)
    #
    # would not return the uniqueness error.
    #
    # Usage:
    #
    #   # Make all model subclass instances split errors (called before loading subclasses)
    #   Sequel::Model.plugin :error_splitter
    #
    #   # Make the Album class split errors
    #   Album.plugin :error_splitter
    module ErrorSplitter
      module InstanceMethods
        # If the model instance is not valid, go through all of the errors entries.  For
        # any that apply to multiple columns, remove them and add separate error entries,
        # one per column.
        def _valid?(*)
          v = super
          unless v
            errors.keys.select{|k| k.is_a?(Array)}.each do |ks|
              msgs = errors.delete(ks)
              ks.each do |k|
                msgs.each do |msg|
                  errors.add(k, msg)
                end
              end
            end
          end
          v
        end
      end
    end
  end
end
ruby-sequel-4.1.1/lib/sequel/plugins/force_encoding.rb
if RUBY_VERSION >= '1.9.0'
  module Sequel
    module Plugins
      # The ForceEncoding plugin allows you to force specific encodings for all
      # strings that are used by the model.  When model instances are loaded
      # from the database, all values in the hash that are strings are
      # forced to the given encoding.  Whenever you update a model column
      # attribute, the resulting value is forced to a given encoding if the
      # value is a string.  There are two ways to specify the encoding.  You
      # can either do so in the plugin call itself, or via the
      # forced_encoding class accessor.
      #
      # Usage:
      #
      #   # Force all strings to be UTF8 encoded in all model subclasses
      #   # (called before loading subclasses)
      #   Sequel::Model.plugin :force_encoding, 'UTF-8'
      #
      #   # Force the encoding for the Album model to UTF8
      #   Album.plugin :force_encoding
      #   Album.forced_encoding = 'UTF-8'
      module ForceEncoding
        # Set the forced_encoding based on the value given in the plugin call.
        # Note that if the plugin has been previously loaded, any previous
        # forced encoding is overruled, even if no encoding is given when calling
        # the plugin.
        def self.configure(model, encoding=nil)
          model.forced_encoding = encoding
        end

        module ClassMethods
          # The string encoding to force on column string values
          attr_accessor :forced_encoding

          Plugins.inherited_instance_variables(self, :@forced_encoding=>nil)

          def call(values)
            o = super
            o.send(:force_hash_encoding, o.values)
            o
          end
        end

        module InstanceMethods
          private

          # Force the encoding of all string values when setting the instance's values.
          def _refresh_set_values(values)
            super(force_hash_encoding(values))
          end

          # Force the encoding of all string values when setting the instance's values.
          def _save_set_values(values)
            super(force_hash_encoding(values))
          end

          # Force the encoding for all string values in the given row hash.
          def force_hash_encoding(row)
            fe = model.forced_encoding
            row.values.each{|v| v.force_encoding(fe) if v.is_a?(String)} if fe
            row
          end

          # Force the encoding of all returned strings to the model's forced_encoding.
          def typecast_value(column, value)
            s = super
            if s.is_a?(String) && (fe = model.forced_encoding)
              s = s.dup if s.frozen?
              s.force_encoding(fe)
            end
            s
          end
        end
      end
    end
  end
else
  # :nocov:
  raise LoadError, 'ForceEncoding plugin only works on Ruby 1.9+'
  # :nocov:
end
ruby-sequel-4.1.1/lib/sequel/plugins/hook_class_methods.rb
module Sequel
  module Plugins
    # Sequel's built-in hook class methods plugin is designed for backwards
    # compatibility.  Its use is not encouraged, it is recommended to use
    # instance methods and super instead of this plugin.  What this plugin
    # What this plugin allows you to do is, for example:
    #
    #   # Block only, can cause duplicate hooks if code is reloaded
    #   before_save{self.created_at = Time.now}
    #   # Block with tag, safe for reloading
    #   before_save(:set_created_at){self.created_at = Time.now}
    #   # Tag only, safe for reloading, calls instance method
    #   before_save(:set_created_at)
    #
    # Pretty much anything you can do with a hook class method, you can also
    # do with an instance method instead:
    #
    #   def before_save
    #     return false if super == false
    #     self.created_at = Time.now
    #   end
    #
    # Note that returning false in any before hook block will skip further
    # before hooks and abort the action. So if a before_save hook block returns
    # false, future before_save hook blocks are not called, and the save is aborted.
    #
    # Usage:
    #
    #   # Allow use of hook class methods in all model subclasses (called before loading subclasses)
    #   Sequel::Model.plugin :hook_class_methods
    #
    #   # Allow the use of hook class methods in the Album class
    #   Album.plugin :hook_class_methods
    module HookClassMethods
      # Set up the hooks instance variable in the model.
      def self.apply(model)
        hooks = model.instance_variable_set(:@hooks, {})
        Model::HOOKS.each{|h| hooks[h] = []}
      end

      module ClassMethods
        # Define a hook class method for each of the standard hook types.
        Model::HOOKS.each{|h| class_eval("def #{h}(method = nil, &block); add_hook(:#{h}, method, &block) end", __FILE__, __LINE__)}

        # This adds a new hook type. It will define both a class
        # method that you can use to add hooks, as well as an instance method
        # that you can use to call all hooks of that type. The class method
        # can be called with a symbol or a block or both. If a block is given and
        # a symbol is not, it adds the hook block to the hook type. If a block
        # and symbol are both given, it replaces the hook block associated with
        # that symbol for a given hook type, or adds it if there is no hook block
        # with that symbol for that hook type. If no block is given, it assumes
        # the symbol specifies an instance method to call and adds it to the hook
        # type.
        #
        # If any before hook block returns false, the instance method will return false
        # immediately without running the rest of the hooks of that type.
        #
        # It is recommended that you always provide a symbol to this method,
        # for descriptive purposes. It's only necessary to do so when you
        # are using a system that reloads code.
        #
        # Example of usage:
        #
        #   class MyModel
        #     add_hook_type :before_move_to
        #     before_move_to(:check_move_allowed){|o| o.allow_move?}
        #     def move_to(there)
        #       return if before_move_to == false
        #       # move MyModel object to there
        #     end
        #   end
        #
        # Do not call this method with untrusted input, as that can result in
        # arbitrary code execution.
        def add_hook_type(*hooks)
          Model::HOOKS.concat(hooks)
          hooks.each do |hook|
            @hooks[hook] = []
            instance_eval("def #{hook}(method = nil, &block); add_hook(:#{hook}, method, &block) end", __FILE__, __LINE__)
            class_eval("def #{hook}; model.hook_blocks(:#{hook}){|b| return false if instance_eval(&b) == false}; end", __FILE__, __LINE__)
          end
        end

        # Returns true if there are any hook blocks for the given hook.
        def has_hooks?(hook)
          !@hooks[hook].empty?
        end

        # Yield every block related to the given hook.
        def hook_blocks(hook)
          @hooks[hook].each{|k,v| yield v}
        end

        Plugins.inherited_instance_variables(self, :@hooks=>:hash_dup)

        private

        # Add a hook block to the list of hook methods.
        # If a non-nil tag is given and it already is in the list of hooks,
        # replace it with the new block.
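        #
        # A minimal sketch of the tag replacement behavior, assuming a
        # hypothetical Album model using this plugin:
        #
        #   Album.before_save(:set_ts){self.ts = Time.now}
        #   # Same tag, so this replaces the block added above instead of
        #   # adding a second hook:
        #   Album.before_save(:set_ts){self.ts = Time.now.utc}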
        def add_hook(hook, tag, &block)
          unless block
            (raise Error, 'No hook method specified') unless tag
            block = proc {send tag}
          end
          h = @hooks[hook]
          if tag && (old = h.find{|x| x[0] == tag})
            old[1] = block
          else
            if hook.to_s =~ /^before/
              h.unshift([tag,block])
            else
              h << [tag, block]
            end
          end
        end
      end

      module InstanceMethods
        Model::BEFORE_HOOKS.each{|h| class_eval("def #{h}; model.hook_blocks(:#{h}){|b| return false if instance_eval(&b) == false}; super; end", __FILE__, __LINE__)}
        Model::AFTER_HOOKS.each{|h| class_eval("def #{h}; super; model.hook_blocks(:#{h}){|b| instance_eval(&b)}; end", __FILE__, __LINE__)}
      end
    end
  end
end

ruby-sequel-4.1.1/lib/sequel/plugins/input_transformer.rb

module Sequel
  module Plugins
    # InputTransformer is a plugin that allows generic transformations
    # of input values in model column setters. Example:
    #
    #   Album.plugin :input_transformer
    #   Album.add_input_transformer(:reverser){|v| v.is_a?(String) ? v.reverse : v}
    #   album = Album.new(:name=>'foo')
    #   album.name # => 'oof'
    #
    # You can specifically set some columns to skip some input
    # transformers:
    #
    #   Album.skip_input_transformer(:reverser, :foo)
    #   Album.new(:foo=>'bar').foo # => 'bar'
    #
    # Usage:
    #
    #   # Make all model subclass instances support input transformers (called before loading subclasses)
    #   Sequel::Model.plugin :input_transformer
    #
    #   # Make the Album class support input transformers
    #   Album.plugin :input_transformer
    module InputTransformer
      # Set up the input transformer data structures in the model.
      def self.apply(model, *)
        model.instance_eval do
          @input_transformers = {}
          @input_transformer_order = []
          @skip_input_transformer_columns = {}
        end
      end

      # If an input transformer is given in the plugin call,
      # add it as a transformer
      def self.configure(model, transformer_name=nil, &block)
        model.add_input_transformer(transformer_name, &block) if transformer_name || block
      end

      module ClassMethods
        # Hash of input transformer name symbols to transformer callables.
        attr_reader :input_transformers

        # The order in which to call the input transformers.
        attr_reader :input_transformer_order

        Plugins.inherited_instance_variables(self, :@skip_input_transformer_columns=>:hash_dup, :@input_transformers=>:dup, :@input_transformer_order=>:dup)

        # Add an input transformer to this model.
        def add_input_transformer(transformer_name, &block)
          raise(Error, 'must provide both transformer name and block when adding input transformer') unless transformer_name && block
          @input_transformers[transformer_name] = block
          @input_transformer_order.unshift(transformer_name)
          @skip_input_transformer_columns[transformer_name] = []
        end

        # Set columns that the transformer should skip.
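        # For example, using the :reverser transformer from the documentation
        # above, this skips the hypothetical :foo and :bar columns:
        #
        #   Album.skip_input_transformer(:reverser, :foo, :bar)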
        def skip_input_transformer(transformer_name, *columns)
          @skip_input_transformer_columns[transformer_name].concat(columns).uniq!
        end

        # Return true if the transformer should not be called for the given column.
        def skip_input_transformer?(transformer_name, column)
          @skip_input_transformer_columns[transformer_name].include?(column)
        end
      end

      module InstanceMethods
        # Transform the input using all of the transformers, except those explicitly
        # skipped, before setting the value in the model object.
        def []=(k, v)
          model.input_transformer_order.each do |transformer_name|
            v = model.input_transformers[transformer_name].call(v) unless model.skip_input_transformer?(transformer_name, k)
          end
          super
        end
      end
    end
  end
end

ruby-sequel-4.1.1/lib/sequel/plugins/instance_filters.rb

module Sequel
  module Plugins
    # This plugin allows you to add filters on a per object basis that
    # restrict updating or deleting the object. It's designed for cases
    # where you would normally have to drop down to the dataset level
    # to get the necessary control, because you only want to delete or
    # update the rows in certain cases based on the current status of
    # the row in the database.
    #
    #   class Item < Sequel::Model
    #     plugin :instance_filters
    #   end
    #
    #   # These are two separate objects that represent the same
    #   # database row.
    #   i1 = Item.first(:id=>1, :delete_allowed=>false)
    #   i2 = Item.first(:id=>1, :delete_allowed=>false)
    #
    #   # Add an instance filter to the object. This filter is in effect
    #   # until the object is successfully updated or deleted.
    #   i1.instance_filter(:delete_allowed=>true)
    #
    #   # Attempting to delete the object where the filter doesn't
    #   # match any rows raises an error.
    #   i1.delete # raises Sequel::NoExistingObject
    #
    #   # The other object that represents the same row has no
    #   # instance filters, and can be updated normally.
    #   i2.update(:delete_allowed=>true)
    #
    #   # Even though the filter is still in effect, since the
    #   # database row has been updated to allow deleting,
    #   # delete now works.
    #   i1.delete
    #
    # This plugin sets the require_modification flag on the model,
    # so if the model's dataset doesn't provide an accurate number
    # of matched rows, this could result in invalid exceptions being raised.
    module InstanceFilters
      # Exception class raised when updating or deleting an object does
      # not affect exactly one row.
      Error = Sequel::NoExistingObject

      # Set the require_modification flag to true for the model.
      def self.configure(model)
        model.require_modification = true
      end

      module InstanceMethods
        # Clear the instance filters after successfully destroying the object.
        def after_destroy
          super
          clear_instance_filters
        end

        # Clear the instance filters after successfully updating the object.
        def after_update
          super
          clear_instance_filters
        end

        # Freeze the instance filters when freezing the object
        def freeze
          instance_filters.freeze
          super
        end

        # Add an instance filter to the array of instance filters.
        # Both the arguments given and the block are passed to the
        # dataset's filter method.
        def instance_filter(*args, &block)
          instance_filters << [args, block]
        end

        private

        # If there are any instance filters, make sure not to use the
        # instance delete optimization.
        def _delete_without_checking
          if @instance_filters && !@instance_filters.empty?
            _delete_dataset.delete
          else
            super
          end
        end

        # Lazily initialize the instance filter array.
        def instance_filters
          @instance_filters ||= []
        end

        # Apply the instance filters to the given dataset.
        def apply_instance_filters(ds)
          instance_filters.inject(ds){|ds1, i| ds1.filter(*i[0], &i[1])}
        end

        # Clear the instance filters.
        def clear_instance_filters
          instance_filters.clear
        end

        # Apply the instance filters to the dataset returned by super.
        def _delete_dataset
          apply_instance_filters(super)
        end

        # Apply the instance filters to the dataset returned by super.
        def _update_dataset
          apply_instance_filters(super)
        end

        # Only use prepared statements for update and delete queries
        # if there are no instance filters.
        def use_prepared_statements_for?(type)
          if (type == :update || type == :delete) && !instance_filters.empty?
            false
          else
            super
          end
        end
      end
    end
  end
end

ruby-sequel-4.1.1/lib/sequel/plugins/instance_hooks.rb

module Sequel
  module Plugins
    # The instance_hooks plugin allows you to add hooks to specific instances,
    # by passing a block to a _hook method (e.g. before_save_hook{do_something}).
    # The block is executed when the hook is called (e.g. before_save).
    #
    # All of the standard hooks are supported, except for after_initialize.
    # Instance level before hooks are executed in reverse order of addition before
    # calling super. Instance level after hooks are executed in order of addition
    # after calling super. If any of the instance level before hook blocks return
    # false, no more instance level before hooks are called and false is returned.
    #
    # Instance level hooks for before and after are cleared after all related
    # after level instance hooks have run. This means that if you add before_create
    # and before_update instance hooks to a new object, the before_create hook will
    # be run the first time you save the object (creating it), and the before_update
    # hook will be run the second time you save the object (updating it), and no
    # hooks will be run the third time you save the object.
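    #
    # For example, assuming a hypothetical Album model using this plugin:
    #
    #   a = Album.new
    #   a.before_create_hook{puts 'before create'}
    #   a.before_update_hook{puts 'before update'}
    #   a.save # prints 'before create'
    #   a.save # prints 'before update'
    #   a.save # prints nothing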
    #
    # Usage:
    #
    #   # Add the instance hook methods to all model subclass instances (called before loading subclasses)
    #   Sequel::Model.plugin :instance_hooks
    #
    #   # Add the instance hook methods just to Album instances
    #   Album.plugin :instance_hooks
    module InstanceHooks
      module InstanceMethods
        BEFORE_HOOKS = Sequel::Model::BEFORE_HOOKS
        AFTER_HOOKS = Sequel::Model::AFTER_HOOKS - [:after_initialize]
        HOOKS = BEFORE_HOOKS + AFTER_HOOKS

        HOOKS.each{|h| class_eval(<<-END, __FILE__, __LINE__+1)}
          def #{h}_hook(&block)
            raise Sequel::Error, "can't add hooks to frozen object" if frozen?
            add_instance_hook(:#{h}, &block)
            self
          end
        END

        BEFORE_HOOKS.each{|h| class_eval("def #{h}; run_before_instance_hooks(:#{h}) == false ? false : super end", __FILE__, __LINE__)}
        AFTER_HOOKS.each{|h| class_eval(<<-END, __FILE__, __LINE__ + 1)}
          def #{h}
            super
            run_after_instance_hooks(:#{h})
            @instance_hooks.delete(:#{h})
            @instance_hooks.delete(:#{h.to_s.sub('after', 'before')})
          end
        END

        private

        # Add the block as an instance level hook. For before hooks, add it to
        # the beginning of the instance hook's array. For after hooks, add it
        # to the end.
        def add_instance_hook(hook, &block)
          instance_hooks(hook).send(BEFORE_HOOKS.include?(hook) ? :unshift : :push, block)
        end

        # An array of instance level hook blocks for the given hook type.
        def instance_hooks(hook)
          @instance_hooks ||= {}
          @instance_hooks[hook] ||= []
        end

        # Run all hook blocks of the given hook type.
        def run_after_instance_hooks(hook)
          instance_hooks(hook).each{|b| b.call}
        end

        # Run all hook blocks of the given hook type. If a hook block returns false,
        # immediately return false without running the remaining blocks.
        def run_before_instance_hooks(hook)
          instance_hooks(hook).each{|b| return false if b.call == false}
        end
      end
    end
  end
end

ruby-sequel-4.1.1/lib/sequel/plugins/json_serializer.rb

require 'json'

module Sequel
  module Plugins
    # The json_serializer plugin handles serializing entire Sequel::Model
    # objects to JSON, as well as support for deserializing JSON directly
    # into Sequel::Model objects. It requires the json library, and can
    # work with either the pure ruby version or the C extension.
    #
    # Basic Example:
    #
    #   album = Album[1]
    #   album.to_json
    #   # => '{"json_class":"Album","id":1,"name":"RF","artist_id":2}'
    #
    # In addition, you can provide options to control the JSON output:
    #
    #   album.to_json(:only=>:name)
    #   album.to_json(:except=>[:id, :artist_id])
    #   # => '{"json_class":"Album","name":"RF"}'
    #
    #   album.to_json(:include=>:artist)
    #   # => '{"json_class":"Album","id":1,"name":"RF","artist_id":2,
    #   #      "artist":{"json_class":"Artist","id":2,"name":"YJM"}}'
    #
    # You can use a hash value with <tt>:include</tt> to pass options
    # to associations:
    #
    #   album.to_json(:include=>{:artist=>{:only=>:name}})
    #   # => '{"json_class":"Album","id":1,"name":"RF","artist_id":2,
    #   #      "artist":{"json_class":"Artist","name":"YJM"}}'
    #
    # You can specify the <tt>:root</tt> option to nest the JSON under the
    # name of the model:
    #
    #   album.to_json(:root => true)
    #   # => '{"album":{"id":1,"name":"RF","artist_id":2}}'
    #
    # Additionally, +to_json+ also exists as a class and dataset method, both
    # of which return all objects in the dataset:
    #
    #   Album.to_json
    #   Album.filter(:artist_id=>1).to_json(:include=>:tags)
    #
    # If you have an existing array of model instances you want to convert to
    # JSON, you can call the class to_json method with the :array option:
    #
    #   Album.to_json(:array=>[Album[1], Album[2]])
    #
    # In addition to creating JSON, this plugin also enables Sequel::Model
    # classes to create instances directly from JSON using the from_json class
    # method:
    #
    #   json = album.to_json
    #   album = Album.from_json(json)
    #
    # The array_from_json class method exists to parse arrays of model instances
    # from json:
    #
    #   json = Album.filter(:artist_id=>1).to_json
    #   albums = Album.array_from_json(json)
    #
    # These do not necessarily round trip, since doing so would let users
    # create model objects with arbitrary values. By default, from_json will
    # call set with the values in the hash. If you want to specify the allowed
    # fields, you can use the :fields option, which will call set_fields with
    # the given fields:
    #
    #   Album.from_json(album.to_json, :fields=>%w'id name')
    #
    # If you want to update an existing instance, you can use the from_json
    # instance method:
    #
    #   album.from_json(json)
    #
    # Both of these allow creation of cached associated objects, if you provide
    # the :associations option:
    #
    #   album.from_json(json, :associations=>:artist)
    #
    # You can even provide options when setting up the associated objects:
    #
    #   album.from_json(json, :associations=>{:artist=>{:fields=>%w'id name', :associations=>:tags}})
    #
    # Note that active_support/json makes incompatible changes to the to_json API,
    # and breaks some aspects of the json_serializer plugin. You can undo the damage
    # done by active_support/json by doing:
    #
    #   class Array
    #     def to_json(options = {})
    #       JSON.generate(self)
    #     end
    #   end
    #
    #   class Hash
    #     def to_json(options = {})
    #       JSON.generate(self)
    #     end
    #   end
    #
    # Note that this will probably cause active_support/json to no longer work
    # correctly in some cases.
    #
    # Usage:
    #
    #   # Add JSON output capability to all model subclass instances (called before loading subclasses)
    #   Sequel::Model.plugin :json_serializer
    #
    #   # Add JSON output capability to Album class instances
    #   Album.plugin :json_serializer
    module JsonSerializer
      # Store the given options as the default options to use when serializing
      # instances of this model class to JSON.
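      # For example, to omit a hypothetical password column from all JSON
      # output by default:
      #
      #   Album.plugin :json_serializer, :except=>:password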
def self.configure(model, opts={}) model.instance_eval do @json_serializer_opts = (@json_serializer_opts || {}).merge(opts) end end # Helper class used for making sure that cascading options # for model associations works correctly. Cascaded options # work by creating instances of this class, which take a # literal JSON string and have +to_json+ return it. class Literal # Store the literal JSON to use def initialize(json) @json = json end # Return the literal JSON to use def to_json(*a) @json end end module ClassMethods # The default opts to use when serializing model objects to JSON. attr_reader :json_serializer_opts # Attempt to parse a single instance from the given JSON string, # with options passed to InstanceMethods#from_json_node. def from_json(json, opts=OPTS) v = Sequel.parse_json(json) case v when self v when Hash new.from_json_node(v, opts) else raise Error, "parsed json doesn't return a hash or instance of #{self}" end end # Attempt to parse an array of instances from the given JSON string, # with options passed to InstanceMethods#from_json_node. def array_from_json(json, opts=OPTS) v = Sequel.parse_json(json) if v.is_a?(Array) raise(Error, 'parsed json returned an array containing non-hashes') unless v.all?{|ve| ve.is_a?(Hash) || ve.is_a?(self)} v.map{|ve| ve.is_a?(self) ? ve : new.from_json_node(ve, opts)} else raise(Error, 'parsed json did not return an array') end end Plugins.inherited_instance_variables(self, :@json_serializer_opts=>lambda do |json_serializer_opts| opts = {} json_serializer_opts.each{|k, v| opts[k] = (v.is_a?(Array) || v.is_a?(Hash)) ? v.dup : v} opts end) Plugins.def_dataset_methods(self, :to_json) end module InstanceMethods # Parse the provided JSON, which should return a hash, # and process the hash with from_json_node. def from_json(json, opts=OPTS) from_json_node(Sequel.parse_json(json), opts) end # Using the provided hash, update the instance with data contained in the hash. By default, just # calls set with the hash values. # # Options: # :associations :: Indicates that the associations cache should be updated by creating # a new associated object using data from the hash. Should be a Symbol # for a single association, an array of symbols for multiple associations, # or a hash with symbol keys and dependent association option hash values. # :fields :: Changes the behavior to call set_fields using the provided fields, instead of calling set. def from_json_node(hash, opts=OPTS) unless hash.is_a?(Hash) raise Error, "parsed json doesn't return a hash" end populate_associations = {} if assocs = opts[:associations] assocs = case assocs when Symbol {assocs=>{}} when Array assocs_tmp = {} assocs.each{|v| assocs_tmp[v] = {}} assocs_tmp when Hash assocs else raise Error, ":associations should be Symbol, Array, or Hash if present" end assocs.each do |assoc, assoc_opts| if assoc_values = hash.delete(assoc.to_s) unless r = model.association_reflection(assoc) raise Error, "Association #{assoc} is not defined for #{model}" end populate_associations[assoc] = if r.returns_array? raise Error, "Attempt to populate array association with a non-array" unless assoc_values.is_a?(Array) assoc_values.map{|v| v.is_a?(r.associated_class) ? v : r.associated_class.new.from_json_node(v, assoc_opts)} else raise Error, "Attempt to populate non-array association with an array" if assoc_values.is_a?(Array) assoc_values.is_a?(r.associated_class) ? 
                                           assoc_values : r.associated_class.new.from_json_node(assoc_values, assoc_opts)
            end
          end
        end
      end

        if fields = opts[:fields]
          set_fields(hash, fields, opts)
        else
          set(hash)
        end

        populate_associations.each do |assoc, values|
          associations[assoc] = values
        end

        self
      end

      # Return a string in JSON format. Accepts the following
      # options:
      #
      # :except :: Symbol or Array of Symbols of columns not
      #            to include in the JSON output.
      # :include :: Symbol, Array of Symbols, or a Hash with
      #             Symbol keys and Hash values specifying
      #             associations or other non-column attributes
      #             to include in the JSON output. Using a nested
      #             hash, you can pass options to associations
      #             to affect the JSON used for associated objects.
      # :only :: Symbol or Array of Symbols of columns to only
      #          include in the JSON output, ignoring all other
      #          columns.
      # :root :: Qualify the JSON with the name of the object.
      def to_json(*a)
        if opts = a.first.is_a?(Hash)
          opts = model.json_serializer_opts.merge(a.first)
          a = []
        else
          opts = model.json_serializer_opts
        end

        vals = values
        cols = if only = opts[:only]
          Array(only)
        else
          vals.keys - Array(opts[:except])
        end

        h = {}
        cols.each{|c| h[c.to_s] = send(c)}

        if inc = opts[:include]
          if inc.is_a?(Hash)
            inc.each do |k, v|
              v = v.empty? ? [] : [v]
              h[k.to_s] = case objs = send(k)
              when Array
                objs.map{|obj| Literal.new(Sequel.object_to_json(obj, *v))}
              else
                Literal.new(Sequel.object_to_json(objs, *v))
              end
            end
          else
            Array(inc).each{|c| h[c.to_s] = send(c)}
          end
        end

        h = {model.send(:underscore, model.to_s) => h} if opts[:root]
        Sequel.object_to_json(h, *a)
      end
    end

    module DatasetMethods
      # Return a JSON string representing an array of all objects in
      # this dataset. Takes the same options as the instance
      # method, and passes them to every instance. Additionally,
      # respects the following options:
      #
      # :array :: An array of instances. If this is not provided,
      #           calls #all on the receiver to get the array.
      # :root :: If set to :collection, only wraps the collection
      #          in a root object. If set to :instance, only wraps
      #          the instances in a root object. If set to :both,
      #          wraps both the collection and instances in a root
      #          object. Unfortunately, for backwards compatibility,
      #          if this option is true and doesn't match one of those
      #          symbols, it defaults to both. That may change in a
      #          future version, so for forwards compatibility, you
      #          should pick a specific symbol for your desired
      #          behavior.
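      #
      # For example, assuming a hypothetical Album model using this plugin:
      #
      #   Album.dataset.to_json(:root=>:collection)
      #   # => '{"albums":[{"id":1,...},{"id":2,...}]}'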
      def to_json(*a)
        if opts = a.first.is_a?(Hash)
          opts = model.json_serializer_opts.merge(a.first)
          a = []
        else
          opts = model.json_serializer_opts
        end

        collection_root = case opts[:root]
        when nil, false, :instance
          false
        when :both
          true
        else
          opts = opts.dup
          opts.delete(:root)
          true
        end

        res = if row_proc
          array = if opts[:array]
            opts = opts.dup
            opts.delete(:array)
          else
            all
          end
          array.map{|obj| Literal.new(Sequel.object_to_json(obj, opts))}
        else
          all
        end

        if collection_root
          Sequel.object_to_json({model.send(:pluralize, model.send(:underscore, model.to_s)) => res}, *a)
        else
          Sequel.object_to_json(res, *a)
        end
      end
    end
  end
end

ruby-sequel-4.1.1/lib/sequel/plugins/lazy_attributes.rb

module Sequel
  module Plugins
    # The lazy_attributes plugin allows users to easily set that some attributes
    # should not be loaded by default when loading model objects. If the attribute
    # is needed after the instance has been retrieved, a database query is made to
    # retrieve the value of the attribute.
    #
    # This plugin depends on the tactical_eager_loading plugin, and allows you to
    # eagerly load lazy attributes for all objects retrieved with the current object.
    # So the following code should issue one query to get the albums and one query to
    # get the reviews for all of those albums:
    #
    #   Album.plugin :lazy_attributes, :review
    #   Album.filter{id<100}.all do |a|
    #     a.review
    #   end
    #
    #   # You can specify multiple columns to lazily load:
    #   Album.plugin :lazy_attributes, :review, :tracklist
    module LazyAttributes
      # Lazy attributes requires the tactical_eager_loading plugin
      def self.apply(model, *attrs)
        model.plugin :tactical_eager_loading
      end

      # Set the attributes given as lazy attributes
      def self.configure(model, *attrs)
        model.lazy_attributes(*attrs) unless attrs.empty?
      end

      module ClassMethods
        # Module to store the lazy attribute getter methods, so they can
        # be overridden and call super to get the lazy attribute behavior
        attr_accessor :lazy_attributes_module

        # Remove the given attributes from the list of columns selected by default.
        # For each attribute given, create an accessor method that allows a lazy
        # lookup of the attribute. Each attribute should be given as a symbol.
        def lazy_attributes(*attrs)
          set_dataset(dataset.select(*(columns - attrs)))
          attrs.each{|a| define_lazy_attribute_getter(a)}
        end

        private

        # Add a lazy attribute getter method to the lazy_attributes_module
        def define_lazy_attribute_getter(a)
          include(self.lazy_attributes_module ||= Module.new) unless lazy_attributes_module
          lazy_attributes_module.class_eval do
            define_method(a) do
              if !values.has_key?(a) && !new?
                lazy_attribute_lookup(a)
              else
                super()
              end
            end
          end
        end
      end

      module InstanceMethods
        private

        # If the model was selected with other model objects, eagerly load the
        # attribute for all of those objects. If not, query the database for
        # the attribute for just the current object. Return the value of
        # the attribute for the current object.
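        #
        # For example, with the Album plugin call shown in the documentation
        # above, the per-object lookup for a.review issues a query like
        # (assuming an id of 1):
        #
        #   SELECT review FROM albums WHERE (id = 1) LIMIT 1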
        def lazy_attribute_lookup(a)
          if frozen?
            return this.dup.select(a).get(a)
          end

          if retrieved_with
            raise(Error, "Invalid primary key column for #{model}: #{model.primary_key.inspect}") unless primary_key = model.primary_key
            composite_pk = true if primary_key.is_a?(Array)
            id_map = {}
            retrieved_with.each{|o| id_map[o.pk] = o unless o.values.has_key?(a) || o.frozen?}
            model.select(*(Array(primary_key) + [a])).filter(primary_key=>id_map.keys).naked.each do |row|
              obj = id_map[composite_pk ? row.values_at(*primary_key) : row[primary_key]]
              if obj && !obj.values.has_key?(a)
                obj.values[a] = row[a]
              end
            end
          end
          values[a] = this.select(a).get(a) unless values.has_key?(a)
          values[a]
        end
      end
    end
  end
end

ruby-sequel-4.1.1/lib/sequel/plugins/list.rb

module Sequel
  module Plugins
    # The list plugin allows for model instances to be part of an ordered list,
    # based on a position field in the database. It can either consider all
    # rows in the table as being from the same list, or you can specify scopes
    # so that multiple lists can be kept in the same table.
    #
    # Basic Example:
    #
    #   class Item < Sequel::Model(:items)
    #     plugin :list # will use :position field for position
    #     plugin :list, :field=>:pos # will use :pos field for position
    #   end
    #
    #   item = Item[1]
    #
    #   # Get the next or previous item in the list
    #   item.next
    #   item.prev
    #
    #   # Modify the item's position, which may require modifying other items in
    #   # the same list
    #   item.move_to(3)
    #   item.move_to_top
    #   item.move_to_bottom
    #   item.move_up
    #   item.move_down
    #
    # You can provide a <tt>:scope</tt> option to scope the list. This option
    # can be a symbol or array of symbols specifying column name(s), or a proc
    # that accepts a model instance and returns a dataset representing the list
    # the object is in.
    #
    # For example, if each item has a +user_id+ field, and you want every user
    # to have their own list:
    #
    #   Item.plugin :list, :scope=>:user_id
    #
    # Note that using this plugin modifies the order of the model's dataset to
    # sort by the position and scope fields. Also note that this plugin is subject to
    # race conditions, and is not safe when concurrent modifications are made
    # to the same list.
    #
    # Additionally, note that unlike ruby arrays, the list plugin assumes that the
    # first entry in the list has position 1, not position 0.
    #
    # Copyright (c) 2007-2010 Sharon Rosner, Wayne E. Seguin, Aman Gupta, Adrian Madrid, Jeremy Evans
    module List
      # Set the +position_field+ and +scope_proc+ attributes for the model,
      # using the <tt>:field</tt> and <tt>:scope</tt> options, respectively.
      # The <tt>:scope</tt> option can be a symbol, array of symbols, or a proc that
      # accepts a model instance and returns a dataset representing the list.
      # Also, modify the model dataset's order to order by the position and scope fields.
      def self.configure(model, opts = OPTS)
        model.position_field = opts[:field] || :position
        model.dataset = model.dataset.order_prepend(model.position_field)

        model.scope_proc = case scope = opts[:scope]
        when Symbol
          model.dataset = model.dataset.order_prepend(scope)
          proc{|obj| obj.model.filter(scope=>obj.send(scope))}
        when Array
          model.dataset = model.dataset.order_prepend(*scope)
          proc{|obj| obj.model.filter(scope.map{|s| [s, obj.send(s)]})}
        else
          scope
        end
      end

      module ClassMethods
        # The column name holding the position in the list, as a symbol.
        attr_accessor :position_field

        # A proc that scopes the dataset, so that there can be multiple positions
        # in the list, but the positions are unique with the scoped dataset. This
        # proc should accept an instance and return a dataset representing the list.
        attr_accessor :scope_proc

        Plugins.inherited_instance_variables(self, :@position_field=>nil, :@scope_proc=>nil)
      end

      module InstanceMethods
        # The model object at the given position in the list containing this instance.
        def at_position(p)
          list_dataset.first(position_field => p)
        end

        # Set the value of the position_field to the maximum value plus 1 unless the
        # position field already has a value.
        def before_create
          unless send(position_field)
            send("#{position_field}=", list_dataset.max(position_field).to_i+1)
          end
          super
        end

        # Find the last position in the list containing this instance.
        def last_position
          list_dataset.max(position_field).to_i
        end

        # A dataset that represents the list containing this instance.
        def list_dataset
          model.scope_proc ? model.scope_proc.call(self) : model.dataset
        end

        # Move this instance down the given number of places in the list,
        # or 1 place if no argument is specified.
        def move_down(n = 1)
          move_to(position_value + n)
        end

        # Move this instance to the given place in the list. Raises an
        # exception if target is less than 1 or greater than the last position in the list.
        def move_to(target, lp = nil)
          current = position_value
          if target != current
            checked_transaction do
              ds = list_dataset
              op, ds = if target < current
                raise(Sequel::Error, "Moving too far up (target = #{target})") if target < 1
                [:+, ds.filter(position_field=>target...current)]
              else
                lp ||= last_position
                raise(Sequel::Error, "Moving too far down (target = #{target}, last_position = #{lp})") if target > lp
                [:-, ds.filter(position_field=>(current + 1)..target)]
              end
              ds.update(position_field => Sequel::SQL::NumericExpression.new(op, position_field, 1))
              update(position_field => target)
            end
          end
          self
        end

        # Move this instance to the bottom (last position) of the list.
        def move_to_bottom
          lp = last_position
          move_to(lp, lp)
        end

        # Move this instance to the top (first position, position 1) of the list.
        def move_to_top
          move_to(1)
        end

        # Move this instance the given number of places up in the list, or 1 place
        # if no argument is specified.
        def move_up(n = 1)
          move_to(position_value - n)
        end

        # The model instance the given number of places below this model instance
        # in the list, or 1 place below if no argument is given.
        def next(n = 1)
          n == 0 ? self : at_position(position_value + n)
        end

        # The value of the model's position field for this instance.
        def position_value
          send(position_field)
        end

        # The model instance the given number of places above this model instance
        # in the list, or 1 place above if no argument is given.
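        #
        # For example:
        #
        #   item.prev    # the item 1 place above this instance
        #   item.prev(2) # the item 2 places above this instance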
        def prev(n = 1)
          self.next(n * -1)
        end

        private

        # The model's position field, an instance method for ease of use.
        def position_field
          model.position_field
        end
      end
    end
  end
end

ruby-sequel-4.1.1/lib/sequel/plugins/many_through_many.rb

module Sequel
  module Plugins
    # The many_through_many plugin allows you to create an association to multiple objects using multiple join tables.
    # For example, assume the following associations:
    #
    #   Artist.many_to_many :albums
    #   Album.many_to_many :tags
    #
    # The many_through_many plugin would allow this:
    #
    #   Artist.plugin :many_through_many
    #   Artist.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]]
    #
    # Which will give you the tags for all of the artist's albums.
    #
    # Let's break down the 2nd argument of the many_through_many call:
    #
    #   [[:albums_artists, :artist_id, :album_id],
    #    [:albums, :id, :id],
    #    [:albums_tags, :album_id, :tag_id]]
    #
    # This argument is an array of arrays with three elements. Each entry in the main array represents a JOIN in SQL:
    #
    # * The first element in each array represents the name of the table to join.
    # * The second element in each array represents the column used to join to the previous table.
    # * The third element in each array represents the column used to join to the next table.
    #
    # So the "Artist.many_through_many :tags" is translated into something similar to:
    #
    #   FROM artists
    #   JOIN albums_artists ON (artists.id = albums_artists.artist_id)
    #   JOIN albums ON (albums_artists.album_id = albums.id)
    #   JOIN albums_tags ON (albums.id = albums_tags.album_id)
    #   JOIN tags ON (albums_tags.tag_id = tags.id)
    #
    # The "artists.id" and "tags.id" criteria come from other association options (defaulting to the primary keys of the current and
    # associated tables), but hopefully you can see how each argument in the array is used in the JOIN clauses.
# # Here are some more examples: # # # Same as Artist.many_to_many :albums # Artist.many_through_many :albums, [[:albums_artists, :artist_id, :album_id]] # # # All artists that are associated to any album that this artist is associated to # Artist.many_through_many :artists, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_artists, :album_id, :artist_id]] # # # All albums by artists that are associated to any album that this artist is associated to # Artist.many_through_many :artist_albums, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], \ # [:albums_artists, :album_id, :artist_id], [:artists, :id, :id], [:albums_artists, :artist_id, :album_id]], \ # :class=>:Album # # # All tracks on albums by this artist # Artist.many_through_many :tracks, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id]], \ # :right_primary_key=>:album_id # # Often you don't want the current object to appear in the array of associated objects. This is easiest to handle via an :after_load hook: # # Artist.many_through_many :artists, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_artists, :album_id, :artist_id]], # :after_load=>proc{|artist, associated_artists| associated_artists.delete(artist)} # # You can also handle it by adding a dataset block that excludes the current record (so it won't be retrieved at all), but # that won't work when eagerly loading, which is why the :after_load proc is recommended instead. # # It's also common to not want duplicate records, in which case the :distinct option can be used: # # Artist.many_through_many :artists, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_artists, :album_id, :artist_id]], # :distinct=>true module ManyThroughMany # The AssociationReflection subclass for many_through_many associations. class ManyThroughManyAssociationReflection < Sequel::Model::Associations::ManyToManyAssociationReflection Sequel::Model::Associations::ASSOCIATION_TYPES[:many_through_many] = self # The default associated key alias(es) to use when eager loading # associations via eager. def default_associated_key_alias self[:uses_left_composite_keys] ? (0...self[:through].first[:left].length).map{|i| :"x_foreign_key_#{i}_x"} : :x_foreign_key_x end %w'associated_key_table predicate_key edges final_edge final_reverse_edge reverse_edges'.each do |meth| class_eval(<<-END, __FILE__, __LINE__+1) def #{meth} cached_fetch(:#{meth}){calculate_edges[:#{meth}]} end END end # Many through many associations don't have a reciprocal def reciprocal nil end private # Make sure to use unique table aliases when lazy loading or eager loading def calculate_reverse_edge_aliases(reverse_edges) aliases = [associated_class.table_name] reverse_edges.each do |e| table_alias = e[:table] if aliases.include?(table_alias) i = 0 table_alias = loop do ta = :"#{table_alias}_#{i}" break ta unless aliases.include?(ta) i += 1 end end aliases.push(e[:alias] = table_alias) end end # Transform the :through option into a list of edges and reverse edges to use to join tables when loading the association. 
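        # Each edge hash produced below has :table, :left, and :right keys
        # (plus any per-join options), describing the joins walked from the
        # current model's table towards the associated table; the reverse
        # edges describe the same joins walked back from the associated table.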
def calculate_edges es = [{:left_table=>self[:model].table_name, :left_key=>self[:left_primary_key_column]}] self[:through].each do |t| es.last.merge!(:right_key=>t[:left], :right_table=>t[:table], :join_type=>t[:join_type]||self[:graph_join_type], :conditions=>(t[:conditions]||[]).to_a, :block=>t[:block]) es.last[:only_conditions] = t[:only_conditions] if t.include?(:only_conditions) es << {:left_table=>t[:table], :left_key=>t[:right]} end es.last.merge!(:right_key=>right_primary_key, :right_table=>associated_class.table_name) edges = es.map do |e| h = {:table=>e[:right_table], :left=>e[:left_key], :right=>e[:right_key], :conditions=>e[:conditions], :join_type=>e[:join_type], :block=>e[:block]} h[:only_conditions] = e[:only_conditions] if e.include?(:only_conditions) h end reverse_edges = es.reverse.map{|e| {:table=>e[:left_table], :left=>e[:left_key], :right=>e[:right_key]}} reverse_edges.pop calculate_reverse_edge_aliases(reverse_edges) final_reverse_edge = reverse_edges.pop final_reverse_alias = final_reverse_edge[:alias] h = {:final_edge=>edges.pop, :final_reverse_edge=>final_reverse_edge, :edges=>edges, :reverse_edges=>reverse_edges, :predicate_key=>qualify(final_reverse_alias, edges.first[:right]), :associated_key_table=>final_reverse_edge[:alias], } h.each{|k, v| cached_set(k, v)} h end end module ClassMethods # Create a many_through_many association. Arguments: # * name - Same as associate, the name of the association. # * through - The tables and keys to join between the current table and the associated table. # Must be an array, with elements that are either 3 element arrays, or hashes with keys :table, :left, and :right. # The required entries in the array/hash are: # :table (first array element) :: The name of the table to join. # :left (middle array element) :: The key joining the table to the previous table. Can use an # array of symbols for a composite key association. # :right (last array element) :: The key joining the table to the next table. Can use an # array of symbols for a composite key association. # If a hash is provided, the following keys are respected when using eager_graph: # :block :: A proc to use as the block argument to join. # :conditions :: Extra conditions to add to the JOIN ON clause. Must be a hash or array of two pairs. # :join_type :: The join type to use for the join, defaults to :left_outer. # :only_conditions :: Conditions to use for the join instead of the ones specified by the keys. # * opts - The options for the associaion. Takes the same options as many_to_many. def many_through_many(name, through, opts=OPTS, &block) associate(:many_through_many, name, opts.merge(through.is_a?(Hash) ? through : {:through=>through}), &block) end private # Create the association methods and :eager_loader and :eager_grapher procs. def def_many_through_many(opts) name = opts[:name] model = self opts[:read_only] = true opts[:after_load].unshift(:array_uniq!) 
if opts[:uniq] opts[:cartesian_product_number] ||= 2 opts[:through] = opts[:through].map do |e| case e when Array raise(Error, "array elements of the through option/argument for many_through_many associations must have at least three elements") unless e.length == 3 {:table=>e[0], :left=>e[1], :right=>e[2]} when Hash raise(Error, "hash elements of the through option/argument for many_through_many associations must contain :table, :left, and :right keys") unless e[:table] && e[:left] && e[:right] e else raise(Error, "the through option/argument for many_through_many associations must be an enumerable of arrays or hashes") end end left_key = opts[:left_key] = opts[:through].first[:left] uses_lcks = opts[:uses_left_composite_keys] = left_key.is_a?(Array) left_keys = Array(left_key) left_pk = (opts[:left_primary_key] ||= self.primary_key) opts[:eager_loader_key] = left_pk unless opts.has_key?(:eager_loader_key) left_pks = opts[:left_primary_keys] = Array(left_pk) lpkc = opts[:left_primary_key_column] ||= left_pk opts[:left_primary_key_columns] ||= Array(lpkc) opts[:dataset] ||= lambda do ds = opts.associated_dataset opts.reverse_edges.each{|t| ds = ds.join(t[:table], Array(t[:left]).zip(Array(t[:right])), :table_alias=>t[:alias], :qualify=>:deep)} ft = opts.final_reverse_edge ds.join(ft[:table], Array(ft[:left]).zip(Array(ft[:right])) + opts.predicate_keys.zip(left_pks.map{|k| send(k)}), :table_alias=>ft[:alias], :qualify=>:deep) end slice_range = opts.slice_range left_key_alias = opts[:left_key_alias] ||= opts.default_associated_key_alias opts[:eager_loader] ||= lambda do |eo| h = eo[:id_map] rows = eo[:rows] rows.each{|object| object.associations[name] = []} ds = opts.associated_class opts.reverse_edges.each{|t| ds = ds.join(t[:table], Array(t[:left]).zip(Array(t[:right])), :table_alias=>t[:alias], :qualify=>:deep)} ft = opts.final_reverse_edge ds = ds.join(ft[:table], Array(ft[:left]).zip(Array(ft[:right])) + [[opts.predicate_key, h.keys]], :table_alias=>ft[:alias], :qualify=>:deep) ds = model.eager_loading_dataset(opts, ds, nil, eo[:associations], eo) if opts.eager_limit_strategy == :window_function delete_rn = true rn = ds.row_number_column ds = apply_window_function_eager_limit_strategy(ds, opts) end ds.all do |assoc_record| assoc_record.values.delete(rn) if delete_rn hash_key = if uses_lcks left_key_alias.map{|k| assoc_record.values.delete(k)} else assoc_record.values.delete(left_key_alias) end next unless objects = h[hash_key] objects.each{|object| object.associations[name].push(assoc_record)} end if opts.eager_limit_strategy == :ruby rows.each{|o| o.associations[name] = o.associations[name][slice_range] || []} end end join_type = opts[:graph_join_type] select = opts[:graph_select] graph_block = opts[:graph_block] only_conditions = opts[:graph_only_conditions] use_only_conditions = opts.include?(:graph_only_conditions) conditions = opts[:graph_conditions] opts[:eager_grapher] ||= proc do |eo| ds = eo[:self] iq = eo[:implicit_qualifier] opts.edges.each do |t| ds = ds.graph(t[:table], t.fetch(:only_conditions, (Array(t[:right]).zip(Array(t[:left])) + t[:conditions])), :select=>false, :table_alias=>ds.unused_table_alias(t[:table]), :join_type=>t[:join_type], :qualify=>:deep, :implicit_qualifier=>iq, &t[:block]) iq = nil end fe = opts.final_edge ds.graph(opts.associated_class, use_only_conditions ? 
                                                       only_conditions : (Array(opts.right_primary_key).zip(Array(fe[:left])) + conditions), :select=>select, :table_alias=>eo[:table_alias], :qualify=>:deep, :join_type=>join_type, &graph_block)
        end

        def_association_dataset_methods(opts)
      end
    end

      module DatasetMethods
        private

        # Use a subquery to filter rows to those related to the given associated object
        def many_through_many_association_filter_expression(op, ref, obj)
          lpks = ref[:left_primary_key_columns]
          lpks = lpks.first if lpks.length == 1
          lpks = ref.qualify(model.table_name, lpks)
          edges = ref.edges
          first, rest = edges.first, edges[1..-1]
          ds = model.db[first[:table]].select(*Array(ref.qualify(first[:table], first[:right])))
          rest.each{|e| ds = ds.join(e[:table], e.fetch(:only_conditions, (Array(e[:right]).zip(Array(e[:left])) + e[:conditions])), :table_alias=>ds.unused_table_alias(e[:table]), :qualify=>:deep, &e[:block])}

          last_alias = if rest.empty?
            first[:table]
          else
            last_join = ds.opts[:join].last
            last_join.table_alias || last_join.table
          end

          meths = if obj.is_a?(Sequel::Dataset)
            ref.qualify(obj.model.table_name, ref.right_primary_keys)
          else
            ref.right_primary_key_methods
          end

          exp = association_filter_key_expression(ref.qualify(last_alias, Array(ref.final_edge[:left])), meths, obj)
          if exp == SQL::Constants::FALSE
            association_filter_handle_inversion(op, exp, Array(lpks))
          else
            ds = ds.where(exp).exclude(SQL::BooleanExpression.from_value_pairs(ds.opts[:select].zip([]), :OR))
            association_filter_handle_inversion(op, SQL::BooleanExpression.from_value_pairs(lpks=>ds), Array(lpks))
          end
        end
      end
    end
  end
end

ruby-sequel-4.1.1/lib/sequel/plugins/many_to_one_pk_lookup.rb

module Sequel
  module Plugins
    # Empty plugin module for backwards compatibility
    module ManyToOnePkLookup
    end
  end
end

ruby-sequel-4.1.1/lib/sequel/plugins/nested_attributes.rb

module Sequel
  module Plugins
    # The nested_attributes plugin allows you to create, update, and delete
    # associated objects directly by calling a method on the current object.
    # Nested attributes are defined using the nested_attributes class method:
    #
    #   Artist.one_to_many :albums
    #   Artist.plugin :nested_attributes
    #   Artist.nested_attributes :albums
    #
    # The nested_attributes call defines a single method, <tt><i>association</i>_attributes=</tt>
    # (e.g. <tt>albums_attributes=</tt>). So if you have an Artist instance:
    #
    #   a = Artist.new(:name=>'YJM')
    #
    # You can create new album instances related to this artist:
    #
    #   a.albums_attributes = [{:name=>'RF'}, {:name=>'MO'}]
    #
    # Note that this doesn't send any queries to the database yet. That doesn't happen till
    # you save the object:
    #
    #   a.save
    #
    # That will save the artist first, and then save both albums. If either the artist
    # is invalid or one of the albums is invalid, none of the objects will be saved to the
    # database, and all related validation errors will be available in the artist's validation
    # errors.
    #
    # In addition to creating new associated objects, you can also update existing associated
    # objects. You just need to make sure that the primary key field is filled in for the
    # associated object:
    #
    #   a.update(:albums_attributes => [{:id=>1, :name=>'T'}])
    #
    # Since the primary key field is filled in, the plugin will update the album with id 1 instead
    # of creating a new album.
    #
    # If you would like to delete the associated object instead of updating it, you add a _delete
    # entry to the hash:
    #
    #   a.update(:albums_attributes => [{:id=>1, :_delete=>true}])
    #
    # This will delete the related associated object from the database. If you want to leave the
    # associated object in the database, but just remove it from the association, add a _remove
    # entry in the hash:
    #
    #   a.update(:albums_attributes => [{:id=>1, :_remove=>true}])
    #
    # The above example was for a one_to_many association, but the plugin also works similarly
    # for other association types. For one_to_one and many_to_one associations, you need to
    # pass a single hash instead of an array of hashes.
    #
    # This plugin is mainly designed to make it easy to use on html forms, where a single form
    # submission can contain nested attributes (and even nested attributes of those attributes).
    # You just need to name your form inputs correctly:
    #
    #   artist[name]
    #   artist[albums_attributes][0][:name]
    #   artist[albums_attributes][1][:id]
    #   artist[albums_attributes][1][:name]
    #
    # Your web stack will probably parse that into a nested hash similar to:
    #
    #   {:artist=>{:name=>?, :albums_attributes=>{0=>{:name=>?}, 1=>{:id=>?, :name=>?}}}}
    #
    # Then you can do:
    #
    #   artist.update(params[:artist])
    #
    # To save changes to the artist, create the first album and associate it to the artist,
    # and update the other existing associated album.
    module NestedAttributes
      # Depend on the instance_hooks plugin.
      def self.apply(model)
        model.plugin(:instance_hooks)
      end

      module ClassMethods
        # Module to store the nested_attributes setter methods, so they can
        # be overridden and call super to get the default behavior
        attr_accessor :nested_attributes_module

        # Allow nested attributes to be set for the given associations. Options:
        # * :destroy - Allow destruction of nested records.
        # * :fields - If provided, should be an Array or proc. If it is an array,
        #   restricts the fields allowed to be modified through the
        #   association_attributes= method to the specific fields given. If it is
        #   a proc, it will be called with the associated object and should return an
        #   array of the allowable fields.
# * :limit - For *_to_many associations, a limit on the number of records # that will be processed, to prevent denial of service attacks. # * :reject_if - A proc that is given each attribute hash before it is # passed to its associated object. If the proc returns a truthy # value, the attribute hash is ignored. # * :remove - Allow disassociation of nested records (can remove the associated # object from the parent object, but not destroy the associated object). # * :strict - Kept for backward compatibility. Setting it to false is # equivalent to setting :unmatched_pk to :ignore. # * :transform - A proc to transform attribute hashes before they are # passed to associated object. Takes two arguments, the parent object and # the attribute hash. Uses the return value as the new attribute hash. # * :unmatched_pk - Specify the action to be taken if a primary key is # provided in a record, but it doesn't match an existing associated # object. Set to :create to create a new object with that primary # key, :ignore to ignore the record, or :raise to raise an error. # The default is :raise. # # If a block is provided, it is used to set the :reject_if option. def nested_attributes(*associations, &block) include(self.nested_attributes_module ||= Module.new) unless nested_attributes_module opts = associations.last.is_a?(Hash) ? associations.pop : {} reflections = associations.map{|a| association_reflection(a) || raise(Error, "no association named #{a} for #{self}")} reflections.each do |r| r[:nested_attributes] = opts r[:nested_attributes][:unmatched_pk] ||= opts.delete(:strict) == false ? :ignore : :raise r[:nested_attributes][:reject_if] ||= block def_nested_attribute_method(r) end end private # Add a nested attribute setter method to a module included in the # class. def def_nested_attribute_method(reflection) nested_attributes_module.class_eval do if reflection.returns_array? define_method("#{reflection[:name]}_attributes=") do |array| nested_attributes_list_setter(reflection, array) end else define_method("#{reflection[:name]}_attributes=") do |h| nested_attributes_setter(reflection, h) end end end end end module InstanceMethods private # Check that the keys related to the association are not modified inside the block. Does # not use an ensure block, so callers should be careful. def nested_attributes_check_key_modifications(reflection, obj) keys = reflection.associated_object_keys.map{|x| obj.send(x)} yield unless keys == reflection.associated_object_keys.map{|x| obj.send(x)} raise(Error, "Modifying association dependent key(s) when updating associated objects is not allowed") end end # Create a new associated object with the given attributes, validate # it when the parent is validated, and save it when the object is saved. # Returns the object created. def nested_attributes_create(reflection, attributes) obj = reflection.associated_class.new nested_attributes_set_attributes(reflection, obj, attributes) after_validation_hook{validate_associated_object(reflection, obj)} if reflection.returns_array? send(reflection[:name]) << obj after_save_hook{send(reflection.add_method, obj)} else associations[reflection[:name]] = obj # Because we are modifying the associations cache manually before the # setter is called, we still want to run the setter code even though # the cached value will be the same as the given value. @set_associated_object_if_same = true # Don't need to validate the object twice if :validate association option is not false # and don't want to validate it at all if it is false. 
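          # For a many_to_one association, the associated object has to be
          # saved before the current object, so that the current object can
          # set its foreign key to the associated object's primary key.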
            if reflection[:type] == :many_to_one
              before_save_hook{send(reflection.setter_method, obj.save(:validate=>false))}
            else
              after_save_hook{send(reflection.setter_method, obj)}
            end
          end
          obj
        end

        # Take an array or hash of attribute hashes and set each one individually.
        # If a hash is provided, sort it by key and then use the values.
        # If there is a limit on the nested attributes for this association,
        # make sure the length of the attributes_list is not greater than the limit.
        def nested_attributes_list_setter(reflection, attributes_list)
          attributes_list = attributes_list.sort_by{|x| x.to_s}.map{|k,v| v} if attributes_list.is_a?(Hash)
          if (limit = reflection[:nested_attributes][:limit]) && attributes_list.length > limit
            raise(Error, "number of nested attributes (#{attributes_list.length}) exceeds the limit (#{limit})")
          end
          attributes_list.each{|a| nested_attributes_setter(reflection, a)}
        end

        # Remove the given associated object from the current object. If the
        # :destroy option is given, destroy the object after disassociating it
        # (unless destroying the object would automatically disassociate it).
        # Returns the object removed.
        def nested_attributes_remove(reflection, obj, opts=OPTS)
          if !opts[:destroy] || reflection.remove_before_destroy?
            before_save_hook do
              if reflection.returns_array?
                send(reflection.remove_method, obj)
              else
                send(reflection.setter_method, nil)
              end
            end
          end
          after_save_hook{obj.destroy} if opts[:destroy]
          obj
        end

        # Set the fields in the obj based on the association, only allowing
        # specific :fields if configured.
        def nested_attributes_set_attributes(reflection, obj, attributes)
          if fields = reflection[:nested_attributes][:fields]
            fields = fields.call(obj) if fields.respond_to?(:call)
            obj.set_only(attributes, fields)
          else
            obj.set(attributes)
          end
        end

        # Modify the associated object based on the contents of the attributes hash:
        # * If a :transform block was given to nested_attributes, use it to modify the attribute hash.
        # * If a block was given to nested_attributes, call it with the attributes and return immediately if the block returns true.
        # * If a primary key exists in the attributes hash and it matches an associated object:
        # ** If _delete is a key in the hash and the :destroy option is used, destroy the matching associated object.
        # ** If _remove is a key in the hash and the :remove option is used, disassociate the matching associated object.
        # ** Otherwise, update the matching associated object with the contents of the hash.
        # * If a primary key exists in the attributes hash but it does not match an associated object,
        #   either raise an error, create a new object or ignore the hash, depending on the :unmatched_pk option.
        # * If no primary key exists in the attributes hash, create a new object.
        def nested_attributes_setter(reflection, attributes)
          if a = reflection[:nested_attributes][:transform]
            attributes = a.call(self, attributes)
          end
          return if (b = reflection[:nested_attributes][:reject_if]) && b.call(attributes)
          modified!
          klass = reflection.associated_class
          sym_keys = Array(klass.primary_key)
          str_keys = sym_keys.map{|k| k.to_s}
          if (pk = attributes.values_at(*sym_keys)).all? || (pk = attributes.values_at(*str_keys)).all?
            pk = pk.map{|k| k.to_s}
            obj = Array(send(reflection[:name])).find{|x| Array(x.pk).map{|k| k.to_s} == pk}
          end
          if obj
            attributes = attributes.dup.delete_if{|k,v| str_keys.include? k.to_s}
            if reflection[:nested_attributes][:destroy] && klass.db.send(:typecast_value_boolean, attributes.delete(:_delete) || attributes.delete('_delete'))
              nested_attributes_remove(reflection, obj, :destroy=>true)
            elsif reflection[:nested_attributes][:remove] && klass.db.send(:typecast_value_boolean, attributes.delete(:_remove) || attributes.delete('_remove'))
              nested_attributes_remove(reflection, obj)
            else
              nested_attributes_update(reflection, obj, attributes)
            end
          elsif pk.all? && reflection[:nested_attributes][:unmatched_pk] != :create
            if reflection[:nested_attributes][:unmatched_pk] == :raise
              raise(Error, "no matching associated object with given primary key (association: #{reflection[:name]}, pk: #{pk})")
            end
          else
            nested_attributes_create(reflection, attributes)
          end
        end

        # Update the given object with the attributes, validating it when the
        # parent object is validated and saving it when the parent is saved.
        # Returns the object updated.
        def nested_attributes_update(reflection, obj, attributes)
          nested_attributes_update_attributes(reflection, obj, attributes)
          after_validation_hook{validate_associated_object(reflection, obj)}
          # Don't need to validate the object twice if :validate association option is not false
          # and don't want to validate it at all if it is false.
          after_save_hook{obj.save_changes(:validate=>false)}
          obj
        end

        # Update the attributes for the given object related to the current object through the association.
        def nested_attributes_update_attributes(reflection, obj, attributes)
          nested_attributes_check_key_modifications(reflection, obj) do
            nested_attributes_set_attributes(reflection, obj, attributes)
          end
        end

        # Validate the given associated object, adding any validation error messages from the
        # given object to the parent object.
        def validate_associated_object(reflection, obj)
          return if reflection[:validate] == false
          association = reflection[:name]
          obj.errors.full_messages.each{|m| errors.add(association, m)} unless obj.valid?
        end
      end
    end
  end
end

ruby-sequel-4.1.1/lib/sequel/plugins/optimistic_locking.rb

module Sequel
  module Plugins
    # This plugin implements a simple database-independent locking mechanism
    # to ensure that concurrent updates do not override changes. This is
    # best illustrated with a code example:
    #
    #   class Person < Sequel::Model
    #     plugin :optimistic_locking
    #   end
    #   p1 = Person[1]
    #   p2 = Person[1]
    #   p1.update(:name=>'Jim') # works
    #   p2.update(:name=>'Bob') # raises Sequel::Plugins::OptimisticLocking::Error
    #
    # In order for this plugin to work, you need to make sure that the database
    # table has a lock_version column (or other column you name via the lock_column
    # class level accessor) that defaults to 0.
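    # A minimal sketch of such a table (hypothetical people table; the plugin
    # itself never creates or alters columns):
    #
    #   DB.create_table(:people) do
    #     primary_key :id
    #     String :name
    #     Integer :lock_version, :default=>0, :null=>false
    #   end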
# # This plugin relies on the instance_filters plugin. module OptimisticLocking # Exception class raised when trying to update or destroy a stale object. Error = Sequel::NoExistingObject # Load the instance_filters plugin into the model. def self.apply(model, opts=OPTS) model.plugin :instance_filters end # Set the lock_column to the :lock_column option, or :lock_version if # that option is not given. def self.configure(model, opts=OPTS) model.lock_column = opts[:lock_column] || :lock_version end module ClassMethods # The column holding the version of the lock attr_accessor :lock_column Plugins.inherited_instance_variables(self, :@lock_column=>nil) end module InstanceMethods # Add the lock column instance filter to the object before destroying it. def before_destroy lock_column_instance_filter super end # Add the lock column instance filter to the object before updating it. def before_update lock_column_instance_filter super end private # Add the lock column instance filter to the object. def lock_column_instance_filter lc = model.lock_column instance_filter(lc=>send(lc)) end # Clear the instance filters when refreshing, so that attempting to # refresh after a failed save removes the previous lock column filter # (the new one will be added before updating). def _refresh(ds) clear_instance_filters super end # Only update the row if it has the same lock version, and increment the # lock version. def _update_columns(columns) lc = model.lock_column lcv = send(lc) columns[lc] = lcv + 1 super send("#{lc}=", lcv + 1) end end end end end ������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/lib/sequel/plugins/pg_array_associations.rb���������������������������������������0000664�0000000�0000000�00000043310�12201565355�0024434�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������module Sequel extension :pg_array, :pg_array_ops module Plugins # This plugin allows you to create associations where the foreign keys # are stored in a PostgreSQL array column in one of the tables. The # model with the table containing the array column has a # pg_array_to_many association to the associated model, and the # model with the table containing the primary key referenced by # elements in the array column has a many_to_pg_array association # to the associated model. # # # Database schema: # # tags albums # # :id (int4) <--\ :id # # :name \-- :tag_ids (int4[]) # # :name # # class Album # plugin :pg_array_associations # pg_array_to_many :tags # end # class Tag # plugin :pg_array_associations # many_to_pg_array :albums # end # # These association types work similarly to Sequel's other association # types, so you can use them as you would any other association. Unlike # other associations, they do not support composite keys. # # One thing that is different is that the modification methods for # pg_array_to_many associations do not affect the database, since they # operate purely on the receiver. 
    # For example:
    #
    #   album = Album[1]
    #   album.add_tag(Tag[2])
    #
    # does not save the album. This allows you to call add_tag repeatedly
    # and then save afterwards to combine all changes into a single query. Note
    # that the many_to_pg_array association modification methods do save, so:
    #
    #   tag = Tag[2]
    #   tag.add_album(Album[1])
    #
    # will save the changes to the album.
    #
    # They support some additional options specific to this plugin:
    #
    # :array_type :: This allows you to specify the type of the array. This
    #                is only necessary to set in very narrow circumstances,
    #                such as when this plugin needs to create an array type,
    #                and typecasting is turned off or not setup correctly
    #                for the model object.
    # :save_after_modify :: For pg_array_to_many associations, this makes the
    #                       modification methods save the current object,
    #                       so they operate more similarly to the one_to_many
    #                       and many_to_many association modification methods.
    # :uniq :: Similar to many_to_many associations, this can be used to
    #          make sure the returned associated object array has unique values.
    #
    # Note that until PostgreSQL gains the ability to enforce foreign key
    # constraints in array columns, this plugin is not recommended for
    # production use unless you plan on emulating referential integrity
    # constraints via triggers.
    #
    # This plugin should work on all supported PostgreSQL versions, except
    # the remove_all modification method for many_to_pg_array associations, which
    # requires the array_remove method added in PostgreSQL 9.3.
    module PgArrayAssociations
      # The AssociationReflection subclass for many_to_pg_array associations.
      class ManyToPgArrayAssociationReflection < Sequel::Model::Associations::AssociationReflection
        Sequel::Model::Associations::ASSOCIATION_TYPES[:many_to_pg_array] = self

        # The array column in the associated model containing foreign keys to
        # the current model.
        def associated_object_keys
          [self[:key]]
        end

        # many_to_pg_array associations can have associated objects as long as they have
        # a primary key.
        def can_have_associated_objects?(obj)
          obj.send(self[:primary_key])
        end

        # Assume that the key in the associated table uses a version of the current
        # model's name suffixed with _ids.
        def default_key
          :"#{underscore(demodulize(self[:model].name))}_ids"
        end

        # The hash key to use for the eager loading predicate (left side of IN (1, 2, 3))
        def predicate_key
          cached_fetch(:predicate_key){qualify_assoc(self[:key_column])}
        end

        # The column in the current table that the keys in the array column in the
        # associated table reference.
        def primary_key
          self[:primary_key]
        end

        # Destroying the associated object automatically removes the association,
        # since the association is stored in the associated object.
        def remove_before_destroy?
          false
        end

        private

        # Only consider an association as a reciprocal if it has matching keys
        # and primary keys.
        def reciprocal_association?(assoc_reflect)
          super && self[:key] == assoc_reflect[:key] && primary_key == assoc_reflect.primary_key
        end

        def reciprocal_type
          :pg_array_to_many
        end
      end

      # The AssociationReflection subclass for pg_array_to_many associations.
      class PgArrayToManyAssociationReflection < Sequel::Model::Associations::AssociationReflection
        Sequel::Model::Associations::ASSOCIATION_TYPES[:pg_array_to_many] = self

        # An array containing the primary key for the associated model.
        def associated_object_keys
          Array(primary_key)
        end

        # pg_array_to_many associations can only have associated objects if
        # the array field is not nil or empty.
        def can_have_associated_objects?(obj)
          v = obj.send(self[:key])
          v && !v.empty?
end # pg_array_to_many associations do not need a primary key. def dataset_need_primary_key? false end # Use a default key name of *_ids, for similarity to other association types # that use *_id for single keys. def default_key :"#{singularize(self[:name])}_ids" end # A qualified version of the associated primary key. def predicate_key cached_fetch(:predicate_key){qualify_assoc(primary_key)} end # The primary key of the associated model. def primary_key cached_fetch(:primary_key){associated_class.primary_key} end # The method to call to get value of the primary key of the associated model. def primary_key_method cached_fetch(:primary_key_method){primary_key} end private # Only consider an association as a reciprocal if it has matching keys # and primary keys. def reciprocal_association?(assoc_reflect) super && self[:key] == assoc_reflect[:key] && primary_key == assoc_reflect.primary_key end def reciprocal_type :many_to_pg_array end end module ClassMethods # Create a many_to_pg_array association, for the case where the associated # table contains the array with foreign keys pointing to the current table. # See associate for options. def many_to_pg_array(name, opts=OPTS, &block) associate(:many_to_pg_array, name, opts, &block) end # Create a pg_array_to_many association, for the case where the current # table contains the array with foreign keys pointing to the associated table. # See associate for options. def pg_array_to_many(name, opts=OPTS, &block) associate(:pg_array_to_many, name, opts, &block) end private # Setup the many_to_pg_array-specific datasets, eager loaders, and modification methods. def def_many_to_pg_array(opts) name = opts[:name] model = self pk = opts[:eager_loader_key] = opts[:primary_key] ||= model.primary_key opts[:key] = opts.default_key unless opts.has_key?(:key) key = opts[:key] key_column = opts[:key_column] ||= opts[:key] opts[:after_load].unshift(:array_uniq!) if opts[:uniq] slice_range = opts.slice_range opts[:dataset] ||= lambda do opts.associated_dataset.where(Sequel.pg_array_op(opts.predicate_key).contains([send(pk)])) end opts[:eager_loader] ||= proc do |eo| id_map = eo[:id_map] rows = eo[:rows] rows.each do |object| object.associations[name] = [] end klass = opts.associated_class ds = model.eager_loading_dataset(opts, klass.where(Sequel.pg_array_op(opts.predicate_key).overlaps(id_map.keys)), nil, eo[:associations], eo) ds.all do |assoc_record| if pks ||= assoc_record.send(key) pks.each do |pkv| next unless objects = id_map[pkv] objects.each do |object| object.associations[name].push(assoc_record) end end end end if slice_range rows.each{|o| o.associations[name] = o.associations[name][slice_range] || []} end end join_type = opts[:graph_join_type] select = opts[:graph_select] opts[:cartesian_product_number] ||= 1 if opts.include?(:graph_only_conditions) conditions = opts[:graph_only_conditions] graph_block = opts[:graph_block] else conditions = opts[:graph_conditions] conditions = nil if conditions.empty? 
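            # The generated join condition checks that the array column in the
            # joined (associated) table contains the current table's primary key,
            # yielding SQL roughly like (for the Tag/Album schema shown above):
            #
            #   albums.tag_ids @> ARRAY[tags.id]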
graph_block = proc do |j, lj, js| Sequel.pg_array_op(Sequel.deep_qualify(j, key_column)).contains([Sequel.deep_qualify(lj, opts.primary_key)]) end if orig_graph_block = opts[:graph_block] pg_array_graph_block = graph_block graph_block = proc do |j, lj, js| Sequel.&(orig_graph_block.call(j,lj,js), pg_array_graph_block.call(j, lj, js)) end end end opts[:eager_grapher] ||= proc do |eo| ds = eo[:self] ds = ds.graph(eager_graph_dataset(opts, eo), conditions, eo.merge(:select=>select, :join_type=>join_type, :qualify=>:deep, :from_self_alias=>ds.opts[:eager_graph][:master]), &graph_block) ds end def_association_dataset_methods(opts) unless opts[:read_only] validate = opts[:validate] array_type = opts[:array_type] ||= :integer adder = opts[:adder] || proc do |o| if array = o.send(key) array << send(pk) else o.send("#{key}=", Sequel.pg_array([send(pk)], array_type)) end o.save(:validate=>validate) || raise(Sequel::Error, "invalid associated object, cannot save") end association_module_private_def(opts._add_method, opts, &adder) remover = opts[:remover] || proc do |o| if (array = o.send(key)) && !array.empty? array.delete(send(pk)) o.save(:validate=>validate) || raise(Sequel::Error, "invalid associated object, cannot save") end end association_module_private_def(opts._remove_method, opts, &remover) clearer = opts[:clearer] || proc do opts.associated_dataset.where(Sequel.pg_array_op(key).contains([send(pk)])).update(key=>Sequel.function(:array_remove, key, send(pk))) end association_module_private_def(opts._remove_all_method, opts, &clearer) def_add_method(opts) def_remove_methods(opts) end end # Setup the pg_array_to_many-specific datasets, eager loaders, and modification methods. def def_pg_array_to_many(opts) name = opts[:name] model = self opts[:key] = opts.default_key unless opts.has_key?(:key) key = opts[:key] key_column = opts[:key_column] ||= key opts[:eager_loader_key] = nil opts[:after_load].unshift(:array_uniq!) if opts[:uniq] slice_range = opts.slice_range opts[:dataset] ||= lambda do opts.associated_dataset.where(opts.predicate_key=>send(key).to_a) end opts[:eager_loader] ||= proc do |eo| rows = eo[:rows] id_map = {} pkm = opts.primary_key_method rows.each do |object| object.associations[name] = [] if associated_pks = object.send(key) associated_pks.each do |apk| (id_map[apk] ||= []) << object end end end klass = opts.associated_class ds = model.eager_loading_dataset(opts, klass.where(opts.predicate_key=>id_map.keys), nil, eo[:associations], eo) ds.all do |assoc_record| if objects = id_map[assoc_record.send(pkm)] objects.each do |object| object.associations[name].push(assoc_record) end end end if slice_range rows.each{|o| o.associations[name] = o.associations[name][slice_range] || []} end end join_type = opts[:graph_join_type] select = opts[:graph_select] opts[:cartesian_product_number] ||= 1 if opts.include?(:graph_only_conditions) conditions = opts[:graph_only_conditions] graph_block = opts[:graph_block] else conditions = opts[:graph_conditions] conditions = nil if conditions.empty? 
graph_block = proc do |j, lj, js| Sequel.pg_array_op(Sequel.deep_qualify(lj, key_column)).contains([Sequel.deep_qualify(j, opts.primary_key)]) end if orig_graph_block = opts[:graph_block] pg_array_graph_block = graph_block graph_block = proc do |j, lj, js| Sequel.&(orig_graph_block.call(j,lj,js), pg_array_graph_block.call(j, lj, js)) end end end opts[:eager_grapher] ||= proc do |eo| ds = eo[:self] ds = ds.graph(eager_graph_dataset(opts, eo), conditions, eo.merge(:select=>select, :join_type=>join_type, :qualify=>:deep, :from_self_alias=>ds.opts[:eager_graph][:master]), &graph_block) ds end def_association_dataset_methods(opts) unless opts[:read_only] validate = opts[:validate] array_type = opts[:array_type] ||= :integer if opts[:save_after_modify] save_after_modify = proc do |obj| obj.save(:validate=>validate) || raise(Sequel::Error, "invalid associated object, cannot save") end end adder = opts[:adder] || proc do |o| opk = o.send(opts.primary_key) if array = send(key) modified!(key) array << opk else send("#{key}=", Sequel.pg_array([opk], array_type)) end save_after_modify.call(self) if save_after_modify end association_module_private_def(opts._add_method, opts, &adder) remover = opts[:remover] || proc do |o| if (array = send(key)) && !array.empty? modified!(key) array.delete(o.send(opts.primary_key)) save_after_modify.call(self) if save_after_modify end end association_module_private_def(opts._remove_method, opts, &remover) clearer = opts[:clearer] || proc do if (array = send(key)) && !array.empty? modified!(key) array.clear save_after_modify.call(self) if save_after_modify end end association_module_private_def(opts._remove_all_method, opts, &clearer) def_add_method(opts) def_remove_methods(opts) end end end module DatasetMethods private # Support filtering by many_to_pg_array associations using a subquery. def many_to_pg_array_association_filter_expression(op, ref, obj) pk = ref.qualify(model.table_name, ref.primary_key) key = ref[:key] expr = case obj when Sequel::Model if (assoc_pks = obj.send(key)) && !assoc_pks.empty? Sequel.expr(pk=>assoc_pks.to_a) end when Array if (assoc_pks = obj.map{|o| o.send(key)}.flatten.compact.uniq) && !assoc_pks.empty? Sequel.expr(pk=>assoc_pks) end when Sequel::Dataset Sequel.expr(pk=>obj.select{Sequel.pg_array_op(ref.qualify(obj.model.table_name, ref[:key_column])).unnest}) end expr = Sequel::SQL::Constants::FALSE unless expr association_filter_handle_inversion(op, expr, [pk]) end # Support filtering by pg_array_to_many associations using a subquery. def pg_array_to_many_association_filter_expression(op, ref, obj) key = ref.qualify(model.table_name, ref[:key_column]) expr = case obj when Sequel::Model if pkv = obj.send(ref.primary_key_method) Sequel.pg_array_op(key).contains([pkv]) end when Array if (pkvs = obj.map{|o| o.send(ref.primary_key_method)}.compact) && !pkvs.empty? 
              Sequel.pg_array_op(key).overlaps(pkvs)
            end
          when Sequel::Dataset
            Sequel.function(:coalesce, Sequel.pg_array_op(key).overlaps(obj.select{array_agg(ref.qualify(obj.model.table_name, ref.primary_key))}), Sequel::SQL::Constants::FALSE)
          end
          expr = Sequel::SQL::Constants::FALSE unless expr
          association_filter_handle_inversion(op, expr, [key])
        end
      end
    end
  end
end

ruby-sequel-4.1.1/lib/sequel/plugins/pg_row.rb

module Sequel
  module Plugins
    # The pg_row plugin allows you to use Sequel::Model classes as composite type
    # classes, via the pg_row extension. So if you have an address table:
    #
    #   DB.create_table(:address) do
    #     String :street
    #     String :city
    #     String :zip
    #   end
    #
    # and a company table with an address:
    #
    #   DB.create_table(:company) do
    #     String :name
    #     address :address
    #   end
    #
    # You can create a Sequel::Model for the address table, and load the plugin,
    # which registers the row type:
    #
    #   class Address < Sequel::Model(:address)
    #     plugin :pg_row
    #   end
    #
    # Then when you select from the company table (even using a plain dataset),
    # it will return address values as instances of Address:
    #
    #   DB[:company].first
    #   # => {:name=>'MS', :address=>
    #   #      Address.load(:street=>'123 Foo St', :city=>'Bar Town', :zip=>'12345')}
    #
    # If you want a lot of your models to be used as row types, you can load the
    # plugin into Sequel::Model itself:
    #
    #   Sequel::Model.plugin :pg_row
    #
    # And then call register_row_type in the class
    #
    #   Address.register_row_type
    #
    # Note that automatic conversion only works with the native postgres adapter.
    # For other adapters that connect to PostgreSQL, you need to call the conversion
    # proc manually.
    #
    # In addition to returning row-valued/composite types as instances of Sequel::Model,
    # this also lets you use model instances in datasets when inserting, updating, and
    # filtering:
    #
    #   DB[:company].insert(:name=>'MS', :address=>
    #     Address.load(:street=>'123 Foo St', :city=>'Bar Town', :zip=>'12345'))
    module PgRow
      # When loading the extension, make sure the database has the pg_row extension
      # loaded, load the custom database extensions, and automatically register the
      # row type if the model has a dataset.
      def self.configure(model)
        model.db.extension(:pg_row)
        model.db.extend(DatabaseMethods)
        model.register_row_type if model.instance_variable_get(:@dataset)
      end

      module DatabaseMethods
        ESCAPE_RE = /("|\\)/.freeze
        ESCAPE_REPLACEMENT = '\\\\\1'.freeze
        COMMA = ','

        # Handle Sequel::Model instances in bound variables.
        def bound_variable_arg(arg, conn)
          case arg
          when Sequel::Model
            "(#{arg.values.values_at(*arg.columns).map{|v| bound_variable_array(v)}.join(COMMA)})"
          else
            super
          end
        end

        # If a Sequel::Model instance is given, return it as-is
        # instead of attempting to convert it.
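        # (row_type is the hook called by the pg_row extension when typecasting
        # a value to a registered row type, so returning the instance unchanged
        # avoids a pointless conversion round trip.)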
def row_type(db_type, v) if v.is_a?(Sequel::Model) v else super end end private # Handle Sequel::Model instances in bound variable arrays. def bound_variable_array(arg) case arg when Sequel::Model "\"(#{arg.values.values_at(*arg.columns).map{|v| bound_variable_array(v)}.join(COMMA).gsub(ESCAPE_RE, ESCAPE_REPLACEMENT)})\"" else super end end end module ClassMethods # Register the model's row type with the database. def register_row_type table = dataset.first_source_table db.register_row_type(table, :converter=>self, :typecaster=>method(:new)) db.instance_variable_get(:@schema_type_classes)[:"pg_row_#{table}"] = self end end module InstanceMethods ROW = 'ROW'.freeze CAST = '::'.freeze # Literalize the model instance and append it to the sql. def sql_literal_append(ds, sql) sql << ROW ds.literal_append(sql, values.values_at(*columns)) sql << CAST ds.quote_schema_table_append(sql, model.dataset.first_source_table) end end end end end �������������������������������������������������������ruby-sequel-4.1.1/lib/sequel/plugins/pg_typecast_on_load.rb�����������������������������������������0000664�0000000�0000000�00000005606�12201565355�0024074�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������module Sequel module Plugins # The PgTypecastOnLoad plugin exists because when you connect to PostgreSQL # using the do, swift, or jdbc adapter, Sequel doesn't have complete # control over typecasting, and may return columns as strings instead of how # the native postgres adapter would typecast them. This is mostly needed for # the additional support that the pg_* extensions add for advanced PostgreSQL # types such as arrays. # # This plugin makes model loading to do the same conversion that the # native postgres adapter would do for all columns given. You can either # specify the columns to typecast on load in the plugin call itself, or # afterwards using add_pg_typecast_on_load_columns: # # # aliases => text[] column # # config => hstore column # # Album.plugin :pg_typecast_on_load, :aliases, :config # # or: # Album.plugin :pg_typecast_on_load # Album.add_pg_typecast_on_load_columns :aliases, :config # # This plugin only handles values that the adapter returns as strings. If # the adapter returns a value other than a string, this plugin will have no # effect. You may be able to use the regular typecast_on_load plugin to # handle those cases. module PgTypecastOnLoad # Call add_pg_typecast_on_load_columns on the passed column arguments. def self.configure(model, *columns) model.instance_eval do @pg_typecast_on_load_columns ||= [] add_pg_typecast_on_load_columns(*columns) end end module ClassMethods # The columns to typecast on load for this model. attr_reader :pg_typecast_on_load_columns # Add additional columns to typecast on load for this model. def add_pg_typecast_on_load_columns(*columns) @pg_typecast_on_load_columns.concat(columns) end def call(values) super(load_typecast_pg(values)) end # Lookup the conversion proc for the column's oid in the Database # object, and use it to convert the value. 
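      # A hypothetical example, assuming :aliases is a text[] column and the
      # pg_array extension is loaded:
      #
      #   Album.load_typecast_pg(:aliases=>'{"a","b"}')
      #   # => {:aliases=>["a", "b"]} (as a Sequel::Postgres::PGArray)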
      def load_typecast_pg(values)
        pg_typecast_on_load_columns.each do |c|
          if (v = values[c]).is_a?(String) && (oid = db_schema[c][:oid]) && (pr = db.conversion_procs[oid])
            values[c] = pr.call(v)
          end
        end
        values
      end

      Plugins.inherited_instance_variables(self, :@pg_typecast_on_load_columns=>:dup)
    end

    module InstanceMethods
      private

      # Typecast specific columns using the conversion procs when manually refreshing.
      def _refresh_set_values(values)
        super(model.load_typecast_pg(values))
      end

      # Typecast specific columns using the conversion procs when refreshing after save.
      def _save_set_values(values)
        super(model.load_typecast_pg(values))
      end
    end
  end
end
end

ruby-sequel-4.1.1/lib/sequel/plugins/prepared_statements.rb

module Sequel
  class Model
    module InstanceMethods
      # Whether prepared statements should be used for the given type of query
      # (:insert, :insert_select, :refresh, :update, or :delete). True by default,
      # can be overridden in other plugins to disallow prepared statements for
      # specific types of queries.
      def use_prepared_statements_for?(type)
        true
      end
    end
  end

  module Plugins
    # The prepared_statements plugin modifies the model to use prepared statements for
    # instance level deletes and saves, as well as class level lookups by
    # primary key.
    #
    # Note that this plugin is unsafe in some circumstances, as it can allow up to
    # 2^N prepared statements to be created for each type of insert and update query, where
    # N is the number of columns in the table (so a table with 10 columns can produce
    # up to 1024 distinct statements per query type). It is recommended that you use the
    # +prepared_statements_safe+ plugin in addition to this plugin to reduce the number
    # of prepared statements that can be created, unless you tightly control how your
    # model instances are saved.
    #
    # Usage:
    #
    #   # Make all model subclasses use prepared statements (called before loading subclasses)
    #   Sequel::Model.plugin :prepared_statements
    #
    #   # Make the Album class use prepared statements
    #   Album.plugin :prepared_statements
    module PreparedStatements
      # Synchronize access to the integer sequence so that no two calls get the same integer.
      MUTEX = Mutex.new

      i = 0
      # This plugin names prepared statements uniquely using an integer sequence, this
      # lambda returns the next integer to use.
      NEXT = lambda{MUTEX.synchronize{i += 1}}

      # Set up the data structure used to hold the prepared statements in the model.
      def self.apply(model)
        model.instance_variable_set(:@prepared_statements, :insert=>{}, :insert_select=>{}, :update=>{}, :lookup_sql=>{}, :fixed=>{})
      end

      module ClassMethods
        Plugins.inherited_instance_variables(self, :@prepared_statements=>lambda{|v| {:insert=>{}, :insert_select=>{}, :update=>{}, :lookup_sql=>{}, :fixed=>{}}})

        private

        # Create a prepared statement based on the given dataset with a unique name for the given
        # type of query and values.
        def prepare_statement(ds, type, vals=OPTS)
          ps = ds.prepare(type, :"smpsp_#{NEXT.call}", vals)
          ps.log_sql = true
          ps
        end

        # Return a sorted array of columns for use as a hash key.
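        # Sorting makes the cache key insensitive to the order in which column
        # values were assigned, so, for example, instances saved with columns
        # [:name, :id] and [:id, :name] share a single cached prepared statement.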
def prepared_columns(cols) RUBY_VERSION >= '1.9' ? cols.sort : cols.sort_by{|c| c.to_s} end # Return a prepared statement that can be used to delete a row from this model's dataset. def prepared_delete cached_prepared_statement(:fixed, :delete){prepare_statement(filter(prepared_statement_key_array(primary_key)), :delete)} end # Return a prepared statement that can be used to insert a row using the given columns. def prepared_insert(cols) cached_prepared_statement(:insert, prepared_columns(cols)){prepare_statement(dataset, :insert, prepared_statement_key_hash(cols))} end # Return a prepared statement that can be used to insert a row using the given columns # and return that column values for the row created. def prepared_insert_select(cols) if dataset.supports_insert_select? cached_prepared_statement(:insert_select, prepared_columns(cols)){prepare_statement(naked.clone(:server=>dataset.opts.fetch(:server, :default)), :insert_select, prepared_statement_key_hash(cols))} end end # Return a prepared statement that can be used to lookup a row solely based on the primary key. def prepared_lookup cached_prepared_statement(:fixed, :lookup){prepare_statement(filter(prepared_statement_key_array(primary_key)), :first)} end # Return a prepared statement that can be used to refresh a row to get new column values after insertion. def prepared_refresh cached_prepared_statement(:fixed, :refresh){prepare_statement(naked.clone(:server=>dataset.opts.fetch(:server, :default)).filter(prepared_statement_key_array(primary_key)), :first)} end # Return an array of two element arrays with the column symbol as the first entry and the # placeholder symbol as the second entry. def prepared_statement_key_array(keys) if dataset.requires_placeholder_type_specifiers? sch = db_schema Array(keys).map do |k| if (s = sch[k]) && (t = s[:type]) [k, :"$#{k}__#{t}"] else [k, :"$#{k}"] end end else Array(keys).map{|k| [k, :"$#{k}"]} end end # Return a hash mapping column symbols to placeholder symbols. def prepared_statement_key_hash(keys) Hash[*(prepared_statement_key_array(keys).flatten)] end # Return a prepared statement that can be used to update row using the given columns. def prepared_update(cols) cached_prepared_statement(:update, prepared_columns(cols)){prepare_statement(filter(prepared_statement_key_array(primary_key)), :update, prepared_statement_key_hash(cols))} end # Use a prepared statement to query the database for the row matching the given primary key. def primary_key_lookup(pk) prepared_lookup.call(primary_key_hash(pk)) end private # If a prepared statement has already been cached for the given type and subtype, # return it. Otherwise, yield to the block to get the prepared statement, and cache it. def cached_prepared_statement(type, subtype) h = @prepared_statements[type] Sequel.synchronize do if v = h[subtype] return v end end ps = yield Sequel.synchronize{h[subtype] = ps} end end module InstanceMethods private # Use a prepared statement to delete the row. def _delete_without_checking if use_prepared_statements_for?(:delete) model.send(:prepared_delete).call(pk_hash) else super end end # Use a prepared statement to insert the values into the model's dataset. def _insert_raw(ds) if use_prepared_statements_for?(:insert) model.send(:prepared_insert, @values.keys).call(@values) else super end end # Use a prepared statement to insert the values into the model's dataset # and return the new column values. 
        def _insert_select_raw(ds)
          if use_prepared_statements_for?(:insert_select)
            if ps = model.send(:prepared_insert_select, @values.keys)
              ps.call(@values)
            end
          else
            super
          end
        end

        # Use a prepared statement to refresh this model's column values.
        def _refresh_get(ds)
          if use_prepared_statements_for?(:refresh)
            model.send(:prepared_refresh).call(pk_hash)
          else
            super
          end
        end

        # Use a prepared statement to update this model's columns in the database.
        def _update_without_checking(columns)
          if use_prepared_statements_for?(:update)
            model.send(:prepared_update, columns.keys).call(columns.merge(pk_hash))
          else
            super
          end
        end
      end
    end
  end
end

ruby-sequel-4.1.1/lib/sequel/plugins/prepared_statements_associations.rb

module Sequel
  module Plugins
    # The prepared_statements_associations plugin modifies the regular association
    # load method to use a cached prepared statement to load the associations.
    # It will not work on all associations, but it should skip the use of prepared
    # statements for associations where it will not work, assuming you load the
    # plugin before defining the associations.
    #
    # Usage:
    #
    #   # Make all model subclasses use prepared statements for associations (called before loading subclasses)
    #   Sequel::Model.plugin :prepared_statements_associations
    #
    #   # Make the Album class use prepared statements for associations
    #   Album.plugin :prepared_statements_associations
    module PreparedStatementsAssociations
      # Synchronize access to the integer sequence so that no two calls get the same integer.
      MUTEX = Mutex.new

      i = 0
      # This plugin names prepared statements uniquely using an integer sequence, this
      # lambda returns the next integer to use.
      NEXT = lambda{MUTEX.synchronize{i += 1}}

      module ClassMethods
        # Disable prepared statement use if a block is given, or the :dataset option
        # is used, or you are cloning an association that has them disabled.
        def associate(type, name, opts = OPTS, &block)
          if block || opts[:dataset] || (opts[:clone] && association_reflection(opts[:clone])[:prepared_statement] == false)
            opts = opts.merge(:prepared_statement=>false)
          end
          super(type, name, opts, &block)
        end
      end

      module InstanceMethods
        private

        # Return a bound variable hash that maps the keys in +ks+ (qualified by the +table+)
        # to the values of the results of sending the methods in +vs+.
        def association_bound_variable_hash(table, ks, vs)
          Hash[*ks.zip(vs).map{|k, v| [:"#{table}.#{k}", send(v)]}.flatten]
        end

        # Given an association reflection, return a bound variable hash for the given
        # association for this instance's values.
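        # For example, for a hypothetical many_to_one :artist association with
        # :keys [:artist_id], this would return something like:
        #
        #   {:"artists.id"=>1}  # assuming self.artist_id is 1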
def association_bound_variables(opts) case opts[:type] when :many_to_one association_bound_variable_hash(opts.associated_class.table_name, opts.primary_keys, opts[:keys]) when :one_to_many, :one_to_one association_bound_variable_hash(opts.associated_class.table_name, opts[:keys], opts[:primary_keys]) when :many_to_many association_bound_variable_hash(opts.join_table_alias, opts[:left_keys], opts[:left_primary_keys]) when :many_through_many association_bound_variable_hash(opts.final_reverse_edge[:alias], Array(opts[:left_key]), opts[:left_primary_keys]) end end # Given an association reflection, return and cache a prepared statement for this association such # that, given appropriate bound variables, the prepared statement will work correctly for any # instance. Return false if such a prepared statement cannot be created. def association_prepared_statement(opts, assoc_bv) opts.send(:cached_fetch, :prepared_statement) do ds, bv = _associated_dataset(opts, {}).unbind if bv.length != assoc_bv.length h = {} bv.each do |k,v| h[k] = v unless assoc_bv.has_key?(k) end ds = ds.bind(h) end ps = ds.prepare(opts.returns_array? ? :select : :first, :"smpsap_#{NEXT.call}") ps.log_sql = true ps end end # If a prepared statement can be used to load the associated objects, execute it to retrieve them. Otherwise, # fall back to the default implementation. def _load_associated_objects(opts, dynamic_opts=OPTS) if !opts.can_have_associated_objects?(self) || dynamic_opts[:callback] || (load_with_primary_key_lookup?(opts, dynamic_opts) && opts.associated_class.respond_to?(:cache_get_pk)) super elsif (bv = association_bound_variables(opts)) && (ps ||= association_prepared_statement(opts, bv)) ps.call(bv) else super end end end end end end ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/lib/sequel/plugins/prepared_statements_safe.rb������������������������������������0000664�0000000�0000000�00000005323�12201565355�0025122�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������module Sequel module Plugins # The prepared_statements_safe plugin modifies the model to reduce the number of # prepared statements that can be created, by setting as many columns as possible # before creating, and by changing +save_changes+ to save all columns instead of # just the changed ones. # # This plugin depends on the +prepared_statements+ plugin. # # Usage: # # # Make all model subclasses more safe when using prepared statements (called before loading subclasses) # Sequel::Model.plugin :prepared_statements_safe # # # Make the Album class more safe when using prepared statements # Album.plugin :prepared_statements_safe module PreparedStatementsSafe # Depend on the prepared_statements plugin def self.apply(model) model.plugin(:prepared_statements) end # Set the column defaults to use when creating on the model. 
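      # (configure runs when the plugin is loaded into a model; if the model has
      # no dataset yet, the defaults are set again once a dataset is set, via the
      # after_set_dataset hook registered in ClassMethods below.)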
      def self.configure(model)
        model.send(:set_prepared_statements_column_defaults)
      end

      module ClassMethods
        # A hash with column symbol keys and default values. An instance's
        # values are merged into this hash before creating to reduce the
        # number of free columns (columns that may or may not be present
        # in the INSERT statement), as the number of prepared statements
        # that can be created is 2^N (where N is the number of free columns).
        attr_reader :prepared_statements_column_defaults

        Plugins.inherited_instance_variables(self, :@prepared_statements_column_defaults=>:dup)
        Plugins.after_set_dataset(self, :set_prepared_statements_column_defaults)

        private

        # Set the column defaults based on the database schema. All columns
        # are set to a default value unless they are a primary key column or
        # they don't have a parseable default.
        def set_prepared_statements_column_defaults
          if db_schema
            h = {}
            db_schema.each do |k, v|
              h[k] = v[:ruby_default] if (v[:ruby_default] || !v[:default]) && !v[:primary_key]
            end
            @prepared_statements_column_defaults = h
          end
        end
      end

      module InstanceMethods
        # Merge the current values into the default values to reduce the number
        # of free columns.
        def before_create
          if v = model.prepared_statements_column_defaults
            @values = v.merge(values)
          end
          super
        end

        # Always do a full save of all columns to reduce the number of prepared
        # statements that can be used.
        def save_changes(opts=OPTS)
          save(opts) || false if modified?
        end
      end
    end
  end
end

ruby-sequel-4.1.1/lib/sequel/plugins/prepared_statements_with_pk.rb

module Sequel
  module Plugins
    # The prepared_statements_with_pk plugin allows Dataset#with_pk for model datasets
    # to use prepared statements by extracting the values of previously bound variables
    # using <tt>Dataset#unbind</tt>, and attempting to use a prepared statement if the
    # variables can be unbound correctly. See +Unbinder+ for details about what types of
    # dataset filters can be unbound correctly.
    #
    # This plugin depends on the +prepared_statements+ plugin and should be considered unsafe.
    # Unbinding dataset values cannot be done correctly in all cases, and use of this plugin
    # in cases where not all filter values can be unbound can lead to a denial of
    # service attack by allocating an arbitrary number of prepared statements. You have been
    # warned.
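    #
    # As a rough sketch of the mechanism, a lookup such as:
    #
    #   Album.where(:category=>'rock').with_pk(1)
    #
    # is unbound into a dataset with placeholders for :category and the primary
    # key, and the resulting prepared statement is cached by its SQL and reused
    # for other category values. Every distinct filter shape adds another cached
    # statement, which is the source of the denial of service risk noted above.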
# # Usage: # # # Make all model subclasses use prepared statements for Dataset#with_pk (called before loading subclasses) # Sequel::Model.plugin :prepared_statements_with_pk # # # Make the Album class use prepared statements for Dataset#with_pk # Album.plugin :prepared_statements_with_pk module PreparedStatementsWithPk # Depend on the prepared_statements plugin def self.apply(model) model.plugin(:prepared_statements) end module ClassMethods private # Return a prepared statement that can be used to lookup a row given a dataset for the row matching # the primary key. def prepared_lookup_dataset(ds) cached_prepared_statement(:lookup_sql, ds.sql){prepare_statement(ds.filter(prepared_statement_key_array(primary_key).map{|k, v| [SQL::QualifiedIdentifier.new(ds.model.table_name, k), v]}), :first)} end end module DatasetMethods # Use a prepared statement to find a row with the matching primary key # inside this dataset. def with_pk(pk) begin ds, bv = unbind rescue UnbindDuplicate super else begin bv = bv.merge!(model.primary_key_hash(pk)){|k, v1, v2| ((v1 == v2) ? v1 : raise(UnbindDuplicate))} rescue UnbindDuplicate super else model.send(:prepared_lookup_dataset, ds).call(bv) end end end end end end end �����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/lib/sequel/plugins/rcte_tree.rb���������������������������������������������������0000664�0000000�0000000�00000033533�12201565355�0022033�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������module Sequel module Plugins # = Overview # # The rcte_tree plugin deals with tree structured data stored # in the database using the adjacency list model (where child rows # have a foreign key pointing to the parent rows), using recursive # common table expressions to load all ancestors in a single query, # all descendants in a single query, and all descendants to a given # level (where level 1 is children, level 2 is children and grandchildren # etc.) in a single query. # # = Background # # There are two types of common models for storing tree structured data # in an SQL database, the adjacency list model and the nested set model. # Before recursive common table expressions (or similar capabilities such # as CONNECT BY for Oracle), the nested set model was the only easy way # to retrieve all ancestors and descendants in a single query. However, # it has significant performance corner cases. # # On PostgreSQL 8.4, with a significant number of rows, the nested set # model is almost 500 times slower than using a recursive common table # expression with the adjacency list model to get all descendants, and # almost 24,000 times slower to get all descendants to a given level. # # Considering that the nested set model requires more difficult management # than the adjacency list model, it's almost always better to use the # adjacency list model if your database supports common table expressions. # See http://explainextended.com/2009/09/24/adjacency-list-vs-nested-sets-postgresql/ # for detailed analysis. 
# # = Usage # # The rcte_tree plugin adds four associations to the model: parent, children, ancestors, and # descendants. Both the parent and children are fairly standard many_to_one # and one_to_many associations, respectively. However, the ancestors and # descendants associations are special. Both the ancestors and descendants # associations will automatically set the parent and children associations, # respectively, for current object and all of the ancestor or descendant # objects, whenever they are loaded (either eagerly or lazily). Additionally, # the descendants association can take a level argument when called eagerly, # which limits the returned objects to only that many levels in the tree (see # the Overview). # # Model.plugin :rcte_tree # # # Lazy loading # model = Model.first # model.parent # model.children # model.ancestors # Populates :parent association for all ancestors # model.descendants # Populates :children association for all descendants # # # Eager loading - also populates the :parent and children associations # # for all ancestors and descendants # Model.filter(:id=>[1, 2]).eager(:ancestors, :descendants).all # # # Eager loading children and grand children # Model.filter(:id=>[1, 2]).eager(:descendants=>2).all # # Eager loading children, grand children, and great grand children # Model.filter(:id=>[1, 2]).eager(:descendants=>3).all # # = Options # # You can override the options for any specific association by making # sure the plugin options contain one of the following keys: # # * :parent - hash of options for the parent association # * :children - hash of options for the children association # * :ancestors - hash of options for the ancestors association # * :descendants - hash of options for the descendants association # # Note that you can change the name of the above associations by specifying # a :name key in the appropriate hash of options above. For example: # # Model.plugin :rcte_tree, :parent=>{:name=>:mother}, # :children=>{:name=>:daughters}, :descendants=>{:name=>:offspring} # # Any other keys in the main options hash are treated as options shared by # all of the associations. Here's a few options that affect the plugin: # # * :key - The foreign key in the table that points to the primary key # of the parent (default: :parent_id) # * :primary_key - The primary key to use (default: the model's primary key) # * :key_alias - The symbol identifier to use for aliasing when eager # loading (default: :x_root_x) # * :cte_name - The symbol identifier to use for the common table expression # (default: :t) # * :level_alias - The symbol identifier to use when eagerly loading descendants # up to a given level (default: :x_level_x) module RcteTree # Create the appropriate parent, children, ancestors, and descendants # associations for the model. def self.apply(model, opts=OPTS) model.plugin :tree, opts opts = opts.dup opts[:class] = model opts[:methods_module] = Module.new model.send(:include, opts[:methods_module]) key = opts[:key] ||= :parent_id prkey = opts[:primary_key] ||= model.primary_key parent = opts.merge(opts.fetch(:parent, {})).fetch(:name, :parent) childrena = opts.merge(opts.fetch(:children, {})).fetch(:name, :children) ka = opts[:key_alias] ||= :x_root_x t = opts[:cte_name] ||= :t opts[:reciprocal] = nil c_all = if model.dataset.recursive_cte_requires_column_aliases? # Work around Oracle/ruby-oci8 bug that returns integers as BigDecimals in recursive queries. 
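          # (Explicit column aliases are built here because some databases,
          # per the recursive_cte_requires_column_aliases? check above, reject
          # recursive CTEs without them; the conv_bd flag computed below is
          # consulted when eagerly loading, to convert BigDecimal keys back
          # to integers.)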
conv_bd = model.db.database_type == :oracle col_aliases = model.dataset.columns model_table = model.table_name col_aliases.map{|c| SQL::QualifiedIdentifier.new(model_table, c)} else [SQL::ColumnAll.new(model.table_name)] end a = opts.merge(opts.fetch(:ancestors, {})) ancestors = a.fetch(:name, :ancestors) a[:read_only] = true unless a.has_key?(:read_only) a[:eager_loader_key] = key a[:dataset] ||= proc do base_ds = model.filter(prkey=>send(key)) recursive_ds = model.join(t, key=>prkey) if c = a[:conditions] (base_ds, recursive_ds) = [base_ds, recursive_ds].collect do |ds| (c.is_a?(Array) && !Sequel.condition_specifier?(c)) ? ds.filter(*c) : ds.filter(c) end end table_alias = model.dataset.schema_and_table(model.table_name)[1].to_sym model.from(SQL::AliasedExpression.new(t, table_alias)). with_recursive(t, col_aliases ? base_ds.select(*col_aliases) : base_ds.select_all, recursive_ds.select(*c_all), :args=>col_aliases) end aal = Array(a[:after_load]) aal << proc do |m, ancs| unless m.associations.has_key?(parent) parent_map = {m[prkey]=>m} child_map = {} child_map[m[key]] = m if m[key] m.associations[parent] = nil ancs.each do |obj| obj.associations[parent] = nil parent_map[obj[prkey]] = obj if ok = obj[key] child_map[ok] = obj end end parent_map.each do |parent_id, obj| if child = child_map[parent_id] child.associations[parent] = obj end end end end a[:after_load] ||= aal a[:eager_loader] ||= proc do |eo| id_map = eo[:id_map] parent_map = {} children_map = {} eo[:rows].each do |obj| parent_map[obj[prkey]] = obj (children_map[obj[key]] ||= []) << obj obj.associations[ancestors] = [] obj.associations[parent] = nil end r = model.association_reflection(ancestors) base_case = model.filter(prkey=>id_map.keys). select(SQL::AliasedExpression.new(prkey, ka), *c_all) recursive_case = model.join(t, key=>prkey). select(SQL::QualifiedIdentifier.new(t, ka), *c_all) if c = r[:conditions] (base_case, recursive_case) = [base_case, recursive_case].collect do |ds| (c.is_a?(Array) && !Sequel.condition_specifier?(c)) ? ds.filter(*c) : ds.filter(c) end end table_alias = model.dataset.schema_and_table(model.table_name)[1].to_sym elds = model.eager_loading_dataset(r, model.from(SQL::AliasedExpression.new(t, table_alias)). with_recursive(t, base_case, recursive_case, :args=>(([ka] + col_aliases) if col_aliases)), r.select, eo[:associations], eo) elds = elds.select_append(ka) unless elds.opts[:select] == nil elds.all do |obj| opk = obj[prkey] if parent_map.has_key?(opk) if idm_obj = parent_map[opk] idm_obj.values[ka] = obj.values[ka] obj = idm_obj end else obj.associations[parent] = nil parent_map[opk] = obj (children_map[obj[key]] ||= []) << obj end kv = obj.values.delete(ka) kv = kv.to_i if conv_bd && kv.is_a?(BigDecimal) if roots = id_map[kv] roots.each do |root| root.associations[ancestors] << obj end end end parent_map.each do |parent_id, obj| if children = children_map[parent_id] children.each do |child| child.associations[parent] = obj end end end end model.one_to_many ancestors, a d = opts.merge(opts.fetch(:descendants, {})) descendants = d.fetch(:name, :descendants) d[:read_only] = true unless d.has_key?(:read_only) la = d[:level_alias] ||= :x_level_x d[:dataset] ||= proc do base_ds = model.filter(key=>send(prkey)) recursive_ds = model.join(t, prkey=>key) if c = d[:conditions] (base_ds, recursive_ds) = [base_ds, recursive_ds].collect do |ds| (c.is_a?(Array) && !Sequel.condition_specifier?(c)) ? 
ds.filter(*c) : ds.filter(c) end end table_alias = model.dataset.schema_and_table(model.table_name)[1].to_sym model.from(SQL::AliasedExpression.new(t, table_alias)). with_recursive(t, col_aliases ? base_ds.select(*col_aliases) : base_ds.select_all, recursive_ds.select(*c_all), :args=>col_aliases) end dal = Array(d[:after_load]) dal << proc do |m, descs| unless m.associations.has_key?(childrena) parent_map = {m[prkey]=>m} children_map = {} m.associations[childrena] = [] descs.each do |obj| obj.associations[childrena] = [] if opk = obj[prkey] parent_map[opk] = obj end if ok = obj[key] (children_map[ok] ||= []) << obj end end children_map.each do |parent_id, objs| parent_map[parent_id].associations[childrena] = objs end end end d[:after_load] = dal d[:eager_loader] ||= proc do |eo| id_map = eo[:id_map] associations = eo[:associations] parent_map = {} children_map = {} eo[:rows].each do |obj| parent_map[obj[prkey]] = obj obj.associations[descendants] = [] obj.associations[childrena] = [] end r = model.association_reflection(descendants) base_case = model.filter(key=>id_map.keys). select(SQL::AliasedExpression.new(key, ka), *c_all) recursive_case = model.join(t, prkey=>key). select(SQL::QualifiedIdentifier.new(t, ka), *c_all) if c = r[:conditions] (base_case, recursive_case) = [base_case, recursive_case].collect do |ds| (c.is_a?(Array) && !Sequel.condition_specifier?(c)) ? ds.filter(*c) : ds.filter(c) end end if associations.is_a?(Integer) level = associations no_cache_level = level - 1 associations = {} base_case = base_case.select_more(SQL::AliasedExpression.new(0, la)) recursive_case = recursive_case.select_more(SQL::AliasedExpression.new(SQL::QualifiedIdentifier.new(t, la) + 1, la)).filter(SQL::QualifiedIdentifier.new(t, la) < level - 1) end table_alias = model.dataset.schema_and_table(model.table_name)[1].to_sym elds = model.eager_loading_dataset(r, model.from(SQL::AliasedExpression.new(t, table_alias)).with_recursive(t, base_case, recursive_case, :args=>(([ka] + col_aliases + (level ? 
[la] : [])) if col_aliases)), r.select, associations, eo) elds = elds.select_append(ka) unless elds.opts[:select] == nil elds.all do |obj| if level no_cache = no_cache_level == obj.values.delete(la) end opk = obj[prkey] if parent_map.has_key?(opk) if idm_obj = parent_map[opk] idm_obj.values[ka] = obj.values[ka] obj = idm_obj end else obj.associations[childrena] = [] unless no_cache parent_map[opk] = obj end kv = obj.values.delete(ka) kv = kv.to_i if conv_bd && kv.is_a?(BigDecimal) if root = id_map[kv].first root.associations[descendants] << obj end (children_map[obj[key]] ||= []) << obj end children_map.each do |parent_id, objs| parent_map[parent_id].associations[childrena] = objs.uniq end end model.one_to_many descendants, d end end end end ���������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/lib/sequel/plugins/schema.rb������������������������������������������������������0000664�0000000�0000000�00000005750�12201565355�0021317�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������module Sequel module Plugins # Sequel's built in schema plugin allows you to define your schema # directly in the model using Model.set_schema (which takes a block # similar to Database#create_table), and use Model.create_table to # create a table using the schema information. # # This plugin is mostly suited to test code. If there is any # chance that your application's schema could change, you should # be using the migration extension instead. # # Usage: # # # Add the schema methods to all model subclasses (called before loading subclasses) # Sequel::Model.plugin :schema # # # Add the schema methods to the Album class # Album.plugin :schema module Schema module ClassMethods # Creates table, using the column information from set_schema. def create_table(*args, &block) set_schema(*args, &block) if block db.create_table(table_name, :generator=>@schema) @db_schema = get_db_schema(true) columns end # Drops the table if it exists and then runs create_table. Should probably # not be used except in testing. def create_table!(*args, &block) drop_table? create_table(*args, &block) end # Creates the table unless the table already exists def create_table?(*args, &block) create_table(*args, &block) unless table_exists? end # Drops table. If the table doesn't exist, this will probably raise an error. def drop_table db.drop_table(table_name) end # Drops table if it already exists, do nothing if it doesn't exist. def drop_table? db.drop_table?(table_name) end # Returns table schema created with set_schema for direct descendant of Model. # Does not retreive schema information from the database, see db_schema if you # want that. def schema @schema || (superclass.schema unless superclass == Model) end # Defines a table schema (see Schema::Generator for more information). # # This is only needed if you want to use the create_table/create_table! methods. # Will also set the dataset if you provide a name, as well as setting # the primary key if you defined one in the passed block. # # In general, it is a better idea to use migrations for production code, as # migrations allow changes to existing schema. 
set_schema is mostly useful for # test code or simple examples. def set_schema(name = nil, &block) set_dataset(db[name]) if name @schema = db.create_table_generator(&block) set_primary_key(@schema.primary_key_name) if @schema.primary_key_name end # Returns true if table exists, false otherwise. def table_exists? db.table_exists?(table_name) end end end end end ������������������������ruby-sequel-4.1.1/lib/sequel/plugins/scissors.rb����������������������������������������������������0000664�0000000�0000000�00000001704�12201565355�0021722�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������module Sequel module Plugins # The scissors plugin adds class methods for update, delete, and destroy. # It is so named because this is considered dangerous, since it is easy # to write: # # Album.delete # # and delete all rows in the table, when you meant to write: # # album.delete # # and only delete a single row. # # This plugin is mostly useful for backwards compatibility, and not # recommended for use in production. However, it can cut down on # verbosity in non-transactional test code, so it may be appropriate # to use when testing. # # Usage: # # # Make all model subclass run with scissors # Sequel::Model.plugin :scissors # # # Make the Album class run with scissors # Album.plugin :scissors module Scissors module ClassMethods Plugins.def_dataset_methods(self, [:update, :delete, :destroy]) end end end end ������������������������������������������������������������ruby-sequel-4.1.1/lib/sequel/plugins/serialization.rb�����������������������������������������������0000664�0000000�0000000�00000021165�12201565355�0022732�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������module Sequel module Plugins # Sequel's built in Serialization plugin allows you to keep serialized # ruby objects in the database, while giving you deserialized objects # when you call an accessor. # # This plugin works by keeping the serialized value in the values, and # adding a @deserialized_values hash. The reader method for serialized columns # will check the @deserialized_values for the value, return it if present, # or deserialized the entry in @values and return it. The writer method will # set the @deserialized_values entry. This plugin adds a before_save hook # that serializes all @deserialized_values to @values. # # You can specify the serialization format as a pair of serializer/deserializer # callable objects. You can also specify the serialization format as a single # symbol, if such a symbol has a registered serializer/deserializer pair in the # plugin. By default, the plugin registers the :marshal, :yaml, and :json # serialization formats. To register your own serialization formats, use # Sequel::Plugins::Serialization.register_format. # If you use yaml or json format, you need to require the libraries, Sequel # does not do the requiring for you. 
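    #
    # Whichever format you use, the serializer/deserializer pair should
    # roundtrip, i.e. deserialize(serialize(value)) should equal value. As a
    # minimal sketch, a custom pair could be built from the stdlib Marshal and
    # Base64 (the local variable names are only illustrative):
    #
    #   require 'base64'
    #   serializer = lambda{|v| Base64.strict_encode64(Marshal.dump(v))}
    #   deserializer = lambda{|v| Marshal.load(Base64.strict_decode64(v))}
    #   Sequel::Plugins::Serialization.register_format(:base64_marshal,
    #     serializer, deserializer)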
# # You can specify the columns to serialize when loading the plugin, or later # using the serialize_attributes class method. # # Because of how this plugin works, it must be used inside each model class # that needs serialization, after any set_dataset method calls in that class. # Otherwise, it is possible that the default column accessors will take # precedence. # # == Example # # # Require json if you plan to use it, as the plugin doesn't require it for you. # require 'json' # # # Register custom serializer/deserializer pair, if desired # require 'sequel/plugins/serialization' # Sequel::Plugins::Serialization.register_format(:reverse, # lambda{|v| v.reverse}, # lambda{|v| v.reverse}) # # class User < Sequel::Model # # Built-in format support when loading the plugin # plugin :serialization, :json, :permissions # # # Built-in format support after loading the plugin using serialize_attributes # plugin :serialization # serialize_attributes :marshal, :permissions # # # Use custom registered serialization format just like built-in format # serialize_attributes :reverse, :password # # # Use a custom serializer/deserializer pair without registering # serialize_attributes [lambda{|v| v.reverse}, lambda{|v| v.reverse}], :password # end # user = User.create # user.permissions = { :global => 'read-only' } # user.save module Serialization # The default serializers supported by the serialization module. # Use register_format to add serializers to this hash. REGISTERED_FORMATS = {} # Set up the column readers to do deserialization and the column writers # to save the value in deserialized_values. def self.apply(model, *args) model.instance_eval do @deserialization_map = {} @serialization_map = {} end end # Automatically call serialize_attributes with the format and columns unless # no columns were provided. def self.configure(model, format=nil, *columns) model.serialize_attributes(format, *columns) unless columns.empty? end # Register a serializer/deserializer pair with a format symbol, to allow # models to pick this format by name. Both serializer and deserializer # should be callable objects. def self.register_format(format, serializer, deserializer) REGISTERED_FORMATS[format] = [serializer, deserializer] end register_format(:marshal, lambda{|v| [Marshal.dump(v)].pack('m')}, lambda do |v| begin Marshal.load(v.unpack('m')[0]) rescue => e begin # Backwards compatibility for unpacked marshal output. Marshal.load(v) rescue raise e end end end) register_format(:yaml, lambda{|v| v.to_yaml}, lambda{|v| YAML.load(v)}) register_format(:json, lambda{|v| Sequel.object_to_json(v)}, lambda{|v| Sequel.parse_json(v)}) module ClassMethods # A hash with column name symbols and callable values, with the value # called to deserialize the column. attr_reader :deserialization_map # A hash with column name symbols and callable values, with the value # called to serialize the column. attr_reader :serialization_map # Module to store the serialized column accessor methods, so they can # call be overridden and call super to get the serialization behavior attr_accessor :serialization_module Plugins.inherited_instance_variables(self, :@deserialization_map=>:dup, :@serialization_map=>:dup) # Create instance level reader that deserializes column values on request, # and instance level writer that stores new deserialized values. 
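      # For example (the column names and serializer/deserializer variables
      # here are hypothetical):
      #
      #   Album.serialize_attributes :json, :metadata
      #   Album.serialize_attributes [serializer, deserializer], :secret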
def serialize_attributes(format, *columns) if format.is_a?(Symbol) unless format = REGISTERED_FORMATS[format] raise(Error, "Unsupported serialization format: #{format} (valid formats: #{REGISTERED_FORMATS.keys.map{|k| k.inspect}.join})") end end serializer, deserializer = format raise(Error, "No columns given. The serialization plugin requires you specify which columns to serialize") if columns.empty? define_serialized_attribute_accessor(serializer, deserializer, *columns) end # The columns that will be serialized. This is only for # backwards compatibility, use serialization_map in new code. def serialized_columns serialization_map.keys end private # Add serializated attribute acessor methods to the serialization_module def define_serialized_attribute_accessor(serializer, deserializer, *columns) m = self include(self.serialization_module ||= Module.new) unless serialization_module serialization_module.class_eval do columns.each do |column| m.serialization_map[column] = serializer m.deserialization_map[column] = deserializer define_method(column) do if deserialized_values.has_key?(column) deserialized_values[column] elsif frozen? deserialize_value(column, super()) else deserialized_values[column] = deserialize_value(column, super()) end end define_method("#{column}=") do |v| changed_columns << column unless changed_columns.include?(column) deserialized_values[column] = v end end end end end module InstanceMethods # Serialize deserialized values before saving def before_save serialize_deserialized_values super end # Hash of deserialized values, used as a cache. def deserialized_values @deserialized_values ||= {} end # Freeze the deserialized values def freeze deserialized_values.freeze super end private # Clear any cached deserialized values when doing a manual refresh. def _refresh_set_values(hash) @deserialized_values.clear if @deserialized_values super end # Deserialize the column value. Called when the model column accessor is called to # return a deserialized value. def deserialize_value(column, v) unless v.nil? raise Sequel::Error, "no entry in deserialization_map for #{column.inspect}" unless callable = model.deserialization_map[column] callable.call(v) end end # Serialize all deserialized values def serialize_deserialized_values deserialized_values.each{|k,v| @values[k] = serialize_value(k, v)} end # Serialize the column value. Called before saving to ensure the serialized value # is saved in the database. def serialize_value(column, v) unless v.nil? 
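            # nil values pass through unchanged; anything else must have a
            # serializer registered for this column.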
raise Sequel::Error, "no entry in serialization_map for #{column.inspect}" unless callable = model.serialization_map[column] callable.call(v) end end end end end end �����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/lib/sequel/plugins/serialization_modification_detection.rb������������������������0000664�0000000�0000000�00000005215�12201565355�0027513�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������module Sequel module Plugins # This plugin extends the serialization plugin and enables it to detect # changes in serialized values by checking whether the current # deserialized value is the same as the original deserialized value. # The serialization plugin does not do such checks by default, as they # often aren't needed and can hurt performance. # # Note that for this plugin to work correctly, the values you are # serializing must roundtrip correctly (i.e. deserialize(serialize(value)) # should equal value). This is true in most cases, but not in all. For # example, ruby symbols round trip through yaml, but not json (as they get # turned into strings in json). # # == Example # # require 'sequel' # require 'json' # class User < Sequel::Model # plugin :serialization, :json, :permissions # plugin :serialization_modification_detection # end # user = User.create(:permissions => {}) # user.permissions[:global] = 'read-only' # user.save_changes module SerializationModificationDetection # Load the serialization plugin automatically. def self.apply(model) model.plugin :serialization end module InstanceMethods # Clear the cache of original deserialized values after saving so that it doesn't # show the column is modified after saving. def after_save super @original_deserialized_values = @deserialized_values end # Detect which serialized columns have changed. def changed_columns cc = super cc = cc.dup if frozen? deserialized_values.each{|c, v| cc << c if !cc.include?(c) && original_deserialized_value(c) != v} cc end # Freeze the original deserialized values when freezing the instance. def freeze @original_deserialized_values ||= {} @original_deserialized_values.freeze super end private # For new objects, serialize any existing deserialized values so that changes can # be detected. def initialize_set(values) super serialize_deserialized_values end # Return the original deserialized value of the column, caching it to improve performance. def original_deserialized_value(column) if frozen? 
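            # The cache is frozen along with the instance, so don't memoize
            # here; deserialize the raw value on each call instead.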
@original_deserialized_values[column] || deserialize_value(column, self[column]) else (@original_deserialized_values ||= {})[column] ||= deserialize_value(column, self[column]) end end end end end end �����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/lib/sequel/plugins/sharding.rb����������������������������������������������������0000664�0000000�0000000�00000007341�12201565355�0021654�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������module Sequel module Plugins # The sharding plugin augments Sequel's default model sharding support # in the following ways: # # 1) It automatically sets model instances to be saved back to the # shard they were retreived from. # 2) It makes model associations use the same shard as the model # object. # 3) It adds a slightly nicer API for creating model instances on # specific shards. # # Usage: # # # Add the sharding support to all model subclasses (called before loading subclasses) # Sequel::Model.plugin :sharding # # # Add the sharding support to the Album class # Album.plugin :sharding module Sharding module ClassMethods # Create a new object on the given shard s. def create_using_server(s, values={}, &block) new_using_server(s, values, &block).save end # When eagerly loading, if the current dataset has a defined shard and the # dataset that you will be using to get the associated records does not, # use the current dataset's shard for the associated dataset. def eager_loading_dataset(opts, ds, select, associations, eager_options=OPTS) ds = super(opts, ds, select, associations, eager_options) if !ds.opts[:server] and s = eager_options[:self] and server = s.opts[:server] ds = ds.server(server) end ds end # Return a newly instantiated object that is tied to the given # shard s. When the object is saved, a record will be inserted # on shard s. def new_using_server(s, values={}, &block) new(values, &block).set_server(s) end private # Set the server for each graphed dataset to the current server # unless the graphed dataset already has a server set. def eager_graph_dataset(opts, eager_options) ds = super if s = eager_options[:self].opts[:server] ds = ds.server(s) unless ds.opts[:server] end ds end end module InstanceMethods # Set the server that this object is tied to, unless it has # already been set. Returns self. def set_server?(s) @server ||= s self end private # Ensure that association datasets are tied to the correct shard. def _apply_association_options(*args) use_server(super) end # Ensure that the join table for many_to_many associations uses the correct shard. def _join_table_dataset(opts) use_server(super) end # If creating the object by doing <tt>add_association</tt> for a # +many_to_many+ association, make sure the associated object is created on the # current object's shard, unless the passed object already has an assigned shard. 
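        # set_server? (defined above) leaves any shard that has already been
        # assigned to the associated object in place.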
def ensure_associated_primary_key(opts, o, *args) o.set_server?(@server) if o.respond_to?(:set_server?) super end # Don't use primary key lookup to load associated objects, since that will not # respect the current object's server. def load_with_primary_key_lookup?(opts, dynamic_opts) false end end module DatasetMethods # If a row proc exists on the dataset, replace it with one that calls the # previous row_proc, but calls set_server on the output of that row_proc, # ensuring that objects retrieved by a specific shard know which shard they # are tied to. def server(s) ds = super if rp = row_proc ds.row_proc = proc{|r| rp.call(r).set_server(s)} end ds end end end end end �����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/lib/sequel/plugins/single_table_inheritance.rb������������������������������������0000664�0000000�0000000�00000020545�12201565355�0025057�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������module Sequel module Plugins # The single_table_inheritance plugin allows storing all objects # in the same class hierarchy in the same table. It makes it so # subclasses of this model only load rows related to the subclass, # and when you retrieve rows from the main class, you get instances # of the subclasses (if the rows should use the subclasses's class). # # By default, the plugin assumes that the +sti_key+ column (the first # argument to the plugin) holds the class name as a string. However, # you can override this by using the <tt>:model_map</tt> option and/or # the <tt>:key_map</tt> option. # # You should only load this plugin in the parent class, not in the subclasses. # # You shouldn't call set_dataset in the model after applying this # plugin, otherwise subclasses might use the wrong dataset. You should # make sure this plugin is loaded before the subclasses. Note that since you # need to load the plugin before the subclasses are created, you can't use # direct class references in the plugin class. You should specify subclasses # in the plugin call using class name strings or symbols, see usage below. # # Usage: # # # Use the default of storing the class name in the sti_key # # column (:kind in this case) # Employee.plugin :single_table_inheritance, :kind # # # Using integers to store the class type, with a :model_map hash # # and an sti_key of :type # Employee.plugin :single_table_inheritance, :type, # :model_map=>{1=>:Staff, 2=>:Manager} # # # Using non-class name strings # Employee.plugin :single_table_inheritance, :type, # :model_map=>{'line staff'=>:Staff, 'supervisor'=>:Manager} # # # By default the plugin sets the respective column value # # when a new instance is created. # Staff.create.type == 'line staff' # Manager.create.type == 'supervisor' # # # You can customize this behavior with the :key_chooser option. # # This is most useful when using a non-bijective mapping. 
# Employee.plugin :single_table_inheritance, :type, # :model_map=>{'line staff'=>:Staff, 'supervisor'=>:Manager}, # :key_chooser=>proc{|instance| instance.model.sti_key_map[instance.model.to_s].first || 'stranger' } # # # Using custom procs, with :model_map taking column values # # and yielding either a class, string, symbol, or nil, # # and :key_map taking a class object and returning the column # # value to use # Employee.plugin :single_table_inheritance, :type, # :model_map=>proc{|v| v.reverse}, # :key_map=>proc{|klass| klass.name.reverse} # # # You can use the same class for multiple values. # # This is mainly useful when the sti_key column contains multiple values # # which are different but do not require different code. # Employee.plugin :single_table_inheritance, :type, # :model_map=>{'staff' => "Staff", # 'manager' => "Manager", # 'overpayed staff' => "Staff", # 'underpayed staff' => "Staff"} # # One minor issue to note is that if you specify the <tt>:key_map</tt> # option as a hash, instead of having it inferred from the <tt>:model_map</tt>, # you should only use class name strings as keys, you should not use symbols # as keys. module SingleTableInheritance # Setup the necessary STI variables, see the module RDoc for SingleTableInheritance def self.configure(model, key, opts=OPTS) model.instance_eval do @sti_key_array = nil @sti_key = key @sti_dataset = dataset @sti_model_map = opts[:model_map] || lambda{|v| v if v && v != ''} @sti_key_map = if km = opts[:key_map] if km.is_a?(Hash) h = Hash.new do |h1,k| unless k.is_a?(String) h1[k.to_s] else [] end end km.each do |k,v| h[k.to_s] = [ ] unless h.key?(k.to_s) h[k.to_s].push( *Array(v) ) end h else km end elsif sti_model_map.is_a?(Hash) h = Hash.new do |h1,k| unless k.is_a?(String) h1[k.to_s] else [] end end sti_model_map.each do |k,v| h[v.to_s] = [ ] unless h.key?(v.to_s) h[v.to_s] << k end h else lambda{|klass| klass.name.to_s} end @sti_key_chooser = opts[:key_chooser] || lambda{|inst| Array(inst.model.sti_key_map[inst.model]).last } dataset.row_proc = lambda{|r| model.sti_load(r)} end end module ClassMethods # The base dataset for STI, to which filters are added to get # only the models for the specific STI subclass. attr_reader :sti_dataset # The column name holding the STI key for this model attr_reader :sti_key # Array holding keys for all subclasses of this class, used for the # dataset filter in subclasses. Nil in the main class. attr_reader :sti_key_array # A hash/proc with class keys and column value values, mapping # the the class to a particular value given to the sti_key column. # Used to set the column value when creating objects, and for the # filter when retrieving objects in subclasses. attr_reader :sti_key_map # A hash/proc with column value keys and class values, mapping # the value of the sti_key column to the appropriate class to use. attr_reader :sti_model_map # A proc which returns the value to use for new instances. # This defaults to a lookup in the key map. attr_reader :sti_key_chooser # Copy the necessary attributes to the subclasses, and filter the # subclass's dataset based on the sti_kep_map entry for the class. 
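        # The parent's row_proc is reused so that rows retrieved through the
        # subclass's dataset are still converted to the class named by the
        # sti_key column.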
def inherited(subclass) super sk = sti_key sd = sti_dataset skm = sti_key_map smm = sti_model_map skc = sti_key_chooser key = Array(skm[subclass]).dup sti_subclass_added(key) rp = dataset.row_proc subclass.set_dataset(sd.filter(SQL::QualifiedIdentifier.new(table_name, sk)=>key), :inherited=>true) subclass.instance_eval do dataset.row_proc = rp @sti_key = sk @sti_key_array = key @sti_dataset = sd @sti_key_map = skm @sti_model_map = smm @sti_key_chooser = skc self.simple_table = nil end end # Return an instance of the class specified by sti_key, # used by the row_proc. def sti_load(r) sti_class(sti_model_map[r[sti_key]]).call(r) end # Make sure that all subclasses of the parent class correctly include # keys for all of their descendant classes. def sti_subclass_added(key) if sti_key_array Sequel.synchronize{sti_key_array.push(*Array(key))} superclass.sti_subclass_added(key) end end private # If calling set_dataset manually, make sure to set the dataset # row proc to one that handles inheritance correctly. def set_dataset_row_proc(ds) ds.row_proc = @dataset.row_proc if @dataset end # Return a class object. If a class is given, return it directly. # Treat strings and symbols as class names. If nil is given or # an invalid class name string or symbol is used, return self. # Raise an error for other types. def sti_class(v) case v when String, Symbol constantize(v) rescue self when nil self when Class v else raise(Error, "Invalid class type used: #{v.inspect}") end end end module InstanceMethods # Set the sti_key column based on the sti_key_map. def before_create send("#{model.sti_key}=", model.sti_key_chooser.call(self)) unless self[model.sti_key] super end end end end end �����������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/lib/sequel/plugins/skip_create_refresh.rb�����������������������������������������0000664�0000000�0000000�00000002266�12201565355�0024065�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������module Sequel module Plugins # SkipCreateRefresh is a simple plugin that make Sequel not # refresh after saving a new model object. Sequel does the # refresh to make sure all columns are populated, which is # necessary so that database defaults work correctly. # # This plugin is mostly for performance reasons where you # want to save the cost of select statement after the insert, # but it could also help cases where records are not # immediately available for selection after insertion. # # Note that Sequel does not attempt to refresh records when # updating existing model objects, only when inserting new # model objects. # # Usage: # # # Make all model subclass instances skip refreshes when saving # # (called before loading subclasses) # Sequel::Model.plugin :skip_create_refresh # # # Make the Album class skip refreshes when saving # Album.plugin :skip_create_refresh module SkipCreateRefresh module InstanceMethods private # Do nothing instead of refreshing the record inside of save. 
def _save_refresh nil end end end end end ������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/lib/sequel/plugins/static_cache.rb������������������������������������������������0000664�0000000�0000000�00000011550�12201565355�0022464�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������module Sequel module Plugins # The static_cache plugin is designed for models that are not modified at all # in production use cases, or at least where modifications to them would usually # coincide with an application restart. When loaded into a model class, it # retrieves all rows in the database and staticly caches a ruby array and hash # keyed on primary key containing all of the model instances. All of these instances # are frozen so they won't be modified unexpectedly. # # The caches this plugin creates are used for the following things: # # * Primary key lookups (e.g. Model[1]) # * Model.all calls # * Model.each calls # * Model.map calls without an argument # * Model.to_hash calls without an argument # # Usage: # # # Cache the AlbumType class staticly # AlbumType.plugin :static_cache module StaticCache # Populate the static caches when loading the plugin. def self.configure(model) model.send(:load_cache) end module ClassMethods # A frozen ruby hash holding all of the model's frozen instances, keyed by frozen primary key. attr_reader :cache # An array of all of the model's frozen instances, without issuing a database # query. def all @all.dup end # Get the number of records in the cache, without issuing a database query. def count(*a, &block) if a.empty? && !block @all.size else super end end # Return the frozen object with the given pk, or nil if no such object exists # in the cache, without issuing a database query. def cache_get_pk(pk) cache[pk] end # Yield each of the model's frozen instances to the block, without issuing a database # query. def each(&block) @all.each(&block) end # Use the cache instead of a query to get the results. def map(column=nil, &block) if column raise(Error, "Cannot provide both column and block to map") if block if column.is_a?(Array) @all.map{|r| r.values.values_at(*column)} else @all.map{|r| r[column]} end else @all.map(&(Proc.new if block_given?)) end end Plugins.after_set_dataset(self, :load_cache) # Use the cache instead of a query to get the results. def to_hash(key_column = nil, value_column = nil) return cache.dup if key_column.nil? && value_column.nil? 
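          # Otherwise build the requested hash from the cached frozen
          # instances, without issuing a database query.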
h = {} if value_column if value_column.is_a?(Array) if key_column.is_a?(Array) each{|r| h[r.values.values_at(*key_column)] = r.values.values_at(*value_column)} else each{|r| h[r[key_column]] = r.values.values_at(*value_column)} end else if key_column.is_a?(Array) each{|r| h[r.values.values_at(*key_column)] = r[value_column]} else each{|r| h[r[key_column]] = r[value_column]} end end elsif key_column.is_a?(Array) each{|r| h[r.values.values_at(*key_column)] = r} else each{|r| h[r[key_column]] = r} end h end # Use the cache instead of a query to get the results def to_hash_groups(key_column, value_column = nil) h = {} if value_column if value_column.is_a?(Array) if key_column.is_a?(Array) each{|r| (h[r.values.values_at(*key_column)] ||= []) << r.values.values_at(*value_column)} else each{|r| (h[r[key_column]] ||= []) << r.values.values_at(*value_column)} end else if key_column.is_a?(Array) each{|r| (h[r.values.values_at(*key_column)] ||= []) << r[value_column]} else each{|r| (h[r[key_column]] ||= []) << r[value_column]} end end elsif key_column.is_a?(Array) each{|r| (h[r.values.values_at(*key_column)] ||= []) << r} else each{|r| (h[r[key_column]] ||= []) << r} end h end private # Return the frozen object with the given pk, or nil if no such object exists # in the cache, without issuing a database query. def primary_key_lookup(pk) cache[pk] end # Reload the cache for this model by retrieving all of the instances in the dataset # freezing them, and populating the cached array and hash. def load_cache a = dataset.all h = {} a.each{|o| h[o.pk.freeze] = o.freeze} @all = a.freeze @cache = h.freeze end end end end end ��������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/lib/sequel/plugins/string_stripper.rb���������������������������������������������0000664�0000000�0000000�00000003552�12201565355�0023313�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������module Sequel module Plugins # StringStripper is a plugin that strips all input strings # when assigning to the model's values. Example: # # album = Album.new(:name=>' A ') # album.name # => 'A' # # SQL::Blob instances and all non-strings are not modified by # this plugin. Additionally, strings passed to a blob column # setter are also not modified. You can explicitly set # other columns to skip the stripping: # # Album.skip_string_stripping :foo # Album.new(:foo=>' A ').foo # => ' A ' # # Usage: # # # Make all model subclass instances strip strings (called before loading subclasses) # Sequel::Model.plugin :string_stripper # # # Make the Album class strip strings # Album.plugin :string_stripper module StringStripper def self.apply(model) model.plugin(:input_transformer, :string_stripper){|v| (v.is_a?(String) && !v.is_a?(SQL::Blob)) ? v.strip : v} end def self.configure(model) model.instance_eval{set_skipped_string_stripping_columns if @dataset} end module ClassMethods Plugins.after_set_dataset(self, :set_skipped_string_stripping_columns) # Skip stripping for the given columns. def skip_string_stripping(*columns) skip_input_transformer(:string_stripper, *columns) end # Return true if the column should not have values stripped. 
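        # For example (assuming the skip was set up as in the plugin
        # documentation above):
        #
        #   Album.skip_string_stripping?(:foo) # => true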
def skip_string_stripping?(column) skip_input_transformer?(:string_stripper, column) end private # Automatically skip stripping of blob columns def set_skipped_string_stripping_columns if @db_schema blob_columns = @db_schema.map{|k,v| k if v[:type] == :blob}.compact skip_string_stripping(*blob_columns) end end end end end end ������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/lib/sequel/plugins/subclasses.rb��������������������������������������������������0000664�0000000�0000000�00000004255�12201565355�0022225�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������module Sequel module Plugins # The Subclasses plugin keeps track of all subclasses of the # current model class. Direct subclasses are available via the # subclasses method, and all descendent classes are available via the # descendents method. # # c = Class.new(Sequel::Model) # c.plugin :subclasses # sc1 = Class.new(c) # sc2 = Class.new(c) # ssc1 = Class.new(sc1) # c.subclasses # [sc1, sc2] # sc1.subclasses # [ssc1] # sc2.subclasses # [] # ssc1.subclasses # [] # c.descendents # [sc1, ssc1, sc2] # # You can provide a block when loading the plugin, and it will be called # with each subclass created: # # a = [] # Sequel::Model.plugin(:subclasses){|sc| a << sc} # class A < Sequel::Model; end # class B < Sequel::Model; end # a # => [A, B] module Subclasses # Initialize the subclasses instance variable for the model. def self.apply(model, &block) model.instance_variable_set(:@subclasses, []) model.instance_variable_set(:@on_subclass, block) end module ClassMethods # Callable object that should be called with every descendent # class created. attr_reader :on_subclass # All subclasses for the current model. Does not # include the model itself. attr_reader :subclasses # All descendent classes of this model. def descendents Sequel.synchronize{_descendents} end Plugins.inherited_instance_variables(self, :@subclasses=>lambda{|v| []}, :@on_subclass=>nil) # Add the subclass to this model's current subclasses, # and initialize a new subclasses instance variable # in the subclass. def inherited(subclass) super Sequel.synchronize{subclasses << subclass} on_subclass.call(subclass) if on_subclass end private # Recursive, non-thread safe version of descendents, since # the mutex Sequel uses isn't reentrant. 
        def _descendents
          subclasses.map{|x| [x] + x.send(:_descendents)}.flatten
        end
      end
    end
  end
end
ruby-sequel-4.1.1/lib/sequel/plugins/tactical_eager_loading.rb
module Sequel
  module Plugins
    # The tactical_eager_loading plugin allows you to eagerly load
    # an association for all objects retrieved from the same dataset
    # without calling eager on the dataset.  If you attempt to load
    # associated objects for a record and the association for that
    # object is currently not cached, it assumes you want to get
    # the associated objects for all objects retrieved with the dataset that
    # retrieved the current object.
    #
    # Tactical eager loading only takes effect if you retrieved the
    # current object with Dataset#all, it doesn't work if you
    # retrieved the current object with Dataset#each.
    #
    # Basically, this allows the following code to issue only two queries:
    #
    #   Album.filter{id<100}.all do |a|
    #     a.artists
    #   end
    #
    # Usage:
    #
    #   # Make all model subclass instances use tactical eager loading (called before loading subclasses)
    #   Sequel::Model.plugin :tactical_eager_loading
    #
    #   # Make the Album class use tactical eager loading
    #   Album.plugin :tactical_eager_loading
    module TacticalEagerLoading
      module InstanceMethods
        # The dataset that retrieved this object, set if the object was
        # retrieved via Dataset#all.
        attr_accessor :retrieved_by

        # All model objects retrieved with this object, set if the object was
        # retrieved via Dataset#all.
        attr_accessor :retrieved_with

        private

        # If the association is not in the associations cache and the object
        # was retrieved via Dataset#all, eagerly load the association for all model
        # objects retrieved with the current object.
        def load_associated_objects(opts, reload=false)
          name = opts[:name]
          if !associations.include?(name) && retrieved_by && !frozen?
            begin
              retrieved_by.send(:eager_load, retrieved_with.reject{|o| o.frozen?}, name=>{})
            rescue Sequel::UndefinedAssociation
              # This can happen if class table inheritance is used and the association
              # is only defined in a subclass.  This particular instance can use the
              # association, but it can't be eagerly loaded as the parent class doesn't
              # have access to the association, and that's the class doing the eager loading.
              nil
            end
          end
          super
        end
      end

      module DatasetMethods
        private

        # Set the retrieved_with and retrieved_by attributes for the object
        # with the current dataset and array of all objects.
def post_load(objects) super objects.each do |o| next unless o.is_a?(Sequel::Model) o.retrieved_by = self o.retrieved_with = objects end end end end end end ����������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/lib/sequel/plugins/timestamps.rb��������������������������������������������������0000664�0000000�0000000�00000010044�12201565355�0022235�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������module Sequel module Plugins # The timestamps plugin creates hooks that automatically set create and # update timestamp fields. Both field names used are configurable, and you # can also set whether to overwrite existing create timestamps (false # by default), or whether to set the update timestamp when creating (also # false by default). # # Usage: # # # Timestamp all model instances using +created_at+ and +updated_at+ # # (called before loading subclasses) # Sequel::Model.plugin :timestamps # # # Timestamp Album instances, with custom column names # Album.plugin :timestamps, :create=>:created_on, :update=>:updated_on # # # Timestamp Artist instances, forcing an overwrite of the create # # timestamp, and setting the update timestamp when creating # Album.plugin :timestamps, :force=>true, :update_on_create=>true module Timestamps # Configure the plugin by setting the available options. Note that # if this method is run more than once, previous settings are ignored, # and it will just use the settings given or the default settings. Options: # * :create - The field to hold the create timestamp (default: :created_at) # * :force - Whether to overwrite an existing create timestamp (default: false) # * :update - The field to hold the update timestamp (default: :updated_at) # * :update_on_create - Whether to set the update timestamp to the create timestamp when creating (default: false) def self.configure(model, opts=OPTS) model.instance_eval do @create_timestamp_field = opts[:create]||:created_at @update_timestamp_field = opts[:update]||:updated_at @create_timestamp_overwrite = opts[:force]||false @set_update_timestamp_on_create = opts[:update_on_create]||false end end module ClassMethods # The field to store the create timestamp attr_reader :create_timestamp_field # The field to store the update timestamp attr_reader :update_timestamp_field # Whether to overwrite the create timestamp if it already exists def create_timestamp_overwrite? @create_timestamp_overwrite end Plugins.inherited_instance_variables(self, :@create_timestamp_field=>nil, :@update_timestamp_field=>nil, :@create_timestamp_overwrite=>nil, :@set_update_timestamp_on_create=>nil) # Whether to set the update timestamp to the create timestamp when creating def set_update_timestamp_on_create? 
@set_update_timestamp_on_create end end module InstanceMethods # Set the create timestamp when creating def before_create set_create_timestamp super end # Set the update timestamp when updating def before_update set_update_timestamp super end private # If the object has accessor methods for the create timestamp field, and # the create timestamp value is nil or overwriting it is allowed, set the # create timestamp field to the time given or the current time. If setting # the update timestamp on creation is configured, set the update timestamp # as well. def set_create_timestamp(time=nil) field = model.create_timestamp_field meth = :"#{field}=" self.send(meth, time||=Sequel.datetime_class.now) if respond_to?(field) && respond_to?(meth) && (model.create_timestamp_overwrite? || send(field).nil?) set_update_timestamp(time) if model.set_update_timestamp_on_create? end # Set the update timestamp to the time given or the current time if the # object has a setter method for the update timestamp field. def set_update_timestamp(time=nil) meth = :"#{model.update_timestamp_field}=" self.send(meth, time||Sequel.datetime_class.now) if respond_to?(meth) end end end end end ��������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/lib/sequel/plugins/touch.rb�������������������������������������������������������0000664�0000000�0000000�00000013320�12201565355�0021171�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������module Sequel module Plugins # The touch plugin adds a touch method to model instances, which saves # the object with a modified timestamp. By default, it uses the # :updated_at column, but you can set which column to use. # It also supports touching of associations, so that when the current # model object is updated or destroyed, the associated rows in the # database can have their modified timestamp updated to the current # timestamp. # # Since the instance touch method works on model instances, # it uses Time.now for the timestamp. The association touching works # on datasets, so it updates all related rows in a single query, using # the SQL standard CURRENT_TIMESTAMP. Both of these can be overridden # easily if necessary. 
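    #
    # For example, both values can be replaced by overriding the private
    # touch_instance_value and touch_association_value methods. A minimal
    # sketch that uses UTC for the instance timestamp (the model name is
    # illustrative):
    #
    #   class Album < Sequel::Model
    #     plugin :touch
    #
    #     private
    #
    #     # Use UTC instead of the local time zone when touching.
    #     def touch_instance_value
    #       Time.now.utc
    #     end
    #   end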
# # Usage: # # # Allow touching of all model instances (called before loading subclasses) # Sequel::Model.plugin :touch # # # Allow touching of Album instances, with a custom column # Album.plugin :touch, :column=>:updated_on # # # Allow touching of Artist instances, updating the albums and tags # # associations when touching, touching the +updated_on+ column for # # albums and the +updated_at+ column for tags # Artist.plugin :touch, :associations=>[{:albums=>:updated_on}, :tags] module Touch # The default column to update when touching TOUCH_COLUMN_DEFAULT = :updated_at def self.apply(model, opts=OPTS) model.instance_variable_set(:@touched_associations, {}) end # Set the touch_column and touched_associations variables for the model. # Options: # * :associations - The associations to touch when a model instance is # updated or destroyed. Can be a symbol for a single association, # a hash with association keys and column values, or an array of # symbols and/or hashes. If a symbol is used, the column used # when updating the associated objects is the model's touch_column. # If a hash is used, the value is used as the column to update. # * :column - The column to modify when touching a model instance. def self.configure(model, opts=OPTS) model.touch_column = opts[:column] || TOUCH_COLUMN_DEFAULT if opts[:column] || !model.touch_column model.touch_associations(opts[:associations]) if opts[:associations] end module ClassMethods # The column to modify when touching a model instance, as a symbol. Also used # as the default column when touching associations, if # the associations don't specify a column. attr_accessor :touch_column # A hash specifying the associations to touch when instances are # updated or destroyed. Keys are association name symbols and values # are column name symbols. attr_reader :touched_associations Plugins.inherited_instance_variables(self, :@touched_associations=>:dup, :@touch_column=>nil) # Add additional associations to be touched. See the :association option # of the Sequel::Plugin::Touch.configure method for the format of the associations # arguments. def touch_associations(*associations) associations.flatten.each do |a| a = {a=>touch_column} if a.is_a?(Symbol) a.each do |k,v| raise(Error, "invalid association: #{k}") unless association_reflection(k) touched_associations[k] = v end end end end module InstanceMethods # Touch all of the model's touched_associations when destroying the object. def after_destroy super touch_associations end # Touch all of the model's touched_associations when updating the object. def after_update super touch_associations end # Touch the model object. If a column is not given, use the model's touch_column # as the column. If the column to use is not one of the model's columns, just # save the changes to the object instead of attempting to a value that doesn't # exist. def touch(column=nil) if column set(column=>touch_instance_value) else column = model.touch_column set(column=>touch_instance_value) if columns.include?(column) end save_changes end private # The value to use when modifying the touch column for the association datasets. Uses # the SQL standard CURRENT_TIMESTAMP. def touch_association_value Sequel::CURRENT_TIMESTAMP end # Update the updated at field for all associated objects that should be touched. def touch_associations model.touched_associations.each do |assoc, column| r = model.association_reflection(assoc) next unless r.can_have_associated_objects?(self) ds = send(r.dataset_method) if ds.send(:joined_dataset?) 
# Can't update all values at once, so update each instance individually. # Instead if doing a simple save, update via the instance's dataset, # to avoid going into an infinite loop in some cases. send(r[:name]).each{|x| x.this.update(column=>touch_association_value)} else # Update all values at once for performance reasons. ds.update(column=>touch_association_value) end end end # The value to use when modifying the touch column for the model instance. # Uses Time.now to work well with typecasting. def touch_instance_value Time.now end end end end end ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/lib/sequel/plugins/tree.rb��������������������������������������������������������0000664�0000000�0000000�00000011022�12201565355�0021003�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������module Sequel module Plugins # The Tree plugin adds additional associations and methods that allow you to # treat a Model as a tree. # # A column for holding the parent key is required and is :parent_id by default. # This may be overridden by passing column name via :key. # # Optionally, a column to control order of nodes returned can be specified # by passing column name via :order. # # If you pass true for the :single_root option, the class will ensure there is # only ever one root in the tree. # # Examples: # # class Node < Sequel::Model # plugin :tree # end # # class Node < Sequel::Model # plugin :tree, :key=>:parentid, :order=>:position # end module Tree # Create parent and children associations. Any options # specified are passed to both associations. You can # specify options to use for the parent association # using a :parent option, and options to use for the # children association using a :children option. def self.apply(model, opts=OPTS) opts = opts.dup opts[:class] = model model.instance_eval do @parent_column = (opts[:key] ||= :parent_id) @tree_order = opts[:order] end par = opts.merge(opts.fetch(:parent, {})) parent = par.fetch(:name, :parent) chi = opts.merge(opts.fetch(:children, {})) children = chi.fetch(:name, :children) par[:reciprocal] = children chi[:recripocal] = parent model.many_to_one parent, par model.one_to_many children, chi model.plugin SingleRoot if opts[:single_root] end module ClassMethods # The column symbol or array of column symbols on which to order the tree. attr_accessor :tree_order # The symbol for the column containing the value pointing to the # parent of the leaf. attr_accessor :parent_column Plugins.inherited_instance_variables(self, :@parent_column=>nil, :@tree_order=>nil) # Returns list of all root nodes (those with no parent nodes). # # TreeClass.roots # => [root1, root2] def roots roots_dataset.all end # Returns the dataset for retrieval of all root nodes # # TreeClass.roots_dataset => Sequel#Dataset def roots_dataset ds = filter(parent_column => nil) ds = ds.order(*tree_order) if tree_order ds end end module InstanceMethods # Returns list of ancestors, starting from parent until root. 
# # subchild1.ancestors # => [child1, root] def ancestors node, nodes = self, [] nodes << node = node.parent while node.parent nodes end # Returns list of all descendants of this node, recursively. # # root.descendants # => [child1, child2, subchild1, subchild2] def descendants nodes = children.dup nodes.each{|child| nodes.concat(child.descendants)} nodes end # Returns the root node of the tree that this node descends from. # This node is returned if it is a root node itself. def root ancestors.last || self end # Returns true if this is a root node, false otherwise. def root? !new? && self[model.parent_column].nil? end # Returns all siblings and a reference to the current node. # # subchild1.self_and_siblings # => [subchild1, subchild2] def self_and_siblings parent ? parent.children : model.roots end # Returns all siblings of the current node. # # subchild1.siblings # => [subchild2] def siblings self_and_siblings - [self] end end # Plugin included when :single_root option is passed. module SingleRoot module ClassMethods # Returns the single root node. def root roots_dataset.first end end module InstanceMethods # Hook that prevents a second root from being created. def before_save if self[model.parent_column].nil? && (root = model.root) && pk != root.pk raise TreeMultipleRootError, "there is already a root #{model.name} defined" end super end end end class TreeMultipleRootError < Error; end end end end
ruby-sequel-4.1.1/lib/sequel/plugins/typecast_on_load.rb
module Sequel module Plugins # The TypecastOnLoad plugin exists because most of Sequel's database adapters don't # have complete control over typecasting, and may return columns that aren't # typecast correctly (with correct being defined as how the model object # would typecast the same column values). # # This plugin makes model loading call the setter methods (which typecast # by default) for all columns given. You can either specify the columns to # typecast on load in the plugin call itself, or afterwards using # add_typecast_on_load_columns: # # Album.plugin :typecast_on_load, :release_date, :record_date # # or: # Album.plugin :typecast_on_load # Album.add_typecast_on_load_columns :release_date, :record_date # # If the database returns release_date and record_date columns as strings # instead of dates, this will ensure that if you access those columns through # the model object, you'll get Date objects instead of strings. module TypecastOnLoad # Call add_typecast_on_load_columns on the passed column arguments.
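#
# A minimal usage sketch (assumptions: an Album model whose adapter returns
# the +release_date+ column as a string; the model and column names are
# illustrative, not part of this file):
#
#   Album.plugin :typecast_on_load, :release_date
#   album = Album.first
#   album.release_date # => Date instance instead of a string like "2013-08-01"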
def self.configure(model, *columns) model.instance_eval do @typecast_on_load_columns ||= [] add_typecast_on_load_columns(*columns) end end module ClassMethods # The columns to typecast on load for this model. attr_reader :typecast_on_load_columns # Add additional columns to typecast on load for this model. def add_typecast_on_load_columns(*columns) @typecast_on_load_columns.concat(columns) end # Typecast values using #load_typecast when the values are retrieved # from the database. def call(values) super.load_typecast end Plugins.inherited_instance_variables(self, :@typecast_on_load_columns=>:dup) end module InstanceMethods # Call the setter method for each of the model's typecast_on_load_columns # with the current value, so it can be typecast correctly. def load_typecast model.typecast_on_load_columns.each do |c| if v = values[c] send("#{c}=", v) end end changed_columns.clear self end private # Typecast values using #load_typecast when the values are refreshed manually. def _refresh_set_values(values) ret = super load_typecast ret end # Typecast values using #load_typecast when the values are refreshed # automatically after a save. def _save_set_values(values) ret = super load_typecast ret end end end end end
ruby-sequel-4.1.1/lib/sequel/plugins/unlimited_update.rb
module Sequel module Plugins # The unlimited_update plugin is designed to work around a # MySQL warning in replicated environments, which occurs if # you issue an UPDATE with a LIMIT clause. No other # database Sequel supports will create an UPDATE clause with # a LIMIT, and in non-replicated MySQL environments, MySQL # doesn't issue a warning. Note that even in replicated # environments the MySQL warning is harmless, as Sequel # restricts an update to rows with a matching primary key, # which should be unique. # # Usage: # # # Make all model subclasses not use a limit for update # Sequel::Model.plugin :unlimited_update # # # Make the Album class not use a limit for update # Album.plugin :unlimited_update module UnlimitedUpdate module InstanceMethods private # Use an unlimited dataset for updates.
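#
# A sketch of the intended effect on MySQL (table name and values are
# illustrative; other databases already omit the limit):
#
#   album.update(:name=>'RF')
#   # without the plugin: UPDATE albums SET name = 'RF' WHERE (id = 1) LIMIT 1
#   # with the plugin:    UPDATE albums SET name = 'RF' WHERE (id = 1)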
def _update_dataset super.unlimited end end end end end
ruby-sequel-4.1.1/lib/sequel/plugins/update_primary_key.rb
module Sequel module Plugins # The update_primary_key plugin allows you to modify an object's # primary key and then save the record. Sequel does not work # correctly with primary key modifications by default. Sequel # is designed to work with surrogate primary keys that never need to be # modified, but this plugin makes it work correctly with natural # primary keys that may need to be modified. Example: # # album = Album[1] # album.id = 2 # album.save # # Usage: # # # Make all model subclasses support primary key updates # # (called before loading subclasses) # Sequel::Model.plugin :update_primary_key # # # Make the Album class support primary key updates # Album.plugin :update_primary_key module UpdatePrimaryKey module InstanceMethods # Clear the cached primary key. def after_update super @pk_hash = nil end # Use the cached primary key if one is present. def pk_hash @pk_hash || super end private # If the primary key column changes, clear related associations and cache # the previous primary key values. def change_column_value(column, value) pk = primary_key if (pk.is_a?(Array) ? pk.include?(column) : pk == column) @pk_hash ||= pk_hash unless new? clear_associations_using_primary_key end super end # Clear associations that are likely to be tied to the primary key. # Note that this currently can clear additional associations that don't reference # the primary key (such as one_to_many columns referencing a column other than the # primary key). def clear_associations_using_primary_key associations.keys.each do |k| associations.delete(k) if model.association_reflection(k)[:type] != :many_to_one end end # Do not use prepared statements for update queries, since they don't work # in the case where the primary key has changed. def use_prepared_statements_for?(type) if type == :update false else super end end end end end end
ruby-sequel-4.1.1/lib/sequel/plugins/validation_class_methods.rb
module Sequel extension :blank module Plugins # Sequel's built-in validation_class_methods plugin adds backwards compatibility # for the legacy class-level validation methods (e.g. validates_presence_of :column). # # It is recommended to use the validation_helpers plugin instead of this one, # as it is less complex and more flexible.
However, this plugin provides reflection # support, since it is class-level, while the instance-level validation_helpers # plugin does not. # # Usage: # # # Add the validation class methods to all model subclasses (called before loading subclasses) # Sequel::Model.plugin :validation_class_methods # # # Add the validation class methods to the Album class # Album.plugin :validation_class_methods module ValidationClassMethods # Setup the validations hash for the given model. def self.apply(model) model.class_eval do @validations = {} @validation_reflections = {} end end module ClassMethods # A hash of validations for this model class. Keys are column symbols, # values are arrays of validation procs. attr_reader :validations # A hash of validation reflections for this model class. Keys are column # symbols, values are an array of two element arrays, with the first element # being the validation type symbol and the second being a hash of validation # options. attr_reader :validation_reflections # The Generator class is used to generate validation definitions using # the validates {} idiom. class Generator # Initializes a new generator. def initialize(receiver ,&block) @receiver = receiver instance_eval(&block) end # Delegates method calls to the receiver by calling receiver.validates_xxx. def method_missing(m, *args, &block) @receiver.send(:"validates_#{m}", *args, &block) end # This object responds to all validates_* methods the model responds to. def respond_to_missing?(meth, include_private) @receiver.respond_to?(:"validates_#{meth}", include_private) end end # Returns true if validations are defined. def has_validations? !validations.empty? end Plugins.inherited_instance_variables(self, :@validations=>:hash_dup, :@validation_reflections=>:hash_dup) # Instructs the model to skip validations defined in superclasses def skip_superclass_validations superclass.validations.each do |att, procs| if ps = @validations[att] @validations[att] -= procs end end @skip_superclass_validations = true end # Instructs the model to skip validations defined in superclasses def skip_superclass_validations? @skip_superclass_validations end # Defines validations by converting a longhand block into a series of # shorthand definitions. For example: # # class MyClass < Sequel::Model # validates do # length_of :name, :minimum => 6 # length_of :password, :minimum => 8 # end # end # # is equivalent to: # class MyClass < Sequel::Model # validates_length_of :name, :minimum => 6 # validates_length_of :password, :minimum => 8 # end def validates(&block) Generator.new(self, &block) end # Validates the given instance. def validate(o) validations.each do |att, procs| v = case att when Array att.collect{|a| o.send(a)} else o.send(att) end procs.each {|tag, p| p.call(o, att, v)} end end # Validates acceptance of an attribute. Just checks that the value # is equal to the :accept option. This method is unique in that # :allow_nil is assumed to be true instead of false. # # Possible Options: # * :accept - The value required for the object to be valid (default: '1') # * :message - The message to use (default: 'is not accepted') def validates_acceptance_of(*atts) opts = { :message => 'is not accepted', :allow_nil => true, :accept => '1', :tag => :acceptance, }.merge!(extract_options!(atts)) reflect_validation(:acceptance, opts, atts) atts << opts validates_each(*atts) do |o, a, v| o.errors.add(a, opts[:message]) unless v == opts[:accept] end end # Validates confirmation of an attribute. 
Checks that the object has # a _confirmation value matching the current value. For example: # # validates_confirmation_of :blah # # Just makes sure that object.blah = object.blah_confirmation. Often used for passwords # or email addresses on web forms. # # Possible Options: # * :message - The message to use (default: 'is not confirmed') def validates_confirmation_of(*atts) opts = { :message => 'is not confirmed', :tag => :confirmation, }.merge!(extract_options!(atts)) reflect_validation(:confirmation, opts, atts) atts << opts validates_each(*atts) do |o, a, v| o.errors.add(a, opts[:message]) unless v == o.send(:"#{a}_confirmation") end end # Adds a validation for each of the given attributes using the supplied # block. The block must accept three arguments: instance, attribute and # value, e.g.: # # validates_each :name, :password do |object, attribute, value| # object.errors.add(attribute, 'is not nice') unless value.nice? # end # # Possible Options: # * :allow_blank - Whether to skip the validation if the value is blank. # * :allow_missing - Whether to skip the validation if the attribute isn't a key in the # values hash. This is different from allow_nil, because Sequel only sends the attributes # in the values when doing an insert or update. If the attribute is not present, Sequel # doesn't specify it, so the database will use the table's default value. This is different # from having an attribute in values with a value of nil, which Sequel will send as NULL. # If your database table has a non NULL default, this may be a good option to use. You # don't want to use allow_nil, because if the attribute is in values but has a value nil, # Sequel will attempt to insert a NULL value into the database, instead of using the # database's default. # * :allow_nil - Whether to skip the validation if the value is nil. # * :if - A symbol (indicating an instance_method) or proc (which is instance_evaled) # skipping this validation if it returns nil or false. # * :tag - The tag to use for this validation. def validates_each(*atts, &block) opts = extract_options!(atts) blk = if (i = opts[:if]) || (am = opts[:allow_missing]) || (an = opts[:allow_nil]) || (ab = opts[:allow_blank]) proc do |o,a,v| next if i && !validation_if_proc(o, i) next if an && Array(v).all?{|x| x.nil?} next if ab && Array(v).all?{|x| x.blank?} next if am && Array(a).all?{|x| !o.values.has_key?(x)} block.call(o,a,v) end else block end tag = opts[:tag] atts.each do |a| a_vals = Sequel.synchronize{validations[a] ||= []} if tag && (old = a_vals.find{|x| x[0] == tag}) old[1] = blk else a_vals << [tag, blk] end end end # Validates the format of an attribute, checking the string representation of the # value against the regular expression provided by the :with option. # # Possible Options: # * :message - The message to use (default: 'is invalid') # * :with - The regular expression to validate the value with (required). def validates_format_of(*atts) opts = { :message => 'is invalid', :tag => :format, }.merge!(extract_options!(atts)) unless opts[:with].is_a?(Regexp) raise ArgumentError, "A regular expression must be supplied as the :with option of the options hash" end reflect_validation(:format, opts, atts) atts << opts validates_each(*atts) do |o, a, v| o.errors.add(a, opts[:message]) unless v.to_s =~ opts[:with] end end # Validates the length of an attribute. 
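#
# For example (the column names are illustrative):
#
#   validates_length_of :name, :within=>3..40
#   validates_length_of :password, :minimum=>8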
# # Possible Options: # * :is - The exact size required for the value to be valid (no default) # * :maximum - The maximum size allowed for the value (no default) # * :message - The message to use (no default, overrides :nil_message, :too_long, :too_short, and :wrong_length # options if present) # * :minimum - The minimum size allowed for the value (no default) # * :nil_message - The message to use if :maximum option is used and the value is nil (default: 'is not present') # * :too_long - The message to use if the value is too long (default: 'is too long') # * :too_short - The message to use if the value is too short (default: 'is too short') # * :within - The array/range that must include the size of the value for it to be valid (no default) # * :wrong_length - The message to use if the value is not valid (default: 'is the wrong length') def validates_length_of(*atts) opts = { :nil_message => 'is not present', :too_long => 'is too long', :too_short => 'is too short', :wrong_length => 'is the wrong length' }.merge!(extract_options!(atts)) opts[:tag] ||= ([:length] + [:maximum, :minimum, :is, :within].reject{|x| !opts.include?(x)}).join('-').to_sym reflect_validation(:length, opts, atts) atts << opts validates_each(*atts) do |o, a, v| if m = opts[:maximum] o.errors.add(a, opts[:message] || (v ? opts[:too_long] : opts[:nil_message])) unless v && v.size <= m end if m = opts[:minimum] o.errors.add(a, opts[:message] || opts[:too_short]) unless v && v.size >= m end if i = opts[:is] o.errors.add(a, opts[:message] || opts[:wrong_length]) unless v && v.size == i end if w = opts[:within] o.errors.add(a, opts[:message] || opts[:wrong_length]) unless v && w.send(w.respond_to?(:cover?) ? :cover? : :include?, v.size) end end end # Validates whether an attribute is a number. # # Possible Options: # * :message - The message to use (default: 'is not a number') # * :only_integer - Whether only integers are valid values (default: false) def validates_numericality_of(*atts) opts = { :message => 'is not a number', :tag => :numericality, }.merge!(extract_options!(atts)) reflect_validation(:numericality, opts, atts) atts << opts validates_each(*atts) do |o, a, v| begin if opts[:only_integer] Kernel.Integer(v.to_s) else Kernel.Float(v.to_s) end rescue o.errors.add(a, opts[:message]) end end end # Validates the presence of an attribute. Requires the value not be blank, # with false considered present instead of absent. # # Possible Options: # * :message - The message to use (default: 'is not present') def validates_presence_of(*atts) opts = { :message => 'is not present', :tag => :presence, }.merge!(extract_options!(atts)) reflect_validation(:presence, opts, atts) atts << opts validates_each(*atts) do |o, a, v| o.errors.add(a, opts[:message]) if v.blank? && v != false end end # Validates that an attribute is within a specified range or set of values. # # Possible Options: # * :in - An array or range of values to check for validity (required) # * :message - The message to use (default: 'is not in range or set: <specified range>') def validates_inclusion_of(*atts) opts = extract_options!(atts) n = opts[:in] unless n && (n.respond_to?(:cover?) || n.respond_to?(:include?)) raise ArgumentError, "The :in parameter is required, and must respond to cover? or include?" end opts[:message] ||= "is not in range or set: #{n.inspect}" reflect_validation(:inclusion, opts, atts) atts << opts validates_each(*atts) do |o, a, v| o.errors.add(a, opts[:message]) unless n.send(n.respond_to?(:cover?) ? :cover? : :include?, v) end end # Validates whether an attribute has the correct ruby type for the associated # database type. This is generally useful in conjunction with # raise_on_typecast_failure = false, to handle typecasting errors at validation # time instead of at setter time. # # Possible Options: # * :message - The message to use (default: 'is not a valid (integer|datetime|etc.)') def validates_schema_type(*atts) opts = { :tag => :schema_type, }.merge!(extract_options!(atts)) reflect_validation(:schema_type, opts, atts) atts << opts validates_each(*atts) do |o, a, v| next if v.nil? || (klass = o.send(:schema_type_class, a)).nil? if klass.is_a?(Array) ? !klass.any?{|kls| v.is_a?(kls)} : !v.is_a?(klass) message = opts[:message] || "is not a valid #{Array(klass).join(" or ").downcase}" o.errors.add(a, message) end end end # Validates only if the fields in the model (specified by atts) are # unique in the database. Pass an array of fields instead of multiple # fields to specify that the combination of fields must be unique, # instead of that each field should have a unique value. # # This means that the code: # validates_uniqueness_of([:column1, :column2]) # validates the grouping of column1 and column2 while # validates_uniqueness_of(:column1, :column2) # validates them separately. # # You should also add a unique index in the # database, as this suffers from a fairly obvious race condition. # # Possible Options: # * :message - The message to use (default: 'is already taken') def validates_uniqueness_of(*atts) opts = { :message => 'is already taken', :tag => :uniqueness, }.merge!(extract_options!(atts)) reflect_validation(:uniqueness, opts, atts) atts << opts validates_each(*atts) do |o, a, v| error_field = a a = Array(a) v = Array(v) next if v.empty? || !v.all? ds = o.class.filter(a.zip(v)) num_dups = ds.count allow = if num_dups == 0 # No unique value in the database true elsif num_dups > 1 # Multiple "unique" values in the database!! # Someone didn't add a unique index false elsif o.new? # New record, but unique value already exists in the database false elsif ds.first === o # Unique value exists in database, but for the same record, so the update won't cause a duplicate record true else false end o.errors.add(error_field, opts[:message]) unless allow end end private # Removes and returns the last member of the array if it is a hash. Otherwise, # an empty hash is returned. This method is useful when writing methods that # take an options hash as the last parameter. def extract_options!(array) array.last.is_a?(Hash) ? array.pop : {} end # Add the validation reflection to the class's validations. def reflect_validation(type, opts, atts) atts.each do |att| (validation_reflections[att] ||= []) << [type, opts] end end # Handle the :if option for validations def validation_if_proc(o, i) case i when Symbol o.send(i) when Proc o.instance_eval(&i) else raise(::Sequel::Error, "invalid value for :if validation option") end end end module InstanceMethods # Validates the object.
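#
# A sketch of the effect (assuming validates_presence_of :name was declared
# on the Album class; the model is illustrative):
#
#   album = Album.new
#   album.valid? # => false; valid? runs the class-level validations via this hook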
def validate model.validate(self) super end end end end end
ruby-sequel-4.1.1/lib/sequel/plugins/validation_helpers.rb
module Sequel module Plugins # The validation_helpers plugin contains instance method equivalents for most of the legacy # class-level validations. The names and APIs are different, though. Example: # # Sequel::Model.plugin :validation_helpers # class Album < Sequel::Model # def validate # super # validates_min_length 1, :num_tracks # end # end # # The validates_unique method has a unique API, but the other validations have the API explained here: # # Arguments: # atts :: Single attribute symbol or an array of attribute symbols specifying the # attribute(s) to validate. # Options: # :allow_blank :: Whether to skip the validation if the value is blank. You should # make sure all objects respond to blank if you use this option, which you can do by: # Sequel.extension :blank # :allow_missing :: Whether to skip the validation if the attribute isn't a key in the # values hash. This is different from allow_nil, because Sequel only sends the attributes # in the values when doing an insert or update. If the attribute is not present, Sequel # doesn't specify it, so the database will use the table's default value. This is different # from having an attribute in values with a value of nil, which Sequel will send as NULL. # If your database table has a non NULL default, this may be a good option to use. You # don't want to use allow_nil, because if the attribute is in values but has a value nil, # Sequel will attempt to insert a NULL value into the database, instead of using the # database's default. # :allow_nil :: Whether to skip the validation if the value is nil. # :message :: The message to use. Can be a string which is used directly, or a # proc which is called. If the validation method takes an argument before the array of attributes, # that argument is passed as an argument to the proc. # # The default validation options for all models can be modified by # changing the values of the Sequel::Plugins::ValidationHelpers::DEFAULT_OPTIONS hash. You # can change the default options on a per model basis # by overriding a private instance method default_validation_helpers_options. # # By changing the default options, you can set up internationalization of the # error messages. For example, you would modify the default options: # # Sequel::Plugins::ValidationHelpers::DEFAULT_OPTIONS.merge!( # :exact_length=>{:message=>lambda{|exact| I18n.t("errors.exact_length", :exact => exact)}}, # :integer=>{:message=>lambda{I18n.t("errors.integer")}}, # ...
# ) # # and then use something like this in your yaml translation file: # # en: # errors: # exact_length: "is not %{exact} characters" # integer: "is not a number" # # Note that if you want to support internationalization of Errors#full_messages, # you need to override the method. Here's an example: # # class Sequel::Model::Errors # ATTRIBUTE_JOINER = I18n.t('errors.joiner').freeze # def full_messages # inject([]) do |m, kv| # att, errors = *kv # att.is_a?(Array) ? Array(att).map!{|v| I18n.t("attributes.#{v}")} : att = I18n.t("attributes.#{att}") # errors.each {|e| m << (e.is_a?(LiteralString) ? e : "#{Array(att).join(ATTRIBUTE_JOINER)} #{e}")} # m # end # end # end module ValidationHelpers # Default validation options used by Sequel. Can be modified to change the error # messages for all models (e.g. for internationalization), or to set certain # default options for validations (e.g. :allow_nil=>true for all validates_format). DEFAULT_OPTIONS = { :exact_length=>{:message=>lambda{|exact| "is not #{exact} characters"}}, :format=>{:message=>lambda{|with| 'is invalid'}}, :includes=>{:message=>lambda{|set| "is not in range or set: #{set.inspect}"}}, :integer=>{:message=>lambda{"is not a number"}}, :length_range=>{:message=>lambda{|range| "is too short or too long"}}, :max_length=>{:message=>lambda{|max| "is longer than #{max} characters"}, :nil_message=>lambda{"is not present"}}, :min_length=>{:message=>lambda{|min| "is shorter than #{min} characters"}}, :not_null=>{:message=>lambda{"is not present"}}, :numeric=>{:message=>lambda{"is not a number"}}, :type=>{:message=>lambda{|klass| klass.is_a?(Array) ? "is not a valid #{klass.join(" or ").downcase}" : "is not a valid #{klass.to_s.downcase}"}}, :presence=>{:message=>lambda{"is not present"}}, :unique=>{:message=>lambda{'is already taken'}} } module InstanceMethods # Check that the attribute values are the given exact length. def validates_exact_length(exact, atts, opts=OPTS) validatable_attributes_for_type(:exact_length, atts, opts){|a,v,m| validation_error_message(m, exact) if v.nil? || v.length != exact} end # Check the string representation of the attribute value(s) against the regular expression with. def validates_format(with, atts, opts=OPTS) validatable_attributes_for_type(:format, atts, opts){|a,v,m| validation_error_message(m, with) unless v.to_s =~ with} end # Check attribute value(s) is included in the given set. def validates_includes(set, atts, opts=OPTS) validatable_attributes_for_type(:includes, atts, opts){|a,v,m| validation_error_message(m, set) unless set.send(set.respond_to?(:cover?) ? :cover? : :include?, v)} end # Check attribute value(s) string representation is a valid integer. def validates_integer(atts, opts=OPTS) validatable_attributes_for_type(:integer, atts, opts) do |a,v,m| begin Kernel.Integer(v.to_s) nil rescue validation_error_message(m) end end end # Check that the attribute values length is in the specified range. def validates_length_range(range, atts, opts=OPTS) validatable_attributes_for_type(:length_range, atts, opts){|a,v,m| validation_error_message(m, range) if v.nil? || !range.send(range.respond_to?(:cover?) ? :cover? : :include?, v.length)} end # Check that the attribute values are not longer than the given max length. # # Accepts a :nil_message option that is the error message to use when the # value is nil instead of being too long. def validates_max_length(max, atts, opts=OPTS) validatable_attributes_for_type(:max_length, atts, opts){|a,v,m| v ? 
validation_error_message(m, max) : validation_error_message(opts[:nil_message] || DEFAULT_OPTIONS[:max_length][:nil_message]) if v.nil? || v.length > max} end # Check that the attribute values are not shorter than the given min length. def validates_min_length(min, atts, opts=OPTS) validatable_attributes_for_type(:min_length, atts, opts){|a,v,m| validation_error_message(m, min) if v.nil? || v.length < min} end # Check attribute value(s) are not NULL/nil. def validates_not_null(atts, opts=OPTS) validatable_attributes_for_type(:not_null, atts, opts){|a,v,m| validation_error_message(m) if v.nil?} end # Check attribute value(s) string representation is a valid float. def validates_numeric(atts, opts=OPTS) validatable_attributes_for_type(:numeric, atts, opts) do |a,v,m| begin Kernel.Float(v.to_s) nil rescue validation_error_message(m) end end end # Validates for all of the model columns (or just the given columns) # that the column value is an instance of the expected class based on # the column's schema type. def validates_schema_types(atts=keys, opts=OPTS) Array(atts).each do |k| if type = schema_type_class(k) validates_type(type, k, {:allow_nil=>true}.merge(opts)) end end end # Check if value is an instance of a class. If +klass+ is an array, # the value must be an instance of one of the classes in the array. def validates_type(klass, atts, opts=OPTS) klass = klass.to_s.constantize if klass.is_a?(String) || klass.is_a?(Symbol) validatable_attributes_for_type(:type, atts, opts) do |a,v,m| if klass.is_a?(Array) ? !klass.any?{|kls| v.is_a?(kls)} : !v.is_a?(klass) validation_error_message(m, klass) end end end # Check attribute value(s) is not considered blank by the database, but allow false values. def validates_presence(atts, opts=OPTS) validatable_attributes_for_type(:presence, atts, opts){|a,v,m| validation_error_message(m) if model.db.send(:blank_object?, v) && v != false} end # Checks that there are no duplicate values in the database for the given # attributes. Pass an array of fields instead of multiple # fields to specify that the combination of fields must be unique, # instead of that each field should have a unique value. # # This means that the code: # validates_unique([:column1, :column2]) # validates the grouping of column1 and column2 while # validates_unique(:column1, :column2) # validates them separately. # # You can pass a block, which is yielded the dataset in which the columns # must be unique. So if you are doing a soft delete of records, in which # the name must be unique, but only for active records: # # validates_unique(:name){|ds| ds.filter(:active)} # # You should also add a unique index in the # database, as this suffers from a fairly obvious race condition. # # This validation does not respect the :allow_* options that the other validations accept, # since it can deal with a grouping of multiple attributes. # # Possible Options: # :message :: The message to use (default: 'is already taken') # :only_if_modified :: Only check the uniqueness if the object is new or # one of the columns has been modified. # :where :: A callable object where call takes three arguments, a dataset, # the current object, and an array of columns, and should return # a modified dataset that is filtered to include only rows with # the same values as the current object for each column in the array. 
# # If you want to do a case insensitive uniqueness validation on a database that # is case sensitive by default, you can use: # # :where=>(proc do |ds, obj, cols| # ds.where(cols.map do |c| # v = obj.send(c) # v = v.downcase if v # [Sequel.function(:lower, c), v] # end) # end) def validates_unique(*atts) opts = default_validation_helpers_options(:unique) if atts.last.is_a?(Hash) opts = opts.merge(atts.pop) end message = validation_error_message(opts[:message]) where = opts[:where] atts.each do |a| arr = Array(a) next if arr.any?{|x| errors.on(x)} next if opts[:only_if_modified] && !new? && !arr.any?{|x| changed_columns.include?(x)} ds = if where where.call(model.dataset, self, arr) else vals = arr.map{|x| send(x)} next if vals.any?{|v| v.nil?} model.where(arr.zip(vals)) end ds = yield(ds) if block_given? ds = ds.exclude(pk_hash) unless new? errors.add(a, message) unless ds.count == 0 end end private # The default options hash for the given type of validation. Can # be overridden on a per-model basis for different per model defaults. # The hash returned must include a :message option that is either a # proc or string. def default_validation_helpers_options(type) DEFAULT_OPTIONS[type] end # Skip validating any attribute that matches one of the allow_* options. # Otherwise, yield the attribute, value, and passed option :message to # the block. If the block returns anything except nil or false, add it as # an error message for that attribute. def validatable_attributes(atts, opts) am, an, ab, m = opts.values_at(:allow_missing, :allow_nil, :allow_blank, :message) Array(atts).each do |a| next if am && !values.has_key?(a) v = send(a) next if an && v.nil? next if ab && v.respond_to?(:blank?) && v.blank? if message = yield(a, v, m) errors.add(a, message) end end end # Merge the given options with the default options for the given type # and call validatable_attributes with the merged options. def validatable_attributes_for_type(type, atts, opts, &block) validatable_attributes(atts, default_validation_helpers_options(type).merge(opts), &block) end # The validation error message to use, as a string. If message # is a Proc, call it with the args. Otherwise, assume it is a string and # return it. def validation_error_message(message, *args) message.is_a?(Proc) ? message.call(*args) : message end end end end end
ruby-sequel-4.1.1/lib/sequel/plugins/xml_serializer.rb
require 'nokogiri' module Sequel module Plugins # The xml_serializer plugin handles serializing entire Sequel::Model # objects to XML, and deserializing XML into a single Sequel::Model # object or an array of Sequel::Model objects. It requires the # nokogiri library.
# # Basic Example: # # album = Album[1] # puts album.to_xml # # Output: # <?xml version="1.0"?> # <album> # <id>1</id> # <name>RF</name> # <artist_id>2</artist_id> # </album> # # You can provide options to control the XML output: # # puts album.to_xml(:only=>:name) # puts album.to_xml(:except=>[:id, :artist_id]) # # Output: # <?xml version="1.0"?> # <album> # <name>RF</name> # </album> # # album.to_xml(:include=>:artist) # # Output: # <?xml version="1.0"?> # <album> # <id>1</id> # <name>RF</name> # <artist_id>2</artist_id> # <artist> # <id>2</id> # <name>YJM</name> # </artist> # </album> # # You can use a hash value with <tt>:include</tt> to pass options # to associations: # # album.to_xml(:include=>{:artist=>{:only=>:name}}) # # Output: # <?xml version="1.0"?> # <album> # <id>1</id> # <name>RF</name> # <artist_id>2</artist_id> # <artist> # <name>YJM</name> # </artist> # </album> # # +to_xml+ also exists as a class and dataset method, both # of which return all objects in the dataset: # # Album.to_xml # Album.filter(:artist_id=>1).to_xml(:include=>:tags) # # If you have an existing array of model instances you want to convert to # XML, you can call the class to_xml method with the :array option: # # Album.to_xml(:array=>[Album[1], Album[2]]) # # In addition to creating XML, this plugin also enables Sequel::Model # classes to create instances directly from XML using the from_xml class # method: # # xml = album.to_xml # album = Album.from_xml(xml) # # The array_from_xml class method exists to parse arrays of model instances # from xml: # # xml = Album.filter(:artist_id=>1).to_xml # albums = Album.array_from_xml(xml) # # These do not necessarily round trip, since doing so would let users # create model objects with arbitrary values. By default, from_xml will # call set using values from the tags in the xml. If you want to specify the allowed # fields, you can use the :fields option, which will call set_fields with # the given fields: # # Album.from_xml(album.to_xml, :fields=>%w'id name') # # If you want to update an existing instance, you can use the from_xml # instance method: # # album.from_xml(xml) # # Both of these allow creation of cached associated objects, if you provide # the :associations option: # # album.from_xml(xml, :associations=>:artist) # # You can even provide options when setting up the associated objects: # # album.from_xml(xml, :associations=>{:artist=>{:fields=>%w'id name', :associations=>:tags}}) # # Usage: # # # Add XML output capability to all model subclass instances (called before loading subclasses) # Sequel::Model.plugin :xml_serializer # # # Add XML output capability to Album class instances # Album.plugin :xml_serializer module XmlSerializer module ClassMethods # Proc that camelizes the input string, used for the :camelize option CAMELIZE = proc{|s| s.camelize} # Proc that dasherizes the input string, used for the :dasherize option DASHERIZE = proc{|s| s.dasherize} # Proc that returns the input string as is, used if # no :name_proc, :dasherize, or :camelize option is used. IDENTITY = proc{|s| s} # Proc that underscores the input string, used for the :underscore option UNDERSCORE = proc{|s| s.underscore} # Return an array of instances of this class based on # the provided XML. def array_from_xml(xml, opts=OPTS) node = Nokogiri::XML(xml).children.first unless node raise Error, "Malformed XML used" end node.children.reject{|c| c.is_a?(Nokogiri::XML::Text)}.map{|c| from_xml_node(c, opts)} end # Return an instance of this class based on the provided # XML.
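#
# For example (assuming an Album model with a +name+ column, as in the
# examples above):
#
#   Album.from_xml("<album><name>RF</name></album>").name # => "RF"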
def from_xml(xml, opts=OPTS) from_xml_node(Nokogiri::XML(xml).children.first, opts) end # Return an instance of this class based on the given # XML node, which should be a Nokogiri::XML::Node instance. # This should probably not be used directly by user code. def from_xml_node(parent, opts=OPTS) new.from_xml_node(parent, opts) end # Return an appropriate Nokogiri::XML::Builder instance # used to create the XML. This should probably not be used # directly by user code. def xml_builder(opts=OPTS) if opts[:builder] opts[:builder] else builder_opts = if opts[:builder_opts] opts[:builder_opts] else {} end builder_opts[:encoding] = opts[:encoding] if opts.has_key?(:encoding) Nokogiri::XML::Builder.new(builder_opts) end end # Return a proc (or any other object that responds to []), # used for converting XML tag names to column or association # names when deserializing from XML. # This should probably not be used directly by user code. def xml_deserialize_name_proc(opts=OPTS) if opts[:name_proc] opts[:name_proc] elsif opts[:underscore] UNDERSCORE else IDENTITY end end # Return a proc (or any other object that responds to []), # used for formatting XML tag names when serializing to XML. # This should probably not be used directly by user code. def xml_serialize_name_proc(opts=OPTS) pr = if opts[:name_proc] opts[:name_proc] elsif opts[:dasherize] DASHERIZE elsif opts[:camelize] CAMELIZE else IDENTITY end proc{|s| "#{pr[s]}_"} end Plugins.def_dataset_methods(self, :to_xml) end module InstanceMethods # Update the contents of this instance based on the given XML. # Accepts the following options: # # :name_proc :: Proc or Hash that accepts a string and returns # a string, used to convert tag names to column or # association names. # :underscore :: Sets the :name_proc option to one that calls +underscore+ # on the input string. Requires that you load the inflector # extension or another library that adds String#underscore. def from_xml(xml, opts=OPTS) from_xml_node(Nokogiri::XML(xml).children.first, opts) end # Update the contents of this instance based on the given # XML node, which should be a Nokogiri::XML::Node instance. # By default, just calls set with a hash created from the content of the node. # # Options: # :associations :: Indicates that the associations cache should be updated by creating # a new associated object using data from the hash. Should be a Symbol # for a single association, an array of symbols for multiple associations, # or a hash with symbol keys and dependent association option hash values. # :fields :: Changes the behavior to call set_fields using the provided fields, instead of calling set. def from_xml_node(parent, opts=OPTS) unless parent raise Error, "Malformed XML used" end if !parent.children.empty? && parent.children.all?{|node| node.is_a?(Nokogiri::XML::Text)} raise Error, "XML consisting of just text nodes used" end if assocs = opts[:associations] assocs = case assocs when Symbol {assocs=>{}} when Array assocs_tmp = {} assocs.each{|v| assocs_tmp[v] = {}} assocs_tmp when Hash assocs else raise Error, ":associations should be Symbol, Array, or Hash if present" end assocs_hash = {} assocs.each{|k,v| assocs_hash[k.to_s] = v} assocs_present = [] end hash = {} populate_associations = {} name_proc = model.xml_deserialize_name_proc(opts) parent.children.each do |node| next if node.is_a?(Nokogiri::XML::Text) k = name_proc[node.name] if assocs_hash && assocs_hash[k] assocs_present << [k.to_sym, node] else hash[k] = node.key?('nil') ?
nil : node.children.first.to_s end end if assocs_present assocs_present.each do |assoc, node| assoc_opts = assocs[assoc] unless r = model.association_reflection(assoc) raise Error, "Association #{assoc} is not defined for #{model}" end populate_associations[assoc] = if r.returns_array? node.children.reject{|c| c.is_a?(Nokogiri::XML::Text)}.map{|c| r.associated_class.from_xml_node(c, assoc_opts)} else r.associated_class.from_xml_node(node, assoc_opts) end end end if fields = opts[:fields] set_fields(hash, fields, opts) else set(hash) end populate_associations.each do |assoc, values| associations[assoc] = values end self end # Return a string in XML format. If a block is given, yields the XML # builder object so you can add additional XML tags. # Accepts the following options: # # :builder :: The builder instance used to build the XML, # which should be an instance of Nokogiri::XML::Node. This # is necessary if you are serializing entire object graphs, # like associated objects. # :builder_opts :: Options to pass to the Nokogiri::XML::Builder # initializer, if the :builder option is not provided. # :camelize :: Sets the :name_proc option to one that calls +camelize+ # on the input string. Requires that you load the inflector # extension or another library that adds String#camelize. # :dasherize :: Sets the :name_proc option to one that calls +dasherize+ # on the input string. Requires that you load the inflector # extension or another library that adds String#dasherize. # :encoding :: The encoding to use for the XML output, passed # to the Nokogiri::XML::Builder initializer. # :except :: Symbol or Array of Symbols of columns not # to include in the XML output. # :include :: Symbol, Array of Symbols, or a Hash with # Symbol keys and Hash values specifying # associations or other non-column attributes # to include in the XML output. Using a nested # hash, you can pass options to associations # to affect the XML used for associated objects. # :name_proc :: Proc or Hash that accepts a string and returns # a string, used to format tag names. # :only :: Symbol or Array of Symbols of columns to only # include in the XML output, ignoring all other # columns. # :root_name :: The base name to use for the XML tag that # contains the data for this instance. This will # be the name of the root node if you are only serializing # a single object, but not if you are serializing # an array of objects using Model.to_xml or Dataset#to_xml. # :types :: Set to true to include type information for # all of the columns, pulled from the db_schema. def to_xml(opts=OPTS) vals = values types = opts[:types] inc = opts[:include] cols = if only = opts[:only] Array(only) else vals.keys - Array(opts[:except]) end name_proc = model.xml_serialize_name_proc(opts) x = model.xml_builder(opts) x.send(name_proc[opts.fetch(:root_name, model.send(:underscore, model.name).gsub('/', '__')).to_s]) do |x1| cols.each do |c| attrs = {} if types attrs[:type] = db_schema.fetch(c, {})[:type] end v = vals[c] if v.nil? attrs[:nil] = '' end x1.send(name_proc[c.to_s], v, attrs) end if inc.is_a?(Hash) inc.each{|k, v| to_xml_include(x1, k, v)} else Array(inc).each{|i| to_xml_include(x1, i)} end yield x1 if block_given? end x.to_xml end private # Handle associated objects and virtual attributes when creating # the xml.
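#
# For example, a virtual attribute can be included alongside associations
# (+num_tracks+ is a hypothetical instance method here, not a column):
#
#   album.to_xml(:include=>:num_tracks)
#   # => <album>...<num_tracks>10</num_tracks></album>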
def to_xml_include(node, i, opts=OPTS) name_proc = model.xml_serialize_name_proc(opts) objs = send(i) if objs.is_a?(Array) && objs.all?{|x| x.is_a?(Sequel::Model)} node.send(name_proc[i.to_s]) do |x2| objs.each{|obj| obj.to_xml(opts.merge(:builder=>x2))} end elsif objs.is_a?(Sequel::Model) objs.to_xml(opts.merge(:builder=>node, :root_name=>i)) else node.send(name_proc[i.to_s], objs) end end end module DatasetMethods # Return an XML string containing all model objects specified with # this dataset. Takes all of the options available to Model#to_xml, # as well as the :array_root_name option for specifying the name of # the root node that contains the nodes for all of the instances. def to_xml(opts=OPTS) raise(Sequel::Error, "Dataset#to_xml") unless row_proc x = model.xml_builder(opts) name_proc = model.xml_serialize_name_proc(opts) array = if opts[:array] opts = opts.dup opts.delete(:array) else all end x.send(name_proc[opts.fetch(:array_root_name, model.send(:pluralize, model.send(:underscore, model.name))).to_s]) do |x1| array.each do |obj| obj.to_xml(opts.merge(:builder=>x1)) end end x.to_xml end end end end end ������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/lib/sequel/sql.rb�����������������������������������������������������������������0000664�0000000�0000000�00000177001�12201565355�0017174�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������module Sequel if RUBY_VERSION < '1.9.0' # :nocov: # If on Ruby 1.8, create a <tt>Sequel::BasicObject</tt> class that is similar to the # the Ruby 1.9 +BasicObject+ class. This is used in a few places where proxy # objects are needed that respond to any method call. class BasicObject # The instance methods to not remove from the class when removing # other methods. KEEP_METHODS = %w"__id__ __send__ __metaclass__ instance_eval instance_exec == equal? initialize method_missing" # Remove all but the most basic instance methods from the class. A separate # method so that it can be called again if necessary if you load libraries # after Sequel that add instance methods to +Object+. def self.remove_methods! ((private_instance_methods + instance_methods) - KEEP_METHODS).each{|m| undef_method(m)} end remove_methods! end # :nocov: else # If on 1.9, create a <tt>Sequel::BasicObject</tt> class that is just like the # default +BasicObject+ class, except that missing constants are resolved in # +Object+. This allows the virtual row support to work with classes # without prefixing them with ::, such as: # # DB[:bonds].filter{maturity_date > Time.now} class BasicObject < ::BasicObject # Lookup missing constants in <tt>::Object</tt> def self.const_missing(name) ::Object.const_get(name) end # No-op method on ruby 1.9, which has a real +BasicObject+ class. def self.remove_methods! 
end end end class LiteralString < ::String end # Time subclass that gets literalized with only the time value, so it operates # like a standard SQL time type. class SQLTime < ::Time # Create a new SQLTime instance given an hour, minute, and second. def self.create(hour, minute, second, usec = 0) t = now local(t.year, t.month, t.day, hour, minute, second, usec) end end # The SQL module holds classes whose instances represent SQL fragments. # It also holds modules that are included in core ruby classes that # make Sequel a friendly DSL. module SQL ### Parent Classes ### # Classes/Modules aren't in alphabetical order due to the fact that # some reference constants defined in others at load time. # Base class for all SQL expression objects. class Expression @comparison_attrs = [] class << self # All attributes used for equality and hash methods. attr_reader :comparison_attrs # Expression objects are assumed to be value objects, where their # attribute values can't change after assignment. In order to make # it easy to define equality and hash methods, subclass # instances assume that the only values that affect the results of # such methods are the values of the object's attributes. def attr_reader(*args) super comparison_attrs.concat(args) end # Copy the comparison_attrs into the subclass. def inherited(subclass) super subclass.instance_variable_set(:@comparison_attrs, comparison_attrs.dup) end private # Create a to_s instance method that takes a dataset, and calls # the method provided on the dataset with args as the argument (self by default). # Used to DRY up some code. # # Do not call this method with untrusted input, as that can result in # arbitrary code execution. def to_s_method(meth, args=:self) # :nodoc: class_eval("def to_s_append(ds, sql) ds.#{meth}_append(sql, #{args}) end", __FILE__, __LINE__) end end # Alias of <tt>eql?</tt> def ==(other) eql?(other) end # Returns true if the receiver is the same expression as the # the +other+ expression. def eql?(other) other.is_a?(self.class) && !self.class.comparison_attrs.find{|a| send(a) != other.send(a)} end # Make sure that the hash value is the same if the attributes are the same. def hash ([self.class] + self.class.comparison_attrs.map{|x| send(x)}).hash end # Show the class name and instance variables for the object, necessary # for correct operation on ruby 1.9.2. def inspect "#<#{self.class} #{instance_variables.map{|iv| "#{iv}=>#{instance_variable_get(iv).inspect}"}.join(', ')}>" end # Returns +self+, because <tt>SQL::Expression</tt> already acts like +LiteralString+. def lit self end # Alias of +to_s+ def sql_literal(ds) s = '' to_s_append(ds, s) s end end # Represents a complex SQL expression, with a given operator and one # or more attributes (which may also be ComplexExpressions, forming # a tree). This class is the backbone of Sequel's ruby expression DSL. # # This is an abstract class that is not that useful by itself. The # subclasses +BooleanExpression+, +NumericExpression+, and +StringExpression+ # define the behavior of the DSL via operators. class ComplexExpression < Expression # A hash of the opposite for each operator symbol, used for inverting # objects. 
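#
# For example, inverting a boolean expression flips its operator using this
# table (a sketch; assumes the Sequel.expr entry point documented later in
# this file):
#
#   Sequel.~(Sequel.expr(:a) > 1) # SQL: (a <= 1)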
OPERTATOR_INVERSIONS = {:AND => :OR, :OR => :AND, :< => :>=, :> => :<=, :<= => :>, :>= => :<, :'=' => :'!=' , :'!=' => :'=', :LIKE => :'NOT LIKE', :'NOT LIKE' => :LIKE, :~ => :'!~', :'!~' => :~, :IN => :'NOT IN', :'NOT IN' => :IN, :IS => :'IS NOT', :'IS NOT' => :IS, :'~*' => :'!~*', :'!~*' => :'~*', :NOT => :NOOP, :NOOP => :NOT, :ILIKE => :'NOT ILIKE', :'NOT ILIKE'=>:ILIKE} # Standard mathematical operators used in +NumericMethods+ MATHEMATICAL_OPERATORS = [:+, :-, :/, :*] # Bitwise mathematical operators used in +NumericMethods+ BITWISE_OPERATORS = [:&, :|, :^, :<<, :>>, :%] # Operators that check for equality EQUALITY_OPERATORS = [:'=', :'!='] # Inequality operators used in +InequalityMethods+ INEQUALITY_OPERATORS = [:<, :>, :<=, :>=] # Hash of ruby operator symbols to SQL operators, used in +BooleanMethods+ BOOLEAN_OPERATOR_METHODS = {:& => :AND, :| =>:OR} # Operators that use IN/NOT IN for inclusion/exclusion IN_OPERATORS = [:IN, :'NOT IN'] # Operators that use IS, used for special casing to override literal true/false values IS_OPERATORS = [:IS, :'IS NOT'] # Operators that do pattern matching via regular expressions REGEXP_OPERATORS = [:~, :'!~', :'~*', :'!~*'] # Operators that do pattern matching via LIKE LIKE_OPERATORS = [:LIKE, :'NOT LIKE', :ILIKE, :'NOT ILIKE'] # Operator symbols that take exactly two arguments TWO_ARITY_OPERATORS = EQUALITY_OPERATORS + INEQUALITY_OPERATORS + IS_OPERATORS + IN_OPERATORS + REGEXP_OPERATORS + LIKE_OPERATORS # Operator symbols that take one or more arguments N_ARITY_OPERATORS = [:AND, :OR, :'||'] + MATHEMATICAL_OPERATORS + BITWISE_OPERATORS # Operator symbols that take only a single argument ONE_ARITY_OPERATORS = [:NOT, :NOOP, :'B~'] # Custom expressions that may have different syntax on different databases CUSTOM_EXPRESSIONS = [:extract] # The operator symbol for this object attr_reader :op # An array of args for this object attr_reader :args # Set the operator symbol and arguments for this object to the ones given. # Convert all args that are hashes or arrays of two element arrays to +BooleanExpressions+, # other than the second arg for an IN/NOT IN operator. # Raise an +Error+ if the operator doesn't allow boolean input and a boolean argument is given. # Raise an +Error+ if the wrong number of arguments for a given operator is used. def initialize(op, *args) orig_args = args args = args.map{|a| Sequel.condition_specifier?(a) ? SQL::BooleanExpression.from_value_pairs(a) : a} case op when *N_ARITY_OPERATORS raise(Error, "The #{op} operator requires at least 1 argument") unless args.length >= 1 old_args = args args = [] old_args.each{|a| a.is_a?(self.class) && a.op == op ? args.concat(a.args) : args.push(a)} when *TWO_ARITY_OPERATORS raise(Error, "The #{op} operator requires precisely 2 arguments") unless args.length == 2 # With IN/NOT IN, even if the second argument is an array of two element arrays, # don't convert it into a boolean expression, since it's definitely being used # as a value list. args[1] = orig_args[1] if IN_OPERATORS.include?(op) when *ONE_ARITY_OPERATORS raise(Error, "The #{op} operator requires a single argument") unless args.length == 1 when *CUSTOM_EXPRESSIONS # nothing else raise(Error, "Invalid operator #{op}") end @op = op @args = args end to_s_method :complex_expression_sql, '@op, @args' end # The base class for expressions that can be used in multiple places in # an SQL query. class GenericExpression < Expression end ### Modules ### # Includes an +as+ method that creates an SQL alias. 
module AliasMethods # Create an SQL alias (+AliasedExpression+) of the receiving column or expression to the given alias. # # Sequel.function(:func).as(:alias) # func() AS "alias" def as(aliaz) AliasedExpression.new(self, aliaz) end end # This defines the bitwise methods: &, |, ^, ~, <<, and >>. Because these # methods overlap with the standard +BooleanMethods+ methods, and they only # make sense for integers, they are only included in +NumericExpression+. # # :a.sql_number & :b # "a" & "b" # :a.sql_number | :b # "a" | "b" # :a.sql_number ^ :b # "a" ^ "b" # :a.sql_number << :b # "a" << "b" # :a.sql_number >> :b # "a" >> "b" # ~:a.sql_number # ~"a" module BitwiseMethods ComplexExpression::BITWISE_OPERATORS.each do |o| module_eval("def #{o}(o) NumericExpression.new(#{o.inspect}, self, o) end", __FILE__, __LINE__) end # Do the bitwise complement of self # # ~:a.sql_number # ~"a" def ~ NumericExpression.new(:'B~', self) end end # This module includes the boolean/logical AND (&), OR (|) and NOT (~) operators # that are defined on objects that can be used in a boolean context in SQL # (+Symbol+, +LiteralString+, and <tt>SQL::GenericExpression</tt>). # # :a & :b # "a" AND "b" # :a | :b # "a" OR "b" # ~:a # NOT "a" # # One exception to this is when a NumericExpression or Integer is the argument # to & or |, in which case a bitwise method will be used: # # :a & 1 # "a" & 1 # :a | (:b + 1) # "a" | ("b" + 1) module BooleanMethods ComplexExpression::BOOLEAN_OPERATOR_METHODS.each do |m, o| module_eval(<<-END, __FILE__, __LINE__+1) def #{m}(o) case o when NumericExpression, Integer NumericExpression.new(#{m.inspect}, self, o) else BooleanExpression.new(#{o.inspect}, self, o) end end END end # Create a new BooleanExpression with NOT, representing the inversion of whatever self represents. # # ~:a # NOT "a" def ~ BooleanExpression.invert(self) end end # These methods are designed as replacements for the core extensions, so that # Sequel is still easy to use if the core extensions are not enabled. module Builders # Create an SQL::AliasedExpression for the given expression and alias. # # Sequel.as(:column, :alias) # "column" AS "alias" def as(exp, aliaz) SQL::AliasedExpression.new(exp, aliaz) end # Order the given argument ascending. # Options: # # :nulls :: Set to :first to use NULLS FIRST (so NULL values are ordered # before other values), or :last to use NULLS LAST (so NULL values # are ordered after other values). # # Sequel.asc(:a) # a ASC # Sequel.asc(:b, :nulls=>:last) # b ASC NULLS LAST def asc(arg, opts=OPTS) SQL::OrderedExpression.new(arg, false, opts) end # Return an <tt>SQL::Blob</tt> that holds the same data as this string. # Blobs provide proper escaping of binary data. If given a blob, returns it # directly. def blob(s) if s.is_a?(SQL::Blob) s else SQL::Blob.new(s) end end # Return an <tt>SQL::CaseExpression</tt> created with the given arguments. # # Sequel.case([[{:a=>[2,3]}, 1]], 0) # SQL: CASE WHEN a IN (2, 3) THEN 1 ELSE 0 END # Sequel.case({:a=>1}, 0, :b) # SQL: CASE b WHEN a THEN 1 ELSE 0 END def case(*args) # core_sql ignore SQL::CaseExpression.new(*args) end # Cast the receiver to the given SQL type. You can specify a ruby class as a type, # and it is handled similarly to using a database independent type in the schema methods.
# # Sequel.cast(:a, :integer) # CAST(a AS integer) # Sequel.cast(:a, String) # CAST(a AS varchar(255)) def cast(arg, sql_type) SQL::Cast.new(arg, sql_type) end # Cast the receiver to the given SQL type (or the database's default Integer type if none given), # and return the result as a +NumericExpression+, so you can use the bitwise operators # on the result. # # Sequel.cast_numeric(:a) # CAST(a AS integer) # Sequel.cast_numeric(:a, Float) # CAST(a AS double precision) def cast_numeric(arg, sql_type = nil) cast(arg, sql_type || Integer).sql_number end # Cast the receiver to the given SQL type (or the database's default String type if none given), # and return the result as a +StringExpression+, so you can use + # directly on the result for SQL string concatenation. # # Sequel.cast_string(:a) # CAST(a AS varchar(255)) # Sequel.cast_string(:a, :text) # CAST(a AS text) def cast_string(arg, sql_type = nil) cast(arg, sql_type || String).sql_string end # Return an emulated function call for getting the number of characters # in the argument: # # Sequel.char_length(:a) # char_length(a) -- Most databases # Sequel.char_length(:a) # length(a) -- SQLite def char_length(arg) SQL::EmulatedFunction.new(:char_length, arg) end # Do a deep qualification of the argument using the qualifier. This recurses into # nested structures. # # Sequel.deep_qualify(:table, :column) # "table"."column" # Sequel.deep_qualify(:table, Sequel.+(:column, 1)) # "table"."column" + 1 # Sequel.deep_qualify(:table, Sequel.like(:a, 'b')) # "table"."a" LIKE 'b' ESCAPE '\' def deep_qualify(qualifier, expr) Sequel::Qualifier.new(Sequel, qualifier).transform(expr) end # Return a delayed evaluation that uses the passed block. This is used # to delay evaluations of the code to runtime. For example, with # the following code: # # ds = DB[:table].where{column > Time.now} # # The filter is fixed to the time that where was called. Unless you are # only using the dataset once immediately after creating it, that's # probably not desired. If you just want to set it to the time when the # query is sent to the database, you can wrap it in Sequel.delay: # # ds = DB[:table].where{column > Sequel.delay{Time.now}} # # Note that for dates and timestamps, you are probably better off using # Sequel::CURRENT_DATE and Sequel::CURRENT_TIMESTAMP instead of this # generic delayed evaluation facility. def delay(&block) raise(Error, "Sequel.delay requires a block") unless block SQL::DelayedEvaluation.new(block) end # Order the given argument descending. # Options: # # :nulls :: Set to :first to use NULLS FIRST (so NULL values are ordered # before other values), or :last to use NULLS LAST (so NULL values # are ordered after other values). # # Sequel.desc(:a) # a DESC # Sequel.desc(:b, :nulls=>:first) # b DESC NULLS FIRST def desc(arg, opts=OPTS) SQL::OrderedExpression.new(arg, true, opts) end # Wraps the given object in an appropriate Sequel wrapper. # If the given object is already a Sequel object, return it directly. # For condition specifiers (hashes and arrays of two pairs), true, and false, # return a boolean expression. For numeric objects, return a numeric # expression. For strings, return a string expression. For procs or when # the method is passed a block, evaluate it as a virtual row and wrap it # appropriately. In all other cases, use a generic wrapper. # # This method allows you to construct SQL expressions that are difficult # to construct via other methods.
For example: # # Sequel.expr(1) - :a # SQL: (1 - a) def expr(arg=(no_arg=true), &block) if block_given? if no_arg return expr(block) else raise Error, 'cannot provide both an argument and a block to Sequel.expr' end elsif no_arg raise Error, 'must provide either an argument or a block to Sequel.expr' end case arg when Symbol t, c, a = Sequel.split_symbol(arg) arg = if t SQL::QualifiedIdentifier.new(t, c) else SQL::Identifier.new(c) end if a arg = SQL::AliasedExpression.new(arg, a) end arg when SQL::Expression, LiteralString, SQL::Blob arg when Hash SQL::BooleanExpression.from_value_pairs(arg, :AND) when Array if condition_specifier?(arg) SQL::BooleanExpression.from_value_pairs(arg, :AND) else SQL::Wrapper.new(arg) end when Numeric SQL::NumericExpression.new(:NOOP, arg) when String SQL::StringExpression.new(:NOOP, arg) when TrueClass, FalseClass SQL::BooleanExpression.new(:NOOP, arg) when Proc expr(virtual_row(&arg)) else SQL::Wrapper.new(arg) end end # Extract a datetime_part (e.g. year, month) from the given # expression: # # Sequel.extract(:year, :date) # extract(year FROM "date") def extract(datetime_part, exp) SQL::NumericExpression.new(:extract, datetime_part, exp) end # Returns a <tt>Sequel::SQL::Function</tt> with the function name # and the given arguments. # # Sequel.function(:now) # SQL: now() # Sequel.function(:substr, :a, 1) # SQL: substr(a, 1) def function(name, *args) SQL::Function.new(name, *args) end # Return the argument wrapped as an <tt>SQL::Identifier</tt>. # # Sequel.identifier(:a__b) # "a__b" def identifier(name) SQL::Identifier.new(name) end # Return a <tt>Sequel::SQL::StringExpression</tt> representing an SQL string made up of the # concatenation of the given array's elements. If an argument is passed, # it is used in between each element of the array in the SQL # concatenation. # # Sequel.join([:a]) # SQL: a # Sequel.join([:a, :b]) # SQL: a || b # Sequel.join([:a, 'b']) # SQL: a || 'b' # Sequel.join(['a', :b], ' ') # SQL: 'a' || ' ' || b def join(args, joiner=nil) raise Error, 'argument to Sequel.join must be an array' unless args.is_a?(Array) if joiner args = args.zip([joiner]*args.length).flatten args.pop end return SQL::StringExpression.new(:NOOP, '') if args.empty? args = args.map do |a| case a when Symbol, ::Sequel::SQL::Expression, ::Sequel::LiteralString, TrueClass, FalseClass, NilClass a else a.to_s end end SQL::StringExpression.new(:'||', *args) end # Create a <tt>BooleanExpression</tt> case insensitive (if the database supports it) pattern match of the receiver with # the given patterns. See <tt>SQL::StringExpression.like</tt>. # # Sequel.ilike(:a, 'A%') # "a" ILIKE 'A%' def ilike(*args) SQL::StringExpression.like(*(args << {:case_insensitive=>true})) end # Create a <tt>SQL::BooleanExpression</tt> case sensitive (if the database supports it) pattern match of the receiver with # the given patterns. See <tt>SQL::StringExpression.like</tt>. 
# # Sequel.like(:a, 'A%') # "a" LIKE 'A%' def like(*args) SQL::StringExpression.like(*args) end # Converts a string into a <tt>Sequel::LiteralString</tt>, in order to override string # literalization, e.g.: # # DB[:items].filter(:abc => 'def').sql #=> # "SELECT * FROM items WHERE (abc = 'def')" # # DB[:items].filter(:abc => Sequel.lit('def')).sql #=> # "SELECT * FROM items WHERE (abc = def)" # # You can also provide arguments, to create a <tt>Sequel::SQL::PlaceholderLiteralString</tt>: # # DB[:items].select{|o| o.count(Sequel.lit('DISTINCT ?', :a))}.sql #=> # "SELECT count(DISTINCT a) FROM items" def lit(s, *args) # core_sql ignore if args.empty? if s.is_a?(LiteralString) s else LiteralString.new(s) end else SQL::PlaceholderLiteralString.new(s, args) end end # Return a <tt>Sequel::SQL::BooleanExpression</tt> created from the condition # specifier, matching none of the conditions. # # Sequel.negate(:a=>true) # SQL: a IS NOT TRUE # Sequel.negate([[:a, true]]) # SQL: a IS NOT TRUE # Sequel.negate([[:a, 1], [:b, 2]]) # SQL: ((a != 1) AND (b != 2)) def negate(arg) if condition_specifier?(arg) SQL::BooleanExpression.from_value_pairs(arg, :AND, true) else raise Error, 'must pass a conditions specifier to Sequel.negate' end end # Return a <tt>Sequel::SQL::BooleanExpression</tt> created from the condition # specifier, matching any of the conditions. # # Sequel.or(:a=>true) # SQL: a IS TRUE # Sequel.or([[:a, true]]) # SQL: a IS TRUE # Sequel.or([[:a, 1], [:b, 2]]) # SQL: ((a = 1) OR (b = 2)) def or(arg) if condition_specifier?(arg) SQL::BooleanExpression.from_value_pairs(arg, :OR, false) else raise Error, 'must pass a conditions specifier to Sequel.or' end end # Create a qualified identifier with the given qualifier and identifier # # Sequel.qualify(:table, :column) # "table"."column" # Sequel.qualify(:schema, :table) # "schema"."table" # Sequel.qualify(:table, :column).qualify(:schema) # "schema"."table"."column" def qualify(qualifier, identifier) SQL::QualifiedIdentifier.new(qualifier, identifier) end # Return an <tt>SQL::Subscript</tt> with the given arguments, representing an # SQL array access. # # Sequel.subscript(:array, 1) # array[1] # Sequel.subscript(:array, 1, 2) # array[1, 2] # Sequel.subscript(:array, [1, 2]) # array[1, 2] # Sequel.subscript(:array, 1..2) # array[1:2] # Sequel.subscript(:array, 1...3) # array[1:2] def subscript(exp, *subs) SQL::Subscript.new(exp, subs.flatten) end # Return an emulated function call for trimming a string of spaces from # both sides (similar to ruby's String#strip). # # Sequel.trim(:a) # trim(a) -- Most databases # Sequel.trim(:a) # ltrim(rtrim(a)) -- Microsoft SQL Server def trim(arg) SQL::EmulatedFunction.new(:trim, arg) end # Return a <tt>SQL::ValueList</tt> created from the given array. Used if the array contains # all two element arrays and you want it treated as an SQL value list (IN predicate) # instead of as a conditions specifier (similar to a hash). 
This is not necessary if you are using # this array as a value in a filter, but may be necessary if you are using it as a # value with placeholder SQL: # # DB[:a].filter([:a, :b]=>[[1, 2], [3, 4]]) # SQL: (a, b) IN ((1, 2), (3, 4)) # DB[:a].filter('(a, b) IN ?', [[1, 2], [3, 4]]) # SQL: (a, b) IN ((1 = 2) AND (3 = 4)) # DB[:a].filter('(a, b) IN ?', Sequel.value_list([[1, 2], [3, 4]])) # SQL: (a, b) IN ((1, 2), (3, 4)) def value_list(arg) raise Error, 'argument to Sequel.value_list must be an array' unless arg.is_a?(Array) SQL::ValueList.new(arg) end end # Holds methods that are used to cast objects to different SQL types. module CastMethods # Cast the receiver to the given SQL type. You can specify a ruby class as a type, # and it is handled similarly to using a database independent type in the schema methods. # # Sequel.function(:func).cast(:integer) # CAST(func() AS integer) # Sequel.function(:func).cast(String) # CAST(func() AS varchar(255)) def cast(sql_type) Cast.new(self, sql_type) end # Cast the receiver to the given SQL type (or the database's default Integer type if none given), # and return the result as a +NumericExpression+, so you can use the bitwise operators # on the result. # # Sequel.function(:func).cast_numeric # CAST(func() AS integer) # Sequel.function(:func).cast_numeric(Float) # CAST(func() AS double precision) def cast_numeric(sql_type = nil) Cast.new(self, sql_type || Integer).sql_number end # Cast the receiver to the given SQL type (or the database's default String type if none given), # and return the result as a +StringExpression+, so you can use + # directly on the result for SQL string concatenation. # # Sequel.function(:func).cast_string # CAST(func() AS varchar(255)) # Sequel.function(:func).cast_string(:text) # CAST(func() AS text) def cast_string(sql_type = nil) Cast.new(self, sql_type || String).sql_string end end # Adds methods that allow you to treat an object as an instance of a specific # +ComplexExpression+ subclass. This is useful if another library # overrides the methods defined by Sequel. # # For example, if <tt>Symbol#/</tt> is overridden to produce a string (for # example, to make file system path creation easier), the # following code will not do what you want: # # :price/10 > 100 # # In that case, you need to do the following: # # :price.sql_number/10 > 100 module ComplexExpressionMethods # Extract a datetime part (e.g. year, month) from self: # # :date.extract(:year) # extract(year FROM "date") # # Also has the benefit of returning the result as a # NumericExpression instead of a generic ComplexExpression. def extract(datetime_part) NumericExpression.new(:extract, datetime_part, self) end # Return a BooleanExpression representation of +self+. def sql_boolean BooleanExpression.new(:NOOP, self) end # Return a NumericExpression representation of +self+. # # ~:a # NOT "a" # ~:a.sql_number # ~"a" def sql_number NumericExpression.new(:NOOP, self) end # Return a StringExpression representation of +self+. # # :a + :b # "a" + "b" # :a.sql_string + :b # "a" || "b" def sql_string StringExpression.new(:NOOP, self) end end # This module includes the inequality methods (>, <, >=, <=) that are defined on objects that can be # used in a numeric or string context in SQL.
# # Sequel.lit('a') > :b # a > "b" # Sequel.lit('a') < :b # a < "b" # Sequel.lit('a') >= :b # a >= "b" # Sequel.lit('a') <= :b # a <= "b" module InequalityMethods ComplexExpression::INEQUALITY_OPERATORS.each do |o| module_eval("def #{o}(o) BooleanExpression.new(#{o.inspect}, self, o) end", __FILE__, __LINE__) end end # This module includes the standard mathematical methods (+, -, *, and /) # that are defined on objects that can be used in a numeric context in SQL # (+Symbol+, +LiteralString+, and +SQL::GenericExpression+). # # :a + :b # "a" + "b" # :a - :b # "a" - "b" # :a * :b # "a" * "b" # :a / :b # "a" / "b" # # One exception to this is if + is called with a +String+ or +StringExpression+, # in which case the || operator is used instead of the + operator: # # :a + 'b' # "a" || 'b' module NumericMethods ComplexExpression::MATHEMATICAL_OPERATORS.each do |o| module_eval("def #{o}(o) NumericExpression.new(#{o.inspect}, self, o) end", __FILE__, __LINE__) unless o == :+ end # Use || as the operator when called with StringExpression and String instances, # and the + operator for LiteralStrings and all other types. def +(ce) case ce when LiteralString NumericExpression.new(:+, self, ce) when StringExpression, String StringExpression.new(:'||', self, ce) else NumericExpression.new(:+, self, ce) end end end # These methods are designed as replacements for the core extension operator # methods, so that Sequel is still easy to use if the core extensions are not # enabled. # # The following methods are defined via metaprogramming: +, -, *, /, &, |. # The +, -, *, and / operators return numeric expressions combining all the # arguments with the appropriate operator, and the & and | operators return # boolean expressions combining all of the arguments with either AND or OR. module OperatorBuilders %w'+ - * /'.each do |op| class_eval(<<-END, __FILE__, __LINE__ + 1) def #{op}(*args) SQL::NumericExpression.new(:#{op}, *args) end END end {'&'=>'AND', '|'=>'OR'}.each do |m, op| class_eval(<<-END, __FILE__, __LINE__ + 1) def #{m}(*args) SQL::BooleanExpression.new(:#{op}, *args) end END end # Invert the given expression. Returns a <tt>Sequel::SQL::BooleanExpression</tt> # created from this argument, not matching all of the conditions. # # Sequel.~(nil) # SQL: NOT NULL # Sequel.~([[:a, true]]) # SQL: a IS NOT TRUE # Sequel.~([[:a, 1], [:b, [2, 3]]]) # SQL: ((a != 1) OR (b NOT IN (2, 3))) def ~(arg) if condition_specifier?(arg) SQL::BooleanExpression.from_value_pairs(arg, :OR, true) else SQL::BooleanExpression.invert(arg) end end end # Methods that create +OrderedExpressions+, used for sorting by columns # or more complex expressions. module OrderMethods # Mark the receiving SQL column as sorting in an ascending fashion (generally a no-op). # Options: # # :nulls :: Set to :first to use NULLS FIRST (so NULL values are ordered # before other values), or :last to use NULLS LAST (so NULL values # are ordered after other values). def asc(opts=OPTS) OrderedExpression.new(self, false, opts) end # Mark the receiving SQL column as sorting in a descending fashion. # Options: # # :nulls :: Set to :first to use NULLS FIRST (so NULL values are ordered # before other values), or :last to use NULLS LAST (so NULL values # are ordered after other values).
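#
# For example (a minimal sketch mirroring the Sequel.desc examples above):
#
#   Sequel.expr(:a).desc                  # a DESC
#   Sequel.expr(:a).desc(:nulls=>:first)  # a DESC NULLS FIRST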
def desc(opts=OPTS) OrderedExpression.new(self, true, opts) end end # Includes a +qualify+ method that creates <tt>QualifiedIdentifier</tt>s, used for qualifying column # names with a table or table names with a schema, and the * method for returning all columns in # the identifier if no arguments are given. module QualifyingMethods # If no arguments are given, return an SQL::ColumnAll: # # Sequel.expr(:a__b).* # a.b.* def *(ce=(arg=false;nil)) if arg == false Sequel::SQL::ColumnAll.new(self) else super(ce) end end # Qualify the receiver with the given +qualifier+ (table for column/schema for table). # # Sequel.expr(:column).qualify(:table) # "table"."column" # Sequel.expr(:table).qualify(:schema) # "schema"."table" # Sequel.qualify(:table, :column).qualify(:schema) # "schema"."table"."column" def qualify(qualifier) QualifiedIdentifier.new(qualifier, self) end end # This module includes the +like+ and +ilike+ methods used for pattern matching that are defined on objects that can be # used in a string context in SQL (+Symbol+, +LiteralString+, <tt>SQL::GenericExpression</tt>). module StringMethods # Create a +BooleanExpression+ case insensitive pattern match of the receiver # with the given patterns. See <tt>StringExpression.like</tt>. # # :a.ilike('A%') # "a" ILIKE 'A%' def ilike(*ces) StringExpression.like(self, *(ces << {:case_insensitive=>true})) end # Create a +BooleanExpression+ case sensitive (if the database supports it) pattern match of the receiver with # the given patterns. See <tt>StringExpression.like</tt>. # # :a.like('A%') # "a" LIKE 'A%' def like(*ces) StringExpression.like(self, *ces) end end # This module includes the <tt>+</tt> method. It is included in +StringExpression+ and can be included elsewhere # to allow the use of the + operator to represent concatenation of SQL Strings: module StringConcatenationMethods # Return a +StringExpression+ representing the concatenation of the receiver # with the given argument. # # :x.sql_string + :y # "x" || "y" def +(ce) StringExpression.new(:'||', self, ce) end end # This module includes the +sql_subscript+ method, representing SQL array accesses. module SubscriptMethods # Return a <tt>Subscript</tt> with the given arguments, representing an # SQL array access. # # :array.sql_subscript(1) # array[1] # :array.sql_subscript(1, 2) # array[1, 2] # :array.sql_subscript([1, 2]) # array[1, 2] # :array.sql_subscript(1..2) # array[1:2] # :array.sql_subscript(1...3) # array[1:2] def sql_subscript(*sub) Subscript.new(self, sub.flatten) end end ### Classes ### # Represents an aliasing of an expression to a given alias. class AliasedExpression < Expression # The expression to alias attr_reader :expression # The alias to use for the expression, not +alias+ since that is # a keyword in ruby. attr_reader :aliaz # Create an object with the given expression and alias. def initialize(expression, aliaz) @expression, @aliaz = expression, aliaz end to_s_method :aliased_expression_sql end # +Blob+ is used to represent binary data in the Ruby environment that is # stored as a blob type in the database. Sequel represents binary data as a Blob object because # most database engines require binary data to be escaped differently than regular strings. class Blob < ::String include SQL::AliasMethods include SQL::CastMethods # Return a LiteralString with the same content if no args are given, otherwise # return a SQL::PlaceholderLiteralString with the current string and the given args. def lit(*args) args.empty? ?
LiteralString.new(self) : SQL::PlaceholderLiteralString.new(self, args) end # Returns +self+, since it is already a blob. def to_sequel_blob self end end # Subclass of +ComplexExpression+ where the expression results # in a boolean value in SQL. class BooleanExpression < ComplexExpression include BooleanMethods # Takes pairs of values (e.g. a hash or array of two element arrays) # and converts them to a +BooleanExpression+. The operator and args # used depends on the case of the right (2nd) argument: # # * 0..10 - left >= 0 AND left <= 10 # * [1,2] - left IN (1,2) # * nil - left IS NULL # * true - left IS TRUE # * false - left IS FALSE # * /as/ - left ~ 'as' # * :blah - left = blah # * 'blah' - left = 'blah' # # If multiple arguments are given, they are joined with the op given (AND # by default, OR possible). If negate is set to true, # all subexpressions are inverted before being used. Therefore, the following # expressions are equivalent: # # ~from_value_pairs(hash) # from_value_pairs(hash, :OR, true) def self.from_value_pairs(pairs, op=:AND, negate=false) pairs = pairs.map{|l,r| from_value_pair(l, r)} pairs.map!{|ce| invert(ce)} if negate pairs.length == 1 ? pairs.at(0) : new(op, *pairs) end # Return a BooleanExpression based on the right side of the pair. def self.from_value_pair(l, r) case r when Range new(:AND, new(:>=, l, r.begin), new(r.exclude_end? ? :< : :<=, l, r.end)) when ::Array, ::Sequel::Dataset new(:IN, l, r) when NegativeBooleanConstant new(:"IS NOT", l, r.constant) when BooleanConstant new(:IS, l, r.constant) when NilClass, TrueClass, FalseClass new(:IS, l, r) when Regexp StringExpression.like(l, r) when DelayedEvaluation Sequel.delay{from_value_pair(l, r.callable.call)} else new(:'=', l, r) end end private_class_method :from_value_pair # Invert the expression, if possible. If the expression cannot # be inverted, raise an error. An inverted expression should match everything that the # uninverted expression did not match, and vice-versa, except for possible issues with # SQL NULL (i.e. 1 == NULL is NULL and 1 != NULL is also NULL). # # BooleanExpression.invert(:a) # NOT "a" def self.invert(ce) case ce when BooleanExpression case op = ce.op when :AND, :OR BooleanExpression.new(OPERTATOR_INVERSIONS[op], *ce.args.collect{|a| BooleanExpression.invert(a)}) else BooleanExpression.new(OPERTATOR_INVERSIONS[op], *ce.args.dup) end when StringExpression, NumericExpression raise(Sequel::Error, "cannot invert #{ce.inspect}") when Constant CONSTANT_INVERSIONS[ce] || raise(Sequel::Error, "cannot invert #{ce.inspect}") else BooleanExpression.new(:NOT, ce) end end # Always use an AND operator for & on BooleanExpressions def &(ce) BooleanExpression.new(:AND, self, ce) end # Always use an OR operator for | on BooleanExpressions def |(ce) BooleanExpression.new(:OR, self, ce) end # Return self instead of creating a new object to save on memory. def sql_boolean self end end # Represents an SQL CASE expression, used for conditional branching in SQL. class CaseExpression < GenericExpression # An array of all two pairs with the first element specifying the # condition and the second element specifying the result if the # condition matches. attr_reader :conditions # The default value if no conditions match. attr_reader :default # The expression to test the conditions against attr_reader :expression # Create an object with the given conditions and # default value. An expression can be provided to # test each condition against, instead of having # all conditions represent their own boolean expression.
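#
# For example (a brief sketch; rendered SQL varies by database):
#
#   CaseExpression.new([[{:a=>2}, 1]], 0)   # CASE WHEN (a = 2) THEN 1 ELSE 0 END
#   CaseExpression.new([[2, 1]], 0, :b)     # CASE b WHEN 2 THEN 1 ELSE 0 END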
def initialize(conditions, default, expression=(no_expression=true; nil)) raise(Sequel::Error, 'CaseExpression conditions must be a hash or array of all two pairs') unless Sequel.condition_specifier?(conditions) @conditions, @default, @expression, @no_expression = conditions.to_a, default, expression, no_expression end # Whether to use an expression for this CASE expression. def expression? !@no_expression end # Merge the CASE expression into the conditions, useful for databases that # don't support CASE expressions. def with_merged_expression if expression? e = expression CaseExpression.new(conditions.map{|c, r| [::Sequel::SQL::BooleanExpression.new(:'=', e, c), r]}, default) else self end end to_s_method :case_expression_sql end # Represents a cast of an SQL expression to a specific type. class Cast < GenericExpression # The expression to cast attr_reader :expr # The type to which to cast the expression attr_reader :type # Set the attributes to the given arguments def initialize(expr, type) @expr = expr @type = type end to_s_method :cast_sql, '@expr, @type' end # Represents all columns in a given table, table.* in SQL class ColumnAll < Expression # The table containing the columns being selected attr_reader :table # Create an object with the given table def initialize(table) @table = table end to_s_method :column_all_sql end class ComplexExpression include AliasMethods include CastMethods include OrderMethods include SubscriptMethods # Return a BooleanExpression with the same op and args. def sql_boolean BooleanExpression.new(self.op, *self.args) end # Return a NumericExpression with the same op and args. def sql_number NumericExpression.new(self.op, *self.args) end # Return a StringExpression with the same op and args. def sql_string StringExpression.new(self.op, *self.args) end end # Represents constants or pseudo-constants (e.g. +CURRENT_DATE+) in SQL. class Constant < GenericExpression # The underlying constant related to this object. attr_reader :constant # Create a constant with the given value def initialize(constant) @constant = constant end to_s_method :constant_sql, '@constant' end # Represents boolean constants such as +NULL+, +NOTNULL+, +TRUE+, and +FALSE+. class BooleanConstant < Constant to_s_method :boolean_constant_sql, '@constant' end # Represents inverse boolean constants (currently only +NOTNULL+). A # special class to allow for special behavior. class NegativeBooleanConstant < Constant to_s_method :negative_boolean_constant_sql, '@constant' end # Holds default generic constants that can be referenced. These # are included in the Sequel top level module and are also available # in this module which can be required at the top level to get # direct access to the constants. module Constants CURRENT_DATE = Constant.new(:CURRENT_DATE) CURRENT_TIME = Constant.new(:CURRENT_TIME) CURRENT_TIMESTAMP = Constant.new(:CURRENT_TIMESTAMP) SQLTRUE = TRUE = BooleanConstant.new(true) SQLFALSE = FALSE = BooleanConstant.new(false) NULL = BooleanConstant.new(nil) NOTNULL = NegativeBooleanConstant.new(nil) end class ComplexExpression # A hash of the opposite for each constant, used for inverting constants. CONSTANT_INVERSIONS = {Constants::TRUE=>Constants::FALSE, Constants::FALSE=>Constants::TRUE, Constants::NULL=>Constants::NOTNULL, Constants::NOTNULL=>Constants::NULL} end # Represents a delayed evaluation, encapsulating a callable # object which returns the value to use when called.
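# Usually created via Sequel.delay (see the Sequel.delay documentation above), e.g.:
#
#   DB[:table].where{column > Sequel.delay{Time.now}}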
class DelayedEvaluation < GenericExpression # A callable object that returns the value of the evaluation # when called. attr_reader :callable # Set the callable object def initialize(callable) @callable = callable end to_s_method :delayed_evaluation_sql, '@callable' end # Represents an SQL function call. class Function < GenericExpression # The SQL function to call attr_reader :f # The array of arguments to pass to the function (may be blank) attr_reader :args # Set the function name and args to the given arguments def initialize(f, *args) @f, @args = f, args end to_s_method :function_sql end # Represents an SQL function call that is translated/emulated # on databases that lack support for such a function. class EmulatedFunction < Function to_s_method :emulated_function_sql end class GenericExpression include AliasMethods include BooleanMethods include CastMethods include ComplexExpressionMethods include InequalityMethods include NumericMethods include OrderMethods include StringMethods include SubscriptMethods end # Represents an identifier (column or table). Can be used # to specify a +Symbol+ with multiple underscores should not be # split, or for creating an identifier without using a symbol. class Identifier < GenericExpression include QualifyingMethods # The table or column to reference attr_reader :value # Set the value to the given argument def initialize(value) @value = value end to_s_method :quote_identifier, '@value' end # Represents an SQL JOIN clause, used for joining tables. class JoinClause < Expression # The type of join to do attr_reader :join_type # The actual table to join attr_reader :table # The table alias to use for the join, if any attr_reader :table_alias # Create an object with the given join_type, table, and table alias def initialize(join_type, table, table_alias = nil) @join_type, @table, @table_alias = join_type, table, table_alias end to_s_method :join_clause_sql end # Represents an SQL JOIN clause with ON conditions. class JoinOnClause < JoinClause # The conditions for the join attr_reader :on # Create an object with the ON conditions and call super with the # remaining args. def initialize(on, *args) @on = on super(*args) end to_s_method :join_on_clause_sql end # Represents an SQL JOIN clause with USING conditions. class JoinUsingClause < JoinClause # The columns that appear in both tables that should be equal # for the conditions to match. attr_reader :using # Create an object with the given USING conditions and call super # with the remaining args. def initialize(using, *args) @using = using super(*args) end to_s_method :join_using_clause_sql end # Represents a literal string with placeholders and arguments. # This is necessary to ensure delayed literalization of the arguments # required for the prepared statement support and for database-specific # literalization. class PlaceholderLiteralString < GenericExpression # The literal string containing placeholders. This can also be an array # of strings, where each arg in args goes between the string elements. attr_reader :str # The arguments that will be substituted into the placeholders. # Either an array of unnamed placeholders (which will be substituted in # order for ? characters), or a hash of named placeholders (which will be # substituted for :key phrases). attr_reader :args # Whether to surround the expression with parentheses attr_reader :parens # Create an object with the given string, placeholder arguments, and parens flag.
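#
# For example (a brief sketch; assumes default literalization):
#
#   PlaceholderLiteralString.new('a = ?', [1])          # a = 1
#   PlaceholderLiteralString.new('a = :a', [{:a=>1}])   # a = 1
#   PlaceholderLiteralString.new('a = ?', [1], true)    # (a = 1)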
def initialize(str, args, parens=false) @str = str @args = args.is_a?(Array) && args.length == 1 && (v = args.at(0)).is_a?(Hash) ? v : args @parens = parens end to_s_method :placeholder_literal_string_sql end # Subclass of +ComplexExpression+ where the expression results # in a numeric value in SQL. class NumericExpression < ComplexExpression include BitwiseMethods include NumericMethods include InequalityMethods # Always use + for + operator for NumericExpressions. def +(ce) NumericExpression.new(:+, self, ce) end # Return self instead of creating a new object to save on memory. def sql_number self end end # Represents a column/expression to order the result set by. class OrderedExpression < Expression INVERT_NULLS = {:first=>:last, :last=>:first}.freeze # The expression to order the result set by. attr_reader :expression # Whether the expression should order the result set in a descending manner attr_reader :descending # Whether to sort NULLS FIRST/LAST attr_reader :nulls # Set the expression and descending attributes to the given values. # Options: # # :nulls :: Can be :first/:last for NULLS FIRST/LAST. def initialize(expression, descending = true, opts=OPTS) @expression, @descending, @nulls = expression, descending, opts[:nulls] end # Return a copy that is ordered ASC def asc OrderedExpression.new(@expression, false, :nulls=>@nulls) end # Return a copy that is ordered DESC def desc OrderedExpression.new(@expression, true, :nulls=>@nulls) end # Return an inverted expression, changing ASC to DESC and NULLS FIRST to NULLS LAST. def invert OrderedExpression.new(@expression, !@descending, :nulls=>INVERT_NULLS.fetch(@nulls, @nulls)) end to_s_method :ordered_expression_sql end # Represents a qualified identifier (column with table or table with schema). class QualifiedIdentifier < GenericExpression include QualifyingMethods # The table/schema qualifying the reference attr_reader :table # The column/table referenced attr_reader :column # Set the table and column to the given arguments def initialize(table, column) @table, @column = table, column end to_s_method :qualified_identifier_sql, "@table, @column" end # Subclass of +ComplexExpression+ where the expression results # in a text/string/varchar value in SQL. class StringExpression < ComplexExpression include StringMethods include StringConcatenationMethods include InequalityMethods # Map of [regexp, case_insensitive] to +ComplexExpression+ operator symbol LIKE_MAP = {[true, true]=>:'~*', [true, false]=>:~, [false, true]=>:ILIKE, [false, false]=>:LIKE} # Creates a SQL pattern match expression. left (l) is the SQL string we # are matching against, and ces are the patterns we are matching. # The match succeeds if any of the patterns match (SQL OR). # # If a regular expression is used as a pattern, an SQL regular expression will be # used, which is currently only supported on MySQL and PostgreSQL. Be aware # that MySQL and PostgreSQL regular expression syntax is similar to ruby # regular expression syntax, but is not exactly the same, especially for # advanced regular expression features. Sequel just uses the source of the # ruby regular expression verbatim as the SQL regular expression string. # # If any other object is used as a regular expression, the SQL LIKE operator will # be used, and should be supported by most databases. # # The pattern match will be case insensitive if the last argument is a hash # with a key of :case_insensitive that is not false or nil.
Also, # if a case insensitive regular expression is used (//i), that particular # pattern will always be case insensitive. # # StringExpression.like(:a, 'a%') # "a" LIKE 'a%' # StringExpression.like(:a, 'a%', :case_insensitive=>true) # "a" ILIKE 'a%' # StringExpression.like(:a, 'a%', /^a/i) # "a" LIKE 'a%' OR "a" ~* '^a' def self.like(l, *ces) l, lre, lci = like_element(l) lci = (ces.last.is_a?(Hash) ? ces.pop : {})[:case_insensitive] ? true : lci ces.collect! do |ce| r, rre, rci = like_element(ce) BooleanExpression.new(LIKE_MAP[[lre||rre, lci||rci]], l, r) end ces.length == 1 ? ces.at(0) : BooleanExpression.new(:OR, *ces) end # Returns a three element array, made up of: # * The object to use # * Whether it is a regular expression # * Whether it is case insensitive def self.like_element(re) # :nodoc: if re.is_a?(Regexp) [re.source, true, re.casefold?] else [re, false, false] end end private_class_method :like_element # Return self instead of creating a new object to save on memory. def sql_string self end end # Represents an SQL array access, with multiple possible arguments. class Subscript < GenericExpression # The SQL array column attr_reader :f # The array of subscripts to use (should be an array of numbers) attr_reader :sub # Set the array column and subscripts to the given arguments def initialize(f, sub) @f, @sub = f, sub end # Create a new +Subscript+ appending the given subscript(s) # to the current array of subscripts. # # :a.sql_subscript(2) # a[2] # :a.sql_subscript(2) | 1 # a[2, 1] def |(sub) Subscript.new(@f, @sub + Array(sub)) end # Create a new +Subscript+ by accessing a subarray of a multidimensional # array. # # :a.sql_subscript(2) # a[2] # :a.sql_subscript(2)[1] # a[2][1] def [](sub) Subscript.new(self, Array(sub)) end to_s_method :subscript_sql end # Represents an SQL value list (IN/NOT IN predicate value). Added so it is possible to deal with a # ruby array of two element arrays as an SQL value list instead of an ordered # hash-like conditions specifier. class ValueList < ::Array end # The purpose of the +VirtualRow+ class is to allow the easy creation of SQL identifiers and functions # without relying on methods defined on +Symbol+. This is useful if another library defines # the methods defined by Sequel, if you are running on ruby 1.9, or if you are not using the # core extensions. # # An instance of this class is yielded to the block supplied to <tt>Dataset#filter</tt>, <tt>Dataset#order</tt>, and <tt>Dataset#select</tt> # (and the other methods that accept a block and pass it to one of those methods). # If the block doesn't take an argument, the block is instance_execed in the context of # an instance of this class. # # +VirtualRow+ uses +method_missing+ to return either an +Identifier+, +QualifiedIdentifier+, +Function+, or +WindowFunction+, # depending on how it is called. # # If a block is _not_ given, creates one of the following objects: # # +Function+ :: Returned if any arguments are supplied, using the method name # as the function name, and the arguments as the function arguments. # +QualifiedIdentifier+ :: Returned if the method name contains __, with the # table being the part before __, and the column being the part after. # +Identifier+ :: Returned otherwise, using the method name. # # If a block is given, it returns either a +Function+ or +WindowFunction+, depending on the first # argument to the method. Note that the block is currently not called by the code, though # this may change in a future version.
If the first argument is: # # no arguments given :: creates a +Function+ with no arguments. # :* :: creates a +Function+ with a literal wildcard argument (*), mostly useful for COUNT. # :distinct :: creates a +Function+ that prepends DISTINCT to the rest of the arguments, mostly # useful for aggregate functions. # :over :: creates a +WindowFunction+. If a second argument is provided, it should be a hash # of options which are passed to Window (with possible keys :window, :partition, :order, and :frame). The # arguments to the function itself should be specified as <tt>:*=>true</tt> for a wildcard, or via # the <tt>:args</tt> option. # # Examples: # # ds = DB[:t] # # # Argument yielded to block # ds.filter{|r| r.name < 2} # SELECT * FROM t WHERE (name < 2) # # # Block without argument (instance_eval) # ds.filter{name < 2} # SELECT * FROM t WHERE (name < 2) # # # Qualified identifiers # ds.filter{table__column + 1 < 2} # SELECT * FROM t WHERE ((table.column + 1) < 2) # # # Functions # ds.filter{is_active(1, 'arg2')} # SELECT * FROM t WHERE is_active(1, 'arg2') # ds.select{version{}} # SELECT version() FROM t # ds.select{count(:*){}} # SELECT count(*) FROM t # ds.select{count(:distinct, col1){}} # SELECT count(DISTINCT col1) FROM t # # # Window Functions # ds.select{rank(:over){}} # SELECT rank() OVER () FROM t # ds.select{count(:over, :*=>true){}} # SELECT count(*) OVER () FROM t # ds.select{sum(:over, :args=>col1, :partition=>col2, :order=>col3){}} # SELECT sum(col1) OVER (PARTITION BY col2 ORDER BY col3) FROM t # # # Math Operators # ds.select{|o| o.+(1, :a).as(:b)} # SELECT (1 + a) AS b FROM t # ds.select{|o| o.-(2, :a).as(:b)} # SELECT (2 - a) AS b FROM t # ds.select{|o| o.*(3, :a).as(:b)} # SELECT (3 * a) AS b FROM t # ds.select{|o| o./(4, :a).as(:b)} # SELECT (4 / a) AS b FROM t # # # Boolean Operators # ds.filter{|o| o.&({:a=>1}, :b)} # SELECT * FROM t WHERE ((a = 1) AND b) # ds.filter{|o| o.|({:a=>1}, :b)} # SELECT * FROM t WHERE ((a = 1) OR b) # ds.filter{|o| o.~({:a=>1})} # SELECT * FROM t WHERE (a != 1) # ds.filter{|o| o.~({:a=>1, :b=>2})} # SELECT * FROM t WHERE ((a != 1) OR (b != 2)) # # # Inequality Operators # ds.filter{|o| o.>(1, :a)} # SELECT * FROM t WHERE (1 > a) # ds.filter{|o| o.<(2, :a)} # SELECT * FROM t WHERE (2 < a) # ds.filter{|o| o.>=(3, :a)} # SELECT * FROM t WHERE (3 >= a) # ds.filter{|o| o.<=(4, :a)} # SELECT * FROM t WHERE (4 <= a) # # # Literal Strings # ds.filter{{a=>`some SQL`}} # SELECT * FROM t WHERE (a = some SQL) # # For a more detailed explanation, see the {Virtual Rows guide}[link:files/doc/virtual_rows_rdoc.html]. class VirtualRow < BasicObject WILDCARD = LiteralString.new('*').freeze QUESTION_MARK = LiteralString.new('?').freeze COMMA_SEPARATOR = LiteralString.new(', ').freeze DOUBLE_UNDERSCORE = '__'.freeze DISTINCT = ["DISTINCT ".freeze].freeze COMMA_ARRAY = [COMMA_SEPARATOR].freeze include OperatorBuilders %w'> < >= <='.each do |op| class_eval(<<-END, __FILE__, __LINE__ + 1) def #{op}(*args) SQL::BooleanExpression.new(:#{op}, *args) end END end # Return a literal string created with the given string. def `(s) Sequel::LiteralString.new(s) end # Return an +Identifier+, +QualifiedIdentifier+, +Function+, or +WindowFunction+, depending # on arguments and whether a block is provided. Does not currently call the block. # See the class level documentation. def method_missing(m, *args, &block) if block if args.empty? 
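# Block given, no function arguments: produce a zero-argument function
# call, e.g. ds.select{version{}} # SELECT version() FROM t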
Function.new(m) else case args.shift when :* Function.new(m, WILDCARD) when :distinct Function.new(m, PlaceholderLiteralString.new(DISTINCT + COMMA_ARRAY * (args.length-1), args)) when :over opts = args.shift || {} fun_args = ::Kernel.Array(opts[:*] ? WILDCARD : opts[:args]) WindowFunction.new(Function.new(m, *fun_args), Window.new(opts)) else Kernel.raise(Error, 'unsupported VirtualRow method argument used with block') end end elsif args.empty? table, column = m.to_s.split(DOUBLE_UNDERSCORE, 2) column ? QualifiedIdentifier.new(table, column) : Identifier.new(m) else Function.new(m, *args) end end Sequel::VIRTUAL_ROW = new end # A +Window+ is part of a window function specifying the window over which the function operates. # It is separated from the +WindowFunction+ class because it also can be used separately on # some databases. class Window < Expression # The options for this window. Options currently supported: # :frame :: if specified, should be :all, :rows, or a String that is used literally. :all always operates over all rows in the # partition, while :rows excludes the current row's later peers. The default is to include # all previous rows in the partition up to the current row's last peer. # :order :: order on the column(s) given # :partition :: partition/group on the column(s) given # :window :: base results on a previously specified named window attr_reader :opts # Set the options to the options given def initialize(opts=OPTS) @opts = opts end to_s_method :window_sql, '@opts' end # A +WindowFunction+ is a grouping of a +Function+ with a +Window+ over which it operates. class WindowFunction < GenericExpression # The function to use, should be an <tt>SQL::Function</tt>. attr_reader :function # The window to use, should be an <tt>SQL::Window</tt>. attr_reader :window # Set the function and window. def initialize(function, window) @function, @window = function, window end to_s_method :window_function_sql, '@function, @window' end # A +Wrapper+ is a simple way to wrap an existing object so that it supports # the Sequel DSL. class Wrapper < GenericExpression # The underlying value wrapped by this object. attr_reader :value # Set the value wrapped by the object. def initialize(value) @value = value end to_s_method :literal, '@value' end end # +LiteralString+ is used to represent literal SQL expressions. A # +LiteralString+ is copied verbatim into an SQL statement. Instances of # +LiteralString+ can be created by calling <tt>String#lit</tt>. class LiteralString include SQL::OrderMethods include SQL::ComplexExpressionMethods include SQL::BooleanMethods include SQL::NumericMethods include SQL::StringMethods include SQL::InequalityMethods include SQL::AliasMethods include SQL::CastMethods # Return self if no args are given, otherwise return a SQL::PlaceholderLiteralString # with the current string and the given args. def lit(*args) args.empty? ? self : SQL::PlaceholderLiteralString.new(self, args) end # Convert a literal string to a SQL::Blob. 
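#
#   Sequel.lit("a").to_sequel_blob   # => SQL::Blob containing "a" (a brief sketch)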
def to_sequel_blob SQL::Blob.new(self) end end include SQL::Constants extend SQL::Builders extend SQL::OperatorBuilders end
ruby-sequel-4.1.1/lib/sequel/timezones.rb
module Sequel @application_timezone = nil @database_timezone = nil @typecast_timezone = nil # Sequel doesn't pay much attention to timezones by default, but you can set it to # handle timezones if you want. There are three separate timezone settings, application_timezone, # database_timezone, and typecast_timezone. All three timezones have getter and setter methods. # You can set all three timezones to the same value at once via <tt>Sequel.default_timezone=</tt>. # # The only timezone values that are supported by default are <tt>:utc</tt> (convert to UTC), # <tt>:local</tt> (convert to local time), and +nil+ (don't convert). If you need to # convert to a specific timezone, or need the timezones being used to change based # on the environment (e.g. current user), you need to use the +named_timezones+ extension (and use # +DateTime+ as the +datetime_class+). Sequel also ships with a +thread_local_timezones+ extension # which allows each thread to have its own timezone values for each of the timezones. module Timezones # The timezone you want the application to use. This is the timezone # that incoming times from the database and typecasting are converted to. attr_reader :application_timezone # The timezone for storage in the database. This is the # timezone to which Sequel will convert timestamps before literalizing them # for storage in the database. It is also the timezone that Sequel will assume # database timestamp values are already in (if they don't include an offset). attr_reader :database_timezone # The timezone that incoming data that Sequel needs to typecast # is assumed to be already in (if they don't include an offset). attr_reader :typecast_timezone %w'application database typecast'.each do |t| class_eval("def #{t}_timezone=(tz); @#{t}_timezone = convert_timezone_setter_arg(tz) end", __FILE__, __LINE__) end # Convert the given +Time+/+DateTime+ object into the database timezone, used when # literalizing objects in an SQL string. def application_to_database_timestamp(v) convert_output_timestamp(v, Sequel.database_timezone) end # Converts the object to the given +output_timezone+.
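#
# A minimal sketch (assumes a process whose local zone is UTC-07:00):
#
#   Sequel.convert_output_timestamp(Time.utc(2013, 8, 1, 12), :local)
#   # => 2013-08-01 05:00:00 -0700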
def convert_output_timestamp(v, output_timezone) if output_timezone if v.is_a?(DateTime) case output_timezone when :utc v.new_offset(0) when :local v.new_offset(local_offset_for_datetime(v)) else convert_output_datetime_other(v, output_timezone) end else v.send(output_timezone == :utc ? :getutc : :getlocal) end else v end end # Converts the given object from the given input timezone to the # +application_timezone+ using +convert_input_timestamp+ and # +convert_output_timestamp+. def convert_timestamp(v, input_timezone) begin if v.is_a?(Date) && !v.is_a?(DateTime) # Dates handled specially as they are assumed to already be in the application_timezone if datetime_class == DateTime DateTime.civil(v.year, v.month, v.day, 0, 0, 0, application_timezone == :local ? (defined?(Rational) ? Rational(Time.local(v.year, v.month, v.day).utc_offset, 86400) : Time.local(v.year, v.month, v.day).utc_offset/86400.0) : 0) else Time.send(application_timezone == :utc ? :utc : :local, v.year, v.month, v.day) end else convert_output_timestamp(convert_input_timestamp(v, input_timezone), application_timezone) end rescue InvalidValue raise rescue => e raise convert_exception_class(e, InvalidValue) end end # Convert the given object into an object of <tt>Sequel.datetime_class</tt> in the # +application_timezone+. Used when converting datetime/timestamp columns # returned by the database. def database_to_application_timestamp(v) convert_timestamp(v, Sequel.database_timezone) end # Sets the database, application, and typecasting timezones to the given timezone. def default_timezone=(tz) self.database_timezone = tz self.application_timezone = tz self.typecast_timezone = tz end # Convert the given object into an object of <tt>Sequel.datetime_class</tt> in the # +application_timezone+. Used when typecasting values when assigning them # to model datetime attributes. def typecast_to_application_timestamp(v) convert_timestamp(v, Sequel.typecast_timezone) end private # Convert the given +DateTime+ to the given input_timezone, keeping the # same time and just modifying the timezone. def convert_input_datetime_no_offset(v, input_timezone) case input_timezone when :utc, nil v # DateTime assumes UTC if no offset is given when :local offset = local_offset_for_datetime(v) v.new_offset(offset) - offset else convert_input_datetime_other(v, input_timezone) end end # Convert the given +DateTime+ to the given input_timezone that is not supported # by default (i.e. one other than +nil+, <tt>:local</tt>, or <tt>:utc</tt>). Raises an +InvalidValue+ by default. # Can be overridden in extensions. def convert_input_datetime_other(v, input_timezone) raise InvalidValue, "Invalid input_timezone: #{input_timezone.inspect}" end # Converts the object from a +String+, +Array+, +Date+, +DateTime+, or +Time+ into an # instance of <tt>Sequel.datetime_class</tt>. If given an array or a string that doesn't # contain an offset, assume that the array/string is already in the given +input_timezone+. def convert_input_timestamp(v, input_timezone) case v when String v2 = Sequel.string_to_datetime(v) if !input_timezone || Date._parse(v).has_key?(:offset) v2 else # Correct for potentially wrong offset if string doesn't include offset if v2.is_a?(DateTime) v2 = convert_input_datetime_no_offset(v2, input_timezone) else # Time assumes local time if no offset is given v2 = v2.getutc + v2.utc_offset if input_timezone == :utc end v2 end when Array y, mo, d, h, mi, s, ns, off = v if datetime_class == DateTime s += (defined?(Rational) ? 
Rational(ns, 1000000000) : ns/1000000000.0) if ns if off DateTime.civil(y, mo, d, h, mi, s, off) else convert_input_datetime_no_offset(DateTime.civil(y, mo, d, h, mi, s), input_timezone) end else Time.send(input_timezone == :utc ? :utc : :local, y, mo, d, h, mi, s, (ns ? ns / 1000.0 : 0)) end when Hash ary = [:year, :month, :day, :hour, :minute, :second, :nanos].map{|x| (v[x] || v[x.to_s]).to_i} if (offset = (v[:offset] || v['offset'])) ary << offset end convert_input_timestamp(ary, input_timezone) when Time if datetime_class == DateTime if v.respond_to?(:to_datetime) v.to_datetime else # :nocov: # Ruby 1.8 code, %N not available and %z broken on Windows offset_hours, offset_minutes = (v.utc_offset/60).divmod(60) string_to_datetime(v.strftime("%Y-%m-%dT%H:%M:%S") << sprintf(".%06i%+03i%02i", v.usec, offset_hours, offset_minutes)) # :nocov: end else v end when DateTime if datetime_class == DateTime v elsif v.respond_to?(:to_time) v.to_time else # :nocov: string_to_datetime(v.strftime("%FT%T.%N%z")) # :nocov: end else raise InvalidValue, "Invalid convert_input_timestamp type: #{v.inspect}" end end # Convert the given +DateTime+ to the given output_timezone that is not supported # by default (i.e. one other than +nil+, <tt>:local</tt>, or <tt>:utc</tt>). Raises an +InvalidValue+ by default. # Can be overridden in extensions. def convert_output_datetime_other(v, output_timezone) raise InvalidValue, "Invalid output_timezone: #{output_timezone.inspect}" end # Convert the timezone setter argument. Returns argument given by default, # exists for easier overriding in extensions. def convert_timezone_setter_arg(tz) tz end # Takes a DateTime dt, and returns the correct local offset for that dt, daylight savings included. def local_offset_for_datetime(dt) time_offset_to_datetime_offset Time.local(dt.year, dt.month, dt.day, dt.hour, dt.min, dt.sec).utc_offset end # Caches offset conversions to avoid excess Rational math. def time_offset_to_datetime_offset(offset_secs) @local_offsets ||= {} @local_offsets[offset_secs] ||= respond_to?(:Rational, true) ? Rational(offset_secs, 60*60*24) : offset_secs/60/60/24.0 end end extend Timezones end
ruby-sequel-4.1.1/lib/sequel/version.rb
module Sequel # The major version of Sequel. Only bumped for major changes. MAJOR = 4 # The minor version of Sequel. Bumped for every non-patch level # release, generally around once a month. MINOR = 1 # The tiny version of Sequel. Usually 0, only bumped for bugfix # releases that fix regressions from previous versions. TINY = 1 # The version of Sequel you are using, as a string (e.g. "2.11.0") VERSION = [MAJOR, MINOR, TINY].join('.') # The version of Sequel you are using, as a string (e.g.
"2.11.0") def self.version VERSION end end �������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/sequel.gemspec��������������������������������������������������������������������0000664�0000000�0000000�00000001653�12201565355�0016646�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������require File.expand_path("../lib/sequel/version", __FILE__) SEQUEL_GEMSPEC = Gem::Specification.new do |s| s.name = 'sequel' s.rubyforge_project = 'sequel' s.version = Sequel.version s.platform = Gem::Platform::RUBY s.has_rdoc = true s.extra_rdoc_files = ["README.rdoc", "CHANGELOG", "MIT-LICENSE"] + Dir["doc/*.rdoc"] + Dir['doc/release_notes/*.txt'] s.rdoc_options += ["--quiet", "--line-numbers", "--inline-source", '--title', 'Sequel: The Database Toolkit for Ruby', '--main', 'README.rdoc'] s.summary = "The Database Toolkit for Ruby" s.description = s.summary s.author = "Jeremy Evans" s.email = "code@jeremyevans.net" s.homepage = "http://sequel.rubyforge.org" s.required_ruby_version = ">= 1.8.7" s.files = %w(MIT-LICENSE CHANGELOG README.rdoc Rakefile bin/sequel) + Dir["doc/**/*.{rdoc,txt}"] + Dir["{spec,lib}/**/*.{rb,RB}"] s.require_path = "lib" s.bindir = 'bin' s.executables << 'sequel' end �������������������������������������������������������������������������������������ruby-sequel-4.1.1/spec/�����������������������������������������������������������������������������0000775�0000000�0000000�00000000000�12201565355�0014730�5����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/spec/adapters/��������������������������������������������������������������������0000775�0000000�0000000�00000000000�12201565355�0016533�5����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/spec/adapters/db2_spec.rb���������������������������������������������������������0000664�0000000�0000000�00000010477�12201565355�0020552�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������SEQUEL_ADAPTER_TEST = :db2 
require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper.rb') if DB.table_exists?(:test) DB.drop_table(:test) end describe Sequel::Database do before do @db = DB @db.create_table(:test){String :a} @ds = @db[:test] end after do @db.drop_table(:test) end specify "should provide disconnect functionality after preparing a connection" do @ds.prepare(:first, :a).call @db.disconnect @db.pool.size.should == 0 end specify "should return version correctly" do @db.db2_version.should match(/DB2 v/i) end end describe "Simple Dataset operations" do before(:all) do Sequel::DB2.use_clob_as_blob = false DB.create_table!(:items) do Integer :id, :primary_key => true Integer :number column :bin_string, 'varchar(20) for bit data' column :bin_blob, 'blob' end @ds = DB[:items] end after(:each) do @ds.delete end after(:all) do Sequel::DB2.use_clob_as_blob = true DB.drop_table(:items) end specify "should insert with a primary key specified" do @ds.insert(:id => 1, :number => 10) @ds.insert(:id => 100, :number => 20) @ds.select_hash(:id, :number).should == {1 => 10, 100 => 20} end specify "should insert into binary columns" do @ds.insert(:id => 1, :bin_string => Sequel.blob("\1"), :bin_blob => Sequel.blob("\2")) @ds.select(:bin_string, :bin_blob).first.should == {:bin_string => "\1", :bin_blob => "\2"} end end describe Sequel::Database do before do @db = DB end after do @db.drop_table(:items) end specify "should parse primary keys from the schema properly" do @db.create_table!(:items){Integer :number} @db.schema(:items).collect{|k,v| k if v[:primary_key]}.compact.should == [] @db.create_table!(:items){primary_key :number} @db.schema(:items).collect{|k,v| k if v[:primary_key]}.compact.should == [:number] @db.create_table!(:items){Integer :number1, :null => false; Integer :number2, :null => false; primary_key [:number1, :number2]} @db.schema(:items).collect{|k,v| k if v[:primary_key]}.compact.should == [:number1, :number2] end end describe "Sequel::IBMDB.convert_smallint_to_bool" do before do @db = DB @db.create_table!(:booltest){column :b, 'smallint'; column :i, 'integer'} @ds = @db[:booltest] end after do Sequel::IBMDB.convert_smallint_to_bool = true @db.drop_table(:booltest) end specify "should consider smallint datatypes as boolean if set, but not larger smallints" do @db.schema(:booltest, :reload=>true).first.last[:type].should == :boolean @db.schema(:booltest, :reload=>true).first.last[:db_type].should match /smallint/i Sequel::IBMDB.convert_smallint_to_bool = false @db.schema(:booltest, :reload=>true).first.last[:type].should == :integer @db.schema(:booltest, :reload=>true).first.last[:db_type].should match /smallint/i end specify "should return smallints as bools and integers as integers when set" do Sequel::IBMDB.convert_smallint_to_bool = true @ds.delete @ds << {:b=>true, :i=>10} @ds.all.should == [{:b=>true, :i=>10}] @ds.delete @ds << {:b=>false, :i=>0} @ds.all.should == [{:b=>false, :i=>0}] @ds.delete @ds << {:b=>true, :i=>1} @ds.all.should == [{:b=>true, :i=>1}] end specify "should return all smallints as integers when unset" do Sequel::IBMDB.convert_smallint_to_bool = false @ds.delete @ds << {:b=>true, :i=>10} @ds.all.should == [{:b=>1, :i=>10}] @ds.delete @ds << {:b=>false, :i=>0} @ds.all.should == [{:b=>0, :i=>0}] @ds.delete @ds << {:b=>1, :i=>10} @ds.all.should == [{:b=>1, :i=>10}] @ds.delete @ds << {:b=>0, :i=>0} @ds.all.should == [{:b=>0, :i=>0}] end end if DB.adapter_scheme == :ibmdb describe "Simple Dataset operations in transactions" do before do 
    DB.create_table!(:items_insert_in_transaction) do
      Integer :id, :primary_key => true
      integer :number
    end
    @ds = DB[:items_insert_in_transaction]
  end
  after do
    DB.drop_table(:items_insert_in_transaction)
  end

  specify "should insert correctly with a primary key specified inside a transaction" do
    DB.transaction do
      @ds.insert(:id=>100, :number=>20)
      @ds.count.should == 1
      @ds.order(:id).all.should == [{:id=>100, :number=>20}]
    end
  end
end
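# A minimal usage sketch of the smallint-to-boolean toggle exercised above
# (assuming the ibmdb adapter and the booltest table from those specs):
#
#   Sequel::IBMDB.convert_smallint_to_bool = false
#   DB[:booltest].first   # => {:b=>1, :i=>10}    (smallints returned as integers)
#   Sequel::IBMDB.convert_smallint_to_bool = true
#   DB[:booltest].first   # => {:b=>true, :i=>10} (smallints returned as booleans)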
identifiers" do @d.quote_identifiers = true @d.select(:name).sql.should == \ 'SELECT "NAME" FROM "TEST"' @d.select('COUNT(*)'.lit).sql.should == \ 'SELECT COUNT(*) FROM "TEST"' @d.select(:max[:val]).sql.should == \ 'SELECT max("VAL") FROM "TEST"' @d.select(:now[]).sql.should == \ 'SELECT now() FROM "TEST"' @d.select(:max[:items__val]).sql.should == \ 'SELECT max("ITEMS"."VAL") FROM "TEST"' @d.order(:name.desc).sql.should == \ 'SELECT * FROM "TEST" ORDER BY "NAME" DESC' @d.select('TEST.NAME AS item_:name'.lit).sql.should == \ 'SELECT TEST.NAME AS item_:name FROM "TEST"' @d.select('"NAME"'.lit).sql.should == \ 'SELECT "NAME" FROM "TEST"' @d.select('max(TEST."NAME") AS "max_:name"'.lit).sql.should == \ 'SELECT max(TEST."NAME") AS "max_:name" FROM "TEST"' @d.select(:test[:ABC, 'hello']).sql.should == \ "SELECT test(\"ABC\", 'hello') FROM \"TEST\"" @d.select(:test[:ABC__DEF, 'hello']).sql.should == \ "SELECT test(\"ABC\".\"DEF\", 'hello') FROM \"TEST\"" @d.select(:test[:ABC__DEF, 'hello'].as(:X2)).sql.should == \ "SELECT test(\"ABC\".\"DEF\", 'hello') AS \"X2\" FROM \"TEST\"" @d.insert_sql(:val => 333).should =~ \ /\AINSERT INTO "TEST" \("VAL"\) VALUES \(333\)( RETURNING NULL)?\z/ @d.insert_sql(:X => :Y).should =~ \ /\AINSERT INTO "TEST" \("X"\) VALUES \("Y"\)( RETURNING NULL)?\z/ end specify "should quote fields correctly when reversing the order if quoting identifiers" do @d.quote_identifiers = true @d.reverse_order(:name).sql.should == \ 'SELECT * FROM "TEST" ORDER BY "NAME" DESC' @d.reverse_order(:name.desc).sql.should == \ 'SELECT * FROM "TEST" ORDER BY "NAME" ASC' @d.reverse_order(:name, :test.desc).sql.should == \ 'SELECT * FROM "TEST" ORDER BY "NAME" DESC, "TEST" ASC' @d.reverse_order(:name.desc, :test).sql.should == \ 'SELECT * FROM "TEST" ORDER BY "NAME" ASC, "TEST" DESC' end specify "should support transactions" do DB.transaction do @d << {:name => 'abc', :val => 1} end @d.count.should == 1 end specify "should have #transaction yield the connection" do DB.transaction do |conn| conn.should_not == nil end end specify "should correctly rollback transactions" do proc do DB.transaction do @d << {:name => 'abc', :val => 1} raise RuntimeError, 'asdf' end end.should raise_error(RuntimeError) @d.count.should == 0 end specify "should handle returning inside of the block by committing" do def DB.ret_commit transaction do self[:test] << {:name => 'abc'} return self[:test] << {:name => 'd'} end end @d.count.should == 0 DB.ret_commit @d.count.should == 1 DB.ret_commit @d.count.should == 2 proc do DB.transaction do raise RuntimeError, 'asdf' end end.should raise_error(RuntimeError) @d.count.should == 2 end specify "should quote and upcase reserved keywords" do @d = DB[:testing] @d.quote_identifiers = true @d.select(:select).sql.should == \ 'SELECT "SELECT" FROM "TESTING"' end end describe "A Firebird dataset with a timestamp field" do before do @d = DB[:test3] @d.delete end specify "should store milliseconds in time fields" do t = Time.now @d << {:val=>1, :time_stamp=>t} @d.literal(@d[:val =>'1'][:time_stamp]).should == @d.literal(t) @d[:val=>'1'][:time_stamp].usec.should == t.usec - t.usec % 100 end end describe "A Firebird database" do before do @db = DB @db.drop_table?(:posts) @db.sqls.clear end specify "should allow us to name the sequences" do @db.create_table(:posts){primary_key :id, :sequence_name => "seq_test"} check_sqls do @db.sqls.should == [ "DROP SEQUENCE SEQ_TEST", "CREATE TABLE POSTS (ID integer PRIMARY KEY )", "CREATE SEQUENCE SEQ_TEST", " CREATE TRIGGER BI_POSTS_ID for POSTS\n 
ACTIVE BEFORE INSERT position 0\n as begin\n if ((new.ID is null) or (new.ID = 0)) then\n begin\n new.ID = next value for seq_test;\n end\n end\n\n" ] end end specify "should allow us to set the starting position for the sequences" do @db.create_table(:posts){primary_key :id, :sequence_start_position => 999} check_sqls do @db.sqls.should == [ "DROP SEQUENCE SEQ_POSTS_ID", "CREATE TABLE POSTS (ID integer PRIMARY KEY )", "CREATE SEQUENCE SEQ_POSTS_ID", "ALTER SEQUENCE SEQ_POSTS_ID RESTART WITH 999", " CREATE TRIGGER BI_POSTS_ID for POSTS\n ACTIVE BEFORE INSERT position 0\n as begin\n if ((new.ID is null) or (new.ID = 0)) then\n begin\n new.ID = next value for seq_posts_id;\n end\n end\n\n" ] end end specify "should allow us to name and set the starting position for the sequences" do @db.create_table(:posts){primary_key :id, :sequence_name => "seq_test", :sequence_start_position => 999} check_sqls do @db.sqls.should == [ "DROP SEQUENCE SEQ_TEST", "CREATE TABLE POSTS (ID integer PRIMARY KEY )", "CREATE SEQUENCE SEQ_TEST", "ALTER SEQUENCE SEQ_TEST RESTART WITH 999", " CREATE TRIGGER BI_POSTS_ID for POSTS\n ACTIVE BEFORE INSERT position 0\n as begin\n if ((new.ID is null) or (new.ID = 0)) then\n begin\n new.ID = next value for seq_test;\n end\n end\n\n" ] end end specify "should allow us to name the triggers" do @db.create_table(:posts){primary_key :id, :trigger_name => "trig_test"} check_sqls do @db.sqls.should == [ "DROP SEQUENCE SEQ_POSTS_ID", "CREATE TABLE POSTS (ID integer PRIMARY KEY )", "CREATE SEQUENCE SEQ_POSTS_ID", " CREATE TRIGGER TRIG_TEST for POSTS\n ACTIVE BEFORE INSERT position 0\n as begin\n if ((new.ID is null) or (new.ID = 0)) then\n begin\n new.ID = next value for seq_posts_id;\n end\n end\n\n" ] end end specify "should allow us to not create the sequence" do @db.create_table(:posts){primary_key :id, :create_sequence => false} check_sqls do @db.sqls.should == [ "CREATE TABLE POSTS (ID integer PRIMARY KEY )", " CREATE TRIGGER BI_POSTS_ID for POSTS\n ACTIVE BEFORE INSERT position 0\n as begin\n if ((new.ID is null) or (new.ID = 0)) then\n begin\n new.ID = next value for seq_posts_id;\n end\n end\n\n" ] end end specify "should allow us to not create the trigger" do @db.create_table(:posts){primary_key :id, :create_trigger => false} check_sqls do @db.sqls.should == [ "DROP SEQUENCE SEQ_POSTS_ID", "CREATE TABLE POSTS (ID integer PRIMARY KEY )", "CREATE SEQUENCE SEQ_POSTS_ID", ] end end specify "should allow us to not create either the sequence nor the trigger" do @db.create_table(:posts){primary_key :id, :create_sequence => false, :create_trigger => false} check_sqls do @db.sqls.should == [ "CREATE TABLE POSTS (ID integer PRIMARY KEY )" ] end end specify "should support column operations" do @db.create_table!(:test2){varchar :name, :size => 50; integer :val} @db[:test2] << {} @db[:test2].columns.should == [:name, :val] @db.add_column :test2, :xyz, :varchar, :size => 50 @db[:test2].columns.should == [:name, :val, :xyz] @db[:test2].columns.should == [:name, :val, :xyz] @db.drop_column :test2, :xyz @db[:test2].columns.should == [:name, :val] @db[:test2].delete @db.add_column :test2, :xyz, :varchar, :default => '000', :size => 50#, :create_domain => 'xyz_varchar' @db[:test2] << {:name => 'mmm', :val => 111, :xyz => 'qqqq'} @db[:test2].columns.should == [:name, :val, :xyz] @db.rename_column :test2, :xyz, :zyx @db[:test2].columns.should == [:name, :val, :zyx] @db[:test2].first[:zyx].should == 'qqqq' @db.add_column :test2, :xyz, :decimal, :elements => [12, 2] @db[:test2].delete 
    @db[:test2] << {:name => 'mmm', :val => 111, :xyz => 56.4}
    @db.set_column_type :test2, :xyz, :varchar, :size => 50
    @db[:test2].first[:xyz].should == "56.40"
  end

  specify "should allow us to retrieve the primary key for a table" do
    @db.create_table!(:test2){primary_key :id}
    @db.primary_key(:test2).should == ["id"]
  end
end

describe "Firebird::Dataset#insert" do
  before do
    @ds = DB[:test5]
    @ds.delete
  end

  specify "should call insert_returning_sql" do
    # @ds.should_receive(:single_value).once.with(:sql=>'INSERT INTO TEST5 (VAL) VALUES (10) RETURNING XID', :server=> :default)
    @ds.should_receive(:single_value).once
    @ds.insert(:val=>10)
  end

  specify "should have insert_returning_sql use the RETURNING keyword" do
    @ds.insert_returning_sql(:XID, :val=>10).should == "INSERT INTO TEST5 (VAL) VALUES (10) RETURNING XID"
    @ds.insert_returning_sql('*'.lit, :val=>10).should == "INSERT INTO TEST5 (VAL) VALUES (10) RETURNING *"
    @ds.insert_returning_sql('NULL'.lit, :val=>10).should == "INSERT INTO TEST5 (VAL) VALUES (10) RETURNING NULL"
  end

  specify "should correctly return the inserted record's primary key value" do
    value1 = 10
    id1 = @ds.insert(:val=>value1)
    @ds.first(:XID=>id1)[:val].should == value1
    value2 = 20
    id2 = @ds.insert(:val=>value2)
    @ds.first(:XID=>id2)[:val].should == value2
  end

  specify "should return nil if the table has no primary key" do
    ds = DB[:test]
    ds.delete
    ds.insert(:name=>'a').should == nil
  end
end

describe "Firebird::Dataset#insert with blobs" do
  before do
    @ds = DB[:test6]
    @ds.delete
  end

  specify "should insert and retrieve a blob successfully" do
    value1 = "\1\2\2\2\2222\2\2\2"
    value2 = "abcd"
    value3 = "efgh"
    value4 = "ijkl"
    id1 = @ds.insert(:val=>value1, :val2=>value2, :val3=>value3, :val4=>value4)
    @ds.first(:XID=>id1)[:val].should == value1
    @ds.first(:XID=>id1)[:val2].should == value2
    @ds.first(:XID=>id1)[:val3].should == value3
    @ds.first(:XID=>id1)[:val4].should == value4
  end
end
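# A sketch of the INSERT ... RETURNING behavior specced above (assuming the
# :test5 table created at the top of this file): Dataset#insert returns the
# generated primary key directly:
#
#   id = DB[:test5].insert(:val => 10)   # INSERT INTO TEST5 (VAL) VALUES (10) RETURNING XID
#   DB[:test5].first(:XID => id)[:val]   # => 10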
ruby-sequel-4.1.1/spec/adapters/informix_spec.rb

SEQUEL_ADAPTER_TEST = :informix

require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper.rb')

if DB.table_exists?(:test)
  DB.drop_table :test
end
DB.create_table :test do
  text :name
  integer :value
  index :value
end

describe "An Informix database" do
  specify "should provide disconnect functionality" do
    DB.execute("select user from dual")
    DB.pool.size.should == 1
    DB.disconnect
    DB.pool.size.should == 0
  end
end

describe "An Informix dataset" do
  before do
    @d = DB[:test]
    @d.delete # remove all records
  end

  specify "should return the correct record count" do
    @d.count.should == 0
    @d << {:name => 'abc', :value => 123}
    @d << {:name => 'abc', :value => 456}
    @d << {:name => 'def', :value => 789}
    @d.count.should == 3
  end

  specify "should return the correct records" do
    @d.to_a.should == []
    @d << {:name => 'abc', :value => 123}
    @d << {:name => 'abc', :value => 456}
    @d << {:name => 'def', :value => 789}
    @d.order(:value).to_a.should == [
      {:name => 'abc', :value => 123},
      {:name => 'abc', :value => 456},
      {:name => 'def', :value => 789}
    ]
  end

  specify "should update records correctly" do
    @d << {:name => 'abc', :value => 123}
    @d << {:name => 'abc', :value => 456}
    @d << {:name => 'def', :value => 789}
    @d.filter(:name => 'abc').update(:value => 530)

    # the third record should stay the same
    # (guards against floating-point precision issues)
    @d[:name => 'def'][:value].should == 789
    @d.filter(:value => 530).count.should == 2
  end

  specify "should delete records correctly" do
    @d << {:name => 'abc', :value => 123}
    @d << {:name => 'abc', :value => 456}
    @d << {:name => 'def', :value => 789}
    @d.filter(:name => 'abc').delete
    @d.count.should == 1
    @d.first[:name].should == 'def'
  end

  specify "should be able to literalize booleans" do
    proc {@d.literal(true)}.should_not raise_error
    proc {@d.literal(false)}.should_not raise_error
  end

  specify "should support transactions" do
    DB.transaction do
      @d << {:name => 'abc', :value => 1}
    end
    @d.count.should == 1
  end

  specify "should support #first and #last" do
    @d << {:name => 'abc', :value => 123}
    @d << {:name => 'abc', :value => 456}
    @d << {:name => 'def', :value => 789}
    @d.order(:value).first.should == {:name => 'abc', :value => 123}
    @d.order(:value).last.should == {:name => 'def', :value => 789}
  end
end

ruby-sequel-4.1.1/spec/adapters/mssql_spec.rb

SEQUEL_ADAPTER_TEST = :mssql

require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper.rb')

def DB.sqls
  (@sqls ||= [])
end
logger = Object.new
def logger.method_missing(m, msg)
  DB.sqls << msg
end
DB.loggers = [logger]

describe "A MSSQL database" do
  before do
    @db = DB
  end
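  # Note: cspecify comes from the shared spec_helper; as used below it appears
  # to mark a spec as conditionally pending/skipped for the listed adapters or
  # drivers (here :odbc), rather than failing outright.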
  cspecify "should be able to read fractional part of timestamp", :odbc do
    rs = @db["select getutcdate() as full_date, cast(datepart(millisecond, getutcdate()) as int) as milliseconds"].first
    rs[:milliseconds].should == rs[:full_date].usec/1000
  end

  cspecify "should be able to write fractional part of timestamp", :odbc do
    t = Time.utc(2001, 12, 31, 23, 59, 59, 997000)
    (t.usec/1000).should == @db["select cast(datepart(millisecond, ?) as int) as milliseconds", t].get
  end

  specify "should not raise an error when getting the server version" do
    proc{@db.server_version}.should_not raise_error
    proc{@db.dataset.server_version}.should_not raise_error
  end
end

describe "A MSSQL database" do
  before do
    @db = DB
    @db.create_table! :test3 do
      Integer :value
      Time :time
    end
  end
  after do
    @db.drop_table?(:test3)
  end

  specify "should work with NOLOCK" do
    @db.transaction{@db[:test3].nolock.all.should == []}
  end
end

describe "MSSQL" do
  before(:all) do
    @db = DB
    @db.create_table!(:test3){Integer :v3}
    @db.create_table!(:test4){Integer :v4}
    @db[:test3].import([:v3], [[1], [2]])
    @db[:test4].import([:v4], [[1], [3]])
  end
  after(:all) do
    @db.drop_table?(:test3, :test4)
  end

  specify "should support CROSS APPLY" do
    @db[:test3].cross_apply(@db[:test4].where(:test3__v3=>:test4__v4)).select_order_map([:v3, :v4]).should == [[1,1]]
  end

  specify "should support OUTER APPLY" do
    @db[:test3].outer_apply(@db[:test4].where(:test3__v3=>:test4__v4)).select_order_map([:v3, :v4]).should == [[1,1], [2, nil]]
  end
end

# This spec is currently disabled, as SQL Server 2008 R2 Express doesn't support
# full text searching.  Even if full text searching is supported,
# you may need to create a full text catalog on the database first via:
#   CREATE FULLTEXT CATALOG ftscd AS DEFAULT
describe "MSSQL full_text_search" do
  before do
    @db = DB
    @db.drop_table?(:posts)
  end
  after do
    @db.drop_table?(:posts)
  end

  specify "should support fulltext indexes and full_text_search" do
    log do
      @db.create_table(:posts){Integer :id, :null=>false; String :title; String :body; index :id, :name=>:fts_id_idx, :unique=>true; full_text_index :title, :key_index=>:fts_id_idx; full_text_index [:title, :body], :key_index=>:fts_id_idx}
      @db[:posts].insert(:title=>'ruby rails', :body=>'y')
      @db[:posts].insert(:title=>'sequel', :body=>'ruby')
      @db[:posts].insert(:title=>'ruby scooby', :body=>'x')
      @db[:posts].full_text_search(:title, 'rails').all.should == [{:title=>'ruby rails', :body=>'y'}]
      @db[:posts].full_text_search([:title, :body], ['sequel', 'ruby']).all.should == [{:title=>'sequel', :body=>'ruby'}]
      @db[:posts].full_text_search(:title, :$n).call(:select, :n=>'rails').should == [{:title=>'ruby rails', :body=>'y'}]
      @db[:posts].full_text_search(:title, :$n).prepare(:select, :fts_select).call(:n=>'rails').should == [{:title=>'ruby rails', :body=>'y'}]
    end
  end
end if false

describe "MSSQL Dataset#join_table" do
  specify "should emulate the USING clause with ON" do
    DB[:items].join(:categories, [:id]).sql.should == 'SELECT * FROM [ITEMS] INNER JOIN [CATEGORIES] ON ([CATEGORIES].[ID] = [ITEMS].[ID])'
    ['SELECT * FROM [ITEMS] INNER JOIN [CATEGORIES] ON (([CATEGORIES].[ID1] = [ITEMS].[ID1]) AND ([CATEGORIES].[ID2] = [ITEMS].[ID2]))',
     'SELECT * FROM [ITEMS] INNER JOIN [CATEGORIES] ON (([CATEGORIES].[ID2] = [ITEMS].[ID2]) AND ([CATEGORIES].[ID1] = [ITEMS].[ID1]))'].
should include(DB[:items].join(:categories, [:id1, :id2]).sql) DB[:items___i].join(:categories___c, [:id]).sql.should == 'SELECT * FROM [ITEMS] AS [I] INNER JOIN [CATEGORIES] AS [C] ON ([C].[ID] = [I].[ID])' end end describe "MSSQL Dataset#output" do before do @db = DB @db.create_table!(:items){String :name; Integer :value} @db.create_table!(:out){String :name; Integer :value} @ds = @db[:items] end after do @db.drop_table?(:items, :out) end specify "should format OUTPUT clauses without INTO for DELETE statements" do @ds.output(nil, [:deleted__name, :deleted__value]).delete_sql.should =~ /DELETE FROM \[ITEMS\] OUTPUT \[DELETED\].\[(NAME|VALUE)\], \[DELETED\].\[(NAME|VALUE)\]/ @ds.output(nil, [Sequel::SQL::ColumnAll.new(:deleted)]).delete_sql.should =~ /DELETE FROM \[ITEMS\] OUTPUT \[DELETED\].*/ end specify "should format OUTPUT clauses with INTO for DELETE statements" do @ds.output(:out, [:deleted__name, :deleted__value]).delete_sql.should =~ /DELETE FROM \[ITEMS\] OUTPUT \[DELETED\].\[(NAME|VALUE)\], \[DELETED\].\[(NAME|VALUE)\] INTO \[OUT\]/ @ds.output(:out, {:name => :deleted__name, :value => :deleted__value}).delete_sql.should =~ /DELETE FROM \[ITEMS\] OUTPUT \[DELETED\].\[(NAME|VALUE)\], \[DELETED\].\[(NAME|VALUE)\] INTO \[OUT\] \(\[(NAME|VALUE)\], \[(NAME|VALUE)\]\)/ end specify "should format OUTPUT clauses without INTO for INSERT statements" do @ds.output(nil, [:inserted__name, :inserted__value]).insert_sql(:name => "name", :value => 1).should =~ /INSERT INTO \[ITEMS\] \(\[(NAME|VALUE)\], \[(NAME|VALUE)\]\) OUTPUT \[INSERTED\].\[(NAME|VALUE)\], \[INSERTED\].\[(NAME|VALUE)\] VALUES \((N'name'|1), (N'name'|1)\)/ @ds.output(nil, [Sequel::SQL::ColumnAll.new(:inserted)]).insert_sql(:name => "name", :value => 1).should =~ /INSERT INTO \[ITEMS\] \(\[(NAME|VALUE)\], \[(NAME|VALUE)\]\) OUTPUT \[INSERTED\].* VALUES \((N'name'|1), (N'name'|1)\)/ end specify "should format OUTPUT clauses with INTO for INSERT statements" do @ds.output(:out, [:inserted__name, :inserted__value]).insert_sql(:name => "name", :value => 1).should =~ /INSERT INTO \[ITEMS\] \((\[NAME\]|\[VALUE\]), (\[NAME\]|\[VALUE\])\) OUTPUT \[INSERTED\].\[(NAME|VALUE)\], \[INSERTED\].\[(NAME|VALUE)\] INTO \[OUT\] VALUES \((N'name'|1), (N'name'|1)\)/ @ds.output(:out, {:name => :inserted__name, :value => :inserted__value}).insert_sql(:name => "name", :value => 1).should =~ /INSERT INTO \[ITEMS\] \(\[(NAME|VALUE)\], \[(NAME|VALUE)\]\) OUTPUT \[INSERTED\].\[(NAME|VALUE)\], \[INSERTED\].\[(NAME|VALUE)\] INTO \[OUT\] \(\[(NAME|VALUE)\], \[(NAME|VALUE)\]\) VALUES \((N'name'|1), (N'name'|1)\)/ end specify "should format OUTPUT clauses without INTO for UPDATE statements" do @ds.output(nil, [:inserted__name, :deleted__value]).update_sql(:value => 2).should =~ /UPDATE \[ITEMS\] SET \[VALUE\] = 2 OUTPUT \[(INSERTED\].\[NAME|DELETED\].\[VALUE)\], \[(INSERTED\].\[NAME|DELETED\].\[VALUE)\]/ @ds.output(nil, [Sequel::SQL::ColumnAll.new(:inserted)]).update_sql(:value => 2).should =~ /UPDATE \[ITEMS\] SET \[VALUE\] = 2 OUTPUT \[INSERTED\].*/ end specify "should format OUTPUT clauses with INTO for UPDATE statements" do @ds.output(:out, [:inserted__name, :deleted__value]).update_sql(:value => 2).should =~ /UPDATE \[ITEMS\] SET \[VALUE\] = 2 OUTPUT \[(INSERTED\].\[NAME|DELETED\].\[VALUE)\], \[(INSERTED\].\[NAME|DELETED\].\[VALUE)\] INTO \[OUT\]/ @ds.output(:out, {:name => :inserted__name, :value => :deleted__value}).update_sql(:value => 2).should =~ /UPDATE \[ITEMS\] SET \[VALUE\] = 2 OUTPUT \[(INSERTED\].\[NAME|DELETED\].\[VALUE)\], 
\[(INSERTED\].\[NAME|DELETED\].\[VALUE)\] INTO \[OUT\] \(\[(NAME|VALUE)\], \[(NAME|VALUE)\]\)/ end specify "should execute OUTPUT clauses in DELETE statements" do @ds.insert(:name => "name", :value => 1) @ds.output(:out, [:deleted__name, :deleted__value]).delete @db[:out].all.should == [{:name => "name", :value => 1}] @ds.insert(:name => "name", :value => 2) @ds.output(:out, {:name => :deleted__name, :value => :deleted__value}).delete @db[:out].all.should == [{:name => "name", :value => 1}, {:name => "name", :value => 2}] end specify "should execute OUTPUT clauses in INSERT statements" do @ds.output(:out, [:inserted__name, :inserted__value]).insert(:name => "name", :value => 1) @db[:out].all.should == [{:name => "name", :value => 1}] @ds.output(:out, {:name => :inserted__name, :value => :inserted__value}).insert(:name => "name", :value => 2) @db[:out].all.should == [{:name => "name", :value => 1}, {:name => "name", :value => 2}] end specify "should execute OUTPUT clauses in UPDATE statements" do @ds.insert(:name => "name", :value => 1) @ds.output(:out, [:inserted__name, :deleted__value]).update(:value => 2) @db[:out].all.should == [{:name => "name", :value => 1}] @ds.output(:out, {:name => :inserted__name, :value => :deleted__value}).update(:value => 3) @db[:out].all.should == [{:name => "name", :value => 1}, {:name => "name", :value => 2}] end end describe "MSSQL dataset using #with and #with_recursive" do before do @db = DB @ds = DB[:t] @ds1 = @ds.with(:t, @db[:x]) @ds2 = @ds.with_recursive(:t, @db[:x], @db[:t]) end specify "should prepend UPDATE statements with WITH clause" do @ds1.update_sql(:x => :y).should == 'WITH [T] AS (SELECT * FROM [X]) UPDATE [T] SET [X] = [Y]' @ds2.update_sql(:x => :y).should == 'WITH [T] AS (SELECT * FROM [X] UNION ALL SELECT * FROM [T]) UPDATE [T] SET [X] = [Y]' end specify "should prepend DELETE statements with WITH clause" do @ds1.filter(:y => 1).delete_sql.should == 'WITH [T] AS (SELECT * FROM [X]) DELETE FROM [T] WHERE ([Y] = 1)' @ds2.filter(:y => 1).delete_sql.should == 'WITH [T] AS (SELECT * FROM [X] UNION ALL SELECT * FROM [T]) DELETE FROM [T] WHERE ([Y] = 1)' end specify "should prepend INSERT statements with WITH clause" do @ds1.insert_sql(@db[:t]).should == 'WITH [T] AS (SELECT * FROM [X]) INSERT INTO [T] SELECT * FROM [T]' @ds2.insert_sql(@db[:t]).should == 'WITH [T] AS (SELECT * FROM [X] UNION ALL SELECT * FROM [T]) INSERT INTO [T] SELECT * FROM [T]' end specify "should move WITH clause on joined dataset to top level" do @db[:s].inner_join(@ds1).sql.should == "WITH [T] AS (SELECT * FROM [X]) SELECT * FROM [S] INNER JOIN (SELECT * FROM [T]) AS [T1]" @ds1.inner_join(@db[:s].with(:s, @db[:y])).sql.should == "WITH [T] AS (SELECT * FROM [X]), [S] AS (SELECT * FROM [Y]) SELECT * FROM [T] INNER JOIN (SELECT * FROM [S]) AS [T1]" end end describe "MSSQL::Dataset#import" do before do @db = DB @db.sqls.clear @ds = @db[:test] end after do @db.drop_table?(:test) end specify "#import should work correctly with an arbitrary output value" do @db.create_table!(:test){primary_key :x; Integer :y} @ds.output(nil, [:inserted__y, :inserted__x]).import([:y], [[3], [4]]).should == [{:y=>3, :x=>1}, {:y=>4, :x=>2}] @ds.all.should == [{:x=>1, :y=>3}, {:x=>2, :y=>4}] end specify "should handle WITH statements" do @db.create_table!(:test){Integer :x; Integer :y} @db[:testx].with(:testx, @db[:test]).import([:x, :y], [[1, 2], [3, 4], [5, 6]], :slice => 2) @ds.select_order_map([:x, :y]).should == [[1, 2], [3, 4], [5, 6]] end end describe "MSSQL joined datasets" do before do @db 
= DB end specify "should format DELETE statements" do @db[:t1].inner_join(:t2, :t1__pk => :t2__pk).delete_sql.should == "DELETE FROM [T1] FROM [T1] INNER JOIN [T2] ON ([T1].[PK] = [T2].[PK])" end specify "should format UPDATE statements" do @db[:t1].inner_join(:t2, :t1__pk => :t2__pk).update_sql(:pk => :t2__pk).should == "UPDATE [T1] SET [PK] = [T2].[PK] FROM [T1] INNER JOIN [T2] ON ([T1].[PK] = [T2].[PK])" end end describe "Offset support" do before do @db = DB @db.create_table!(:i){Integer :id; Integer :parent_id} @ds = @db[:i].order(:id) @hs = [] @ds.row_proc = proc{|r| @hs << r.dup; r[:id] *= 2; r[:parent_id] *= 3; r} @ds.import [:id, :parent_id], [[1,nil],[2,nil],[3,1],[4,1],[5,3],[6,5]] end after do @db.drop_table?(:i) end specify "should return correct rows" do @ds.limit(2, 2).all.should == [{:id=>6, :parent_id=>3}, {:id=>8, :parent_id=>3}] end specify "should not include offset column in hashes passed to row_proc" do @ds.limit(2, 2).all @hs.should == [{:id=>3, :parent_id=>1}, {:id=>4, :parent_id=>1}] end end describe "Common Table Expressions" do before do @db = DB @db.create_table!(:i1){Integer :id; Integer :parent_id} @db.create_table!(:i2){Integer :id; Integer :parent_id} @ds = @db[:i1] @ds2 = @db[:i2] @ds.import [:id, :parent_id], [[1,nil],[2,nil],[3,1],[4,1],[5,3],[6,5]] end after do @db.drop_table?(:i1, :i2) end specify "using #with should be able to update" do @ds.insert(:id=>1) @ds2.insert(:id=>2, :parent_id=>1) @ds2.insert(:id=>3, :parent_id=>2) @ds.with(:t, @ds2).filter(:id => @db[:t].select(:id)).update(:parent_id => @db[:t].filter(:id => :i1__id).select(:parent_id).limit(1)) @ds[:id => 1].should == {:id => 1, :parent_id => nil} @ds[:id => 2].should == {:id => 2, :parent_id => 1} @ds[:id => 3].should == {:id => 3, :parent_id => 2} @ds[:id => 4].should == {:id => 4, :parent_id => 1} end specify "using #with_recursive should be able to update" do ds = @ds.with_recursive(:t, @ds.filter(:parent_id=>1).or(:id => 1), @ds.join(:t, :i=>:parent_id).select(:i1__id, :i1__parent_id), :args=>[:i, :pi]) ds.exclude(:id => @db[:t].select(:i)).update(:parent_id => 1) @ds[:id => 1].should == {:id => 1, :parent_id => nil} @ds[:id => 2].should == {:id => 2, :parent_id => 1} @ds[:id => 5].should == {:id => 5, :parent_id => 3} end specify "using #with should be able to insert" do @ds2.insert(:id=>7) @ds.with(:t, @ds2).insert(@db[:t]) @ds[:id => 7].should == {:id => 7, :parent_id => nil} end specify "using #with_recursive should be able to insert" do ds = @ds2.with_recursive(:t, @ds.filter(:parent_id=>1), @ds.join(:t, :i=>:parent_id).select(:i1__id, :i1__parent_id), :args=>[:i, :pi]) ds.insert @db[:t] @ds2.all.should == [{:id => 3, :parent_id => 1}, {:id => 4, :parent_id => 1}, {:id => 5, :parent_id => 3}, {:id => 6, :parent_id => 5}] end specify "using #with should be able to delete" do @ds2.insert(:id=>6) @ds2.insert(:id=>5) @ds2.insert(:id=>4) @ds.with(:t, @ds2).filter(:id => @db[:t].select(:id)).delete @ds.all.should == [{:id => 1, :parent_id => nil}, {:id => 2, :parent_id => nil}, {:id => 3, :parent_id => 1}] end specify "using #with_recursive should be able to delete" do @ds.insert(:id=>7, :parent_id=>2) ds = @ds.with_recursive(:t, @ds.filter(:parent_id=>1), @ds.join(:t, :i=>:parent_id).select(:i1__id, :i1__parent_id), :args=>[:i, :pi]) ds.filter(:i1__id => @db[:t].select(:i)).delete @ds.all.should == [{:id => 1, :parent_id => nil}, {:id => 2, :parent_id => nil}, {:id => 7, :parent_id => 2}] end specify "using #with should be able to import" do @ds2.insert(:id=>7) @ds.with(:t, 
@ds2).import [:id, :parent_id], @db[:t].select(:id, :parent_id)
    @ds[:id => 7].should == {:id => 7, :parent_id => nil}
  end

  specify "using #with_recursive should be able to import" do
    ds = @ds2.with_recursive(:t, @ds.filter(:parent_id=>1), @ds.join(:t, :i=>:parent_id).select(:i1__id, :i1__parent_id), :args=>[:i, :pi])
    ds.import [:id, :parent_id], @db[:t].select(:i, :pi)
    @ds2.all.should == [{:id => 3, :parent_id => 1}, {:id => 4, :parent_id => 1}, {:id => 5, :parent_id => 3}, {:id => 6, :parent_id => 5}]
  end
end

describe "MSSQL::Dataset#insert" do
  before do
    @db = DB
    @db.create_table!(:test5){primary_key :xid; Integer :value}
    @db.create_table! :test4 do
      String :name, :size => 20
      column :value, 'varbinary(max)'
    end
    @db.sqls.clear
    @ds = @db[:test5]
  end
  after do
    @db.drop_table?(:test5, :test4)
  end

  specify "should have insert_select return nil if disable_insert_output is used" do
    @ds.disable_insert_output.insert_select(:value=>10).should == nil
  end

  specify "should have insert_select return nil if the server version is not 2005+" do
    def @ds.server_version() 8000760 end
    @ds.insert_select(:value=>10).should == nil
  end

  specify "should have insert_select insert the record and return the inserted record" do
    h = @ds.insert_select(:value=>10)
    h[:value].should == 10
    @ds.first(:xid=>h[:xid])[:value].should == 10
  end

  cspecify "should allow large text and binary values", :odbc do
    blob = Sequel::SQL::Blob.new("0" * (65*1024))
    @db[:test4].insert(:name => 'max varbinary test', :value => blob)
    b = @db[:test4].where(:name => 'max varbinary test').get(:value)
    b.length.should == blob.length
    b.should == blob
  end

  specify "should play nicely with simple_select_all?" do
    DB[:test4].disable_insert_output.send(:simple_select_all?).should == true
  end
end

describe "MSSQL::Dataset#into" do
  before do
    @db = DB
  end

  specify "should format SELECT statement" do
    @db[:t].into(:new).select_sql.should == "SELECT * INTO [NEW] FROM [T]"
  end

  specify "should select rows into a new table" do
    @db.create_table!(:t) {Integer :id; String :value}
    @db[:t].insert(:id => 1, :value => "test")
    @db << @db[:t].into(:new).select_sql
    @db[:new].all.should == [{:id => 1, :value => "test"}]
    @db.drop_table?(:t, :new)
  end
end

describe "A MSSQL database" do
  before do
    @db = DB
  end
  after do
    @db.drop_table?(:a)
  end

  specify "should handle many existing types for set_column_allow_null" do
    @db.create_table!(:a){column :a, 'integer'}
    @db.alter_table(:a){set_column_allow_null :a, false}

    @db.create_table!(:a){column :a, 'decimal(24, 2)'}
    @db.alter_table(:a){set_column_allow_null :a, false}
    @db.schema(:a).first.last[:column_size].should == 24
    @db.schema(:a).first.last[:scale].should == 2

    @db.create_table!(:a){column :a, 'decimal(10)'}
    @db.schema(:a).first.last[:column_size].should == 10
    @db.schema(:a).first.last[:scale].should == 0
    @db.alter_table(:a){set_column_allow_null :a, false}

    @db.create_table!(:a){column :a, 'nchar(2)'}
    @db.alter_table(:a){set_column_allow_null :a, false}
    s = @db.schema(:a).first.last
    (s[:max_chars] || s[:column_size]).should == 2
  end
end

describe "MSSQL::Database#rename_table" do
  after do
    DB.drop_table?(:foo)
  end

  specify "should work on non-schema bound tables which need escaping" do
    DB.quote_identifiers = true
    DB.create_table! :'foo bar' do
      text :name
    end
    DB.drop_table? :foo
    proc { DB.rename_table 'foo bar', 'foo' }.should_not raise_error
  end

  specify "should work on schema bound tables" do
    DB.execute(<<-SQL)
      IF NOT EXISTS (SELECT * FROM sys.schemas WHERE name = 'MY')
        EXECUTE sp_executesql N'create schema MY'
    SQL
    DB.create_table!
:MY__foo do text :name end proc { DB.rename_table :MY__foo, :MY__bar }.should_not raise_error proc { DB.rename_table :MY__bar, :foo }.should_not raise_error end end describe "MSSQL::Dataset#count" do specify "should work with a distinct query with an order clause" do DB.create_table!(:items){String :name; Integer :value} DB[:items].insert(:name => "name", :value => 1) DB[:items].insert(:name => "name", :value => 1) DB[:items].select(:name, :value).distinct.order(:name).count.should == 1 DB[:items].select(:name, :value).group(:name, :value).order(:name).count.should == 1 end end describe "MSSQL::Database#create_table" do specify "should support collate with various other column options" do DB.create_table!(:items){ String :name, :size => 128, :collate => :sql_latin1_general_cp1_ci_as, :default => 'foo', :null => false, :unique => true} DB[:items].insert DB[:items].select_map(:name).should == ["foo"] end end describe "MSSQL::Database#mssql_unicode_strings = false" do before do DB.mssql_unicode_strings = false end after do DB.drop_table?(:items) DB.mssql_unicode_strings = true end specify "should work correctly" do DB.create_table!(:items){String :name} DB[:items].mssql_unicode_strings.should == false DB[:items].insert(:name=>'foo') DB[:items].select_map(:name).should == ['foo'] end specify "should be overridable at the dataset level" do DB.create_table!(:items){String :name} ds = DB[:items] ds.mssql_unicode_strings.should == false ds.mssql_unicode_strings = true ds.mssql_unicode_strings.should == true ds.insert(:name=>'foo') ds.select_map(:name).should == ['foo'] end end describe "A MSSQL database adds index with include" do before :all do @table_name = :test_index_include @db = DB @db.create_table! @table_name do integer :col1 integer :col2 integer :col3 end end after :all do @db.drop_table? @table_name end cspecify "should be able add index with include" do @db.alter_table @table_name do add_index [:col1], :include => [:col2,:col3] end @db.indexes(@table_name).should have_key("#{@table_name}_col1_index".to_sym) end end describe "MSSQL::Database#drop_column with a schema" do before do DB.run "create schema test" rescue nil end after do DB.drop_table(:test__items) DB.run "drop schema test" rescue nil end specify "drops columns with a default value" do DB.create_table!(:test__items){ Integer :id; String :name, :default => 'widget' } DB.drop_column(:test__items, :name) DB[:test__items].columns.should == [:id] end end describe "Database#foreign_key_list" do before(:all) do DB.create_table! :items do primary_key :id integer :sku end DB.create_table! :prices do integer :item_id datetime :valid_from float :price primary_key [:item_id, :valid_from] foreign_key [:item_id], :items, :key => :id, :name => :fk_prices_items end DB.create_table! 
:sales do
      integer :id
      integer :price_item_id
      datetime :price_valid_from
      foreign_key [:price_item_id, :price_valid_from], :prices, :key => [:item_id, :valid_from], :name => :fk_sales_prices, :on_delete => :cascade
    end
  end
  after(:all) do
    DB.drop_table :sales
    DB.drop_table :prices
    DB.drop_table :items
  end

  it "should support typical foreign keys" do
    DB.foreign_key_list(:prices).should == [{:name => :fk_prices_items,
                                             :table => :items,
                                             :columns => [:item_id],
                                             :key => [:id],
                                             :on_update => :no_action,
                                             :on_delete => :no_action}]
  end

  it "should support a foreign key with multiple columns" do
    DB.foreign_key_list(:sales).should == [{:name => :fk_sales_prices,
                                            :table => :prices,
                                            :columns => [:price_item_id, :price_valid_from],
                                            :key => [:item_id, :valid_from],
                                            :on_update => :no_action,
                                            :on_delete => :cascade}]
  end

  context "with multiple schemas" do
    before(:all) do
      DB.execute_ddl "create schema vendor"
      DB.create_table! :vendor__vendors do
        primary_key :id
        varchar :name
      end
      DB.create_table! :vendor__mapping do
        integer :vendor_id
        integer :item_id
        foreign_key [:vendor_id], :vendor__vendors, :name => :fk_mapping_vendor
        foreign_key [:item_id], :items, :name => :fk_mapping_item
      end
    end
    after(:all) do
      DB.drop_table :vendor__mapping
      DB.drop_table :vendor__vendors
      DB.execute_ddl "drop schema vendor"
    end

    it "should support mixed schema bound tables" do
      DB.foreign_key_list(:vendor__mapping).sort_by{|h| h[:name].to_s}.should == [
        {:name => :fk_mapping_item, :table => :items, :columns => [:item_id], :key => [:id], :on_update => :no_action, :on_delete => :no_action},
        {:name => :fk_mapping_vendor, :table => :vendor__vendors, :columns => [:vendor_id], :key => [:id], :on_update => :no_action, :on_delete => :no_action}]
    end
  end
end

ruby-sequel-4.1.1/spec/adapters/mysql_spec.rb

SEQUEL_ADAPTER_TEST = :mysql

require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper.rb')

unless defined?(MYSQL_SOCKET_FILE)
  MYSQL_SOCKET_FILE = '/tmp/mysql.sock'
end
MYSQL_URI = URI.parse(DB.uri)

def DB.sqls
  (@sqls ||= [])
end
logger = Object.new
def logger.method_missing(m, msg)
  DB.sqls << msg
end
DB.loggers = [logger]

DB.drop_table?(:items, :dolls, :booltest)

SQL_BEGIN = 'BEGIN'
SQL_ROLLBACK = 'ROLLBACK'
SQL_COMMIT = 'COMMIT'

describe "MySQL", '#create_table' do
  before do
    @db = DB
    DB.sqls.clear
  end
  after do
    @db.drop_table?(:dolls)
  end

  specify "should allow specifying options for MySQL" do
    @db.create_table(:dolls, :engine => 'MyISAM', :charset => 'latin2'){text :name}
    check_sqls do
      @db.sqls.should == ["CREATE TABLE `dolls` (`name` text) ENGINE=MyISAM DEFAULT CHARSET=latin2"]
    end
  end

  specify "should create a temporary table" do
    @db.create_table(:tmp_dolls, :temp => true, :engine
=> 'MyISAM', :charset => 'latin2'){text :name} check_sqls do @db.sqls.should == ["CREATE TEMPORARY TABLE `tmp_dolls` (`name` text) ENGINE=MyISAM DEFAULT CHARSET=latin2"] end end specify "should not use a default for a String :text=>true type" do @db.create_table(:dolls){String :name, :text=>true, :default=>'blah'} check_sqls do @db.sqls.should == ["CREATE TABLE `dolls` (`name` text)"] end end specify "should not use a default for a File type" do @db.create_table(:dolls){File :name, :default=>'blah'} check_sqls do @db.sqls.should == ["CREATE TABLE `dolls` (`name` blob)"] end end specify "should respect the size option for File type" do @db.create_table(:dolls) do File :n1 File :n2, :size=>:tiny File :n3, :size=>:medium File :n4, :size=>:long File :n5, :size=>255 end @db.schema(:dolls).map{|k, v| v[:db_type]}.should == %w"blob tinyblob mediumblob longblob blob" end specify "should include an :auto_increment schema attribute if auto incrementing" do @db.create_table(:dolls) do Integer :n2 String :n3 Integer :n4, :auto_increment=>true, :unique=>true end @db.schema(:dolls).map{|k, v| v[:auto_increment]}.should == [nil, nil, true] end specify "should support collate with various other column options" do @db.create_table!(:dolls){ String :name, :size=>128, :collate=>:utf8_bin, :default=>'foo', :null=>false, :unique=>true} @db[:dolls].insert @db[:dolls].select_map(:name).should == ["foo"] end specify "should be able to parse the default value for set and enum types" do @db.create_table!(:dolls){column :t, "set('a', 'b', 'c', 'd')", :default=>'a,b'} @db.schema(:dolls).first.last[:ruby_default].should == 'a,b' @db.create_table!(:dolls){column :t, "enum('a', 'b', 'c', 'd')", :default=>'b'} @db.schema(:dolls).first.last[:ruby_default].should == 'b' end end if [:mysql, :mysql2].include?(DB.adapter_scheme) describe "Sequel::MySQL::Database#convert_tinyint_to_bool" do before do @db = DB @db.create_table(:booltest){column :b, 'tinyint(1)'; column :i, 'tinyint(4)'} @ds = @db[:booltest] end after do @db.convert_tinyint_to_bool = true @db.drop_table?(:booltest) end specify "should consider tinyint(1) datatypes as boolean if set, but not larger tinyints" do @db.schema(:booltest, :reload=>true).should == [[:b, {:type=>:boolean, :allow_null=>true, :primary_key=>false, :default=>nil, :ruby_default=>nil, :db_type=>"tinyint(1)"}, ], [:i, {:type=>:integer, :allow_null=>true, :primary_key=>false, :default=>nil, :ruby_default=>nil, :db_type=>"tinyint(4)"}, ]] @db.convert_tinyint_to_bool = false @db.schema(:booltest, :reload=>true).should == [[:b, {:type=>:integer, :allow_null=>true, :primary_key=>false, :default=>nil, :ruby_default=>nil, :db_type=>"tinyint(1)"}, ], [:i, {:type=>:integer, :allow_null=>true, :primary_key=>false, :default=>nil, :ruby_default=>nil, :db_type=>"tinyint(4)"}, ]] end specify "should return tinyint(1)s as bools and tinyint(4)s as integers when set" do @db.convert_tinyint_to_bool = true @ds.delete @ds << {:b=>true, :i=>10} @ds.all.should == [{:b=>true, :i=>10}] @ds.delete @ds << {:b=>false, :i=>0} @ds.all.should == [{:b=>false, :i=>0}] @ds.delete @ds << {:b=>true, :i=>1} @ds.all.should == [{:b=>true, :i=>1}] end specify "should return all tinyints as integers when unset" do @db.convert_tinyint_to_bool = false @ds.delete @ds << {:b=>true, :i=>10} @ds.all.should == [{:b=>1, :i=>10}] @ds.delete @ds << {:b=>false, :i=>0} @ds.all.should == [{:b=>0, :i=>0}] @ds.delete @ds << {:b=>1, :i=>10} @ds.all.should == [{:b=>1, :i=>10}] @ds.delete @ds << {:b=>0, :i=>0} @ds.all.should == [{:b=>0, :i=>0}] end 
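  # A quick sketch of the toggle exercised above (assuming the same booltest
  # table): flipping Database#convert_tinyint_to_bool= changes how tinyint(1)
  # values come back:
  #
  #   @db.convert_tinyint_to_bool = false
  #   @ds.first   # => {:b=>1, :i=>10}
  #   @db.convert_tinyint_to_bool = true
  #   @ds.first   # => {:b=>true, :i=>10}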
specify "should allow disabling the conversion on a per-dataset basis" do @db.convert_tinyint_to_bool = true ds = @ds.clone def ds.cast_tinyint_integer?(f) true end #mysql def ds.convert_tinyint_to_bool?() false end #mysql2 ds.delete ds << {:b=>true, :i=>10} ds.all.should == [{:b=>1, :i=>10}] @ds.all.should == [{:b=>true, :i=>10}] end end end describe "A MySQL dataset" do before do DB.create_table(:items){String :name; Integer :value} @d = DB[:items] DB.sqls.clear end after do DB.drop_table?(:items) end specify "should quote columns and tables using back-ticks if quoting identifiers" do @d.quote_identifiers = true @d.select(:name).sql.should == 'SELECT `name` FROM `items`' @d.select(Sequel.lit('COUNT(*)')).sql.should == 'SELECT COUNT(*) FROM `items`' @d.select(Sequel.function(:max, :value)).sql.should == 'SELECT max(`value`) FROM `items`' @d.select(Sequel.function(:NOW)).sql.should == 'SELECT NOW() FROM `items`' @d.select(Sequel.function(:max, :items__value)).sql.should == 'SELECT max(`items`.`value`) FROM `items`' @d.order(Sequel.expr(:name).desc).sql.should == 'SELECT * FROM `items` ORDER BY `name` DESC' @d.select(Sequel.lit('items.name AS item_name')).sql.should == 'SELECT items.name AS item_name FROM `items`' @d.select(Sequel.lit('`name`')).sql.should == 'SELECT `name` FROM `items`' @d.select(Sequel.lit('max(items.`name`) AS `max_name`')).sql.should == 'SELECT max(items.`name`) AS `max_name` FROM `items`' @d.select(Sequel.function(:test, :abc, 'hello')).sql.should == "SELECT test(`abc`, 'hello') FROM `items`" @d.select(Sequel.function(:test, :abc__def, 'hello')).sql.should == "SELECT test(`abc`.`def`, 'hello') FROM `items`" @d.select(Sequel.function(:test, :abc__def, 'hello').as(:x2)).sql.should == "SELECT test(`abc`.`def`, 'hello') AS `x2` FROM `items`" @d.insert_sql(:value => 333).should == 'INSERT INTO `items` (`value`) VALUES (333)' @d.insert_sql(:x => :y).should == 'INSERT INTO `items` (`x`) VALUES (`y`)' end specify "should quote fields correctly when reversing the order" do @d.quote_identifiers = true @d.reverse_order(:name).sql.should == 'SELECT * FROM `items` ORDER BY `name` DESC' @d.reverse_order(Sequel.desc(:name)).sql.should == 'SELECT * FROM `items` ORDER BY `name` ASC' @d.reverse_order(:name, Sequel.desc(:test)).sql.should == 'SELECT * FROM `items` ORDER BY `name` DESC, `test` ASC' @d.reverse_order(Sequel.desc(:name), :test).sql.should == 'SELECT * FROM `items` ORDER BY `name` ASC, `test` DESC' end specify "should support ORDER clause in UPDATE statements" do @d.order(:name).update_sql(:value => 1).should == 'UPDATE `items` SET `value` = 1 ORDER BY `name`' end specify "should support LIMIT clause in UPDATE statements" do @d.limit(10).update_sql(:value => 1).should == 'UPDATE `items` SET `value` = 1 LIMIT 10' end specify "should support regexps" do @d << {:name => 'abc', :value => 1} @d << {:name => 'bcd', :value => 2} @d.filter(:name => /bc/).count.should == 2 @d.filter(:name => /^bc/).count.should == 1 end specify "should have explain output" do @d.explain.should be_a_kind_of(String) @d.explain(:extended=>true).should be_a_kind_of(String) @d.explain.should_not == @d.explain(:extended=>true) end specify "should correctly literalize strings with comment backslashes in them" do @d.delete proc {@d << {:name => ':\\'}}.should_not raise_error @d.first[:name].should == ':\\' end specify "should handle prepared statements with on_duplicate_key_update" do @d.db.add_index :items, :value, :unique=>true ds = @d.on_duplicate_key_update ps = ds.prepare(:insert, 
:insert_user_id_feature_name, :value => :$v, :name => :$n) ps.call(:v => 1, :n => 'a') ds.all.should == [{:value=>1, :name=>'a'}] ps.call(:v => 1, :n => 'b') ds.all.should == [{:value=>1, :name=>'b'}] end end describe "MySQL datasets" do before do @d = DB[:orders] end specify "should correctly quote column references" do @d.quote_identifiers = true market = 'ICE' ack_stamp = Time.now - 15 * 60 # 15 minutes ago @d.select(:market, Sequel.function(:minute, Sequel.function(:from_unixtime, :ack)).as(:minute)). where{(ack > ack_stamp) & {:market => market}}. group_by(Sequel.function(:minute, Sequel.function(:from_unixtime, :ack))).sql.should == \ "SELECT `market`, minute(from_unixtime(`ack`)) AS `minute` FROM `orders` WHERE ((`ack` > #{@d.literal(ack_stamp)}) AND (`market` = 'ICE')) GROUP BY minute(from_unixtime(`ack`))" end end describe "Dataset#distinct" do before do @db = DB @db.create_table!(:a) do Integer :a Integer :b end @ds = @db[:a] end after do @db.drop_table?(:a) end it "#distinct with arguments should return results distinct on those arguments" do @ds.insert(20, 10) @ds.insert(30, 10) @ds.order(:b, :a).distinct.map(:a).should == [20, 30] @ds.order(:b, Sequel.desc(:a)).distinct.map(:a).should == [30, 20] # MySQL doesn't respect orders when using the nonstandard GROUP BY [[20], [30]].should include(@ds.order(:b, :a).distinct(:b).map(:a)) end end describe "MySQL join expressions" do before do @ds = DB[:nodes] end specify "should raise error for :full_outer join requests." do lambda{@ds.join_table(:full_outer, :nodes)}.should raise_error(Sequel::Error) end specify "should support natural left joins" do @ds.join_table(:natural_left, :nodes).sql.should == \ 'SELECT * FROM `nodes` NATURAL LEFT JOIN `nodes`' end specify "should support natural right joins" do @ds.join_table(:natural_right, :nodes).sql.should == \ 'SELECT * FROM `nodes` NATURAL RIGHT JOIN `nodes`' end specify "should support natural left outer joins" do @ds.join_table(:natural_left_outer, :nodes).sql.should == \ 'SELECT * FROM `nodes` NATURAL LEFT OUTER JOIN `nodes`' end specify "should support natural right outer joins" do @ds.join_table(:natural_right_outer, :nodes).sql.should == \ 'SELECT * FROM `nodes` NATURAL RIGHT OUTER JOIN `nodes`' end specify "should support natural inner joins" do @ds.join_table(:natural_inner, :nodes).sql.should == \ 'SELECT * FROM `nodes` NATURAL LEFT JOIN `nodes`' end specify "should support cross joins" do @ds.join_table(:cross, :nodes).sql.should == \ 'SELECT * FROM `nodes` CROSS JOIN `nodes`' end specify "should support cross joins as inner joins if conditions are used" do @ds.join_table(:cross, :nodes, :id=>:id).sql.should == \ 'SELECT * FROM `nodes` INNER JOIN `nodes` ON (`nodes`.`id` = `nodes`.`id`)' end specify "should support straight joins (force left table to be read before right)" do @ds.join_table(:straight, :nodes).sql.should == \ 'SELECT * FROM `nodes` STRAIGHT_JOIN `nodes`' end specify "should support natural joins on multiple tables." do @ds.join_table(:natural_left_outer, [:nodes, :branches]).sql.should == \ 'SELECT * FROM `nodes` NATURAL LEFT OUTER JOIN (`nodes`, `branches`)' end specify "should support straight joins on multiple tables." 
do @ds.join_table(:straight, [:nodes,:branches]).sql.should == \ 'SELECT * FROM `nodes` STRAIGHT_JOIN (`nodes`, `branches`)' end end describe "Joined MySQL dataset" do before do @ds = DB[:nodes] end specify "should quote fields correctly" do @ds.quote_identifiers = true @ds.join(:attributes, :node_id => :id).sql.should == \ "SELECT * FROM `nodes` INNER JOIN `attributes` ON (`attributes`.`node_id` = `nodes`.`id`)" end specify "should allow a having clause on ungrouped datasets" do proc {@ds.having('blah')}.should_not raise_error @ds.having('blah').sql.should == \ "SELECT * FROM `nodes` HAVING (blah)" end specify "should put a having clause before an order by clause" do @ds.order(:aaa).having(:bbb => :ccc).sql.should == \ "SELECT * FROM `nodes` HAVING (`bbb` = `ccc`) ORDER BY `aaa`" end end describe "A MySQL database" do after do DB.drop_table?(:test_innodb) end specify "should handle the creation and dropping of an InnoDB table with foreign keys" do proc{DB.create_table!(:test_innodb, :engine=>:InnoDB){primary_key :id; foreign_key :fk, :test_innodb, :key=>:id}}.should_not raise_error end end describe "A MySQL database" do before(:all) do @db = DB @db.create_table! :test2 do text :name integer :value end end after(:all) do @db.drop_table?(:test2) end specify "should provide the server version" do @db.server_version.should >= 40000 end specify "should cache the server version" do # warm cache: @db.server_version @db.sqls.clear 3.times{@db.server_version} @db.sqls.should be_empty end specify "should support for_share" do @db.transaction{@db[:test2].for_share.all.should == []} end specify "should support add_column operations" do @db.add_column :test2, :xyz, :text @db[:test2].columns.should == [:name, :value, :xyz] @db[:test2] << {:name => 'mmm', :value => 111, :xyz => '000'} @db[:test2].first[:xyz].should == '000' end specify "should support drop_column operations" do @db[:test2].columns.should == [:name, :value, :xyz] @db.drop_column :test2, :xyz @db[:test2].columns.should == [:name, :value] end specify "should support rename_column operations" do @db[:test2].delete @db.add_column :test2, :xyz, :text @db[:test2] << {:name => 'mmm', :value => 111, :xyz => 'qqqq'} @db[:test2].columns.should == [:name, :value, :xyz] @db.rename_column :test2, :xyz, :zyx, :type => :text @db[:test2].columns.should == [:name, :value, :zyx] @db[:test2].first[:zyx].should == 'qqqq' end specify "should support rename_column operations with types like varchar(255)" do @db[:test2].delete @db.add_column :test2, :tre, :text @db[:test2] << {:name => 'mmm', :value => 111, :tre => 'qqqq'} @db[:test2].columns.should == [:name, :value, :zyx, :tre] @db.rename_column :test2, :tre, :ert, :type => :varchar, :size=>255 @db[:test2].columns.should == [:name, :value, :zyx, :ert] @db[:test2].first[:ert].should == 'qqqq' end specify "should support set_column_type operations" do @db.add_column :test2, :xyz, :float @db[:test2].delete @db[:test2] << {:name => 'mmm', :value => 111, :xyz => 56.78} @db.set_column_type :test2, :xyz, :integer @db[:test2].first[:xyz].should == 57 end specify "should support add_index" do @db.add_index :test2, :value end specify "should support drop_index" do @db.drop_index :test2, :value end specify "should support add_foreign_key" do @db.alter_table :test2 do add_index :value, :unique=>true add_foreign_key :value2, :test2, :key=>:value end @db[:test2].columns.should == [:name, :value, :zyx, :ert, :xyz, :value2] end end describe "A MySQL database with table options" do before do @options = {:engine=>'MyISAM', 
:charset=>'latin1', :collate => 'latin1_swedish_ci'} Sequel::MySQL.default_engine = 'InnoDB' Sequel::MySQL.default_charset = 'utf8' Sequel::MySQL.default_collate = 'utf8_general_ci' @db = DB @db.drop_table?(:items) DB.sqls.clear end after do @db.drop_table?(:items) Sequel::MySQL.default_engine = nil Sequel::MySQL.default_charset = nil Sequel::MySQL.default_collate = nil end specify "should allow passing custom options (engine, charset, collate) for table creation" do @db.create_table(:items, @options){Integer :size; text :name} check_sqls do @db.sqls.should == ["CREATE TABLE `items` (`size` integer, `name` text) ENGINE=MyISAM DEFAULT CHARSET=latin1 DEFAULT COLLATE=latin1_swedish_ci"] end end specify "should use the configured default options (engine, charset, collate) for table creation when none are passed" do @db.create_table(:items){Integer :size; text :name} check_sqls do @db.sqls.should == ["CREATE TABLE `items` (`size` integer, `name` text) ENGINE=InnoDB DEFAULT CHARSET=utf8 DEFAULT COLLATE=utf8_general_ci"] end end specify "should not use a default if the option has a nil value" do @db.create_table(:items, :engine=>nil, :charset=>nil, :collate=>nil){Integer :size; text :name} check_sqls do @db.sqls.should == ["CREATE TABLE `items` (`size` integer, `name` text)"] end end end describe "A MySQL database" do before do @db = DB @db.drop_table?(:items) DB.sqls.clear end after do @db.drop_table?(:items, :users) end specify "should support defaults for boolean columns" do @db.create_table(:items){TrueClass :active1, :default=>true; FalseClass :active2, :default => false} check_sqls do @db.sqls.should == ["CREATE TABLE `items` (`active1` tinyint(1) DEFAULT 1, `active2` tinyint(1) DEFAULT 0)"] end end specify "should correctly format CREATE TABLE statements with foreign keys" do @db.create_table(:items){primary_key :id; foreign_key :p_id, :items, :key => :id, :null => false, :on_delete => :cascade} check_sqls do @db.sqls.should == ["CREATE TABLE `items` (`id` integer PRIMARY KEY AUTO_INCREMENT, `p_id` integer NOT NULL, UNIQUE (`id`), FOREIGN KEY (`p_id`) REFERENCES `items`(`id`) ON DELETE CASCADE)"] end end specify "should correctly format ALTER TABLE statements with foreign keys" do @db.create_table(:items){Integer :id} @db.create_table(:users){primary_key :id} @db.alter_table(:items){add_foreign_key :p_id, :users, :key => :id, :null => false, :on_delete => :cascade} check_sqls do @db.sqls.should == ["CREATE TABLE `items` (`id` integer)", "CREATE TABLE `users` (`id` integer PRIMARY KEY AUTO_INCREMENT)", "ALTER TABLE `items` ADD COLUMN `p_id` integer NOT NULL, ADD FOREIGN KEY (`p_id`) REFERENCES `users`(`id`) ON DELETE CASCADE"] end end specify "should have rename_column keep existing options" do @db.create_table(:items){String :id, :null=>false, :default=>'blah'} @db.alter_table(:items){rename_column :id, :nid} check_sqls do @db.sqls.should == ["CREATE TABLE `items` (`id` varchar(255) NOT NULL DEFAULT 'blah')", "DESCRIBE `items`", "ALTER TABLE `items` CHANGE COLUMN `id` `nid` varchar(255) NOT NULL DEFAULT 'blah'"] end @db[:items].insert @db[:items].all.should == [{:nid=>'blah'}] proc{@db[:items].insert(:nid=>nil)}.should raise_error(Sequel::DatabaseError) end specify "should have set_column_type keep existing options" do @db.create_table(:items){Integer :id, :null=>false, :default=>5} @db.alter_table(:items){set_column_type :id, Bignum} check_sqls do @db.sqls.should == ["CREATE TABLE `items` (`id` integer NOT NULL DEFAULT 5)", "DESCRIBE `items`", "ALTER TABLE `items` CHANGE COLUMN `id` `id` bigint NOT 
NULL DEFAULT 5"] end @db[:items].insert @db[:items].all.should == [{:id=>5}] proc{@db[:items].insert(:id=>nil)}.should raise_error(Sequel::DatabaseError) @db[:items].delete @db[:items].insert(2**40) @db[:items].all.should == [{:id=>2**40}] end specify "should have set_column_type pass through options" do @db.create_table(:items){integer :id; enum :list, :elements=>%w[one]} @db.alter_table(:items){set_column_type :id, :int, :unsigned=>true, :size=>8; set_column_type :list, :enum, :elements=>%w[two]} check_sqls do @db.sqls.should == ["CREATE TABLE `items` (`id` integer, `list` enum('one'))", "DESCRIBE `items`", "ALTER TABLE `items` CHANGE COLUMN `id` `id` int(8) UNSIGNED NULL, CHANGE COLUMN `list` `list` enum('two') NULL"] end end specify "should have set_column_default support keep existing options" do @db.create_table(:items){Integer :id, :null=>false, :default=>5} @db.alter_table(:items){set_column_default :id, 6} check_sqls do @db.sqls.should == ["CREATE TABLE `items` (`id` integer NOT NULL DEFAULT 5)", "DESCRIBE `items`", "ALTER TABLE `items` CHANGE COLUMN `id` `id` int(11) NOT NULL DEFAULT 6"] end @db[:items].insert @db[:items].all.should == [{:id=>6}] proc{@db[:items].insert(:id=>nil)}.should raise_error(Sequel::DatabaseError) end specify "should have set_column_allow_null support keep existing options" do @db.create_table(:items){Integer :id, :null=>false, :default=>5} @db.alter_table(:items){set_column_allow_null :id, true} check_sqls do @db.sqls.should == ["CREATE TABLE `items` (`id` integer NOT NULL DEFAULT 5)", "DESCRIBE `items`", "ALTER TABLE `items` CHANGE COLUMN `id` `id` int(11) NULL DEFAULT 5"] end @db[:items].insert @db[:items].all.should == [{:id=>5}] proc{@db[:items].insert(:id=>nil)}.should_not end specify "should accept repeated raw sql statements using Database#<<" do @db.create_table(:items){String :name; Integer :value} @db << 'DELETE FROM items' @db[:items].count.should == 0 @db << "INSERT INTO items (name, value) VALUES ('tutu', 1234)" @db[:items].first.should == {:name => 'tutu', :value => 1234} @db << 'DELETE FROM items' @db[:items].first.should == nil end end # Socket tests should only be run if the MySQL server is on localhost if %w'localhost 127.0.0.1 ::1'.include?(MYSQL_URI.host) and DB.adapter_scheme == :mysql describe "A MySQL database" do specify "should accept a socket option" do db = Sequel.mysql(DB.opts[:database], :host => 'localhost', :user => DB.opts[:user], :password => DB.opts[:password], :socket => MYSQL_SOCKET_FILE) proc {db.test_connection}.should_not raise_error end specify "should accept a socket option without host option" do db = Sequel.mysql(DB.opts[:database], :user => DB.opts[:user], :password => DB.opts[:password], :socket => MYSQL_SOCKET_FILE) proc {db.test_connection}.should_not raise_error end specify "should fail to connect with invalid socket" do db = Sequel.mysql(DB.opts[:database], :user => DB.opts[:user], :password => DB.opts[:password], :socket =>'blah') proc {db.test_connection}.should raise_error end end end describe "A MySQL database" do specify "should accept a read_timeout option when connecting" do db = Sequel.connect(DB.opts.merge(:read_timeout=>22342)) proc {db.test_connection}.should_not raise_error end specify "should accept a connect_timeout option when connecting" do db = Sequel.connect(DB.opts.merge(:connect_timeout=>22342)) proc {db.test_connection}.should_not raise_error end end describe "MySQL foreign key support" do after do DB.drop_table?(:testfk, :testpk) end specify "should create table without :key" do 
DB.create_table!(:testpk){primary_key :id} DB.create_table!(:testfk){foreign_key :fk, :testpk} end specify "should create table with composite keys without :key" do DB.create_table!(:testpk){Integer :id; Integer :id2; primary_key([:id, :id2])} DB.create_table!(:testfk){Integer :fk; Integer :fk2; foreign_key([:fk, :fk2], :testpk)} end specify "should create table with self referential without :key" do DB.create_table!(:testfk){primary_key :id; foreign_key :fk, :testfk} end specify "should create table with self referential with composite keys without :key" do DB.create_table!(:testfk){Integer :id; Integer :id2; Integer :fk; Integer :fk2; primary_key([:id, :id2]); foreign_key([:fk, :fk2], :testfk)} end specify "should alter table without :key" do DB.create_table!(:testpk){primary_key :id} DB.create_table!(:testfk){Integer :id} DB.alter_table(:testfk){add_foreign_key :fk, :testpk} end specify "should alter table with composite keys without :key" do DB.create_table!(:testpk){Integer :id; Integer :id2; primary_key([:id, :id2])} DB.create_table!(:testfk){Integer :fk; Integer :fk2} DB.alter_table(:testfk){add_foreign_key([:fk, :fk2], :testpk)} end specify "should alter table with self referential without :key" do DB.create_table!(:testfk){primary_key :id} DB.alter_table(:testfk){add_foreign_key :fk, :testfk} end specify "should alter table with self referential with composite keys without :key" do DB.create_table!(:testfk){Integer :id; Integer :id2; Integer :fk; Integer :fk2; primary_key([:id, :id2])} DB.alter_table(:testfk){add_foreign_key [:fk, :fk2], :testfk} end end describe "A grouped MySQL dataset" do before do DB.create_table! :test2 do text :name integer :value end DB[:test2] << {:name => '11', :value => 10} DB[:test2] << {:name => '11', :value => 20} DB[:test2] << {:name => '11', :value => 30} DB[:test2] << {:name => '12', :value => 10} DB[:test2] << {:name => '12', :value => 20} DB[:test2] << {:name => '13', :value => 10} end after do DB.drop_table?(:test2) end specify "should return the correct count for raw sql query" do ds = DB["select name FROM test2 WHERE name = '11' GROUP BY name"] ds.count.should == 1 end specify "should return the correct count for a normal dataset" do ds = DB[:test2].select(:name).where(:name => '11').group(:name) ds.count.should == 1 end end describe "A MySQL database" do before do @db = DB @db.drop_table?(:posts) @db.sqls.clear end after do @db.drop_table?(:posts) end specify "should support fulltext indexes and full_text_search" do @db.create_table(:posts, :engine=>:MyISAM){text :title; text :body; full_text_index :title; full_text_index [:title, :body]} check_sqls do @db.sqls.should == [ "CREATE TABLE `posts` (`title` text, `body` text) ENGINE=MyISAM", "CREATE FULLTEXT INDEX `posts_title_index` ON `posts` (`title`)", "CREATE FULLTEXT INDEX `posts_title_body_index` ON `posts` (`title`, `body`)" ] end @db[:posts].insert(:title=>'ruby rails', :body=>'y') @db[:posts].insert(:title=>'sequel', :body=>'ruby') @db[:posts].insert(:title=>'ruby scooby', :body=>'x') @db.sqls.clear @db[:posts].full_text_search(:title, 'rails').all.should == [{:title=>'ruby rails', :body=>'y'}] @db[:posts].full_text_search([:title, :body], ['sequel', 'ruby']).all.should == [{:title=>'sequel', :body=>'ruby'}] @db[:posts].full_text_search(:title, '+ruby -rails', :boolean => true).all.should == [{:title=>'ruby scooby', :body=>'x'}] check_sqls do @db.sqls.should == [ "SELECT * FROM `posts` WHERE (MATCH (`title`) AGAINST ('rails'))", "SELECT * FROM `posts` WHERE (MATCH (`title`, `body`) 
AGAINST ('sequel ruby'))", "SELECT * FROM `posts` WHERE (MATCH (`title`) AGAINST ('+ruby -rails' IN BOOLEAN MODE))"] end @db[:posts].full_text_search(:title, :$n).call(:select, :n=>'rails').should == [{:title=>'ruby rails', :body=>'y'}] @db[:posts].full_text_search(:title, :$n).prepare(:select, :fts_select).call(:n=>'rails').should == [{:title=>'ruby rails', :body=>'y'}] end specify "should support spatial indexes" do @db.create_table(:posts, :engine=>:MyISAM){point :geom, :null=>false; spatial_index [:geom]} check_sqls do @db.sqls.should == [ "CREATE TABLE `posts` (`geom` point NOT NULL) ENGINE=MyISAM", "CREATE SPATIAL INDEX `posts_geom_index` ON `posts` (`geom`)" ] end end specify "should support indexes with index type" do @db.create_table(:posts){Integer :id; index :id, :type => :btree} check_sqls do @db.sqls.should == [ "CREATE TABLE `posts` (`id` integer)", "CREATE INDEX `posts_id_index` USING btree ON `posts` (`id`)" ] end end specify "should support unique indexes with index type" do @db.create_table(:posts){Integer :id; index :id, :type => :btree, :unique => true} check_sqls do @db.sqls.should == [ "CREATE TABLE `posts` (`id` integer)", "CREATE UNIQUE INDEX `posts_id_index` USING btree ON `posts` (`id`)" ] end end specify "should not dump partial indexes" do @db.create_table(:posts){text :id} @db << "CREATE INDEX posts_id_index ON posts (id(10))" @db.indexes(:posts).should == {} end specify "should dump partial indexes if :partial option is set to true" do @db.create_table(:posts){text :id} @db << "CREATE INDEX posts_id_index ON posts (id(10))" @db.indexes(:posts, :partial => true).should == {:posts_id_index => {:columns => [:id], :unique => false}} end end describe "MySQL::Dataset#insert and related methods" do before do DB.create_table(:items){String :name; Integer :value} @d = DB[:items] DB.sqls.clear end after do DB.drop_table?(:items) end specify "#insert should insert record with default values when no arguments given" do @d.insert check_sqls do DB.sqls.should == ["INSERT INTO `items` () VALUES ()"] end @d.all.should == [{:name => nil, :value => nil}] end specify "#insert should insert record with default values when empty hash given" do @d.insert({}) check_sqls do DB.sqls.should == ["INSERT INTO `items` () VALUES ()"] end @d.all.should == [{:name => nil, :value => nil}] end specify "#insert should insert record with default values when empty array given" do @d.insert [] check_sqls do DB.sqls.should == ["INSERT INTO `items` () VALUES ()"] end @d.all.should == [{:name => nil, :value => nil}] end specify "#on_duplicate_key_update should work with regular inserts" do DB.add_index :items, :name, :unique=>true DB.sqls.clear @d.insert(:name => 'abc', :value => 1) @d.on_duplicate_key_update(:name, :value => 6).insert(:name => 'abc', :value => 1) @d.on_duplicate_key_update(:name, :value => 6).insert(:name => 'def', :value => 2) check_sqls do DB.sqls.length.should == 3 DB.sqls[0].should =~ /\AINSERT INTO `items` \(`(name|value)`, `(name|value)`\) VALUES \(('abc'|1), (1|'abc')\)\z/ DB.sqls[1].should =~ /\AINSERT INTO `items` \(`(name|value)`, `(name|value)`\) VALUES \(('abc'|1), (1|'abc')\) ON DUPLICATE KEY UPDATE `name`=VALUES\(`name`\), `value`=6\z/ DB.sqls[2].should =~ /\AINSERT INTO `items` \(`(name|value)`, `(name|value)`\) VALUES \(('def'|2), (2|'def')\) ON DUPLICATE KEY UPDATE `name`=VALUES\(`name`\), `value`=6\z/ end @d.all.should == [{:name => 'abc', :value => 6}, {:name => 'def', :value => 2}] end specify "#multi_replace should insert multiple records in a single statement" 
do @d.multi_replace([{:name => 'abc'}, {:name => 'def'}]) check_sqls do DB.sqls.should == [ SQL_BEGIN, "REPLACE INTO `items` (`name`) VALUES ('abc'), ('def')", SQL_COMMIT ] end @d.all.should == [ {:name => 'abc', :value => nil}, {:name => 'def', :value => nil} ] end specify "#multi_replace should split the list of records into batches if :commit_every option is given" do @d.multi_replace([{:value => 1}, {:value => 2}, {:value => 3}, {:value => 4}], :commit_every => 2) check_sqls do DB.sqls.should == [ SQL_BEGIN, "REPLACE INTO `items` (`value`) VALUES (1), (2)", SQL_COMMIT, SQL_BEGIN, "REPLACE INTO `items` (`value`) VALUES (3), (4)", SQL_COMMIT ] end @d.all.should == [ {:name => nil, :value => 1}, {:name => nil, :value => 2}, {:name => nil, :value => 3}, {:name => nil, :value => 4} ] end specify "#multi_replace should split the list of records into batches if :slice option is given" do @d.multi_replace([{:value => 1}, {:value => 2}, {:value => 3}, {:value => 4}], :slice => 2) check_sqls do DB.sqls.should == [ SQL_BEGIN, "REPLACE INTO `items` (`value`) VALUES (1), (2)", SQL_COMMIT, SQL_BEGIN, "REPLACE INTO `items` (`value`) VALUES (3), (4)", SQL_COMMIT ] end @d.all.should == [ {:name => nil, :value => 1}, {:name => nil, :value => 2}, {:name => nil, :value => 3}, {:name => nil, :value => 4} ] end specify "#multi_insert should insert multiple records in a single statement" do @d.multi_insert([{:name => 'abc'}, {:name => 'def'}]) check_sqls do DB.sqls.should == [ SQL_BEGIN, "INSERT INTO `items` (`name`) VALUES ('abc'), ('def')", SQL_COMMIT ] end @d.all.should == [ {:name => 'abc', :value => nil}, {:name => 'def', :value => nil} ] end specify "#multi_insert should split the list of records into batches if :commit_every option is given" do @d.multi_insert([{:value => 1}, {:value => 2}, {:value => 3}, {:value => 4}], :commit_every => 2) check_sqls do DB.sqls.should == [ SQL_BEGIN, "INSERT INTO `items` (`value`) VALUES (1), (2)", SQL_COMMIT, SQL_BEGIN, "INSERT INTO `items` (`value`) VALUES (3), (4)", SQL_COMMIT ] end @d.all.should == [ {:name => nil, :value => 1}, {:name => nil, :value => 2}, {:name => nil, :value => 3}, {:name => nil, :value => 4} ] end specify "#multi_insert should split the list of records into batches if :slice option is given" do @d.multi_insert([{:value => 1}, {:value => 2}, {:value => 3}, {:value => 4}], :slice => 2) check_sqls do DB.sqls.should == [ SQL_BEGIN, "INSERT INTO `items` (`value`) VALUES (1), (2)", SQL_COMMIT, SQL_BEGIN, "INSERT INTO `items` (`value`) VALUES (3), (4)", SQL_COMMIT ] end @d.all.should == [ {:name => nil, :value => 1}, {:name => nil, :value => 2}, {:name => nil, :value => 3}, {:name => nil, :value => 4} ] end specify "#import should support inserting using columns and values arrays" do @d.import([:name, :value], [['abc', 1], ['def', 2]]) check_sqls do DB.sqls.should == [ SQL_BEGIN, "INSERT INTO `items` (`name`, `value`) VALUES ('abc', 1), ('def', 2)", SQL_COMMIT ] end @d.all.should == [ {:name => 'abc', :value => 1}, {:name => 'def', :value => 2} ] end specify "#insert_ignore should add the IGNORE keyword when inserting" do @d.insert_ignore.multi_insert([{:name => 'abc'}, {:name => 'def'}]) check_sqls do DB.sqls.should == [ SQL_BEGIN, "INSERT IGNORE INTO `items` (`name`) VALUES ('abc'), ('def')", SQL_COMMIT ] end @d.all.should == [ {:name => 'abc', :value => nil}, {:name => 'def', :value => nil} ] end specify "#insert_ignore should add the IGNORE keyword for single inserts" do @d.insert_ignore.insert(:name => 'ghi') check_sqls do DB.sqls.should == 
["INSERT IGNORE INTO `items` (`name`) VALUES ('ghi')"] end @d.all.should == [{:name => 'ghi', :value => nil}] end specify "#on_duplicate_key_update should add the ON DUPLICATE KEY UPDATE and ALL columns when no args given" do @d.on_duplicate_key_update.import([:name,:value], [['abc', 1], ['def',2]]) check_sqls do DB.sqls.should == [ "SELECT * FROM `items` LIMIT 1", SQL_BEGIN, "INSERT INTO `items` (`name`, `value`) VALUES ('abc', 1), ('def', 2) ON DUPLICATE KEY UPDATE `name`=VALUES(`name`), `value`=VALUES(`value`)", SQL_COMMIT ] end @d.all.should == [ {:name => 'abc', :value => 1}, {:name => 'def', :value => 2} ] end specify "#on_duplicate_key_update should add the ON DUPLICATE KEY UPDATE and columns specified when args are given" do @d.on_duplicate_key_update(:value).import([:name,:value], [['abc', 1], ['def',2]] ) check_sqls do DB.sqls.should == [ SQL_BEGIN, "INSERT INTO `items` (`name`, `value`) VALUES ('abc', 1), ('def', 2) ON DUPLICATE KEY UPDATE `value`=VALUES(`value`)", SQL_COMMIT ] end @d.all.should == [ {:name => 'abc', :value => 1}, {:name => 'def', :value => 2} ] end end describe "MySQL::Dataset#update and related methods" do before do DB.create_table(:items){String :name; Integer :value; index :name, :unique=>true} @d = DB[:items] end after do DB.drop_table?(:items) end specify "#update_ignore should not raise error where normal update would fail" do @d.insert(:name => 'cow', :value => 0) @d.insert(:name => 'cat', :value => 1) proc{@d.where(:value => 1).update(:name => 'cow')}.should raise_error(Sequel::DatabaseError) DB.sqls.clear @d.update_ignore.where(:value => 1).update(:name => 'cow') check_sqls do DB.sqls.should == ["UPDATE IGNORE `items` SET `name` = 'cow' WHERE (`value` = 1)"] end @d.order(:name).all.should == [{:name => 'cat', :value => 1}, {:name => 'cow', :value => 0}] end end describe "MySQL::Dataset#replace" do before do DB.create_table(:items){Integer :id, :unique=>true; Integer :value} @d = DB[:items] DB.sqls.clear end after do DB.drop_table?(:items) end specify "should use default values if they exist" do DB.alter_table(:items){set_column_default :id, 1; set_column_default :value, 2} @d.replace @d.all.should == [{:id=>1, :value=>2}] @d.replace([]) @d.all.should == [{:id=>1, :value=>2}] @d.replace({}) @d.all.should == [{:id=>1, :value=>2}] end end describe "MySQL::Dataset#complex_expression_sql" do before do @d = DB.dataset end specify "should handle string concatenation with CONCAT if more than one record" do @d.literal(Sequel.join([:x, :y])).should == "CONCAT(`x`, `y`)" @d.literal(Sequel.join([:x, :y], ' ')).should == "CONCAT(`x`, ' ', `y`)" @d.literal(Sequel.join([Sequel.function(:x, :y), 1, Sequel.lit('z')], Sequel.subscript(:y, 1))).should == "CONCAT(x(`y`), `y`[1], '1', `y`[1], z)" end specify "should handle string concatenation as simple string if just one record" do @d.literal(Sequel.join([:x])).should == "`x`" @d.literal(Sequel.join([:x], ' ')).should == "`x`" end end describe "MySQL::Dataset#calc_found_rows" do before do DB.create_table!(:items){Integer :a} end after do DB.drop_table?(:items) end specify "should add the SQL_CALC_FOUND_ROWS keyword when selecting" do DB[:items].select(:a).calc_found_rows.limit(1).sql.should == \ 'SELECT SQL_CALC_FOUND_ROWS `a` FROM `items` LIMIT 1' end specify "should count matching rows disregarding LIMIT clause" do DB[:items].multi_insert([{:a => 1}, {:a => 1}, {:a => 2}]) DB.sqls.clear DB.synchronize do DB[:items].calc_found_rows.filter(:a => 1).limit(1).all.should == [{:a => 1}] 
DB.dataset.select(Sequel.function(:FOUND_ROWS).as(:rows)).all.should == [{:rows => 2 }] end check_sqls do DB.sqls.should == [ 'SELECT SQL_CALC_FOUND_ROWS * FROM `items` WHERE (`a` = 1) LIMIT 1', 'SELECT FOUND_ROWS() AS `rows`', ] end end end if DB.adapter_scheme == :mysql or DB.adapter_scheme == :jdbc or DB.adapter_scheme == :mysql2 describe "MySQL Stored Procedures" do before do DB.create_table(:items){Integer :id; Integer :value} @d = DB[:items] DB.sqls.clear end after do DB.drop_table?(:items) DB.execute('DROP PROCEDURE test_sproc') end specify "should be callable on the database object" do DB.execute_ddl('CREATE PROCEDURE test_sproc() BEGIN DELETE FROM items; END') DB[:items].delete DB[:items].insert(:value=>1) DB[:items].count.should == 1 DB.call_sproc(:test_sproc) DB[:items].count.should == 0 end # Mysql2 doesn't support stored procedures that return result sets, probably because # CLIENT_MULTI_RESULTS is not set. unless DB.adapter_scheme == :mysql2 specify "should be callable on the dataset object" do DB.execute_ddl('CREATE PROCEDURE test_sproc(a INTEGER) BEGIN SELECT *, a AS b FROM items; END') DB[:items].delete @d = DB[:items] @d.call_sproc(:select, :test_sproc, 3).should == [] @d.insert(:value=>1) @d.call_sproc(:select, :test_sproc, 4).should == [{:id=>nil, :value=>1, :b=>4}] @d.row_proc = proc{|r| r.keys.each{|k| r[k] *= 2 if r[k].is_a?(Integer)}; r} @d.call_sproc(:select, :test_sproc, 3).should == [{:id=>nil, :value=>2, :b=>6}] end specify "should be callable on the dataset object with multiple arguments" do DB.execute_ddl('CREATE PROCEDURE test_sproc(a INTEGER, c INTEGER) BEGIN SELECT *, a AS b, c AS d FROM items; END') DB[:items].delete @d = DB[:items] @d.call_sproc(:select, :test_sproc, 3, 4).should == [] @d.insert(:value=>1) @d.call_sproc(:select, :test_sproc, 4, 5).should == [{:id=>nil, :value=>1, :b=>4, :d=>5}] @d.row_proc = proc{|r| r.keys.each{|k| r[k] *= 2 if r[k].is_a?(Integer)}; r} @d.call_sproc(:select, :test_sproc, 3, 4).should == [{:id=>nil, :value=>2, :b=>6, :d => 8}] end end specify "should deal with nil values" do DB.execute_ddl('CREATE PROCEDURE test_sproc(i INTEGER, v INTEGER) BEGIN INSERT INTO items VALUES (i, v); END') DB[:items].delete DB.call_sproc(:test_sproc, :args=>[1, nil]) DB[:items].all.should == [{:id=>1, :value=>nil}] end end end if DB.adapter_scheme == :mysql describe "MySQL bad date/time conversions" do after do DB.convert_invalid_date_time = false end specify "should raise an exception when a bad date/time is used and convert_invalid_date_time is false" do DB.convert_invalid_date_time = false proc{DB["SELECT CAST('0000-00-00' AS date)"].single_value}.should raise_error(Sequel::InvalidValue) proc{DB["SELECT CAST('0000-00-00 00:00:00' AS datetime)"].single_value}.should raise_error(Sequel::InvalidValue) proc{DB["SELECT CAST('25:00:00' AS time)"].single_value}.should raise_error(Sequel::InvalidValue) end specify "should use a nil value when a bad date/time is used and convert_invalid_date_time is nil or :nil" do DB.convert_invalid_date_time = nil DB["SELECT CAST('0000-00-00' AS date)"].single_value.should == nil DB["SELECT CAST('0000-00-00 00:00:00' AS datetime)"].single_value.should == nil DB["SELECT CAST('25:00:00' AS time)"].single_value.should == nil DB.convert_invalid_date_time = :nil DB["SELECT CAST('0000-00-00' AS date)"].single_value.should == nil DB["SELECT CAST('0000-00-00 00:00:00' AS datetime)"].single_value.should == nil DB["SELECT CAST('25:00:00' AS time)"].single_value.should == nil end specify "should use the string value when a bad date/time 
is used and convert_invalid_date_time is :string" do DB.convert_invalid_date_time = :string DB["SELECT CAST('0000-00-00' AS date)"].single_value.should == '0000-00-00' DB["SELECT CAST('0000-00-00 00:00:00' AS datetime)"].single_value.should == '0000-00-00 00:00:00' DB["SELECT CAST('25:00:00' AS time)"].single_value.should == '25:00:00' end end describe "MySQL multiple result sets" do before do DB.create_table!(:a){Integer :a} DB.create_table!(:b){Integer :b} @ds = DB['SELECT * FROM a; SELECT * FROM b'] DB[:a].insert(10) DB[:a].insert(15) DB[:b].insert(20) DB[:b].insert(25) end after do DB.drop_table?(:a, :b) end specify "should combine all results by default" do @ds.all.should == [{:a=>10}, {:a=>15}, {:b=>20}, {:b=>25}] end specify "should work with Database#run" do proc{DB.run('SELECT * FROM a; SELECT * FROM b')}.should_not raise_error proc{DB.run('SELECT * FROM a; SELECT * FROM b')}.should_not raise_error end specify "should work with Database#run and other statements" do proc{DB.run('UPDATE a SET a = 1; SELECT * FROM a; DELETE FROM b')}.should_not raise_error DB[:a].select_order_map(:a).should == [1, 1] DB[:b].all.should == [] end specify "should split results returned into arrays if split_multiple_result_sets is used" do @ds.split_multiple_result_sets.all.should == [[{:a=>10}, {:a=>15}], [{:b=>20}, {:b=>25}]] end specify "should have regular row_procs work when splitting multiple result sets" do @ds.row_proc = proc{|x| x[x.keys.first] *= 2; x} @ds.split_multiple_result_sets.all.should == [[{:a=>20}, {:a=>30}], [{:b=>40}, {:b=>50}]] end specify "should use the columns from the first result set when splitting result sets" do @ds.split_multiple_result_sets.columns.should == [:a] end specify "should not allow graphing a dataset that splits multiple statements" do proc{@ds.split_multiple_result_sets.graph(:b, :b=>:a)}.should raise_error(Sequel::Error) end specify "should not allow splitting a graphed dataset" do proc{DB[:a].graph(:b, :b=>:a).split_multiple_result_sets}.should raise_error(Sequel::Error) end end end if DB.adapter_scheme == :mysql2 describe "Mysql2 streaming" do before(:all) do DB.create_table!(:a){Integer :a} DB.transaction do 1000.times do |i| DB[:a].insert(i) end end @ds = DB[:a].stream.order(:a) end after(:all) do DB.drop_table?(:a) end specify "should correctly stream results" do @ds.map(:a).should == (0...1000).to_a end specify "should correctly handle early returning when streaming results" do 3.times{@ds.each{|r| break r[:a]}.should == 0} end end end ruby-sequel-4.1.1/spec/adapters/oracle_spec.rb000066400000000000000000000234721220156535500213470ustar00rootroot00000000000000SEQUEL_ADAPTER_TEST = :oracle require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper.rb') describe "An 
Oracle database" do before(:all) do DB.create_table!(:items) do String :name, :size => 50 Integer :value Date :date_created index :value end DB.create_table!(:books) do Integer :id String :title, :size => 50 Integer :category_id end DB.create_table!(:categories) do Integer :id String :cat_name, :size => 50 end @d = DB[:items] end after do @d.delete end after(:all) do DB.drop_table?(:items, :books, :categories) end specify "should provide disconnect functionality" do DB.execute("select user from dual") DB.pool.size.should == 1 DB.disconnect DB.pool.size.should == 0 end specify "should have working view_exists?" do begin DB.view_exists?(:cats).should be_false DB.create_view(:cats, DB[:categories]) DB.view_exists?(:cats).should be_true om = DB.identifier_output_method im = DB.identifier_input_method DB.identifier_output_method = :reverse DB.identifier_input_method = :reverse DB.view_exists?(:STAC).should be_true DB.view_exists?(:cats).should be_false ensure DB.identifier_output_method = om DB.identifier_input_method = im DB.drop_view(:cats) end end specify "should be able to get current sequence value with SQL" do begin DB.create_table!(:foo){primary_key :id} DB.fetch('SELECT seq_foo_id.nextval FROM DUAL').single_value.should == 1 ensure DB.drop_table(:foo) end end specify "should provide schema information" do books_schema = [[:id, [:integer, false, true, nil]], [:title, [:string, false, true, nil]], [:category_id, [:integer, false, true, nil]]] categories_schema = [[:id, [:integer, false, true, nil]], [:cat_name, [:string, false, true, nil]]] items_schema = [[:name, [:string, false, true, nil]], [:value, [:integer, false, true, nil]], [:date_created, [:datetime, false, true, nil]]] {:books => books_schema, :categories => categories_schema, :items => items_schema}.each_pair do |table, expected_schema| schema = DB.schema(table) schema.should_not be_nil schema.map{|c, s| [c, s.values_at(:type, :primary_key, :allow_null, :ruby_default)]}.should == expected_schema end end specify "should create a temporary table" do DB.create_table! 
:test_tmp, :temp => true do varchar2 :name, :size => 50 primary_key :id, :integer, :null => false index :name, :unique => true end DB.drop_table?(:test_tmp) end specify "should return the correct record count" do @d.count.should == 0 @d << {:name => 'abc', :value => 123} @d << {:name => 'abc', :value => 456} @d << {:name => 'def', :value => 789} @d.count.should == 3 end specify "should return the correct records" do @d.to_a.should == [] @d << {:name => 'abc', :value => 123} @d << {:name => 'abc', :value => 456} @d << {:name => 'def', :value => 789} @d.order(:value).to_a.should == [ {:date_created=>nil, :name => 'abc', :value => 123}, {:date_created=>nil, :name => 'abc', :value => 456}, {:date_created=>nil, :name => 'def', :value => 789} ] @d.select(:name).distinct.order_by(:name).to_a.should == [ {:name => 'abc'}, {:name => 'def'} ] @d.order(Sequel.desc(:value)).limit(1).to_a.should == [ {:date_created=>nil, :name => 'def', :value => 789} ] @d.filter(:name => 'abc').to_a.should == [ {:date_created=>nil, :name => 'abc', :value => 123}, {:date_created=>nil, :name => 'abc', :value => 456} ] @d.order(Sequel.desc(:value)).filter(:name => 'abc').to_a.should == [ {:date_created=>nil, :name => 'abc', :value => 456}, {:date_created=>nil, :name => 'abc', :value => 123} ] @d.filter(:name => 'abc').limit(1).to_a.should == [ {:date_created=>nil, :name => 'abc', :value => 123} ] @d.filter(:name => 'abc').order(Sequel.desc(:value)).limit(1).to_a.should == [ {:date_created=>nil, :name => 'abc', :value => 456} ] @d.filter(:name => 'abc').order(:value).limit(1).to_a.should == [ {:date_created=>nil, :name => 'abc', :value => 123} ] @d.order(:value).limit(1).to_a.should == [ {:date_created=>nil, :name => 'abc', :value => 123} ] @d.order(:value).limit(1, 1).to_a.should == [ {:date_created=>nil, :name => 'abc', :value => 456} ] @d.order(:value).limit(1, 2).to_a.should == [ {:date_created=>nil, :name => 'def', :value => 789} ] @d.avg(:value).to_i.should == (789+123+456)/3 @d.max(:value).to_i.should == 789 @d.select(:name, Sequel.function(:AVG, :value).as(:avg)).filter(:name => 'abc').group(:name).to_a.should == [ {:name => 'abc', :avg => (456+123)/2.0} ] @d.select(Sequel.function(:AVG, :value).as(:avg)).group(:name).order(:name).limit(1).to_a.should == [ {:avg => (456+123)/2.0} ] @d.select(:name, Sequel.function(:AVG, :value).as(:avg)).group(:name).order(:name).to_a.should == [ {:name => 'abc', :avg => (456+123)/2.0}, {:name => 'def', :avg => 789*1.0} ] @d.select(:name, Sequel.function(:AVG, :value).as(:avg)).group(:name).order(:name).to_a.should == [ {:name => 'abc', :avg => (456+123)/2.0}, {:name => 'def', :avg => 789*1.0} ] @d.select(:name, Sequel.function(:AVG, :value).as(:avg)).group(:name).having(:name => ['abc', 'def']).order(:name).to_a.should == [ {:name => 'abc', :avg => (456+123)/2.0}, {:name => 'def', :avg => 789*1.0} ] @d.select(:name, :value).filter(:name => 'abc').union(@d.select(:name, :value).filter(:name => 'def')).order(:value).to_a.should == [ {:name => 'abc', :value => 123}, {:name => 'abc', :value => 456}, {:name => 'def', :value => 789} ] end specify "should update records correctly" do @d << {:name => 'abc', :value => 123} @d << {:name => 'abc', :value => 456} @d << {:name => 'def', :value => 789} @d.filter(:name => 'abc').update(:value => 530) @d[:name => 'def'][:value].should == 789 @d.filter(:value => 530).count.should == 2 end specify "should translate values correctly" do @d << {:name => 'abc', :value => 456} @d << {:name => 'def', :value => 789} @d.filter('value > 
500').update(:date_created => Sequel.lit("to_timestamp('2009-09-09', 'YYYY-MM-DD')")) @d[:name => 'def'][:date_created].strftime('%F').should == '2009-09-09' end specify "should delete records correctly" do @d << {:name => 'abc', :value => 123} @d << {:name => 'abc', :value => 456} @d << {:name => 'def', :value => 789} @d.filter(:name => 'abc').delete @d.count.should == 1 @d.first[:name].should == 'def' end specify "should be able to literalize booleans" do proc {@d.literal(true)}.should_not raise_error proc {@d.literal(false)}.should_not raise_error end specify "should support transactions" do DB.transaction do @d << {:name => 'abc', :value => 1} end @d.count.should == 1 end specify "should return correct result" do @d1 = DB[:books] @d1.delete @d1 << {:id => 1, :title => 'aaa', :category_id => 100} @d1 << {:id => 2, :title => 'bbb', :category_id => 100} @d1 << {:id => 3, :title => 'ccc', :category_id => 101} @d1 << {:id => 4, :title => 'ddd', :category_id => 102} @d2 = DB[:categories] @d2.delete @d2 << {:id => 100, :cat_name => 'ruby'} @d2 << {:id => 101, :cat_name => 'rails'} @d1.join(:categories, :id => :category_id).select(:books__id, :title, :cat_name).order(:books__id).to_a.should == [ {:id => 1, :title => 'aaa', :cat_name => 'ruby'}, {:id => 2, :title => 'bbb', :cat_name => 'ruby'}, {:id => 3, :title => 'ccc', :cat_name => 'rails'} ] @d1.join(:categories, :id => :category_id).select(:books__id, :title, :cat_name).order(:books__id).limit(2, 1).to_a.should == [ {:id => 2, :title => 'bbb', :cat_name => 'ruby'}, {:id => 3, :title => 'ccc', :cat_name => 'rails'}, ] @d1.left_outer_join(:categories, :id => :category_id).select(:books__id, :title, :cat_name).order(:books__id).to_a.should == [ {:id => 1, :title => 'aaa', :cat_name => 'ruby'}, {:id => 2, :title => 'bbb', :cat_name => 'ruby'}, {:id => 3, :title => 'ccc', :cat_name => 'rails'}, {:id => 4, :title => 'ddd', :cat_name => nil} ] @d1.left_outer_join(:categories, :id => :category_id).select(:books__id, :title, :cat_name).reverse_order(:books__id).limit(2, 0).to_a.should == [ {:id => 4, :title => 'ddd', :cat_name => nil}, {:id => 3, :title => 'ccc', :cat_name => 'rails'} ] end specify "should allow columns to be renamed" do @d1 = DB[:books] @d1.delete @d1 << {:id => 1, :title => 'aaa', :category_id => 100} @d1 << {:id => 2, :title => 'bbb', :category_id => 100} @d1 << {:id => 3, :title => 'bbb', :category_id => 100} @d1.select(Sequel.as(:title, :name)).order_by(:id).to_a.should == [ { :name => 'aaa' }, { :name => 'bbb' }, { :name => 'bbb' }, ] end specify "nested queries should work" do DB[:books].select(:title).group_by(:title).count.should == 2 end specify "#for_update should use FOR UPDATE" do DB[:books].for_update.sql.should == 'SELECT * FROM "BOOKS" FOR UPDATE' end specify "#lock_style should accept symbols" do DB[:books].lock_style(:update).sql.should == 'SELECT * FROM "BOOKS" FOR UPDATE' end end 
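# --- Illustrative sketch (added; not part of the original spec suite) ---
# The Oracle specs above exercise Dataset#limit with an offset (which Sequel
# emulates on Oracle) and row locking via #for_update / #lock_style. A minimal
# standalone usage of the same public API might look like the following; the
# connection URL, credentials, and table name are assumptions for illustration.
#
#   require 'sequel'
#   db = Sequel.connect('oracle://user:password@localhost/XE') # hypothetical
#   db.create_table?(:books){Integer :id; String :title, :size=>50}
#   books = db[:books]
#   books.import([:id, :title], [[1, 'aaa'], [2, 'bbb'], [3, 'ccc']])
#   books.order(:id).limit(2, 1).select_map(:title) # => ['bbb', 'ccc']
#   db.transaction{books.for_update.where(:id=>1).first} # SELECT ... FOR UPDATE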
ruby-sequel-4.1.1/spec/adapters/postgres_spec.rb000066400000000000000000003742211220156535500217510ustar00rootroot00000000000000SEQUEL_ADAPTER_TEST = :postgres require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper.rb') def DB.sqls (@sqls ||= []) end logger = Object.new def logger.method_missing(m, msg) DB.sqls << msg end DB.loggers << logger describe "PostgreSQL", '#create_table' do before do @db = DB DB.sqls.clear end after do @db.drop_table?(:tmp_dolls, :unlogged_dolls) end specify "should create a temporary table" do @db.create_table(:tmp_dolls, :temp => true){text :name} check_sqls do @db.sqls.should == ['CREATE TEMPORARY TABLE "tmp_dolls" ("name" text)'] end end specify "should create an unlogged table" do @db.create_table(:unlogged_dolls, :unlogged => true){text :name} check_sqls do @db.sqls.should == ['CREATE UNLOGGED TABLE "unlogged_dolls" ("name" text)'] end end specify "should create a table inheriting from another table" do @db.create_table(:unlogged_dolls){text :name} @db.create_table(:tmp_dolls, :inherits=>:unlogged_dolls){} @db[:tmp_dolls].insert('a') @db[:unlogged_dolls].all.should == [{:name=>'a'}] end specify "should create a table inheriting from multiple tables" do begin @db.create_table(:unlogged_dolls){text :name} @db.create_table(:tmp_dolls){text :bar} @db.create_table!(:items, :inherits=>[:unlogged_dolls, :tmp_dolls]){text :foo} @db[:items].insert(:name=>'a', :bar=>'b', :foo=>'c') @db[:unlogged_dolls].all.should == [{:name=>'a'}] @db[:tmp_dolls].all.should == [{:bar=>'b'}] @db[:items].all.should == [{:name=>'a', :bar=>'b', :foo=>'c'}] ensure @db.drop_table?(:items) end end specify "should not allow passing both :temp and :unlogged" do proc do @db.create_table(:temp_unlogged_dolls, :temp => true, :unlogged => true){text :name} end.should raise_error(Sequel::Error, "can't provide both :temp and :unlogged to create_table") end end describe "PostgreSQL views" do before do @db = DB @db.drop_view(:items_view, :cascade=>true, :if_exists=>true) @db.create_table!(:items){Integer :number} @db[:items].insert(10) @db[:items].insert(20) end after do @opts ||= {} @db.drop_view(:items_view, @opts.merge(:if_exists=>true, :cascade=>true)) rescue nil @db.drop_table?(:items) end specify "should support temporary views" do @db.create_view(:items_view, @db[:items].where(:number=>10), :temp=>true) @db[:items_view].map(:number).should == [10] @db.create_or_replace_view(:items_view, @db[:items].where(:number=>20), :temp=>true) @db[:items_view].map(:number).should == [20] end specify "should support recursive views" do @db.create_view(:items_view, @db[:items].where(:number=>10).union(@db[:items, :items_view].where(Sequel.-(:number, 5)=>:n).select(:number), :all=>true, :from_self=>false), :recursive=>[:n]) @db[:items_view].select_order_map(:n).should == [10] @db[:items].insert(15) @db[:items_view].select_order_map(:n).should == [10, 15, 20] end if DB.server_version >= 90300 specify "should support 
materialized views" do @opts = {:materialized=>true} @db.create_view(:items_view, @db[:items].where{number >= 10}, @opts) @db[:items_view].select_order_map(:number).should == [10, 20] @db[:items].insert(15) @db[:items_view].select_order_map(:number).should == [10, 20] @db.refresh_view(:items_view) @db[:items_view].select_order_map(:number).should == [10, 15, 20] end if DB.server_version >= 90300 specify "should support :if_exists=>true for not raising an error if the view does not exist" do proc{@db.drop_view(:items_view, :if_exists=>true)}.should_not raise_error end end describe "A PostgreSQL database" do before(:all) do @db = DB @db.create_table!(:public__testfk){primary_key :id; foreign_key :i, :public__testfk} end after(:all) do @db.drop_table?(:public__testfk) end specify "should provide the server version" do @db.server_version.should > 70000 end specify "should support a :qualify option to tables and views" do @db.tables(:qualify=>true).should include(Sequel.qualify(:public, :testfk)) begin @db.create_view(:testfkv, @db[:testfk]) @db.views(:qualify=>true).should include(Sequel.qualify(:public, :testfkv)) ensure @db.drop_view(:testfkv) end end specify "should not typecast the int2vector type incorrectly" do @db.get(Sequel.cast('10 20', :int2vector)).should_not == 10 end cspecify "should not typecast the money type incorrectly", :do do @db.get(Sequel.cast('10.01', :money)).should_not == 0 end specify "should correctly parse the schema" do @db.schema(:public__testfk, :reload=>true).should == [ [:id, {:type=>:integer, :ruby_default=>nil, :db_type=>"integer", :default=>"nextval('testfk_id_seq'::regclass)", :oid=>23, :primary_key=>true, :allow_null=>false}], [:i, {:type=>:integer, :ruby_default=>nil, :db_type=>"integer", :default=>nil, :oid=>23, :primary_key=>false, :allow_null=>true}]] end specify "should parse foreign keys for tables in a schema" do @db.foreign_key_list(:public__testfk).should == [{:on_delete=>:no_action, :on_update=>:no_action, :columns=>[:i], :key=>[:id], :deferrable=>false, :table=>Sequel.qualify(:public, :testfk), :name=>:testfk_i_fkey}] end specify "should return uuid fields as strings" do @db.get(Sequel.cast('550e8400-e29b-41d4-a716-446655440000', :uuid)).should == '550e8400-e29b-41d4-a716-446655440000' end end describe "A PostgreSQL database with domain types" do before(:all) do @db = DB @db << "DROP DOMAIN IF EXISTS positive_number CASCADE" @db << "CREATE DOMAIN positive_number AS numeric(10,2) CHECK (VALUE > 0)" @db.create_table!(:testfk){positive_number :id, :primary_key=>true} end after(:all) do @db.drop_table?(:testfk) @db << "DROP DOMAIN positive_number" end specify "should correctly parse the schema" do sch = @db.schema(:testfk, :reload=>true) sch.first.last.delete(:domain_oid).should be_a_kind_of(Integer) sch.should == [[:id, {:type=>:decimal, :ruby_default=>nil, :db_type=>"numeric(10,2)", :default=>nil, :oid=>1700, :primary_key=>true, :allow_null=>false, :db_domain_type=>'positive_number'}]] end end describe "A PostgreSQL dataset" do before(:all) do @db = DB @d = @db[:test] @db.create_table! 
:test do text :name integer :value, :index => true end end before do @d.delete @db.sqls.clear end after do @db.drop_table?(:atest) end after(:all) do @db.drop_table?(:test) end specify "should quote columns and tables using double quotes if quoting identifiers" do check_sqls do @d.select(:name).sql.should == 'SELECT "name" FROM "test"' @d.select(Sequel.lit('COUNT(*)')).sql.should == 'SELECT COUNT(*) FROM "test"' @d.select(Sequel.function(:max, :value)).sql.should == 'SELECT max("value") FROM "test"' @d.select(Sequel.function(:NOW)).sql.should == 'SELECT NOW() FROM "test"' @d.select(Sequel.function(:max, :items__value)).sql.should == 'SELECT max("items"."value") FROM "test"' @d.order(Sequel.desc(:name)).sql.should == 'SELECT * FROM "test" ORDER BY "name" DESC' @d.select(Sequel.lit('test.name AS item_name')).sql.should == 'SELECT test.name AS item_name FROM "test"' @d.select(Sequel.lit('"name"')).sql.should == 'SELECT "name" FROM "test"' @d.select(Sequel.lit('max(test."name") AS "max_name"')).sql.should == 'SELECT max(test."name") AS "max_name" FROM "test"' @d.insert_sql(:x => :y).should =~ /\AINSERT INTO "test" \("x"\) VALUES \("y"\)( RETURNING NULL)?\z/ @d.select(Sequel.function(:test, :abc, 'hello')).sql.should == "SELECT test(\"abc\", 'hello') FROM \"test\"" @d.select(Sequel.function(:test, :abc__def, 'hello')).sql.should == "SELECT test(\"abc\".\"def\", 'hello') FROM \"test\"" @d.select(Sequel.function(:test, :abc__def, 'hello').as(:x2)).sql.should == "SELECT test(\"abc\".\"def\", 'hello') AS \"x2\" FROM \"test\"" @d.insert_sql(:value => 333).should =~ /\AINSERT INTO "test" \("value"\) VALUES \(333\)( RETURNING NULL)?\z/ end end specify "should quote fields correctly when reversing the order if quoting identifiers" do check_sqls do @d.reverse_order(:name).sql.should == 'SELECT * FROM "test" ORDER BY "name" DESC' @d.reverse_order(Sequel.desc(:name)).sql.should == 'SELECT * FROM "test" ORDER BY "name" ASC' @d.reverse_order(:name, Sequel.desc(:test)).sql.should == 'SELECT * FROM "test" ORDER BY "name" DESC, "test" ASC' @d.reverse_order(Sequel.desc(:name), :test).sql.should == 'SELECT * FROM "test" ORDER BY "name" ASC, "test" DESC' end end specify "should support regexps" do @d << {:name => 'abc', :value => 1} @d << {:name => 'bcd', :value => 2} @d.filter(:name => /bc/).count.should == 2 @d.filter(:name => /^bc/).count.should == 1 end specify "should support NULLS FIRST and NULLS LAST" do @d << {:name => 'abc'} @d << {:name => 'bcd'} @d << {:name => 'bcd', :value => 2} @d.order(Sequel.asc(:value, :nulls=>:first), :name).select_map(:name).should == %w[abc bcd bcd] @d.order(Sequel.asc(:value, :nulls=>:last), :name).select_map(:name).should == %w[bcd abc bcd] @d.order(Sequel.asc(:value, :nulls=>:first), :name).reverse.select_map(:name).should == %w[bcd bcd abc] end specify "#lock should lock tables and yield if a block is given" do @d.lock('EXCLUSIVE'){@d.insert(:name=>'a')} end specify "should support exclusion constraints when creating or altering tables" do @db.create_table!(:atest){Integer :t; exclude [[Sequel.desc(:t, :nulls=>:last), '=']], :using=>:btree, :where=>proc{t > 0}} @db[:atest].insert(1) @db[:atest].insert(2) proc{@db[:atest].insert(2)}.should raise_error(Sequel::Postgres::ExclusionConstraintViolation) @db.create_table!(:atest){Integer :t} @db.alter_table(:atest){add_exclusion_constraint [[:t, '=']], :using=>:btree, :name=>'atest_ex'} @db[:atest].insert(1) @db[:atest].insert(2) proc{@db[:atest].insert(2)}.should raise_error(Sequel::Postgres::ExclusionConstraintViolation) 
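# Note (added for clarity, not in the original spec file): the constraint added
# above via add_exclusion_constraint was given an explicit :name ('atest_ex'),
# which is what allows the spec to remove it here with drop_constraint.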
@db.alter_table(:atest){drop_constraint 'atest_ex'} end if DB.server_version >= 90000 specify "should support deferrable exclusion constraints" do @db.create_table!(:atest){Integer :t; exclude [[Sequel.desc(:t, :nulls=>:last), '=']], :using=>:btree, :where=>proc{t > 0}, :deferrable => true} proc do @db.transaction do @db[:atest].insert(2) proc{@db[:atest].insert(2)}.should_not raise_error end end.should raise_error(Sequel::Postgres::ExclusionConstraintViolation) end if DB.server_version >= 90000 specify "should support Database#error_info for getting info hash on the given error" do @db.create_table!(:atest){Integer :t; Integer :t2, :null=>false, :default=>1; constraint :f, :t=>0} begin @db[:atest].insert(1) rescue => e end e.should_not be_nil info = @db.error_info(e) info[:schema].should == 'public' info[:table].should == 'atest' info[:constraint].should == 'f' info[:column].should be_nil info[:type].should be_nil begin @db[:atest].insert(0, nil) rescue => e end e.should_not be_nil info = @db.error_info(e.wrapped_exception) info[:schema].should == 'public' info[:table].should == 'atest' info[:constraint].should be_nil info[:column].should == 't2' info[:type].should be_nil end if DB.server_version >= 90300 && DB.adapter_scheme == :postgres && SEQUEL_POSTGRES_USES_PG && Object.const_defined?(:PG) && ::PG.const_defined?(:Constants) && ::PG::Constants.const_defined?(:PG_DIAG_SCHEMA_NAME) specify "should support Database#do for executing anonymous code blocks" do @db.drop_table?(:btest) @db.do "BEGIN EXECUTE 'CREATE TABLE btest (a INTEGER)'; EXECUTE 'INSERT INTO btest VALUES (1)'; END" @db[:btest].select_map(:a).should == [1] @db.do "BEGIN EXECUTE 'DROP TABLE btest; CREATE TABLE atest (a INTEGER)'; EXECUTE 'INSERT INTO atest VALUES (1)'; END", :language=>:plpgsql @db[:atest].select_map(:a).should == [1] end if DB.server_version >= 90000 specify "should support adding foreign key constraints that are not yet valid, and validating them later" do @db.create_table!(:atest){primary_key :id; Integer :fk} @db[:atest].insert(1, 5) @db.alter_table(:atest){add_foreign_key [:fk], :atest, :not_valid=>true, :name=>:atest_fk} @db[:atest].insert(2, 1) proc{@db[:atest].insert(3, 4)}.should raise_error(Sequel::DatabaseError) proc{@db.alter_table(:atest){validate_constraint :atest_fk}}.should raise_error(Sequel::DatabaseError) @db[:atest].where(:id=>1).update(:fk=>2) @db.alter_table(:atest){validate_constraint :atest_fk} proc{@db.alter_table(:atest){validate_constraint :atest_fk}}.should_not raise_error end if DB.server_version >= 90200 specify "should support adding check constraints that are not yet valid, and validating them later" do @db.create_table!(:atest){Integer :a} @db[:atest].insert(5) @db.alter_table(:atest){add_constraint({:name=>:atest_check, :not_valid=>true}){a >= 10}} @db[:atest].insert(10) proc{@db[:atest].insert(6)}.should raise_error(Sequel::DatabaseError) proc{@db.alter_table(:atest){validate_constraint :atest_check}}.should raise_error(Sequel::DatabaseError) @db[:atest].where{a < 10}.update(:a=>Sequel.+(:a, 10)) @db.alter_table(:atest){validate_constraint :atest_check} proc{@db.alter_table(:atest){validate_constraint :atest_check}}.should_not raise_error end if DB.server_version >= 90200 specify "should support :using when altering a column's type" do @db.create_table!(:atest){Integer :t} @db[:atest].insert(1262304000) @db.alter_table(:atest){set_column_type :t, Time, :using=>Sequel.cast('epoch', Time) + Sequel.cast('1 second', :interval) * :t} @db[:atest].get(Sequel.extract(:year, 
:t)).should == 2010 end specify "should support :using with a string when altering a column's type" do @db.create_table!(:atest){Integer :t} @db[:atest].insert(1262304000) @db.alter_table(:atest){set_column_type :t, Time, :using=>"'epoch'::timestamp + '1 second'::interval * t"} @db[:atest].get(Sequel.extract(:year, :t)).should == 2010 end specify "should be able to parse the default value for an interval type" do @db.create_table!(:atest){interval :t, :default=>'1 week'} @db.schema(:atest).first.last[:ruby_default].should == '7 days' end specify "should have #transaction support various types of synchronous options" do @db.transaction(:synchronous=>:on){} @db.transaction(:synchronous=>true){} @db.transaction(:synchronous=>:off){} @db.transaction(:synchronous=>false){} @db.sqls.grep(/synchronous/).should == ["SET LOCAL synchronous_commit = on", "SET LOCAL synchronous_commit = on", "SET LOCAL synchronous_commit = off", "SET LOCAL synchronous_commit = off"] @db.sqls.clear @db.transaction(:synchronous=>nil){} check_sqls do @db.sqls.should == ['BEGIN', 'COMMIT'] end if @db.server_version >= 90100 @db.sqls.clear @db.transaction(:synchronous=>:local){} check_sqls do @db.sqls.grep(/synchronous/).should == ["SET LOCAL synchronous_commit = local"] end if @db.server_version >= 90200 @db.sqls.clear @db.transaction(:synchronous=>:remote_write){} check_sqls do @db.sqls.grep(/synchronous/).should == ["SET LOCAL synchronous_commit = remote_write"] end end end end specify "should have #transaction support read only transactions" do @db.transaction(:read_only=>true){} @db.transaction(:read_only=>false){} @db.transaction(:isolation=>:serializable, :read_only=>true){} @db.transaction(:isolation=>:serializable, :read_only=>false){} @db.sqls.grep(/READ/).should == ["SET TRANSACTION READ ONLY", "SET TRANSACTION READ WRITE", "SET TRANSACTION ISOLATION LEVEL SERIALIZABLE READ ONLY", "SET TRANSACTION ISOLATION LEVEL SERIALIZABLE READ WRITE"] end specify "should have #transaction support deferrable transactions" do @db.transaction(:deferrable=>true){} @db.transaction(:deferrable=>false){} @db.transaction(:deferrable=>true, :read_only=>true){} @db.transaction(:deferrable=>false, :read_only=>false){} @db.transaction(:isolation=>:serializable, :deferrable=>true, :read_only=>true){} @db.transaction(:isolation=>:serializable, :deferrable=>false, :read_only=>false){} @db.sqls.grep(/DEF/).should == ["SET TRANSACTION DEFERRABLE", "SET TRANSACTION NOT DEFERRABLE", "SET TRANSACTION READ ONLY DEFERRABLE", "SET TRANSACTION READ WRITE NOT DEFERRABLE", "SET TRANSACTION ISOLATION LEVEL SERIALIZABLE READ ONLY DEFERRABLE", "SET TRANSACTION ISOLATION LEVEL SERIALIZABLE READ WRITE NOT DEFERRABLE"] end if DB.server_version >= 90100 specify "should support creating indexes concurrently" do @db.add_index :test, [:name, :value], :concurrently=>true check_sqls do @db.sqls.should == ['CREATE INDEX CONCURRENTLY "test_name_value_index" ON "test" ("name", "value")'] end end specify "should support dropping indexes only if they already exist" do @db.add_index :test, [:name, :value], :name=>'tnv1' @db.sqls.clear @db.drop_index :test, [:name, :value], :if_exists=>true, :name=>'tnv1' check_sqls do @db.sqls.should == ['DROP INDEX IF EXISTS "tnv1"'] end end specify "should support CASCADE when dropping indexes" do @db.add_index :test, [:name, :value], :name=>'tnv2' @db.sqls.clear @db.drop_index :test, [:name, :value], :cascade=>true, :name=>'tnv2' check_sqls do @db.sqls.should == ['DROP INDEX "tnv2" CASCADE'] end end specify "should support dropping 
indexes concurrently" do @db.add_index :test, [:name, :value], :name=>'tnv2' @db.sqls.clear @db.drop_index :test, [:name, :value], :concurrently=>true, :name=>'tnv2' check_sqls do @db.sqls.should == ['DROP INDEX CONCURRENTLY "tnv2"'] end end if DB.server_version >= 90200 specify "#lock should lock table if inside a transaction" do @db.transaction{@d.lock('EXCLUSIVE'); @d.insert(:name=>'a')} end specify "#lock should return nil" do @d.lock('EXCLUSIVE'){@d.insert(:name=>'a')}.should == nil @db.transaction{@d.lock('EXCLUSIVE').should == nil; @d.insert(:name=>'a')} end specify "should raise an error if attempting to update a joined dataset with a single FROM table" do proc{@db[:test].join(:test, [:name]).update(:name=>'a')}.should raise_error(Sequel::Error, 'Need multiple FROM tables if updating/deleting a dataset with JOINs') end specify "should truncate with options" do @d << { :name => 'abc', :value => 1} @d.count.should == 1 @d.truncate(:cascade => true) @d.count.should == 0 if @d.db.server_version > 80400 @d << { :name => 'abc', :value => 1} @d.truncate(:cascade => true, :only=>true, :restart=>true) @d.count.should == 0 end end specify "should truncate multiple tables at once" do tables = [:test, :test] tables.each{|t| @d.from(t).insert} @d.from(:test, :test).truncate tables.each{|t| @d.from(t).count.should == 0} end end describe "Dataset#distinct" do before do @db = DB @db.create_table!(:a) do Integer :a Integer :b end @ds = @db[:a] end after do @db.drop_table?(:a) end it "#distinct with arguments should return results distinct on those arguments" do @ds.insert(20, 10) @ds.insert(30, 10) @ds.order(:b, :a).distinct.map(:a).should == [20, 30] @ds.order(:b, Sequel.desc(:a)).distinct.map(:a).should == [30, 20] @ds.order(:b, :a).distinct(:b).map(:a).should == [20] @ds.order(:b, Sequel.desc(:a)).distinct(:b).map(:a).should == [30] end end if DB.pool.respond_to?(:max_size) and DB.pool.max_size > 1 describe "Dataset#for_update support" do before do @db = DB.create_table!(:items) do primary_key :id Integer :number String :name end @ds = DB[:items] end after do DB.drop_table?(:items) DB.disconnect end specify "should handle FOR UPDATE" do @ds.insert(:number=>20) c, t = nil, nil q = Queue.new DB.transaction do @ds.for_update.first(:id=>1) t = Thread.new do DB.transaction do q.push nil @ds.filter(:id=>1).update(:name=>'Jim') c = @ds.first(:id=>1) q.push nil end end q.pop @ds.filter(:id=>1).update(:number=>30) end q.pop t.join c.should == {:id=>1, :number=>30, :name=>'Jim'} end specify "should handle FOR SHARE" do @ds.insert(:number=>20) c, t = nil q = Queue.new DB.transaction do @ds.for_share.first(:id=>1) t = Thread.new do DB.transaction do c = @ds.for_share.filter(:id=>1).first q.push nil end end q.pop @ds.filter(:id=>1).update(:name=>'Jim') c.should == {:id=>1, :number=>20, :name=>nil} end t.join end end end describe "A PostgreSQL dataset with a timestamp field" do before(:all) do @db = DB @db.create_table! :test3 do Date :date DateTime :time end @d = @db[:test3] end before do @d.delete end after do @db.convert_infinite_timestamps = false if @db.adapter_scheme == :postgres end after(:all) do @db.drop_table?(:test3) end cspecify "should store milliseconds in time fields for Time objects", :do, :swift do t = Time.now @d << {:time=>t} t2 = @d.get(:time) @d.literal(t2).should == @d.literal(t) t2.strftime('%Y-%m-%d %H:%M:%S').should == t.strftime('%Y-%m-%d %H:%M:%S') (t2.is_a?(Time) ? 
t2.usec : t2.strftime('%N').to_i/1000).should == t.usec end cspecify "should store milliseconds in time fields for DateTime objects", :do, :swift do t = DateTime.now @d << {:time=>t} t2 = @d.get(:time) @d.literal(t2).should == @d.literal(t) t2.strftime('%Y-%m-%d %H:%M:%S').should == t.strftime('%Y-%m-%d %H:%M:%S') (t2.is_a?(Time) ? t2.usec : t2.strftime('%N').to_i/1000).should == t.strftime('%N').to_i/1000 end if DB.adapter_scheme == :postgres specify "should handle infinite timestamps if convert_infinite_timestamps is set" do @d << {:time=>Sequel.cast('infinity', DateTime)} @db.convert_infinite_timestamps = :nil @db[:test3].get(:time).should == nil @db.convert_infinite_timestamps = :string @db[:test3].get(:time).should == 'infinity' @db.convert_infinite_timestamps = :float @db[:test3].get(:time).should == 1.0/0.0 @db.convert_infinite_timestamps = 'nil' @db[:test3].get(:time).should == nil @db.convert_infinite_timestamps = 'string' @db[:test3].get(:time).should == 'infinity' @db.convert_infinite_timestamps = 'float' @db[:test3].get(:time).should == 1.0/0.0 @db.convert_infinite_timestamps = 't' @db[:test3].get(:time).should == 1.0/0.0 if ((Time.parse('infinity'); nil) rescue true) # Skip for loose time parsing (e.g. old rbx) @db.convert_infinite_timestamps = 'f' proc{@db[:test3].get(:time)}.should raise_error @db.convert_infinite_timestamps = nil proc{@db[:test3].get(:time)}.should raise_error @db.convert_infinite_timestamps = false proc{@db[:test3].get(:time)}.should raise_error end @d.update(:time=>Sequel.cast('-infinity', DateTime)) @db.convert_infinite_timestamps = :nil @db[:test3].get(:time).should == nil @db.convert_infinite_timestamps = :string @db[:test3].get(:time).should == '-infinity' @db.convert_infinite_timestamps = :float @db[:test3].get(:time).should == -1.0/0.0 end specify "should handle conversions from infinite strings/floats in models" do c = Class.new(Sequel::Model(:test3)) @db.convert_infinite_timestamps = :float c.new(:time=>'infinity').time.should == 'infinity' c.new(:time=>'-infinity').time.should == '-infinity' c.new(:time=>1.0/0.0).time.should == 1.0/0.0 c.new(:time=>-1.0/0.0).time.should == -1.0/0.0 end specify "should handle infinite dates if convert_infinite_timestamps is set" do @d << {:date=>Sequel.cast('infinity', Date)} @db.convert_infinite_timestamps = :nil @db[:test3].get(:date).should == nil @db.convert_infinite_timestamps = :string @db[:test3].get(:date).should == 'infinity' @db.convert_infinite_timestamps = :float @db[:test3].get(:date).should == 1.0/0.0 @d.update(:date=>Sequel.cast('-infinity', :timestamp)) @db.convert_infinite_timestamps = :nil @db[:test3].get(:date).should == nil @db.convert_infinite_timestamps = :string @db[:test3].get(:date).should == '-infinity' @db.convert_infinite_timestamps = :float @db[:test3].get(:date).should == -1.0/0.0 end specify "should handle conversions from infinite strings/floats in models" do c = Class.new(Sequel::Model(:test3)) @db.convert_infinite_timestamps = :float c.new(:date=>'infinity').date.should == 'infinity' c.new(:date=>'-infinity').date.should == '-infinity' c.new(:date=>1.0/0.0).date.should == 1.0/0.0 c.new(:date=>-1.0/0.0).date.should == -1.0/0.0 end end specify "explain and analyze should not raise errors" do @d = DB[:test3] proc{@d.explain}.should_not raise_error proc{@d.analyze}.should_not raise_error end specify "#locks should be a dataset returning database locks " do @db.locks.should be_a_kind_of(Sequel::Dataset) @db.locks.all.should be_a_kind_of(Array) end end describe "A PostgreSQL database" 
do before do @db = DB @db.create_table! :test2 do text :name integer :value end end after do @db.drop_table?(:test2) end specify "should support column operations" do @db.create_table!(:test2){text :name; integer :value} @db[:test2] << {} @db[:test2].columns.should == [:name, :value] @db.add_column :test2, :xyz, :text, :default => '000' @db[:test2].columns.should == [:name, :value, :xyz] @db[:test2] << {:name => 'mmm', :value => 111} @db[:test2].first[:xyz].should == '000' @db[:test2].columns.should == [:name, :value, :xyz] @db.drop_column :test2, :xyz @db[:test2].columns.should == [:name, :value] @db[:test2].delete @db.add_column :test2, :xyz, :text, :default => '000' @db[:test2] << {:name => 'mmm', :value => 111, :xyz => 'qqqq'} @db[:test2].columns.should == [:name, :value, :xyz] @db.rename_column :test2, :xyz, :zyx @db[:test2].columns.should == [:name, :value, :zyx] @db[:test2].first[:zyx].should == 'qqqq' @db.add_column :test2, :xyz, :float @db[:test2].delete @db[:test2] << {:name => 'mmm', :value => 111, :xyz => 56.78} @db.set_column_type :test2, :xyz, :integer @db[:test2].first[:xyz].should == 57 end end describe "A PostgreSQL database" do before do @db = DB @db.drop_table?(:posts) @db.sqls.clear end after do @db.drop_table?(:posts) end specify "should support resetting the primary key sequence" do @db.create_table(:posts){primary_key :a} @db[:posts].insert(:a=>20).should == 20 @db[:posts].insert.should == 1 @db[:posts].insert.should == 2 @db[:posts].insert(:a=>10).should == 10 @db.reset_primary_key_sequence(:posts).should == 21 @db[:posts].insert.should == 21 @db[:posts].order(:a).map(:a).should == [1, 2, 10, 20, 21] end specify "should support specifying Integer/Bignum/Fixnum types in primary keys and have them be auto-incrementing" do @db.create_table(:posts){primary_key :a, :type=>Integer} @db[:posts].insert.should == 1 @db[:posts].insert.should == 2 @db.create_table!(:posts){primary_key :a, :type=>Fixnum} @db[:posts].insert.should == 1 @db[:posts].insert.should == 2 @db.create_table!(:posts){primary_key :a, :type=>Bignum} @db[:posts].insert.should == 1 @db[:posts].insert.should == 2 end specify "should not raise an error if attempting to reset the primary key sequence for a table without a primary key" do @db.create_table(:posts){Integer :a} @db.reset_primary_key_sequence(:posts).should == nil end specify "should support opclass specification" do @db.create_table(:posts){text :title; text :body; integer :user_id; index(:user_id, :opclass => :int4_ops, :type => :btree)} check_sqls do @db.sqls.should == [ 'CREATE TABLE "posts" ("title" text, "body" text, "user_id" integer)', 'CREATE INDEX "posts_user_id_index" ON "posts" USING btree ("user_id" int4_ops)' ] end end specify "should support fulltext indexes and searching" do @db.create_table(:posts){text :title; text :body; full_text_index [:title, :body]; full_text_index :title, :language => 'french', :index_type=>:gist} check_sqls do @db.sqls.should == [ %{CREATE TABLE "posts" ("title" text, "body" text)}, %{CREATE INDEX "posts_title_body_index" ON "posts" USING gin (to_tsvector('simple'::regconfig, (COALESCE("title", '') || ' ' || COALESCE("body", ''))))}, %{CREATE INDEX "posts_title_index" ON "posts" USING gist (to_tsvector('french'::regconfig, (COALESCE("title", ''))))} ] end @db[:posts].insert(:title=>'ruby rails', :body=>'yowsa') @db[:posts].insert(:title=>'sequel', :body=>'ruby') @db[:posts].insert(:title=>'ruby scooby', :body=>'x') @db.sqls.clear @db[:posts].full_text_search(:title, 'rails').all.should == [{:title=>'ruby 
rails', :body=>'yowsa'}] @db[:posts].full_text_search([:title, :body], ['yowsa', 'rails']).all.should == [{:title=>'ruby rails', :body=>'yowsa'}] @db[:posts].full_text_search(:title, 'scooby', :language => 'french').all.should == [{:title=>'ruby scooby', :body=>'x'}] check_sqls do @db.sqls.should == [ %{SELECT * FROM "posts" WHERE (to_tsvector('simple'::regconfig, (COALESCE("title", ''))) @@ to_tsquery('simple'::regconfig, 'rails'))}, %{SELECT * FROM "posts" WHERE (to_tsvector('simple'::regconfig, (COALESCE("title", '') || ' ' || COALESCE("body", ''))) @@ to_tsquery('simple'::regconfig, 'yowsa | rails'))}, %{SELECT * FROM "posts" WHERE (to_tsvector('french'::regconfig, (COALESCE("title", ''))) @@ to_tsquery('french'::regconfig, 'scooby'))}] end @db[:posts].full_text_search(:title, :$n).call(:select, :n=>'rails').should == [{:title=>'ruby rails', :body=>'yowsa'}] @db[:posts].full_text_search(:title, :$n).prepare(:select, :fts_select).call(:n=>'rails').should == [{:title=>'ruby rails', :body=>'yowsa'}] end specify "should support spatial indexes" do @db.create_table(:posts){box :geom; spatial_index [:geom]} check_sqls do @db.sqls.should == [ 'CREATE TABLE "posts" ("geom" box)', 'CREATE INDEX "posts_geom_index" ON "posts" USING gist ("geom")' ] end end specify "should support indexes with index type" do @db.create_table(:posts){varchar :title, :size => 5; index :title, :type => 'hash'} check_sqls do @db.sqls.should == [ 'CREATE TABLE "posts" ("title" varchar(5))', 'CREATE INDEX "posts_title_index" ON "posts" USING hash ("title")' ] end end specify "should support unique indexes with index type" do @db.create_table(:posts){varchar :title, :size => 5; index :title, :type => 'btree', :unique => true} check_sqls do @db.sqls.should == [ 'CREATE TABLE "posts" ("title" varchar(5))', 'CREATE UNIQUE INDEX "posts_title_index" ON "posts" USING btree ("title")' ] end end specify "should support partial indexes" do @db.create_table(:posts){varchar :title, :size => 5; index :title, :where => {:title => '5'}} check_sqls do @db.sqls.should == [ 'CREATE TABLE "posts" ("title" varchar(5))', 'CREATE INDEX "posts_title_index" ON "posts" ("title") WHERE ("title" = \'5\')' ] end end specify "should support identifiers for table names in indexes" do @db.create_table(Sequel::SQL::Identifier.new(:posts)){varchar :title, :size => 5; index :title, :where => {:title => '5'}} check_sqls do @db.sqls.should == [ 'CREATE TABLE "posts" ("title" varchar(5))', 'CREATE INDEX "posts_title_index" ON "posts" ("title") WHERE ("title" = \'5\')' ] end end specify "should support renaming tables" do @db.create_table!(:posts1){primary_key :a} @db.rename_table(:posts1, :posts) end end describe "Postgres::Dataset#import" do before do @db = DB @db.create_table!(:test){primary_key :x; Integer :y} @db.sqls.clear @ds = @db[:test] end after do @db.drop_table?(:test) end specify "#import should use a single insert statement" do @ds.import([:x, :y], [[1, 2], [3, 4]]) check_sqls do @db.sqls.should == ['BEGIN', 'INSERT INTO "test" ("x", "y") VALUES (1, 2), (3, 4)', 'COMMIT'] end @ds.all.should == [{:x=>1, :y=>2}, {:x=>3, :y=>4}] end specify "#import should work correctly when returning primary keys" do @ds.import([:x, :y], [[1, 2], [3, 4]], :return=>:primary_key).should == [1, 3] @ds.all.should == [{:x=>1, :y=>2}, {:x=>3, :y=>4}] end specify "#import should work correctly when returning primary keys with :slice option" do @ds.import([:x, :y], [[1, 2], [3, 4]], :return=>:primary_key, :slice=>1).should == [1, 3] @ds.all.should == [{:x=>1, :y=>2}, 
{:x=>3, :y=>4}] end specify "#import should work correctly with an arbitrary returning value" do @ds.returning(:y, :x).import([:x, :y], [[1, 2], [3, 4]]).should == [{:y=>2, :x=>1}, {:y=>4, :x=>3}] @ds.all.should == [{:x=>1, :y=>2}, {:x=>3, :y=>4}] end end describe "Postgres::Dataset#insert" do before do @db = DB @db.create_table!(:test5){primary_key :xid; Integer :value} @db.sqls.clear @ds = @db[:test5] end after do @db.drop_table?(:test5) end specify "should work with static SQL" do @ds.with_sql('INSERT INTO test5 (value) VALUES (10)').insert.should == nil @db['INSERT INTO test5 (value) VALUES (20)'].insert.should == nil @ds.all.should == [{:xid=>1, :value=>10}, {:xid=>2, :value=>20}] end specify "should insert correctly if using a column array and a value array" do @ds.insert([:value], [10]).should == 1 @ds.all.should == [{:xid=>1, :value=>10}] end specify "should use INSERT RETURNING" do @ds.insert(:value=>10).should == 1 check_sqls do @db.sqls.last.should == 'INSERT INTO "test5" ("value") VALUES (10) RETURNING "xid"' end end specify "should have insert_select insert the record and return the inserted record" do h = @ds.insert_select(:value=>10) h[:value].should == 10 @ds.first(:xid=>h[:xid])[:value].should == 10 end specify "should correctly return the inserted record's primary key value" do value1 = 10 id1 = @ds.insert(:value=>value1) @ds.first(:xid=>id1)[:value].should == value1 value2 = 20 id2 = @ds.insert(:value=>value2) @ds.first(:xid=>id2)[:value].should == value2 end specify "should return nil if the table has no primary key" do @db.create_table!(:test5){String :name; Integer :value} @ds.delete @ds.insert(:name=>'a').should == nil end end describe "Postgres::Database schema qualified tables" do before do @db = DB @db << "CREATE SCHEMA schema_test" @db.instance_variable_set(:@primary_keys, {}) @db.instance_variable_set(:@primary_key_sequences, {}) end after do @db << "DROP SCHEMA schema_test CASCADE" end specify "should be able to create, drop, select and insert into tables in a given schema" do @db.create_table(:schema_test__schema_test){primary_key :i} @db[:schema_test__schema_test].first.should == nil @db[:schema_test__schema_test].insert(:i=>1).should == 1 @db[:schema_test__schema_test].first.should == {:i=>1} @db.from(Sequel.lit('schema_test.schema_test')).first.should == {:i=>1} @db.drop_table(:schema_test__schema_test) @db.create_table(Sequel.qualify(:schema_test, :schema_test)){integer :i} @db[:schema_test__schema_test].first.should == nil @db.from(Sequel.lit('schema_test.schema_test')).first.should == nil @db.drop_table(Sequel.qualify(:schema_test, :schema_test)) end specify "#tables should not include tables in a default non-public schema" do @db.create_table(:schema_test__schema_test){integer :i} @db.tables(:schema=>:schema_test).should include(:schema_test) @db.tables.should_not include(:pg_am) @db.tables.should_not include(:domain_udt_usage) end specify "#tables should return tables in the schema provided by the :schema argument" do @db.create_table(:schema_test__schema_test){integer :i} @db.tables(:schema=>:schema_test).should == [:schema_test] end specify "#schema should not include columns from tables in a default non-public schema" do @db.create_table(:schema_test__domains){integer :i} sch = @db.schema(:schema_test__domains) cs = sch.map{|x| x.first} cs.should include(:i) cs.should_not include(:data_type) end specify "#schema should only include columns from the table in the given :schema argument" do @db.create_table!(:domains){integer :d} 
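# Note: the double-underscore in :schema_test__domains is Sequel's implicit
# qualification convention; the adapter splits it into the schema-qualified
# "schema_test"."domains", so the table created next is distinct from the
# unqualified :domains table created above even though the names match.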
@db.create_table(:schema_test__domains){integer :i} sch = @db.schema(:domains, :schema=>:schema_test) cs = sch.map{|x| x.first} cs.should include(:i) cs.should_not include(:d) @db.drop_table(:domains) end specify "#schema should not include columns in tables from other domains by default" do @db.create_table!(:public__domains){integer :d} @db.create_table(:schema_test__domains){integer :i} begin @db.schema(:domains).map{|x| x.first}.should == [:d] @db.schema(:schema_test__domains).map{|x| x.first}.should == [:i] ensure @db.drop_table?(:public__domains) end end specify "#table_exists? should see if the table is in a given schema" do @db.create_table(:schema_test__schema_test){integer :i} @db.table_exists?(:schema_test__schema_test).should == true end specify "should be able to add and drop indexes in a schema" do @db.create_table(:schema_test__schema_test){Integer :i, :index=>true} @db.indexes(:schema_test__schema_test).keys.should == [:schema_test_schema_test_i_index] @db.drop_index :schema_test__schema_test, :i @db.indexes(:schema_test__schema_test).keys.should == [] end specify "should be able to get primary keys for tables in a given schema" do @db.create_table(:schema_test__schema_test){primary_key :i} @db.primary_key(:schema_test__schema_test).should == 'i' end specify "should be able to get serial sequences for tables in a given schema" do @db.create_table(:schema_test__schema_test){primary_key :i} @db.primary_key_sequence(:schema_test__schema_test).should == '"schema_test"."schema_test_i_seq"' end specify "should be able to get serial sequences for tables that have spaces in the name in a given schema" do @db.create_table(:"schema_test__schema test"){primary_key :i} @db.primary_key_sequence(:"schema_test__schema test").should == '"schema_test"."schema test_i_seq"' end specify "should be able to get custom sequences for tables in a given schema" do @db << "CREATE SEQUENCE schema_test.kseq" @db.create_table(:schema_test__schema_test){integer :j; primary_key :k, :type=>:integer, :default=>Sequel.lit("nextval('schema_test.kseq'::regclass)")} @db.primary_key_sequence(:schema_test__schema_test).should == '"schema_test".kseq' end specify "should be able to get custom sequences for tables that have spaces in the name in a given schema" do @db << "CREATE SEQUENCE schema_test.\"ks eq\"" @db.create_table(:"schema_test__schema test"){integer :j; primary_key :k, :type=>:integer, :default=>Sequel.lit("nextval('schema_test.\"ks eq\"'::regclass)")} @db.primary_key_sequence(:"schema_test__schema test").should == '"schema_test"."ks eq"' end specify "should handle schema introspection cases with tables with same name in multiple schemas" do begin @db.create_table(:schema_test__schema_test) do primary_key :id foreign_key :i, :schema_test__schema_test, :index=>{:name=>:schema_test_sti} end @db.create_table!(:public__schema_test) do primary_key :id foreign_key :j, :public__schema_test, :index=>{:name=>:public_test_sti} end h = @db.schema(:schema_test) h.length.should == 2 h.last.first.should == :j @db.indexes(:schema_test).should == {:public_test_sti=>{:unique=>false, :columns=>[:j], :deferrable=>nil}} @db.foreign_key_list(:schema_test).should == [{:on_update=>:no_action, :columns=>[:j], :deferrable=>false, :key=>[:id], :table=>:schema_test, :on_delete=>:no_action, :name=>:schema_test_j_fkey}] ensure @db.drop_table?(:public__schema_test) end end end describe "Postgres::Database schema qualified tables and eager graphing" do before(:all) do @db = DB @db.run "DROP SCHEMA s CASCADE" rescue nil @db.run 
"CREATE SCHEMA s" @db.create_table(:s__bands){primary_key :id; String :name} @db.create_table(:s__albums){primary_key :id; String :name; foreign_key :band_id, :s__bands} @db.create_table(:s__tracks){primary_key :id; String :name; foreign_key :album_id, :s__albums} @db.create_table(:s__members){primary_key :id; String :name; foreign_key :band_id, :s__bands} @Band = Class.new(Sequel::Model(:s__bands)) @Album = Class.new(Sequel::Model(:s__albums)) @Track = Class.new(Sequel::Model(:s__tracks)) @Member = Class.new(Sequel::Model(:s__members)) def @Band.name; :Band; end def @Album.name; :Album; end def @Track.name; :Track; end def @Member.name; :Member; end @Band.one_to_many :albums, :class=>@Album, :order=>:name @Band.one_to_many :members, :class=>@Member, :order=>:name @Album.many_to_one :band, :class=>@Band, :order=>:name @Album.one_to_many :tracks, :class=>@Track, :order=>:name @Track.many_to_one :album, :class=>@Album, :order=>:name @Member.many_to_one :band, :class=>@Band, :order=>:name @Member.many_to_many :members, :class=>@Member, :join_table=>:s__bands, :right_key=>:id, :left_key=>:id, :left_primary_key=>:band_id, :right_primary_key=>:band_id, :order=>:name @Band.many_to_many :tracks, :class=>@Track, :join_table=>:s__albums, :right_key=>:id, :right_primary_key=>:album_id, :order=>:name @b1 = @Band.create(:name=>"BM") @b2 = @Band.create(:name=>"J") @a1 = @Album.create(:name=>"BM1", :band=>@b1) @a2 = @Album.create(:name=>"BM2", :band=>@b1) @a3 = @Album.create(:name=>"GH", :band=>@b2) @a4 = @Album.create(:name=>"GHL", :band=>@b2) @t1 = @Track.create(:name=>"BM1-1", :album=>@a1) @t2 = @Track.create(:name=>"BM1-2", :album=>@a1) @t3 = @Track.create(:name=>"BM2-1", :album=>@a2) @t4 = @Track.create(:name=>"BM2-2", :album=>@a2) @m1 = @Member.create(:name=>"NU", :band=>@b1) @m2 = @Member.create(:name=>"TS", :band=>@b1) @m3 = @Member.create(:name=>"NS", :band=>@b2) @m4 = @Member.create(:name=>"JC", :band=>@b2) end after(:all) do @db.run "DROP SCHEMA s CASCADE" end specify "should return all eager graphs correctly" do bands = @Band.order(:bands__name).eager_graph(:albums).all bands.should == [@b1, @b2] bands.map{|x| x.albums}.should == [[@a1, @a2], [@a3, @a4]] bands = @Band.order(:bands__name).eager_graph(:albums=>:tracks).all bands.should == [@b1, @b2] bands.map{|x| x.albums}.should == [[@a1, @a2], [@a3, @a4]] bands.map{|x| x.albums.map{|y| y.tracks}}.should == [[[@t1, @t2], [@t3, @t4]], [[], []]] bands = @Band.order(:bands__name).eager_graph({:albums=>:tracks}, :members).all bands.should == [@b1, @b2] bands.map{|x| x.albums}.should == [[@a1, @a2], [@a3, @a4]] bands.map{|x| x.albums.map{|y| y.tracks}}.should == [[[@t1, @t2], [@t3, @t4]], [[], []]] bands.map{|x| x.members}.should == [[@m1, @m2], [@m4, @m3]] end specify "should have eager graphs work with previous joins" do bands = @Band.order(:bands__name).select_all(:s__bands).join(:s__members, :band_id=>:id).from_self(:alias=>:bands0).eager_graph(:albums=>:tracks).all bands.should == [@b1, @b2] bands.map{|x| x.albums}.should == [[@a1, @a2], [@a3, @a4]] bands.map{|x| x.albums.map{|y| y.tracks}}.should == [[[@t1, @t2], [@t3, @t4]], [[], []]] end specify "should have eager graphs work with joins with the same tables" do bands = @Band.order(:bands__name).select_all(:s__bands).join(:s__members, :band_id=>:id).eager_graph({:albums=>:tracks}, :members).all bands.should == [@b1, @b2] bands.map{|x| x.albums}.should == [[@a1, @a2], [@a3, @a4]] bands.map{|x| x.albums.map{|y| y.tracks}}.should == [[[@t1, @t2], [@t3, @t4]], [[], []]] bands.map{|x| 
x.members}.should == [[@m1, @m2], [@m4, @m3]] end specify "should have eager graphs work with self referential associations" do bands = @Band.order(:bands__name).eager_graph(:tracks=>{:album=>:band}).all bands.should == [@b1, @b2] bands.map{|x| x.tracks}.should == [[@t1, @t2, @t3, @t4], []] bands.map{|x| x.tracks.map{|y| y.album}}.should == [[@a1, @a1, @a2, @a2], []] bands.map{|x| x.tracks.map{|y| y.album.band}}.should == [[@b1, @b1, @b1, @b1], []] members = @Member.order(:members__name).eager_graph(:members).all members.should == [@m4, @m3, @m1, @m2] members.map{|x| x.members}.should == [[@m4, @m3], [@m4, @m3], [@m1, @m2], [@m1, @m2]] members = @Member.order(:members__name).eager_graph(:band, :members=>:band).all members.should == [@m4, @m3, @m1, @m2] members.map{|x| x.band}.should == [@b2, @b2, @b1, @b1] members.map{|x| x.members}.should == [[@m4, @m3], [@m4, @m3], [@m1, @m2], [@m1, @m2]] members.map{|x| x.members.map{|y| y.band}}.should == [[@b2, @b2], [@b2, @b2], [@b1, @b1], [@b1, @b1]] end specify "should have eager graphs work with a from_self dataset" do bands = @Band.order(:bands__name).from_self.eager_graph(:tracks=>{:album=>:band}).all bands.should == [@b1, @b2] bands.map{|x| x.tracks}.should == [[@t1, @t2, @t3, @t4], []] bands.map{|x| x.tracks.map{|y| y.album}}.should == [[@a1, @a1, @a2, @a2], []] bands.map{|x| x.tracks.map{|y| y.album.band}}.should == [[@b1, @b1, @b1, @b1], []] end specify "should have eager graphs work with different types of aliased from tables" do bands = @Band.order(:tracks__name).from(:s__bands___tracks).eager_graph(:tracks).all bands.should == [@b1, @b2] bands.map{|x| x.tracks}.should == [[@t1, @t2, @t3, @t4], []] bands = @Band.order(:tracks__name).from(Sequel.expr(:s__bands).as(:tracks)).eager_graph(:tracks).all bands.should == [@b1, @b2] bands.map{|x| x.tracks}.should == [[@t1, @t2, @t3, @t4], []] bands = @Band.order(:tracks__name).from(Sequel.expr(:s__bands).as(Sequel.identifier(:tracks))).eager_graph(:tracks).all bands.should == [@b1, @b2] bands.map{|x| x.tracks}.should == [[@t1, @t2, @t3, @t4], []] bands = @Band.order(:tracks__name).from(Sequel.expr(:s__bands).as('tracks')).eager_graph(:tracks).all bands.should == [@b1, @b2] bands.map{|x| x.tracks}.should == [[@t1, @t2, @t3, @t4], []] end specify "should have eager graphs work with join tables with aliases" do bands = @Band.order(:bands__name).eager_graph(:members).join(:s__albums___tracks, :band_id=>Sequel.qualify(:s__bands, :id)).eager_graph(:albums=>:tracks).all bands.should == [@b1, @b2] bands.map{|x| x.albums}.should == [[@a1, @a2], [@a3, @a4]] bands.map{|x| x.members}.should == [[@m1, @m2], [@m4, @m3]] bands = @Band.order(:bands__name).eager_graph(:members).join(Sequel.as(:s__albums, :tracks), :band_id=>Sequel.qualify(:s__bands, :id)).eager_graph(:albums=>:tracks).all bands.should == [@b1, @b2] bands.map{|x| x.albums}.should == [[@a1, @a2], [@a3, @a4]] bands.map{|x| x.members}.should == [[@m1, @m2], [@m4, @m3]] bands = @Band.order(:bands__name).eager_graph(:members).join(Sequel.as(:s__albums, 'tracks'), :band_id=>Sequel.qualify(:s__bands, :id)).eager_graph(:albums=>:tracks).all bands.should == [@b1, @b2] bands.map{|x| x.albums}.should == [[@a1, @a2], [@a3, @a4]] bands.map{|x| x.members}.should == [[@m1, @m2], [@m4, @m3]] bands = @Band.order(:bands__name).eager_graph(:members).join(Sequel.as(:s__albums, Sequel.identifier(:tracks)), :band_id=>Sequel.qualify(:s__bands, :id)).eager_graph(:albums=>:tracks).all bands.should == [@b1, @b2] bands.map{|x| x.albums}.should == [[@a1, @a2], [@a3, @a4]] 
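# This spec repeats the same assertions for every supported way of aliasing
# the joined s__albums table as :tracks (the implicit ___ alias, Sequel.as
# with a symbol/string/identifier, and the :table_alias join option); in
# each case eager_graph must pick a fresh alias for the graphed association.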
bands.map{|x| x.members}.should == [[@m1, @m2], [@m4, @m3]] bands = @Band.order(:bands__name).eager_graph(:members).join(:s__albums, {:band_id=>Sequel.qualify(:s__bands, :id)}, :table_alias=>:tracks).eager_graph(:albums=>:tracks).all bands.should == [@b1, @b2] bands.map{|x| x.albums}.should == [[@a1, @a2], [@a3, @a4]] bands.map{|x| x.members}.should == [[@m1, @m2], [@m4, @m3]] bands = @Band.order(:bands__name).eager_graph(:members).join(:s__albums, {:band_id=>Sequel.qualify(:s__bands, :id)}, :table_alias=>'tracks').eager_graph(:albums=>:tracks).all bands.should == [@b1, @b2] bands.map{|x| x.albums}.should == [[@a1, @a2], [@a3, @a4]] bands.map{|x| x.members}.should == [[@m1, @m2], [@m4, @m3]] bands = @Band.order(:bands__name).eager_graph(:members).join(:s__albums, {:band_id=>Sequel.qualify(:s__bands, :id)}, :table_alias=>Sequel.identifier(:tracks)).eager_graph(:albums=>:tracks).all bands.should == [@b1, @b2] bands.map{|x| x.albums}.should == [[@a1, @a2], [@a3, @a4]] bands.map{|x| x.members}.should == [[@m1, @m2], [@m4, @m3]] end specify "should have eager graphs work with different types of qualified from tables" do bands = @Band.order(:bands__name).from(Sequel.qualify(:s, :bands)).eager_graph(:tracks).all bands.should == [@b1, @b2] bands.map{|x| x.tracks}.should == [[@t1, @t2, @t3, @t4], []] bands = @Band.order(:bands__name).from(Sequel.identifier(:bands).qualify(:s)).eager_graph(:tracks).all bands.should == [@b1, @b2] bands.map{|x| x.tracks}.should == [[@t1, @t2, @t3, @t4], []] bands = @Band.order(:bands__name).from(Sequel::SQL::QualifiedIdentifier.new(:s, 'bands')).eager_graph(:tracks).all bands.should == [@b1, @b2] bands.map{|x| x.tracks}.should == [[@t1, @t2, @t3, @t4], []] end end if DB.server_version >= 80300 describe "PostgreSQL tsearch2" do before(:all) do DB.create_table! :test6 do text :title text :body full_text_index [:title, :body] end @ds = DB[:test6] end after do DB[:test6].delete end after(:all) do DB.drop_table?(:test6) end specify "should search by indexed column" do record = {:title => "oopsla conference", :body => "test"} @ds << record @ds.full_text_search(:title, "oopsla").all.should include(record) end specify "should join multiple columns with spaces so words in any column can be searched" do record = {:title => "multiple words", :body => "are easy to search"} @ds << record @ds.full_text_search([:title, :body], "words").all.should include(record) end specify "should return rows with a NULL in one column if there is a match in another column" do record = {:title => "multiple words", :body => nil} @ds << record @ds.full_text_search([:title, :body], "words").all.should include(record) end end end if DB.dataset.supports_window_functions? 
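# Window function support is feature-detected per dataset; PostgreSQL has it
# from 8.4. An illustrative (untested) example of the virtual row window
# syntax exercised by the specs below, assuming the :i1 table they create:
#
#   DB[:i1].select{sum(:over, :args=>amount, :partition=>group_id, :order=>id){}}
#   # SELECT sum("amount") OVER (PARTITION BY "group_id" ORDER BY "id") FROM "i1"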
describe "Postgres::Dataset named windows" do before do @db = DB @db.create_table!(:i1){Integer :id; Integer :group_id; Integer :amount} @ds = @db[:i1].order(:id) @ds.insert(:id=>1, :group_id=>1, :amount=>1) @ds.insert(:id=>2, :group_id=>1, :amount=>10) @ds.insert(:id=>3, :group_id=>1, :amount=>100) @ds.insert(:id=>4, :group_id=>2, :amount=>1000) @ds.insert(:id=>5, :group_id=>2, :amount=>10000) @ds.insert(:id=>6, :group_id=>2, :amount=>100000) end after do @db.drop_table?(:i1) end specify "should give correct results for window functions" do @ds.window(:win, :partition=>:group_id, :order=>:id).select(:id){sum(:over, :args=>amount, :window=>win){}}.all.should == [{:sum=>1, :id=>1}, {:sum=>11, :id=>2}, {:sum=>111, :id=>3}, {:sum=>1000, :id=>4}, {:sum=>11000, :id=>5}, {:sum=>111000, :id=>6}] @ds.window(:win, :partition=>:group_id).select(:id){sum(:over, :args=>amount, :window=>win, :order=>id){}}.all.should == [{:sum=>1, :id=>1}, {:sum=>11, :id=>2}, {:sum=>111, :id=>3}, {:sum=>1000, :id=>4}, {:sum=>11000, :id=>5}, {:sum=>111000, :id=>6}] @ds.window(:win, {}).select(:id){sum(:over, :args=>amount, :window=>:win, :order=>id){}}.all.should == [{:sum=>1, :id=>1}, {:sum=>11, :id=>2}, {:sum=>111, :id=>3}, {:sum=>1111, :id=>4}, {:sum=>11111, :id=>5}, {:sum=>111111, :id=>6}] @ds.window(:win, :partition=>:group_id).select(:id){sum(:over, :args=>amount, :window=>:win, :order=>id, :frame=>:all){}}.all.should == [{:sum=>111, :id=>1}, {:sum=>111, :id=>2}, {:sum=>111, :id=>3}, {:sum=>111000, :id=>4}, {:sum=>111000, :id=>5}, {:sum=>111000, :id=>6}] end end end describe "Postgres::Database functions, languages, schemas, and triggers" do before do @d = DB end after do @d.drop_function('tf', :if_exists=>true, :cascade=>true) @d.drop_function('tf', :if_exists=>true, :cascade=>true, :args=>%w'integer integer') @d.drop_language(:plpgsql, :if_exists=>true, :cascade=>true) if @d.server_version < 90000 @d.drop_schema(:sequel, :if_exists=>true, :cascade=>true) @d.drop_table?(:test) end specify "#create_function and #drop_function should create and drop functions" do proc{@d['SELECT tf()'].all}.should raise_error(Sequel::DatabaseError) args = ['tf', 'SELECT 1', {:returns=>:integer}] @d.send(:create_function_sql, *args).should =~ /\A\s*CREATE FUNCTION tf\(\)\s+RETURNS integer\s+LANGUAGE SQL\s+AS 'SELECT 1'\s*\z/ @d.create_function(*args) @d['SELECT tf()'].all.should == [{:tf=>1}] @d.send(:drop_function_sql, 'tf').should == 'DROP FUNCTION tf()' @d.drop_function('tf') proc{@d['SELECT tf()'].all}.should raise_error(Sequel::DatabaseError) end specify "#create_function and #drop_function should support options" do args = ['tf', 'SELECT $1 + $2', {:args=>[[:integer, :a], :integer], :replace=>true, :returns=>:integer, :language=>'SQL', :behavior=>:immutable, :strict=>true, :security_definer=>true, :cost=>2, :set=>{:search_path => 'public'}}] @d.send(:create_function_sql,*args).should =~ /\A\s*CREATE OR REPLACE FUNCTION tf\(a integer, integer\)\s+RETURNS integer\s+LANGUAGE SQL\s+IMMUTABLE\s+STRICT\s+SECURITY DEFINER\s+COST 2\s+SET search_path = public\s+AS 'SELECT \$1 \+ \$2'\s*\z/ @d.create_function(*args) # Make sure replace works @d.create_function(*args) @d['SELECT tf(1, 2)'].all.should == [{:tf=>3}] args = ['tf', {:if_exists=>true, :cascade=>true, :args=>[[:integer, :a], :integer]}] @d.send(:drop_function_sql,*args).should == 'DROP FUNCTION IF EXISTS tf(a integer, integer) CASCADE' @d.drop_function(*args) # Make sure if exists works @d.drop_function(*args) end specify "#create_language and #drop_language should create and 
drop languages" do @d.send(:create_language_sql, :plpgsql).should == 'CREATE LANGUAGE plpgsql' @d.create_language(:plpgsql, :replace=>true) if @d.server_version < 90000 proc{@d.create_language(:plpgsql)}.should raise_error(Sequel::DatabaseError) @d.send(:drop_language_sql, :plpgsql).should == 'DROP LANGUAGE plpgsql' @d.drop_language(:plpgsql) if @d.server_version < 90000 proc{@d.drop_language(:plpgsql)}.should raise_error(Sequel::DatabaseError) if @d.server_version < 90000 @d.send(:create_language_sql, :plpgsql, :replace=>true, :trusted=>true, :handler=>:a, :validator=>:b).should == (@d.server_version >= 90000 ? 'CREATE OR REPLACE TRUSTED LANGUAGE plpgsql HANDLER a VALIDATOR b' : 'CREATE TRUSTED LANGUAGE plpgsql HANDLER a VALIDATOR b') @d.send(:drop_language_sql, :plpgsql, :if_exists=>true, :cascade=>true).should == 'DROP LANGUAGE IF EXISTS plpgsql CASCADE' # Make sure if exists works @d.drop_language(:plpgsql, :if_exists=>true, :cascade=>true) if @d.server_version < 90000 end specify "#create_schema and #drop_schema should create and drop schemas" do @d.send(:create_schema_sql, :sequel).should == 'CREATE SCHEMA "sequel"' @d.send(:create_schema_sql, :sequel, :if_not_exists=>true, :owner=>:foo).should == 'CREATE SCHEMA IF NOT EXISTS "sequel" AUTHORIZATION "foo"' @d.send(:drop_schema_sql, :sequel).should == 'DROP SCHEMA "sequel"' @d.send(:drop_schema_sql, :sequel, :if_exists=>true, :cascade=>true).should == 'DROP SCHEMA IF EXISTS "sequel" CASCADE' @d.create_schema(:sequel) @d.create_schema(:sequel, :if_not_exists=>true) if @d.server_version >= 90300 @d.create_table(:sequel__test){Integer :a} @d.drop_schema(:sequel, :if_exists=>true, :cascade=>true) end specify "#create_trigger and #drop_trigger should create and drop triggers" do @d.create_language(:plpgsql) if @d.server_version < 90000 @d.create_function(:tf, 'BEGIN IF NEW.value IS NULL THEN RAISE EXCEPTION \'Blah\'; END IF; RETURN NEW; END;', :language=>:plpgsql, :returns=>:trigger) @d.send(:create_trigger_sql, :test, :identity, :tf, :each_row=>true).should == 'CREATE TRIGGER identity BEFORE INSERT OR UPDATE OR DELETE ON "test" FOR EACH ROW EXECUTE PROCEDURE tf()' @d.create_table(:test){String :name; Integer :value} @d.create_trigger(:test, :identity, :tf, :each_row=>true) @d[:test].insert(:name=>'a', :value=>1) @d[:test].filter(:name=>'a').all.should == [{:name=>'a', :value=>1}] proc{@d[:test].filter(:name=>'a').update(:value=>nil)}.should raise_error(Sequel::DatabaseError) @d[:test].filter(:name=>'a').all.should == [{:name=>'a', :value=>1}] @d[:test].filter(:name=>'a').update(:value=>3) @d[:test].filter(:name=>'a').all.should == [{:name=>'a', :value=>3}] @d.send(:drop_trigger_sql, :test, :identity).should == 'DROP TRIGGER identity ON "test"' @d.drop_trigger(:test, :identity) @d.send(:create_trigger_sql, :test, :identity, :tf, :after=>true, :events=>:insert, :args=>[1, 'a']).should == 'CREATE TRIGGER identity AFTER INSERT ON "test" EXECUTE PROCEDURE tf(1, \'a\')' @d.send(:drop_trigger_sql, :test, :identity, :if_exists=>true, :cascade=>true).should == 'DROP TRIGGER IF EXISTS identity ON "test" CASCADE' # Make sure if exists works @d.drop_trigger(:test, :identity, :if_exists=>true, :cascade=>true) end end if DB.adapter_scheme == :postgres describe "Postgres::Dataset #use_cursor" do before(:all) do @db = DB @db.create_table!(:test_cursor){Integer :x} @db.sqls.clear @ds = @db[:test_cursor] @db.transaction{1001.times{|i| @ds.insert(i)}} end after(:all) do @db.drop_table?(:test_cursor) end specify "should return the same results as the 
non-cursor use" do @ds.all.should == @ds.use_cursor.all end specify "should respect the :rows_per_fetch option" do @db.sqls.clear @ds.use_cursor.all check_sqls do @db.sqls.length.should == 6 @db.sqls.clear end @ds.use_cursor(:rows_per_fetch=>100).all check_sqls do @db.sqls.length.should == 15 end end specify "should handle returning inside block" do def @ds.check_return use_cursor.each{|r| return} end @ds.check_return @ds.all.should == @ds.use_cursor.all end end describe "Postgres::PG_NAMED_TYPES" do before do @db = DB Sequel::Postgres::PG_NAMED_TYPES[:interval] = lambda{|v| v.reverse} @db.extension :pg_array @db.reset_conversion_procs end after do Sequel::Postgres::PG_NAMED_TYPES.delete(:interval) @db.reset_conversion_procs @db.drop_table?(:foo) end specify "should look up conversion procs by name" do @db.create_table!(:foo){interval :bar} @db[:foo].insert(Sequel.cast('21 days', :interval)) @db[:foo].get(:bar).should == 'syad 12' end specify "should handle array types of named types" do @db.create_table!(:foo){column :bar, 'interval[]'} @db[:foo].insert(Sequel.pg_array(['21 days'], :interval)) @db[:foo].get(:bar).should == ['syad 12'] end end end if ((DB.adapter_scheme == :postgres && SEQUEL_POSTGRES_USES_PG) || DB.adapter_scheme == :jdbc) && DB.server_version >= 90000 describe "Postgres::Database#copy_into" do before(:all) do @db = DB @db.create_table!(:test_copy){Integer :x; Integer :y} @ds = @db[:test_copy].order(:x, :y) end before do @db[:test_copy].delete end after(:all) do @db.drop_table?(:test_copy) end specify "should work with a :data option containing data in PostgreSQL text format" do @db.copy_into(:test_copy, :data=>"1\t2\n3\t4\n") @ds.select_map([:x, :y]).should == [[1, 2], [3, 4]] end specify "should work with :format=>:csv option and :data option containing data in CSV format" do @db.copy_into(:test_copy, :format=>:csv, :data=>"1,2\n3,4\n") @ds.select_map([:x, :y]).should == [[1, 2], [3, 4]] end specify "should respect given :options" do @db.copy_into(:test_copy, :options=>"FORMAT csv, HEADER TRUE", :data=>"x,y\n1,2\n3,4\n") @ds.select_map([:x, :y]).should == [[1, 2], [3, 4]] end specify "should respect given :options when :format is used" do @db.copy_into(:test_copy, :options=>"QUOTE '''', DELIMITER '|'", :format=>:csv, :data=>"'1'|'2'\n'3'|'4'\n") @ds.select_map([:x, :y]).should == [[1, 2], [3, 4]] end specify "should accept :columns option to only copy the given columns" do @db.copy_into(:test_copy, :data=>"1\t2\n3\t4\n", :columns=>[:y, :x]) @ds.select_map([:x, :y]).should == [[2, 1], [4, 3]] end specify "should accept a block and use returned values for the COPY IN data stream" do buf = ["1\t2\n", "3\t4\n"] @db.copy_into(:test_copy){buf.shift} @ds.select_map([:x, :y]).should == [[1, 2], [3, 4]] end specify "should work correctly with a block and :format=>:csv" do buf = ["1,2\n", "3,4\n"] @db.copy_into(:test_copy, :format=>:csv){buf.shift} @ds.select_map([:x, :y]).should == [[1, 2], [3, 4]] end specify "should accept an enumerable as the :data option" do @db.copy_into(:test_copy, :data=>["1\t2\n", "3\t4\n"]) @ds.select_map([:x, :y]).should == [[1, 2], [3, 4]] end specify "should have an exception cause a rollback of copied data and still have a usable connection" do 2.times do sent = false proc{@db.copy_into(:test_copy){raise ArgumentError if sent; sent = true; "1\t2\n"}}.should raise_error(ArgumentError) @ds.select_map([:x, :y]).should == [] end end specify "should handle database errors with a rollback of copied data and still have a usable connection" do 
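# The second data chunk below contains a non-integer ('a') for an integer
# column, so the server aborts the COPY mid-stream; running the cycle twice
# shows the copied data was rolled back and the connection remains usable.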
2.times do proc{@db.copy_into(:test_copy, :data=>["1\t2\n", "3\ta\n"])}.should raise_error(Sequel::DatabaseError) @ds.select_map([:x, :y]).should == [] end end specify "should raise an Error if both :data and a block are provided" do proc{@db.copy_into(:test_copy, :data=>["1\t2\n", "3\t4\n"]){}}.should raise_error(Sequel::Error) end specify "should raise an Error if neither :data nor a block is provided" do proc{@db.copy_into(:test_copy)}.should raise_error(Sequel::Error) end end describe "Postgres::Database#copy_table" do before(:all) do @db = DB @db.create_table!(:test_copy){Integer :x; Integer :y} ds = @db[:test_copy] ds.insert(1, 2) ds.insert(3, 4) end after(:all) do @db.drop_table?(:test_copy) end specify "without a block or options should return a text version of the table as a single string" do @db.copy_table(:test_copy).should == "1\t2\n3\t4\n" end specify "without a block and with :format=>:csv should return a csv version of the table as a single string" do @db.copy_table(:test_copy, :format=>:csv).should == "1,2\n3,4\n" end specify "should treat string as SQL code" do @db.copy_table('COPY "test_copy" TO STDOUT').should == "1\t2\n3\t4\n" end specify "should respect given :options" do @db.copy_table(:test_copy, :options=>"FORMAT csv, HEADER TRUE").should == "x,y\n1,2\n3,4\n" end specify "should respect given :options when :format is used" do @db.copy_table(:test_copy, :format=>:csv, :options=>"QUOTE '''', FORCE_QUOTE *").should == "'1','2'\n'3','4'\n" end specify "should accept dataset as first argument" do @db.copy_table(@db[:test_copy].cross_join(:test_copy___tc).order(:test_copy__x, :test_copy__y, :tc__x, :tc__y)).should == "1\t2\t1\t2\n1\t2\t3\t4\n3\t4\t1\t2\n3\t4\t3\t4\n" end specify "with a block and no options should yield each row as a string in text format" do buf = [] @db.copy_table(:test_copy){|b| buf << b} buf.should == ["1\t2\n", "3\t4\n"] end specify "with a block and :format=>:csv should yield each row as a string in csv format" do buf = [] @db.copy_table(:test_copy, :format=>:csv){|b| buf << b} buf.should == ["1,2\n", "3,4\n"] end specify "should work fine when using a block that is terminated early with a following copy_table" do buf = [] proc{@db.copy_table(:test_copy, :format=>:csv){|b| buf << b; break}}.should raise_error(Sequel::DatabaseDisconnectError) buf.should == ["1,2\n"] buf.clear proc{@db.copy_table(:test_copy, :format=>:csv){|b| buf << b; raise ArgumentError}}.should raise_error(Sequel::DatabaseDisconnectError) buf.should == ["1,2\n"] buf.clear @db.copy_table(:test_copy){|b| buf << b} buf.should == ["1\t2\n", "3\t4\n"] end specify "should work fine when using a block that is terminated early with a following regular query" do buf = [] proc{@db.copy_table(:test_copy, :format=>:csv){|b| buf << b; break}}.should raise_error(Sequel::DatabaseDisconnectError) buf.should == ["1,2\n"] buf.clear proc{@db.copy_table(:test_copy, :format=>:csv){|b| buf << b; raise ArgumentError}}.should raise_error(Sequel::DatabaseDisconnectError) buf.should == ["1,2\n"] @db[:test_copy].select_order_map(:x).should == [1, 3] end end end if DB.adapter_scheme == :postgres && SEQUEL_POSTGRES_USES_PG && DB.server_version >= 90000 describe "Postgres::Database LISTEN/NOTIFY" do before(:all) do @db = DB end specify "should support listen and notify" do notify_pid = @db.synchronize{|conn| conn.backend_pid} called = false @db.listen('foo', :after_listen=>proc{@db.notify('foo')}) do |ev, pid, payload| ev.should == 'foo' pid.should == notify_pid ['', nil].should 
include(payload) called = true end.should == 'foo' called.should be_true # Check weird identifier names called = false @db.listen('FOO bar', :after_listen=>proc{@db.notify('FOO bar')}) do |ev, pid, payload| ev.should == 'FOO bar' pid.should == notify_pid ['', nil].should include(payload) called = true end.should == 'FOO bar' called.should be_true # Check identifier symbols called = false @db.listen(:foo, :after_listen=>proc{@db.notify(:foo)}) do |ev, pid, payload| ev.should == 'foo' pid.should == notify_pid ['', nil].should include(payload) called = true end.should == 'foo' called.should be_true called = false @db.listen('foo', :after_listen=>proc{@db.notify('foo', :payload=>'bar')}) do |ev, pid, payload| ev.should == 'foo' pid.should == notify_pid payload.should == 'bar' called = true end.should == 'foo' called.should be_true @db.listen('foo', :after_listen=>proc{@db.notify('foo')}).should == 'foo' called = false called2 = false i = 0 @db.listen(['foo', 'bar'], :after_listen=>proc{@db.notify('foo', :payload=>'bar'); @db.notify('bar', :payload=>'foo')}, :loop=>proc{i+=1}) do |ev, pid, payload| if !called ev.should == 'foo' pid.should == notify_pid payload.should == 'bar' called = true else ev.should == 'bar' pid.should == notify_pid payload.should == 'foo' called2 = true break end end.should be_nil called.should be_true called2.should be_true i.should == 1 end specify "should accept a :timeout option in listen" do @db.listen('foo2', :timeout=>0.001).should == nil called = false @db.listen('foo2', :timeout=>0.001){|ev, pid, payload| called = true}.should == nil called.should be_false i = 0 @db.listen('foo2', :timeout=>0.001, :loop=>proc{i+=1; throw :stop if i > 3}){|ev, pid, payload| called = true}.should == nil i.should == 4 end unless RUBY_PLATFORM =~ /mingw/ # Ruby freezes on this spec on this platform/version end end describe 'PostgreSQL special float handling' do before do @db = DB @db.create_table!(:test5){Float :value} @db.sqls.clear @ds = @db[:test5] end after do @db.drop_table?(:test5) end check_sqls do specify 'should quote NaN' do nan = 0.0/0.0 @ds.insert_sql(:value => nan).should == %q{INSERT INTO "test5" ("value") VALUES ('NaN')} end specify 'should quote +Infinity' do inf = 1.0/0.0 @ds.insert_sql(:value => inf).should == %q{INSERT INTO "test5" ("value") VALUES ('Infinity')} end specify 'should quote -Infinity' do inf = -1.0/0.0 @ds.insert_sql(:value => inf).should == %q{INSERT INTO "test5" ("value") VALUES ('-Infinity')} end end if DB.adapter_scheme == :postgres specify 'inserts NaN' do nan = 0.0/0.0 @ds.insert(:value=>nan) @ds.all[0][:value].nan?.should be_true end specify 'inserts +Infinity' do inf = 1.0/0.0 @ds.insert(:value=>inf) @ds.all[0][:value].infinite?.should > 0 end specify 'inserts -Infinity' do inf = -1.0/0.0 @ds.insert(:value=>inf) @ds.all[0][:value].infinite?.should < 0 end end end describe 'PostgreSQL array handling' do before(:all) do @db = DB @db.extension :pg_array @ds = @db[:items] @native = DB.adapter_scheme == :postgres @jdbc = DB.adapter_scheme == :jdbc @tp = lambda{@db.schema(:items).map{|a| a.last[:type]}} end after do @db.drop_table?(:items) end specify 'insert and retrieve integer and float arrays of various sizes' do @db.create_table!(:items) do column :i2, 'int2[]' column :i4, 'int4[]' column :i8, 'int8[]' column :r, 'real[]' column :dp, 'double precision[]' end @tp.call.should == [:integer_array, :integer_array, :bigint_array, :float_array, :float_array] @ds.insert(Sequel.pg_array([1], :int2), Sequel.pg_array([nil, 2], :int4), Sequel.pg_array([3, 
nil], :int8), Sequel.pg_array([4, nil, 4.5], :real), Sequel.pg_array([5, nil, 5.5], "double precision")) @ds.count.should == 1 rs = @ds.all if @jdbc || @native rs.should == [{:i2=>[1], :i4=>[nil, 2], :i8=>[3, nil], :r=>[4.0, nil, 4.5], :dp=>[5.0, nil, 5.5]}] end if @native rs.first.values.each{|v| v.should_not be_a_kind_of(Array)} rs.first.values.each{|v| v.to_a.should be_a_kind_of(Array)} @ds.delete @ds.insert(rs.first) @ds.all.should == rs end @ds.delete @ds.insert(Sequel.pg_array([[1], [2]], :int2), Sequel.pg_array([[nil, 2], [3, 4]], :int4), Sequel.pg_array([[3, nil], [nil, nil]], :int8), Sequel.pg_array([[4, nil], [nil, 4.5]], :real), Sequel.pg_array([[5, nil], [nil, 5.5]], "double precision")) rs = @ds.all if @jdbc || @native rs.should == [{:i2=>[[1], [2]], :i4=>[[nil, 2], [3, 4]], :i8=>[[3, nil], [nil, nil]], :r=>[[4, nil], [nil, 4.5]], :dp=>[[5, nil], [nil, 5.5]]}] end if @native rs.first.values.each{|v| v.should_not be_a_kind_of(Array)} rs.first.values.each{|v| v.to_a.should be_a_kind_of(Array)} @ds.delete @ds.insert(rs.first) @ds.all.should == rs end end specify 'insert and retrieve decimal arrays' do @db.create_table!(:items) do column :n, 'numeric[]' end @tp.call.should == [:decimal_array] @ds.insert(Sequel.pg_array([BigDecimal.new('1.000000000000000000001'), nil, BigDecimal.new('1')], :numeric)) @ds.count.should == 1 rs = @ds.all if @jdbc || @native rs.should == [{:n=>[BigDecimal.new('1.000000000000000000001'), nil, BigDecimal.new('1')]}] end if @native rs.first.values.each{|v| v.should_not be_a_kind_of(Array)} rs.first.values.each{|v| v.to_a.should be_a_kind_of(Array)} @ds.delete @ds.insert(rs.first) @ds.all.should == rs end @ds.delete @ds.insert(Sequel.pg_array([[BigDecimal.new('1.0000000000000000000000000000001'), nil], [nil, BigDecimal.new('1')]], :numeric)) rs = @ds.all if @jdbc || @native rs.should == [{:n=>[[BigDecimal.new('1.0000000000000000000000000000001'), nil], [nil, BigDecimal.new('1')]]}] end if @native rs.first.values.each{|v| v.should_not be_a_kind_of(Array)} rs.first.values.each{|v| v.to_a.should be_a_kind_of(Array)} @ds.delete @ds.insert(rs.first) @ds.all.should == rs end end specify 'insert and retrieve string arrays' do @db.create_table!(:items) do column :c, 'char(4)[]' column :vc, 'varchar[]' column :t, 'text[]' end @tp.call.should == [:string_array, :string_array, :string_array] @ds.insert(Sequel.pg_array(['a', nil, 'NULL', 'b"\'c'], 'char(4)'), Sequel.pg_array(['a', nil, 'NULL', 'b"\'c'], :varchar), Sequel.pg_array(['a', nil, 'NULL', 'b"\'c'], :text)) @ds.count.should == 1 rs = @ds.all if @jdbc || @native rs.should == [{:c=>['a ', nil, 'NULL', 'b"\'c'], :vc=>['a', nil, 'NULL', 'b"\'c'], :t=>['a', nil, 'NULL', 'b"\'c']}] end if @native rs.first.values.each{|v| v.should_not be_a_kind_of(Array)} rs.first.values.each{|v| v.to_a.should be_a_kind_of(Array)} @ds.delete @ds.insert(rs.first) @ds.all.should == rs end @ds.delete @ds.insert(Sequel.pg_array([[['a'], [nil]], [['NULL'], ['b"\'c']]], 'char(4)'), Sequel.pg_array([[['a'], ['']], [['NULL'], ['b"\'c']]], :varchar), Sequel.pg_array([[['a'], [nil]], [['NULL'], ['b"\'c']]], :text)) rs = @ds.all if @jdbc || @native rs.should == [{:c=>[[['a '], [nil]], [['NULL'], ['b"\'c']]], :vc=>[[['a'], ['']], [['NULL'], ['b"\'c']]], :t=>[[['a'], [nil]], [['NULL'], ['b"\'c']]]}] end if @native rs.first.values.each{|v| v.should_not be_a_kind_of(Array)} rs.first.values.each{|v| v.to_a.should be_a_kind_of(Array)} @ds.delete @ds.insert(rs.first) @ds.all.should == rs end end specify 'insert and retrieve arrays of other types' do 
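# The bool/date/time/timestamp(tz) arrays below round-trip through the
# pg_array extension, which typecasts each array element with the
# conversion proc registered for the corresponding scalar type.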
@db.create_table!(:items) do column :b, 'bool[]' column :d, 'date[]' column :t, 'time[]' column :ts, 'timestamp[]' column :tstz, 'timestamptz[]' end @tp.call.should == [:boolean_array, :date_array, :time_array, :datetime_array, :datetime_timezone_array] d = Date.today t = Sequel::SQLTime.create(10, 20, 30) ts = Time.local(2011, 1, 2, 3, 4, 5) @ds.insert(Sequel.pg_array([true, false], :bool), Sequel.pg_array([d, nil], :date), Sequel.pg_array([t, nil], :time), Sequel.pg_array([ts, nil], :timestamp), Sequel.pg_array([ts, nil], :timestamptz)) @ds.count.should == 1 rs = @ds.all if @jdbc || @native rs.should == [{:b=>[true, false], :d=>[d, nil], :t=>[t, nil], :ts=>[ts, nil], :tstz=>[ts, nil]}] end if @native rs.first.values.each{|v| v.should_not be_a_kind_of(Array)} rs.first.values.each{|v| v.to_a.should be_a_kind_of(Array)} @ds.delete @ds.insert(rs.first) @ds.all.should == rs end @db.create_table!(:items) do column :ba, 'bytea[]' column :tz, 'timetz[]' column :o, 'oid[]' end @tp.call.should == [:blob_array, :time_timezone_array, :integer_array] @ds.insert(Sequel.pg_array([Sequel.blob("a\0"), nil], :bytea), Sequel.pg_array([t, nil], :timetz), Sequel.pg_array([1, 2, 3], :oid)) @ds.count.should == 1 if @native rs = @ds.all rs.should == [{:ba=>[Sequel.blob("a\0"), nil], :tz=>[t, nil], :o=>[1, 2, 3]}] rs.first.values.each{|v| v.should_not be_a_kind_of(Array)} rs.first.values.each{|v| v.to_a.should be_a_kind_of(Array)} @ds.delete @ds.insert(rs.first) @ds.all.should == rs end end specify 'insert and retrieve custom array types' do int2vector = Class.new do attr_reader :array def initialize(array) @array = array end def sql_literal_append(ds, sql) sql << "'#{array.join(' ')}'" end def ==(other) if other.is_a?(self.class) array == other.array else super end end end @db.register_array_type(:int2vector){|s| int2vector.new(s.split.map{|i| i.to_i})} @db.create_table!(:items) do column :b, 'int2vector[]' end @tp.call.should == [:int2vector_array] int2v = int2vector.new([1, 2]) @ds.insert(Sequel.pg_array([int2v], :int2vector)) @ds.count.should == 1 rs = @ds.all if @native rs.should == [{:b=>[int2v]}] rs.first.values.each{|v| v.should_not be_a_kind_of(Array)} rs.first.values.each{|v| v.to_a.should be_a_kind_of(Array)} @ds.delete @ds.insert(rs.first) @ds.all.should == rs end end unless DB.adapter_scheme == :jdbc specify 'use arrays in bound variables' do @db.create_table!(:items) do column :i, 'int4[]' end @ds.call(:insert, {:i=>[1,2]}, {:i=>:$i}) @ds.get(:i).should == [1, 2] @ds.filter(:i=>:$i).call(:first, :i=>[1,2]).should == {:i=>[1,2]} @ds.filter(:i=>:$i).call(:first, :i=>[1,3]).should == nil # NULL values @ds.delete @ds.call(:insert, {:i=>[nil,nil]}, {:i=>:$i}) @ds.first.should == {:i=>[nil, nil]} @db.create_table!(:items) do column :i, 'text[]' end a = ["\"\\\\\"{}\n\t\r \v\b123afP", 'NULL', nil, ''] @ds.call(:insert, {:i=>:$i}, :i=>Sequel.pg_array(a)) @ds.get(:i).should == a @ds.filter(:i=>:$i).call(:first, :i=>a).should == {:i=>a} @ds.filter(:i=>:$i).call(:first, :i=>['', nil, nil, 'a']).should == nil @db.create_table!(:items) do column :i, 'date[]' end a = [Date.today] @ds.call(:insert, {:i=>:$i}, :i=>Sequel.pg_array(a, 'date')) @ds.get(:i).should == a @ds.filter(:i=>:$i).call(:first, :i=>a).should == {:i=>a} @ds.filter(:i=>:$i).call(:first, :i=>Sequel.pg_array([Date.today-1], 'date')).should == nil @db.create_table!(:items) do column :i, 'timestamp[]' end a = [Time.local(2011, 1, 2, 3, 4, 5)] @ds.call(:insert, {:i=>:$i}, :i=>Sequel.pg_array(a, 'timestamp')) @ds.get(:i).should == a 
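# With bound variables the pg adapter sends arrays using PostgreSQL's
# literal array syntax (e.g. '{...}'); Sequel.pg_array's type argument
# ('timestamp' here) mainly controls how the Ruby elements are literalized.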
@ds.filter(:i=>:$i).call(:first, :i=>a).should == {:i=>a} @ds.filter(:i=>:$i).call(:first, :i=>Sequel.pg_array([a.first-1], 'timestamp')).should == nil @db.create_table!(:items) do column :i, 'boolean[]' end a = [true, false] @ds.call(:insert, {:i=>:$i}, :i=>Sequel.pg_array(a, 'boolean')) @ds.get(:i).should == a @ds.filter(:i=>:$i).call(:first, :i=>a).should == {:i=>a} @ds.filter(:i=>:$i).call(:first, :i=>Sequel.pg_array([false, true], 'boolean')).should == nil @db.create_table!(:items) do column :i, 'bytea[]' end a = [Sequel.blob("a\0'\"")] @ds.call(:insert, {:i=>:$i}, :i=>Sequel.pg_array(a, 'bytea')) @ds.get(:i).should == a @ds.filter(:i=>:$i).call(:first, :i=>a).should == {:i=>a} @ds.filter(:i=>:$i).call(:first, :i=>Sequel.pg_array([Sequel.blob("b\0")], 'bytea')).should == nil end if DB.adapter_scheme == :postgres && SEQUEL_POSTGRES_USES_PG specify 'with models' do @db.create_table!(:items) do primary_key :id column :i, 'integer[]' column :f, 'double precision[]' column :d, 'numeric[]' column :t, 'text[]' end c = Class.new(Sequel::Model(@db[:items])) c.plugin :pg_typecast_on_load, :i, :f, :d, :t unless @native o = c.create(:i=>[1,2, nil], :f=>[[1, 2.5], [3, 4.5]], :d=>[1, BigDecimal.new('1.000000000000000000001')], :t=>[%w'a b c', ['NULL', nil, '1']]) o.i.should == [1, 2, nil] o.f.should == [[1, 2.5], [3, 4.5]] o.d.should == [BigDecimal.new('1'), BigDecimal.new('1.000000000000000000001')] o.t.should == [%w'a b c', ['NULL', nil, '1']] end specify 'operations/functions with pg_array_ops' do Sequel.extension :pg_array_ops @db.create_table!(:items){column :i, 'integer[]'; column :i2, 'integer[]'; column :i3, 'integer[]'; column :i4, 'integer[]'; column :i5, 'integer[]'} @ds.insert(Sequel.pg_array([1, 2, 3]), Sequel.pg_array([2, 1]), Sequel.pg_array([4, 4]), Sequel.pg_array([[5, 5], [4, 3]]), Sequel.pg_array([1, nil, 5])) @ds.get(Sequel.pg_array(:i) > :i3).should be_false @ds.get(Sequel.pg_array(:i3) > :i).should be_true @ds.get(Sequel.pg_array(:i) >= :i3).should be_false @ds.get(Sequel.pg_array(:i) >= :i).should be_true @ds.get(Sequel.pg_array(:i3) < :i).should be_false @ds.get(Sequel.pg_array(:i) < :i3).should be_true @ds.get(Sequel.pg_array(:i3) <= :i).should be_false @ds.get(Sequel.pg_array(:i) <= :i).should be_true @ds.get(Sequel.expr(5=>Sequel.pg_array(:i).any)).should be_false @ds.get(Sequel.expr(1=>Sequel.pg_array(:i).any)).should be_true @ds.get(Sequel.expr(1=>Sequel.pg_array(:i3).all)).should be_false @ds.get(Sequel.expr(4=>Sequel.pg_array(:i3).all)).should be_true @ds.get(Sequel.expr(1=>Sequel.pg_array(:i)[1..1].any)).should be_true @ds.get(Sequel.expr(2=>Sequel.pg_array(:i)[1..1].any)).should be_false @ds.get(Sequel.pg_array(:i2)[1]).should == 2 @ds.get(Sequel.pg_array(:i2)[1]).should == 2 @ds.get(Sequel.pg_array(:i2)[2]).should == 1 @ds.get(Sequel.pg_array(:i4)[2][1]).should == 4 @ds.get(Sequel.pg_array(:i4)[2][2]).should == 3 @ds.get(Sequel.pg_array(:i).contains(:i2)).should be_true @ds.get(Sequel.pg_array(:i).contains(:i3)).should be_false @ds.get(Sequel.pg_array(:i2).contained_by(:i)).should be_true @ds.get(Sequel.pg_array(:i).contained_by(:i2)).should be_false @ds.get(Sequel.pg_array(:i).overlaps(:i2)).should be_true @ds.get(Sequel.pg_array(:i2).overlaps(:i3)).should be_false @ds.get(Sequel.pg_array(:i).dims).should == '[1:3]' @ds.get(Sequel.pg_array(:i).length).should == 3 @ds.get(Sequel.pg_array(:i).lower).should == 1 if @db.server_version >= 80400 @ds.select(Sequel.pg_array(:i).unnest).from_self.count.should == 3 end if @db.server_version >= 90000 
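# ArrayOp#join maps to PostgreSQL's array_to_string function; the optional
# second argument replaces NULL elements, so [1, nil, 5] joined with ':'
# and '*' yields '1:*:5' below.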
@ds.get(Sequel.pg_array(:i5).join).should == '15' @ds.get(Sequel.pg_array(:i5).join(':')).should == '1:5' @ds.get(Sequel.pg_array(:i5).join(':', '*')).should == '1:*:5' end if @db.server_version >= 90300 @ds.get(Sequel.pg_array(:i5).remove(1).length).should == 2 @ds.get(Sequel.pg_array(:i5).replace(1, 4).contains([1])).should be_false @ds.get(Sequel.pg_array(:i5).replace(1, 4).contains([4])).should be_true end if @native @ds.get(Sequel.pg_array(:i).push(4)).should == [1, 2, 3, 4] @ds.get(Sequel.pg_array(:i).unshift(4)).should == [4, 1, 2, 3] @ds.get(Sequel.pg_array(:i).concat(:i2)).should == [1, 2, 3, 2, 1] end if @db.type_supported?(:hstore) Sequel.extension :pg_hstore, :pg_hstore_ops @db.get(Sequel.pg_array(['a', 'b']).op.hstore['a']).should == 'b' @db.get(Sequel.pg_array(['a', 'b']).op.hstore(['c', 'd'])['a']).should == 'c' end end end describe 'PostgreSQL hstore handling' do before(:all) do @db = DB @db.extension :pg_array, :pg_hstore @ds = @db[:items] @h = {'a'=>'b', 'c'=>nil, 'd'=>'NULL', 'e'=>'\\\\" \\\' ,=>'} @native = DB.adapter_scheme == :postgres end after do @db.drop_table?(:items) end specify 'insert and retrieve hstore values' do @db.create_table!(:items) do column :h, :hstore end @ds.insert(Sequel.hstore(@h)) @ds.count.should == 1 if @native rs = @ds.all v = rs.first[:h] v.should_not be_a_kind_of(Hash) v.to_hash.should be_a_kind_of(Hash) v.to_hash.should == @h @ds.delete @ds.insert(rs.first) @ds.all.should == rs end end specify 'insert and retrieve hstore[] values' do @db.create_table!(:items) do column :h, 'hstore[]' end @ds.insert(Sequel.pg_array([Sequel.hstore(@h)], :hstore)) @ds.count.should == 1 if @native rs = @ds.all v = rs.first[:h].first v.should_not be_a_kind_of(Hash) v.to_hash.should be_a_kind_of(Hash) v.to_hash.should == @h @ds.delete @ds.insert(rs.first) @ds.all.should == rs end end specify 'use hstore in bound variables' do @db.create_table!(:items) do column :i, :hstore end @ds.call(:insert, {:i=>Sequel.hstore(@h)}, {:i=>:$i}) @ds.get(:i).should == @h @ds.filter(:i=>:$i).call(:first, :i=>Sequel.hstore(@h)).should == {:i=>@h} @ds.filter(:i=>:$i).call(:first, :i=>Sequel.hstore({})).should == nil @ds.delete @ds.call(:insert, {:i=>Sequel.hstore('a'=>nil)}, {:i=>:$i}) @ds.get(:i).should == Sequel.hstore('a'=>nil) @ds.delete @ds.call(:insert, {:i=>@h}, {:i=>:$i}) @ds.get(:i).should == @h @ds.filter(:i=>:$i).call(:first, :i=>@h).should == {:i=>@h} @ds.filter(:i=>:$i).call(:first, :i=>{}).should == nil end if DB.adapter_scheme == :postgres && SEQUEL_POSTGRES_USES_PG specify 'with models and associations' do @db.create_table!(:items) do primary_key :id column :h, :hstore end c = Class.new(Sequel::Model(@db[:items])) do def self.name 'Item' end unrestrict_primary_key def item_id h['item_id'].to_i if h end def left_item_id h['left_item_id'].to_i if h end end Sequel.extension :pg_hstore_ops c.plugin :many_through_many c.plugin :pg_typecast_on_load, :h unless @native h = {'item_id'=>"2", 'left_item_id'=>"1"} o2 = c.create(:id=>2) o = c.create(:id=>1, :h=>h) o.h.should == h c.many_to_one :item, :class=>c, :key_column=>Sequel.cast(Sequel.hstore(:h)['item_id'], Integer) c.one_to_many :items, :class=>c, :key=>Sequel.cast(Sequel.hstore(:h)['item_id'], Integer), :key_method=>:item_id c.many_to_many :related_items, :class=>c, :join_table=>:items___i, :left_key=>Sequel.cast(Sequel.hstore(:h)['left_item_id'], Integer), :right_key=>Sequel.cast(Sequel.hstore(:h)['item_id'], Integer) c.many_to_one :other_item, :class=>c, :key=>:id, :primary_key_method=>:item_id, 
      :primary_key=>Sequel.cast(Sequel.hstore(:h)['item_id'], Integer), :reciprocal=>:other_items
    c.one_to_many :other_items, :class=>c, :primary_key=>:item_id, :key=>:id,
      :primary_key_column=>Sequel.cast(Sequel.hstore(:h)['item_id'], Integer), :reciprocal=>:other_item
    c.many_to_many :other_related_items, :class=>c, :join_table=>:items___i, :left_key=>:id, :right_key=>:id,
      :left_primary_key_column=>Sequel.cast(Sequel.hstore(:h)['left_item_id'], Integer),
      :left_primary_key=>:left_item_id,
      :right_primary_key=>Sequel.cast(Sequel.hstore(:h)['left_item_id'], Integer),
      :right_primary_key_method=>:left_item_id
    c.many_through_many :mtm_items, [
        [:items, Sequel.cast(Sequel.hstore(:h)['item_id'], Integer), Sequel.cast(Sequel.hstore(:h)['left_item_id'], Integer)],
        [:items, Sequel.cast(Sequel.hstore(:h)['left_item_id'], Integer), Sequel.cast(Sequel.hstore(:h)['left_item_id'], Integer)]
      ],
      :class=>c,
      :left_primary_key_column=>Sequel.cast(Sequel.hstore(:h)['item_id'], Integer),
      :left_primary_key=>:item_id,
      :right_primary_key=>Sequel.cast(Sequel.hstore(:h)['left_item_id'], Integer),
      :right_primary_key_method=>:left_item_id

    # Lazily Loading
    o.item.should == o2
    o2.items.should == [o]
    o.related_items.should == [o2]
    o2.other_item.should == o
    o.other_items.should == [o2]
    o.other_related_items.should == [o]
    o.mtm_items.should == [o]

    # Eager Loading via eager
    os = c.eager(:item, :related_items, :other_items, :other_related_items, :mtm_items).where(:id=>1).all.first
    os.item.should == o2
    os.related_items.should == [o2]
    os.other_items.should == [o2]
    os.other_related_items.should == [o]
    os.mtm_items.should == [o]
    os = c.eager(:items, :other_item).where(:id=>2).all.first
    os.items.should == [o]
    os.other_item.should == o

    # Eager Loading via eager_graph
    c.eager_graph(:item).where(:items__id=>1).all.first.item.should == o2
    c.eager_graph(:items).where(:items__id=>2).all.first.items.should == [o]
    c.eager_graph(:related_items).where(:items__id=>1).all.first.related_items.should == [o2]
    c.eager_graph(:other_item).where(:items__id=>2).all.first.other_item.should == o
    c.eager_graph(:other_items).where(:items__id=>1).all.first.other_items.should == [o2]
    c.eager_graph(:other_related_items).where(:items__id=>1).all.first.other_related_items.should == [o]
    c.eager_graph(:mtm_items).where(:items__id=>1).all.first.mtm_items.should == [o]

    # Filter By Associations - Model Instances
    c.filter(:item=>o2).all.should == [o]
    c.filter(:items=>o).all.should == [o2]
    c.filter(:related_items=>o2).all.should == [o]
    c.filter(:other_item=>o).all.should == [o2]
    c.filter(:other_items=>o2).all.should == [o]
    c.filter(:other_related_items=>o).all.should == [o]
    c.filter(:mtm_items=>o).all.should == [o]

    # Filter By Associations - Model Datasets
    c.filter(:item=>c.filter(:id=>o2.id)).all.should == [o]
    c.filter(:items=>c.filter(:id=>o.id)).all.should == [o2]
    c.filter(:related_items=>c.filter(:id=>o2.id)).all.should == [o]
    c.filter(:other_item=>c.filter(:id=>o.id)).all.should == [o2]
    c.filter(:other_items=>c.filter(:id=>o2.id)).all.should == [o]
    c.filter(:other_related_items=>c.filter(:id=>o.id)).all.should == [o]
    c.filter(:mtm_items=>c.filter(:id=>o.id)).all.should == [o]
  end

  specify 'operations/functions with pg_hstore_ops' do
    Sequel.extension :pg_hstore_ops, :pg_array, :pg_array_ops
    @db.create_table!(:items){hstore :h1; hstore :h2; hstore :h3; String :t}
    @ds.insert(Sequel.hstore('a'=>'b', 'c'=>nil), Sequel.hstore('a'=>'b'), Sequel.hstore('d'=>'e'))
    h1 = Sequel.hstore(:h1)
    h2 = Sequel.hstore(:h2)
    h3 = Sequel.hstore(:h3)

    @ds.get(h1['a']).should == 'b'
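    # Commentary added for readability (not part of the original spec): these
    # op methods map, roughly, onto PostgreSQL's hstore operators:
    #   h1['a']       => h1 -> 'a'   (value lookup; a missing key yields SQL NULL,
    #                                 hence the nil expectation just below)
    #   contains      => @>          contained_by => <@
    #   contain_all   => ?&          contain_any  => ?|
    #   has_key?/key?/member?/include?/exist? => ?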
    @ds.get(h1['d']).should == nil

    @ds.get(h2.concat(h3).keys.length).should == 2
    @ds.get(h1.concat(h3).keys.length).should == 3
    @ds.get(h2.merge(h3).keys.length).should == 2
    @ds.get(h1.merge(h3).keys.length).should == 3

    unless [:do].include?(@db.adapter_scheme)
      # Broken DataObjects thinks operators with ? represent placeholders
      @ds.get(h1.contain_all(%w'a c')).should == true
      @ds.get(h1.contain_all(%w'a d')).should == false
      @ds.get(h1.contain_any(%w'a d')).should == true
      @ds.get(h1.contain_any(%w'e d')).should == false
    end

    @ds.get(h1.contains(h2)).should == true
    @ds.get(h1.contains(h3)).should == false
    @ds.get(h2.contained_by(h1)).should == true
    @ds.get(h2.contained_by(h3)).should == false

    @ds.get(h1.defined('a')).should == true
    @ds.get(h1.defined('c')).should == false
    @ds.get(h1.defined('d')).should == false

    @ds.get(h1.delete('a')['c']).should == nil
    @ds.get(h1.delete(%w'a d')['c']).should == nil
    @ds.get(h1.delete(h2)['c']).should == nil

    @ds.from(Sequel.hstore('a'=>'b', 'c'=>nil).op.each).order(:key).all.should == [{:key=>'a', :value=>'b'}, {:key=>'c', :value=>nil}]

    unless [:do].include?(@db.adapter_scheme)
      @ds.get(h1.has_key?('c')).should == true
      @ds.get(h1.include?('c')).should == true
      @ds.get(h1.key?('c')).should == true
      @ds.get(h1.member?('c')).should == true
      @ds.get(h1.exist?('c')).should == true
      @ds.get(h1.has_key?('d')).should == false
      @ds.get(h1.include?('d')).should == false
      @ds.get(h1.key?('d')).should == false
      @ds.get(h1.member?('d')).should == false
      @ds.get(h1.exist?('d')).should == false
    end

    @ds.get(h1.hstore.hstore.hstore.keys.length).should == 2
    @ds.get(h1.keys.length).should == 2
    @ds.get(h2.keys.length).should == 1
    @ds.get(h1.akeys.length).should == 2
    @ds.get(h2.akeys.length).should == 1

    @ds.from(Sequel.hstore('t'=>'s').op.populate(Sequel::SQL::Cast.new(nil, :items))).select_map(:t).should == ['s']
    @ds.from(:items___i).select(Sequel.hstore('t'=>'s').op.record_set(:i).as(:r)).from_self(:alias=>:s).select(Sequel.lit('(r).*')).from_self.select_map(:t).should == ['s']

    @ds.from(Sequel.hstore('t'=>'s', 'a'=>'b').op.skeys.as(:s)).select_order_map(:s).should == %w'a t'
    @ds.from((Sequel.hstore('t'=>'s', 'a'=>'b').op - 'a').skeys.as(:s)).select_order_map(:s).should == %w't'

    @ds.get(h1.slice(%w'a c').keys.length).should == 2
    @ds.get(h1.slice(%w'd c').keys.length).should == 1
    @ds.get(h1.slice(%w'd e').keys.length).should == nil

    @ds.from(Sequel.hstore('t'=>'s', 'a'=>'b').op.svals.as(:s)).select_order_map(:s).should == %w'b s'

    @ds.get(h1.to_array.length).should == 4
    @ds.get(h2.to_array.length).should == 2
    @ds.get(h1.to_matrix.length).should == 2
    @ds.get(h2.to_matrix.length).should == 1

    @ds.get(h1.values.length).should == 2
    @ds.get(h2.values.length).should == 1
    @ds.get(h1.avals.length).should == 2
    @ds.get(h2.avals.length).should == 1
  end
end if DB.type_supported?(:hstore)

describe 'PostgreSQL json type' do
  before(:all) do
    @db = DB
    @db.extension :pg_array, :pg_json
    @ds = @db[:items]
    @a = [1, 2, {'a'=>'b'}, 3.0]
    @h = {'a'=>'b', '1'=>[3, 4, 5]}
    @native = DB.adapter_scheme == :postgres
  end
  after do
    @db.drop_table?(:items)
  end

  specify 'insert and retrieve json values' do
    @db.create_table!(:items){json :j}
    @ds.insert(Sequel.pg_json(@h))
    @ds.count.should == 1
    if @native
      rs = @ds.all
      v = rs.first[:j]
      v.should_not be_a_kind_of(Hash)
      v.to_hash.should be_a_kind_of(Hash)
      v.should == @h
      v.to_hash.should == @h
      @ds.delete
      @ds.insert(rs.first)
      @ds.all.should == rs
    end

    @ds.delete
    @ds.insert(Sequel.pg_json(@a))
    @ds.count.should == 1
    if @native
      rs = @ds.all
      v = rs.first[:j]
      v.should_not be_a_kind_of(Array)
      v.to_a.should be_a_kind_of(Array)
      v.should == @a
      v.to_a.should == @a
      @ds.delete
      @ds.insert(rs.first)
      @ds.all.should == rs
    end
  end

  specify 'insert and retrieve json[] values' do
    @db.create_table!(:items){column :j, 'json[]'}
    j = Sequel.pg_array([Sequel.pg_json('a'=>1), Sequel.pg_json(['b', 2])])
    @ds.insert(j)
    @ds.count.should == 1
    if @native
      rs = @ds.all
      v = rs.first[:j]
      v.should_not be_a_kind_of(Array)
      v.to_a.should be_a_kind_of(Array)
      v.should == j
      v.to_a.should == j
      @ds.delete
      @ds.insert(rs.first)
      @ds.all.should == rs
    end
  end

  specify 'use json in bound variables' do
    @db.create_table!(:items){json :i}
    @ds.call(:insert, {:i=>Sequel.pg_json(@h)}, {:i=>:$i})
    @ds.get(:i).should == @h
    @ds.filter(Sequel.cast(:i, String)=>:$i).call(:first, :i=>Sequel.pg_json(@h)).should == {:i=>@h}
    @ds.filter(Sequel.cast(:i, String)=>:$i).call(:first, :i=>Sequel.pg_json({})).should == nil
    @ds.filter(Sequel.cast(:i, String)=>:$i).call(:delete, :i=>Sequel.pg_json(@h)).should == 1

    @ds.call(:insert, {:i=>Sequel.pg_json(@a)}, {:i=>:$i})
    @ds.get(:i).should == @a
    @ds.filter(Sequel.cast(:i, String)=>:$i).call(:first, :i=>Sequel.pg_json(@a)).should == {:i=>@a}
    @ds.filter(Sequel.cast(:i, String)=>:$i).call(:first, :i=>Sequel.pg_json([])).should == nil

    @ds.delete
    @ds.call(:insert, {:i=>Sequel.pg_json('a'=>nil)}, {:i=>:$i})
    @ds.get(:i).should == Sequel.pg_json('a'=>nil)

    @db.create_table!(:items){column :i, 'json[]'}
    j = Sequel.pg_array([Sequel.pg_json('a'=>1), Sequel.pg_json(['b', 2])], :text)
    @ds.call(:insert, {:i=>j}, {:i=>:$i})
    @ds.get(:i).should == j
    @ds.filter(Sequel.cast(:i, 'text[]')=>:$i).call(:first, :i=>j).should == {:i=>j}
    @ds.filter(Sequel.cast(:i, 'text[]')=>:$i).call(:first, :i=>Sequel.pg_array([])).should == nil
  end if DB.adapter_scheme == :postgres && SEQUEL_POSTGRES_USES_PG

  specify 'with models' do
    @db.create_table!(:items) do
      primary_key :id
      json :h
    end
    c = Class.new(Sequel::Model(@db[:items]))
    c.plugin :pg_typecast_on_load, :h unless @native
    c.create(:h=>Sequel.pg_json(@h)).h.should == @h
    c.create(:h=>Sequel.pg_json(@a)).h.should == @a
  end

  specify 'operations/functions with pg_json_ops' do
    Sequel.extension :pg_json_ops
    jo = Sequel.pg_json('a'=>1, 'b'=>{'c'=>2, 'd'=>{'e'=>3}}).op
    ja = Sequel.pg_json([2, 3, %w'a b']).op

    @db.get(jo['a']).should == 1
    @db.get(jo['b']['c']).should == 2
    @db.get(jo[%w'b c']).should == 2
    @db.get(jo['b'].get_text(%w'd e')).should == "3"
    @db.get(jo[%w'b d'].get_text('e')).should == "3"
    @db.get(ja[1]).should == 3
    @db.get(ja[%w'2 1']).should == 'b'

    @db.get(jo.extract('a')).should == 1
    @db.get(jo.extract('b').extract('c')).should == 2
    @db.get(jo.extract('b', 'c')).should == 2
    @db.get(jo.extract('b', 'd', 'e')).should == 3
    @db.get(jo.extract_text('b', 'd')).should == '{"e":3}'
    @db.get(jo.extract_text('b', 'd', 'e')).should == '3'

    @db.get(ja.array_length).should == 3
    @db.from(ja.array_elements.as(:v)).select_map(:v).should == [2, 3, %w'a b']

    @db.from(jo.keys.as(:k)).select_order_map(:k).should == %w'a b'
    @db.from(jo.each).select_order_map(:key).should == %w'a b'
    @db.from(jo.each).order(:key).select_map(:value).should == [1, {'c'=>2, 'd'=>{'e'=>3}}]
    @db.from(jo.each_text).select_order_map(:key).should == %w'a b'
    @db.from(jo.each_text).order(:key).where(:key=>'b').get(:value).should =~ /\{"d":\{"e":3\},"c":2\}|\{"c":2,"d":\{"e":3\}\}/

    Sequel.extension :pg_row_ops
    @db.create_table!(:items) do
      Integer :a
      String :b
    end
    j = Sequel.pg_json('a'=>1, 'b'=>'c').op
    @db.get(j.populate(Sequel.cast(nil, :items)).pg_row[:a]).should == 1
    @db.get(j.populate(Sequel.cast(nil, :items)).pg_row[:b]).should == 'c'
    j = Sequel.pg_json([{'a'=>1, 'b'=>'c'}, {'a'=>2, 'b'=>'d'}]).op
    @db.from(j.populate_set(Sequel.cast(nil, :items))).select_order_map(:a).should == [1, 2]
    @db.from(j.populate_set(Sequel.cast(nil, :items))).select_order_map(:b).should == %w'c d'
  end if DB.server_version >= 90300 && DB.adapter_scheme == :postgres
end if DB.server_version >= 90200

describe 'PostgreSQL inet/cidr types' do
  ipv6_broken = (IPAddr.new('::1'); false) rescue true

  before(:all) do
    @db = DB
    @db.extension :pg_array, :pg_inet
    @ds = @db[:items]
    @v4 = '127.0.0.1'
    @v4nm = '127.0.0.0/8'
    @v6 = '2001:4f8:3:ba:2e0:81ff:fe22:d1f1'
    @v6nm = '2001:4f8:3:ba::/64'
    @ipv4 = IPAddr.new(@v4)
    @ipv4nm = IPAddr.new(@v4nm)
    unless ipv6_broken
      @ipv6 = IPAddr.new(@v6)
      @ipv6nm = IPAddr.new(@v6nm)
    end
    @native = DB.adapter_scheme == :postgres
  end
  after do
    @db.drop_table?(:items)
  end

  specify 'insert and retrieve inet/cidr values' do
    @db.create_table!(:items){inet :i; cidr :c}
    @ds.insert(@ipv4, @ipv4nm)
    @ds.count.should == 1
    if @native
      rs = @ds.all
      rs.first[:i].should == @ipv4
      rs.first[:c].should == @ipv4nm
      rs.first[:i].should be_a_kind_of(IPAddr)
      rs.first[:c].should be_a_kind_of(IPAddr)
      @ds.delete
      @ds.insert(rs.first)
      @ds.all.should == rs
    end

    unless ipv6_broken
      @ds.delete
      @ds.insert(@ipv6, @ipv6nm)
      @ds.count.should == 1
      if @native
        rs = @ds.all
        rs.first[:i].should == @ipv6
        rs.first[:c].should == @ipv6nm
        rs.first[:i].should be_a_kind_of(IPAddr)
        rs.first[:c].should be_a_kind_of(IPAddr)
        @ds.delete
        @ds.insert(rs.first)
        @ds.all.should == rs
      end
    end
  end

  specify 'insert and retrieve inet/cidr/macaddr array values' do
    @db.create_table!(:items){column :i, 'inet[]'; column :c, 'cidr[]'; column :m, 'macaddr[]'}
    @ds.insert(Sequel.pg_array([@ipv4], 'inet'), Sequel.pg_array([@ipv4nm], 'cidr'), Sequel.pg_array(['12:34:56:78:90:ab'], 'macaddr'))
    @ds.count.should == 1
    if @native
      rs = @ds.all
      rs.first.values.all?{|c| c.is_a?(Sequel::Postgres::PGArray)}.should be_true
      rs.first[:i].first.should == @ipv4
      rs.first[:c].first.should == @ipv4nm
      rs.first[:m].first.should == '12:34:56:78:90:ab'
      rs.first[:i].first.should be_a_kind_of(IPAddr)
      rs.first[:c].first.should be_a_kind_of(IPAddr)
      @ds.delete
      @ds.insert(rs.first)
      @ds.all.should == rs
    end
  end

  specify 'use ipaddr in bound variables' do
    @db.create_table!(:items){inet :i; cidr :c}
    @ds.call(:insert, {:i=>@ipv4, :c=>@ipv4nm}, {:i=>:$i, :c=>:$c})
    @ds.get(:i).should == @ipv4
    @ds.get(:c).should == @ipv4nm
    @ds.filter(:i=>:$i, :c=>:$c).call(:first, :i=>@ipv4, :c=>@ipv4nm).should == {:i=>@ipv4, :c=>@ipv4nm}
    @ds.filter(:i=>:$i, :c=>:$c).call(:first, :i=>@ipv6, :c=>@ipv6nm).should == nil
    @ds.filter(:i=>:$i, :c=>:$c).call(:delete, :i=>@ipv4, :c=>@ipv4nm).should == 1

    unless ipv6_broken
      @ds.call(:insert, {:i=>@ipv6, :c=>@ipv6nm}, {:i=>:$i, :c=>:$c})
      @ds.get(:i).should == @ipv6
      @ds.get(:c).should == @ipv6nm
      @ds.filter(:i=>:$i, :c=>:$c).call(:first, :i=>@ipv6, :c=>@ipv6nm).should == {:i=>@ipv6, :c=>@ipv6nm}
      @ds.filter(:i=>:$i, :c=>:$c).call(:first, :i=>@ipv4, :c=>@ipv4nm).should == nil
      @ds.filter(:i=>:$i, :c=>:$c).call(:delete, :i=>@ipv6, :c=>@ipv6nm).should == 1
    end

    @db.create_table!(:items){column :i, 'inet[]'; column :c, 'cidr[]'; column :m, 'macaddr[]'}
    @ds.call(:insert, {:i=>[@ipv4], :c=>[@ipv4nm], :m=>['12:34:56:78:90:ab']}, {:i=>:$i, :c=>:$c, :m=>:$m})
    @ds.filter(:i=>:$i, :c=>:$c, :m=>:$m).call(:first, :i=>[@ipv4], :c=>[@ipv4nm], :m=>['12:34:56:78:90:ab']).should == {:i=>[@ipv4], :c=>[@ipv4nm], :m=>['12:34:56:78:90:ab']}
    @ds.filter(:i=>:$i, :c=>:$c, :m=>:$m).call(:first, :i=>[], :c=>[], :m=>[]).should == nil
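    # Note added here (my gloss, not the original author's): macaddr values
    # have no dedicated Ruby class in the pg_inet extension, so, as the
    # expectations above suggest, they round-trip as plain strings, while
    # inet/cidr values round-trip as IPAddr instances.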
    @ds.filter(:i=>:$i, :c=>:$c, :m=>:$m).call(:delete, :i=>[@ipv4], :c=>[@ipv4nm], :m=>['12:34:56:78:90:ab']).should == 1
  end if DB.adapter_scheme == :postgres && SEQUEL_POSTGRES_USES_PG

  specify 'with models' do
    @db.create_table!(:items) do
      primary_key :id
      inet :i
      cidr :c
    end
    c = Class.new(Sequel::Model(@db[:items]))
    c.plugin :pg_typecast_on_load, :i, :c unless @native
    c.create(:i=>@v4, :c=>@v4nm).values.values_at(:i, :c).should == [@ipv4, @ipv4nm]
    unless ipv6_broken
      c.create(:i=>@ipv6, :c=>@ipv6nm).values.values_at(:i, :c).should == [@ipv6, @ipv6nm]
    end
  end
end

describe 'PostgreSQL range types' do
  before(:all) do
    @db = DB
    @db.extension :pg_array, :pg_range
    @ds = @db[:items]
    @map = {:i4=>'int4range', :i8=>'int8range', :n=>'numrange', :d=>'daterange', :t=>'tsrange', :tz=>'tstzrange'}
    @r = {:i4=>1...2, :i8=>2...3, :n=>BigDecimal.new('1.0')..BigDecimal.new('2.0'), :d=>Date.today...(Date.today+1), :t=>Time.local(2011, 1)..Time.local(2011, 2), :tz=>Time.local(2011, 1)..Time.local(2011, 2)}
    @ra = {}
    @pgr = {}
    @pgra = {}
    @r.each{|k, v| @ra[k] = Sequel.pg_array([v], @map[k])}
    @r.each{|k, v| @pgr[k] = Sequel.pg_range(v)}
    @r.each{|k, v| @pgra[k] = Sequel.pg_array([Sequel.pg_range(v)], @map[k])}
    @native = DB.adapter_scheme == :postgres
  end
  after do
    @db.drop_table?(:items)
  end

  specify 'insert and retrieve range type values' do
    @db.create_table!(:items){int4range :i4; int8range :i8; numrange :n; daterange :d; tsrange :t; tstzrange :tz}
    [@r, @pgr].each do |input|
      h = {}
      input.each{|k, v| h[k] = Sequel.cast(v, @map[k])}
      @ds.insert(h)
      @ds.count.should == 1
      if @native
        rs = @ds.all
        rs.first.each do |k, v|
          v.should_not be_a_kind_of(Range)
          v.to_range.should be_a_kind_of(Range)
          v.should == @r[k]
          v.to_range.should == @r[k]
        end
        @ds.delete
        @ds.insert(rs.first)
        @ds.all.should == rs
      end
      @ds.delete
    end
  end

  specify 'insert and retrieve arrays of range type values' do
    @db.create_table!(:items){column :i4, 'int4range[]'; column :i8, 'int8range[]'; column :n, 'numrange[]'; column :d, 'daterange[]'; column :t, 'tsrange[]'; column :tz, 'tstzrange[]'}
    [@ra, @pgra].each do |input|
      @ds.insert(input)
      @ds.count.should == 1
      if @native
        rs = @ds.all
        rs.first.each do |k, v|
          v.should_not be_a_kind_of(Array)
          v.to_a.should be_a_kind_of(Array)
          v.first.should_not be_a_kind_of(Range)
          v.first.to_range.should be_a_kind_of(Range)
          v.should == @ra[k].to_a
          v.first.should == @r[k]
        end
        @ds.delete
        @ds.insert(rs.first)
        @ds.all.should == rs
      end
      @ds.delete
    end
  end

  specify 'use range types in bound variables' do
    @db.create_table!(:items){int4range :i4; int8range :i8; numrange :n; daterange :d; tsrange :t; tstzrange :tz}
    h = {}
    @r.keys.each{|k| h[k] = :"$#{k}"}
    r2 = {}
    @r.each{|k, v| r2[k] = Range.new(v.begin, v.end+2)}
    @ds.call(:insert, @r, h)
    @ds.first.should == @r
    @ds.filter(h).call(:first, @r).should == @r
    @ds.filter(h).call(:first, @pgr).should == @r
    @ds.filter(h).call(:first, r2).should == nil
    @ds.filter(h).call(:delete, @r).should == 1

    @db.create_table!(:items){column :i4, 'int4range[]'; column :i8, 'int8range[]'; column :n, 'numrange[]'; column :d, 'daterange[]'; column :t, 'tsrange[]'; column :tz, 'tstzrange[]'}
    @r.each{|k, v| r2[k] = [Range.new(v.begin, v.end+2)]}
    @ds.call(:insert, @ra, h)
    @ds.filter(h).call(:first, @ra).each{|k, v| v.should == @ra[k].to_a}
    @ds.filter(h).call(:first, @pgra).each{|k, v| v.should == @ra[k].to_a}
    @ds.filter(h).call(:first, r2).should == nil
    @ds.filter(h).call(:delete, @ra).should == 1
  end if DB.adapter_scheme == :postgres && SEQUEL_POSTGRES_USES_PG

  specify 'with models' do
    @db.create_table!(:items){primary_key :id; int4range :i4; int8range :i8; numrange :n; daterange :d; tsrange :t; tstzrange :tz}
    c = Class.new(Sequel::Model(@db[:items]))
    c.plugin :pg_typecast_on_load, :i4, :i8, :n, :d, :t, :tz unless @native
    v = c.create(@r).values
    v.delete(:id)
    v.should == @r

    unless @db.adapter_scheme == :jdbc
      @db.create_table!(:items){primary_key :id; column :i4, 'int4range[]'; column :i8, 'int8range[]'; column :n, 'numrange[]'; column :d, 'daterange[]'; column :t, 'tsrange[]'; column :tz, 'tstzrange[]'}
      c = Class.new(Sequel::Model(@db[:items]))
      c.plugin :pg_typecast_on_load, :i4, :i8, :n, :d, :t, :tz unless @native
      v = c.create(@ra).values
      v.delete(:id)
      v.each{|k, v1| v1.should == @ra[k].to_a}
    end
  end

  specify 'operations/functions with pg_range_ops' do
    Sequel.extension :pg_range_ops

    @db.get(Sequel.pg_range(1..5, :int4range).op.contains(2..4)).should be_true
    @db.get(Sequel.pg_range(1..5, :int4range).op.contains(3..6)).should be_false
    @db.get(Sequel.pg_range(1..5, :int4range).op.contains(0..6)).should be_false

    @db.get(Sequel.pg_range(1..5, :int4range).op.contained_by(0..6)).should be_true
    @db.get(Sequel.pg_range(1..5, :int4range).op.contained_by(3..6)).should be_false
    @db.get(Sequel.pg_range(1..5, :int4range).op.contained_by(2..4)).should be_false

    @db.get(Sequel.pg_range(1..5, :int4range).op.overlaps(5..6)).should be_true
    @db.get(Sequel.pg_range(1...5, :int4range).op.overlaps(5..6)).should be_false

    @db.get(Sequel.pg_range(1..5, :int4range).op.left_of(6..10)).should be_true
    @db.get(Sequel.pg_range(1..5, :int4range).op.left_of(5..10)).should be_false
    @db.get(Sequel.pg_range(1..5, :int4range).op.left_of(-1..0)).should be_false
    @db.get(Sequel.pg_range(1..5, :int4range).op.left_of(-1..3)).should be_false

    @db.get(Sequel.pg_range(1..5, :int4range).op.right_of(6..10)).should be_false
    @db.get(Sequel.pg_range(1..5, :int4range).op.right_of(5..10)).should be_false
    @db.get(Sequel.pg_range(1..5, :int4range).op.right_of(-1..0)).should be_true
    @db.get(Sequel.pg_range(1..5, :int4range).op.right_of(-1..3)).should be_false

    @db.get(Sequel.pg_range(1..5, :int4range).op.ends_before(6..10)).should be_true
    @db.get(Sequel.pg_range(1..5, :int4range).op.ends_before(5..10)).should be_true
    @db.get(Sequel.pg_range(1..5, :int4range).op.ends_before(-1..0)).should be_false
    @db.get(Sequel.pg_range(1..5, :int4range).op.ends_before(-1..3)).should be_false
    @db.get(Sequel.pg_range(1..5, :int4range).op.ends_before(-1..7)).should be_true

    @db.get(Sequel.pg_range(1..5, :int4range).op.starts_after(6..10)).should be_false
    @db.get(Sequel.pg_range(1..5, :int4range).op.starts_after(5..10)).should be_false
    @db.get(Sequel.pg_range(1..5, :int4range).op.starts_after(3..10)).should be_false
    @db.get(Sequel.pg_range(1..5, :int4range).op.starts_after(-1..10)).should be_true
    @db.get(Sequel.pg_range(1..5, :int4range).op.starts_after(-1..0)).should be_true
    @db.get(Sequel.pg_range(1..5, :int4range).op.starts_after(-1..3)).should be_true
    @db.get(Sequel.pg_range(1..5, :int4range).op.starts_after(-5..-1)).should be_true

    @db.get(Sequel.pg_range(1..5, :int4range).op.adjacent_to(6..10)).should be_true
    @db.get(Sequel.pg_range(1...5, :int4range).op.adjacent_to(6..10)).should be_false

    @db.get((Sequel.pg_range(1..5, :int4range).op + (6..10)).adjacent_to(6..10)).should be_false
    @db.get((Sequel.pg_range(1..5, :int4range).op + (6..10)).adjacent_to(11..20)).should be_true

    @db.get((Sequel.pg_range(1..5, :int4range).op * (2..6)).adjacent_to(6..10)).should be_true
    @db.get((Sequel.pg_range(1..4, :int4range).op * (2..6)).adjacent_to(6..10)).should be_false

    @db.get((Sequel.pg_range(1..5, :int4range).op - (2..6)).adjacent_to(2..10)).should be_true
    @db.get((Sequel.pg_range(0..4, :int4range).op - (3..6)).adjacent_to(4..10)).should be_false

    @db.get(Sequel.pg_range(0..4, :int4range).op.lower).should == 0
    @db.get(Sequel.pg_range(0..4, :int4range).op.upper).should == 5
    @db.get(Sequel.pg_range(0..4, :int4range).op.isempty).should be_false
    @db.get(Sequel::Postgres::PGRange.empty(:int4range).op.isempty).should be_true

    @db.get(Sequel.pg_range(1..5, :numrange).op.lower_inc).should be_true
    @db.get(Sequel::Postgres::PGRange.new(1, 5, :exclude_begin=>true, :db_type=>:numrange).op.lower_inc).should be_false
    @db.get(Sequel.pg_range(1..5, :numrange).op.upper_inc).should be_true
    @db.get(Sequel.pg_range(1...5, :numrange).op.upper_inc).should be_false

    @db.get(Sequel::Postgres::PGRange.new(1, 5, :db_type=>:int4range).op.lower_inf).should be_false
    @db.get(Sequel::Postgres::PGRange.new(nil, 5, :db_type=>:int4range).op.lower_inf).should be_true
    @db.get(Sequel::Postgres::PGRange.new(1, 5, :db_type=>:int4range).op.upper_inf).should be_false
    @db.get(Sequel::Postgres::PGRange.new(1, nil, :db_type=>:int4range).op.upper_inf).should be_true
  end
end if DB.server_version >= 90200

describe 'PostgreSQL interval types' do
  before(:all) do
    @db = DB
    @db.extension :pg_array, :pg_interval
    @ds = @db[:items]
    @native = DB.adapter_scheme == :postgres
  end
  after(:all) do
    Sequel::Postgres::PG_TYPES.delete(1186)
  end
  after do
    @db.drop_table?(:items)
  end

  specify 'insert and retrieve interval values' do
    @db.create_table!(:items){interval :i}
    [
      ['0', '00:00:00', 0, [[:seconds, 0]]],
      ['1 microsecond', '00:00:00.000001', 0.000001, [[:seconds, 0.000001]]],
      ['1 millisecond', '00:00:00.001', 0.001, [[:seconds, 0.001]]],
      ['1 second', '00:00:01', 1, [[:seconds, 1]]],
      ['1 minute', '00:01:00', 60, [[:seconds, 60]]],
      ['1 hour', '01:00:00', 3600, [[:seconds, 3600]]],
      ['1 day', '1 day', 86400, [[:days, 1]]],
      ['1 week', '7 days', 86400*7, [[:days, 7]]],
      ['1 month', '1 mon', 86400*30, [[:months, 1]]],
      ['1 year', '1 year', 31557600, [[:years, 1]]],
      ['1 decade', '10 years', 31557600*10, [[:years, 10]]],
      ['1 century', '100 years', 31557600*100, [[:years, 100]]],
      ['1 millennium', '1000 years', 31557600*1000, [[:years, 1000]]],
      ['1 year 2 months 3 weeks 4 days 5 hours 6 minutes 7 seconds', '1 year 2 mons 25 days 05:06:07', 31557600 + 2*86400*30 + 3*86400*7 + 4*86400 + 5*3600 + 6*60 + 7, [[:years, 1], [:months, 2], [:days, 25], [:seconds, 18367]]],
      ['-1 year +2 months -3 weeks +4 days -5 hours +6 minutes -7 seconds', '-10 mons -17 days -04:54:07', -10*86400*30 - 3*86400*7 + 4*86400 - 5*3600 + 6*60 - 7, [[:months, -10], [:days, -17], [:seconds, -17647]]],
      ['+2 years -1 months +3 weeks -4 days +5 hours -6 minutes +7 seconds', '1 year 11 mons 17 days 04:54:07', 31557600 + 11*86400*30 + 3*86400*7 - 4*86400 + 5*3600 - 6*60 + 7, [[:years, 1], [:months, 11], [:days, 17], [:seconds, 17647]]],
    ].each do |instr, outstr, value, parts|
      @ds.insert(instr)
      @ds.count.should == 1
      if @native
        @ds.get(Sequel.cast(:i, String)).should == outstr
        rs = @ds.all
        rs.first[:i].is_a?(ActiveSupport::Duration).should be_true
        rs.first[:i].should == ActiveSupport::Duration.new(value, parts)
        rs.first[:i].parts.sort_by{|k, v| k.to_s}.should == parts.sort_by{|k, v| k.to_s}
        @ds.delete
        @ds.insert(rs.first)
        @ds.all.should == rs
      end
      @ds.delete
    end
  end

  specify 'insert and retrieve interval array values' do
    @db.create_table!(:items){column :i, 'interval[]'}
    @ds.insert(Sequel.pg_array(['1 year 2 months 3 weeks 4 days 5 hours 6 minutes 7 seconds'], 'interval'))
    @ds.count.should == 1
    if @native
      rs = @ds.all
      rs.first[:i].is_a?(Sequel::Postgres::PGArray).should be_true
      rs.first[:i].first.is_a?(ActiveSupport::Duration).should be_true
      rs.first[:i].first.should == ActiveSupport::Duration.new(31557600 + 2*86400*30 + 3*86400*7 + 4*86400 + 5*3600 + 6*60 + 7, [[:years, 1], [:months, 2], [:days, 25], [:seconds, 18367]])
      rs.first[:i].first.parts.sort_by{|k, v| k.to_s}.should == [[:years, 1], [:months, 2], [:days, 25], [:seconds, 18367]].sort_by{|k, v| k.to_s}
      @ds.delete
      @ds.insert(rs.first)
      @ds.all.should == rs
    end
  end

  specify 'use intervals in bound variables' do
    @db.create_table!(:items){interval :i}
    @ds.insert('1 year 2 months 3 weeks 4 days 5 hours 6 minutes 7 seconds')
    d = @ds.get(:i)
    @ds.delete

    @ds.call(:insert, {:i=>d}, {:i=>:$i})
    @ds.get(:i).should == d
    @ds.filter(:i=>:$i).call(:first, :i=>d).should == {:i=>d}
    @ds.filter(:i=>:$i).call(:first, :i=>'0').should == nil
    @ds.filter(:i=>:$i).call(:delete, :i=>d).should == 1

    @db.create_table!(:items){column :i, 'interval[]'}
    @ds.call(:insert, {:i=>[d]}, {:i=>:$i})
    @ds.filter(:i=>:$i).call(:first, :i=>[d]).should == {:i=>[d]}
    @ds.filter(:i=>:$i).call(:first, :i=>[]).should == nil
    @ds.filter(:i=>:$i).call(:delete, :i=>[d]).should == 1
  end if DB.adapter_scheme == :postgres && SEQUEL_POSTGRES_USES_PG

  specify 'with models' do
    @db.create_table!(:items) do
      primary_key :id
      interval :i
    end
    c = Class.new(Sequel::Model(@db[:items]))
    c.plugin :pg_typecast_on_load, :i, :c unless @native
    v = c.create(:i=>'1 year 2 mons 25 days 05:06:07').i
    v.is_a?(ActiveSupport::Duration).should be_true
    v.should == ActiveSupport::Duration.new(31557600 + 2*86400*30 + 3*86400*7 + 4*86400 + 5*3600 + 6*60 + 7, [[:years, 1], [:months, 2], [:days, 25], [:seconds, 18367]])
    v.parts.sort_by{|k,_| k.to_s}.should == [[:years, 1], [:months, 2], [:days, 25], [:seconds, 18367]].sort_by{|k,_| k.to_s}
  end
end if (begin require 'active_support/duration'; require 'active_support/inflector'; require 'active_support/core_ext/string/inflections'; true; rescue LoadError; false end)

describe 'PostgreSQL row-valued/composite types' do
  before(:all) do
    @db = DB
    Sequel.extension :pg_array_ops, :pg_row_ops
    @db.extension :pg_array, :pg_row
    @ds = @db[:person]

    @db.create_table!(:address) do
      String :street
      String :city
      String :zip
    end
    @db.create_table!(:person) do
      Integer :id
      address :address
    end
    @db.create_table!(:company) do
      Integer :id
      column :employees, 'person[]'
    end
    @db.register_row_type(:address)
    @db.register_row_type(Sequel.qualify(:public, :person))
    @db.register_row_type(:public__company)

    @native = DB.adapter_scheme == :postgres
  end
  after(:all) do
    @db.drop_table?(:company, :person, :address)
    @db.row_types.clear
    @db.reset_conversion_procs if @native
  end
  after do
    [:company, :person, :address].each{|t| @db[t].delete}
  end

  specify 'insert and retrieve row types' do
    @ds.insert(:id=>1, :address=>Sequel.pg_row(['123 Sesame St', 'Somewhere', '12345']))
    @ds.count.should == 1
    if @native
      # Single row valued type
      rs = @ds.all
      v = rs.first[:address]
      v.should_not be_a_kind_of(Hash)
      v.to_hash.should be_a_kind_of(Hash)
      v.to_hash.should == {:street=>'123 Sesame St', :city=>'Somewhere', :zip=>'12345'}
      @ds.delete
      @ds.insert(rs.first)
      @ds.all.should == rs

      # Nested row value type
      p = @ds.get(:person)
      p[:id].should == 1
      p[:address].should == v
    end
  end

  specify 'insert and retrieve row types containing domains' do
    begin
      @db << "DROP DOMAIN IF EXISTS positive_integer CASCADE"
      @db << "CREATE DOMAIN positive_integer AS integer CHECK (VALUE > 0)"
      @db.create_table!(:domain_check) do
        positive_integer :id
      end
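      # Comment added for context (my gloss, not the original author's): a
      # PostgreSQL domain is a constrained alias of a base type, so
      # register_row_type must resolve the domain column back to its base
      # type when building the composite type's conversion proc, which is
      # what this spec exercises.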
      @db.register_row_type(:domain_check)
      @db.get(@db.row_type(:domain_check, [1])).should == {:id=>1}
    ensure
      @db.drop_table(:domain_check)
      @db << "DROP DOMAIN positive_integer"
    end
  end if DB.adapter_scheme == :postgres

  specify 'insert and retrieve arrays of row types' do
    @ds = @db[:company]
    @ds.insert(:id=>1, :employees=>Sequel.pg_array([@db.row_type(:person, [1, Sequel.pg_row(['123 Sesame St', 'Somewhere', '12345'])])]))
    @ds.count.should == 1
    if @native
      v = @ds.get(:company)
      v.should_not be_a_kind_of(Hash)
      v.to_hash.should be_a_kind_of(Hash)
      v[:id].should == 1
      employees = v[:employees]
      employees.should_not be_a_kind_of(Array)
      employees.to_a.should be_a_kind_of(Array)
      employees.should == [{:id=>1, :address=>{:street=>'123 Sesame St', :city=>'Somewhere', :zip=>'12345'}}]
      @ds.delete
      @ds.insert(v[:id], v[:employees])
      @ds.get(:company).should == v
    end
  end

  specify 'use row types in bound variables' do
    @ds.call(:insert, {:address=>Sequel.pg_row(['123 Sesame St', 'Somewhere', '12345'])}, {:address=>:$address, :id=>1})
    @ds.get(:address).should == {:street=>'123 Sesame St', :city=>'Somewhere', :zip=>'12345'}
    @ds.filter(:address=>Sequel.cast(:$address, :address)).call(:first, :address=>Sequel.pg_row(['123 Sesame St', 'Somewhere', '12345']))[:id].should == 1
    @ds.filter(:address=>Sequel.cast(:$address, :address)).call(:first, :address=>Sequel.pg_row(['123 Sesame St', 'Somewhere', '12356'])).should == nil

    @ds.delete
    @ds.call(:insert, {:address=>Sequel.pg_row([nil, nil, nil])}, {:address=>:$address, :id=>1})
    @ds.get(:address).should == {:street=>nil, :city=>nil, :zip=>nil}
  end if DB.adapter_scheme == :postgres && SEQUEL_POSTGRES_USES_PG

  specify 'use arrays of row types in bound variables' do
    @ds = @db[:company]
    @ds.call(:insert, {:employees=>Sequel.pg_array([@db.row_type(:person, [1, Sequel.pg_row(['123 Sesame St', 'Somewhere', '12345'])])])}, {:employees=>:$employees, :id=>1})
    @ds.get(:company).should == {:id=>1, :employees=>[{:id=>1, :address=>{:street=>'123 Sesame St', :city=>'Somewhere', :zip=>'12345'}}]}
    @ds.filter(:employees=>Sequel.cast(:$employees, 'person[]')).call(:first, :employees=>Sequel.pg_array([@db.row_type(:person, [1, Sequel.pg_row(['123 Sesame St', 'Somewhere', '12345'])])]))[:id].should == 1
    @ds.filter(:employees=>Sequel.cast(:$employees, 'person[]')).call(:first, :employees=>Sequel.pg_array([@db.row_type(:person, [1, Sequel.pg_row(['123 Sesame St', 'Somewhere', '12356'])])])).should == nil

    @ds.delete
    @ds.call(:insert, {:employees=>Sequel.pg_array([@db.row_type(:person, [1, Sequel.pg_row([nil, nil, nil])])])}, {:employees=>:$employees, :id=>1})
    @ds.get(:employees).should == [{:address=>{:city=>nil, :zip=>nil, :street=>nil}, :id=>1}]
  end if DB.adapter_scheme == :postgres && SEQUEL_POSTGRES_USES_PG

  specify 'operations/functions with pg_row_ops' do
    @ds.insert(:id=>1, :address=>Sequel.pg_row(['123 Sesame St', 'Somewhere', '12345']))
    @ds.get(Sequel.pg_row(:address)[:street]).should == '123 Sesame St'
    @ds.get(Sequel.pg_row(:address)[:city]).should == 'Somewhere'
    @ds.get(Sequel.pg_row(:address)[:zip]).should == '12345'

    @ds = @db[:company]
    @ds.insert(:id=>1, :employees=>Sequel.pg_array([@db.row_type(:person, [1, Sequel.pg_row(['123 Sesame St', 'Somewhere', '12345'])])]))
    @ds.get(Sequel.pg_row(:company)[:id]).should == 1
    if @native
      @ds.get(Sequel.pg_row(:company)[:employees]).should == [{:id=>1, :address=>{:street=>'123 Sesame St', :city=>'Somewhere', :zip=>'12345'}}]
      @ds.get(Sequel.pg_row(:company)[:employees][1]).should == {:id=>1, :address=>{:street=>'123 Sesame St', :city=>'Somewhere', :zip=>'12345'}}
      @ds.get(Sequel.pg_row(:company)[:employees][1][:address]).should == {:street=>'123 Sesame St', :city=>'Somewhere', :zip=>'12345'}
    end
    @ds.get(Sequel.pg_row(:company)[:employees][1][:id]).should == 1
    @ds.get(Sequel.pg_row(:company)[:employees][1][:address][:street]).should == '123 Sesame St'
    @ds.get(Sequel.pg_row(:company)[:employees][1][:address][:city]).should == 'Somewhere'
    @ds.get(Sequel.pg_row(:company)[:employees][1][:address][:zip]).should == '12345'
  end

  context "#splat and #*" do
    before(:all) do
      @db.create_table!(:a){Integer :a}
      @db.create_table!(:b){a :b; Integer :a}
      @db.register_row_type(:a)
      @db.register_row_type(:b)
      @db[:b].insert(:a=>1, :b=>@db.row_type(:a, [2]))
    end
    after(:all) do
      @db.drop_table?(:b, :a)
    end

    specify "splat should reference the table type" do
      @db[:b].select(:a).first.should == {:a=>1}
      @db[:b].select(:b__a).first.should == {:a=>1}
      @db[:b].select(Sequel.pg_row(:b)[:a]).first.should == {:a=>2}
      @db[:b].select(Sequel.pg_row(:b).splat[:a]).first.should == {:a=>1}

      if @native
        @db[:b].select(:b).first.should == {:b=>{:a=>2}}
        @db[:b].select(Sequel.pg_row(:b).splat).first.should == {:a=>1, :b=>{:a=>2}}
        @db[:b].select(Sequel.pg_row(:b).splat(:b)).first.should == {:b=>{:a=>1, :b=>{:a=>2}}}
      end
    end

    specify "* should expand the table type into separate columns" do
      ds = @db[:b].select(Sequel.pg_row(:b).splat(:b)).from_self(:alias=>:t)
      if @native
        ds.first.should == {:b=>{:a=>1, :b=>{:a=>2}}}
        ds.select(Sequel.pg_row(:b).*).first.should == {:a=>1, :b=>{:a=>2}}
        ds.select(Sequel.pg_row(:b)[:b]).first.should == {:b=>{:a=>2}}
        ds.select(Sequel.pg_row(:t__b).*).first.should == {:a=>1, :b=>{:a=>2}}
        ds.select(Sequel.pg_row(:t__b)[:b]).first.should == {:b=>{:a=>2}}
      end
      ds.select(Sequel.pg_row(:b)[:a]).first.should == {:a=>1}
      ds.select(Sequel.pg_row(:t__b)[:a]).first.should == {:a=>1}
    end
  end

  context "with models" do
    before(:all) do
      class Address < Sequel::Model(:address)
        plugin :pg_row
      end
      class Person < Sequel::Model(:person)
        plugin :pg_row
      end
      class Company < Sequel::Model(:company)
        plugin :pg_row
      end
      @a = Address.new(:street=>'123 Sesame St', :city=>'Somewhere', :zip=>'12345')
      @es = Sequel.pg_array([Person.new(:id=>1, :address=>@a)])
    end
    after(:all) do
      Object.send(:remove_const, :Address) rescue nil
      Object.send(:remove_const, :Person) rescue nil
      Object.send(:remove_const, :Company) rescue nil
    end

    specify 'insert and retrieve row types as model objects' do
      @ds.insert(:id=>1, :address=>@a)
      @ds.count.should == 1
      if @native
        # Single row valued type
        rs = @ds.all
        v = rs.first[:address]
        v.should be_a_kind_of(Address)
        v.should == @a
        @ds.delete
        @ds.insert(rs.first)
        @ds.all.should == rs

        # Nested row value type
        p = @ds.get(:person)
        p.should be_a_kind_of(Person)
        p.id.should == 1
        p.address.should be_a_kind_of(Address)
        p.address.should == @a
      end
    end

    specify 'insert and retrieve arrays of row types as model objects' do
      @ds = @db[:company]
      @ds.insert(:id=>1, :employees=>@es)
      @ds.count.should == 1
      if @native
        v = @ds.get(:company)
        v.should be_a_kind_of(Company)
        v.id.should == 1
        employees = v[:employees]
        employees.should_not be_a_kind_of(Array)
        employees.to_a.should be_a_kind_of(Array)
        employees.should == @es
        @ds.delete
        @ds.insert(v.id, v.employees)
        @ds.get(:company).should == v
      end
    end

    specify 'use model objects in bound variables' do
      @ds.call(:insert, {:address=>@a}, {:address=>:$address, :id=>1})
      @ds.get(:address).should == @a
      @ds.filter(:address=>Sequel.cast(:$address, :address)).call(:first, :address=>@a)[:id].should == 1
      @ds.filter(:address=>Sequel.cast(:$address, :address)).call(:first, :address=>Address.new(:street=>'123 Sesame St', :city=>'Somewhere', :zip=>'12356')).should == nil
    end if DB.adapter_scheme == :postgres && SEQUEL_POSTGRES_USES_PG

    specify 'use arrays of model objects in bound variables' do
      @ds = @db[:company]
      @ds.call(:insert, {:employees=>@es}, {:employees=>:$employees, :id=>1})
      @ds.get(:company).should == Company.new(:id=>1, :employees=>@es)
      @ds.filter(:employees=>Sequel.cast(:$employees, 'person[]')).call(:first, :employees=>@es)[:id].should == 1
      @ds.filter(:employees=>Sequel.cast(:$employees, 'person[]')).call(:first, :employees=>Sequel.pg_array([@db.row_type(:person, [1, Sequel.pg_row(['123 Sesame St', 'Somewhere', '12356'])])])).should == nil
    end if DB.adapter_scheme == :postgres && SEQUEL_POSTGRES_USES_PG

    specify 'model typecasting' do
      Person.plugin :pg_typecast_on_load, :address unless @native
      a = Address.new(:street=>'123 Sesame St', :city=>'Somewhere', :zip=>'12345')
      o = Person.create(:id=>1, :address=>['123 Sesame St', 'Somewhere', '12345'])
      o.address.should == a
      o = Person.create(:id=>1, :address=>{:street=>'123 Sesame St', :city=>'Somewhere', :zip=>'12345'})
      o.address.should == a
      o = Person.create(:id=>1, :address=>a)
      o.address.should == a

      Company.plugin :pg_typecast_on_load, :employees unless @native
      e = Person.new(:id=>1, :address=>a)
      unless @db.adapter_scheme == :jdbc
        o = Company.create(:id=>1, :employees=>[{:id=>1, :address=>{:street=>'123 Sesame St', :city=>'Somewhere', :zip=>'12345'}}])
        o.employees.should == [e]
        o = Company.create(:id=>1, :employees=>[e])
        o.employees.should == [e]
      end
    end
  end
end

# ===== ruby-sequel-4.1.1/spec/adapters/spec_helper.rb =====

require 'rubygems'
require 'logger'

if ENV['COVERAGE']
  require File.join(File.dirname(File.expand_path(__FILE__)), "../sequel_coverage")
  SimpleCov.sequel_coverage(:group=>%r{lib/sequel/adapters})
end

unless Object.const_defined?('Sequel')
  $:.unshift(File.join(File.dirname(File.expand_path(__FILE__)), "../../lib/"))
  require 'sequel'
end

begin
  require File.join(File.dirname(File.dirname(File.expand_path(__FILE__))), 'spec_config.rb')
rescue LoadError
end

Sequel::Database.extension :columns_introspection if ENV['SEQUEL_COLUMNS_INTROSPECTION']
Sequel.cache_anonymous_models = false

class Sequel::Database
  def log_duration(duration, message)
    log_info(message)
  end
end
(defined?(RSpec) ? RSpec::Core::ExampleGroup : Spec::Example::ExampleGroup).class_eval do
  def log
    begin
      DB.loggers << Logger.new(STDOUT)
      yield
    ensure
      DB.loggers.pop
    end
  end

  def self.cspecify(message, *checked, &block)
    return specify(message, &block) if ENV['SEQUEL_NO_PENDING']
    pending = false
    checked.each do |c|
      case c
      when DB.adapter_scheme
        pending = c
      when Proc
        pending = c if c.first.call(DB)
      when Array
        pending = c if c.first == DB.adapter_scheme && c.last == DB.call(DB)
      end
    end
    if pending
      specify(message){pending("Not yet working on #{Array(pending).join(', ')}", &block)}
    else
      specify(message, &block)
    end
  end

  def check_sqls
    yield unless ENV['SEQUEL_NO_CHECK_SQLS']
  end

  def self.check_sqls
    yield unless ENV['SEQUEL_NO_CHECK_SQLS']
  end
end

unless defined?(DB)
  env_var = "SEQUEL_#{SEQUEL_ADAPTER_TEST.to_s.upcase}_URL"
  env_var = ENV.has_key?(env_var) ? env_var : 'SEQUEL_INTEGRATION_URL'
  DB = Sequel.connect(ENV[env_var])
end

# ===== ruby-sequel-4.1.1/spec/adapters/sqlite_spec.rb =====

SEQUEL_ADAPTER_TEST = :sqlite

require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper.rb')

describe "An SQLite database" do
  before do
    @db = DB
    @fk = @db.foreign_keys
  end
  after do
    @db.drop_table?(:fk)
    @db.foreign_keys = @fk
    @db.case_sensitive_like = true
    @db.use_timestamp_timezones = false
    Sequel.datetime_class = Time
  end

  if DB.auto_vacuum == :none
    specify "should support getting pragma values" do
      @db.pragma_get(:auto_vacuum).to_s.should == '0'
    end

    specify "should support setting pragma values" do
      @db.pragma_set(:auto_vacuum, '1')
      @db.pragma_get(:auto_vacuum).to_s.should == '1'
      @db.pragma_set(:auto_vacuum, '2')
      @db.pragma_get(:auto_vacuum).to_s.should == '2'
    end

    specify "should support getting and setting the auto_vacuum pragma" do
      @db.auto_vacuum = :full
      @db.auto_vacuum.should == :full
      @db.auto_vacuum = :incremental
      @db.auto_vacuum.should == :incremental
      proc {@db.auto_vacuum = :invalid}.should raise_error(Sequel::Error)
    end
  end

  specify "should respect case sensitive like false" do
    @db.case_sensitive_like = false
    @db.get(Sequel.like('a', 'A')).to_s.should == '1'
  end

  specify "should respect case sensitive like true" do
    @db.case_sensitive_like = true
    @db.get(Sequel.like('a', 'A')).to_s.should == '0'
  end

  specify "should support casting to Date by using the date function" do
    @db.get(Sequel.cast('2012-10-20 11:12:13', Date)).should == '2012-10-20'
  end

  specify "should support casting to Time or DateTime by using the datetime function" do
    @db.get(Sequel.cast('2012-10-20', Time)).should == '2012-10-20 00:00:00'
    @db.get(Sequel.cast('2012-10-20', DateTime)).should == '2012-10-20 00:00:00'
  end

  specify "should provide the SQLite version as an integer" do
    @db.sqlite_version.should be_a_kind_of(Integer)
  end

  specify "should support setting and getting the foreign_keys pragma" do
    (@db.sqlite_version >= 30619 ? [true, false] : [nil]).should include(@db.foreign_keys)
    @db.foreign_keys = true
    @db.foreign_keys = false
  end

  specify "should enforce foreign key integrity if foreign_keys pragma is set" do
    @db.foreign_keys = true
    @db.create_table!(:fk){primary_key :id; foreign_key :parent_id, :fk}
    @db[:fk].insert(1, nil)
    @db[:fk].insert(2, 1)
    @db[:fk].insert(3, 3)
    proc{@db[:fk].insert(4, 5)}.should raise_error(Sequel::Error)
  end if DB.sqlite_version >= 30619

  specify "should not enforce foreign key integrity if foreign_keys pragma is unset" do
    @db.foreign_keys = false
    @db.create_table!(:fk){primary_key :id; foreign_key :parent_id, :fk}
    @db[:fk].insert(1, 2)
    @db[:fk].all.should == [{:id=>1, :parent_id=>2}]
  end

  specify "should support a use_timestamp_timezones setting" do
    @db.use_timestamp_timezones = true
    @db.create_table!(:fk){Time :time}
    @db[:fk].insert(Time.now)
    @db[:fk].get(Sequel.cast(:time, String)).should =~ /[-+]\d\d\d\d\z/
    @db.use_timestamp_timezones = false
    @db[:fk].delete
    @db[:fk].insert(Time.now)
    @db[:fk].get(Sequel.cast(:time, String)).should_not =~ /[-+]\d\d\d\d\z/
  end

  specify "should provide a list of existing tables" do
    @db.drop_table?(:fk)
    @db.tables.should be_a_kind_of(Array)
    @db.tables.should_not include(:fk)
    @db.create_table!(:fk){String :name}
    @db.tables.should include(:fk)
  end

  specify "should support getting and setting the synchronous pragma" do
    @db.synchronous = :off
    @db.synchronous.should == :off
    @db.synchronous = :normal
    @db.synchronous.should == :normal
    @db.synchronous = :full
    @db.synchronous.should == :full
    proc {@db.synchronous = :invalid}.should raise_error(Sequel::Error)
  end

  specify "should support getting and setting the temp_store pragma" do
    @db.temp_store = :default
    @db.temp_store.should == :default
    @db.temp_store = :file
    @db.temp_store.should == :file
    @db.temp_store = :memory
    @db.temp_store.should == :memory
    proc {@db.temp_store = :invalid}.should raise_error(Sequel::Error)
  end

  cspecify "should support timestamps and datetimes and respect datetime_class", :do, :jdbc, :amalgalite, :swift do
    @db.create_table!(:fk){timestamp :t; datetime :d}
    @db.use_timestamp_timezones = true
    t1 = Time.at(1)
    @db[:fk] << {:t => t1, :d => t1}
    @db[:fk].map(:t).should == [t1]
    @db[:fk].map(:d).should == [t1]
    Sequel.datetime_class = DateTime
    t2 = Sequel.string_to_datetime(t1.iso8601)
    @db[:fk].map(:t).should == [t2]
    @db[:fk].map(:d).should == [t2]
  end

  specify "should support sequential primary keys" do
    @db.create_table!(:fk) {primary_key :id; text :name}
    @db[:fk] << {:name => 'abc'}
    @db[:fk] << {:name => 'def'}
    @db[:fk] << {:name => 'ghi'}
    @db[:fk].order(:name).all.should == [
      {:id => 1, :name => 'abc'},
      {:id => 2, :name => 'def'},
      {:id => 3, :name => 'ghi'}
    ]
  end

  specify "should correctly parse the schema" do
    @db.create_table!(:fk) {timestamp :t}
    @db.schema(:fk, :reload=>true).should == [[:t, {:type=>:datetime, :allow_null=>true, :default=>nil, :ruby_default=>nil, :db_type=>"timestamp", :primary_key=>false}]]
  end

  specify "should handle and return BigDecimal values for numeric columns" do
    DB.create_table!(:fk){numeric :d}
    d = DB[:fk]
    d.insert(:d=>BigDecimal.new('80.0'))
    d.insert(:d=>BigDecimal.new('NaN'))
    d.insert(:d=>BigDecimal.new('Infinity'))
    d.insert(:d=>BigDecimal.new('-Infinity'))
    ds = d.all
    ds.shift.should == {:d=>BigDecimal.new('80.0')}
    ds.map{|x| x[:d].to_s}.should == %w'NaN Infinity -Infinity'
  end
end

describe "SQLite temporary views" do
  before do
    @db = DB
    @db.drop_view(:items) rescue nil
    @db.create_table!(:items){Integer :number}
    @db[:items].insert(10)
    @db[:items].insert(20)
  end
  after do
    @db.drop_table?(:items)
  end

  specify "should be supported" do
    @db.create_view(:items_view, @db[:items].where(:number=>10), :temp=>true)
    @db[:items_view].map(:number).should == [10]
    @db.disconnect
    lambda{@db[:items_view].map(:number)}.should raise_error(Sequel::DatabaseError)
  end
end

describe "SQLite type conversion" do
  before do
    @db = DB
    @integer_booleans = @db.integer_booleans
    @db.integer_booleans = true
    @ds = @db[:items]
    @db.drop_table?(:items)
  end
  after do
    @db.integer_booleans = @integer_booleans
    Sequel.datetime_class = Time
    @db.drop_table?(:items)
  end

  specify "should handle integers in boolean columns" do
    @db.create_table(:items){TrueClass :a}
    @db[:items].insert(false)
    @db[:items].select_map(:a).should == [false]
    @db[:items].select_map(Sequel.expr(:a)+:a).should == [0]
    @db[:items].update(:a=>true)
    @db[:items].select_map(:a).should == [true]
    @db[:items].select_map(Sequel.expr(:a)+:a).should == [2]
  end

  specify "should handle integers/floats/strings/decimals in numeric/decimal columns" do
    @db.create_table(:items){Numeric :a}
    @db[:items].insert(100)
    @db[:items].select_map(:a).should == [BigDecimal.new('100')]
    @db[:items].get(:a).should be_a_kind_of(BigDecimal)

    @db[:items].update(:a=>100.1)
    @db[:items].select_map(:a).should == [BigDecimal.new('100.1')]
    @db[:items].get(:a).should be_a_kind_of(BigDecimal)

    @db[:items].update(:a=>'100.1')
    @db[:items].select_map(:a).should == [BigDecimal.new('100.1')]
    @db[:items].get(:a).should be_a_kind_of(BigDecimal)

    @db[:items].update(:a=>BigDecimal.new('100.1'))
    @db[:items].select_map(:a).should == [BigDecimal.new('100.1')]
    @db[:items].get(:a).should be_a_kind_of(BigDecimal)
  end

  specify "should handle integer/float date columns as julian date" do
    @db.create_table(:items){Date :a}
    i = 2455979
    @db[:items].insert(i)
    @db[:items].first.should == {:a=>Date.jd(i)}
    @db[:items].update(:a=>2455979.1)
    @db[:items].first.should == {:a=>Date.jd(i)}
  end

  specify "should handle integer/float time columns as seconds" do
    @db.create_table(:items){Time :a, :only_time=>true}
    @db[:items].insert(3661)
    @db[:items].first.should == {:a=>Sequel::SQLTime.create(1, 1, 1)}
    @db[:items].update(:a=>3661.000001)
    @db[:items].first.should == {:a=>Sequel::SQLTime.create(1, 1, 1, 1)}
  end

  specify "should handle integer datetime columns as unix timestamp" do
    @db.create_table(:items){DateTime :a}
    i = 1329860756
    @db[:items].insert(i)
    @db[:items].first.should == {:a=>Time.at(i)}
    Sequel.datetime_class = DateTime
    @db[:items].first.should == {:a=>DateTime.strptime(i.to_s, '%s')}
  end

  specify "should handle float datetime columns as julian date" do
    @db.create_table(:items){DateTime :a}
    i = 2455979.5
    @db[:items].insert(i)
    @db[:items].first.should == {:a=>Time.at(1329825600)}
    Sequel.datetime_class = DateTime
    @db[:items].first.should == {:a=>DateTime.jd(2455979.5)}
  end

  specify "should handle integer/float blob columns" do
    @db.create_table(:items){File :a}
    @db[:items].insert(1)
    @db[:items].first.should == {:a=>Sequel::SQL::Blob.new('1')}
    @db[:items].update(:a=>'1.1')
    @db[:items].first.should == {:a=>Sequel::SQL::Blob.new(1.1.to_s)}
  end
end if DB.adapter_scheme == :sqlite

describe "An SQLite dataset" do
  before do
    @d = DB[:items]
  end

  specify "should raise errors if given a regexp pattern match" do
    proc{@d.literal(Sequel.expr(:x).like(/a/))}.should raise_error(Sequel::Error)
    proc{@d.literal(~Sequel.expr(:x).like(/a/))}.should raise_error(Sequel::Error)
    proc{@d.literal(Sequel.expr(:x).like(/a/i))}.should raise_error(Sequel::Error)
    proc{@d.literal(~Sequel.expr(:x).like(/a/i))}.should raise_error(Sequel::Error)
  end
end

describe "An SQLite dataset AS clause" do
  specify "should use a string literal for :col___alias" do
    DB.literal(:c___a).should == "`c` AS 'a'"
  end

  specify "should use a string literal for :table__col___alias" do
    DB.literal(:t__c___a).should == "`t`.`c` AS 'a'"
  end

  specify "should use a string literal for :column.as(:alias)" do
    DB.literal(Sequel.as(:c, :a)).should == "`c` AS 'a'"
  end

  specify "should use a string literal in the SELECT clause" do
    DB[:t].select(:c___a).sql.should == "SELECT `c` AS 'a' FROM `t`"
  end

  specify "should use a string literal in the FROM clause" do
    DB[:t___a].sql.should == "SELECT * FROM `t` AS 'a'"
  end

  specify "should use a string literal in the JOIN clause" do
    DB[:t].join_table(:natural, :j, nil, :table_alias=>:a).sql.should == "SELECT * FROM `t` NATURAL JOIN `j` AS 'a'"
  end
end

describe "SQLite::Dataset#delete" do
  before do
    DB.create_table! :items do
      primary_key :id
      String :name
      Float :value
    end
    @d = DB[:items]
    @d.delete # remove all records
    @d << {:name => 'abc', :value => 1.23}
    @d << {:name => 'def', :value => 4.56}
    @d << {:name => 'ghi', :value => 7.89}
  end

  specify "should return the number of records affected when filtered" do
    @d.count.should == 3
    @d.filter{value < 3}.delete.should == 1
    @d.count.should == 2
    @d.filter{value < 3}.delete.should == 0
    @d.count.should == 2
  end

  specify "should return the number of records affected when unfiltered" do
    @d.count.should == 3
    @d.delete.should == 3
    @d.count.should == 0
    @d.delete.should == 0
  end
end

describe "SQLite::Dataset#update" do
  before do
    DB.create_table! :items do
      primary_key :id
      String :name
      Float :value
    end
    @d = DB[:items]
    @d.delete # remove all records
    @d << {:name => 'abc', :value => 1.23}
    @d << {:name => 'def', :value => 4.56}
    @d << {:name => 'ghi', :value => 7.89}
  end

  specify "should return the number of records affected" do
    @d.filter(:name => 'abc').update(:value => 2).should == 1
    @d.update(:value => 10).should == 3
    @d.filter(:name => 'xxx').update(:value => 23).should == 0
  end
end

describe "SQLite dataset" do
  before do
    DB.create_table! :test do
      primary_key :id
      String :name
      Float :value
    end
    DB.create_table! :items do
      primary_key :id
      String :name
      Float :value
    end
    @d = DB[:items]
    @d << {:name => 'abc', :value => 1.23}
    @d << {:name => 'def', :value => 4.56}
    @d << {:name => 'ghi', :value => 7.89}
  end
  after do
    DB.drop_table?(:test, :items)
  end

  specify "should be able to insert from a subquery" do
    DB[:test] << @d
    DB[:test].count.should == 3
    DB[:test].select(:name, :value).order(:value).to_a.should == \
      @d.select(:name, :value).order(:value).to_a
  end

  specify "should support #explain" do
    DB[:test].explain.should be_a_kind_of(String)
  end

  specify "should have #explain work when identifier_output_method is modified" do
    ds = DB[:test]
    ds.identifier_output_method = :upcase
    ds.explain.should be_a_kind_of(String)
  end
end

describe "A SQLite database" do
  before do
    @db = DB
    @db.create_table! :test2 do
      text :name
      integer :value
    end
  end
  after do
    @db.drop_table?(:test2)
  end

  specify "should support add_column operations" do
    @db.add_column :test2, :xyz, :text
    @db[:test2].columns.should == [:name, :value, :xyz]
    @db[:test2] << {:name => 'mmm', :value => 111, :xyz=>'000'}
    @db[:test2].first.should == {:name => 'mmm', :value => 111, :xyz=>'000'}
  end

  specify "should support drop_column operations" do
    @db.drop_column :test2, :value
    @db[:test2].columns.should == [:name]
    @db[:test2] << {:name => 'mmm'}
    @db[:test2].first.should == {:name => 'mmm'}
  end

  specify "should support drop_column operations in a transaction" do
    @db.transaction{@db.drop_column :test2, :value}
    @db[:test2].columns.should == [:name]
    @db[:test2] << {:name => 'mmm'}
    @db[:test2].first.should == {:name => 'mmm'}
  end

  specify "should keep a composite primary key when dropping columns" do
    @db.create_table!(:test2){Integer :a; Integer :b; Integer :c; primary_key [:a, :b]}
    @db.drop_column :test2, :c
    @db[:test2].columns.should == [:a, :b]
    @db[:test2] << {:a=>1, :b=>2}
    @db[:test2] << {:a=>2, :b=>3}
    proc{@db[:test2] << {:a=>2, :b=>3}}.should raise_error(Sequel::Error)
  end

  specify "should keep column attributes when dropping a column" do
    @db.create_table! :test3 do
      primary_key :id
      text :name
      integer :value
    end

    # This lame set of additions and deletions are to test that the primary keys
    # don't get messed up when we recreate the database.
    @db[:test3] << { :name => "foo", :value => 1}
    @db[:test3] << { :name => "foo", :value => 2}
    @db[:test3] << { :name => "foo", :value => 3}
    @db[:test3].filter(:id => 2).delete

    @db.drop_column :test3, :value

    @db['PRAGMA table_info(?)', :test3][:id][:pk].to_i.should == 1
    @db[:test3].select(:id).all.should == [{:id => 1}, {:id => 3}]
  end

  if DB.foreign_keys
    specify "should keep foreign keys when dropping a column" do
      @db.create_table! :test do
        primary_key :id
        String :name
        Integer :value
      end
      @db.create_table! :test3 do
        String :name
        Integer :value
        foreign_key :test_id, :test, :on_delete => :set_null, :on_update => :cascade
      end

      @db[:test3].insert(:name => "abc", :test_id => @db[:test].insert(:name => "foo", :value => 3))
      @db[:test3].insert(:name => "def", :test_id => @db[:test].insert(:name => "bar", :value => 4))

      @db.drop_column :test3, :value

      @db[:test].filter(:name => 'bar').delete
      @db[:test3][:name => 'def'][:test_id].should be_nil

      @db[:test].filter(:name => 'foo').update(:id=>100)
      @db[:test3][:name => 'abc'][:test_id].should == 100

      @db.drop_table? :test, :test3
    end
  end

  specify "should support rename_column operations" do
    @db[:test2].delete
    @db.add_column :test2, :xyz, :text
    @db[:test2] << {:name => 'mmm', :value => 111, :xyz => 'qqqq'}

    @db[:test2].columns.should == [:name, :value, :xyz]
    @db.rename_column :test2, :xyz, :zyx, :type => :text
    @db[:test2].columns.should == [:name, :value, :zyx]
    @db[:test2].first[:zyx].should == 'qqqq'
    @db[:test2].count.should eql(1)
  end

  specify "should preserve defaults when dropping or renaming columns" do
    @db.create_table! :test3 do
      String :s, :default=>'a'
      Integer :i
    end

    @db[:test3].insert
    @db[:test3].first[:s].should == 'a'
    @db[:test3].delete
    @db.drop_column :test3, :i
    @db[:test3].insert
    @db[:test3].first[:s].should == 'a'
    @db[:test3].delete
    @db.rename_column :test3, :s, :t
    @db[:test3].insert
    @db[:test3].first[:t].should == 'a'
    @db[:test3].delete
  end

  specify "should handle quoted tables when dropping or renaming columns" do
    @db.quote_identifiers = true
    table_name = "T T"
    @db.drop_table?(table_name)
    @db.create_table! table_name do
      Integer :"s s"
      Integer :"i i"
    end

    @db.from(table_name).insert(:"s s"=>1, :"i i"=>2)
    @db.from(table_name).all.should == [{:"s s"=>1, :"i i"=>2}]
    @db.drop_column table_name, :"i i"
    @db.from(table_name).all.should == [{:"s s"=>1}]
    @db.rename_column table_name, :"s s", :"t t"
    @db.from(table_name).all.should == [{:"t t"=>1}]
    @db.drop_table?(table_name)
  end

  specify "should choose a temporary table name that isn't already used when dropping or renaming columns" do
    sqls = []
    @db.loggers << (l=Class.new{%w'info error'.each{|m| define_method(m){|sql| sqls << sql}}}.new)
    @db.tables.each{|t| @db.drop_table(t) if t.to_s =~ /test3/}
    @db.create_table :test3 do
      Integer :h
      Integer :i
    end
    @db.create_table :test3_backup0 do
      Integer :j
    end
    @db.create_table :test3_backup1 do
      Integer :k
    end

    @db[:test3].columns.should == [:h, :i]
    @db[:test3_backup0].columns.should == [:j]
    @db[:test3_backup1].columns.should == [:k]
    sqls.clear
    @db.drop_column(:test3, :i)
    sqls.any?{|x| x =~ /\AALTER TABLE.*test3.*RENAME TO.*test3_backup2/}.should == true
    sqls.any?{|x| x =~ /\AALTER TABLE.*test3.*RENAME TO.*test3_backup[01]/}.should == false
    @db[:test3].columns.should == [:h]
    @db[:test3_backup0].columns.should == [:j]
    @db[:test3_backup1].columns.should == [:k]

    @db.create_table :test3_backup2 do
      Integer :l
    end
    sqls.clear
    @db.rename_column(:test3, :h, :i)
    sqls.any?{|x| x =~ /\AALTER TABLE.*test3.*RENAME TO.*test3_backup3/}.should == true
    sqls.any?{|x| x =~ /\AALTER TABLE.*test3.*RENAME TO.*test3_backup[012]/}.should == false
    @db[:test3].columns.should == [:i]
    @db[:test3_backup0].columns.should == [:j]
    @db[:test3_backup1].columns.should == [:k]
    @db[:test3_backup2].columns.should == [:l]
    @db.loggers.delete(l)
    @db.drop_table?(:test3, :test3_backup0, :test3_backup1, :test3_backup2)
  end

  specify "should support add_index" do
    @db.add_index :test2, :value, :unique => true
    @db.add_index :test2, [:name, :value]
  end

  specify "should support drop_index" do
    @db.add_index :test2, :value, :unique => true
    @db.drop_index :test2, :value
  end

  specify "should keep applicable indexes when emulating schema methods" do
    @db.create_table!(:a){Integer :a; Integer :b}
    @db.add_index :a, :a
    @db.add_index :a, :b
    @db.add_index :a, [:b, :a]
    @db.drop_column :a, :b
    @db.indexes(:a).should == {:a_a_index=>{:unique=>false, :columns=>[:a]}}
  end

  specify "should have support for various #transaction modes" do
    sqls = []
    @db.loggers << Class.new{%w'info error'.each{|m| define_method(m){|sql| sqls << sql}}}.new

    @db.transaction(:mode => :immediate) do
      sqls.last.should == "BEGIN IMMEDIATE TRANSACTION"
    end
    @db.transaction(:mode => :exclusive) do
      sqls.last.should == "BEGIN EXCLUSIVE TRANSACTION"
    end
    @db.transaction(:mode => :deferred) do
      sqls.last.should == "BEGIN DEFERRED TRANSACTION"
    end
    @db.transaction do
      sqls.last.should == Sequel::Database::SQL_BEGIN
    end

    @db.transaction_mode.should == nil
    @db.transaction_mode = :immediate
    @db.transaction_mode.should == :immediate
    @db.transaction do
      sqls.last.should == "BEGIN IMMEDIATE TRANSACTION"
    end
    @db.transaction(:mode => :exclusive) do
      sqls.last.should == "BEGIN EXCLUSIVE TRANSACTION"
    end
    proc {@db.transaction_mode = :invalid}.should raise_error(Sequel::Error)
    @db.transaction_mode.should == :immediate
    proc {@db.transaction(:mode => :invalid) {}}.should raise_error(Sequel::Error)
  end
end
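# Note appended here (not in the original file): the transaction modes tested
# above correspond directly to SQLite's BEGIN DEFERRED/IMMEDIATE/EXCLUSIVE
# TRANSACTION statements, and Database#transaction_mode= appears to set only
# the default used when no :mode option is passed to #transaction.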
ruby-sequel-4.1.1/spec/bin_spec.rb000066400000000000000000000236411220156535500170450ustar00rootroot00000000000000require 'rubygems' require 'rbconfig' require 'yaml' RUBY = File.join(RbConfig::CONFIG['bindir'], RbConfig::CONFIG['RUBY_INSTALL_NAME']) OUTPUT = "spec/bin-sequel-spec-output-#{$$}.log" TMP_FILE = "spec/bin-sequel-tmp-#{$$}.rb" BIN_SPEC_DB = "spec/bin-sequel-spec-db-#{$$}.sqlite3" BIN_SPEC_DB2 = "spec/bin-sequel-spec-db2-#{$$}.sqlite3" if defined?(RUBY_ENGINE) && RUBY_ENGINE == 'jruby' CONN_PREFIX = 'jdbc:sqlite:' CONN_HASH = {:adapter=>'jdbc', :uri=>"#{CONN_PREFIX}#{BIN_SPEC_DB}"} else CONN_PREFIX = 'sqlite://' CONN_HASH = {:adapter=>'sqlite', :database=>BIN_SPEC_DB} end unless Object.const_defined?('Sequel') && Sequel.const_defined?('Model') $:.unshift(File.join(File.dirname(File.expand_path(__FILE__)), "../../lib/")) require 'sequel' end DB = Sequel.connect("#{CONN_PREFIX}#{BIN_SPEC_DB}") DB2 = Sequel.connect("#{CONN_PREFIX}#{BIN_SPEC_DB2}") File.delete(BIN_SPEC_DB) if File.file?(BIN_SPEC_DB) File.delete(BIN_SPEC_DB2) if File.file?(BIN_SPEC_DB2) describe "bin/sequel" do def bin(opts={}) cmd = "#{opts[:pre]}\"#{RUBY}\" -I lib bin/sequel #{opts[:args]} #{"#{CONN_PREFIX}#{BIN_SPEC_DB}" unless opts[:no_conn]} #{opts[:post]}> #{OUTPUT}#{" 2>&1" if opts[:stderr]}" system(cmd) File.read(OUTPUT) end after do DB.disconnect DB2.disconnect File.delete(BIN_SPEC_DB) if File.file?(BIN_SPEC_DB) File.delete(BIN_SPEC_DB2) if File.file?(BIN_SPEC_DB2) File.delete(TMP_FILE) if File.file?(TMP_FILE) end after(:all) do File.delete(OUTPUT) if File.file?(OUTPUT) end it "-h should print the help" do help = bin(:args=>"-h", :no_conn=>true) help.should =~ /\ASequel: The Database Toolkit for Ruby/ help.should =~ /^Usage: sequel / end it "-c should run code" do bin(:args=>'-c "print DB.tables.inspect"').should == '[]' DB.create_table(:a){Integer :a} bin(:args=>'-c "print DB.tables.inspect"').should == '[:a]' end it "-C should copy databases" do DB.create_table(:a) do primary_key :a String :name end DB.create_table(:b) do foreign_key :a, :a index :a end DB[:a].insert(1, 'foo') DB[:b].insert(1) bin(:args=>'-C', :post=>"#{CONN_PREFIX}#{BIN_SPEC_DB2}").should =~ Regexp.new(<<END) Databases connections successful Migrations dumped successfully Tables created Begin copying data Begin copying records for table: a Finished copying 1 records for table: a Begin copying records for table: b Finished copying 1 records for table: b Finished copying data Begin creating indexes Finished creating indexes Begin adding foreign key constraints Finished adding foreign key constraints Database copy finished in \\d\\.\\d+ seconds END DB2.tables.sort_by{|t| t.to_s}.should == [:a, :b] DB[:a].all.should == [{:a=>1, :name=>'foo'}] DB[:b].all.should == [{:a=>1}] DB2.schema(:a).should == [[:a, {:allow_null=>false, :default=>nil, :primary_key=>true, :db_type=>"integer", :type=>:integer, :ruby_default=>nil}], [:name, {:allow_null=>true, :default=>nil, :primary_key=>false, :db_type=>"varchar(255)", :type=>:string, 
:ruby_default=>nil}]] DB2.schema(:b).should == [[:a, {:allow_null=>true, :default=>nil, :primary_key=>false, :db_type=>"integer", :type=>:integer, :ruby_default=>nil}]] DB2.indexes(:a).should == {} DB2.indexes(:b).should == {:b_a_index=>{:unique=>false, :columns=>[:a]}} DB2.foreign_key_list(:a).should == [] DB2.foreign_key_list(:b).should == [{:columns=>[:a], :table=>:a, :key=>nil, :on_update=>:no_action, :on_delete=>:no_action}] end it "-d and -D should dump generic and specific migrations" do DB.create_table(:a) do primary_key :a String :name end DB.create_table(:b) do foreign_key :a, :a index :a end bin(:args=>'-d').should == <<END Sequel.migration do change do create_table(:a) do primary_key :a String :name, :size=>255 end create_table(:b, :ignore_index_errors=>true) do foreign_key :a, :a index [:a] end end end END bin(:args=>'-D').should == <<END Sequel.migration do change do create_table(:a) do primary_key :a column :name, "varchar(255)" end create_table(:b) do foreign_key :a, :a index [:a] end end end END end it "-E should echo SQL statements to stdout" do bin(:args=>'-E -c DB.tables').should =~ %r{I, \[\d{4}-\d\d-\d\dT\d\d:\d\d:\d\d\.\d+ #\d+\] INFO -- : \(\d\.\d+s\) PRAGMA foreign_keys = 1\nI, \[\d{4}-\d\d-\d\dT\d\d:\d\d:\d\d\.\d+ #\d+\] INFO -- : \(\d\.\d+s\) PRAGMA case_sensitive_like = 1\nI, \[\d{4}-\d\d-\d\dT\d\d:\d\d:\d\d\.\d+ #\d+\] INFO -- : \(\d\.\d+s\) SELECT \* FROM `sqlite_master` WHERE \(type = 'table' AND NOT name = 'sqlite_sequence'\)\n} end it "-I should include directory in load path" do bin(:args=>'-Ifoo -c "p 1 if $:.include?(\'foo\')"').should == "1\n" end it "-l should log SQL statements to file" do bin(:args=>"-l #{TMP_FILE} -c DB.tables").should == '' File.read(TMP_FILE).should =~ %r{I, \[\d{4}-\d\d-\d\dT\d\d:\d\d:\d\d\.\d+ #\d+\] INFO -- : \(\d\.\d+s\) PRAGMA foreign_keys = 1\nI, \[\d{4}-\d\d-\d\dT\d\d:\d\d:\d\d\.\d+ #\d+\] INFO -- : \(\d\.\d+s\) PRAGMA case_sensitive_like = 1\nI, \[\d{4}-\d\d-\d\dT\d\d:\d\d:\d\d\.\d+ #\d+\] INFO -- : \(\d\.\d+s\) SELECT \* FROM `sqlite_master` WHERE \(type = 'table' AND NOT name = 'sqlite_sequence'\)\n} end it "-L should load all *.rb files in given directory" do bin(:args=>'-L ./lib/sequel/connection_pool -c "p [Sequel::SingleConnectionPool, Sequel::ThreadedConnectionPool, Sequel::ShardedSingleConnectionPool, Sequel::ShardedThreadedConnectionPool].length"').should == "4\n" end it "-m should migrate database up" do bin(:args=>"-m spec/files/integer_migrations").should == '' DB.tables.sort_by{|t| t.to_s}.should == [:schema_info, :sm1111, :sm2222, :sm3333] end it "-M should specify version to migrate to" do bin(:args=>"-m spec/files/integer_migrations -M 2").should == '' DB.tables.sort_by{|t| t.to_s}.should == [:schema_info, :sm1111, :sm2222] end it "-N should not test for a valid connection" do bin(:no_conn=>true, :args=>"-c '' -N #{CONN_PREFIX}spec/nonexistent/foo").should == '' bin(:no_conn=>true, :args=>"-c '' #{CONN_PREFIX}spec/nonexistent/foo", :stderr=>true).should =~ /\AError: Sequel::DatabaseConnectionError: / end it "-r should require a given library" do bin(:args=>'-rsequel/extensions/sql_expr -c "print DB.literal(1.sql_expr)"').should == "1" end it "-S should dump the schema cache" do bin(:args=>"-S #{TMP_FILE}").should == '' Marshal.load(File.read(TMP_FILE)).should == {} DB.create_table(:a){Integer :a} bin(:args=>"-S #{TMP_FILE}").should == '' Marshal.load(File.read(TMP_FILE)).should == {"`a`"=>[[:a, {:type=>:integer, :db_type=>"integer", :ruby_default=>nil, :allow_null=>true, :default=>nil, 
:primary_key=>false}]]} end it "-t should output full backtraces on error" do bin(:args=>'-c "lambda{lambda{lambda{raise \'foo\'}.call}.call}.call"', :stderr=>true).count("\n").should < 3 bin(:args=>'-t -c "lambda{lambda{lambda{raise \'foo\'}.call}.call}.call"', :stderr=>true).count("\n").should > 3 end it "-v should output the Sequel version" do bin(:args=>"-v", :no_conn=>true).should == "sequel #{Sequel.version}\n" end it "should error if using -M without -m" do bin(:args=>'-M 2', :stderr=>true).should == "Error: Must specify -m if using -M\n" end it "should error if using mutually exclusive options together" do bin(:args=>'-c foo -d', :stderr=>true).should == "Error: Cannot specify -c and -d together\n" bin(:args=>'-D -d', :stderr=>true).should == "Error: Cannot specify -D and -d together\n" bin(:args=>'-m foo -d', :stderr=>true).should == "Error: Cannot specify -m and -d together\n" bin(:args=>'-S foo -d', :stderr=>true).should == "Error: Cannot specify -S and -d together\n" end it "should use a mock database if no database is given" do bin(:args=>'-c "print DB.adapter_scheme"', :no_conn=>true).should == "mock" end it "should work with a yaml config file" do File.open(TMP_FILE, 'wb'){|f| f.write(YAML.dump(CONN_HASH))} bin(:args=>"-c \"print DB.tables.inspect\" #{TMP_FILE}", :no_conn=>true).should == "[]" DB.create_table(:a){Integer :a} bin(:args=>"-c \"print DB.tables.inspect\" #{TMP_FILE}", :no_conn=>true).should == "[:a]" end it "should work with a yaml config file with string keys" do h = {} CONN_HASH.each{|k,v| h[k.to_s] = v} File.open(TMP_FILE, 'wb'){|f| f.write(YAML.dump(h))} DB.create_table(:a){Integer :a} bin(:args=>"-c \"print DB.tables.inspect\" #{TMP_FILE}", :no_conn=>true).should == "[:a]" end it "should work with a yaml config file with environments" do File.open(TMP_FILE, 'wb'){|f| f.write(YAML.dump(:development=>CONN_HASH))} bin(:args=>"-c \"print DB.tables.inspect\" #{TMP_FILE}", :no_conn=>true).should == "[]" DB.create_table(:a){Integer :a} bin(:args=>"-c \"print DB.tables.inspect\" #{TMP_FILE}", :no_conn=>true).should == "[:a]" end it "-e should set environment for yaml config file" do File.open(TMP_FILE, 'wb'){|f| f.write(YAML.dump(:foo=>CONN_HASH))} bin(:args=>"-c \"print DB.tables.inspect\" -e foo #{TMP_FILE}", :no_conn=>true).should == "[]" DB.create_table(:a){Integer :a} bin(:args=>"-c \"print DB.tables.inspect\" -e foo #{TMP_FILE}", :no_conn=>true).should == "[:a]" File.open(TMP_FILE, 'wb'){|f| f.write(YAML.dump('foo'=>CONN_HASH))} bin(:args=>"-c \"print DB.tables.inspect\" -e foo #{TMP_FILE}", :no_conn=>true).should == "[:a]" end it "should run code in given filenames" do File.open(TMP_FILE, 'wb'){|f| f.write('print DB.tables.inspect')} bin(:post=>TMP_FILE).should == '[]' DB.create_table(:a){Integer :a} bin(:post=>TMP_FILE).should == '[:a]' end it "should run code provided on stdin" do bin(:pre=>'echo print DB.tables.inspect | ').should == '[]' DB.create_table(:a){Integer :a} bin(:pre=>'echo print DB.tables.inspect | ').should == '[:a]' end end 
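# NOTE: The flags exercised by the bin() helper above map directly to shell
# usage of bin/sequel.  A sketch of equivalent invocations (the app.db file
# name is hypothetical):
#
#   sequel -c "p DB.tables" sqlite://app.db       # run one-off code against a database
#   sequel -d sqlite://app.db                     # dump a database-independent migration
#   sequel -m ./migrations -M 2 sqlite://app.db   # migrate the database to version 2
#   sequel -E -c DB.tables sqlite://app.db        # echo executed SQL to stdout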
ruby-sequel-4.1.1/spec/core/000077500000000000000000000000001220156535500156605ustar00rootroot00000000000000ruby-sequel-4.1.1/spec/core/connection_pool_spec.rb000066400000000000000000001007631220156535500224160ustar00rootroot00000000000000require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper') CONNECTION_POOL_DEFAULTS = {:pool_timeout=>5, :pool_sleep_time=>0.001, :max_connections=>4} mock_db = lambda do |*a, &b| db = Sequel.mock (class << db; self end).send(:define_method, :connect){|c| b.arity == 1 ? b.call(c) : b.call} if b if b2 = a.shift (class << db; self end).send(:define_method, :disconnect_connection){|c| b2.arity == 1 ? b2.call(c) : b2.call} end db end describe "An empty ConnectionPool" do before do @cpool = Sequel::ConnectionPool.get_pool(mock_db.call, CONNECTION_POOL_DEFAULTS) end specify "should have no available connections" do @cpool.available_connections.should == [] end specify "should have no allocated connections" do @cpool.allocated.should == {} end specify "should have a created_count of zero" do @cpool.created_count.should == 0 end end describe "ConnectionPool options" do specify "should support string option values" do cpool = Sequel::ConnectionPool.get_pool(mock_db.call, {:max_connections=>'5', :pool_timeout=>'3', :pool_sleep_time=>'0.01'}) cpool.max_size.should == 5 cpool.instance_variable_get(:@timeout).should == 3 cpool.instance_variable_get(:@sleep_time).should == 0.01 end specify "should raise an error unless size is positive" do lambda{Sequel::ConnectionPool.get_pool(mock_db.call{1}, :max_connections=>0)}.should raise_error(Sequel::Error) lambda{Sequel::ConnectionPool.get_pool(mock_db.call{1}, :max_connections=>-10)}.should raise_error(Sequel::Error) lambda{Sequel::ConnectionPool.get_pool(mock_db.call{1}, :max_connections=>'-10')}.should raise_error(Sequel::Error) lambda{Sequel::ConnectionPool.get_pool(mock_db.call{1}, :max_connections=>'0')}.should raise_error(Sequel::Error) end end describe "A connection pool handling connections" do before do @max_size = 2 msp = proc{@max_size=3} @cpool = Sequel::ConnectionPool.get_pool(mock_db.call(proc{|c| msp.call}){:got_connection}, CONNECTION_POOL_DEFAULTS.merge(:max_connections=>@max_size)) end specify "#hold should increment #created_count" do @cpool.hold do @cpool.created_count.should == 1 @cpool.hold {@cpool.hold {@cpool.created_count.should == 1}} Thread.new{@cpool.hold {@cpool.created_count.should == 2}}.join end end specify "#hold should add the connection to the #allocated array" do @cpool.hold do @cpool.allocated.size.should == 1 @cpool.allocated.should == {Thread.current=>:got_connection} end end specify "#hold should yield a 
new connection" do @cpool.hold {|conn| conn.should == :got_connection} end specify "a connection should be de-allocated after it has been used in #hold" do @cpool.hold {} @cpool.allocated.size.should == 0 end specify "#hold should return the value of its block" do @cpool.hold {:block_return}.should == :block_return end if RUBY_VERSION < '1.9.0' and !defined?(RUBY_ENGINE) specify "#hold should remove dead threads from the pool if it reaches its max_size" do Thread.new{@cpool.hold{Thread.current.exit!}}.join @cpool.allocated.keys.map{|t| t.alive?}.should == [false] Thread.new{@cpool.hold{Thread.current.exit!}}.join @cpool.allocated.keys.map{|t| t.alive?}.should == [false, false] Thread.new{@cpool.hold{}}.join @cpool.allocated.should == {} end end specify "#make_new should not make more than max_size connections" do q = Queue.new 50.times{Thread.new{@cpool.hold{q.pop}}} 50.times{q.push nil} @cpool.created_count.should <= @max_size end specify "database's disconnect connection method should be called when a disconnect is detected" do @max_size.should == 2 proc{@cpool.hold{raise Sequel::DatabaseDisconnectError}}.should raise_error(Sequel::DatabaseDisconnectError) @max_size.should == 3 end specify "#hold should remove the connection if a DatabaseDisconnectError is raised" do @cpool.created_count.should == 0 q, q1 = Queue.new, Queue.new @cpool.hold{Thread.new{@cpool.hold{q1.pop; q.push nil}; q1.pop; q.push nil}; q1.push nil; q.pop; q1.push nil; q.pop} @cpool.created_count.should == 2 proc{@cpool.hold{raise Sequel::DatabaseDisconnectError}}.should raise_error(Sequel::DatabaseDisconnectError) @cpool.created_count.should == 1 proc{@cpool.hold{raise Sequel::DatabaseDisconnectError}}.should raise_error(Sequel::DatabaseDisconnectError) @cpool.created_count.should == 0 proc{@cpool.hold{raise Sequel::DatabaseDisconnectError}}.should raise_error(Sequel::DatabaseDisconnectError) @cpool.created_count.should == 0 end end describe "A connection pool handling connection errors" do specify "#hold should raise a Sequel::DatabaseConnectionError if an exception is raised by the connection_proc" do cpool = Sequel::ConnectionPool.get_pool(CONNECTION_POOL_DEFAULTS){raise Interrupt} proc{cpool.hold{:block_return}}.should raise_error(Sequel::DatabaseConnectionError) cpool.created_count.should == 0 end specify "#hold should raise a Sequel::DatabaseConnectionError if nil is returned by the connection_proc" do cpool = Sequel::ConnectionPool.get_pool(CONNECTION_POOL_DEFAULTS){nil} proc{cpool.hold{:block_return}}.should raise_error(Sequel::DatabaseConnectionError) cpool.created_count.should == 0 end end describe "ConnectionPool#hold" do before do value = 0 c = @c = Class.new do define_method(:initialize){value += 1} define_method(:value){value} end @pool = Sequel::ConnectionPool.get_pool(mock_db.call{c.new}, CONNECTION_POOL_DEFAULTS) end specify "shoulda use the database's connect method to get new connections" do res = nil @pool.hold {|c| res = c} res.should be_a_kind_of(@c) res.value.should == 1 @pool.hold {|c| res = c} res.should be_a_kind_of(@c) res.value.should == 1 # the connection maker is invoked only once end specify "should be re-entrant by the same thread" do cc = nil @pool.hold {|c| @pool.hold {|c1| @pool.hold {|c2| cc = c2}}} cc.should be_a_kind_of(@c) end specify "should catch exceptions and reraise them" do proc {@pool.hold {|c| c.foobar}}.should raise_error(NoMethodError) end end describe "A connection pool with a max size of 1" do before do @invoked_count = 0 icp = proc{@invoked_count += 1} @pool = 
Sequel::ConnectionPool.get_pool(mock_db.call{icp.call; 'herro'}, CONNECTION_POOL_DEFAULTS.merge(:max_connections=>1)) end specify "should let only one thread access the connection at any time" do cc,c1, c2 = nil q, q1 = Queue.new, Queue.new t1 = Thread.new {@pool.hold {|c| cc = c; c1 = c.dup; q1.push nil; q.pop}} q1.pop cc.should == 'herro' c1.should == 'herro' t2 = Thread.new {@pool.hold {|c| c2 = c.dup; q1.push nil; q.pop;}} # connection held by t1 t1.should be_alive t2.should be_alive cc.should == 'herro' c1.should == 'herro' c2.should be_nil @pool.available_connections.should be_empty @pool.allocated.should == {t1=>cc} cc.gsub!('rr', 'll') q.push nil q1.pop t1.join t2.should be_alive c2.should == 'hello' @pool.available_connections.should be_empty @pool.allocated.should == {t2=>cc} #connection released q.push nil t2.join @invoked_count.should == 1 @pool.size.should == 1 @pool.available_connections.should == [cc] @pool.allocated.should be_empty end specify "should let the same thread reenter #hold" do c1, c2, c3 = nil @pool.hold do |c| c1 = c @pool.hold do |cc2| c2 = cc2 @pool.hold do |cc3| c3 = cc3 end end end c1.should == 'herro' c2.should == 'herro' c3.should == 'herro' @invoked_count.should == 1 @pool.size.should == 1 @pool.available_connections.size.should == 1 @pool.allocated.should be_empty end end shared_examples_for "A threaded connection pool" do specify "should not have all_connections yield connections allocated to other threads" do pool = Sequel::ConnectionPool.get_pool(mock_db.call(&@icpp), @cp_opts.merge(:max_connections=>2, :pool_timeout=>0)) q, q1 = Queue.new, Queue.new t = Thread.new do pool.hold do |c1| q1.push nil q.pop end end pool.hold do |c1| q1.pop pool.all_connections{|c| c.should == c1} q.push nil end t.join end specify "should wait until a connection is available if all are checked out" do pool = Sequel::ConnectionPool.get_pool(mock_db.call(&@icpp), @cp_opts.merge(:max_connections=>1, :pool_timeout=>0.1, :pool_sleep_time=>0)) q, q1 = Queue.new, Queue.new t = Thread.new do pool.hold do |c| q1.push nil 3.times{Thread.pass} q.pop end end q1.pop proc{pool.hold{}}.should raise_error(Sequel::PoolTimeout) q.push nil t.join end specify "should not have all_connections yield all available connections" do pool = Sequel::ConnectionPool.get_pool(mock_db.call(&@icpp), @cp_opts.merge(:max_connections=>2, :pool_timeout=>0)) q, q1 = Queue.new, Queue.new b = [] t = Thread.new do pool.hold do |c1| b << c1 q1.push nil q.pop end end pool.hold do |c1| q1.pop b << c1 q.push nil end t.join a = [] pool.all_connections{|c| a << c} a.sort.should == b.sort end specify "should raise a PoolTimeout error if a connection couldn't be acquired before timeout" do q, q1 = Queue.new, Queue.new pool = Sequel::ConnectionPool.get_pool(mock_db.call(&@icpp), @cp_opts.merge(:max_connections=>1, :pool_timeout=>0)) t = Thread.new{pool.hold{|c| q1.push nil; q.pop}} q1.pop proc{pool.hold{|c|}}.should raise_error(Sequel::PoolTimeout) q.push nil t.join end it "should not add a disconnected connection back to the pool if the disconnection_proc raises an error" do pool = Sequel::ConnectionPool.get_pool(mock_db.call(proc{|c| raise Sequel::Error}, &@icpp), @cp_opts.merge(:max_connections=>1, :pool_timeout=>0)) proc{pool.hold{raise Sequel::DatabaseDisconnectError}}.should raise_error(Sequel::Error) pool.available_connections.length.should == 0 end specify "should let five threads simultaneously access separate connections" do cc = {} threads = [] q, q1, q2 = Queue.new, Queue.new, Queue.new 5.times{|i| threads << 
Thread.new{@pool.hold{|c| q.pop; cc[i] = c; q1.push nil; q2.pop}}; q.push nil; q1.pop} threads.each {|t| t.should be_alive} cc.size.should == 5 @invoked_count.should == 5 @pool.size.should == 5 @pool.available_connections.should be_empty h = {} i = 0 threads.each{|t| h[t] = (i+=1)} @pool.allocated.should == h @pool.available_connections.should == [] 5.times{q2.push nil} threads.each{|t| t.join} @pool.available_connections.size.should == 5 @pool.allocated.should be_empty end specify "should block threads until a connection becomes available" do cc = {} threads = [] q, q1 = Queue.new, Queue.new 5.times{|i| threads << Thread.new{@pool.hold{|c| cc[i] = c; q1.push nil; q.pop}}} 5.times{q1.pop} threads.each {|t| t.should be_alive} @pool.available_connections.should be_empty 3.times {|i| threads << Thread.new {@pool.hold {|c| cc[i + 5] = c; q1.push nil}}} threads[5].should be_alive threads[6].should be_alive threads[7].should be_alive cc.size.should == 5 cc[5].should be_nil cc[6].should be_nil cc[7].should be_nil 5.times{q.push nil} 5.times{|i| threads[i].join} 3.times{q1.pop} 3.times{|i| threads[i+5].join} threads.each {|t| t.should_not be_alive} @pool.size.should == 5 @invoked_count.should == 5 @pool.available_connections.size.should == 5 @pool.allocated.should be_empty end specify "should store connections in a stack if :connection_handling=>:stack" do @pool = Sequel::ConnectionPool.get_pool(mock_db.call(&@icpp), @cp_opts.merge(:connection_handling=>:stack)) c2 = nil c = @pool.hold{|cc| Thread.new{@pool.hold{|cc2| c2 = cc2}}.join; cc} @pool.size.should == 2 @pool.hold{|cc| cc.should == c} @pool.hold{|cc| cc.should == c} @pool.hold do |cc| cc.should == c Thread.new{@pool.hold{|cc2| cc2.should == c2}} end end specify "should store connections in a queue if :connection_handling=>:queue" do @pool = Sequel::ConnectionPool.get_pool(mock_db.call(&@icpp), @cp_opts.merge(:connection_handling=>:queue)) c2 = nil c = @pool.hold{|cc| Thread.new{@pool.hold{|cc2| c2 = cc2}}.join; cc} @pool.size.should == 2 @pool.hold{|cc| cc.should == c2} @pool.hold{|cc| cc.should == c} @pool.hold do |cc| cc.should == c2 Thread.new{@pool.hold{|cc2| cc2.should == c}} end end specify "should not store connections if :connection_handling=>:disconnect" do @pool = Sequel::ConnectionPool.get_pool(mock_db.call(&@icpp), @cp_opts.merge(:connection_handling=>:disconnect)) d = [] meta_def(@pool.db, :disconnect_connection){|c| d << c} @pool.hold do |cc| cc.should == 1 Thread.new{@pool.hold{|cc2| cc2.should == 2}}.join d.should == [2] @pool.hold{|cc3| cc3.should == 1} end @pool.size.should == 0 d.should == [2, 1] @pool.hold{|cc| cc.should == 3} @pool.size.should == 0 d.should == [2, 1, 3] @pool.hold{|cc| cc.should == 4} @pool.size.should == 0 d.should == [2, 1, 3, 4] end end describe "Threaded Unsharded Connection Pool" do before do @invoked_count = 0 @icpp = proc{@invoked_count += 1} @cp_opts = CONNECTION_POOL_DEFAULTS.merge(:max_connections=>5) @pool = Sequel::ConnectionPool.get_pool(mock_db.call(&@icpp), @cp_opts) end it_should_behave_like "A threaded connection pool" end describe "Threaded Sharded Connection Pool" do before do @invoked_count = 0 @icpp = proc{@invoked_count += 1} @cp_opts = CONNECTION_POOL_DEFAULTS.merge(:max_connections=>5, :servers=>{}) @pool = Sequel::ConnectionPool.get_pool(mock_db.call(&@icpp), @cp_opts) end it_should_behave_like "A threaded connection pool" end describe "ConnectionPool#disconnect" do before do @count = 0 cp = proc{@count += 1} @pool = Sequel::ConnectionPool.get_pool(mock_db.call{{:id => 
cp.call}}, CONNECTION_POOL_DEFAULTS.merge(:max_connections=>5, :servers=>{})) threads = [] q, q1 = Queue.new, Queue.new 5.times {|i| threads << Thread.new {@pool.hold {|c| q1.push nil; q.pop}}} 5.times{q1.pop} 5.times{q.push nil} threads.each {|t| t.join} end specify "should invoke the given block for each available connection" do @pool.size.should == 5 @pool.available_connections.size.should == 5 @pool.available_connections.each {|c| c[:id].should_not be_nil} conns = [] meta_def(@pool.db, :disconnect_connection){|c| conns << c} @pool.disconnect conns.size.should == 5 end specify "should remove all available connections" do @pool.size.should == 5 @pool.disconnect @pool.size.should == 0 end specify "should disconnect connections in use as soon as they are no longer in use" do @pool.size.should == 5 @pool.hold do |conn| @pool.available_connections.size.should == 4 @pool.available_connections.each {|c| c.should_not be(conn)} conns = [] meta_def(@pool.db, :disconnect_connection){|c| conns << c} @pool.disconnect conns.size.should == 4 @pool.size.should == 1 end @pool.size.should == 0 end end describe "A connection pool with multiple servers" do before do ic = @invoked_counts = Hash.new(0) @pool = Sequel::ConnectionPool.get_pool(mock_db.call{|server| "#{server}#{ic[server] += 1}"}, CONNECTION_POOL_DEFAULTS.merge(:servers=>{:read_only=>{}})) end specify "#all_connections should return connections for all servers" do @pool.hold{} @pool.all_connections{|c1| c1.should == "default1"} a = [] @pool.hold(:read_only) do |c| @pool.all_connections{|c1| a << c1} end a.sort_by{|c| c.to_s}.should == ["default1", "read_only1"] end specify "#servers should return symbols for all servers" do @pool.servers.sort_by{|s| s.to_s}.should == [:default, :read_only] end specify "should use the :default server by default" do @pool.size.should == 0 @pool.hold do |c| c.should == "default1" @pool.allocated.should == {Thread.current=>"default1"} end @pool.available_connections.should == ["default1"] @pool.size.should == 1 @invoked_counts.should == {:default=>1} end specify "should use the :default server if an invalid server is used" do @pool.hold do |c1| c1.should == "default1" @pool.hold(:blah) do |c2| c2.should == c1 @pool.hold(:blah2) do |c3| c2.should == c3 end end end end specify "should support a :servers_hash option used for converting the server argument" do ic = @invoked_counts @pool = Sequel::ConnectionPool.get_pool(mock_db.call{|server| "#{server}#{ic[server] += 1}"}, CONNECTION_POOL_DEFAULTS.merge(:servers_hash=>Hash.new(:read_only), :servers=>{:read_only=>{}})) @pool.hold(:blah) do |c1| c1.should == "read_only1" @pool.hold(:blah) do |c2| c2.should == c1 @pool.hold(:blah2) do |c3| c2.should == c3 end end end @pool = Sequel::ConnectionPool.get_pool(mock_db.call{|server| "#{server}#{ic[server] += 1}"}, CONNECTION_POOL_DEFAULTS.merge(:servers_hash=>Hash.new{|h,k| raise Sequel::Error}, :servers=>{:read_only=>{}})) proc{@pool.hold(:blah){|c1|}}.should raise_error(Sequel::Error) end specify "should use the requested server if server is given" do @pool.size(:read_only).should == 0 @pool.hold(:read_only) do |c| c.should == "read_only1" @pool.allocated(:read_only).should == {Thread.current=>"read_only1"} end @pool.available_connections(:read_only).should == ["read_only1"] @pool.size(:read_only).should == 1 @invoked_counts.should == {:read_only=>1} end specify "#hold should only yield connections for the server requested" do @pool.hold(:read_only) do |c| c.should == "read_only1" @pool.allocated(:read_only).should == 
{Thread.current=>"read_only1"} @pool.hold do |d| d.should == "default1" @pool.hold do |e| e.should == d @pool.hold(:read_only){|b| b.should == c} end @pool.allocated.should == {Thread.current=>"default1"} end end @invoked_counts.should == {:read_only=>1, :default=>1} end specify "#disconnect should disconnect from all servers" do @pool.hold(:read_only){} @pool.hold{} conns = [] @pool.size.should == 1 @pool.size(:read_only).should == 1 meta_def(@pool.db, :disconnect_connection){|c| conns << c} @pool.disconnect conns.sort.should == %w'default1 read_only1' @pool.size.should == 0 @pool.size(:read_only).should == 0 @pool.hold(:read_only){|c| c.should == 'read_only2'} @pool.hold{|c| c.should == 'default2'} end specify "#add_servers should add new servers to the pool" do pool = Sequel::ConnectionPool.get_pool(mock_db.call{|s| s}, :servers=>{:server1=>{}}) pool.hold{} pool.hold(:server2){} pool.hold(:server3){} pool.hold(:server1) do pool.allocated.length.should == 0 pool.allocated(:server1).length.should == 1 pool.allocated(:server2).should == nil pool.allocated(:server3).should == nil pool.available_connections.length.should == 1 pool.available_connections(:server1).length.should == 0 pool.available_connections(:server2).should == nil pool.available_connections(:server3).should == nil pool.add_servers([:server2, :server3]) pool.hold(:server2){} pool.hold(:server3) do pool.allocated.length.should == 0 pool.allocated(:server1).length.should == 1 pool.allocated(:server2).length.should == 0 pool.allocated(:server3).length.should == 1 pool.available_connections.length.should == 1 pool.available_connections(:server1).length.should == 0 pool.available_connections(:server2).length.should == 1 pool.available_connections(:server3).length.should == 0 end end end specify "#add_servers should ignore existing keys" do pool = Sequel::ConnectionPool.get_pool(mock_db.call{|s| s}, :servers=>{:server1=>{}}) pool.allocated.length.should == 0 pool.allocated(:server1).length.should == 0 pool.available_connections.length.should == 0 pool.available_connections(:server1).length.should == 0 pool.hold do |c1| c1.should == :default pool.allocated.length.should == 1 pool.allocated(:server1).length.should == 0 pool.available_connections.length.should == 0 pool.available_connections(:server1).length.should == 0 pool.hold(:server1) do |c2| c2.should == :server1 pool.allocated.length.should == 1 pool.allocated(:server1).length.should == 1 pool.available_connections.length.should == 0 pool.available_connections(:server1).length.should == 0 pool.add_servers([:default, :server1]) pool.allocated.length.should == 1 pool.allocated(:server1).length.should == 1 pool.available_connections.length.should == 0 pool.available_connections(:server1).length.should == 0 end pool.allocated.length.should == 1 pool.allocated(:server1).length.should == 0 pool.available_connections.length.should == 0 pool.available_connections(:server1).length.should == 1 pool.add_servers([:default, :server1]) pool.allocated.length.should == 1 pool.allocated(:server1).length.should == 0 pool.available_connections.length.should == 0 pool.available_connections(:server1).length.should == 1 end pool.allocated.length.should == 0 pool.allocated(:server1).length.should == 0 pool.available_connections.length.should == 1 pool.available_connections(:server1).length.should == 1 pool.add_servers([:default, :server1]) pool.allocated.length.should == 0 pool.allocated(:server1).length.should == 0 pool.available_connections.length.should == 1 
pool.available_connections(:server1).length.should == 1 end specify "#remove_servers should disconnect available connections immediately" do pool = Sequel::ConnectionPool.get_pool(mock_db.call{|s| s}, :max_connections=>5, :servers=>{:server1=>{}}) threads = [] q, q1 = Queue.new, Queue.new 5.times {|i| threads << Thread.new {pool.hold(:server1){|c| q1.push nil; q.pop}}} 5.times{q1.pop} 5.times{q.push nil} threads.each {|t| t.join} pool.size(:server1).should == 5 pool.remove_servers([:server1]) pool.size(:server1).should == 0 end specify "#remove_servers should disconnect connections in use as soon as they are returned to the pool" do dc = [] pool = Sequel::ConnectionPool.get_pool(mock_db.call(proc{|c| dc << c}){|c| c}, :servers=>{:server1=>{}}) c1 = nil pool.hold(:server1) do |c| pool.size(:server1).should == 1 dc.should == [] pool.remove_servers([:server1]) pool.size(:server1).should == 0 dc.should == [] c1 = c end pool.size(:server1).should == 0 dc.should == [c1] end specify "#remove_servers should remove server related data structures immediately" do pool = Sequel::ConnectionPool.get_pool(mock_db.call{|s| s}, :servers=>{:server1=>{}}) pool.available_connections(:server1).should == [] pool.allocated(:server1).should == {} pool.remove_servers([:server1]) pool.available_connections(:server1).should == nil pool.allocated(:server1).should == nil end specify "#remove_servers should not allow the removal of the default server" do pool = Sequel::ConnectionPool.get_pool(mock_db.call{|s| s}, :servers=>{:server1=>{}}) proc{pool.remove_servers([:server1])}.should_not raise_error proc{pool.remove_servers([:default])}.should raise_error(Sequel::Error) end specify "#remove_servers should ignore servers that have already been removed" do dc = [] pool = Sequel::ConnectionPool.get_pool(mock_db.call(proc{|c| dc << c}){|c| c}, :servers=>{:server1=>{}}) c1 = nil pool.hold(:server1) do |c| pool.size(:server1).should == 1 dc.should == [] pool.remove_servers([:server1]) pool.remove_servers([:server1]) pool.size(:server1).should == 0 dc.should == [] c1 = c end pool.size(:server1).should == 0 dc.should == [c1] end end ST_CONNECTION_POOL_DEFAULTS = CONNECTION_POOL_DEFAULTS.merge(:single_threaded=>true) describe "SingleConnectionPool" do before do @pool = Sequel::ConnectionPool.get_pool(mock_db.call{1234}, ST_CONNECTION_POOL_DEFAULTS) end specify "should provide a #hold method" do conn = nil @pool.hold{|c| conn = c} conn.should == 1234 end specify "should provide a #disconnect method" do conn = nil x = nil pool = Sequel::ConnectionPool.get_pool(mock_db.call(proc{|c| conn = c}){1234}, ST_CONNECTION_POOL_DEFAULTS) pool.hold{|c| x = c} x.should == 1234 pool.disconnect conn.should == 1234 end end describe "A single threaded pool with multiple servers" do before do @max_size=2 msp = proc{@max_size += 1} @pool = Sequel::ConnectionPool.get_pool(mock_db.call(proc{|c| msp.call}){|c| c}, ST_CONNECTION_POOL_DEFAULTS.merge(:servers=>{:read_only=>{}})) end specify "#all_connections should return connections for all servers" do @pool.hold{} @pool.all_connections{|c1| c1.should == :default} a = [] @pool.hold(:read_only) do @pool.all_connections{|c1| a << c1} end a.sort_by{|c| c.to_s}.should == [:default, :read_only] end specify "#servers should return symbols for all servers" do @pool.servers.sort_by{|s| s.to_s}.should == [:default, :read_only] end specify "#add_servers should add new servers to the pool" do @pool.hold(:blah){|c| c.should == :default} @pool.add_servers([:blah]) @pool.hold(:blah){|c| c.should == :blah} end specify 
"#add_servers should ignore keys already existing" do @pool.hold{|c| c.should == :default} @pool.hold(:read_only){|c| c.should == :read_only} @pool.add_servers([:default, :read_only]) @pool.conn.should == :default @pool.conn(:read_only).should == :read_only end specify "#remove_servers should remove servers from the pool" do @pool.hold(:read_only){|c| c.should == :read_only} @pool.remove_servers([:read_only]) @pool.hold(:read_only){|c| c.should == :default} end specify "#remove_servers should not allow the removal of the default server" do proc{@pool.remove_servers([:default])}.should raise_error(Sequel::Error) end specify "#remove_servers should disconnect connection immediately" do @pool.hold(:read_only){|c| c.should == :read_only} @pool.conn(:read_only).should == :read_only @pool.remove_servers([:read_only]) @pool.conn(:read_only).should == nil @pool.hold{} @pool.conn(:read_only).should == :default end specify "#remove_servers should ignore keys that do not exist" do proc{@pool.remove_servers([:blah])}.should_not raise_error end specify "should use the :default server by default" do @pool.hold{|c| c.should == :default} @pool.conn.should == :default end specify "should use the :default server an invalid server is used" do @pool.hold do |c1| c1.should == :default @pool.hold(:blah) do |c2| c2.should == c1 @pool.hold(:blah2) do |c3| c2.should == c3 end end end end specify "should use the requested server if server is given" do @pool.hold(:read_only){|c| c.should == :read_only} @pool.conn(:read_only).should == :read_only end specify "#hold should only yield connections for the server requested" do @pool.hold(:read_only) do |c| c.should == :read_only @pool.hold do |d| d.should == :default @pool.hold do |e| e.should == d @pool.hold(:read_only){|b| b.should == c} end end end @pool.conn.should == :default @pool.conn(:read_only).should == :read_only end specify "#disconnect should disconnect from all servers" do @pool.hold(:read_only){} @pool.hold{} @pool.conn.should == :default @pool.conn(:read_only).should == :read_only @pool.disconnect @max_size.should == 4 @pool.conn.should == nil @pool.conn(:read_only).should == nil end specify ":disconnection_proc option should set the disconnection proc to use" do @max_size.should == 2 proc{@pool.hold{raise Sequel::DatabaseDisconnectError}}.should raise_error(Sequel::DatabaseDisconnectError) @max_size.should == 3 end specify "#hold should remove the connection if a DatabaseDisconnectError is raised" do @pool.instance_variable_get(:@conns).length.should == 0 @pool.hold{} @pool.instance_variable_get(:@conns).length.should == 1 proc{@pool.hold{raise Sequel::DatabaseDisconnectError}}.should raise_error(Sequel::DatabaseDisconnectError) @pool.instance_variable_get(:@conns).length.should == 0 end end shared_examples_for "All connection pools classes" do specify "should have pool_type return a symbol" do @class.new(mock_db.call{123}, {}).pool_type.should be_a_kind_of(Symbol) end specify "should have all_connections yield current and available connections" do p = @class.new(mock_db.call{123}, {}) p.hold{|c| p.all_connections{|c1| c.should == c1}} end specify "should be able to modify after_connect proc after the pool is created" do a = [] p = @class.new(mock_db.call{123}, {}) p.after_connect = pr = proc{|c| a << c} p.after_connect.should == pr a.should == [] p.hold{} a.should == [123] end specify "should not raise an error when disconnecting twice" do c = @class.new(mock_db.call{123}, {}) proc{c.disconnect}.should_not raise_error proc{c.disconnect}.should_not 
raise_error end specify "should yield a connection created by the initialize block to hold" do x = nil @class.new(mock_db.call{123}, {}).hold{|c| x = c} x.should == 123 end specify "should have the initialize block accept a shard/server argument" do x = nil @class.new(mock_db.call{|c| [c, c]}, {}).hold{|c| x = c} x.should == [:default, :default] end specify "should respect an :after_connect proc that is called with each newly created connection" do x = nil @class.new(mock_db.call{123}, :after_connect=>proc{|c| x = [c, c]}).hold{} x.should == [123, 123] end specify "should raise a DatabaseConnectionError if the connection raises an exception" do proc{@class.new(mock_db.call{|c| raise Exception}, {}).hold{}}.should raise_error(Sequel::DatabaseConnectionError) end specify "should raise a DatabaseConnectionError if the initialize block returns nil" do proc{@class.new(mock_db.call{}, {}).hold{}}.should raise_error(Sequel::DatabaseConnectionError) end specify "should call the disconnection_proc option if the hold block raises a DatabaseDisconnectError" do x = nil proc{@class.new(mock_db.call(proc{|c| x = c}){123}).hold{raise Sequel::DatabaseDisconnectError}}.should raise_error(Sequel::DatabaseDisconnectError) x.should == 123 end specify "should have a disconnect method that disconnects the connection" do x = nil c = @class.new(mock_db.call(proc{|c1| x = c1}){123}) c.hold{} x.should == nil c.disconnect x.should == 123 end specify "should have a reentrant hold method" do o = Object.new c = @class.new(mock_db.call{o}, {}) c.hold do |x| x.should == o c.hold do |x1| x1.should == o c.hold do |x2| x2.should == o end end end end specify "should have a servers method that returns an array of shard/server symbols" do @class.new(mock_db.call{123}, {}).servers.should == [:default] end specify "should have a size method that returns the number of connections in the pool" do c = @class.new(mock_db.call{123}, {}) c.size.should == 0 c.hold{} c.size.should == 1 end end Sequel::ConnectionPool::CONNECTION_POOL_MAP.keys.each do |k, v| opts = {:single_threaded=>k, :servers=>(v ? 
{} : nil)} describe "Connection pool with #{opts.inspect}" do before(:all) do Sequel::ConnectionPool.send(:get_pool, mock_db.call, opts) end before do @class = Sequel::ConnectionPool.send(:connection_pool_class, opts) end it_should_behave_like "All connection pools classes" end end ruby-sequel-4.1.1/spec/core/database_spec.rb000066400000000000000000002666011220156535500207760ustar00rootroot00000000000000require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper') describe "A new Database" do before do @db = Sequel::Database.new(1 => 2, :logger => 3) end after do Sequel.quote_identifiers = false Sequel.identifier_input_method = nil Sequel.identifier_output_method = nil end specify "should receive options" do @db.opts[1].should == 2 @db.opts[:logger].should == 3 end specify "should set the logger from opts[:logger] and opts[:loggers]" do @db.loggers.should == [3] Sequel::Database.new(1 => 2, :loggers => 3).loggers.should == [3] Sequel::Database.new(1 => 2, :loggers => [3]).loggers.should == [3] Sequel::Database.new(1 => 2, :logger => 4, :loggers => 3).loggers.should == [4,3] Sequel::Database.new(1 => 2, :logger => [4], :loggers => [3]).loggers.should == [4,3] end specify "should handle the default string column size" do @db.default_string_column_size.should == 255 db = Sequel::Database.new(:default_string_column_size=>50) db.default_string_column_size.should == 50 db.default_string_column_size = 2 db.default_string_column_size.should == 2 end specify "should set the sql_log_level from opts[:sql_log_level]" do Sequel::Database.new(1 => 2, :sql_log_level=>:debug).sql_log_level.should == :debug Sequel::Database.new(1 => 2, :sql_log_level=>'debug').sql_log_level.should == :debug end specify "should create a connection pool" do @db.pool.should be_a_kind_of(Sequel::ConnectionPool) @db.pool.max_size.should == 4 Sequel::Database.new(:max_connections => 10).pool.max_size.should == 10 end specify "should have the connection pool use the connect method to get connections" do cc = nil d = Sequel::Database.new meta_def(d, :connect){|c| 1234} d.synchronize {|c| cc = c} cc.should == 1234 end specify "should respect the :single_threaded option" do db = Sequel::Database.new(:single_threaded=>true){123} db.pool.should be_a_kind_of(Sequel::SingleConnectionPool) db = Sequel::Database.new(:single_threaded=>'t'){123} db.pool.should be_a_kind_of(Sequel::SingleConnectionPool) db = Sequel::Database.new(:single_threaded=>'1'){123} db.pool.should be_a_kind_of(Sequel::SingleConnectionPool) db = Sequel::Database.new(:single_threaded=>false){123} db.pool.should be_a_kind_of(Sequel::ConnectionPool) db = Sequel::Database.new(:single_threaded=>'f'){123} db.pool.should be_a_kind_of(Sequel::ConnectionPool) db = Sequel::Database.new(:single_threaded=>'0'){123} db.pool.should be_a_kind_of(Sequel::ConnectionPool) end specify "should respect the :quote_identifiers option" do db = Sequel::Database.new(:quote_identifiers=>false) db.quote_identifiers?.should == false db = Sequel::Database.new(:quote_identifiers=>true) db.quote_identifiers?.should == true end specify "should upcase on input and downcase on output by default" 
do db = Sequel::Database.new db.send(:identifier_input_method_default).should == :upcase db.send(:identifier_output_method_default).should == :downcase end specify "should respect the :identifier_input_method option" do Sequel.identifier_input_method = nil Sequel::Database.identifier_input_method.should == false db = Sequel::Database.new(:identifier_input_method=>nil) db.identifier_input_method.should be_nil db.identifier_input_method = :downcase db.identifier_input_method.should == :downcase db = Sequel::Database.new(:identifier_input_method=>:upcase) db.identifier_input_method.should == :upcase db.identifier_input_method = nil db.identifier_input_method.should be_nil Sequel.identifier_input_method = :downcase Sequel::Database.identifier_input_method.should == :downcase db = Sequel::Database.new(:identifier_input_method=>nil) db.identifier_input_method.should be_nil db.identifier_input_method = :upcase db.identifier_input_method.should == :upcase db = Sequel::Database.new(:identifier_input_method=>:upcase) db.identifier_input_method.should == :upcase db.identifier_input_method = nil db.identifier_input_method.should be_nil end specify "should respect the :identifier_output_method option" do Sequel.identifier_output_method = nil Sequel::Database.identifier_output_method.should == false db = Sequel::Database.new(:identifier_output_method=>nil) db.identifier_output_method.should be_nil db.identifier_output_method = :downcase db.identifier_output_method.should == :downcase db = Sequel::Database.new(:identifier_output_method=>:upcase) db.identifier_output_method.should == :upcase db.identifier_output_method = nil db.identifier_output_method.should be_nil Sequel.identifier_output_method = :downcase Sequel::Database.identifier_output_method.should == :downcase db = Sequel::Database.new(:identifier_output_method=>nil) db.identifier_output_method.should be_nil db.identifier_output_method = :upcase db.identifier_output_method.should == :upcase db = Sequel::Database.new(:identifier_output_method=>:upcase) db.identifier_output_method.should == :upcase db.identifier_output_method = nil db.identifier_output_method.should be_nil end specify "should use the default Sequel.quote_identifiers value" do Sequel.quote_identifiers = true Sequel::Database.new({}).quote_identifiers?.should == true Sequel.quote_identifiers = false Sequel::Database.new({}).quote_identifiers?.should == false Sequel::Database.quote_identifiers = true Sequel::Database.new({}).quote_identifiers?.should == true Sequel::Database.quote_identifiers = false Sequel::Database.new({}).quote_identifiers?.should == false end specify "should use the default Sequel.identifier_input_method value" do Sequel.identifier_input_method = :downcase Sequel::Database.new({}).identifier_input_method.should == :downcase Sequel.identifier_input_method = :upcase Sequel::Database.new({}).identifier_input_method.should == :upcase Sequel::Database.identifier_input_method = :downcase Sequel::Database.new({}).identifier_input_method.should == :downcase Sequel::Database.identifier_input_method = :upcase Sequel::Database.new({}).identifier_input_method.should == :upcase end specify "should use the default Sequel.identifier_output_method value" do Sequel.identifier_output_method = :downcase Sequel::Database.new({}).identifier_output_method.should == :downcase Sequel.identifier_output_method = :upcase Sequel::Database.new({}).identifier_output_method.should == :upcase Sequel::Database.identifier_output_method = :downcase 
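# Settings made on the Sequel::Database class itself (rather than via the
# Sequel module) should likewise be inherited by databases created afterwards.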
Sequel::Database.new({}).identifier_output_method.should == :downcase Sequel::Database.identifier_output_method = :upcase Sequel::Database.new({}).identifier_output_method.should == :upcase end specify "should respect the quote_identifiers_default method if Sequel.quote_identifiers = nil" do Sequel.quote_identifiers = nil Sequel::Database.new({}).quote_identifiers?.should == true x = Class.new(Sequel::Database){def quote_identifiers_default; false end} x.new({}).quote_identifiers?.should == false y = Class.new(Sequel::Database){def quote_identifiers_default; true end} y.new({}).quote_identifiers?.should == true end specify "should respect the identifier_input_method_default method" do class Sequel::Database @identifier_input_method = nil end x = Class.new(Sequel::Database){def identifier_input_method_default; :downcase end} x.new({}).identifier_input_method.should == :downcase y = Class.new(Sequel::Database){def identifier_input_method_default; :camelize end} y.new({}).identifier_input_method.should == :camelize end specify "should respect the identifier_output_method_default method if Sequel.identifier_output_method is not called" do class Sequel::Database @identifier_output_method = nil end x = Class.new(Sequel::Database){def identifier_output_method_default; :upcase end} x.new({}).identifier_output_method.should == :upcase y = Class.new(Sequel::Database){def identifier_output_method_default; :underscore end} y.new({}).identifier_output_method.should == :underscore end specify "should just use a :uri option for jdbc with the full connection string" do Sequel::Database.should_receive(:adapter_class).once.with(:jdbc).and_return(Sequel::Database) db = Sequel.connect('jdbc:test://host/db_name') db.should be_a_kind_of(Sequel::Database) db.opts[:uri].should == 'jdbc:test://host/db_name' end specify "should just use a :uri option for do with the full connection string" do Sequel::Database.should_receive(:adapter_class).once.with(:do).and_return(Sequel::Database) db = Sequel.connect('do:test://host/db_name') db.should be_a_kind_of(Sequel::Database) db.opts[:uri].should == 'do:test://host/db_name' end specify "should populate :adapter option when using connection string" do Sequel.connect('mock:/').opts[:adapter].should == "mock" end specify "should respect the :keep_reference option for not keeping a reference in Sequel::DATABASES" do db = Sequel.connect('mock:///?keep_reference=f') Sequel::DATABASES.should_not include(db) end end describe "Database#disconnect" do specify "should call pool.disconnect" do d = Sequel::Database.new p = d.pool p.should_receive(:disconnect).once.with({}).and_return(2) d.disconnect.should == 2 end end describe "Sequel.extension" do specify "should attempt to load the given extension" do proc{Sequel.extension :blah}.should raise_error(LoadError) end end describe "Database#log_info" do before do @o = Object.new def @o.logs; @logs || []; end def @o.to_ary; [self]; end def @o.method_missing(*args); (@logs ||= []) << args; end @db = Sequel::Database.new(:logger=>@o) end specify "should log message at info level to all loggers" do @db.log_info('blah') @o.logs.should == [[:info, 'blah']] end specify "should log message with args at info level to all loggers" do @db.log_info('blah', [1, 2]) @o.logs.should == [[:info, 'blah; [1, 2]']] end end describe "Database#log_yield" do before do @o = Object.new def @o.logs; @logs || []; end def @o.warn(*args); (@logs ||= []) << [:warn] + args; end def @o.method_missing(*args); (@logs ||= []) << args; end def @o.to_ary; [self]; end @db = 
Sequel::Database.new(:logger=>@o) end specify "should yield to the passed block" do a = nil @db.log_yield('blah'){a = 1} a.should == 1 end specify "should raise an exception if a block is not passed" do proc{@db.log_yield('blah')}.should raise_error end specify "should log message with duration at info level to all loggers" do @db.log_yield('blah'){} @o.logs.length.should == 1 @o.logs.first.length.should == 2 @o.logs.first.first.should == :info @o.logs.first.last.should =~ /\A\(\d\.\d{6}s\) blah\z/ end specify "should respect sql_log_level setting" do @db.sql_log_level = :debug @db.log_yield('blah'){} @o.logs.length.should == 1 @o.logs.first.length.should == 2 @o.logs.first.first.should == :debug @o.logs.first.last.should =~ /\A\(\d\.\d{6}s\) blah\z/ end specify "should log message with duration at warn level if duration greater than log_warn_duration" do @db.log_warn_duration = 0 @db.log_yield('blah'){} @o.logs.length.should == 1 @o.logs.first.length.should == 2 @o.logs.first.first.should == :warn @o.logs.first.last.should =~ /\A\(\d\.\d{6}s\) blah\z/ end specify "should log message with duration at info level if duration less than log_warn_duration" do @db.log_warn_duration = 1000 @db.log_yield('blah'){} @o.logs.length.should == 1 @o.logs.first.length.should == 2 @o.logs.first.first.should == :info @o.logs.first.last.should =~ /\A\(\d\.\d{6}s\) blah\z/ end specify "should log message at error level if block raises an error" do @db.log_warn_duration = 0 proc{@db.log_yield('blah'){raise Sequel::Error, 'adsf'}}.should raise_error @o.logs.length.should == 1 @o.logs.first.length.should == 2 @o.logs.first.first.should == :error @o.logs.first.last.should =~ /\ASequel::Error: adsf: blah\z/ end specify "should include args with message if args passed" do @db.log_yield('blah', [1, 2]){} @o.logs.length.should == 1 @o.logs.first.length.should == 2 @o.logs.first.first.should == :info @o.logs.first.last.should =~ /\A\(\d\.\d{6}s\) blah; \[1, 2\]\z/ end end describe "Database#uri" do before do @c = Class.new(Sequel::Database) do set_adapter_scheme :mau end @db = Sequel.connect('mau://user:pass@localhost:9876/maumau') end specify "should return the connection URI for the database" do @db.uri.should == 'mau://user:pass@localhost:9876/maumau' end specify "should return nil if a connection uri was not used" do Sequel.mock.uri.should be_nil end specify "should be aliased as #url" do @db.url.should == 'mau://user:pass@localhost:9876/maumau' end end describe "Database.adapter_scheme and #adapter_scheme" do specify "should return the database scheme" do Sequel::Database.adapter_scheme.should be_nil @c = Class.new(Sequel::Database) do set_adapter_scheme :mau end @c.adapter_scheme.should == :mau @c.new({}).adapter_scheme.should == :mau end end describe "Database#dataset" do before do @db = Sequel::Database.new @ds = @db.dataset end specify "should provide a blank dataset through #dataset" do @ds.should be_a_kind_of(Sequel::Dataset) @ds.opts.should == {} @ds.db.should be(@db) end specify "should provide a #from dataset" do d = @db.from(:mau) d.should be_a_kind_of(Sequel::Dataset) d.sql.should == 'SELECT * FROM mau' e = @db[:miu] e.should be_a_kind_of(Sequel::Dataset) e.sql.should == 'SELECT * FROM miu' end specify "should provide a filtered #from dataset if a block is given" do d = @db.from(:mau){x.sql_number > 100} d.should be_a_kind_of(Sequel::Dataset) d.sql.should == 'SELECT * FROM mau WHERE (x > 100)' end specify "should provide a #select dataset" do d = @db.select(:a, :b, :c).from(:mau) d.should 
be_a_kind_of(Sequel::Dataset) d.sql.should == 'SELECT a, b, c FROM mau' end specify "should allow #select to take a block" do d = @db.select(:a, :b){c}.from(:mau) d.should be_a_kind_of(Sequel::Dataset) d.sql.should == 'SELECT a, b, c FROM mau' end end describe "Database#dataset_class" do before do @db = Sequel::Database.new @dsc = Class.new(Sequel::Dataset) end specify "should have setter set the class to use to create datasets" do @db.dataset_class = @dsc ds = @db.dataset ds.should be_a_kind_of(@dsc) ds.opts.should == {} ds.db.should be(@db) end specify "should have getter return the class to use to create datasets" do [@db.dataset_class, @db.dataset_class.superclass].should include(Sequel::Dataset) @db.dataset_class = @dsc [@db.dataset_class, @db.dataset_class.superclass].should include(@dsc) end end describe "Database#extend_datasets" do before do @db = Sequel::Database.new @m = Module.new{def foo() [3] end} @m2 = Module.new{def foo() [4] + super end} @db.extend_datasets(@m) end specify "should clear a cached dataset" do @db = Sequel::Database.new @db.literal(1).should == '1' @db.extend_datasets{def literal(v) '2' end} @db.literal(1).should == '2' end specify "should change the dataset class to a subclass the first time it is called" do @db.dataset_class.superclass.should == Sequel::Dataset end specify "should not create a subclass of the dataset class if called more than once" do @db.extend_datasets(@m2) @db.dataset_class.superclass.should == Sequel::Dataset end specify "should make the dataset class include the module" do @db.dataset_class.ancestors.should include(@m) @db.dataset_class.ancestors.should_not include(@m2) @db.extend_datasets(@m2) @db.dataset_class.ancestors.should include(@m) @db.dataset_class.ancestors.should include(@m2) end specify "should have datasets respond to the module's methods" do @db.dataset.foo.should == [3] @db.extend_datasets(@m2) @db.dataset.foo.should == [4, 3] end specify "should take a block and create a module from it to use" do @db.dataset.foo.should == [3] @db.extend_datasets{def foo() [5] + super end} @db.dataset.foo.should == [5, 3] end specify "should raise an error if both a module and a block are provided" do proc{@db.extend_datasets(@m2){def foo() [5] + super end}}.should raise_error(Sequel::Error) end specify "should be able to override methods defined in the original Dataset class" do @db.extend_datasets(Module.new{def select(*a, &block) super.order(*a, &block) end}) @db[:t].select(:a, :b).sql.should == 'SELECT a, b FROM t ORDER BY a, b' end specify "should reapply settings if dataset_class is changed" do c = Class.new(Sequel::Dataset) @db.dataset_class = c @db.dataset_class.superclass.should == c @db.dataset_class.ancestors.should include(@m) @db.dataset.foo.should == [3] end end describe "Database#disconnect_connection" do specify "should call close on the connection" do o = Object.new def o.close() @closed=true end Sequel::Database.new.disconnect_connection(o) o.instance_variable_get(:@closed).should be_true end end describe "Database#valid_connection?" 
do specify "should issue a query to validate the connection" do db = Sequel.mock db.synchronize{|c| db.valid_connection?(c)}.should be_true db.synchronize do |c| def c.execute(*) raise Sequel::DatabaseError, "error" end db.valid_connection?(c) end.should be_false end end describe "Database#run" do before do @db = Sequel.mock(:servers=>{:s1=>{}}) end specify "should execute the code on the database" do @db.run("DELETE FROM items") @db.sqls.should == ["DELETE FROM items"] end specify "should handle placeholder literal strings" do @db.run(Sequel.lit("DELETE FROM ?", :items)) @db.sqls.should == ["DELETE FROM items"] end specify "should return nil" do @db.run("DELETE FROM items").should be_nil end specify "should accept options passed to execute_ddl" do @db.run("DELETE FROM items", :server=>:s1) @db.sqls.should == ["DELETE FROM items -- s1"] end end describe "Database#<<" do before do @db = Sequel.mock end specify "should execute the code on the database" do @db << "DELETE FROM items" @db.sqls.should == ["DELETE FROM items"] end specify "should handle placeholder literal strings" do @db << Sequel.lit("DELETE FROM ?", :items) @db.sqls.should == ["DELETE FROM items"] end specify "should be chainable" do @db << "DELETE FROM items" << "DELETE FROM items2" @db.sqls.should == ["DELETE FROM items", "DELETE FROM items2"] end end describe "Database#synchronize" do before do @db = Sequel::Database.new(:max_connections => 1) meta_def(@db, :connect){|c| 12345} end specify "should wrap the supplied block in pool.hold" do q, q1, q2 = Queue.new, Queue.new, Queue.new c1, c2 = nil t1 = Thread.new{@db.synchronize{|c| c1 = c; q.push nil; q1.pop}; q.push nil} q.pop c1.should == 12345 t2 = Thread.new{@db.synchronize{|c| c2 = c; q2.push nil}} @db.pool.available_connections.should be_empty c2.should be_nil q1.push nil q.pop q2.pop c2.should == 12345 t1.join t2.join end end describe "Database#test_connection" do before do @db = Sequel::Database.new pr = proc{@test = rand(100)} meta_def(@db, :connect){|c| pr.call} end specify "should attempt to get a connection" do @db.test_connection @test.should_not be_nil end specify "should return true if successful" do @db.test_connection.should be_true end specify "should raise an error if the attempting to connect raises an error" do def @db.connect(*) raise Sequel::Error end proc{@db.test_connection}.should raise_error(Sequel::Error) end end describe "Database#table_exists?" 
do specify "should test existence by selecting a row from the table's dataset" do db = Sequel.mock(:fetch=>[Sequel::Error, [], [{:a=>1}]]) db.table_exists?(:a).should be_false db.sqls.should == ["SELECT NULL AS nil FROM a LIMIT 1"] db.table_exists?(:b).should be_true db.table_exists?(:c).should be_true end end shared_examples_for "Database#transaction" do specify "should wrap the supplied block with BEGIN + COMMIT statements" do @db.transaction{@db.execute 'DROP TABLE test;'} @db.sqls.should == ['BEGIN', 'DROP TABLE test;', 'COMMIT'] end specify "should support transaction isolation levels" do meta_def(@db, :supports_transaction_isolation_levels?){true} [:uncommitted, :committed, :repeatable, :serializable].each do |l| @db.transaction(:isolation=>l){@db.run "DROP TABLE #{l}"} end @db.sqls.should == ['BEGIN', 'SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED', 'DROP TABLE uncommitted', 'COMMIT', 'BEGIN', 'SET TRANSACTION ISOLATION LEVEL READ COMMITTED', 'DROP TABLE committed', 'COMMIT', 'BEGIN', 'SET TRANSACTION ISOLATION LEVEL REPEATABLE READ', 'DROP TABLE repeatable', 'COMMIT', 'BEGIN', 'SET TRANSACTION ISOLATION LEVEL SERIALIZABLE', 'DROP TABLE serializable', 'COMMIT'] end specify "should allow specifying a default transaction isolation level" do meta_def(@db, :supports_transaction_isolation_levels?){true} [:uncommitted, :committed, :repeatable, :serializable].each do |l| @db.transaction_isolation_level = l @db.transaction{@db.run "DROP TABLE #{l}"} end @db.sqls.should == ['BEGIN', 'SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED', 'DROP TABLE uncommitted', 'COMMIT', 'BEGIN', 'SET TRANSACTION ISOLATION LEVEL READ COMMITTED', 'DROP TABLE committed', 'COMMIT', 'BEGIN', 'SET TRANSACTION ISOLATION LEVEL REPEATABLE READ', 'DROP TABLE repeatable', 'COMMIT', 'BEGIN', 'SET TRANSACTION ISOLATION LEVEL SERIALIZABLE', 'DROP TABLE serializable', 'COMMIT'] end specify "should support :retry_on option for automatically retrying transactions" do a = [] @db.transaction(:retry_on=>Sequel::DatabaseDisconnectError){a << 1; raise Sequel::DatabaseDisconnectError if a.length < 2} @db.sqls.should == ['BEGIN', 'ROLLBACK', 'BEGIN', 'COMMIT'] a.should == [1, 1] a = [] @db.transaction(:retry_on=>[Sequel::ConstraintViolation, Sequel::SerializationFailure]) do a << 1 raise Sequel::SerializationFailure if a.length == 1 raise Sequel::ConstraintViolation if a.length == 2 end @db.sqls.should == ['BEGIN', 'ROLLBACK', 'BEGIN', 'ROLLBACK', 'BEGIN', 'COMMIT'] a.should == [1, 1, 1] end specify "should support :num_retries option for limiting the number of retry times" do a = [] lambda do @db.transaction(:num_retries=>1, :retry_on=>[Sequel::ConstraintViolation, Sequel::SerializationFailure]) do a << 1 raise Sequel::SerializationFailure if a.length == 1 raise Sequel::ConstraintViolation if a.length == 2 end end.should raise_error(Sequel::ConstraintViolation) @db.sqls.should == ['BEGIN', 'ROLLBACK', 'BEGIN', 'ROLLBACK'] a.should == [1, 1] end specify "should support :num_retries=>nil option to retry indefinitely" do a = [] lambda do @db.transaction(:num_retries=>nil, :retry_on=>[Sequel::ConstraintViolation]) do a << 1 raise Sequel::SerializationFailure if a.length >= 100 raise Sequel::ConstraintViolation end end.should raise_error(Sequel::SerializationFailure) @db.sqls.should == ['BEGIN', 'ROLLBACK'] * 100 a.should == [1] * 100 end specify "should raise an error if attempting to use :retry_on inside another transaction" do proc{@db.transaction{@db.transaction(:retry_on=>Sequel::ConstraintViolation){}}}.should 
shared_examples_for "Database#transaction" do
  specify "should wrap the supplied block with BEGIN + COMMIT statements" do
    @db.transaction{@db.execute 'DROP TABLE test;'}
    @db.sqls.should == ['BEGIN', 'DROP TABLE test;', 'COMMIT']
  end
  specify "should support transaction isolation levels" do
    meta_def(@db, :supports_transaction_isolation_levels?){true}
    [:uncommitted, :committed, :repeatable, :serializable].each do |l|
      @db.transaction(:isolation=>l){@db.run "DROP TABLE #{l}"}
    end
    @db.sqls.should == ['BEGIN', 'SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED', 'DROP TABLE uncommitted', 'COMMIT',
      'BEGIN', 'SET TRANSACTION ISOLATION LEVEL READ COMMITTED', 'DROP TABLE committed', 'COMMIT',
      'BEGIN', 'SET TRANSACTION ISOLATION LEVEL REPEATABLE READ', 'DROP TABLE repeatable', 'COMMIT',
      'BEGIN', 'SET TRANSACTION ISOLATION LEVEL SERIALIZABLE', 'DROP TABLE serializable', 'COMMIT']
  end
  specify "should allow specifying a default transaction isolation level" do
    meta_def(@db, :supports_transaction_isolation_levels?){true}
    [:uncommitted, :committed, :repeatable, :serializable].each do |l|
      @db.transaction_isolation_level = l
      @db.transaction{@db.run "DROP TABLE #{l}"}
    end
    @db.sqls.should == ['BEGIN', 'SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED', 'DROP TABLE uncommitted', 'COMMIT',
      'BEGIN', 'SET TRANSACTION ISOLATION LEVEL READ COMMITTED', 'DROP TABLE committed', 'COMMIT',
      'BEGIN', 'SET TRANSACTION ISOLATION LEVEL REPEATABLE READ', 'DROP TABLE repeatable', 'COMMIT',
      'BEGIN', 'SET TRANSACTION ISOLATION LEVEL SERIALIZABLE', 'DROP TABLE serializable', 'COMMIT']
  end
  specify "should support :retry_on option for automatically retrying transactions" do
    a = []
    @db.transaction(:retry_on=>Sequel::DatabaseDisconnectError){a << 1; raise Sequel::DatabaseDisconnectError if a.length < 2}
    @db.sqls.should == ['BEGIN', 'ROLLBACK', 'BEGIN', 'COMMIT']
    a.should == [1, 1]

    a = []
    @db.transaction(:retry_on=>[Sequel::ConstraintViolation, Sequel::SerializationFailure]) do
      a << 1
      raise Sequel::SerializationFailure if a.length == 1
      raise Sequel::ConstraintViolation if a.length == 2
    end
    @db.sqls.should == ['BEGIN', 'ROLLBACK', 'BEGIN', 'ROLLBACK', 'BEGIN', 'COMMIT']
    a.should == [1, 1, 1]
  end
  specify "should support :num_retries option for limiting the number of retry times" do
    a = []
    lambda do
      @db.transaction(:num_retries=>1, :retry_on=>[Sequel::ConstraintViolation, Sequel::SerializationFailure]) do
        a << 1
        raise Sequel::SerializationFailure if a.length == 1
        raise Sequel::ConstraintViolation if a.length == 2
      end
    end.should raise_error(Sequel::ConstraintViolation)
    @db.sqls.should == ['BEGIN', 'ROLLBACK', 'BEGIN', 'ROLLBACK']
    a.should == [1, 1]
  end
  specify "should support :num_retries=>nil option to retry indefinitely" do
    a = []
    lambda do
      @db.transaction(:num_retries=>nil, :retry_on=>[Sequel::ConstraintViolation]) do
        a << 1
        raise Sequel::SerializationFailure if a.length >= 100
        raise Sequel::ConstraintViolation
      end
    end.should raise_error(Sequel::SerializationFailure)
    @db.sqls.should == ['BEGIN', 'ROLLBACK'] * 100
    a.should == [1] * 100
  end
  specify "should raise an error if attempting to use :retry_on inside another transaction" do
    proc{@db.transaction{@db.transaction(:retry_on=>Sequel::ConstraintViolation){}}}.should raise_error(Sequel::Error)
    @db.sqls.should == ['BEGIN', 'ROLLBACK']
  end
  specify "should handle returning inside of the block by committing" do
    def @db.ret_commit
      transaction do
        execute 'DROP TABLE test;'
        return
      end
    end
    @db.ret_commit
    @db.sqls.should == ['BEGIN', 'DROP TABLE test;', 'COMMIT']
  end
  specify "should issue ROLLBACK if an exception is raised, and re-raise" do
    @db.transaction {@db.execute 'DROP TABLE test'; raise RuntimeError} rescue nil
    @db.sqls.should == ['BEGIN', 'DROP TABLE test', 'ROLLBACK']
    proc {@db.transaction {raise RuntimeError}}.should raise_error(RuntimeError)
  end
  specify "should handle errors when sending BEGIN" do
    ec = Class.new(StandardError)
    meta_def(@db, :database_error_classes){[ec]}
    meta_def(@db, :log_connection_execute){|c, sql| sql =~ /BEGIN/ ? raise(ec, 'bad') : super(c, sql)}
    begin
      @db.transaction{@db.execute 'DROP TABLE test;'}
    rescue Sequel::DatabaseError => e
    end
    e.should_not be_nil
    e.wrapped_exception.should be_a_kind_of(ec)
    @db.sqls.should == ['ROLLBACK']
  end
  specify "should handle errors when sending COMMIT" do
    ec = Class.new(StandardError)
    meta_def(@db, :database_error_classes){[ec]}
    meta_def(@db, :log_connection_execute){|c, sql| sql =~ /COMMIT/ ? raise(ec, 'bad') : super(c, sql)}
    begin
      @db.transaction{@db.execute 'DROP TABLE test;'}
    rescue Sequel::DatabaseError => e
    end
    e.should_not be_nil
    e.wrapped_exception.should be_a_kind_of(ec)
    @db.sqls.should == ['BEGIN', 'DROP TABLE test;', 'ROLLBACK']
  end
  specify "should handle errors when sending ROLLBACK" do
    ec = Class.new(StandardError)
    meta_def(@db, :database_error_classes){[ec]}
    meta_def(@db, :log_connection_execute){|c, sql| sql =~ /ROLLBACK/ ? raise(ec, 'bad') : super(c, sql)}
    begin
      @db.transaction{raise ArgumentError, 'asdf'}
    rescue Sequel::DatabaseError => e
    end
    e.should_not be_nil
    e.wrapped_exception.should be_a_kind_of(ec)
    @db.sqls.should == ['BEGIN']
  end
  specify "should issue ROLLBACK if Sequel::Rollback is called in the transaction" do
    @db.transaction do
      @db.drop_table(:a)
      raise Sequel::Rollback
      @db.drop_table(:b)
    end
    @db.sqls.should == ['BEGIN', 'DROP TABLE a', 'ROLLBACK']
  end
  specify "should have in_transaction? return true if inside a transaction" do
    c = nil
    @db.transaction{c = @db.in_transaction?}
    c.should be_true
  end
  specify "should have in_transaction? handle sharding correctly" do
    c = []
    @db.transaction(:server=>:test){c << @db.in_transaction?}
    @db.transaction(:server=>:test){c << @db.in_transaction?(:server=>:test)}
    c.should == [false, true]
  end
  specify "should have in_transaction? return false if not in a transaction" do
    @db.in_transaction?.should be_false
  end
  specify "should return nil if Sequel::Rollback is called in the transaction" do
    @db.transaction{raise Sequel::Rollback}.should be_nil
  end
  specify "should reraise Sequel::Rollback errors when using :rollback=>:reraise option is given" do
    proc {@db.transaction(:rollback=>:reraise){raise Sequel::Rollback}}.should raise_error(Sequel::Rollback)
    @db.sqls.should == ['BEGIN', 'ROLLBACK']
    proc {@db.transaction(:rollback=>:reraise){raise ArgumentError}}.should raise_error(ArgumentError)
    @db.sqls.should == ['BEGIN', 'ROLLBACK']
    @db.transaction(:rollback=>:reraise){1}.should == 1
    @db.sqls.should == ['BEGIN', 'COMMIT']
  end
  specify "should always rollback if :rollback=>:always option is given" do
    proc {@db.transaction(:rollback=>:always){raise ArgumentError}}.should raise_error(ArgumentError)
    @db.sqls.should == ['BEGIN', 'ROLLBACK']
    @db.transaction(:rollback=>:always){raise Sequel::Rollback}.should be_nil
    @db.sqls.should == ['BEGIN', 'ROLLBACK']
    @db.transaction(:rollback=>:always){1}.should be_nil
    @db.sqls.should == ['BEGIN', 'ROLLBACK']
    catch(:foo) do
      @db.transaction(:rollback=>:always){throw :foo}
    end
    @db.sqls.should == ['BEGIN', 'ROLLBACK']
  end
  specify "should raise database errors when committing a transaction as Sequel::DatabaseError" do
    meta_def(@db, :commit_transaction){raise ArgumentError}
    lambda{@db.transaction{}}.should raise_error(ArgumentError)

    meta_def(@db, :database_error_classes){[ArgumentError]}
    lambda{@db.transaction{}}.should raise_error(Sequel::DatabaseError)
  end
  specify "should be re-entrant" do
    q, q1 = Queue.new, Queue.new
    cc = nil
    t = Thread.new do
      @db.transaction {@db.transaction {@db.transaction {|c|
        cc = c
        q.pop
        q1.push nil
        q.pop
      }}}
    end
    q.push nil
    q1.pop
    cc.should be_a_kind_of(Sequel::Mock::Connection)
    tr = @db.instance_variable_get(:@transactions)
    tr.keys.should == [cc]
    q.push nil
    t.join
    tr.should be_empty
  end
  specify "should correctly handle nested transaction use with separate shards" do
    @db.transaction do |c1|
      @db.transaction(:server=>:test) do |c2|
        c1.should_not == c2
        @db.execute 'DROP TABLE test;'
      end
    end
    @db.sqls.should == ['BEGIN', 'BEGIN -- test', 'DROP TABLE test;', 'COMMIT -- test', 'COMMIT']
  end
  if (!defined?(RUBY_ENGINE) or RUBY_ENGINE == 'ruby' or RUBY_ENGINE == 'rbx') and RUBY_VERSION < '1.9'
    specify "should handle Thread#kill for transactions inside threads" do
      q = Queue.new
      q1 = Queue.new
      t = Thread.new do
        @db.transaction do
          @db.execute 'DROP TABLE test'
          q1.push nil
          q.pop
          @db.execute 'DROP TABLE test2'
        end
      end
      q1.pop
      t.kill
      t.join
      @db.sqls.should == ['BEGIN', 'DROP TABLE test', 'ROLLBACK']
    end
  end
  specify "should raise an Error if after_commit or after_rollback is called without a block" do
    proc{@db.after_commit}.should raise_error(Sequel::Error)
    proc{@db.after_rollback}.should raise_error(Sequel::Error)
  end
  specify "should have after_commit and after_rollback respect :server option" do
    @db.transaction(:server=>:test){@db.after_commit(:server=>:test){@db.execute('foo', :server=>:test)}}
    @db.sqls.should == ['BEGIN -- test', 'COMMIT -- test', 'foo -- test']
    @db.transaction(:server=>:test){@db.after_rollback(:server=>:test){@db.execute('foo', :server=>:test)}; raise Sequel::Rollback}
    @db.sqls.should == ['BEGIN -- test', 'ROLLBACK -- test', 'foo -- test']
  end
  specify "should execute after_commit outside transactions" do
    @db.after_commit{@db.execute('foo')}
    @db.sqls.should == ['foo']
  end
  specify "should ignore after_rollback outside transactions" do
    @db.after_rollback{@db.execute('foo')}
    @db.sqls.should == []
  end
  specify "should support after_commit inside transactions" do
    @db.transaction{@db.after_commit{@db.execute('foo')}}
    @db.sqls.should == ['BEGIN', 'COMMIT', 'foo']
  end
  specify "should support after_rollback inside transactions" do
    @db.transaction{@db.after_rollback{@db.execute('foo')}}
    @db.sqls.should == ['BEGIN', 'COMMIT']
  end
  specify "should not call after_commit if the transaction rolls back" do
    @db.transaction{@db.after_commit{@db.execute('foo')}; raise Sequel::Rollback}
    @db.sqls.should == ['BEGIN', 'ROLLBACK']
  end
  specify "should call after_rollback if the transaction rolls back" do
    @db.transaction{@db.after_rollback{@db.execute('foo')}; raise Sequel::Rollback}
    @db.sqls.should == ['BEGIN', 'ROLLBACK', 'foo']
  end
  specify "should call multiple after_commit blocks in order if called inside transactions" do
    @db.transaction{@db.after_commit{@db.execute('foo')}; @db.after_commit{@db.execute('bar')}}
    @db.sqls.should == ['BEGIN', 'COMMIT', 'foo', 'bar']
  end
  specify "should call multiple after_rollback blocks in order if called inside transactions" do
    @db.transaction{@db.after_rollback{@db.execute('foo')}; @db.after_rollback{@db.execute('bar')}; raise Sequel::Rollback}
    @db.sqls.should == ['BEGIN', 'ROLLBACK', 'foo', 'bar']
  end
  specify "should support after_commit inside nested transactions" do
    @db.transaction{@db.transaction{@db.after_commit{@db.execute('foo')}}}
    @db.sqls.should == ['BEGIN', 'COMMIT', 'foo']
  end
  specify "should support after_rollback inside nested transactions" do
    @db.transaction{@db.transaction{@db.after_rollback{@db.execute('foo')}}; raise Sequel::Rollback}
    @db.sqls.should == ['BEGIN', 'ROLLBACK', 'foo']
  end
  specify "should raise an error if you attempt to use after_commit inside a prepared transaction" do
    meta_def(@db, :supports_prepared_transactions?){true}
    proc{@db.transaction(:prepare=>'XYZ'){@db.after_commit{@db.execute('foo')}}}.should raise_error(Sequel::Error)
    @db.sqls.should == ['BEGIN', 'ROLLBACK']
  end
  specify "should raise an error if you attempt to use after_rollback inside a prepared transaction" do
    meta_def(@db, :supports_prepared_transactions?){true}
    proc{@db.transaction(:prepare=>'XYZ'){@db.after_rollback{@db.execute('foo')}}}.should raise_error(Sequel::Error)
    @db.sqls.should == ['BEGIN', 'ROLLBACK']
  end
end

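# Run the shared transaction examples in both configurations: the describe
# below leaves savepoint support at the mock default and enables it per-spec
# via meta_def for the savepoint-specific hooks specs, while the "without
# savepoint support" describe disables it in before.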
describe "Database#transaction with savepoint support" do
  before do
    @db = Sequel.mock(:servers=>{:test=>{}})
  end

  it_should_behave_like "Database#transaction"

  specify "should support after_commit inside savepoints" do
    meta_def(@db, :supports_savepoints?){true}
    @db.transaction do
      @db.after_commit{@db.execute('foo')}
      @db.transaction(:savepoint=>true){@db.after_commit{@db.execute('bar')}}
      @db.after_commit{@db.execute('baz')}
    end
    @db.sqls.should == ['BEGIN', 'SAVEPOINT autopoint_1', 'RELEASE SAVEPOINT autopoint_1', 'COMMIT', 'foo', 'bar', 'baz']
  end
  specify "should support after_rollback inside savepoints" do
    meta_def(@db, :supports_savepoints?){true}
    @db.transaction do
      @db.after_rollback{@db.execute('foo')}
      @db.transaction(:savepoint=>true){@db.after_rollback{@db.execute('bar')}}
      @db.after_rollback{@db.execute('baz')}
      raise Sequel::Rollback
    end
    @db.sqls.should == ['BEGIN', 'SAVEPOINT autopoint_1', 'RELEASE SAVEPOINT autopoint_1', 'ROLLBACK', 'foo', 'bar', 'baz']
  end
  specify "should raise an error if you attempt to use after_commit inside a savepoint in a prepared transaction" do
    meta_def(@db, :supports_savepoints?){true}
    meta_def(@db, :supports_prepared_transactions?){true}
    proc{@db.transaction(:prepare=>'XYZ'){@db.transaction(:savepoint=>true){@db.after_commit{@db.execute('foo')}}}}.should raise_error(Sequel::Error)
    @db.sqls.should == ['BEGIN', 'SAVEPOINT autopoint_1', 'ROLLBACK TO SAVEPOINT autopoint_1', 'ROLLBACK']
  end
  specify "should raise an error if you attempt to use after_rollback inside a savepoint in a prepared transaction" do
    meta_def(@db, :supports_savepoints?){true}
    meta_def(@db, :supports_prepared_transactions?){true}
    proc{@db.transaction(:prepare=>'XYZ'){@db.transaction(:savepoint=>true){@db.after_rollback{@db.execute('foo')}}}}.should raise_error(Sequel::Error)
    @db.sqls.should == ['BEGIN', 'SAVEPOINT autopoint_1', 'ROLLBACK TO SAVEPOINT autopoint_1', 'ROLLBACK']
  end
end

describe "Database#transaction without savepoint support" do
  before do
    @db = Sequel.mock(:servers=>{:test=>{}})
    meta_def(@db, :supports_savepoints?){false}
  end

  it_should_behave_like "Database#transaction"
end

describe "Sequel.transaction" do
  before do
    @sqls = []
    @db1 = Sequel.mock(:append=>'1', :sqls=>@sqls)
    @db2 = Sequel.mock(:append=>'2', :sqls=>@sqls)
    @db3 = Sequel.mock(:append=>'3', :sqls=>@sqls)
  end
  specify "should run the block inside transactions on all three databases" do
    Sequel.transaction([@db1, @db2, @db3]){1}.should == 1
    @sqls.should == ['BEGIN -- 1', 'BEGIN -- 2', 'BEGIN -- 3', 'COMMIT -- 3', 'COMMIT -- 2', 'COMMIT -- 1']
  end
  specify "should pass options to all the blocks" do
    Sequel.transaction([@db1, @db2, @db3], :rollback=>:always){1}.should be_nil
    @sqls.should == ['BEGIN -- 1', 'BEGIN -- 2', 'BEGIN -- 3', 'ROLLBACK -- 3', 'ROLLBACK -- 2', 'ROLLBACK -- 1']
  end
  specify "should handle Sequel::Rollback exceptions raised by the block to rollback on all databases" do
    Sequel.transaction([@db1, @db2, @db3]){raise Sequel::Rollback}.should be_nil
    @sqls.should == ['BEGIN -- 1', 'BEGIN -- 2', 'BEGIN -- 3', 'ROLLBACK -- 3', 'ROLLBACK -- 2', 'ROLLBACK -- 1']
  end
  specify "should handle nested transactions" do
    Sequel.transaction([@db1, @db2, @db3]){Sequel.transaction([@db1, @db2, @db3]){1}}.should == 1
    @sqls.should == ['BEGIN -- 1', 'BEGIN -- 2', 'BEGIN -- 3', 'COMMIT -- 3', 'COMMIT -- 2', 'COMMIT -- 1']
  end
  specify "should handle savepoints" do
    Sequel.transaction([@db1, @db2, @db3]){Sequel.transaction([@db1, @db2, @db3], :savepoint=>true){1}}.should == 1
    @sqls.should == ['BEGIN -- 1', 'BEGIN -- 2', 'BEGIN -- 3',
      'SAVEPOINT autopoint_1 -- 1', 'SAVEPOINT autopoint_1 -- 2', 'SAVEPOINT autopoint_1 -- 3',
      'RELEASE SAVEPOINT autopoint_1 -- 3', 'RELEASE SAVEPOINT autopoint_1 -- 2', 'RELEASE SAVEPOINT autopoint_1 -- 1',
      'COMMIT -- 3', 'COMMIT -- 2', 'COMMIT -- 1']
  end
end

describe "Database#transaction with savepoints" do
  before do
    @db = Sequel.mock
  end
  specify "should wrap the supplied block with BEGIN + COMMIT statements" do
    @db.transaction {@db.execute 'DROP TABLE test;'}
    @db.sqls.should == ['BEGIN', 'DROP TABLE test;', 'COMMIT']
  end
  specify "should use savepoints if given the :savepoint option" do
    @db.transaction{@db.transaction(:savepoint=>true){@db.execute 'DROP TABLE test;'}}
    @db.sqls.should == ['BEGIN', 'SAVEPOINT autopoint_1', 'DROP TABLE test;', 'RELEASE SAVEPOINT autopoint_1', 'COMMIT']
  end
  specify "should not use a savepoint if no transaction is in progress" do
    @db.transaction(:savepoint=>true){@db.execute 'DROP TABLE test;'}
    @db.sqls.should == ['BEGIN', 'DROP TABLE test;', 'COMMIT']
  end
  specify "should reuse the current transaction if no :savepoint option is given" do
    @db.transaction{@db.transaction{@db.execute 'DROP TABLE test;'}}
    @db.sqls.should == ['BEGIN', 'DROP TABLE test;', 'COMMIT']
  end
  specify "should handle returning inside of the block by committing" do
    def @db.ret_commit
      transaction do
        execute 'DROP TABLE test;'
        return
      end
    end
    @db.ret_commit
    @db.sqls.should == ['BEGIN', 'DROP TABLE test;', 'COMMIT']
  end
  specify "should handle returning inside of a savepoint by committing" do
    def @db.ret_commit
      transaction do
        transaction(:savepoint=>true) do
          execute 'DROP TABLE test;'
          return
        end
      end
    end
    @db.ret_commit
    @db.sqls.should == ['BEGIN', 'SAVEPOINT autopoint_1', 'DROP TABLE test;', 'RELEASE SAVEPOINT autopoint_1', 'COMMIT']
  end
  specify "should issue ROLLBACK if an exception is raised, and re-raise" do
    @db.transaction {@db.execute 'DROP TABLE test'; raise RuntimeError} rescue nil
    @db.sqls.should == ['BEGIN', 'DROP TABLE test', 'ROLLBACK']
    proc {@db.transaction {raise RuntimeError}}.should raise_error(RuntimeError)
  end
  specify "should issue ROLLBACK SAVEPOINT if an exception is raised inside a savepoint, and re-raise" do
    @db.transaction{@db.transaction(:savepoint=>true){@db.execute 'DROP TABLE test'; raise RuntimeError}} rescue nil
    @db.sqls.should == ['BEGIN', 'SAVEPOINT autopoint_1', 'DROP TABLE test', 'ROLLBACK TO SAVEPOINT autopoint_1', 'ROLLBACK']
    proc {@db.transaction {raise RuntimeError}}.should raise_error(RuntimeError)
  end
  specify "should issue ROLLBACK if Sequel::Rollback is raised in the transaction" do
    @db.transaction do
      @db.drop_table(:a)
      raise Sequel::Rollback
      @db.drop_table(:b)
    end
    @db.sqls.should == ['BEGIN', 'DROP TABLE a', 'ROLLBACK']
  end
  specify "should issue ROLLBACK SAVEPOINT if Sequel::Rollback is raised in a savepoint" do
    @db.transaction do
      @db.transaction(:savepoint=>true) do
        @db.drop_table(:a)
        raise Sequel::Rollback
      end
      @db.drop_table(:b)
    end
    @db.sqls.should == ['BEGIN', 'SAVEPOINT autopoint_1', 'DROP TABLE a', 'ROLLBACK TO SAVEPOINT autopoint_1', 'DROP TABLE b', 'COMMIT']
  end
  specify "should raise database errors when committing a transaction as Sequel::DatabaseError" do
    meta_def(@db, :commit_transaction){raise ArgumentError}
    lambda{@db.transaction{}}.should raise_error(ArgumentError)
    lambda{@db.transaction{@db.transaction(:savepoint=>true){}}}.should raise_error(ArgumentError)

    meta_def(@db, :database_error_classes){[ArgumentError]}
    lambda{@db.transaction{}}.should raise_error(Sequel::DatabaseError)
    lambda{@db.transaction{@db.transaction(:savepoint=>true){}}}.should raise_error(Sequel::DatabaseError)
  end
end

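# Adapter registration specs: set_adapter_scheme registers the subclass in
# Sequel::ADAPTER_MAP, which is how Sequel.connect resolves 'ccc://...'
# connection strings to the adapter class exercised below.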
describe "A Database adapter with a scheme" do
  before do
    @ccc = Class.new(Sequel::Mock::Database)
    @ccc.send(:set_adapter_scheme, :ccc)
  end

  specify "should be registered in the ADAPTER_MAP" do
    Sequel::ADAPTER_MAP[:ccc].should == @ccc
  end
  specify "should give the database_type as the adapter scheme by default" do
    @ccc.new.database_type.should == :ccc
  end
  specify "should be instantiated when its scheme is specified" do
    c = Sequel::Database.connect('ccc://localhost/db')
    c.should be_a_kind_of(@ccc)
    c.opts[:host].should == 'localhost'
    c.opts[:database].should == 'db'
  end
  specify "should be accessible through Sequel.connect" do
    c = Sequel.connect 'ccc://localhost/db'
    c.should be_a_kind_of(@ccc)
    c.opts[:host].should == 'localhost'
    c.opts[:database].should == 'db'
  end
  specify "should be accessible through Sequel.connect via a block" do
    x = nil
    y = nil
    z = nil
    returnValue = 'anything'

    p = proc do |c|
      c.should be_a_kind_of(@ccc)
      c.opts[:host].should == 'localhost'
      c.opts[:database].should == 'db'
      z = y
      y = x
      x = c
      returnValue
    end

    @ccc.class_eval do
      self::DISCONNECTS = []
      def disconnect
        self.class::DISCONNECTS << self
      end
    end
    Sequel::Database.connect('ccc://localhost/db', &p).should == returnValue
    @ccc::DISCONNECTS.should == [x]

    Sequel.connect('ccc://localhost/db', &p).should == returnValue
    @ccc::DISCONNECTS.should == [y, x]

    Sequel.send(:def_adapter_method, :ccc)
    Sequel.ccc('db', :host=>'localhost', &p).should == returnValue
    @ccc::DISCONNECTS.should == [z, y, x]
    class << Sequel; remove_method(:ccc) end
  end
  specify "should be accessible through Sequel.<adapter>" do
    Sequel.send(:def_adapter_method, :ccc)

    # invalid parameters
    proc {Sequel.ccc('abc', 'def')}.should raise_error(Sequel::Error)
    proc {Sequel.ccc(1)}.should raise_error(Sequel::Error)

    c = Sequel.ccc('mydb')
    c.should be_a_kind_of(@ccc)
    c.opts.values_at(:adapter, :database, :adapter_class).should == [:ccc, 'mydb', @ccc]

    c = Sequel.ccc('mydb', :host => 'localhost')
    c.should be_a_kind_of(@ccc)
    c.opts.values_at(:adapter, :database, :host, :adapter_class).should == [:ccc, 'mydb', 'localhost', @ccc]

    c = Sequel.ccc
    c.should be_a_kind_of(@ccc)
    c.opts.values_at(:adapter, :adapter_class).should == [:ccc, @ccc]

    c = Sequel.ccc(:database => 'mydb', :host => 'localhost')
    c.should be_a_kind_of(@ccc)
    c.opts.values_at(:adapter, :database, :host, :adapter_class).should == [:ccc, 'mydb', 'localhost', @ccc]
    class << Sequel; remove_method(:ccc) end
  end
  specify "should be accessible through Sequel.connect with options" do
    c = Sequel.connect(:adapter => :ccc, :database => 'mydb')
    c.should be_a_kind_of(@ccc)
    c.opts[:adapter].should == :ccc
  end
  specify "should be accessible through Sequel.connect with URL parameters" do
    c = Sequel.connect 'ccc:///db?host=/tmp&user=test'
    c.should be_a_kind_of(@ccc)
    c.opts[:host].should == '/tmp'
    c.opts[:database].should == 'db'
    c.opts[:user].should == 'test'
  end
  specify "should have URL parameters take precedence over fixed URL parts" do
    c = Sequel.connect 'ccc://localhost/db?host=a&database=b'
    c.should be_a_kind_of(@ccc)
    c.opts[:host].should == 'a'
    c.opts[:database].should == 'b'
  end
  specify "should have hash options take precedence over URL parameters or parts" do
    c = Sequel.connect 'ccc://localhost/db?host=/tmp', :host=>'a', :database=>'b', :user=>'c'
    c.should be_a_kind_of(@ccc)
    c.opts[:host].should == 'a'
    c.opts[:database].should == 'b'
    c.opts[:user].should == 'c'
  end
  specify "should unescape values of URL parameters and parts" do
    c = Sequel.connect 'ccc:///d%5bb%5d?host=domain%5cinstance'
    c.should be_a_kind_of(@ccc)
    c.opts[:database].should == 'd[b]'
    c.opts[:host].should == 'domain\\instance'
  end
  specify "should test the connection if test parameter is truthy" do
    @ccc.send(:define_method, :connect){}
    proc{Sequel.connect 'ccc:///d%5bb%5d?test=t'}.should raise_error(Sequel::DatabaseConnectionError)
    proc{Sequel.connect 'ccc:///d%5bb%5d?test=1'}.should raise_error(Sequel::DatabaseConnectionError)
    proc{Sequel.connect 'ccc:///d%5bb%5d', :test=>true}.should raise_error(Sequel::DatabaseConnectionError)
    proc{Sequel.connect 'ccc:///d%5bb%5d', :test=>'t'}.should raise_error(Sequel::DatabaseConnectionError)
  end
  specify "should not test the connection if test parameter is not truthy" do
    proc{Sequel.connect 'ccc:///d%5bb%5d?test=f'}.should_not raise_error
    proc{Sequel.connect 'ccc:///d%5bb%5d?test=0'}.should_not raise_error
    proc{Sequel.connect 'ccc:///d%5bb%5d', :test=>false}.should_not raise_error
    proc{Sequel.connect 'ccc:///d%5bb%5d', :test=>'f'}.should_not raise_error
  end
end

describe "Sequel::Database.connect" do
  specify "should raise an Error if not given a String or Hash" do
    proc{Sequel::Database.connect(nil)}.should raise_error(Sequel::Error)
    proc{Sequel::Database.connect(Object.new)}.should raise_error(Sequel::Error)
  end
end

describe "An unknown database scheme" do
  specify "should raise an error in Sequel::Database.connect" do
    proc {Sequel::Database.connect('ddd://localhost/db')}.should raise_error(Sequel::AdapterNotFound)
  end
  specify "should raise an error in Sequel.connect" do
    proc {Sequel.connect('ddd://localhost/db')}.should raise_error(Sequel::AdapterNotFound)
  end
end

describe "A broken adapter (lib is there but the class is not)" do
  before do
    @fn = File.join(File.dirname(__FILE__), '../../lib/sequel/adapters/blah.rb')
    File.open(@fn,'a'){}
  end
  after do
    File.delete(@fn)
  end
  specify "should raise an error" do
    proc {Sequel.connect('blah://blow')}.should raise_error(Sequel::AdapterNotFound)
  end
end

describe "A single threaded database" do
  after do
    Sequel::Database.single_threaded = false
  end
  specify "should use a SingleConnectionPool instead of a ConnectionPool" do
    db = Sequel::Database.new(:single_threaded => true){123}
    db.pool.should be_a_kind_of(Sequel::SingleConnectionPool)
  end
  specify "should be constructable using :single_threaded => true option" do
    db = Sequel::Database.new(:single_threaded => true){123}
    db.pool.should be_a_kind_of(Sequel::SingleConnectionPool)
  end
  specify "should be constructable using Database.single_threaded = true" do
    Sequel::Database.single_threaded = true
    db = Sequel::Database.new{123}
    db.pool.should be_a_kind_of(Sequel::SingleConnectionPool)
  end
  specify "should be constructable using Sequel.single_threaded = true" do
    Sequel.single_threaded = true
    db = Sequel::Database.new{123}
    db.pool.should be_a_kind_of(Sequel::SingleConnectionPool)
  end
end

describe "A single threaded database" do
  before do
    conn = 1234567
    @db = Sequel::Database.new(:single_threaded => true)
    meta_def(@db, :connect) do |c|
      conn += 1
    end
  end
  specify "should invoke connection_proc only once" do
    @db.pool.hold {|c| c.should == 1234568}
    @db.pool.hold {|c| c.should == 1234568}
  end
  specify "should disconnect correctly" do
    def @db.disconnect_connection(c); @dc = c end
    def @db.dc; @dc end
    x = nil
    @db.pool.hold{|c| x = c}
    @db.pool.hold{|c| c.should == x}
    proc{@db.disconnect}.should_not raise_error
    @db.dc.should == x
  end
  specify "should convert an Exception on connection into a DatabaseConnectionError" do
    db = Sequel::Database.new(:single_threaded => true, :servers=>{})
    def db.connect(*) raise Exception end
    proc {db.pool.hold {|c|}}.should raise_error(Sequel::DatabaseConnectionError)
  end
  specify "should raise a DatabaseConnectionError if the connection proc returns nil" do
    db = Sequel.mock(:single_threaded => true, :servers=>{})
    def db.connect(*) end
    proc {db.pool.hold {|c|}}.should raise_error(Sequel::DatabaseConnectionError)
  end
end

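# General Database instance specs: single_threaded? reflects either the
# :single_threaded option or the global Sequel.single_threaded setting, and
# logger=/loggers= both normalize to the loggers array.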
describe "A database" do
  after do
    Sequel::Database.single_threaded = false
  end
  specify "should have single_threaded? respond to true if in single threaded mode" do
    db = Sequel::Database.new(:single_threaded => true){1234}
    db.should be_single_threaded

    db = Sequel::Database.new(:max_options => 1)
    db.should_not be_single_threaded

    db = Sequel::Database.new
    db.should_not be_single_threaded

    Sequel::Database.single_threaded = true

    db = Sequel::Database.new{123}
    db.should be_single_threaded

    db = Sequel::Database.new(:max_options => 4){123}
    db.should be_single_threaded
  end
  specify "should be able to set loggers via the logger= and loggers= methods" do
    db = Sequel::Database.new
    s = "I'm a logger"
    db.logger = s
    db.loggers.should == [s]
    db.logger = nil
    db.loggers.should == []

    db.loggers = [s]
    db.loggers.should == [s]
    db.loggers = []
    db.loggers.should == []

    t = "I'm also a logger"
    db.loggers = [s, t]
    db.loggers.should == [s,t]
  end
end

describe "Database#fetch" do
  before do
    @db = Sequel.mock(:fetch=>proc{|sql| {:sql => sql}})
  end
  specify "should create a dataset and invoke its fetch_rows method with the given sql" do
    sql = nil
    @db.fetch('select * from xyz') {|r| sql = r[:sql]}
    sql.should == 'select * from xyz'
  end
  specify "should format the given sql with any additional arguments" do
    sql = nil
    @db.fetch('select * from xyz where x = ? and y = ?', 15, 'abc') {|r| sql = r[:sql]}
    sql.should == "select * from xyz where x = 15 and y = 'abc'"

    @db.fetch('select name from table where name = ? or id in ?', 'aman', [3,4,7]) {|r| sql = r[:sql]}
    sql.should == "select name from table where name = 'aman' or id in (3, 4, 7)"
  end
  specify "should format the given sql with named arguments" do
    sql = nil
    @db.fetch('select * from xyz where x = :x and y = :y', :x=>15, :y=>'abc') {|r| sql = r[:sql]}
    sql.should == "select * from xyz where x = 15 and y = 'abc'"
  end
  specify "should return the dataset if no block is given" do
    @db.fetch('select * from xyz').should be_a_kind_of(Sequel::Dataset)

    @db.fetch('select a from b').map {|r| r[:sql]}.should == ['select a from b']

    @db.fetch('select c from d').inject([]) {|m, r| m << r; m}.should == [{:sql => 'select c from d'}]
  end
  specify "should return a dataset that always uses the given sql for SELECTs" do
    ds = @db.fetch('select * from xyz')
    ds.select_sql.should == 'select * from xyz'
    ds.sql.should == 'select * from xyz'

    ds.filter!{price.sql_number < 100}
    ds.select_sql.should == 'select * from xyz'
    ds.sql.should == 'select * from xyz'
  end
end

describe "Database#[]" do
  before do
    @db = Sequel.mock
  end
  specify "should return a dataset when symbols are given" do
    ds = @db[:items]
    ds.should be_a_kind_of(Sequel::Dataset)
    ds.opts[:from].should == [:items]
  end
  specify "should return a dataset when a string is given" do
    @db.fetch = proc{|sql| {:sql=>sql}}
    sql = nil
    @db['select * from xyz where x = ? and y = ?', 15, 'abc'].each {|r| sql = r[:sql]}
    sql.should == "select * from xyz where x = 15 and y = 'abc'"
  end
end

describe "Database#inspect" do
  specify "should include the class name and the connection url" do
    Sequel.connect('mock://foo/bar').inspect.should == '#<Sequel::Mock::Database: "mock://foo/bar">'
  end
  specify "should include the class name and the connection options if an options hash was given" do
    Sequel.connect(:adapter=>:mock).inspect.should =~ /#<Sequel::Mock::Database: \{:adapter=>:mock\}>/
  end
  specify "should include the class name, uri, and connection options if uri and options hash was given" do
    Sequel.connect('mock://foo', :database=>'bar').inspect.should =~ /#<Sequel::Mock::Database: "mock:\/\/foo" \{:database=>"bar"\}>/
  end
end

describe "Database#get" do
  before do
    @db = Sequel.mock(:fetch=>{:a=>1})
  end
  specify "should use Dataset#get to get a single value" do
    @db.get(:a).should == 1
    @db.sqls.should == ['SELECT a LIMIT 1']

    @db.get(Sequel.function(:version).as(:version))
    @db.sqls.should == ['SELECT version() AS version LIMIT 1']
  end
  specify "should accept a block" do
    @db.get{a}
    @db.sqls.should == ['SELECT a LIMIT 1']

    @db.get{version(a).as(version)}
    @db.sqls.should == ['SELECT version(a) AS version LIMIT 1']
  end
  specify "should work when an alias cannot be determined" do
    @db.get(1).should == 1
    @db.sqls.should == ['SELECT 1 AS v LIMIT 1']
  end
end

describe "Database#call" do
  specify "should call the prepared statement with the given name" do
    db = Sequel.mock(:fetch=>{:id => 1, :x => 1})
    db[:items].prepare(:select, :select_all)
    db.call(:select_all).should == [{:id => 1, :x => 1}]
    db[:items].filter(:n=>:$n).prepare(:select, :select_n)
    db.call(:select_n, :n=>1).should == [{:id => 1, :x => 1}]
    db.sqls.should == ['SELECT * FROM items', 'SELECT * FROM items WHERE (n = 1)']
  end
end

describe "Database#server_opts" do
  specify "should return the general opts if no :servers option is used" do
    opts = {:host=>1, :database=>2}
    Sequel::Database.new(opts).send(:server_opts, :server1)[:host].should == 1
  end
  specify "should return the general opts if entry for the server is not present in the :servers option" do
    opts = {:host=>1, :database=>2, :servers=>{}}
    Sequel::Database.new(opts).send(:server_opts, :server1)[:host].should == 1
  end
  specify "should return the general opts merged with the specific opts if given as a hash" do
    opts = {:host=>1, :database=>2, :servers=>{:server1=>{:host=>3}}}
    Sequel::Database.new(opts).send(:server_opts, :server1)[:host].should == 3
  end
  specify "should return the general opts merged with the specific opts if given as a proc" do
    opts = {:host=>1, :database=>2, :servers=>{:server1=>proc{|db| {:host=>4}}}}
    Sequel::Database.new(opts).send(:server_opts, :server1)[:host].should == 4
  end
  specify "should raise an error if the specific opts is not a proc or hash" do
    opts = {:host=>1, :database=>2, :servers=>{:server1=>2}}
    proc{Sequel::Database.new(opts).send(:server_opts, :server1)}.should raise_error(Sequel::Error)
  end
  specify "should return the general opts merged with given opts if given opts is a Hash" do
    opts = {:host=>1, :database=>2}
    Sequel::Database.new(opts).send(:server_opts, :host=>2)[:host].should == 2
  end
end

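# Sharding specs: newly added servers become usable immediately, but replacing
# the options of an existing server only affects connections created after the
# next disconnect, since pooled connections keep their original options.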
describe "Database#add_servers" do
  before do
    @db = Sequel.mock(:host=>1, :database=>2, :servers=>{:server1=>{:host=>3}})
  end
  specify "should add new servers to the connection pool" do
    @db.synchronize{|c| c.opts[:host].should == 1}
    @db.synchronize(:server1){|c| c.opts[:host].should == 3}
    @db.synchronize(:server2){|c| c.opts[:host].should == 1}

    @db.add_servers(:server2=>{:host=>6})
    @db.synchronize{|c| c.opts[:host].should == 1}
    @db.synchronize(:server1){|c| c.opts[:host].should == 3}
    @db.synchronize(:server2){|c| c.opts[:host].should == 6}

    @db.disconnect
    @db.synchronize{|c| c.opts[:host].should == 1}
    @db.synchronize(:server1){|c| c.opts[:host].should == 3}
    @db.synchronize(:server2){|c| c.opts[:host].should == 6}
  end
  specify "should replace options for future connections to existing servers" do
    @db.synchronize{|c| c.opts[:host].should == 1}
    @db.synchronize(:server1){|c| c.opts[:host].should == 3}
    @db.synchronize(:server2){|c| c.opts[:host].should == 1}

    @db.add_servers(:default=>proc{{:host=>4}}, :server1=>{:host=>8})
    @db.synchronize{|c| c.opts[:host].should == 1}
    @db.synchronize(:server1){|c| c.opts[:host].should == 3}
    @db.synchronize(:server2){|c| c.opts[:host].should == 1}

    @db.disconnect
    @db.synchronize{|c| c.opts[:host].should == 4}
    @db.synchronize(:server1){|c| c.opts[:host].should == 8}
    @db.synchronize(:server2){|c| c.opts[:host].should == 4}
  end
end

describe "Database#remove_servers" do
  before do
    @db = Sequel.mock(:host=>1, :database=>2, :servers=>{:server1=>{:host=>3}, :server2=>{:host=>4}})
  end
  specify "should remove servers from the connection pool" do
    @db.synchronize{|c| c.opts[:host].should == 1}
    @db.synchronize(:server1){|c| c.opts[:host].should == 3}
    @db.synchronize(:server2){|c| c.opts[:host].should == 4}

    @db.remove_servers(:server1, :server2)
    @db.synchronize{|c| c.opts[:host].should == 1}
    @db.synchronize(:server1){|c| c.opts[:host].should == 1}
    @db.synchronize(:server2){|c| c.opts[:host].should == 1}
  end
  specify "should accept arrays of symbols" do
    @db.remove_servers([:server1, :server2])
    @db.synchronize{|c| c.opts[:host].should == 1}
    @db.synchronize(:server1){|c| c.opts[:host].should == 1}
    @db.synchronize(:server2){|c| c.opts[:host].should == 1}
  end
  specify "should allow removal while connections are still open" do
    @db.synchronize do |c1|
      c1.opts[:host].should == 1
      @db.synchronize(:server1) do |c2|
        c2.opts[:host].should == 3
        @db.synchronize(:server2) do |c3|
          c3.opts[:host].should == 4
          @db.remove_servers(:server1, :server2)
          @db.synchronize(:server1) do |c4|
            c4.should_not == c2
            c4.should == c1
            c4.opts[:host].should == 1
            @db.synchronize(:server2) do |c5|
              c5.should_not == c3
              c5.should == c1
              c5.opts[:host].should == 1
            end
          end
          c3.opts[:host].should == 4
        end
        c2.opts[:host].should == 3
      end
      c1.opts[:host].should == 1
    end
  end
end

describe "Database#each_server with do/jdbc adapter connection string without :adapter option" do
  specify "should yield a separate database object for each server" do
    klass = Class.new(Sequel::Database)
    klass.should_receive(:adapter_class).once.with(:jdbc).and_return(Sequel::Mock::Database)
    @db = klass.connect('jdbc:blah:', :host=>1, :database=>2, :servers=>{:server1=>{:host=>3}})

    hosts = []
    @db.each_server do |db|
      db.should be_a_kind_of(Sequel::Database)
      db.should_not == @db
      db.opts[:adapter_class].should == Sequel::Mock::Database
      db.opts[:database].should == 2
      hosts << db.opts[:host]
    end
    hosts.sort.should == [1, 3]
  end
  specify "should raise if not given a block" do
    proc{Sequel.mock.each_server}.should raise_error(Sequel::Error)
  end
end

describe "Database#each_server" do
  before do
    @db = Sequel.mock(:host=>1, :database=>2, :servers=>{:server1=>{:host=>3}, :server2=>{:host=>4}})
  end
  specify "should yield a separate database object for each server" do
    hosts = []
    @db.each_server do |db|
      db.should be_a_kind_of(Sequel::Database)
      db.should_not == @db
      db.opts[:adapter].should == :mock
      db.opts[:database].should == 2
      hosts << db.opts[:host]
    end
    hosts.sort.should == [1, 3, 4]
  end
  specify "should disconnect and remove entry from Sequel::DATABASES after use" do
    dbs = []
    dcs = []
    @db.each_server do |db|
      dbs << db
      Sequel::DATABASES.should include(db)
      meta_def(db, :disconnect){dcs << db}
    end
    dbs.each do |db|
      Sequel::DATABASES.should_not include(db)
    end
    dbs.should == dcs
  end
end

describe "Database#raise_error" do
  before do
    @db = Sequel.mock
  end
  specify "should reraise if the exception class is not in opts[:classes]" do
    e = Class.new(StandardError)
    proc{@db.send(:raise_error, e.new(''), :classes=>[])}.should raise_error(e)
  end
  specify "should convert the exception to a DatabaseError if the exception class is in opts[:classes]" do
    proc{@db.send(:raise_error, Interrupt.new(''), :classes=>[Interrupt])}.should raise_error(Sequel::DatabaseError)
  end
  specify "should convert the exception to a DatabaseError if opts[:classes] is not present" do
    proc{@db.send(:raise_error, Interrupt.new(''))}.should raise_error(Sequel::DatabaseError)
  end
  specify "should convert the exception to a DatabaseDisconnectError if opts[:disconnect] is true" do
    proc{@db.send(:raise_error, Interrupt.new(''), :disconnect=>true)}.should raise_error(Sequel::DatabaseDisconnectError)
  end
  specify "should convert the exception to an appropriate error if exception message matches regexp" do
    def @db.database_error_regexps
      {/foo/ => Sequel::DatabaseDisconnectError, /bar/ => Sequel::ConstraintViolation}
    end
    proc{@db.send(:raise_error, Interrupt.new('foo'))}.should raise_error(Sequel::DatabaseDisconnectError)
    proc{@db.send(:raise_error, Interrupt.new('bar'))}.should raise_error(Sequel::ConstraintViolation)
  end
end

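# Database#typecast_value specs pin down ruby-side typecasting, including the
# rule that integer strings with leading zeros are parsed as base 10 while
# 0x-prefixed strings are parsed as base 16, and the interaction of
# application/database/typecast timezones with Sequel.datetime_class.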
describe "Database#typecast_value" do
  before do
    @db = Sequel::Database.new
  end
  specify "should raise an InvalidValue when given an invalid value" do
    proc{@db.typecast_value(:integer, "13a")}.should raise_error(Sequel::InvalidValue)
    proc{@db.typecast_value(:float, "4.e2")}.should raise_error(Sequel::InvalidValue)
    proc{@db.typecast_value(:decimal, :invalid_value)}.should raise_error(Sequel::InvalidValue)
    proc{@db.typecast_value(:date, Object.new)}.should raise_error(Sequel::InvalidValue)
    proc{@db.typecast_value(:date, 'a')}.should raise_error(Sequel::InvalidValue)
    proc{@db.typecast_value(:time, Date.new)}.should raise_error(Sequel::InvalidValue)
    proc{@db.typecast_value(:datetime, 4)}.should raise_error(Sequel::InvalidValue)
  end
  specify "should handle integers with leading 0 as base 10" do
    @db.typecast_value(:integer, "013").should == 13
    @db.typecast_value(:integer, "08").should == 8
    @db.typecast_value(:integer, "000013").should == 13
    @db.typecast_value(:integer, "000008").should == 8
  end
  specify "should handle integers with leading 0x as base 16" do
    @db.typecast_value(:integer, "0x013").should == 19
    @db.typecast_value(:integer, "0x80").should == 128
  end
  specify "should typecast blobs as a Sequel::SQL::Blob" do
    v = @db.typecast_value(:blob, "0x013")
    v.should be_a_kind_of(Sequel::SQL::Blob)
    v.should == Sequel::SQL::Blob.new("0x013")
    @db.typecast_value(:blob, v).object_id.should == v.object_id
  end
  specify "should typecast boolean values to true, false, or nil" do
    @db.typecast_value(:boolean, false).should be_false
    @db.typecast_value(:boolean, 0).should be_false
    @db.typecast_value(:boolean, "0").should be_false
    @db.typecast_value(:boolean, 'f').should be_false
    @db.typecast_value(:boolean, 'false').should be_false

    @db.typecast_value(:boolean, true).should be_true
    @db.typecast_value(:boolean, 1).should be_true
    @db.typecast_value(:boolean, '1').should be_true
    @db.typecast_value(:boolean, 't').should be_true
    @db.typecast_value(:boolean, 'true').should be_true

    @db.typecast_value(:boolean, '').should be_nil
  end
  specify "should typecast date values to Date" do
    @db.typecast_value(:date, Date.today).should == Date.today
    @db.typecast_value(:date, DateTime.now).should == Date.today
    @db.typecast_value(:date, Time.now).should == Date.today
    @db.typecast_value(:date, Date.today.to_s).should == Date.today
    @db.typecast_value(:date, :year=>Date.today.year, :month=>Date.today.month, :day=>Date.today.day).should == Date.today
  end
  specify "should have Sequel.application_to_database_timestamp convert to Sequel.database_timezone" do
    begin
      t = Time.utc(2011, 1, 2, 3, 4, 5) # UTC Time
      t2 = Time.mktime(2011, 1, 2, 3, 4, 5) # Local Time
      t3 = Time.utc(2011, 1, 2, 3, 4, 5) - (t - t2) # Local Time in UTC Time
      t4 = Time.mktime(2011, 1, 2, 3, 4, 5) + (t - t2) # UTC Time in Local Time

      Sequel.application_timezone = :utc
      Sequel.database_timezone = :local
      Sequel.application_to_database_timestamp(t).should == t4

      Sequel.application_timezone = :local
      Sequel.database_timezone = :utc
      Sequel.application_to_database_timestamp(t2).should == t3
    ensure
      Sequel.default_timezone = nil
    end
  end
  specify "should have Database#to_application_timestamp convert values using the database's timezone" do
    begin
      t = Time.utc(2011, 1, 2, 3, 4, 5) # UTC Time
      t2 = Time.mktime(2011, 1, 2, 3, 4, 5) # Local Time
      t3 = Time.utc(2011, 1, 2, 3, 4, 5) - (t - t2) # Local Time in UTC Time
      t4 = Time.mktime(2011, 1, 2, 3, 4, 5) + (t - t2) # UTC Time in Local Time

      Sequel.default_timezone = :utc
      @db.to_application_timestamp('2011-01-02 03:04:05').should == t
      Sequel.database_timezone = :local
      @db.to_application_timestamp('2011-01-02 03:04:05').should == t3

      Sequel.default_timezone = :local
      @db.to_application_timestamp('2011-01-02 03:04:05').should == t2
      Sequel.database_timezone = :utc
      @db.to_application_timestamp('2011-01-02 03:04:05').should == t4

      Sequel.default_timezone = :utc
      @db.timezone = :local
      @db.to_application_timestamp('2011-01-02 03:04:05').should == t3

      Sequel.default_timezone = :local
      @db.timezone = :utc
      @db.to_application_timestamp('2011-01-02 03:04:05').should == t4
    ensure
      Sequel.default_timezone = nil
    end
  end
  specify "should typecast datetime values to Sequel.datetime_class with correct timezone handling" do
    t = Time.utc(2011, 1, 2, 3, 4, 5, 500000) # UTC Time
    t2 = Time.mktime(2011, 1, 2, 3, 4, 5, 500000) # Local Time
    t3 = Time.utc(2011, 1, 2, 3, 4, 5, 500000) - (t - t2) # Local Time in UTC Time
    t4 = Time.mktime(2011, 1, 2, 3, 4, 5, 500000) + (t - t2) # UTC Time in Local Time
    secs = defined?(Rational) ? Rational(11, 2) : 5.5
    r1 = defined?(Rational) ? Rational(t2.utc_offset, 86400) : t2.utc_offset/86400.0
    r2 = defined?(Rational) ? Rational((t - t2).to_i, 86400) : (t - t2).to_i/86400.0
    dt = DateTime.civil(2011, 1, 2, 3, 4, secs)
    dt2 = DateTime.civil(2011, 1, 2, 3, 4, secs, r1)
    dt3 = DateTime.civil(2011, 1, 2, 3, 4, secs) - r2
    dt4 = DateTime.civil(2011, 1, 2, 3, 4, secs, r1) + r2

    t.should == t4
    t2.should == t3
    dt.should == dt4
    dt2.should == dt3

    check = proc do |i, o|
      v = @db.typecast_value(:datetime, i)
      v.should == o
      if o.is_a?(Time)
        v.utc_offset.should == o.utc_offset
      else
        v.offset.should == o.offset
      end
    end
    @db.extend_datasets(Module.new{def supports_timestamp_timezones?; true; end})
    begin
      @db.typecast_value(:datetime, dt).should == t
      @db.typecast_value(:datetime, dt2).should == t2
      @db.typecast_value(:datetime, t).should == t
      @db.typecast_value(:datetime, t2).should == t2
      @db.typecast_value(:datetime, @db.literal(dt)[1...-1]).should == t
      @db.typecast_value(:datetime, dt.strftime('%F %T.%N')).should == t2
      @db.typecast_value(:datetime, Date.civil(2011, 1, 2)).should == Time.mktime(2011, 1, 2, 0, 0, 0)
      @db.typecast_value(:datetime, :year=>dt.year, :month=>dt.month, :day=>dt.day, :hour=>dt.hour, :minute=>dt.min, :second=>dt.sec, :nanos=>500000000).should == t2

      Sequel.datetime_class = DateTime
      @db.typecast_value(:datetime, dt).should == dt
      @db.typecast_value(:datetime, dt2).should == dt2
      @db.typecast_value(:datetime, t).should == dt
      @db.typecast_value(:datetime, t2).should == dt2
      @db.typecast_value(:datetime, @db.literal(dt)[1...-1]).should == dt
      @db.typecast_value(:datetime, dt.strftime('%F %T.%N')).should == dt
      @db.typecast_value(:datetime, Date.civil(2011, 1, 2)).should == DateTime.civil(2011, 1, 2, 0, 0, 0)
      @db.typecast_value(:datetime, :year=>dt.year, :month=>dt.month, :day=>dt.day, :hour=>dt.hour, :minute=>dt.min, :second=>dt.sec, :nanos=>500000000).should == dt

      Sequel.application_timezone = :utc
      Sequel.typecast_timezone = :local
      Sequel.datetime_class = Time
      check[dt, t]
      check[dt2, t3]
      check[t, t]
      check[t2, t3]
      check[@db.literal(dt)[1...-1], t]
      check[dt.strftime('%F %T.%N'), t3]
      check[Date.civil(2011, 1, 2), Time.utc(2011, 1, 2, 0, 0, 0)]
      check[{:year=>dt.year, :month=>dt.month, :day=>dt.day, :hour=>dt.hour, :minute=>dt.min, :second=>dt.sec, :nanos=>500000000}, t3]

      Sequel.datetime_class = DateTime
      check[dt, dt]
      check[dt2, dt3]
      check[t, dt]
      check[t2, dt3]
      check[@db.literal(dt)[1...-1], dt]
      check[dt.strftime('%F %T.%N'), dt3]
      check[Date.civil(2011, 1, 2), DateTime.civil(2011, 1, 2, 0, 0, 0)]
      check[{:year=>dt.year, :month=>dt.month, :day=>dt.day, :hour=>dt.hour, :minute=>dt.min, :second=>dt.sec, :nanos=>500000000}, dt3]

      Sequel.typecast_timezone = :utc
      Sequel.datetime_class = Time
      check[dt, t]
      check[dt2, t3]
      check[t, t]
      check[t2, t3]
      check[@db.literal(dt)[1...-1], t]
      check[dt.strftime('%F %T.%N'), t]
      check[Date.civil(2011, 1, 2), Time.utc(2011, 1, 2, 0, 0, 0)]
      check[{:year=>dt.year, :month=>dt.month, :day=>dt.day, :hour=>dt.hour, :minute=>dt.min, :second=>dt.sec, :nanos=>500000000}, t]

      Sequel.datetime_class = DateTime
      check[dt, dt]
      check[dt2, dt3]
      check[t, dt]
      check[t2, dt3]
      check[@db.literal(dt)[1...-1], dt]
      check[dt.strftime('%F %T.%N'), dt]
      check[Date.civil(2011, 1, 2), DateTime.civil(2011, 1, 2, 0, 0, 0)]
      check[{:year=>dt.year, :month=>dt.month, :day=>dt.day, :hour=>dt.hour, :minute=>dt.min, :second=>dt.sec, :nanos=>500000000}, dt]

      Sequel.application_timezone = :local
      Sequel.datetime_class = Time
      check[dt, t4]
      check[dt2, t2]
      check[t, t4]
      check[t2, t2]
      check[@db.literal(dt)[1...-1], t4]
      check[dt.strftime('%F %T.%N'), t4]
      check[Date.civil(2011, 1, 2), Time.local(2011, 1, 2, 0, 0, 0)]
      check[{:year=>dt.year, :month=>dt.month, :day=>dt.day, :hour=>dt.hour, :minute=>dt.min, :second=>dt.sec, :nanos=>500000000}, t4]

      Sequel.datetime_class = DateTime
      check[dt, dt4]
      check[dt2, dt2]
      check[t, dt4]
      check[t2, dt2]
      check[@db.literal(dt)[1...-1], dt4]
      check[dt.strftime('%F %T.%N'), dt4]
      check[Date.civil(2011, 1, 2), DateTime.civil(2011, 1, 2, 0, 0, 0, r1)]
      check[{:year=>dt.year, :month=>dt.month, :day=>dt.day, :hour=>dt.hour, :minute=>dt.min, :second=>dt.sec, :nanos=>500000000}, dt4]

      Sequel.typecast_timezone = :local
      Sequel.datetime_class = Time
      check[dt, t4]
      check[dt2, t2]
      check[t, t4]
      check[t2, t2]
      check[@db.literal(dt)[1...-1], t4]
      check[dt.strftime('%F %T.%N'), t2]
      check[Date.civil(2011, 1, 2), Time.local(2011, 1, 2, 0, 0, 0)]
      check[{:year=>dt.year, :month=>dt.month, :day=>dt.day, :hour=>dt.hour, :minute=>dt.min, :second=>dt.sec, :nanos=>500000000}, t2]

      Sequel.datetime_class = DateTime
      check[dt, dt4]
      check[dt2, dt2]
      check[t, dt4]
      check[t2, dt2]
      check[@db.literal(dt)[1...-1], dt4]
      check[dt.strftime('%F %T.%N'), dt2]
      check[Date.civil(2011, 1, 2), DateTime.civil(2011, 1, 2, 0, 0, 0, r1)]
      check[{:year=>dt.year, :month=>dt.month, :day=>dt.day, :hour=>dt.hour, :minute=>dt.min, :second=>dt.sec, :nanos=>500000000}, dt2]
    ensure
      Sequel.default_timezone = nil
      Sequel.datetime_class = Time
    end
  end
  specify "should handle arrays when typecasting timestamps" do
    begin
      @db.typecast_value(:datetime, [2011, 10, 11, 12, 13, 14]).should == Time.local(2011, 10, 11, 12, 13, 14)
      @db.typecast_value(:datetime, [2011, 10, 11, 12, 13, 14, 500000000]).should == Time.local(2011, 10, 11, 12, 13, 14, 500000)

      Sequel.datetime_class = DateTime
      @db.typecast_value(:datetime, [2011, 10, 11, 12, 13, 14]).should == DateTime.civil(2011, 10, 11, 12, 13, 14)
      @db.typecast_value(:datetime, [2011, 10, 11, 12, 13, 14, 500000000]).should == DateTime.civil(2011, 10, 11, 12, 13, (defined?(Rational) ? Rational(29, 2) : 14.5))
      @db.typecast_value(:datetime, [2011, 10, 11, 12, 13, 14, 500000000, (defined?(Rational) ? Rational(1, 2) : 0.5)]).should == DateTime.civil(2011, 10, 11, 12, 13, (defined?(Rational) ? Rational(29, 2) : 14.5), (defined?(Rational) ? Rational(1, 2) : 0.5))
    ensure
      Sequel.datetime_class = Time
    end
  end
  specify "should handle hashes when typecasting timestamps" do
    begin
      @db.typecast_value(:datetime, :year=>2011, :month=>10, :day=>11, :hour=>12, :minute=>13, :second=>14).should == Time.local(2011, 10, 11, 12, 13, 14)
      @db.typecast_value(:datetime, :year=>2011, :month=>10, :day=>11, :hour=>12, :minute=>13, :second=>14, :nanos=>500000000).should == Time.local(2011, 10, 11, 12, 13, 14, 500000)
      @db.typecast_value(:datetime, 'year'=>2011, 'month'=>10, 'day'=>11, 'hour'=>12, 'minute'=>13, 'second'=>14).should == Time.local(2011, 10, 11, 12, 13, 14)
      @db.typecast_value(:datetime, 'year'=>2011, 'month'=>10, 'day'=>11, 'hour'=>12, 'minute'=>13, 'second'=>14, 'nanos'=>500000000).should == Time.local(2011, 10, 11, 12, 13, 14, 500000)

      Sequel.datetime_class = DateTime
      @db.typecast_value(:datetime, :year=>2011, :month=>10, :day=>11, :hour=>12, :minute=>13, :second=>14).should == DateTime.civil(2011, 10, 11, 12, 13, 14)
      @db.typecast_value(:datetime, :year=>2011, :month=>10, :day=>11, :hour=>12, :minute=>13, :second=>14, :nanos=>500000000).should == DateTime.civil(2011, 10, 11, 12, 13, (defined?(Rational) ? Rational(29, 2) : 14.5))
      @db.typecast_value(:datetime, 'year'=>2011, 'month'=>10, 'day'=>11, 'hour'=>12, 'minute'=>13, 'second'=>14).should == DateTime.civil(2011, 10, 11, 12, 13, 14)
      @db.typecast_value(:datetime, 'year'=>2011, 'month'=>10, 'day'=>11, 'hour'=>12, 'minute'=>13, 'second'=>14, 'nanos'=>500000000).should == DateTime.civil(2011, 10, 11, 12, 13, (defined?(Rational) ? Rational(29, 2) : 14.5))
      @db.typecast_value(:datetime, :year=>2011, :month=>10, :day=>11, :hour=>12, :minute=>13, :second=>14, :offset=>(defined?(Rational) ? Rational(1, 2) : 0.5)).should == DateTime.civil(2011, 10, 11, 12, 13, 14, (defined?(Rational) ? Rational(1, 2) : 0.5))
      @db.typecast_value(:datetime, :year=>2011, :month=>10, :day=>11, :hour=>12, :minute=>13, :second=>14, :nanos=>500000000, :offset=>(defined?(Rational) ? Rational(1, 2) : 0.5)).should == DateTime.civil(2011, 10, 11, 12, 13, (defined?(Rational) ? Rational(29, 2) : 14.5), (defined?(Rational) ? Rational(1, 2) : 0.5))
      @db.typecast_value(:datetime, 'year'=>2011, 'month'=>10, 'day'=>11, 'hour'=>12, 'minute'=>13, 'second'=>14, 'offset'=>(defined?(Rational) ? Rational(1, 2) : 0.5)).should == DateTime.civil(2011, 10, 11, 12, 13, 14, (defined?(Rational) ? Rational(1, 2) : 0.5))
      @db.typecast_value(:datetime, 'year'=>2011, 'month'=>10, 'day'=>11, 'hour'=>12, 'minute'=>13, 'second'=>14, 'nanos'=>500000000, 'offset'=>(defined?(Rational) ? Rational(1, 2) : 0.5)).should == DateTime.civil(2011, 10, 11, 12, 13, (defined?(Rational) ? Rational(29, 2) : 14.5), (defined?(Rational) ? Rational(1, 2) : 0.5))
    ensure
      Sequel.datetime_class = Time
    end
  end
  specify "should typecast decimal values to BigDecimal" do
    [1.0, 1, '1.0', BigDecimal('1.0')].each do |i|
      v = @db.typecast_value(:decimal, i)
      v.should be_a_kind_of(BigDecimal)
      v.should == BigDecimal.new('1.0')
    end
  end
  specify "should typecast float values to Float" do
    [1.0, 1, '1.0', BigDecimal('1.0')].each do |i|
      v = @db.typecast_value(:float, i)
      v.should be_a_kind_of(Float)
      v.should == 1.0
    end
  end
  specify "should typecast string values to String" do
    [1.0, '1.0', Sequel.blob('1.0')].each do |i|
      v = @db.typecast_value(:string, i)
      v.should be_an_instance_of(String)
      v.should == "1.0"
    end
  end
  specify "should raise errors when typecasting hash and array values to String" do
    [[], {}].each do |i|
      proc{@db.typecast_value(:string, i)}.should raise_error(Sequel::InvalidValue)
    end
  end
  specify "should typecast time values to SQLTime" do
    t = Time.now
    st = Sequel::SQLTime.local(t.year, t.month, t.day, 1, 2, 3)
    [st, Time.utc(t.year, t.month, t.day, 1, 2, 3), Time.local(t.year, t.month, t.day, 1, 2, 3), '01:02:03', {:hour=>1, :minute=>2, :second=>3}].each do |i|
      v = @db.typecast_value(:time, i)
      v.should be_an_instance_of(Sequel::SQLTime)
      v.should == st
    end
  end
  specify "should correctly handle time value conversion to SQLTime with fractional seconds" do
    t = Time.now
    st = Sequel::SQLTime.local(t.year, t.month, t.day, 1, 2, 3, 500000)
    t = Time.local(t.year, t.month, t.day, 1, 2, 3, 500000)
    @db.typecast_value(:time, t).should == st
  end
  specify "should have an underlying exception class available at wrapped_exception" do
    begin
      @db.typecast_value(:date, 'a')
      true.should == false
    rescue Sequel::InvalidValue => e
      e.wrapped_exception.should be_a_kind_of(ArgumentError)
    end
  end
  specify "should include underlying exception class in #inspect" do
    begin
      @db.typecast_value(:date, 'a')
      true.should == false
    rescue Sequel::InvalidValue => e
      e.inspect.should =~ /\A#<Sequel::InvalidValue: ArgumentError: .*>\z/
    end
  end
end

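# Introspection predicate specs: blank_object? drives Sequel's blank-value
# handling, and the supports_*? predicates below default to false (with
# global_index_namespace? defaulting to true) unless the adapter implements
# the corresponding schema method.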
do specify "should return whether the object is considered blank" do db = Sequel::Database.new c = lambda{|meth, value| Class.new{define_method(meth){value}}.new} db.send(:blank_object?, "").should == true db.send(:blank_object?, " ").should == true db.send(:blank_object?, nil).should == true db.send(:blank_object?, false).should == true db.send(:blank_object?, []).should == true db.send(:blank_object?, {}).should == true db.send(:blank_object?, c[:empty?, true]).should == true db.send(:blank_object?, c[:blank?, true]).should == true db.send(:blank_object?, " a ").should == false db.send(:blank_object?, 1).should == false db.send(:blank_object?, 1.0).should == false db.send(:blank_object?, true).should == false db.send(:blank_object?, [1]).should == false db.send(:blank_object?, {1.0=>2.0}).should == false db.send(:blank_object?, c[:empty?, false]).should == false db.send(:blank_object?, c[:blank?, false]).should == false end end describe "Database#schema_autoincrementing_primary_key?" do specify "should indicate whether the parsed schema row indicates a primary key" do m = Sequel::Database.new.method(:schema_autoincrementing_primary_key?) m.call(:primary_key=>true, :db_type=>'integer').should == true m.call(:primary_key=>true, :db_type=>'varchar(255)').should == false m.call(:primary_key=>false, :db_type=>'integer').should == false end end describe "Database#supports_schema_parsing?" do specify "should be false by default" do Sequel::Database.new.supports_schema_parsing?.should == false end specify "should be true if the database implements schema_parse_table" do db = Sequel::Database.new def db.schema_parse_table(*) end db.supports_schema_parsing?.should == true end end describe "Database#supports_foreign_key_parsing?" do specify "should be false by default" do Sequel::Database.new.supports_foreign_key_parsing?.should == false end specify "should be true if the database implements foreign_key_list" do db = Sequel::Database.new def db.foreign_key_list(*) end db.supports_foreign_key_parsing?.should == true end end describe "Database#supports_index_parsing?" do specify "should be false by default" do Sequel::Database.new.supports_index_parsing?.should == false end specify "should be true if the database implements indexes" do db = Sequel::Database.new def db.indexes(*) end db.supports_index_parsing?.should == true end end describe "Database#supports_table_listing?" do specify "should be false by default" do Sequel::Database.new.supports_table_listing?.should == false end specify "should be true if the database implements tables" do db = Sequel::Database.new def db.tables(*) end db.supports_table_listing?.should == true end end describe "Database#supports_view_listing?" do specify "should be false by default" do Sequel::Database.new.supports_view_listing?.should == false end specify "should be true if the database implements views" do db = Sequel::Database.new def db.views(*) end db.supports_view_listing?.should == true end end describe "Database#supports_deferrable_constraints?" do specify "should be false by default" do Sequel::Database.new.supports_deferrable_constraints?.should == false end end describe "Database#supports_deferrable_foreign_key_constraints?" do specify "should be false by default" do Sequel::Database.new.supports_deferrable_foreign_key_constraints?.should == false end end describe "Database#supports_transactional_ddl?" 
do specify "should be false by default" do Sequel::Database.new.supports_transactional_ddl?.should == false end end describe "Database#global_index_namespace?" do specify "should be true by default" do Sequel::Database.new.global_index_namespace?.should == true end end describe "Database#supports_savepoints?" do specify "should be false by default" do Sequel::Database.new.supports_savepoints?.should == false end end describe "Database#supports_savepoints_in_prepared_transactions?" do specify "should be false by default" do Sequel::Database.new.supports_savepoints_in_prepared_transactions?.should == false end specify "should be true if both savepoints and prepared transactions are supported" do db = Sequel::Database.new meta_def(db, :supports_savepoints?){true} meta_def(db, :supports_prepared_transactions?){true} db.supports_savepoints_in_prepared_transactions?.should == true end end describe "Database#supports_prepared_transactions?" do specify "should be false by default" do Sequel::Database.new.supports_prepared_transactions?.should == false end end describe "Database#supports_transaction_isolation_levels?" do specify "should be false by default" do Sequel::Database.new.supports_transaction_isolation_levels?.should == false end end describe "Database#input_identifier_meth" do specify "should be the input_identifer method of a default dataset for this database" do db = Sequel::Database.new db.send(:input_identifier_meth).call(:a).should == 'a' db.identifier_input_method = :upcase db.send(:input_identifier_meth).call(:a).should == 'A' end end describe "Database#output_identifier_meth" do specify "should be the output_identifer method of a default dataset for this database" do db = Sequel::Database.new db.send(:output_identifier_meth).call('A').should == :A db.identifier_output_method = :downcase db.send(:output_identifier_meth).call('A').should == :a end end describe "Database#metadata_dataset" do specify "should be a dataset with the default settings for identifier_input_method and identifier_output_method" do ds = Sequel::Database.new.send(:metadata_dataset) ds.literal(:a).should == 'A' ds.send(:output_identifier, 'A').should == :a end end describe "Database#column_schema_to_ruby_default" do specify "should handle converting many default formats" do db = Sequel::Database.new p = lambda{|d,t| db.send(:column_schema_to_ruby_default, d, t)} p[nil, :integer].should be_nil p[1, :integer].should == 1 p['1', :integer].should == 1 p['-1', :integer].should == -1 p[1.0, :float].should == 1.0 p['1.0', :float].should == 1.0 p['-1.0', :float].should == -1.0 p['1.0', :decimal].should == BigDecimal.new('1.0') p['-1.0', :decimal].should == BigDecimal.new('-1.0') p[true, :boolean].should == true p[false, :boolean].should == false p['1', :boolean].should == true p['0', :boolean].should == false p['true', :boolean].should == true p['false', :boolean].should == false p["'t'", :boolean].should == true p["'f'", :boolean].should == false p["'a'", :string].should == 'a' p["'a'", :blob].should == Sequel.blob('a') p["'a'", :blob].should be_a_kind_of(Sequel::SQL::Blob) p["''", :string].should == '' p["'\\a''b'", :string].should == "\\a'b" p["'NULL'", :string].should == "NULL" p[Date.today, :date].should == Date.today p["'2009-10-29'", :date].should == Date.new(2009,10,29) p["CURRENT_TIMESTAMP", :date].should == Sequel::CURRENT_DATE p["CURRENT_DATE", :date].should == Sequel::CURRENT_DATE p["now()", :date].should == Sequel::CURRENT_DATE p["getdate()", :date].should == Sequel::CURRENT_DATE p["CURRENT_TIMESTAMP", 
describe "Database#column_schema_to_ruby_default" do
  specify "should handle converting many default formats" do
    db = Sequel::Database.new
    p = lambda{|d,t| db.send(:column_schema_to_ruby_default, d, t)}
    p[nil, :integer].should be_nil
    p[1, :integer].should == 1
    p['1', :integer].should == 1
    p['-1', :integer].should == -1
    p[1.0, :float].should == 1.0
    p['1.0', :float].should == 1.0
    p['-1.0', :float].should == -1.0
    p['1.0', :decimal].should == BigDecimal.new('1.0')
    p['-1.0', :decimal].should == BigDecimal.new('-1.0')
    p[true, :boolean].should == true
    p[false, :boolean].should == false
    p['1', :boolean].should == true
    p['0', :boolean].should == false
    p['true', :boolean].should == true
    p['false', :boolean].should == false
    p["'t'", :boolean].should == true
    p["'f'", :boolean].should == false
    p["'a'", :string].should == 'a'
    p["'a'", :blob].should == Sequel.blob('a')
    p["'a'", :blob].should be_a_kind_of(Sequel::SQL::Blob)
    p["''", :string].should == ''
    p["'\\a''b'", :string].should == "\\a'b"
    p["'NULL'", :string].should == "NULL"
    p[Date.today, :date].should == Date.today
    p["'2009-10-29'", :date].should == Date.new(2009,10,29)
    p["CURRENT_TIMESTAMP", :date].should == Sequel::CURRENT_DATE
    p["CURRENT_DATE", :date].should == Sequel::CURRENT_DATE
    p["now()", :date].should == Sequel::CURRENT_DATE
    p["getdate()", :date].should == Sequel::CURRENT_DATE
    p["CURRENT_TIMESTAMP", :datetime].should == Sequel::CURRENT_TIMESTAMP
    p["CURRENT_DATE", :datetime].should == Sequel::CURRENT_TIMESTAMP
    p["now()", :datetime].should == Sequel::CURRENT_TIMESTAMP
    p["getdate()", :datetime].should == Sequel::CURRENT_TIMESTAMP
    p["'2009-10-29T10:20:30-07:00'", :datetime].should == DateTime.parse('2009-10-29T10:20:30-07:00')
    p["'2009-10-29 10:20:30'", :datetime].should == DateTime.parse('2009-10-29 10:20:30')
    p["'10:20:30'", :time].should == Time.parse('10:20:30')
    p["NaN", :float].should be_nil

    db = Sequel.mock(:host=>'postgres')
    p["''::text", :string].should == ""
    p["'\\a''b'::character varying", :string].should == "\\a'b"
    p["'a'::bpchar", :string].should == "a"
    p["(-1)", :integer].should == -1
    p["(-1.0)", :float].should == -1.0
    p['(-1.0)', :decimal].should == BigDecimal.new('-1.0')
    p["'a'::bytea", :blob].should == Sequel.blob('a')
    p["'a'::bytea", :blob].should be_a_kind_of(Sequel::SQL::Blob)
    p["'2009-10-29'::date", :date].should == Date.new(2009,10,29)
    p["'2009-10-29 10:20:30.241343'::timestamp without time zone", :datetime].should == DateTime.parse('2009-10-29 10:20:30.241343')
    p["'10:20:30'::time without time zone", :time].should == Time.parse('10:20:30')

    db = Sequel.mock(:host=>'mysql')
    p["\\a'b", :string].should == "\\a'b"
    p["a", :string].should == "a"
    p["NULL", :string].should == "NULL"
    p["-1", :float].should == -1.0
    p['-1', :decimal].should == BigDecimal.new('-1.0')
    p["2009-10-29", :date].should == Date.new(2009,10,29)
    p["2009-10-29 10:20:30", :datetime].should == DateTime.parse('2009-10-29 10:20:30')
    p["10:20:30", :time].should == Time.parse('10:20:30')
    p["a", :enum].should == "a"
    p["a,b", :set].should == "a,b"

    db = Sequel.mock(:host=>'mssql')
    p["(N'a')", :string].should == "a"
    p["((-12))", :integer].should == -12
    p["((12.1))", :float].should == 12.1
    p["((-12.1))", :decimal].should == BigDecimal.new('-12.1')
  end
end
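# Usage sketch (illustrative): column_schema_to_ruby_default parses the
# database's textual column default into a ruby value, using adapter-specific
# rules (note the PostgreSQL casts and the MSSQL parentheses exercised above).
#
#   db = Sequel.mock(:host=>'postgres')
#   db.send(:column_schema_to_ruby_default, "'a'::bpchar", :string)  # => "a"
#   db.send(:column_schema_to_ruby_default, "(-1)", :integer)        # => -1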
describe "Database extensions" do
  before(:all) do
    class << Sequel
      alias _extension extension
      remove_method :extension
      def extension(*) end
    end
  end

  after(:all) do
    class << Sequel
      remove_method :extension
      alias extension _extension
      remove_method :_extension
    end
  end

  before do
    @db = Sequel.mock
  end

  after do
    Sequel::Database.instance_variable_set(:@initialize_hook, Proc.new {|db| })
  end

  specify "should be able to register an extension with a module and have Database#extension extend the module" do
    Sequel::Database.register_extension(:foo, Module.new{def a; 1; end})
    @db.extension(:foo).a.should == 1
  end

  specify "should be able to register an extension with a block and have Database#extension call the block" do
    @db.quote_identifiers = false
    Sequel::Database.register_extension(:foo){|db| db.quote_identifiers = true}
    @db.extension(:foo).quote_identifiers?.should be_true
  end

  specify "should be able to register an extension with a callable and have Database#extension call the callable" do
    @db.quote_identifiers = false
    Sequel::Database.register_extension(:foo, proc{|db| db.quote_identifiers = true})
    @db.extension(:foo).quote_identifiers?.should be_true
  end

  specify "should be able to load multiple extensions in the same call" do
    @db.quote_identifiers = false
    @db.identifier_input_method = :downcase
    Sequel::Database.register_extension(:foo, proc{|db| db.quote_identifiers = true})
    Sequel::Database.register_extension(:bar, proc{|db| db.identifier_input_method = nil})
    @db.extension(:foo, :bar)
    @db.quote_identifiers?.should be_true
    @db.identifier_input_method.should be_nil
  end

  specify "should return the receiver" do
    Sequel::Database.register_extension(:foo, Module.new{def a; 1; end})
    @db.extension(:foo).should equal(@db)
  end

  specify "should raise an Error if registering with both a module and a block" do
    proc{Sequel::Database.register_extension(:foo, Module.new){}}.should raise_error(Sequel::Error)
  end

  specify "should raise an Error if attempting to load an incompatible extension" do
    proc{@db.extension(:foo2)}.should raise_error(Sequel::Error)
  end

  specify "should be able to load an extension into all future Databases with Database.extension" do
    Sequel::Database.register_extension(:foo, Module.new{def a; 1; end})
    Sequel::Database.register_extension(:bar, Module.new{def b; 2; end})
    Sequel::Database.extension(:foo, :bar)
    @db.should_not respond_to(:a)
    @db.should_not respond_to(:b)
    Sequel.mock.a.should == 1
    Sequel.mock.b.should == 2
  end
end

describe "Database specific exception classes" do
  before do
    @db = Sequel.mock
    class << @db
      attr_accessor :sql_state

      def database_exception_sqlstate(exception, opts={})
        @sql_state
      end
    end
  end

  specify "should use appropriate exception classes for given SQL states" do
    @db.fetch = ArgumentError
    @db.sql_state = '23502'
    proc{@db.get(:a)}.should raise_error(Sequel::NotNullConstraintViolation)
    @db.sql_state = '23503'
    proc{@db.get(:a)}.should raise_error(Sequel::ForeignKeyConstraintViolation)
    @db.sql_state = '23505'
    proc{@db.get(:a)}.should raise_error(Sequel::UniqueConstraintViolation)
    @db.sql_state = '23513'
    proc{@db.get(:a)}.should raise_error(Sequel::CheckConstraintViolation)
    @db.sql_state = '40001'
    proc{@db.get(:a)}.should raise_error(Sequel::SerializationFailure)
  end
end

describe "Database.after_initialize" do
  after do
    Sequel::Database.instance_variable_set(:@initialize_hook, Proc.new {|db| })
  end

  specify "should allow a block to be run after each new instance is created" do
    Sequel::Database.after_initialize{|db| db.sql_log_level = :debug }
    db = Sequel.mock
    db.sql_log_level.should == :debug
  end

  specify "should allow multiple hooks to be registered" do
    Sequel::Database.after_initialize{|db| db.sql_log_level = :debug }
    Sequel::Database.after_initialize{|db| db.loggers << 11 }

    db = Sequel.mock

    db.sql_log_level.should == :debug
    db.loggers.should include(11)
  end

  specify "should raise an error if registration is called without a block" do
    proc {
      Sequel::Database.after_initialize
    }.should raise_error(Sequel::Error, /must provide block/i)
  end
end

describe "Database#schema_type_class" do
  specify "should return the class or array of classes for the given type symbol" do
    db = Sequel.mock
    {:string=>String, :integer=>Integer, :date=>Date, :datetime=>[Time, DateTime],
     :time=>Sequel::SQLTime, :boolean=>[TrueClass, FalseClass], :float=>Float, :decimal=>BigDecimal,
     :blob=>Sequel::SQL::Blob}.each do |sym, klass|
      db.schema_type_class(sym).should == klass
    end
  end
end
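# Usage sketch (illustrative; the :noisy extension name is hypothetical):
# Database.register_extension plus Database#extension is the plumbing behind
# `db.extension(:foo)`, and Database.after_initialize registers hooks that run
# against every Database created afterwards.
#
#   Sequel::Database.register_extension(:noisy, Module.new{def noisy?; true; end})
#   Sequel::Database.after_initialize{|db| db.sql_log_level = :debug}
#   db = Sequel.mock
#   db.extension(:noisy).noisy?  # => true
#   db.sql_log_level             # => :debug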
database in initialize" do db = "db" d = Sequel::Dataset.new(db) d.db.should be(db) d.opts.should == {} end specify "should provide clone for chainability" do d1 = @dataset.clone(:from => [:test]) d1.class.should == @dataset.class d1.should_not == @dataset d1.db.should be(@dataset.db) d1.opts[:from].should == [:test] @dataset.opts[:from].should be_nil d2 = d1.clone(:order => [:name]) d2.class.should == @dataset.class d2.should_not == d1 d2.should_not == @dataset d2.db.should be(@dataset.db) d2.opts[:from].should == [:test] d2.opts[:order].should == [:name] d1.opts[:order].should be_nil end specify "should include Enumerable" do Sequel::Dataset.included_modules.should include(Enumerable) end specify "should yield rows to each" do ds = Sequel.mock[:t] ds._fetch = {:x=>1} called = false ds.each{|a| called = true; a.should == {:x=>1}} called.should be_true end specify "should get quote_identifiers default from database" do db = Sequel::Database.new(:quote_identifiers=>true) db[:a].quote_identifiers?.should == true db = Sequel::Database.new(:quote_identifiers=>false) db[:a].quote_identifiers?.should == false end specify "should get identifier_input_method default from database" do db = Sequel::Database.new(:identifier_input_method=>:upcase) db[:a].identifier_input_method.should == :upcase db = Sequel::Database.new(:identifier_input_method=>:downcase) db[:a].identifier_input_method.should == :downcase end specify "should get identifier_output_method default from database" do db = Sequel::Database.new(:identifier_output_method=>:upcase) db[:a].identifier_output_method.should == :upcase db = Sequel::Database.new(:identifier_output_method=>:downcase) db[:a].identifier_output_method.should == :downcase end specify "should have quote_identifiers= method which changes literalization of identifiers" do @dataset.quote_identifiers = true @dataset.literal(:a).should == '"a"' @dataset.quote_identifiers = false @dataset.literal(:a).should == 'a' end specify "should have identifier_input_method= method which changes literalization of identifiers" do @dataset.identifier_input_method = :upcase @dataset.literal(:a).should == 'A' @dataset.identifier_input_method = :downcase @dataset.literal(:A).should == 'a' @dataset.identifier_input_method = :reverse @dataset.literal(:at_b).should == 'b_ta' end specify "should have identifier_output_method= method which changes identifiers returned from the database" do @dataset.send(:output_identifier, "at_b_C").should == :at_b_C @dataset.identifier_output_method = :upcase @dataset.send(:output_identifier, "at_b_C").should == :AT_B_C @dataset.identifier_output_method = :downcase @dataset.send(:output_identifier, "at_b_C").should == :at_b_c @dataset.identifier_output_method = :reverse @dataset.send(:output_identifier, "at_b_C").should == :C_b_ta end specify "should have output_identifier handle empty identifiers" do @dataset.send(:output_identifier, "").should == :untitled @dataset.identifier_output_method = :upcase @dataset.send(:output_identifier, "").should == :UNTITLED @dataset.identifier_output_method = :downcase @dataset.send(:output_identifier, "").should == :untitled @dataset.identifier_output_method = :reverse @dataset.send(:output_identifier, "").should == :deltitnu end end describe "Dataset#clone" do before do @dataset = Sequel.mock.dataset.from(:items) end specify "should create an exact copy of the dataset" do @dataset.row_proc = Proc.new{|r| r} clone = @dataset.clone clone.object_id.should_not === @dataset.object_id clone.class.should == @dataset.class 
describe "Dataset#clone" do
  before do
    @dataset = Sequel.mock.dataset.from(:items)
  end

  specify "should create an exact copy of the dataset" do
    @dataset.row_proc = Proc.new{|r| r}
    clone = @dataset.clone

    clone.object_id.should_not === @dataset.object_id
    clone.class.should == @dataset.class
    clone.db.should == @dataset.db
    clone.opts.should == @dataset.opts
    clone.row_proc.should == @dataset.row_proc
  end

  specify "should copy the dataset opts" do
    clone = @dataset.clone
    clone.opts.should_not equal(@dataset.opts)
    @dataset.filter!(:a => 'b')
    clone.opts[:filter].should be_nil

    clone = @dataset.clone(:from => [:other])
    @dataset.opts[:from].should == [:items]
    clone.opts[:from].should == [:other]
  end

  specify "should merge the specified options" do
    clone = @dataset.clone(1 => 2)
    clone.opts.should == {1 => 2, :from => [:items]}
  end

  specify "should overwrite existing options" do
    clone = @dataset.clone(:from => [:other])
    clone.opts.should == {:from => [:other]}
  end

  specify "should return an object with the same modules included" do
    m = Module.new do
      def __xyz__; "xyz"; end
    end
    @dataset.extend(m)
    @dataset.clone({}).should respond_to(:__xyz__)
  end
end

describe "Dataset#==" do
  before do
    @db = Sequel.mock
    @h = {}
  end

  specify "should be true for datasets with the same db, opts, and SQL" do
    @db[:t].should == @db[:t]
  end

  specify "should be different for datasets with different dbs" do
    @db[:t].should_not == Sequel.mock[:t]
  end

  specify "should be different for datasets with different opts" do
    @db[:t].should_not == @db[:t].clone(:blah=>1)
  end

  specify "should be different for datasets with different SQL" do
    ds = @db[:t]
    ds.quote_identifiers = true
    ds.should_not == @db[:t]
  end
end

describe "Dataset#hash" do
  before do
    @db = Sequel.mock
    @h = {}
  end

  specify "should be the same for dataset with the same db, opts, and SQL" do
    @db[:t].hash.should == @db[:t].hash
    @h[@db[:t]] = 1
    @h[@db[:t]].should == 1
  end

  specify "should be different for datasets with different dbs" do
    @db[:t].hash.should_not == Sequel.mock[:t].hash
  end

  specify "should be different for datasets with different opts" do
    @db[:t].hash.should_not == @db[:t].clone(:blah=>1).hash
  end

  specify "should be different for datasets with different SQL" do
    ds = @db[:t]
    ds.quote_identifiers = true
    ds.hash.should_not == @db[:t].hash
  end
end
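# Usage sketch (illustrative): clone is the primitive behind Sequel's chainable
# query methods; each call returns a copy with merged opts and leaves the
# receiver untouched, and equality/hash are defined in terms of db, opts, and
# generated SQL.
#
#   ds = Sequel.mock.dataset.from(:items)
#   ds.clone(:order => [:name]).opts  # => {:from=>[:items], :order=>[:name]}
#   ds.opts                           # unchanged: {:from=>[:items]}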
describe "A simple dataset" do
  before do
    @dataset = Sequel.mock.dataset.from(:test)
  end

  specify "should format a select statement" do
    @dataset.select_sql.should == 'SELECT * FROM test'
  end

  specify "should format a delete statement" do
    @dataset.delete_sql.should == 'DELETE FROM test'
  end

  specify "should format a truncate statement" do
    @dataset.truncate_sql.should == 'TRUNCATE TABLE test'
  end

  specify "should format a truncate statement with multiple tables if supported" do
    meta_def(@dataset, :check_truncation_allowed!){}
    @dataset.from(:test, :test2).truncate_sql.should == 'TRUNCATE TABLE test, test2'
  end

  specify "should format an insert statement with default values" do
    @dataset.insert_sql.should == 'INSERT INTO test DEFAULT VALUES'
  end

  specify "should use a single column with a default value when the dataset doesn't support insert statements with default values" do
    meta_def(@dataset, :insert_supports_empty_values?){false}
    meta_def(@dataset, :columns){[:a, :b]}
    @dataset.insert_sql.should == 'INSERT INTO test (b) VALUES (DEFAULT)'
  end

  specify "should format an insert statement with hash" do
    @dataset.insert_sql(:name => 'wxyz', :price => 342).
      should match(/INSERT INTO test \(name, price\) VALUES \('wxyz', 342\)|INSERT INTO test \(price, name\) VALUES \(342, 'wxyz'\)/)

    @dataset.insert_sql({}).should == "INSERT INTO test DEFAULT VALUES"
  end

  specify "should format an insert statement with string keys" do
    @dataset.insert_sql('name' => 'wxyz', 'price' => 342).
      should match(/INSERT INTO test \(name, price\) VALUES \('wxyz', 342\)|INSERT INTO test \(price, name\) VALUES \(342, 'wxyz'\)/)
  end

  specify "should format an insert statement with an arbitrary value" do
    @dataset.insert_sql(123).should == "INSERT INTO test VALUES (123)"
  end

  specify "should format an insert statement with sub-query" do
    @dataset.insert_sql(@dataset.from(:something).filter(:x => 2)).should == "INSERT INTO test SELECT * FROM something WHERE (x = 2)"
  end

  specify "should format an insert statement with array" do
    @dataset.insert_sql('a', 2, 6.5).should == "INSERT INTO test VALUES ('a', 2, 6.5)"
  end

  specify "should format an update statement" do
    @dataset.update_sql(:name => 'abc').should == "UPDATE test SET name = 'abc'"
  end

  specify "should be able to return rows for arbitrary SQL" do
    @dataset.clone(:sql => 'xxx yyy zzz').select_sql.should == "xxx yyy zzz"
  end

  specify "should use the :sql option for all sql methods" do
    sql = "X"
    ds = @dataset.clone(:sql=>sql)
    ds.sql.should == sql
    ds.select_sql.should == sql
    ds.insert_sql.should == sql
    ds.delete_sql.should == sql
    ds.update_sql.should == sql
    ds.truncate_sql.should == sql
  end
end

describe "A dataset with multiple tables in its FROM clause" do
  before do
    @dataset = Sequel.mock.dataset.from(:t1, :t2)
  end

  specify "should raise on #update_sql" do
    proc {@dataset.update_sql(:a=>1)}.should raise_error(Sequel::InvalidOperation)
  end

  specify "should raise on #delete_sql" do
    proc {@dataset.delete_sql}.should raise_error(Sequel::InvalidOperation)
  end

  specify "should raise on #truncate_sql" do
    proc {@dataset.truncate_sql}.should raise_error(Sequel::InvalidOperation)
  end

  specify "should raise on #insert_sql" do
    proc {@dataset.insert_sql}.should raise_error(Sequel::InvalidOperation)
  end

  specify "should generate a select query FROM all specified tables" do
    @dataset.select_sql.should == "SELECT * FROM t1, t2"
  end
end

describe "Dataset#unused_table_alias" do
  before do
    @ds = Sequel.mock.dataset.from(:test)
  end

  specify "should return given symbol if it hasn't already been used" do
    @ds.unused_table_alias(:blah).should == :blah
  end

  specify "should return a symbol specifying an alias that hasn't already been used if it has already been used" do
    @ds.unused_table_alias(:test).should == :test_0
    @ds.from(:test, :test_0).unused_table_alias(:test).should == :test_1
    @ds.from(:test, :test_0).cross_join(:test_1).unused_table_alias(:test).should == :test_2
  end

  specify "should return an appropriate symbol if given other forms of identifiers" do
    @ds.unused_table_alias('test').should == :test_0
    @ds.unused_table_alias(:b__t___test).should == :test_0
    @ds.unused_table_alias(:b__test).should == :test_0
    @ds.unused_table_alias(Sequel.qualify(:b, :test)).should == :test_0
    @ds.unused_table_alias(Sequel.expr(:b).as(:test)).should == :test_0
    @ds.unused_table_alias(Sequel.expr(:b).as(Sequel.identifier(:test))).should == :test_0
    @ds.unused_table_alias(Sequel.expr(:b).as('test')).should == :test_0
    @ds.unused_table_alias(Sequel.identifier(:test)).should == :test_0
  end
end
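# Usage sketch (illustrative): the *_sql methods only build strings; nothing is
# executed until a method like #all, #insert, or #update runs them.
#
#   ds = Sequel.mock.dataset.from(:test)
#   ds.insert_sql(:name => 'abc')   # => "INSERT INTO test (name) VALUES ('abc')"
#   ds.update_sql(:name => 'abc')   # => "UPDATE test SET name = 'abc'"
#   ds.delete_sql                   # => "DELETE FROM test"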
in select" do @ds1.select(@ds2.exists.as(:a), @ds3.exists.as(:b)).sql.should == 'SELECT (EXISTS (SELECT * FROM test WHERE (price < 100))) AS a, (EXISTS (SELECT * FROM test WHERE (price > 50))) AS b FROM test' end end describe "Dataset#where" do before do @dataset = Sequel.mock[:test] @d1 = @dataset.where(:region => 'Asia') @d2 = @dataset.where('region = ?', 'Asia') @d3 = @dataset.where("a = 1") end specify "should just clone if given an empty argument" do @dataset.where({}).sql.should == @dataset.sql @dataset.where([]).sql.should == @dataset.sql @dataset.where('').sql.should == @dataset.sql @dataset.filter({}).sql.should == @dataset.sql @dataset.filter([]).sql.should == @dataset.sql @dataset.filter('').sql.should == @dataset.sql end specify "should work with hashes" do @dataset.where(:name => 'xyz', :price => 342).select_sql. should match(/WHERE \(\(name = 'xyz'\) AND \(price = 342\)\)|WHERE \(\(price = 342\) AND \(name = 'xyz'\)\)/) end specify "should work with a string with placeholders and arguments for those placeholders" do @dataset.where('price < ? AND id in ?', 100, [1, 2, 3]).select_sql.should == "SELECT * FROM test WHERE (price < 100 AND id in (1, 2, 3))" end specify "should not modify passed array with placeholders" do a = ['price < ? AND id in ?', 100, 1, 2, 3] b = a.dup @dataset.where(a) b.should == a end specify "should work with strings (custom SQL expressions)" do @dataset.where('(a = 1 AND b = 2)').select_sql.should == "SELECT * FROM test WHERE ((a = 1 AND b = 2))" end specify "should work with a string with named placeholders and a hash of placeholder value arguments" do @dataset.where('price < :price AND id in :ids', :price=>100, :ids=>[1, 2, 3]).select_sql.should == "SELECT * FROM test WHERE (price < 100 AND id in (1, 2, 3))" end specify "should not modify passed array with named placeholders" do a = ['price < :price AND id in :ids', {:price=>100}] b = a.dup @dataset.where(a) b.should == a end specify "should not replace named placeholders that don't exist in the hash" do @dataset.where('price < :price AND id in :ids', :price=>100).select_sql.should == "SELECT * FROM test WHERE (price < 100 AND id in :ids)" end specify "should raise an error for a mismatched number of placeholders" do proc{@dataset.where('price < ? AND id in ?', 100).select_sql}.should raise_error(Sequel::Error) proc{@dataset.where('price < ? 
  specify "should raise an error for a mismatched number of placeholders" do
    proc{@dataset.where('price < ? AND id in ?', 100).select_sql}.should raise_error(Sequel::Error)
    proc{@dataset.where('price < ? AND id in ?', 100, [1, 2, 3], 4).select_sql}.should raise_error(Sequel::Error)
  end

  specify "should handle placeholders when using an array" do
    @dataset.where(Sequel.lit(['price < ', ' AND id in '], 100, [1, 2, 3])).select_sql.should == "SELECT * FROM test WHERE price < 100 AND id in (1, 2, 3)"
    @dataset.where(Sequel.lit(['price < ', ' AND id in '], 100)).select_sql.should == "SELECT * FROM test WHERE price < 100 AND id in "
  end

  specify "should handle a mismatched number of placeholders when using an array" do
    proc{@dataset.where(Sequel.lit(['a = ', ' AND price < ', ' AND id in '], 100)).select_sql}.should raise_error(Sequel::Error)
    proc{@dataset.where(Sequel.lit(['price < ', ' AND id in '], 100, [1, 2, 3], 4)).select_sql}.should raise_error(Sequel::Error)
  end

  specify "should handle partial names" do
    @dataset.where('price < :price AND id = :p', :p=>2, :price=>100).select_sql.should == "SELECT * FROM test WHERE (price < 100 AND id = 2)"
  end

  specify "should affect select, delete and update statements" do
    @d1.select_sql.should == "SELECT * FROM test WHERE (region = 'Asia')"
    @d1.delete_sql.should == "DELETE FROM test WHERE (region = 'Asia')"
    @d1.update_sql(:GDP => 0).should == "UPDATE test SET GDP = 0 WHERE (region = 'Asia')"

    @d2.select_sql.should == "SELECT * FROM test WHERE (region = 'Asia')"
    @d2.delete_sql.should == "DELETE FROM test WHERE (region = 'Asia')"
    @d2.update_sql(:GDP => 0).should == "UPDATE test SET GDP = 0 WHERE (region = 'Asia')"

    @d3.select_sql.should == "SELECT * FROM test WHERE (a = 1)"
    @d3.delete_sql.should == "DELETE FROM test WHERE (a = 1)"
    @d3.update_sql(:GDP => 0).should == "UPDATE test SET GDP = 0 WHERE (a = 1)"
  end

  specify "should be composable using AND operator (for scoping)" do
    @d1.where(:size => 'big').select_sql.should == "SELECT * FROM test WHERE ((region = 'Asia') AND (size = 'big'))"
    @d1.where('population > 1000').select_sql.should == "SELECT * FROM test WHERE ((region = 'Asia') AND (population > 1000))"
    @d1.where('(a > 1) OR (b < 2)').select_sql.should == "SELECT * FROM test WHERE ((region = 'Asia') AND ((a > 1) OR (b < 2)))"
    @d1.where('GDP > ?', 1000).select_sql.should == "SELECT * FROM test WHERE ((region = 'Asia') AND (GDP > 1000))"
    @d2.where('GDP > ?', 1000).select_sql.should == "SELECT * FROM test WHERE ((region = 'Asia') AND (GDP > 1000))"
    @d2.where(:name => ['Japan', 'China']).select_sql.should == "SELECT * FROM test WHERE ((region = 'Asia') AND (name IN ('Japan', 'China')))"
    @d2.where('GDP > ?').select_sql.should == "SELECT * FROM test WHERE ((region = 'Asia') AND (GDP > ?))"
    @d3.where('b = 2').select_sql.should == "SELECT * FROM test WHERE ((a = 1) AND (b = 2))"
    @d3.where(:c => 3).select_sql.should == "SELECT * FROM test WHERE ((a = 1) AND (c = 3))"
    @d3.where('d = ?', 4).select_sql.should == "SELECT * FROM test WHERE ((a = 1) AND (d = 4))"
  end

  specify "should be composable using AND operator (for scoping) with block" do
    @d3.where{e < 5}.select_sql.should == "SELECT * FROM test WHERE ((a = 1) AND (e < 5))"
  end

  specify "should accept ranges" do
    @dataset.filter(:id => 4..7).sql.should == 'SELECT * FROM test WHERE ((id >= 4) AND (id <= 7))'
    @dataset.filter(:id => 4...7).sql.should == 'SELECT * FROM test WHERE ((id >= 4) AND (id < 7))'

    @dataset.filter(:table__id => 4..7).sql.should == 'SELECT * FROM test WHERE ((table.id >= 4) AND (table.id <= 7))'
    @dataset.filter(:table__id => 4...7).sql.should == 'SELECT * FROM test WHERE ((table.id >= 4) AND (table.id < 7))'
  end
  specify "should accept nil" do
    @dataset.filter(:owner_id => nil).sql.should == 'SELECT * FROM test WHERE (owner_id IS NULL)'
  end

  specify "should accept a subquery" do
    @dataset.filter('gdp > ?', @d1.select(Sequel.function(:avg, :gdp))).sql.should == "SELECT * FROM test WHERE (gdp > (SELECT avg(gdp) FROM test WHERE (region = 'Asia')))"
  end

  specify "should handle all types of IN/NOT IN queries with empty arrays" do
    @dataset.filter(:id => []).sql.should == "SELECT * FROM test WHERE (id != id)"
    @dataset.filter([:id1, :id2] => []).sql.should == "SELECT * FROM test WHERE ((id1 != id1) AND (id2 != id2))"
    @dataset.exclude(:id => []).sql.should == "SELECT * FROM test WHERE (id = id)"
    @dataset.exclude([:id1, :id2] => []).sql.should == "SELECT * FROM test WHERE ((id1 = id1) AND (id2 = id2))"
  end

  specify "should handle all types of IN/NOT IN queries" do
    @dataset.filter(:id => @d1.select(:id)).sql.should == "SELECT * FROM test WHERE (id IN (SELECT id FROM test WHERE (region = 'Asia')))"
    @dataset.filter(:id => [1, 2]).sql.should == "SELECT * FROM test WHERE (id IN (1, 2))"
    @dataset.filter([:id1, :id2] => @d1.select(:id1, :id2)).sql.should == "SELECT * FROM test WHERE ((id1, id2) IN (SELECT id1, id2 FROM test WHERE (region = 'Asia')))"
    @dataset.filter([:id1, :id2] => Sequel.value_list([[1, 2], [3,4]])).sql.should == "SELECT * FROM test WHERE ((id1, id2) IN ((1, 2), (3, 4)))"
    @dataset.filter([:id1, :id2] => [[1, 2], [3,4]]).sql.should == "SELECT * FROM test WHERE ((id1, id2) IN ((1, 2), (3, 4)))"

    @dataset.exclude(:id => @d1.select(:id)).sql.should == "SELECT * FROM test WHERE (id NOT IN (SELECT id FROM test WHERE (region = 'Asia')))"
    @dataset.exclude(:id => [1, 2]).sql.should == "SELECT * FROM test WHERE (id NOT IN (1, 2))"
    @dataset.exclude([:id1, :id2] => @d1.select(:id1, :id2)).sql.should == "SELECT * FROM test WHERE ((id1, id2) NOT IN (SELECT id1, id2 FROM test WHERE (region = 'Asia')))"
    @dataset.exclude([:id1, :id2] => Sequel.value_list([[1, 2], [3,4]])).sql.should == "SELECT * FROM test WHERE ((id1, id2) NOT IN ((1, 2), (3, 4)))"
    @dataset.exclude([:id1, :id2] => [[1, 2], [3,4]]).sql.should == "SELECT * FROM test WHERE ((id1, id2) NOT IN ((1, 2), (3, 4)))"
  end

  specify "should handle IN/NOT IN queries with multiple columns and an array where the database doesn't support it" do
    meta_def(@dataset, :supports_multiple_column_in?){false}
    @dataset.filter([:id1, :id2] => [[1, 2], [3,4]]).sql.should == "SELECT * FROM test WHERE (((id1 = 1) AND (id2 = 2)) OR ((id1 = 3) AND (id2 = 4)))"
    @dataset.exclude([:id1, :id2] => [[1, 2], [3,4]]).sql.should == "SELECT * FROM test WHERE (((id1 != 1) OR (id2 != 2)) AND ((id1 != 3) OR (id2 != 4)))"
    @dataset.filter([:id1, :id2] => Sequel.value_list([[1, 2], [3,4]])).sql.should == "SELECT * FROM test WHERE (((id1 = 1) AND (id2 = 2)) OR ((id1 = 3) AND (id2 = 4)))"
    @dataset.exclude([:id1, :id2] => Sequel.value_list([[1, 2], [3,4]])).sql.should == "SELECT * FROM test WHERE (((id1 != 1) OR (id2 != 2)) AND ((id1 != 3) OR (id2 != 4)))"
  end
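  # Usage sketch (illustrative): hash filters treat arrays and datasets as IN
  # clauses, and an empty array degenerates to a condition matching no rows
  # (or, for exclude, all rows), as specified above.
  #
  #   ds = Sequel.mock[:test]
  #   ds.where(:id => [1, 2]).sql   # => "... WHERE (id IN (1, 2))"
  #   ds.where(:id => []).sql       # => "... WHERE (id != id)"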
OR (id2 != 4)))" db.sqls.should == ["SELECT id1, id2 FROM test WHERE (region = 'Asia')"] end specify "should handle IN/NOT IN queries with multiple columns and an empty dataset where the database doesn't support it" do meta_def(@dataset, :supports_multiple_column_in?){false} db = Sequel.mock d1 = db[:test].select(:id1, :id2).filter(:region=>'Asia').columns(:id1, :id2) @dataset.filter([:id1, :id2] => d1).sql.should == "SELECT * FROM test WHERE ((id1 != id1) AND (id2 != id2))" db.sqls.should == ["SELECT id1, id2 FROM test WHERE (region = 'Asia')"] @dataset.exclude([:id1, :id2] => d1).sql.should == "SELECT * FROM test WHERE ((id1 = id1) AND (id2 = id2))" db.sqls.should == ["SELECT id1, id2 FROM test WHERE (region = 'Asia')"] end specify "should handle IN/NOT IN queries for datasets with row_procs" do meta_def(@dataset, :supports_multiple_column_in?){false} db = Sequel.mock(:fetch=>[{:id1=>1, :id2=>2}, {:id1=>3, :id2=>4}]) d1 = db[:test].select(:id1, :id2).filter(:region=>'Asia').columns(:id1, :id2) d1.row_proc = proc{|h| Object.new} @dataset.filter([:id1, :id2] => d1).sql.should == "SELECT * FROM test WHERE (((id1 = 1) AND (id2 = 2)) OR ((id1 = 3) AND (id2 = 4)))" db.sqls.should == ["SELECT id1, id2 FROM test WHERE (region = 'Asia')"] @dataset.exclude([:id1, :id2] => d1).sql.should == "SELECT * FROM test WHERE (((id1 != 1) OR (id2 != 2)) AND ((id1 != 3) OR (id2 != 4)))" db.sqls.should == ["SELECT id1, id2 FROM test WHERE (region = 'Asia')"] end specify "should accept a subquery for an EXISTS clause" do a = @dataset.filter(Sequel.expr(:price) < 100) @dataset.filter(a.exists).sql.should == 'SELECT * FROM test WHERE (EXISTS (SELECT * FROM test WHERE (price < 100)))' end specify "should accept proc expressions" do d = @d1.select(Sequel.function(:avg, :gdp)) @dataset.filter{gdp > d}.sql.should == "SELECT * FROM test WHERE (gdp > (SELECT avg(gdp) FROM test WHERE (region = 'Asia')))" @dataset.filter{a < 1}.sql.should == 'SELECT * FROM test WHERE (a < 1)' @dataset.filter{(a >= 1) & (b <= 2)}.sql.should == 'SELECT * FROM test WHERE ((a >= 1) AND (b <= 2))' @dataset.filter{c.like 'ABC%'}.sql.should == "SELECT * FROM test WHERE (c LIKE 'ABC%' ESCAPE '\\')" @dataset.filter{c.like 'ABC%', '%XYZ'}.sql.should == "SELECT * FROM test WHERE ((c LIKE 'ABC%' ESCAPE '\\') OR (c LIKE '%XYZ' ESCAPE '\\'))" end specify "should work for grouped datasets" do @dataset.group(:a).filter(:b => 1).sql.should == 'SELECT * FROM test WHERE (b = 1) GROUP BY a' end specify "should accept true and false as arguments" do @dataset.filter(true).sql.should == "SELECT * FROM test WHERE 't'" @dataset.filter(Sequel::SQLTRUE).sql.should == "SELECT * FROM test WHERE 't'" @dataset.filter(false).sql.should == "SELECT * FROM test WHERE 'f'" @dataset.filter(Sequel::SQLFALSE).sql.should == "SELECT * FROM test WHERE 'f'" end specify "should use boolean expression if dataset does not support where true/false" do def @dataset.supports_where_true?() false end @dataset.filter(true).sql.should == "SELECT * FROM test WHERE (1 = 1)" @dataset.filter(Sequel::SQLTRUE).sql.should == "SELECT * FROM test WHERE (1 = 1)" @dataset.filter(false).sql.should == "SELECT * FROM test WHERE (1 = 0)" @dataset.filter(Sequel::SQLFALSE).sql.should == "SELECT * FROM test WHERE (1 = 0)" end specify "should allow the use of multiple arguments" do @dataset.filter(:a, :b).sql.should == 'SELECT * FROM test WHERE (a AND b)' @dataset.filter(:a, :b=>1).sql.should == 'SELECT * FROM test WHERE (a AND (b = 1))' @dataset.filter(:a, Sequel.expr(:c) > 3, :b=>1).sql.should == 'SELECT * 
  specify "should allow the use of multiple arguments" do
    @dataset.filter(:a, :b).sql.should == 'SELECT * FROM test WHERE (a AND b)'
    @dataset.filter(:a, :b=>1).sql.should == 'SELECT * FROM test WHERE (a AND (b = 1))'
    @dataset.filter(:a, Sequel.expr(:c) > 3, :b=>1).sql.should == 'SELECT * FROM test WHERE (a AND (c > 3) AND (b = 1))'
  end

  specify "should allow the use of blocks and arguments simultaneously" do
    @dataset.filter(Sequel.expr(:zz) < 3){yy > 3}.sql.should == 'SELECT * FROM test WHERE ((zz < 3) AND (yy > 3))'
  end

  specify "should yield a VirtualRow to the block" do
    x = nil
    @dataset.filter{|r| x = r; false}
    x.should be_a_kind_of(Sequel::SQL::VirtualRow)
    @dataset.filter{|r| ((r.name < 'b') & {r.table__id => 1}) | r.is_active(r.blah, r.xx, r.x__y_z)}.sql.should == "SELECT * FROM test WHERE (((name < 'b') AND (table.id = 1)) OR is_active(blah, xx, x.y_z))"
  end

  specify "should instance_eval the block in the context of a VirtualRow if the block doesn't request an argument" do
    x = nil
    @dataset.filter{x = self; false}
    x.should be_a_kind_of(Sequel::SQL::VirtualRow)
    @dataset.filter{((name < 'b') & {table__id => 1}) | is_active(blah, xx, x__y_z)}.sql.should == "SELECT * FROM test WHERE (((name < 'b') AND (table.id = 1)) OR is_active(blah, xx, x.y_z))"
  end

  specify "should raise an error if an invalid argument is used" do
    proc{@dataset.filter(1)}.should raise_error(Sequel::Error)
  end

  specify "should raise an error if a NumericExpression or StringExpression is used" do
    proc{@dataset.filter(Sequel.expr(:x) + 1)}.should raise_error(Sequel::Error)
    proc{@dataset.filter(Sequel.expr(:x).sql_string)}.should raise_error(Sequel::Error)
  end
end

describe "Dataset#or" do
  before do
    @dataset = Sequel.mock.dataset.from(:test)
    @d1 = @dataset.where(:x => 1)
  end

  specify "should just clone if no where clause exists" do
    @dataset.or(:a => 1).sql.should == 'SELECT * FROM test'
  end

  specify "should just clone if given an empty argument" do
    @d1.or({}).sql.should == @d1.sql
    @d1.or([]).sql.should == @d1.sql
    @d1.or('').sql.should == @d1.sql
  end

  specify "should add an alternative expression to the where clause" do
    @d1.or(:y => 2).sql.should == 'SELECT * FROM test WHERE ((x = 1) OR (y = 2))'
  end

  specify "should accept all forms of filters" do
    @d1.or('y > ?', 2).sql.should == 'SELECT * FROM test WHERE ((x = 1) OR (y > 2))'
    @d1.or(Sequel.expr(:yy) > 3).sql.should == 'SELECT * FROM test WHERE ((x = 1) OR (yy > 3))'
  end

  specify "should accept blocks passed to filter" do
    @d1.or{yy > 3}.sql.should == 'SELECT * FROM test WHERE ((x = 1) OR (yy > 3))'
  end

  specify "should correctly add parens to give predictable results" do
    @d1.filter(:y => 2).or(:z => 3).sql.should == 'SELECT * FROM test WHERE (((x = 1) AND (y = 2)) OR (z = 3))'
    @d1.or(:y => 2).filter(:z => 3).sql.should == 'SELECT * FROM test WHERE (((x = 1) OR (y = 2)) AND (z = 3))'
  end

  specify "should allow the use of blocks and arguments simultaneously" do
    @d1.or(Sequel.expr(:zz) < 3){yy > 3}.sql.should == 'SELECT * FROM test WHERE ((x = 1) OR ((zz < 3) AND (yy > 3)))'
  end
end
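# Usage sketch (illustrative): #where/#and AND the new conditions onto the
# current WHERE clause, while #or ORs against the whole existing clause, so
# call order determines the parenthesization.
#
#   ds = Sequel.mock.dataset.from(:test).where(:x => 1)
#   ds.where(:y => 2).or(:z => 3).sql
#   # => "SELECT * FROM test WHERE (((x = 1) AND (y = 2)) OR (z = 3))"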
predictable results" do @d1.or(:y => 2).and(:z => 3).sql.should == 'SELECT * FROM test WHERE (((x = 1) OR (y = 2)) AND (z = 3))' @d1.and(:y => 2).or(:z => 3).sql.should == 'SELECT * FROM test WHERE (((x = 1) AND (y = 2)) OR (z = 3))' end end describe "Dataset#exclude" do before do @dataset = Sequel.mock.dataset.from(:test) end specify "should correctly negate the expression when one condition is given" do @dataset.exclude(:region=>'Asia').select_sql.should == "SELECT * FROM test WHERE (region != 'Asia')" end specify "should take multiple conditions as a hash and express the logic correctly in SQL" do @dataset.exclude(:region => 'Asia', :name => 'Japan').select_sql. should match(Regexp.union(/WHERE \(\(region != 'Asia'\) OR \(name != 'Japan'\)\)/, /WHERE \(\(name != 'Japan'\) OR \(region != 'Asia'\)\)/)) end specify "should parenthesize a single string condition correctly" do @dataset.exclude("region = 'Asia' AND name = 'Japan'").select_sql.should == "SELECT * FROM test WHERE NOT (region = 'Asia' AND name = 'Japan')" end specify "should parenthesize an array condition correctly" do @dataset.exclude('region = ? AND name = ?', 'Asia', 'Japan').select_sql.should == "SELECT * FROM test WHERE NOT (region = 'Asia' AND name = 'Japan')" end specify "should correctly parenthesize when it is used twice" do @dataset.exclude(:region => 'Asia').exclude(:name => 'Japan').select_sql.should == "SELECT * FROM test WHERE ((region != 'Asia') AND (name != 'Japan'))" end specify "should support proc expressions" do @dataset.exclude{id < 6}.sql.should == 'SELECT * FROM test WHERE (id >= 6)' end specify "should allow the use of blocks and arguments simultaneously" do @dataset.exclude(:id => (7..11)){id < 6}.sql.should == 'SELECT * FROM test WHERE ((id < 7) OR (id > 11) OR (id >= 6))' @dataset.exclude([:id, 1], [:x, 3]){id < 6}.sql.should == 'SELECT * FROM test WHERE ((id != 1) OR (x != 3) OR (id >= 6))' end end describe "Dataset#exclude_where" do before do @dataset = Sequel.mock.dataset.from(:test) end specify "should correctly negate the expression and add it to the where clause" do @dataset.exclude_where(:region=>'Asia').sql.should == "SELECT * FROM test WHERE (region != 'Asia')" @dataset.exclude_where(:region=>'Asia').exclude_where(:region=>'NA').sql.should == "SELECT * FROM test WHERE ((region != 'Asia') AND (region != 'NA'))" end specify "should affect the where clause even if having clause is already used" do @dataset.group_and_count(:name).having{count > 2}.exclude_where(:region=>'Asia').sql.should == "SELECT name, count(*) AS count FROM test WHERE (region != 'Asia') GROUP BY name HAVING (count > 2)" end end describe "Dataset#exclude_having" do specify "should correctly negate the expression and add it to the having clause" do Sequel.mock.dataset.from(:test).exclude_having{count > 2}.exclude_having{count < 0}.sql.should == "SELECT * FROM test HAVING ((count <= 2) AND (count >= 0))" end end describe "Dataset#invert" do before do @d = Sequel.mock.dataset.from(:test) end specify "should return a dataset that selects no rows if dataset is not filtered" do @d.invert.sql.should == "SELECT * FROM test WHERE 'f'" end specify "should invert current filter if dataset is filtered" do @d.filter(:x).invert.sql.should == 'SELECT * FROM test WHERE NOT x' end specify "should invert both having and where if both are preset" do @d.filter(:x).group(:x).having(:x).invert.sql.should == 'SELECT * FROM test WHERE NOT x GROUP BY x HAVING NOT x' end end describe "Dataset#having" do before do @dataset = 
describe "Dataset#having" do
  before do
    @dataset = Sequel.mock.dataset.from(:test)
    @grouped = @dataset.group(:region).select(:region, Sequel.function(:sum, :population), Sequel.function(:avg, :gdp))
  end

  specify "should just clone if given an empty argument" do
    @dataset.having({}).sql.should == @dataset.sql
    @dataset.having([]).sql.should == @dataset.sql
    @dataset.having('').sql.should == @dataset.sql
  end

  specify "should affect select statements" do
    @grouped.having('sum(population) > 10').select_sql.should == "SELECT region, sum(population), avg(gdp) FROM test GROUP BY region HAVING (sum(population) > 10)"
  end

  specify "should support proc expressions" do
    @grouped.having{Sequel.function(:sum, :population) > 10}.sql.should == "SELECT region, sum(population), avg(gdp) FROM test GROUP BY region HAVING (sum(population) > 10)"
  end
end

describe "a grouped dataset" do
  before do
    @dataset = Sequel.mock.dataset.from(:test).group(:type_id)
  end

  specify "should raise when trying to generate an update statement" do
    proc {@dataset.update_sql(:id => 0)}.should raise_error
  end

  specify "should raise when trying to generate a delete statement" do
    proc {@dataset.delete_sql}.should raise_error
  end

  specify "should raise when trying to generate a truncate statement" do
    proc {@dataset.truncate_sql}.should raise_error
  end

  specify "should raise when trying to generate an insert statement" do
    proc {@dataset.insert_sql}.should raise_error
  end

  specify "should specify the grouping in generated select statement" do
    @dataset.select_sql.should == "SELECT * FROM test GROUP BY type_id"
  end

  specify "should format the right statement for counting (as a subquery)" do
    db = Sequel.mock
    db[:test].select(:name).group(:name).count
    db.sqls.should == ["SELECT count(*) AS count FROM (SELECT name FROM test GROUP BY name) AS t1 LIMIT 1"]
  end
end

describe "Dataset#group_by" do
  before do
    @dataset = Sequel.mock[:test].group_by(:type_id)
  end

  specify "should raise when trying to generate an update statement" do
    proc {@dataset.update_sql(:id => 0)}.should raise_error
  end

  specify "should raise when trying to generate a delete statement" do
    proc {@dataset.delete_sql}.should raise_error
  end

  specify "should specify the grouping in generated select statement" do
    @dataset.select_sql.should == "SELECT * FROM test GROUP BY type_id"
    @dataset.group_by(:a, :b).select_sql.should == "SELECT * FROM test GROUP BY a, b"
    @dataset.group_by(:type_id=>nil).select_sql.should == "SELECT * FROM test GROUP BY (type_id IS NULL)"
  end

  specify "should ungroup when passed nil or no arguments" do
    @dataset.group_by.select_sql.should == "SELECT * FROM test"
    @dataset.group_by(nil).select_sql.should == "SELECT * FROM test"
  end

  specify "should undo previous grouping" do
    @dataset.group_by(:a).group_by(:b).select_sql.should == "SELECT * FROM test GROUP BY b"
    @dataset.group_by(:a, :b).group_by.select_sql.should == "SELECT * FROM test"
  end

  specify "should be aliased as #group" do
    @dataset.group(:type_id=>nil).select_sql.should == "SELECT * FROM test GROUP BY (type_id IS NULL)"
  end

  specify "should take a virtual row block" do
    @dataset.group{type_id > 1}.sql.should == "SELECT * FROM test GROUP BY (type_id > 1)"
    @dataset.group_by{type_id > 1}.sql.should == "SELECT * FROM test GROUP BY (type_id > 1)"
    @dataset.group{[type_id > 1, type_id < 2]}.sql.should == "SELECT * FROM test GROUP BY (type_id > 1), (type_id < 2)"
    @dataset.group(:foo){type_id > 1}.sql.should == "SELECT * FROM test GROUP BY foo, (type_id > 1)"
  end
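  # Usage sketch (illustrative): group/group_by replaces any previous grouping,
  # calling it with no arguments ungroups, and a grouped dataset refuses to
  # generate row-modification SQL, as specified above.
  #
  #   ds = Sequel.mock[:test]
  #   ds.group(:a).group(:b).sql   # => "SELECT * FROM test GROUP BY b"
  #   ds.group(:a).group.sql       # => "SELECT * FROM test"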
  specify "should support a #group_rollup method if the database supports it" do
    meta_def(@dataset, :supports_group_rollup?){true}
    @dataset.group(:type_id).group_rollup.select_sql.should == "SELECT * FROM test GROUP BY ROLLUP(type_id)"
    @dataset.group(:type_id, :b).group_rollup.select_sql.should == "SELECT * FROM test GROUP BY ROLLUP(type_id, b)"
    meta_def(@dataset, :uses_with_rollup?){true}
    @dataset.group(:type_id).group_rollup.select_sql.should == "SELECT * FROM test GROUP BY type_id WITH ROLLUP"
    @dataset.group(:type_id, :b).group_rollup.select_sql.should == "SELECT * FROM test GROUP BY type_id, b WITH ROLLUP"
  end

  specify "should support a #group_cube method if the database supports it" do
    meta_def(@dataset, :supports_group_cube?){true}
    @dataset.group(:type_id).group_cube.select_sql.should == "SELECT * FROM test GROUP BY CUBE(type_id)"
    @dataset.group(:type_id, :b).group_cube.select_sql.should == "SELECT * FROM test GROUP BY CUBE(type_id, b)"
    meta_def(@dataset, :uses_with_rollup?){true}
    @dataset.group(:type_id).group_cube.select_sql.should == "SELECT * FROM test GROUP BY type_id WITH CUBE"
    @dataset.group(:type_id, :b).group_cube.select_sql.should == "SELECT * FROM test GROUP BY type_id, b WITH CUBE"
  end

  specify "should have #group_cube and #group_rollup methods raise an Error if not supported" do
    proc{@dataset.group(:type_id).group_rollup}.should raise_error(Sequel::Error)
    proc{@dataset.group(:type_id).group_cube}.should raise_error(Sequel::Error)
  end
end

describe "Dataset#as" do
  specify "should set up an alias" do
    dataset = Sequel.mock.dataset.from(:test)
    dataset.select(dataset.limit(1).select(:name).as(:n)).sql.should == 'SELECT (SELECT name FROM test LIMIT 1) AS n FROM test'
  end
end

describe "Dataset#literal" do
  before do
    @ds = Sequel::Database.new.dataset
  end

  specify "should convert qualified symbol notation into dot notation" do
    @ds.literal(:abc__def).should == 'abc.def'
  end

  specify "should convert AS symbol notation into SQL AS notation" do
    @ds.literal(:xyz___x).should == 'xyz AS x'
    @ds.literal(:abc__def___x).should == 'abc.def AS x'
  end

  specify "should support names with digits" do
    @ds.literal(:abc2).should == 'abc2'
    @ds.literal(:xx__yy3).should == 'xx.yy3'
    @ds.literal(:ab34__temp3_4ax).should == 'ab34.temp3_4ax'
    @ds.literal(:x1___y2).should == 'x1 AS y2'
    @ds.literal(:abc2__def3___ggg4).should == 'abc2.def3 AS ggg4'
  end

  specify "should support upper case and lower case" do
    @ds.literal(:ABC).should == 'ABC'
    @ds.literal(:Zvashtoy__aBcD).should == 'Zvashtoy.aBcD'
  end

  specify "should support spaces inside column names" do
    @ds.quote_identifiers = true
    @ds.literal(:"AB C").should == '"AB C"'
    @ds.literal(:"Zvas htoy__aB cD").should == '"Zvas htoy"."aB cD"'
    @ds.literal(:"aB cD___XX XX").should == '"aB cD" AS "XX XX"'
    @ds.literal(:"Zva shtoy__aB cD___XX XX").should == '"Zva shtoy"."aB cD" AS "XX XX"'
  end
end

describe "Dataset#literal" do
  before do
    @dataset = Sequel::Database.new.from(:test)
  end

  specify "should escape strings properly" do
    @dataset.literal('abc').should == "'abc'"
    @dataset.literal('a"x"bc').should == "'a\"x\"bc'"
    @dataset.literal("a'bc").should == "'a''bc'"
    @dataset.literal("a''bc").should == "'a''''bc'"
    @dataset.literal("a\\bc").should == "'a\\bc'"
    @dataset.literal("a\\\\bc").should == "'a\\\\bc'"
    @dataset.literal("a\\'bc").should == "'a\\''bc'"
  end

  specify "should escape blobs as strings by default" do
    @dataset.literal(Sequel.blob('abc')).should == "'abc'"
  end

  specify "should literalize numbers properly" do
    @dataset.literal(1).should == "1"
    @dataset.literal(1.5).should == "1.5"
  end

  specify "should literalize nil as NULL" do
    @dataset.literal(nil).should == "NULL"
  end
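  # Usage sketch (illustrative): #literal is the single entry point for turning
  # ruby objects into SQL fragments, including the double-underscore
  # qualification and triple-underscore aliasing conventions above.
  #
  #   ds = Sequel::Database.new.dataset
  #   ds.literal(:abc__def___x)  # => "abc.def AS x"
  #   ds.literal("a'bc")         # => "'a''bc'"
  #   ds.literal(nil)            # => "NULL"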
specify "should literalize an array properly" do @dataset.literal([]).should == "(NULL)" @dataset.literal([1, 'abc', 3]).should == "(1, 'abc', 3)" @dataset.literal([1, "a'b''c", 3]).should == "(1, 'a''b''''c', 3)" end specify "should literalize symbols as column references" do @dataset.literal(:name).should == "name" @dataset.literal(:items__name).should == "items.name" @dataset.literal(:"items__na#m$e").should == "items.na#m$e" end specify "should call sql_literal_append with dataset and sql on type if not natively supported and the object responds to it" do @a = Class.new do def sql_literal_append(ds, sql) sql << "called #{ds.blah}" end def sql_literal(ds) "not called #{ds.blah}" end end def @dataset.blah "ds" end @dataset.literal(@a.new).should == "called ds" end specify "should call sql_literal with dataset on type if not natively supported and the object responds to it" do @a = Class.new do def sql_literal(ds) "called #{ds.blah}" end end def @dataset.blah "ds" end @dataset.literal(@a.new).should == "called ds" end specify "should literalize datasets as subqueries" do d = @dataset.from(:test) d.literal(d).should == "(#{d.sql})" end specify "should literalize Sequel::SQLTime properly" do t = Sequel::SQLTime.now s = t.strftime("'%H:%M:%S") @dataset.literal(t).should == "#{s}.#{sprintf('%06i', t.usec)}'" end specify "should literalize Time properly" do t = Time.now s = t.strftime("'%Y-%m-%d %H:%M:%S") @dataset.literal(t).should == "#{s}.#{sprintf('%06i', t.usec)}'" end specify "should literalize DateTime properly" do t = DateTime.now s = t.strftime("'%Y-%m-%d %H:%M:%S") @dataset.literal(t).should == "#{s}.#{sprintf('%06i', t.sec_fraction * (RUBY_VERSION < '1.9.0' ? 86400000000 : 1000000))}'" end specify "should literalize Date properly" do d = Date.today s = d.strftime("'%Y-%m-%d'") @dataset.literal(d).should == s end specify "should literalize Date properly, even if to_s is overridden" do d = Date.today def d.to_s; "adsf" end s = d.strftime("'%Y-%m-%d'") @dataset.literal(d).should == s end specify "should literalize Time, DateTime, Date properly if SQL standard format is required" do meta_def(@dataset, :requires_sql_standard_datetimes?){true} t = Time.now s = t.strftime("TIMESTAMP '%Y-%m-%d %H:%M:%S") @dataset.literal(t).should == "#{s}.#{sprintf('%06i', t.usec)}'" t = DateTime.now s = t.strftime("TIMESTAMP '%Y-%m-%d %H:%M:%S") @dataset.literal(t).should == "#{s}.#{sprintf('%06i', t.sec_fraction* (RUBY_VERSION < '1.9.0' ? 86400000000 : 1000000))}'" d = Date.today s = d.strftime("DATE '%Y-%m-%d'") @dataset.literal(d).should == s end specify "should literalize Time and DateTime properly if the database support timezones in timestamps" do meta_def(@dataset, :supports_timestamp_timezones?){true} t = Time.now.utc s = t.strftime("'%Y-%m-%d %H:%M:%S") @dataset.literal(t).should == "#{s}.#{sprintf('%06i', t.usec)}+0000'" t = DateTime.now.new_offset(0) s = t.strftime("'%Y-%m-%d %H:%M:%S") @dataset.literal(t).should == "#{s}.#{sprintf('%06i', t.sec_fraction* (RUBY_VERSION < '1.9.0' ? 
86400000000 : 1000000))}+0000'" end specify "should literalize Time and DateTime properly if the database doesn't support usecs in timestamps" do meta_def(@dataset, :supports_timestamp_usecs?){false} t = Time.now.utc s = t.strftime("'%Y-%m-%d %H:%M:%S") @dataset.literal(t).should == "#{s}'" t = DateTime.now.new_offset(0) s = t.strftime("'%Y-%m-%d %H:%M:%S") @dataset.literal(t).should == "#{s}'" meta_def(@dataset, :supports_timestamp_timezones?){true} t = Time.now.utc s = t.strftime("'%Y-%m-%d %H:%M:%S") @dataset.literal(t).should == "#{s}+0000'" t = DateTime.now.new_offset(0) s = t.strftime("'%Y-%m-%d %H:%M:%S") @dataset.literal(t).should == "#{s}+0000'" end specify "should not modify literal strings" do @dataset.quote_identifiers = true @dataset.literal(Sequel.lit('col1 + 2')).should == 'col1 + 2' @dataset.update_sql(Sequel::SQL::Identifier.new(Sequel.lit('a')) => Sequel.lit('a + 2')).should == 'UPDATE "test" SET a = a + 2' end specify "should literalize BigDecimal instances correctly" do @dataset.literal(BigDecimal.new("80")).should == "80.0" @dataset.literal(BigDecimal.new("NaN")).should == "'NaN'" @dataset.literal(BigDecimal.new("Infinity")).should == "'Infinity'" @dataset.literal(BigDecimal.new("-Infinity")).should == "'-Infinity'" end specify "should literalize PlaceholderLiteralStrings correctly" do @dataset.literal(Sequel::SQL::PlaceholderLiteralString.new('? = ?', [1, 2])).should == '1 = 2' @dataset.literal(Sequel::SQL::PlaceholderLiteralString.new('? = ?', [1, 2], true)).should == '(1 = 2)' @dataset.literal(Sequel::SQL::PlaceholderLiteralString.new(':a = :b', :a=>1, :b=>2)).should == '1 = 2' @dataset.literal(Sequel::SQL::PlaceholderLiteralString.new(':a = :b', {:a=>1, :b=>2}, true)).should == '(1 = 2)' @dataset.literal(Sequel::SQL::PlaceholderLiteralString.new(['', ' = ', ''], [1, 2])).should == '1 = 2' @dataset.literal(Sequel::SQL::PlaceholderLiteralString.new(['', ' = ', ''], [1, 2], true)).should == '(1 = 2)' @dataset.literal(Sequel::SQL::PlaceholderLiteralString.new(['', ' = '], [1, 2])).should == '1 = 2' @dataset.literal(Sequel::SQL::PlaceholderLiteralString.new(['', ' = '], [1, 2], true)).should == '(1 = 2)' end specify "should raise an Error if the object can't be literalized" do proc{@dataset.literal(Object.new)}.should raise_error(Sequel::Error) end end describe "Dataset#from" do before do @dataset = Sequel.mock.dataset end specify "should accept a Dataset" do proc {@dataset.from(@dataset)}.should_not raise_error end specify "should format a Dataset as a subquery if it has had options set" do @dataset.from(@dataset.from(:a).where(:a=>1)).select_sql.should == "SELECT * FROM (SELECT * FROM a WHERE (a = 1)) AS t1" end specify "should automatically alias sub-queries" do @dataset.from(@dataset.from(:a).group(:b)).select_sql.should == "SELECT * FROM (SELECT * FROM a GROUP BY b) AS t1" d1 = @dataset.from(:a).group(:b) d2 = @dataset.from(:c).group(:d) @dataset.from(d1, d2).sql.should == "SELECT * FROM (SELECT * FROM a GROUP BY b) AS t1, (SELECT * FROM c GROUP BY d) AS t2" end specify "should always use a subquery if given a dataset" do @dataset.from(@dataset.from(:a)).select_sql.should == "SELECT * FROM (SELECT * FROM a) AS t1" end specify "should treat string arguments as identifiers" do @dataset.quote_identifiers = true @dataset.from('a').select_sql.should == "SELECT * FROM \"a\"" end specify "should not treat literal strings or blobs as identifiers" do @dataset.quote_identifiers = true @dataset.from(Sequel.lit('a')).select_sql.should == "SELECT * FROM a" 
  specify "should not treat literal strings or blobs as identifiers" do
    @dataset.quote_identifiers = true
    @dataset.from(Sequel.lit('a')).select_sql.should == "SELECT * FROM a"
    @dataset.from(Sequel.blob('a')).select_sql.should == "SELECT * FROM 'a'"
  end

  specify "should remove all FROM tables if called with no arguments" do
    @dataset.from.sql.should == 'SELECT *'
  end

  specify "should accept sql functions" do
    @dataset.from(Sequel.function(:abc, :def)).select_sql.should == "SELECT * FROM abc(def)"
    @dataset.from(Sequel.function(:a, :i)).select_sql.should == "SELECT * FROM a(i)"
  end

  specify "should accept virtual row blocks" do
    @dataset.from{abc(de)}.select_sql.should == "SELECT * FROM abc(de)"
    @dataset.from{[i, abc(de)]}.select_sql.should == "SELECT * FROM i, abc(de)"
    @dataset.from(:a){i}.select_sql.should == "SELECT * FROM a, i"
    @dataset.from(:a, :b){i}.select_sql.should == "SELECT * FROM a, b, i"
    @dataset.from(:a, :b){[i, abc(de)]}.select_sql.should == "SELECT * FROM a, b, i, abc(de)"
  end

  specify "should accept :schema__table___alias symbol format" do
    @dataset.from(:abc__def).select_sql.should == "SELECT * FROM abc.def"
    @dataset.from(:a_b__c).select_sql.should == "SELECT * FROM a_b.c"
    @dataset.from(:'#__#').select_sql.should == 'SELECT * FROM #.#'
    @dataset.from(:abc__def___d).select_sql.should == "SELECT * FROM abc.def AS d"
    @dataset.from(:a_b__d_e___f_g).select_sql.should == "SELECT * FROM a_b.d_e AS f_g"
    @dataset.from(:'#__#___#').select_sql.should == 'SELECT * FROM #.# AS #'
    @dataset.from(:abc___def).select_sql.should == "SELECT * FROM abc AS def"
    @dataset.from(:a_b___c_d).select_sql.should == "SELECT * FROM a_b AS c_d"
    @dataset.from(:'#___#').select_sql.should == 'SELECT * FROM # AS #'
  end

  specify "should not handle :foo__schema__table___alias specially" do
    @dataset.from(:foo__schema__table___alias).select_sql.should == "SELECT * FROM foo.schema__table AS alias"
  end

  specify "should hoist WITH clauses from subqueries if the dataset doesn't support CTEs in subselects" do
    meta_def(@dataset, :supports_cte?){true}
    meta_def(@dataset, :supports_cte_in_subselect?){false}
    @dataset.from(@dataset.from(:a).with(:a, @dataset.from(:b))).sql.should == 'WITH a AS (SELECT * FROM b) SELECT * FROM (SELECT * FROM a) AS t1'
    @dataset.from(@dataset.from(:a).with(:a, @dataset.from(:b)), @dataset.from(:c).with(:c, @dataset.from(:d))).sql.should == 'WITH a AS (SELECT * FROM b), c AS (SELECT * FROM d) SELECT * FROM (SELECT * FROM a) AS t1, (SELECT * FROM c) AS t2'
  end
end
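# Usage sketch (illustrative): #from accepts tables, datasets (wrapped and
# auto-aliased as subqueries), functions, and virtual row blocks, plus the
# :schema__table___alias symbol shorthand.
#
#   ds = Sequel.mock.dataset
#   ds.from(:abc__def___d).sql   # => "SELECT * FROM abc.def AS d"
#   ds.from(ds.from(:a).group(:b)).sql
#   # => "SELECT * FROM (SELECT * FROM a GROUP BY b) AS t1"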
(b = c) AS n FROM test'
  end

  specify "should handle hashes returned from virtual row blocks" do
    @d.select{{:b=>:c}}.sql.should == 'SELECT (b = c) FROM test'
  end

  specify "should override the previous select option" do
    @d.select!(:a, :b, :c).select.sql.should == 'SELECT * FROM test'
    @d.select!(:price).select(:name).sql.should == 'SELECT name FROM test'
  end

  specify "should accept arbitrary objects and literalize them correctly" do
    @d.select(1, :a, 't').sql.should == "SELECT 1, a, 't' FROM test"
    @d.select(nil, Sequel.function(:sum, :t), :x___y).sql.should == "SELECT NULL, sum(t), x AS y FROM test"
    @d.select(nil, 1, Sequel.as(:x, :y)).sql.should == "SELECT NULL, 1, x AS y FROM test"
  end

  specify "should accept a block that yields a virtual row" do
    @d.select{|o| o.a}.sql.should == 'SELECT a FROM test'
    @d.select{a(1)}.sql.should == 'SELECT a(1) FROM test'
    @d.select{|o| o.a(1, 2)}.sql.should == 'SELECT a(1, 2) FROM test'
    @d.select{[a, a(1, 2)]}.sql.should == 'SELECT a, a(1, 2) FROM test'
  end

  specify "should merge regular arguments with argument returned from block" do
    @d.select(:b){a}.sql.should == 'SELECT b, a FROM test'
    @d.select(:b, :c){|o| o.a(1)}.sql.should == 'SELECT b, c, a(1) FROM test'
    @d.select(:b){[a, a(1, 2)]}.sql.should == 'SELECT b, a, a(1, 2) FROM test'
    @d.select(:b, :c){|o| [o.a, o.a(1, 2)]}.sql.should == 'SELECT b, c, a, a(1, 2) FROM test'
  end
end

describe "Dataset#select_group" do
  before do
    @d = Sequel.mock.dataset.from(:test)
  end

  specify "should set both SELECT and GROUP" do
    @d.select_group(:name).sql.should == 'SELECT name FROM test GROUP BY name'
    @d.select_group(:a, :b__c, :d___e).sql.should == 'SELECT a, b.c, d AS e FROM test GROUP BY a, b.c, d'
  end

  specify "should remove from both SELECT and GROUP if no arguments" do
    @d.select_group(:name).select_group.sql.should == 'SELECT * FROM test'
  end

  specify "should accept virtual row blocks" do
    @d.select_group{name}.sql.should == 'SELECT name FROM test GROUP BY name'
    @d.select_group{[name, f(v).as(a)]}.sql.should == 'SELECT name, f(v) AS a FROM test GROUP BY name, f(v)'
    @d.select_group(:name){f(v).as(a)}.sql.should == 'SELECT name, f(v) AS a FROM test GROUP BY name, f(v)'
  end
end

describe "Dataset#select_all" do
  before do
    @d = Sequel.mock.dataset.from(:test)
  end

  specify "should select the wildcard" do
    @d.select_all.sql.should == 'SELECT * FROM test'
  end

  specify "should override the previous select option" do
    @d.select!(:a, :b, :c).select_all.sql.should == 'SELECT * FROM test'
  end

  specify "should select all columns in a table if given an argument" do
    @d.select_all(:test).sql.should == 'SELECT test.* FROM test'
  end

  specify "should select all columns in all tables if given multiple arguments" do
    @d.select_all(:test, :foo).sql.should == 'SELECT test.*, foo.* FROM test'
  end

  specify "should work correctly with qualified symbols" do
    @d.select_all(:sch__test).sql.should == 'SELECT sch.test.* FROM test'
  end

  specify "should work correctly with aliased symbols" do
    @d.select_all(:test___al).sql.should == 'SELECT al.* FROM test'
    @d.select_all(:sch__test___al).sql.should == 'SELECT al.* FROM test'
  end

  specify "should work correctly with SQL::Identifiers" do
    @d.select_all(Sequel.identifier(:test)).sql.should == 'SELECT test.* FROM test'
  end

  specify "should work correctly with SQL::QualifiedIdentifier" do
    @d.select_all(Sequel.qualify(:sch, :test)).sql.should == 'SELECT sch.test.* FROM test'
  end

  specify "should work correctly with SQL::AliasedExpressions" do
    @d.select_all(Sequel.expr(:test).as(:al)).sql.should == 'SELECT al.* FROM test'
  end

  specify
"should work correctly with SQL::JoinClauses" do d = @d.cross_join(:foo).cross_join(:test___al) @d.select_all(*d.opts[:join]).sql.should == 'SELECT foo.*, al.* FROM test' end end describe "Dataset#select_more" do before do @d = Sequel.mock.dataset.from(:test) end specify "should act like #select_append for datasets with no selection" do @d.select_more(:a, :b).sql.should == 'SELECT *, a, b FROM test' @d.select_all.select_more(:a, :b).sql.should == 'SELECT *, a, b FROM test' @d.select(:blah).select_all.select_more(:a, :b).sql.should == 'SELECT *, a, b FROM test' end specify "should add to the currently selected columns" do @d.select(:a).select_more(:b).sql.should == 'SELECT a, b FROM test' @d.select(Sequel::SQL::ColumnAll.new(:a)).select_more(Sequel::SQL::ColumnAll.new(:b)).sql.should == 'SELECT a.*, b.* FROM test' end specify "should accept a block that yields a virtual row" do @d.select(:a).select_more{|o| o.b}.sql.should == 'SELECT a, b FROM test' @d.select(Sequel::SQL::ColumnAll.new(:a)).select_more(Sequel::SQL::ColumnAll.new(:b)){b(1)}.sql.should == 'SELECT a.*, b.*, b(1) FROM test' end end describe "Dataset#select_append" do before do @d = Sequel.mock.dataset.from(:test) end specify "should select * in addition to columns if no columns selected" do @d.select_append(:a, :b).sql.should == 'SELECT *, a, b FROM test' @d.select_all.select_append(:a, :b).sql.should == 'SELECT *, a, b FROM test' @d.select(:blah).select_all.select_append(:a, :b).sql.should == 'SELECT *, a, b FROM test' end specify "should add to the currently selected columns" do @d.select(:a).select_append(:b).sql.should == 'SELECT a, b FROM test' @d.select(Sequel::SQL::ColumnAll.new(:a)).select_append(Sequel::SQL::ColumnAll.new(:b)).sql.should == 'SELECT a.*, b.* FROM test' end specify "should accept a block that yields a virtual row" do @d.select(:a).select_append{|o| o.b}.sql.should == 'SELECT a, b FROM test' @d.select(Sequel::SQL::ColumnAll.new(:a)).select_append(Sequel::SQL::ColumnAll.new(:b)){b(1)}.sql.should == 'SELECT a.*, b.*, b(1) FROM test' end specify "should select from all from and join tables if SELECT *, column not supported" do meta_def(@d, :supports_select_all_and_column?){false} @d.select_append(:b).sql.should == 'SELECT test.*, b FROM test' @d.from(:test, :c).select_append(:b).sql.should == 'SELECT test.*, c.*, b FROM test, c' @d.cross_join(:c).select_append(:b).sql.should == 'SELECT test.*, c.*, b FROM test CROSS JOIN c' @d.cross_join(:c).cross_join(:d).select_append(:b).sql.should == 'SELECT test.*, c.*, d.*, b FROM test CROSS JOIN c CROSS JOIN d' end end describe "Dataset#order" do before do @dataset = Sequel.mock.dataset.from(:test) end specify "should include an ORDER BY clause in the select statement" do @dataset.order(:name).sql.should == 'SELECT * FROM test ORDER BY name' end specify "should accept multiple arguments" do @dataset.order(:name, Sequel.desc(:price)).sql.should == 'SELECT * FROM test ORDER BY name, price DESC' end specify "should accept :nulls options for asc and desc" do @dataset.order(Sequel.asc(:name, :nulls=>:last), Sequel.desc(:price, :nulls=>:first)).sql.should == 'SELECT * FROM test ORDER BY name ASC NULLS LAST, price DESC NULLS FIRST' end specify "should override a previous ordering" do @dataset.order(:name).order(:stamp).sql.should == 'SELECT * FROM test ORDER BY stamp' end specify "should accept a literal string" do @dataset.order(Sequel.lit('dada ASC')).sql.should == 'SELECT * FROM test ORDER BY dada ASC' end specify "should accept a hash as an expression" do 
@dataset.order(:name=>nil).sql.should == 'SELECT * FROM test ORDER BY (name IS NULL)' end specify "should accept a nil to remove ordering" do @dataset.order(:bah).order(nil).sql.should == 'SELECT * FROM test' end specify "should accept a block that yields a virtual row" do @dataset.order{|o| o.a}.sql.should == 'SELECT * FROM test ORDER BY a' @dataset.order{a(1)}.sql.should == 'SELECT * FROM test ORDER BY a(1)' @dataset.order{|o| o.a(1, 2)}.sql.should == 'SELECT * FROM test ORDER BY a(1, 2)' @dataset.order{[a, a(1, 2)]}.sql.should == 'SELECT * FROM test ORDER BY a, a(1, 2)' end specify "should merge regular arguments with argument returned from block" do @dataset.order(:b){a}.sql.should == 'SELECT * FROM test ORDER BY b, a' @dataset.order(:b, :c){|o| o.a(1)}.sql.should == 'SELECT * FROM test ORDER BY b, c, a(1)' @dataset.order(:b){[a, a(1, 2)]}.sql.should == 'SELECT * FROM test ORDER BY b, a, a(1, 2)' @dataset.order(:b, :c){|o| [o.a, o.a(1, 2)]}.sql.should == 'SELECT * FROM test ORDER BY b, c, a, a(1, 2)' end end describe "Dataset#unfiltered" do specify "should remove filtering from the dataset" do Sequel.mock.dataset.from(:test).filter(:score=>1).unfiltered.sql.should == 'SELECT * FROM test' end end describe "Dataset#unlimited" do specify "should remove limit and offset from the dataset" do Sequel.mock.dataset.from(:test).limit(1, 2).unlimited.sql.should == 'SELECT * FROM test' end end describe "Dataset#ungrouped" do specify "should remove group and having clauses from the dataset" do Sequel.mock.dataset.from(:test).group(:a).having(:b).ungrouped.sql.should == 'SELECT * FROM test' end end describe "Dataset#unordered" do specify "should remove ordering from the dataset" do Sequel.mock.dataset.from(:test).order(:name).unordered.sql.should == 'SELECT * FROM test' end end describe "Dataset#with_sql" do before do @dataset = Sequel.mock.dataset.from(:test) end specify "should use static sql" do @dataset.with_sql('SELECT 1 FROM test').sql.should == 'SELECT 1 FROM test' end specify "should work with placeholders" do @dataset.with_sql('SELECT ? 
FROM test', 1).sql.should == 'SELECT 1 FROM test' end specify "should work with named placeholders" do @dataset.with_sql('SELECT :x FROM test', :x=>1).sql.should == 'SELECT 1 FROM test' end specify "should keep row_proc" do @dataset.with_sql('SELECT 1 FROM test').row_proc.should == @dataset.row_proc end specify "should work with method symbols and arguments" do @dataset.with_sql(:delete_sql).sql.should == 'DELETE FROM test' @dataset.with_sql(:insert_sql, :b=>1).sql.should == 'INSERT INTO test (b) VALUES (1)' @dataset.with_sql(:update_sql, :b=>1).sql.should == 'UPDATE test SET b = 1' end end describe "Dataset#order_by" do before do @dataset = Sequel.mock.dataset.from(:test) end specify "should include an ORDER BY clause in the select statement" do @dataset.order_by(:name).sql.should == 'SELECT * FROM test ORDER BY name' end specify "should accept multiple arguments" do @dataset.order_by(:name, Sequel.desc(:price)).sql.should == 'SELECT * FROM test ORDER BY name, price DESC' end specify "should override a previous ordering" do @dataset.order_by(:name).order(:stamp).sql.should == 'SELECT * FROM test ORDER BY stamp' end specify "should accept a string" do @dataset.order_by(Sequel.lit('dada ASC')).sql.should == 'SELECT * FROM test ORDER BY dada ASC' end specify "should accept a nil to remove ordering" do @dataset.order_by(:bah).order_by(nil).sql.should == 'SELECT * FROM test' end end describe "Dataset#order_more and order_append" do before do @dataset = Sequel.mock.dataset.from(:test) end specify "should include an ORDER BY clause in the select statement" do @dataset.order_more(:name).sql.should == 'SELECT * FROM test ORDER BY name' @dataset.order_append(:name).sql.should == 'SELECT * FROM test ORDER BY name' end specify "should add to the end of a previous ordering" do @dataset.order(:name).order_more(Sequel.desc(:stamp)).sql.should == 'SELECT * FROM test ORDER BY name, stamp DESC' @dataset.order(:name).order_append(Sequel.desc(:stamp)).sql.should == 'SELECT * FROM test ORDER BY name, stamp DESC' end specify "should accept a block that yields a virtual row" do @dataset.order(:a).order_more{|o| o.b}.sql.should == 'SELECT * FROM test ORDER BY a, b' @dataset.order(:a, :b).order_more(:c, :d){[e, f(1, 2)]}.sql.should == 'SELECT * FROM test ORDER BY a, b, c, d, e, f(1, 2)' @dataset.order(:a).order_append{|o| o.b}.sql.should == 'SELECT * FROM test ORDER BY a, b' @dataset.order(:a, :b).order_append(:c, :d){[e, f(1, 2)]}.sql.should == 'SELECT * FROM test ORDER BY a, b, c, d, e, f(1, 2)' end end describe "Dataset#order_prepend" do before do @dataset = Sequel.mock.dataset.from(:test) end specify "should include an ORDER BY clause in the select statement" do @dataset.order_prepend(:name).sql.should == 'SELECT * FROM test ORDER BY name' end specify "should add to the beginning of a previous ordering" do @dataset.order(:name).order_prepend(Sequel.desc(:stamp)).sql.should == 'SELECT * FROM test ORDER BY stamp DESC, name' end specify "should accept a block that yields a virtual row" do @dataset.order(:a).order_prepend{|o| o.b}.sql.should == 'SELECT * FROM test ORDER BY b, a' @dataset.order(:a, :b).order_prepend(:c, :d){[e, f(1, 2)]}.sql.should == 'SELECT * FROM test ORDER BY c, d, e, f(1, 2), a, b' end end describe "Dataset#reverse_order" do before do @dataset = Sequel.mock.dataset.from(:test) end specify "should use DESC as default order" do @dataset.reverse_order(:name).sql.should == 'SELECT * FROM test ORDER BY name DESC' end specify "should invert the order given" do 
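    # reverse_order flips the direction of each ordering term (ASC becomes
    # DESC and vice versa), e.g. this hypothetical usage:
    #
    #   DB[:test].order(Sequel.desc(:name)).reverse_order
    #   # SELECT * FROM test ORDER BY name ASC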
    @dataset.reverse_order(Sequel.desc(:name)).sql.should == 'SELECT * FROM test ORDER BY name ASC'
  end

  specify "should invert the order for ASC expressions" do
    @dataset.reverse_order(Sequel.asc(:name)).sql.should == 'SELECT * FROM test ORDER BY name DESC'
  end

  specify "should accept multiple arguments" do
    @dataset.reverse_order(:name, Sequel.desc(:price)).sql.should == 'SELECT * FROM test ORDER BY name DESC, price ASC'
  end

  specify "should handle NULLS ordering correctly when reversing" do
    @dataset.reverse_order(Sequel.asc(:name, :nulls=>:first), Sequel.desc(:price, :nulls=>:last)).sql.should == 'SELECT * FROM test ORDER BY name DESC NULLS LAST, price ASC NULLS FIRST'
  end

  specify "should reverse a previous ordering if no arguments are given" do
    @dataset.order(:name).reverse_order.sql.should == 'SELECT * FROM test ORDER BY name DESC'
    @dataset.order(Sequel.desc(:clumsy), :fool).reverse_order.sql.should == 'SELECT * FROM test ORDER BY clumsy ASC, fool DESC'
  end

  specify "should return an unordered dataset for a dataset with no order" do
    @dataset.unordered.reverse_order.sql.should == 'SELECT * FROM test'
  end

  specify "should have #reverse alias" do
    @dataset.order(:name).reverse.sql.should == 'SELECT * FROM test ORDER BY name DESC'
  end

  specify "should accept a block" do
    @dataset.reverse{name}.sql.should == 'SELECT * FROM test ORDER BY name DESC'
    @dataset.reverse_order{name}.sql.should == 'SELECT * FROM test ORDER BY name DESC'
    @dataset.reverse(:foo){name}.sql.should == 'SELECT * FROM test ORDER BY foo DESC, name DESC'
    @dataset.reverse_order(:foo){name}.sql.should == 'SELECT * FROM test ORDER BY foo DESC, name DESC'
    @dataset.reverse(Sequel.desc(:foo)){name}.sql.should == 'SELECT * FROM test ORDER BY foo ASC, name DESC'
    @dataset.reverse_order(Sequel.desc(:foo)){name}.sql.should == 'SELECT * FROM test ORDER BY foo ASC, name DESC'
  end
end

describe "Dataset#limit" do
  before do
    @dataset = Sequel.mock.dataset.from(:test)
  end

  specify "should include a LIMIT clause in the select statement" do
    @dataset.limit(10).sql.should == 'SELECT * FROM test LIMIT 10'
  end

  specify "should accept ranges" do
    @dataset.limit(3..7).sql.should == 'SELECT * FROM test LIMIT 5 OFFSET 3'
    @dataset.limit(3...7).sql.should == 'SELECT * FROM test LIMIT 4 OFFSET 3'
  end

  specify "should include an offset if a second argument is given" do
    @dataset.limit(6, 10).sql.should == 'SELECT * FROM test LIMIT 6 OFFSET 10'
  end

  specify "should convert regular strings to integers" do
    @dataset.limit('6', 'a() - 1').sql.should == 'SELECT * FROM test LIMIT 6 OFFSET 0'
  end

  specify "should not convert literal strings to integers" do
    @dataset.limit(Sequel.lit('6'), Sequel.lit('a() - 1')).sql.should == 'SELECT * FROM test LIMIT 6 OFFSET a() - 1'
  end

  specify "should not convert other objects" do
    @dataset.limit(6, Sequel.function(:a) - 1).sql.should == 'SELECT * FROM test LIMIT 6 OFFSET (a() - 1)'
  end

  specify "should be able to reset limit and offset with nil values" do
    @dataset.limit(6).limit(nil).sql.should == 'SELECT * FROM test'
    @dataset.limit(6, 1).limit(nil).sql.should == 'SELECT * FROM test OFFSET 1'
    @dataset.limit(6, 1).limit(nil, nil).sql.should == 'SELECT * FROM test'
  end

  specify "should work with fixed sql datasets" do
    @dataset.opts[:sql] = 'select * from cccc'
    @dataset.limit(6, 10).sql.should == 'SELECT * FROM (select * from cccc) AS t1 LIMIT 6 OFFSET 10'
  end

  specify "should raise an error if an invalid limit or offset is used" do
    proc{@dataset.limit(-1)}.should raise_error(Sequel::Error)
    proc{@dataset.limit(0)}.should raise_error(Sequel::Error)
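    # Limits must be at least 1 and offsets at least 0; as the specs above
    # show, ranges are also accepted and converted to LIMIT/OFFSET. A rough
    # sketch (hypothetical usage):
    #
    #   DB[:test].limit(3..7)   # LIMIT 5 OFFSET 3 (inclusive range)
    #   DB[:test].limit(3...7)  # LIMIT 4 OFFSET 3 (exclusive range)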
    proc{@dataset.limit(1)}.should_not raise_error
    proc{@dataset.limit(1, -1)}.should raise_error(Sequel::Error)
    proc{@dataset.limit(1, 0)}.should_not raise_error
    proc{@dataset.limit(1, 1)}.should_not raise_error
  end
end

describe "Dataset#naked" do
  specify "should return a cloned dataset without a row_proc" do
    d = Sequel.mock.dataset
    d.row_proc = Proc.new{|r| r}
    d.naked.row_proc.should be_nil
    d.row_proc.should_not be_nil
  end
end

describe "Dataset#naked!" do
  specify "should remove any existing row_proc" do
    d = Sequel.mock.dataset
    d.row_proc = Proc.new{|r| r}
    d.naked!.row_proc.should be_nil
    d.row_proc.should be_nil
  end
end

describe "Dataset#qualified_column_name" do
  before do
    @dataset = Sequel.mock.dataset.from(:test)
  end

  specify "should return the literal value if not given a symbol" do
    @dataset.literal(@dataset.send(:qualified_column_name, 'ccc__b', :items)).should == "'ccc__b'"
    @dataset.literal(@dataset.send(:qualified_column_name, 3, :items)).should == '3'
    @dataset.literal(@dataset.send(:qualified_column_name, Sequel.lit('a'), :items)).should == 'a'
  end

  specify "should qualify the column with the supplied table name if given an unqualified symbol" do
    @dataset.literal(@dataset.send(:qualified_column_name, :b1, :items)).should == 'items.b1'
  end

  specify "should not change the qualified column's table if given a qualified symbol" do
    @dataset.literal(@dataset.send(:qualified_column_name, :ccc__b, :items)).should == 'ccc.b'
  end

  specify "should handle an aliased identifier" do
    @dataset.literal(@dataset.send(:qualified_column_name, :ccc, Sequel.expr(:items).as(:i))).should == 'i.ccc'
  end
end

describe "Dataset#map" do
  before do
    @d = Sequel.mock(:fetch=>[{:a => 1, :b => 2}, {:a => 3, :b => 4}, {:a => 5, :b => 6}])[:items]
  end

  specify "should provide the usual functionality if no argument is given" do
    @d.map{|n| n[:a] + n[:b]}.should == [3, 7, 11]
  end

  specify "should map using #[column name] if column name is given" do
    @d.map(:a).should == [1, 3, 5]
  end

  specify "should support multiple column names if an array of column names is given" do
    @d.map([:a, :b]).should == [[1, 2], [3, 4], [5, 6]]
  end

  specify "should not call the row_proc if an argument is given" do
    @d.row_proc = proc{|r| h = {}; r.keys.each{|k| h[k] = r[k] * 2}; h}
    @d.map(:a).should == [1, 3, 5]
    @d.map([:a, :b]).should == [[1, 2], [3, 4], [5, 6]]
  end

  specify "should call the row_proc if no argument is given" do
    @d.row_proc = proc{|r| h = {}; r.keys.each{|k| h[k] = r[k] * 2}; h}
    @d.map{|n| n[:a] + n[:b]}.should == [6, 14, 22]
  end

  specify "should return the complete dataset values if nothing is given" do
    @d.map.to_a.should == [{:a => 1, :b => 2}, {:a => 3, :b => 4}, {:a => 5, :b => 6}]
  end
end

describe "Dataset#to_hash" do
  before do
    @d = Sequel.mock(:fetch=>[{:a => 1, :b => 2}, {:a => 3, :b => 4}, {:a => 5, :b => 6}])[:items]
  end

  specify "should provide a hash with the first column as key and the second as value" do
    @d.to_hash(:a, :b).should == {1 => 2, 3 => 4, 5 => 6}
    @d.to_hash(:b, :a).should == {2 => 1, 4 => 3, 6 => 5}
  end

  specify "should provide a hash with the first column as key and the entire hash as value if the value column is blank or nil" do
    @d.to_hash(:a).should == {1 => {:a => 1, :b => 2}, 3 => {:a => 3, :b => 4}, 5 => {:a => 5, :b => 6}}
    @d.to_hash(:b).should == {2 => {:a => 1, :b => 2}, 4 => {:a => 3, :b => 4}, 6 => {:a => 5, :b => 6}}
  end

  specify "should support using an array of columns as either the key or the value" do
    @d.to_hash([:a, :b], :b).should == {[1, 2] => 2, [3, 4] => 4, [5, 6] => 6}
    @d.to_hash(:b, [:a, :b]).should ==
{2 => [1, 2], 4 => [3, 4], 6 => [5, 6]} @d.to_hash([:b, :a], [:a, :b]).should == {[2, 1] => [1, 2], [4, 3] => [3, 4], [6, 5] => [5, 6]} @d.to_hash([:a, :b]).should == {[1, 2] => {:a => 1, :b => 2}, [3, 4] => {:a => 3, :b => 4}, [5, 6] => {:a => 5, :b => 6}} end specify "should not call the row_proc if two arguments are given" do @d.row_proc = proc{|r| h = {}; r.keys.each{|k| h[k] = r[k] * 2}; h} @d.to_hash(:a, :b).should == {1 => 2, 3 => 4, 5 => 6} @d.to_hash(:b, :a).should == {2 => 1, 4 => 3, 6 => 5} @d.to_hash([:a, :b], :b).should == {[1, 2] => 2, [3, 4] => 4, [5, 6] => 6} @d.to_hash(:b, [:a, :b]).should == {2 => [1, 2], 4 => [3, 4], 6 => [5, 6]} @d.to_hash([:b, :a], [:a, :b]).should == {[2, 1] => [1, 2], [4, 3] => [3, 4], [6, 5] => [5, 6]} end specify "should call the row_proc if only a single argument is given" do @d.row_proc = proc{|r| h = {}; r.keys.each{|k| h[k] = r[k] * 2}; h} @d.to_hash(:a).should == {2 => {:a => 2, :b => 4}, 6 => {:a => 6, :b => 8}, 10 => {:a => 10, :b => 12}} @d.to_hash(:b).should == {4 => {:a => 2, :b => 4}, 8 => {:a => 6, :b => 8}, 12 => {:a => 10, :b => 12}} @d.to_hash([:a, :b]).should == {[2, 4] => {:a => 2, :b => 4}, [6, 8] => {:a => 6, :b => 8}, [10, 12] => {:a => 10, :b => 12}} end end describe "Dataset#to_hash_groups" do before do @d = Sequel.mock(:fetch=>[{:a => 1, :b => 2}, {:a => 3, :b => 4}, {:a => 1, :b => 6}, {:a => 7, :b => 4}])[:items] end specify "should provide a hash with the first column as key and the second as arrays of matching values" do @d.to_hash_groups(:a, :b).should == {1 => [2, 6], 3 => [4], 7 => [4]} @d.to_hash_groups(:b, :a).should == {2 => [1], 4=>[3, 7], 6=>[1]} end specify "should provide a hash with the first column as key and the entire hash as value if the value column is blank or nil" do @d.to_hash_groups(:a).should == {1 => [{:a => 1, :b => 2}, {:a => 1, :b => 6}], 3 => [{:a => 3, :b => 4}], 7 => [{:a => 7, :b => 4}]} @d.to_hash_groups(:b).should == {2 => [{:a => 1, :b => 2}], 4 => [{:a => 3, :b => 4}, {:a => 7, :b => 4}], 6 => [{:a => 1, :b => 6}]} end specify "should support using an array of columns as either the key or the value" do @d.to_hash_groups([:a, :b], :b).should == {[1, 2] => [2], [3, 4] => [4], [1, 6] => [6], [7, 4]=>[4]} @d.to_hash_groups(:b, [:a, :b]).should == {2 => [[1, 2]], 4 => [[3, 4], [7, 4]], 6 => [[1, 6]]} @d.to_hash_groups([:b, :a], [:a, :b]).should == {[2, 1] => [[1, 2]], [4, 3] => [[3, 4]], [6, 1] => [[1, 6]], [4, 7]=>[[7, 4]]} @d.to_hash_groups([:a, :b]).should == {[1, 2] => [{:a => 1, :b => 2}], [3, 4] => [{:a => 3, :b => 4}], [1, 6] => [{:a => 1, :b => 6}], [7, 4] => [{:a => 7, :b => 4}]} end specify "should not call the row_proc if two arguments are given" do @d.row_proc = proc{|r| h = {}; r.keys.each{|k| h[k] = r[k] * 2}; h} @d.to_hash_groups(:a, :b).should == {1 => [2, 6], 3 => [4], 7 => [4]} @d.to_hash_groups(:b, :a).should == {2 => [1], 4=>[3, 7], 6=>[1]} @d.to_hash_groups([:a, :b], :b).should == {[1, 2] => [2], [3, 4] => [4], [1, 6] => [6], [7, 4]=>[4]} @d.to_hash_groups(:b, [:a, :b]).should == {2 => [[1, 2]], 4 => [[3, 4], [7, 4]], 6 => [[1, 6]]} @d.to_hash_groups([:b, :a], [:a, :b]).should == {[2, 1] => [[1, 2]], [4, 3] => [[3, 4]], [6, 1] => [[1, 6]], [4, 7]=>[[7, 4]]} end specify "should call the row_proc if only a single argument is given" do @d.row_proc = proc{|r| h = {}; r.keys.each{|k| h[k] = r[k] * 2}; h} @d.to_hash_groups(:a).should == {2 => [{:a => 2, :b => 4}, {:a => 2, :b => 12}], 6 => [{:a => 6, :b => 8}], 14 => [{:a => 14, :b => 8}]} @d.to_hash_groups(:b).should == {4 => 
[{:a => 2, :b => 4}], 8 => [{:a => 6, :b => 8}, {:a => 14, :b => 8}], 12 => [{:a => 2, :b => 12}]} @d.to_hash_groups([:a, :b]).should == {[2, 4] => [{:a => 2, :b => 4}], [6, 8] => [{:a => 6, :b => 8}], [2, 12] => [{:a => 2, :b => 12}], [14, 8] => [{:a => 14, :b => 8}]} end end describe "Dataset#distinct" do before do @db = Sequel.mock @dataset = @db[:test].select(:name) end specify "should include DISTINCT clause in statement" do @dataset.distinct.sql.should == 'SELECT DISTINCT name FROM test' end specify "should raise an error if columns given and DISTINCT ON is not supported" do proc{@dataset.distinct}.should_not raise_error proc{@dataset.distinct(:a)}.should raise_error(Sequel::InvalidOperation) end specify "should use DISTINCT ON if columns are given and DISTINCT ON is supported" do meta_def(@dataset, :supports_distinct_on?){true} @dataset.distinct(:a, :b).sql.should == 'SELECT DISTINCT ON (a, b) name FROM test' @dataset.distinct(Sequel.cast(:stamp, :integer), :node_id=>nil).sql.should == 'SELECT DISTINCT ON (CAST(stamp AS integer), (node_id IS NULL)) name FROM test' end specify "should do a subselect for count" do @dataset.distinct.count @db.sqls.should == ['SELECT count(*) AS count FROM (SELECT DISTINCT name FROM test) AS t1 LIMIT 1'] end end describe "Dataset#count" do before do @db = Sequel.mock(:fetch=>{:count=>1}) @dataset = @db.from(:test).columns(:count) end specify "should format SQL properly" do @dataset.count.should == 1 @db.sqls.should == ['SELECT count(*) AS count FROM test LIMIT 1'] end specify "should accept an argument" do @dataset.count(:foo).should == 1 @db.sqls.should == ['SELECT count(foo) AS count FROM test LIMIT 1'] end specify "should work with a nil argument" do @dataset.count(nil).should == 1 @db.sqls.should == ['SELECT count(NULL) AS count FROM test LIMIT 1'] end specify "should accept a virtual row block" do @dataset.count{foo(bar)}.should == 1 @db.sqls.should == ['SELECT count(foo(bar)) AS count FROM test LIMIT 1'] end specify "should raise an Error if given an argument and a block" do proc{@dataset.count(:foo){foo(bar)}}.should raise_error(Sequel::Error) end specify "should include the where clause if it's there" do @dataset.filter(Sequel.expr(:abc) < 30).count.should == 1 @db.sqls.should == ['SELECT count(*) AS count FROM test WHERE (abc < 30) LIMIT 1'] end specify "should count properly for datasets with fixed sql" do @dataset.opts[:sql] = "select abc from xyz" @dataset.count.should == 1 @db.sqls.should == ["SELECT count(*) AS count FROM (select abc from xyz) AS t1 LIMIT 1"] end specify "should count properly when using UNION, INTERSECT, or EXCEPT" do @dataset.union(@dataset).count.should == 1 @db.sqls.should == ["SELECT count(*) AS count FROM (SELECT * FROM test UNION SELECT * FROM test) AS t1 LIMIT 1"] @dataset.intersect(@dataset).count.should == 1 @db.sqls.should == ["SELECT count(*) AS count FROM (SELECT * FROM test INTERSECT SELECT * FROM test) AS t1 LIMIT 1"] @dataset.except(@dataset).count.should == 1 @db.sqls.should == ["SELECT count(*) AS count FROM (SELECT * FROM test EXCEPT SELECT * FROM test) AS t1 LIMIT 1"] end specify "should return limit if count is greater than it" do @dataset.limit(5).count.should == 1 @db.sqls.should == ["SELECT count(*) AS count FROM (SELECT * FROM test LIMIT 5) AS t1 LIMIT 1"] end specify "should work correctly with offsets" do @dataset.limit(nil, 5).count.should == 1 @db.sqls.should == ["SELECT count(*) AS count FROM (SELECT * FROM test OFFSET 5) AS t1 LIMIT 1"] end it "should work on a graphed_dataset" do 
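    # count issues a single SELECT count(*) query, wrapping the dataset in a
    # subselect when limits, offsets, compounds, or fixed SQL are involved so
    # the row count stays correct; sketched (hypothetical usage):
    #
    #   DB[:test].limit(5).count
    #   # SELECT count(*) AS count FROM (SELECT * FROM test LIMIT 5) AS t1 LIMIT 1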
@dataset.should_receive(:columns).twice.and_return([:a]) @dataset.graph(@dataset, [:a], :table_alias=>:test2).count.should == 1 @db.sqls.should == ['SELECT count(*) AS count FROM test LEFT OUTER JOIN test AS test2 USING (a) LIMIT 1'] end specify "should not cache the columns value" do ds = @dataset.from(:blah).columns(:a) ds.columns.should == [:a] ds.count.should == 1 @db.sqls.should == ['SELECT count(*) AS count FROM blah LIMIT 1'] ds.columns.should == [:a] end end describe "Dataset#group_and_count" do before do @ds = Sequel.mock.dataset.from(:test) end specify "should format SQL properly" do @ds.group_and_count(:name).sql.should == "SELECT name, count(*) AS count FROM test GROUP BY name" end specify "should accept multiple columns for grouping" do @ds.group_and_count(:a, :b).sql.should == "SELECT a, b, count(*) AS count FROM test GROUP BY a, b" end specify "should format column aliases in the select clause but not in the group clause" do @ds.group_and_count(:name___n).sql.should == "SELECT name AS n, count(*) AS count FROM test GROUP BY name" @ds.group_and_count(:name__n).sql.should == "SELECT name.n, count(*) AS count FROM test GROUP BY name.n" end specify "should handle identifiers" do @ds.group_and_count(Sequel.identifier(:name___n)).sql.should == "SELECT name___n, count(*) AS count FROM test GROUP BY name___n" end specify "should handle literal strings" do @ds.group_and_count(Sequel.lit("name")).sql.should == "SELECT name, count(*) AS count FROM test GROUP BY name" end specify "should handle aliased expressions" do @ds.group_and_count(Sequel.expr(:name).as(:n)).sql.should == "SELECT name AS n, count(*) AS count FROM test GROUP BY name" @ds.group_and_count(Sequel.identifier(:name).as(:n)).sql.should == "SELECT name AS n, count(*) AS count FROM test GROUP BY name" end specify "should take a virtual row block" do @ds.group_and_count{(type_id > 1).as(t)}.sql.should == "SELECT (type_id > 1) AS t, count(*) AS count FROM test GROUP BY (type_id > 1)" @ds.group_and_count{[(type_id > 1).as(t), type_id < 2]}.sql.should == "SELECT (type_id > 1) AS t, (type_id < 2), count(*) AS count FROM test GROUP BY (type_id > 1), (type_id < 2)" @ds.group_and_count(:foo){type_id > 1}.sql.should == "SELECT foo, (type_id > 1), count(*) AS count FROM test GROUP BY foo, (type_id > 1)" end end describe "Dataset#empty?" 
do specify "should return true if no records exist in the dataset" do db = Sequel.mock(:fetch=>proc{|sql| {1=>1} unless sql =~ /WHERE 'f'/}) db.from(:test).should_not be_empty db.sqls.should == ['SELECT 1 AS one FROM test LIMIT 1'] db.from(:test).filter(false).should be_empty db.sqls.should == ["SELECT 1 AS one FROM test WHERE 'f' LIMIT 1"] end end describe "Dataset#first_source_alias" do before do @ds = Sequel.mock.dataset end specify "should be the entire first source if not aliased" do @ds.from(:t).first_source_alias.should == :t @ds.from(Sequel.identifier(:t__a)).first_source_alias.should == Sequel.identifier(:t__a) @ds.from(:s__t).first_source_alias.should == :s__t @ds.from(Sequel.qualify(:s, :t)).first_source_alias.should == Sequel.qualify(:s, :t) end specify "should be the alias if aliased" do @ds.from(:t___a).first_source_alias.should == :a @ds.from(:s__t___a).first_source_alias.should == :a @ds.from(Sequel.expr(:t).as(:a)).first_source_alias.should == :a end specify "should be aliased as first_source" do @ds.from(:t).first_source.should == :t @ds.from(Sequel.identifier(:t__a)).first_source.should == Sequel.identifier(:t__a) @ds.from(:s__t___a).first_source.should == :a @ds.from(Sequel.expr(:t).as(:a)).first_source.should == :a end specify "should raise exception if table doesn't have a source" do proc{@ds.first_source_alias}.should raise_error(Sequel::Error) end end describe "Dataset#first_source_table" do before do @ds = Sequel.mock.dataset end specify "should be the entire first source if not aliased" do @ds.from(:t).first_source_table.should == :t @ds.from(Sequel.identifier(:t__a)).first_source_table.should == Sequel.identifier(:t__a) @ds.from(:s__t).first_source_table.should == :s__t @ds.from(Sequel.qualify(:s, :t)).first_source_table.should == Sequel.qualify(:s, :t) end specify "should be the unaliased part if aliased" do @ds.literal(@ds.from(:t___a).first_source_table).should == "t" @ds.literal(@ds.from(:s__t___a).first_source_table).should == "s.t" @ds.literal(@ds.from(Sequel.expr(:t).as(:a)).first_source_table).should == "t" end specify "should raise exception if table doesn't have a source" do proc{@ds.first_source_table}.should raise_error(Sequel::Error) end end describe "Dataset#from_self" do before do @ds = Sequel.mock.dataset.from(:test).select(:name).limit(1) end specify "should set up a default alias" do @ds.from_self.sql.should == 'SELECT * FROM (SELECT name FROM test LIMIT 1) AS t1' end specify "should modify only the new dataset" do @ds.from_self.select(:bogus).sql.should == 'SELECT bogus FROM (SELECT name FROM test LIMIT 1) AS t1' end specify "should use the user-specified alias" do @ds.from_self(:alias=>:some_name).sql.should == 'SELECT * FROM (SELECT name FROM test LIMIT 1) AS some_name' end specify "should use the user-specified alias for joins" do @ds.from_self(:alias=>:some_name).inner_join(:posts, :alias=>:name).sql.should == \ 'SELECT * FROM (SELECT name FROM test LIMIT 1) AS some_name INNER JOIN posts ON (posts.alias = some_name.name)' end specify "should not remove non-SQL options such as :server" do @ds.server(:blah).from_self(:alias=>:some_name).opts[:server].should == :blah end specify "should hoist WITH clauses in current dataset if dataset doesn't support WITH in subselect" do ds = Sequel.mock.dataset meta_def(ds, :supports_cte?){true} meta_def(ds, :supports_cte_in_subselect?){false} ds.from(:a).with(:a, ds.from(:b)).from_self.sql.should == 'WITH a AS (SELECT * FROM b) SELECT * FROM (SELECT * FROM a) AS t1' ds.from(:a, :c).with(:a, 
ds.from(:b)).with(:c, ds.from(:d)).from_self.sql.should == 'WITH a AS (SELECT * FROM b), c AS (SELECT * FROM d) SELECT * FROM (SELECT * FROM a, c) AS t1' end specify "should have working mutation method" do @ds.from_self! @ds.sql.should == 'SELECT * FROM (SELECT name FROM test LIMIT 1) AS t1' end end describe "Dataset#join_table" do before do @d = Sequel.mock.dataset.from(:items) @d.quote_identifiers = true end specify "should format the JOIN clause properly" do @d.join_table(:left_outer, :categories, :category_id => :id).sql.should == 'SELECT * FROM "items" LEFT OUTER JOIN "categories" ON ("categories"."category_id" = "items"."id")' end specify "should handle multiple conditions on the same join table column" do @d.join_table(:left_outer, :categories, [[:category_id, :id], [:category_id, 0..100]]).sql.should == 'SELECT * FROM "items" LEFT OUTER JOIN "categories" ON (("categories"."category_id" = "items"."id") AND ("categories"."category_id" >= 0) AND ("categories"."category_id" <= 100))' end specify "should include WHERE clause if applicable" do @d.filter(Sequel.expr(:price) < 100).join_table(:right_outer, :categories, :category_id => :id).sql.should == 'SELECT * FROM "items" RIGHT OUTER JOIN "categories" ON ("categories"."category_id" = "items"."id") WHERE ("price" < 100)' end specify "should include ORDER BY clause if applicable" do @d.order(:stamp).join_table(:full_outer, :categories, :category_id => :id).sql.should == 'SELECT * FROM "items" FULL OUTER JOIN "categories" ON ("categories"."category_id" = "items"."id") ORDER BY "stamp"' end specify "should support multiple joins" do @d.join_table(:inner, :b, :items_id=>:id).join_table(:left_outer, :c, :b_id => :b__id).sql.should == 'SELECT * FROM "items" INNER JOIN "b" ON ("b"."items_id" = "items"."id") LEFT OUTER JOIN "c" ON ("c"."b_id" = "b"."id")' end specify "should support arbitrary join types" do @d.join_table(:magic, :categories, :category_id=>:id).sql.should == 'SELECT * FROM "items" MAGIC JOIN "categories" ON ("categories"."category_id" = "items"."id")' end specify "should support many join methods" do @d.left_outer_join(:categories, :category_id=>:id).sql.should == 'SELECT * FROM "items" LEFT OUTER JOIN "categories" ON ("categories"."category_id" = "items"."id")' @d.right_outer_join(:categories, :category_id=>:id).sql.should == 'SELECT * FROM "items" RIGHT OUTER JOIN "categories" ON ("categories"."category_id" = "items"."id")' @d.full_outer_join(:categories, :category_id=>:id).sql.should == 'SELECT * FROM "items" FULL OUTER JOIN "categories" ON ("categories"."category_id" = "items"."id")' @d.inner_join(:categories, :category_id=>:id).sql.should == 'SELECT * FROM "items" INNER JOIN "categories" ON ("categories"."category_id" = "items"."id")' @d.left_join(:categories, :category_id=>:id).sql.should == 'SELECT * FROM "items" LEFT JOIN "categories" ON ("categories"."category_id" = "items"."id")' @d.right_join(:categories, :category_id=>:id).sql.should == 'SELECT * FROM "items" RIGHT JOIN "categories" ON ("categories"."category_id" = "items"."id")' @d.full_join(:categories, :category_id=>:id).sql.should == 'SELECT * FROM "items" FULL JOIN "categories" ON ("categories"."category_id" = "items"."id")' @d.natural_join(:categories).sql.should == 'SELECT * FROM "items" NATURAL JOIN "categories"' @d.natural_left_join(:categories).sql.should == 'SELECT * FROM "items" NATURAL LEFT JOIN "categories"' @d.natural_right_join(:categories).sql.should == 'SELECT * FROM "items" NATURAL RIGHT JOIN "categories"' 
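    # All of these join methods are thin wrappers around #join_table, which
    # simply upcases an arbitrary join type symbol into the SQL; e.g. this
    # hypothetical usage:
    #
    #   DB[:items].join_table(:magic, :categories, :category_id=>:id)
    #   # ... MAGIC JOIN "categories" ...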
@d.natural_full_join(:categories).sql.should == 'SELECT * FROM "items" NATURAL FULL JOIN "categories"' @d.cross_join(:categories).sql.should == 'SELECT * FROM "items" CROSS JOIN "categories"' end specify "should raise an error if additional arguments are provided to join methods that don't take conditions" do proc{@d.natural_join(:categories, :id=>:id)}.should raise_error(ArgumentError) proc{@d.natural_left_join(:categories, :id=>:id)}.should raise_error(ArgumentError) proc{@d.natural_right_join(:categories, :id=>:id)}.should raise_error(ArgumentError) proc{@d.natural_full_join(:categories, :id=>:id)}.should raise_error(ArgumentError) proc{@d.cross_join(:categories, :id=>:id)}.should raise_error(ArgumentError) end specify "should raise an error if blocks are provided to join methods that don't pass them" do proc{@d.natural_join(:categories){}}.should raise_error(Sequel::Error) proc{@d.natural_left_join(:categories){}}.should raise_error(Sequel::Error) proc{@d.natural_right_join(:categories){}}.should raise_error(Sequel::Error) proc{@d.natural_full_join(:categories){}}.should raise_error(Sequel::Error) proc{@d.cross_join(:categories){}}.should raise_error(Sequel::Error) end specify "should default to a plain join if nil is used for the type" do @d.join_table(nil, :categories, :category_id=>:id).sql.should == 'SELECT * FROM "items" JOIN "categories" ON ("categories"."category_id" = "items"."id")' end specify "should use an inner join for Dataset#join" do @d.join(:categories, :category_id=>:id).sql.should == 'SELECT * FROM "items" INNER JOIN "categories" ON ("categories"."category_id" = "items"."id")' end specify "should support aliased tables using the :table_alias option" do @d.from('stats').join('players', {:id => :player_id}, :table_alias=>:p).sql.should == 'SELECT * FROM "stats" INNER JOIN "players" AS "p" ON ("p"."id" = "stats"."player_id")' end specify "should support aliased tables using an implicit alias" do @d.from('stats').join(Sequel.expr(:players).as(:p), {:id => :player_id}).sql.should == 'SELECT * FROM "stats" INNER JOIN "players" AS "p" ON ("p"."id" = "stats"."player_id")' end specify "should support using an alias for the FROM when doing the first join with unqualified condition columns" do @d.from(Sequel.as(:foo, :f)).join_table(:inner, :bar, :id => :bar_id).sql.should == 'SELECT * FROM "foo" AS "f" INNER JOIN "bar" ON ("bar"."id" = "f"."bar_id")' end specify "should support implicit schemas in from table symbols" do @d.from(:s__t).join(:u__v, {:id => :player_id}).sql.should == 'SELECT * FROM "s"."t" INNER JOIN "u"."v" ON ("u"."v"."id" = "s"."t"."player_id")' end specify "should support implicit aliases in from table symbols" do @d.from(:t___z).join(:v___y, {:id => :player_id}).sql.should == 'SELECT * FROM "t" AS "z" INNER JOIN "v" AS "y" ON ("y"."id" = "z"."player_id")' @d.from(:s__t___z).join(:u__v___y, {:id => :player_id}).sql.should == 'SELECT * FROM "s"."t" AS "z" INNER JOIN "u"."v" AS "y" ON ("y"."id" = "z"."player_id")' end specify "should support AliasedExpressions" do @d.from(Sequel.expr(:s).as(:t)).join(Sequel.expr(:u).as(:v), {:id => :player_id}).sql.should == 'SELECT * FROM "s" AS "t" INNER JOIN "u" AS "v" ON ("v"."id" = "t"."player_id")' end specify "should support the :implicit_qualifier option" do @d.from('stats').join('players', {:id => :player_id}, :implicit_qualifier=>:p).sql.should == 'SELECT * FROM "stats" INNER JOIN "players" ON ("players"."id" = "p"."player_id")' end specify "should default :qualify option to default_join_table_qualification" do def 
@d.default_join_table_qualification; false; end @d.from('stats').join(:players, :id => :player_id).sql.should == 'SELECT * FROM "stats" INNER JOIN "players" ON ("id" = "player_id")' end specify "should not qualify if :qualify=>false option is given" do @d.from('stats').join(:players, {:id => :player_id}, :qualify=>false).sql.should == 'SELECT * FROM "stats" INNER JOIN "players" ON ("id" = "player_id")' end specify "should do deep qualification if :qualify=>:deep option is given" do @d.from('stats').join(:players, {Sequel.function(:f, :id) => Sequel.subscript(:player_id, 0)}, :qualify=>:deep).sql.should == 'SELECT * FROM "stats" INNER JOIN "players" ON (f("players"."id") = "stats"."player_id"[0])' end specify "should do only qualification if :qualify=>:symbol option is given" do @d.from('stats').join(:players, {Sequel.function(:f, :id) => :player_id}, :qualify=>:symbol).sql.should == 'SELECT * FROM "stats" INNER JOIN "players" ON (f("id") = "stats"."player_id")' end specify "should allow for arbitrary conditions in the JOIN clause" do @d.join_table(:left_outer, :categories, :status => 0).sql.should == 'SELECT * FROM "items" LEFT OUTER JOIN "categories" ON ("categories"."status" = 0)' @d.join_table(:left_outer, :categories, :categorizable_type => "Post").sql.should == 'SELECT * FROM "items" LEFT OUTER JOIN "categories" ON ("categories"."categorizable_type" = \'Post\')' @d.join_table(:left_outer, :categories, :timestamp => Sequel::CURRENT_TIMESTAMP).sql.should == 'SELECT * FROM "items" LEFT OUTER JOIN "categories" ON ("categories"."timestamp" = CURRENT_TIMESTAMP)' @d.join_table(:left_outer, :categories, :status => [1, 2, 3]).sql.should == 'SELECT * FROM "items" LEFT OUTER JOIN "categories" ON ("categories"."status" IN (1, 2, 3))' end specify "should raise error for a table without a source" do proc {Sequel.mock.dataset.join('players', :id => :player_id)}.should raise_error(Sequel::Error) end specify "should support joining datasets" do ds = Sequel.mock.dataset.from(:categories) @d.join_table(:left_outer, ds, :item_id => :id).sql.should == 'SELECT * FROM "items" LEFT OUTER JOIN (SELECT * FROM categories) AS "t1" ON ("t1"."item_id" = "items"."id")' ds.filter!(:active => true) @d.join_table(:left_outer, ds, :item_id => :id).sql.should == 'SELECT * FROM "items" LEFT OUTER JOIN (SELECT * FROM categories WHERE (active IS TRUE)) AS "t1" ON ("t1"."item_id" = "items"."id")' @d.from_self.join_table(:left_outer, ds, :item_id => :id).sql.should == 'SELECT * FROM (SELECT * FROM "items") AS "t1" LEFT OUTER JOIN (SELECT * FROM categories WHERE (active IS TRUE)) AS "t2" ON ("t2"."item_id" = "t1"."id")' end specify "should support joining datasets and aliasing the join" do ds = Sequel.mock.dataset.from(:categories) @d.join_table(:left_outer, ds, {:ds__item_id => :id}, :table_alias=>:ds).sql.should == 'SELECT * FROM "items" LEFT OUTER JOIN (SELECT * FROM categories) AS "ds" ON ("ds"."item_id" = "items"."id")' end specify "should support joining multiple datasets" do ds = Sequel.mock.dataset.from(:categories) ds2 = Sequel.mock.dataset.from(:nodes).select(:name) ds3 = Sequel.mock.dataset.from(:attributes).filter("name = 'blah'") @d.join_table(:left_outer, ds, :item_id => :id).join_table(:inner, ds2, :node_id=>:id).join_table(:right_outer, ds3, :attribute_id=>:id).sql.should == 'SELECT * FROM "items" LEFT OUTER JOIN (SELECT * FROM categories) AS "t1" ON ("t1"."item_id" = "items"."id") ' \ 'INNER JOIN (SELECT name FROM nodes) AS "t2" ON ("t2"."node_id" = "t1"."id") ' \ 'RIGHT OUTER JOIN (SELECT * FROM attributes 
WHERE (name = \'blah\')) AS "t3" ON ("t3"."attribute_id" = "t2"."id")'
  end

  specify "should support using an SQL String as the join condition" do
    @d.join(:categories, "c.item_id = items.id", :table_alias=>:c).sql.should == 'SELECT * FROM "items" INNER JOIN "categories" AS "c" ON (c.item_id = items.id)'
  end

  specify "should support using a boolean column as the join condition" do
    @d.join(:categories, :active).sql.should == 'SELECT * FROM "items" INNER JOIN "categories" ON "active"'
  end

  specify "should support using an expression as the join condition" do
    @d.join(:categories, Sequel.expr(:number) > 10).sql.should == 'SELECT * FROM "items" INNER JOIN "categories" ON ("number" > 10)'
  end

  specify "should support natural and cross joins" do
    @d.join_table(:natural, :categories).sql.should == 'SELECT * FROM "items" NATURAL JOIN "categories"'
    @d.join_table(:cross, :categories, nil).sql.should == 'SELECT * FROM "items" CROSS JOIN "categories"'
    @d.join_table(:natural, :categories, nil, :table_alias=>:c).sql.should == 'SELECT * FROM "items" NATURAL JOIN "categories" AS "c"'
  end

  specify "should support joins with a USING clause if an array of symbols is used" do
    @d.join(:categories, [:id]).sql.should == 'SELECT * FROM "items" INNER JOIN "categories" USING ("id")'
    @d.join(:categories, [:id1, :id2]).sql.should == 'SELECT * FROM "items" INNER JOIN "categories" USING ("id1", "id2")'
  end

  specify "should emulate JOIN USING (poorly) if the dataset doesn't support it" do
    meta_def(@d, :supports_join_using?){false}
    @d.join(:categories, [:id]).sql.should == 'SELECT * FROM "items" INNER JOIN "categories" ON ("categories"."id" = "items"."id")'
  end

  specify "should hoist WITH clauses from subqueries if the dataset doesn't support CTEs in subselects" do
    meta_def(@d, :supports_cte?){true}
    meta_def(@d, :supports_cte_in_subselect?){false}
    @d.join(Sequel.mock.dataset.from(:categories).with(:a, Sequel.mock.dataset.from(:b)), [:id]).sql.should == 'WITH "a" AS (SELECT * FROM b) SELECT * FROM "items" INNER JOIN (SELECT * FROM categories) AS "t1" USING ("id")'
  end

  specify "should raise an error if using an array of symbols with a block" do
    proc{@d.join(:categories, [:id]){|j,lj,js|}}.should raise_error(Sequel::Error)
  end

  specify "should support using a block that receives the join table/alias, last join table/alias, and array of previous joins" do
    @d.join(:categories) do |join_alias, last_join_alias, joins|
      join_alias.should == :categories
      last_join_alias.should == :items
      joins.should == []
    end

    @d.from(Sequel.as(:items, :i)).join(:categories, nil, :table_alias=>:c) do |join_alias, last_join_alias, joins|
      join_alias.should == :c
      last_join_alias.should == :i
      joins.should == []
    end

    @d.from(:items___i).join(:categories, nil, :table_alias=>:c) do |join_alias, last_join_alias, joins|
      join_alias.should == :c
      last_join_alias.should == :i
      joins.should == []
    end

    @d.join(:blah).join(:categories, nil, :table_alias=>:c) do |join_alias, last_join_alias, joins|
      join_alias.should == :c
      last_join_alias.should == :blah
      joins.should be_a_kind_of(Array)
      joins.length.should == 1
      joins.first.should be_a_kind_of(Sequel::SQL::JoinClause)
      joins.first.join_type.should == :inner
    end

    @d.join_table(:natural, :blah, nil, :table_alias=>:b).join(:categories, nil, :table_alias=>:c) do |join_alias, last_join_alias, joins|
      join_alias.should == :c
      last_join_alias.should == :b
      joins.should be_a_kind_of(Array)
      joins.length.should == 1
      joins.first.should be_a_kind_of(Sequel::SQL::JoinClause)
      joins.first.join_type.should == :natural
    end
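    # The join block receives the current join's table alias, the previous
    # table alias, and the array of earlier JoinClauses, so conditions can be
    # built against both sides of the join; a minimal sketch:
    #
    #   DB[:items].join(:categories) do |j, lj, js|
    #     {Sequel.qualify(j, :b) => Sequel.qualify(lj, :c)}
    #   end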
@d.join(:blah).join(:categories).join(:blah2) do |join_alias, last_join_alias, joins| join_alias.should == :blah2 last_join_alias.should == :categories joins.should be_a_kind_of(Array) joins.length.should == 2 joins.first.should be_a_kind_of(Sequel::SQL::JoinClause) joins.first.table.should == :blah joins.last.should be_a_kind_of(Sequel::SQL::JoinClause) joins.last.table.should == :categories end end specify "should use the block result as the only condition if no condition is given" do @d.join(:categories){|j,lj,js| {Sequel.qualify(j, :b)=>Sequel.qualify(lj, :c)}}.sql.should == 'SELECT * FROM "items" INNER JOIN "categories" ON ("categories"."b" = "items"."c")' @d.join(:categories){|j,lj,js| Sequel.qualify(j, :b) > Sequel.qualify(lj, :c)}.sql.should == 'SELECT * FROM "items" INNER JOIN "categories" ON ("categories"."b" > "items"."c")' end specify "should combine the block conditions and argument conditions if both given" do @d.join(:categories, :a=>:d){|j,lj,js| {Sequel.qualify(j, :b)=>Sequel.qualify(lj, :c)}}.sql.should == 'SELECT * FROM "items" INNER JOIN "categories" ON (("categories"."a" = "items"."d") AND ("categories"."b" = "items"."c"))' @d.join(:categories, :a=>:d){|j,lj,js| Sequel.qualify(j, :b) > Sequel.qualify(lj, :c)}.sql.should == 'SELECT * FROM "items" INNER JOIN "categories" ON (("categories"."a" = "items"."d") AND ("categories"."b" > "items"."c"))' end specify "should prefer explicit aliases over implicit" do @d.from(:items___i).join(:categories___c, {:category_id => :id}, {:table_alias=>:c2, :implicit_qualifier=>:i2}).sql.should == 'SELECT * FROM "items" AS "i" INNER JOIN "categories" AS "c2" ON ("c2"."category_id" = "i2"."id")' @d.from(Sequel.expr(:items).as(:i)).join(Sequel.expr(:categories).as(:c), {:category_id => :id}, {:table_alias=>:c2, :implicit_qualifier=>:i2}).sql.should == 'SELECT * FROM "items" AS "i" INNER JOIN "categories" AS "c2" ON ("c2"."category_id" = "i2"."id")' end specify "should not allow insert, update, delete, or truncate" do proc{@d.join(:categories, :a=>:d).insert_sql}.should raise_error(Sequel::InvalidOperation) proc{@d.join(:categories, :a=>:d).update_sql(:a=>1)}.should raise_error(Sequel::InvalidOperation) proc{@d.join(:categories, :a=>:d).delete_sql}.should raise_error(Sequel::InvalidOperation) proc{@d.join(:categories, :a=>:d).truncate_sql}.should raise_error(Sequel::InvalidOperation) end end describe "Dataset aggregate methods" do before do @d = Sequel.mock(:fetch=>proc{|s| {1=>s}})[:test] end specify "should include min" do @d.min(:a).should == 'SELECT min(a) AS min FROM test LIMIT 1' end specify "should include max" do @d.max(:b).should == 'SELECT max(b) AS max FROM test LIMIT 1' end specify "should include sum" do @d.sum(:c).should == 'SELECT sum(c) AS sum FROM test LIMIT 1' end specify "should include avg" do @d.avg(:d).should == 'SELECT avg(d) AS avg FROM test LIMIT 1' end specify "should accept qualified columns" do @d.avg(:test__bc).should == 'SELECT avg(test.bc) AS avg FROM test LIMIT 1' end specify "should use a subselect for the same conditions as count" do d = @d.order(:a).limit(5) d.avg(:a).should == 'SELECT avg(a) AS avg FROM (SELECT * FROM test ORDER BY a LIMIT 5) AS t1 LIMIT 1' d.sum(:a).should == 'SELECT sum(a) AS sum FROM (SELECT * FROM test ORDER BY a LIMIT 5) AS t1 LIMIT 1' d.min(:a).should == 'SELECT min(a) AS min FROM (SELECT * FROM test ORDER BY a LIMIT 5) AS t1 LIMIT 1' d.max(:a).should == 'SELECT max(a) AS max FROM (SELECT * FROM test ORDER BY a LIMIT 5) AS t1 LIMIT 1' end specify "should accept virtual row blocks" do 
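    # The aggregate helpers accept a virtual row block in place of a column
    # argument, so arbitrary SQL functions can be aggregated; roughly:
    #
    #   DB[:test].avg{a(b)}  # SELECT avg(a(b)) AS avg FROM test LIMIT 1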
    @d.avg{a(b)}.should == 'SELECT avg(a(b)) AS avg FROM test LIMIT 1'
    @d.sum{a(b)}.should == 'SELECT sum(a(b)) AS sum FROM test LIMIT 1'
    @d.min{a(b)}.should == 'SELECT min(a(b)) AS min FROM test LIMIT 1'
    @d.max{a(b)}.should == 'SELECT max(a(b)) AS max FROM test LIMIT 1'
  end
end

describe "Dataset#range" do
  before do
    @db = Sequel.mock(:fetch=>{:v1 => 1, :v2 => 10})
    @ds = @db[:test]
  end

  specify "should generate a correct SQL statement" do
    @ds.range(:stamp)
    @db.sqls.should == ["SELECT min(stamp) AS v1, max(stamp) AS v2 FROM test LIMIT 1"]
    @ds.filter(Sequel.expr(:price) > 100).range(:stamp)
    @db.sqls.should == ["SELECT min(stamp) AS v1, max(stamp) AS v2 FROM test WHERE (price > 100) LIMIT 1"]
  end

  specify "should return a range object" do
    @ds.range(:tryme).should == (1..10)
  end

  specify "should use a subselect for the same conditions as count" do
    @ds.order(:stamp).limit(5).range(:stamp).should == (1..10)
    @db.sqls.should == ['SELECT min(stamp) AS v1, max(stamp) AS v2 FROM (SELECT * FROM test ORDER BY stamp LIMIT 5) AS t1 LIMIT 1']
  end

  specify "should accept virtual row blocks" do
    @ds.range{a(b)}
    @db.sqls.should == ["SELECT min(a(b)) AS v1, max(a(b)) AS v2 FROM test LIMIT 1"]
  end
end

describe "Dataset#interval" do
  before do
    @db = Sequel.mock(:fetch=>{:v => 1234})
    @ds = @db[:test]
  end

  specify "should generate the correct SQL statement" do
    @ds.interval(:stamp)
    @db.sqls.should == ["SELECT (max(stamp) - min(stamp)) AS interval FROM test LIMIT 1"]
    @ds.filter(Sequel.expr(:price) > 100).interval(:stamp)
    @db.sqls.should == ["SELECT (max(stamp) - min(stamp)) AS interval FROM test WHERE (price > 100) LIMIT 1"]
  end

  specify "should use a subselect for the same conditions as count" do
    @ds.order(:stamp).limit(5).interval(:stamp).should == 1234
    @db.sqls.should == ['SELECT (max(stamp) - min(stamp)) AS interval FROM (SELECT * FROM test ORDER BY stamp LIMIT 5) AS t1 LIMIT 1']
  end

  specify "should accept virtual row blocks" do
    @ds.interval{a(b)}
    @db.sqls.should == ["SELECT (max(a(b)) - min(a(b))) AS interval FROM test LIMIT 1"]
  end
end

describe "Dataset #first and #last" do
  before do
    @d = Sequel.mock(:fetch=>proc{|s| {:s=>s}})[:test]
  end

  specify "should return a single record if no argument is given" do
    @d.order(:a).first.should == {:s=>'SELECT * FROM test ORDER BY a LIMIT 1'}
    @d.order(:a).last.should == {:s=>'SELECT * FROM test ORDER BY a DESC LIMIT 1'}
  end

  specify "should return the first/last matching record if argument is not an Integer" do
    @d.order(:a).first(:z => 26).should == {:s=>'SELECT * FROM test WHERE (z = 26) ORDER BY a LIMIT 1'}
    @d.order(:a).first('z = ?', 15).should == {:s=>'SELECT * FROM test WHERE (z = 15) ORDER BY a LIMIT 1'}
    @d.order(:a).last(:z => 26).should == {:s=>'SELECT * FROM test WHERE (z = 26) ORDER BY a DESC LIMIT 1'}
    @d.order(:a).last('z = ?', 15).should == {:s=>'SELECT * FROM test WHERE (z = 15) ORDER BY a DESC LIMIT 1'}
  end

  specify "should set the limit and return an array of records if the given number is > 1" do
    i = rand(10) + 10
    @d.order(:a).first(i).should == [{:s=>"SELECT * FROM test ORDER BY a LIMIT #{i}"}]
    i = rand(10) + 10
    @d.order(:a).last(i).should == [{:s=>"SELECT * FROM test ORDER BY a DESC LIMIT #{i}"}]
  end

  specify "should return the first matching record if a block is given without an argument" do
    @d.first{z > 26}.should == {:s=>'SELECT * FROM test WHERE (z > 26) LIMIT 1'}
    @d.order(:name).last{z > 26}.should == {:s=>'SELECT * FROM test WHERE (z > 26) ORDER BY name DESC LIMIT 1'}
  end

  specify "should combine block and standard argument filters if argument is not an Integer" do
    @d.first(:y=>25){z > 26}.should == {:s=>'SELECT * FROM test WHERE ((z > 26) AND (y = 25)) LIMIT 1'}
    @d.order(:name).last('y = ?', 16){z > 26}.should == {:s=>'SELECT * FROM test WHERE ((z > 26) AND (y = 16)) ORDER BY name DESC LIMIT 1'}
  end

  specify "should filter and return an array of records if an Integer argument is provided and a block is given" do
    i = rand(10) + 10
    @d.order(:a).first(i){z > 26}.should == [{:s=>"SELECT * FROM test WHERE (z > 26) ORDER BY a LIMIT #{i}"}]
    i = rand(10) + 10
    @d.order(:a).last(i){z > 26}.should == [{:s=>"SELECT * FROM test WHERE (z > 26) ORDER BY a DESC LIMIT #{i}"}]
  end

  specify "should return nil if no records match" do
    Sequel.mock[:t].first.should == nil
  end

  specify "#last should raise if no order is given" do
    proc {@d.last}.should raise_error(Sequel::Error)
    proc {@d.last(2)}.should raise_error(Sequel::Error)
    proc {@d.order(:a).last}.should_not raise_error
    proc {@d.order(:a).last(2)}.should_not raise_error
  end

  specify "#last should invert the order" do
    @d.order(:a).last.should == {:s=>'SELECT * FROM test ORDER BY a DESC LIMIT 1'}
    @d.order(Sequel.desc(:b)).last.should == {:s=>'SELECT * FROM test ORDER BY b ASC LIMIT 1'}
    @d.order(:c, :d).last.should == {:s=>'SELECT * FROM test ORDER BY c DESC, d DESC LIMIT 1'}
    @d.order(Sequel.desc(:e), :f).last.should == {:s=>'SELECT * FROM test ORDER BY e ASC, f DESC LIMIT 1'}
  end
end

describe "Dataset #first!" do
  before do
    @db = Sequel.mock(:fetch=>proc{|s| {:s=>s}})
    @d = @db[:test]
  end

  specify "should return a single record if no argument is given" do
    @d.order(:a).first!.should == {:s=>'SELECT * FROM test ORDER BY a LIMIT 1'}
  end

  specify "should return the first! matching record if argument is not an Integer" do
    @d.order(:a).first!(:z => 26).should == {:s=>'SELECT * FROM test WHERE (z = 26) ORDER BY a LIMIT 1'}
    @d.order(:a).first!('z = ?', 15).should == {:s=>'SELECT * FROM test WHERE (z = 15) ORDER BY a LIMIT 1'}
  end

  specify "should set the limit and return an array of records if the given number is > 1" do
    i = rand(10) + 10
    @d.order(:a).first!(i).should == [{:s=>"SELECT * FROM test ORDER BY a LIMIT #{i}"}]
  end

  specify "should return the first!
matching record if a block is given without an argument" do @d.first!{z > 26}.should == {:s=>'SELECT * FROM test WHERE (z > 26) LIMIT 1'} end specify "should combine block and standard argument filters if argument is not an Integer" do @d.first!(:y=>25){z > 26}.should == {:s=>'SELECT * FROM test WHERE ((z > 26) AND (y = 25)) LIMIT 1'} end specify "should filter and return an array of records if an Integer argument is provided and a block is given" do i = rand(10) + 10 @d.order(:a).first!(i){z > 26}.should == [{:s=>"SELECT * FROM test WHERE (z > 26) ORDER BY a LIMIT #{i}"}] end specify "should raise NoMatchingRow exception if no rows match" do proc{Sequel.mock[:t].first!}.should raise_error(Sequel::NoMatchingRow) end end describe "Dataset compound operations" do before do @a = Sequel.mock.dataset.from(:a).filter(:z => 1) @b = Sequel.mock.dataset.from(:b).filter(:z => 2) end specify "should support UNION and UNION ALL" do @a.union(@b).sql.should == "SELECT * FROM (SELECT * FROM a WHERE (z = 1) UNION SELECT * FROM b WHERE (z = 2)) AS t1" @b.union(@a, :all=>true).sql.should == "SELECT * FROM (SELECT * FROM b WHERE (z = 2) UNION ALL SELECT * FROM a WHERE (z = 1)) AS t1" end specify "should support INTERSECT and INTERSECT ALL" do @a.intersect(@b).sql.should == "SELECT * FROM (SELECT * FROM a WHERE (z = 1) INTERSECT SELECT * FROM b WHERE (z = 2)) AS t1" @b.intersect(@a, :all=>true).sql.should == "SELECT * FROM (SELECT * FROM b WHERE (z = 2) INTERSECT ALL SELECT * FROM a WHERE (z = 1)) AS t1" end specify "should support EXCEPT and EXCEPT ALL" do @a.except(@b).sql.should == "SELECT * FROM (SELECT * FROM a WHERE (z = 1) EXCEPT SELECT * FROM b WHERE (z = 2)) AS t1" @b.except(@a, :all=>true).sql.should == "SELECT * FROM (SELECT * FROM b WHERE (z = 2) EXCEPT ALL SELECT * FROM a WHERE (z = 1)) AS t1" end specify "should support :alias option for specifying identifier" do @a.union(@b, :alias=>:xx).sql.should == "SELECT * FROM (SELECT * FROM a WHERE (z = 1) UNION SELECT * FROM b WHERE (z = 2)) AS xx" @a.intersect(@b, :alias=>:xx).sql.should == "SELECT * FROM (SELECT * FROM a WHERE (z = 1) INTERSECT SELECT * FROM b WHERE (z = 2)) AS xx" @a.except(@b, :alias=>:xx).sql.should == "SELECT * FROM (SELECT * FROM a WHERE (z = 1) EXCEPT SELECT * FROM b WHERE (z = 2)) AS xx" end specify "should support :from_self=>false option to not wrap the compound in a SELECT * FROM (...)" do @b.union(@a, :from_self=>false).sql.should == "SELECT * FROM b WHERE (z = 2) UNION SELECT * FROM a WHERE (z = 1)" @b.intersect(@a, :from_self=>false).sql.should == "SELECT * FROM b WHERE (z = 2) INTERSECT SELECT * FROM a WHERE (z = 1)" @b.except(@a, :from_self=>false).sql.should == "SELECT * FROM b WHERE (z = 2) EXCEPT SELECT * FROM a WHERE (z = 1)" @b.union(@a, :from_self=>false, :all=>true).sql.should == "SELECT * FROM b WHERE (z = 2) UNION ALL SELECT * FROM a WHERE (z = 1)" @b.intersect(@a, :from_self=>false, :all=>true).sql.should == "SELECT * FROM b WHERE (z = 2) INTERSECT ALL SELECT * FROM a WHERE (z = 1)" @b.except(@a, :from_self=>false, :all=>true).sql.should == "SELECT * FROM b WHERE (z = 2) EXCEPT ALL SELECT * FROM a WHERE (z = 1)" end specify "should raise an InvalidOperation if INTERSECT or EXCEPT is used and they are not supported" do meta_def(@a, :supports_intersect_except?){false} proc{@a.intersect(@b)}.should raise_error(Sequel::InvalidOperation) proc{@a.intersect(@b,:all=> true)}.should raise_error(Sequel::InvalidOperation) proc{@a.except(@b)}.should raise_error(Sequel::InvalidOperation) proc{@a.except(@b, 
:all=>true)}.should raise_error(Sequel::InvalidOperation) end specify "should raise an InvalidOperation if INTERSECT ALL or EXCEPT ALL is used and they are not supported" do meta_def(@a, :supports_intersect_except_all?){false} proc{@a.intersect(@b)}.should_not raise_error proc{@a.intersect(@b, :all=>true)}.should raise_error(Sequel::InvalidOperation) proc{@a.except(@b)}.should_not raise_error proc{@a.except(@b, :all=>true)}.should raise_error(Sequel::InvalidOperation) end specify "should handle chained compound operations" do @a.union(@b).union(@a, :all=>true).sql.should == "SELECT * FROM (SELECT * FROM (SELECT * FROM a WHERE (z = 1) UNION SELECT * FROM b WHERE (z = 2)) AS t1 UNION ALL SELECT * FROM a WHERE (z = 1)) AS t1" @a.intersect(@b, :all=>true).intersect(@a).sql.should == "SELECT * FROM (SELECT * FROM (SELECT * FROM a WHERE (z = 1) INTERSECT ALL SELECT * FROM b WHERE (z = 2)) AS t1 INTERSECT SELECT * FROM a WHERE (z = 1)) AS t1" @a.except(@b).except(@a, :all=>true).sql.should == "SELECT * FROM (SELECT * FROM (SELECT * FROM a WHERE (z = 1) EXCEPT SELECT * FROM b WHERE (z = 2)) AS t1 EXCEPT ALL SELECT * FROM a WHERE (z = 1)) AS t1" end specify "should use a subselect when using a compound operation with a dataset that already has a compound operation" do @a.union(@b.union(@a, :all=>true)).sql.should == "SELECT * FROM (SELECT * FROM a WHERE (z = 1) UNION SELECT * FROM (SELECT * FROM b WHERE (z = 2) UNION ALL SELECT * FROM a WHERE (z = 1)) AS t1) AS t1" @a.intersect(@b.intersect(@a), :all=>true).sql.should == "SELECT * FROM (SELECT * FROM a WHERE (z = 1) INTERSECT ALL SELECT * FROM (SELECT * FROM b WHERE (z = 2) INTERSECT SELECT * FROM a WHERE (z = 1)) AS t1) AS t1" @a.except(@b.except(@a, :all=>true)).sql.should == "SELECT * FROM (SELECT * FROM a WHERE (z = 1) EXCEPT SELECT * FROM (SELECT * FROM b WHERE (z = 2) EXCEPT ALL SELECT * FROM a WHERE (z = 1)) AS t1) AS t1" end specify "should order and limit properly when using UNION, INTERSECT, or EXCEPT" do @dataset = Sequel.mock.dataset.from(:test) @dataset.union(@dataset).limit(2).sql.should == "SELECT * FROM (SELECT * FROM test UNION SELECT * FROM test) AS t1 LIMIT 2" @dataset.limit(2).intersect(@dataset).sql.should == "SELECT * FROM (SELECT * FROM (SELECT * FROM test LIMIT 2) AS t1 INTERSECT SELECT * FROM test) AS t1" @dataset.except(@dataset.limit(2)).sql.should == "SELECT * FROM (SELECT * FROM test EXCEPT SELECT * FROM (SELECT * FROM test LIMIT 2) AS t1) AS t1" @dataset.union(@dataset).order(:num).sql.should == "SELECT * FROM (SELECT * FROM test UNION SELECT * FROM test) AS t1 ORDER BY num" @dataset.order(:num).intersect(@dataset).sql.should == "SELECT * FROM (SELECT * FROM (SELECT * FROM test ORDER BY num) AS t1 INTERSECT SELECT * FROM test) AS t1" @dataset.except(@dataset.order(:num)).sql.should == "SELECT * FROM (SELECT * FROM test EXCEPT SELECT * FROM (SELECT * FROM test ORDER BY num) AS t1) AS t1" @dataset.limit(2).order(:a).union(@dataset.limit(3).order(:b)).order(:c).limit(4).sql.should == "SELECT * FROM (SELECT * FROM (SELECT * FROM test ORDER BY a LIMIT 2) AS t1 UNION SELECT * FROM (SELECT * FROM test ORDER BY b LIMIT 3) AS t1) AS t1 ORDER BY c LIMIT 4" end specify "should hoist WITH clauses in given dataset if dataset doesn't support WITH in subselect" do ds = Sequel.mock.dataset meta_def(ds, :supports_cte?){true} meta_def(ds, :supports_cte_in_subselect?){false} ds.from(:a).union(ds.from(:c).with(:c, ds.from(:d)), :from_self=>false).sql.should == 'WITH c AS (SELECT * FROM d) SELECT * FROM a UNION SELECT * FROM c' 
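    # The assertion above and the two below exercise CTE hoisting: when a
    # compound operand carries its own WITH clause but the database cannot
    # embed WITH inside a subquery, Sequel moves the WITH clause to the front
    # of the whole compound statement. A minimal sketch of the same call
    # pattern (reusing this spec's mock dataset `ds`; the SQL comes straight
    # from the assertion above):
    #
    #   ds.from(:a).union(ds.from(:c).with(:c, ds.from(:d)), :from_self=>false).sql
    #   # => "WITH c AS (SELECT * FROM d) SELECT * FROM a UNION SELECT * FROM c"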
ds.from(:a).except(ds.from(:c).with(:c, ds.from(:d))).sql.should == 'WITH c AS (SELECT * FROM d) SELECT * FROM (SELECT * FROM a EXCEPT SELECT * FROM c) AS t1' ds.from(:a).with(:a, ds.from(:b)).intersect(ds.from(:c).with(:c, ds.from(:d)), :from_self=>false).sql.should == 'WITH a AS (SELECT * FROM b), c AS (SELECT * FROM d) SELECT * FROM a INTERSECT SELECT * FROM c' end end describe "Dataset#[]" do before do @db = Sequel.mock(:fetch=>{1 => 2, 3 => 4}) @d = @db[:items] end specify "should return a single record filtered according to the given conditions" do @d[:name => 'didi'].should == {1 => 2, 3 => 4} @db.sqls.should == ["SELECT * FROM items WHERE (name = 'didi') LIMIT 1"] @d[:id => 5..45].should == {1 => 2, 3 => 4} @db.sqls.should == ["SELECT * FROM items WHERE ((id >= 5) AND (id <= 45)) LIMIT 1"] end end describe "Dataset#single_record" do before do @db = Sequel.mock end specify "should call each with a limit of 1 and return the record" do @db.fetch = {:a=>1} @db[:test].single_record.should == {:a=>1} @db.sqls.should == ['SELECT * FROM test LIMIT 1'] end specify "should return nil if no record is present" do @db[:test].single_record.should be_nil @db.sqls.should == ['SELECT * FROM test LIMIT 1'] end end describe "Dataset#single_value" do before do @db = Sequel.mock end specify "should call each and return the first value of the first record" do @db.fetch = {:a=>1} @db[:test].single_value.should == 1 @db.sqls.should == ['SELECT * FROM test LIMIT 1'] end specify "should return nil if no records" do @db[:test].single_value.should be_nil @db.sqls.should == ['SELECT * FROM test LIMIT 1'] end it "should work on a graphed_dataset" do @db.fetch = {:a=>1} ds = @db[:test].columns(:a) ds.graph(ds, [:a], :table_alias=>:test2).single_value.should == 1 @db.sqls.should == ['SELECT test.a, test2.a AS test2_a FROM test LEFT OUTER JOIN test AS test2 USING (a) LIMIT 1'] end end describe "Dataset#get" do before do @d = Sequel.mock(:fetch=>proc{|s| {:name=>s}})[:test] end specify "should select the specified column and fetch its value" do @d.get(:name).should == "SELECT name FROM test LIMIT 1" @d.get(:abc).should == "SELECT abc FROM test LIMIT 1" end specify "should work with filters" do @d.filter(:id => 1).get(:name).should == "SELECT name FROM test WHERE (id = 1) LIMIT 1" end specify "should work with aliased fields" do @d.get(Sequel.expr(:x__b).as(:name)).should == "SELECT x.b AS name FROM test LIMIT 1" end specify "should accept a block that yields a virtual row" do @d.get{|o| o.x__b.as(:name)}.should == "SELECT x.b AS name FROM test LIMIT 1" @d.get{x(1).as(:name)}.should == "SELECT x(1) AS name FROM test LIMIT 1" end specify "should raise an error if both a regular argument and block argument are used" do proc{@d.get(:name){|o| o.x__b.as(:name)}}.should raise_error(Sequel::Error) end specify "should support false and nil values" do @d.get(false).should == "SELECT 'f' AS v FROM test LIMIT 1" @d.get(nil).should == "SELECT NULL AS v FROM test LIMIT 1" end specify "should support an array of expressions to get an array of results" do @d._fetch = {:name=>1, :abc=>2} @d.get([:name, :abc]).should == [1, 2] @d.db.sqls.should == ['SELECT name, abc FROM test LIMIT 1'] end specify "should support an array with a single expression" do @d.get([:name]).should == ['SELECT name FROM test LIMIT 1'] end specify "should handle an array with aliases" do @d._fetch = {:name=>1, :abc=>2} @d.get([:n___name, Sequel.as(:a, :abc)]).should == [1, 2] @d.db.sqls.should == ['SELECT n AS name, a AS abc FROM test LIMIT 1'] end specify 
"should raise an Error if an alias cannot be determined" do @d._fetch = {:name=>1, :abc=>2} proc{@d.get([Sequel.+(:a, 1), :a])}.should raise_error(Sequel::Error) end specify "should support an array of expressions in a virtual row" do @d._fetch = {:name=>1, :abc=>2} @d.get{[name, n__abc]}.should == [1, 2] @d.db.sqls.should == ['SELECT name, n.abc FROM test LIMIT 1'] end specify "should work with static SQL" do @d.with_sql('SELECT foo').get(:name).should == "SELECT foo" @d._fetch = {:name=>1, :abc=>2} @d.with_sql('SELECT foo').get{[name, n__abc]}.should == [1, 2] @d.db.sqls.should == ['SELECT foo'] * 2 end specify "should handle cases where no rows are returned" do @d._fetch = [] @d.get(:n).should == nil @d.get([:n, :a]).should == nil @d.db.sqls.should == ['SELECT n FROM test LIMIT 1', 'SELECT n, a FROM test LIMIT 1'] end end describe "Dataset#set_row_proc" do before do @db = Sequel.mock(:fetch=>[{:a=>1}, {:a=>2}]) @dataset = @db[:items] @dataset.row_proc = proc{|h| h[:der] = h[:a] + 2; h} end specify "should cause dataset to pass all rows through the filter" do rows = @dataset.all rows.map{|h| h[:der]}.should == [3, 4] @db.sqls.should == ['SELECT * FROM items'] end specify "should be copied over when dataset is cloned" do @dataset.filter(:a => 1).all.should == [{:a=>1, :der=>3}, {:a=>2, :der=>4}] end end describe "Dataset#<<" do before do @db = Sequel.mock end specify "should call #insert" do @db[:items] << {:name => 1} @db.sqls.should == ['INSERT INTO items (name) VALUES (1)'] end specify "should be chainable" do @db[:items] << {:name => 1} << @db[:old_items].select(:name) @db.sqls.should == ['INSERT INTO items (name) VALUES (1)', 'INSERT INTO items SELECT name FROM old_items'] end end describe "Dataset#columns" do before do @dataset = Sequel.mock[:items] end specify "should return the value of @columns if @columns is not nil" do @dataset.columns(:a, :b, :c).columns.should == [:a, :b, :c] @dataset.db.sqls.should == [] end specify "should attempt to get a single record and return @columns if @columns is nil" do @dataset.db.columns = [:a] @dataset.columns.should == [:a] @dataset.db.sqls.should == ['SELECT * FROM items LIMIT 1'] end specify "should be cleared if you change the selected columns" do @dataset.db.columns = [[:a], [:b]] @dataset.columns.should == [:a] @dataset.db.sqls.should == ['SELECT * FROM items LIMIT 1'] @dataset.columns.should == [:a] @dataset.db.sqls.should == [] ds = @dataset.select{foo{}} ds.columns.should == [:b] @dataset.db.sqls.should == ['SELECT foo() FROM items LIMIT 1'] end specify "should be cleared if you change the FROM table" do @dataset.db.columns = [[:a], [:b]] @dataset.columns.should == [:a] @dataset.db.sqls.should == ['SELECT * FROM items LIMIT 1'] ds = @dataset.from(:foo) ds.columns.should == [:b] @dataset.db.sqls.should == ['SELECT * FROM foo LIMIT 1'] end specify "should be cleared if you join a table to the dataset" do @dataset.db.columns = [[:a], [:a, :b]] @dataset.columns.should == [:a] @dataset.db.sqls.should == ['SELECT * FROM items LIMIT 1'] ds = @dataset.cross_join(:foo) ds.columns.should == [:a, :b] @dataset.db.sqls.should == ['SELECT * FROM items CROSS JOIN foo LIMIT 1'] end specify "should be cleared if you set custom SQL for the dataset" do @dataset.db.columns = [[:a], [:b]] @dataset.columns.should == [:a] @dataset.db.sqls.should == ['SELECT * FROM items LIMIT 1'] ds = @dataset.with_sql('SELECT b FROM foo') ds.columns.should == [:b] @dataset.db.sqls.should == ['SELECT b FROM foo'] end specify "should ignore any filters, orders, or DISTINCT 
clauses" do @dataset.db.columns = [:a] @dataset.filter!(:b=>100).order!(:b).distinct! @dataset.columns.should == [:a] @dataset.db.sqls.should == ['SELECT * FROM items LIMIT 1'] end end describe "Dataset#columns!" do specify "should always attempt to get a record and return @columns" do ds = Sequel.mock(:columns=>[[:a, :b, :c], [:d, :e, :f]])[:items] ds.columns!.should == [:a, :b, :c] ds.db.sqls.should == ['SELECT * FROM items LIMIT 1'] ds.columns!.should == [:d, :e, :f] ds.db.sqls.should == ['SELECT * FROM items LIMIT 1'] end end describe "Dataset#import" do before do @db = Sequel.mock @ds = @db[:items] end specify "should return nil without a query if no values" do @ds.import(['x', 'y'], []).should == nil @db.sqls.should == [] end specify "should accept string keys as column names" do @ds.import(['x', 'y'], [[1, 2], [3, 4]]) @db.sqls.should == ['BEGIN', "INSERT INTO items (x, y) VALUES (1, 2)", "INSERT INTO items (x, y) VALUES (3, 4)", 'COMMIT'] end specify "should accept a columns array and a values array" do @ds.import([:x, :y], [[1, 2], [3, 4]]) @db.sqls.should == ['BEGIN', "INSERT INTO items (x, y) VALUES (1, 2)", "INSERT INTO items (x, y) VALUES (3, 4)", 'COMMIT'] end specify "should accept a columns array and a dataset" do @ds2 = @ds.from(:cats).filter(:purr => true).select(:a, :b) @ds.import([:x, :y], @ds2) @db.sqls.should == ['BEGIN', "INSERT INTO items (x, y) SELECT a, b FROM cats WHERE (purr IS TRUE)", 'COMMIT'] end specify "should accept a columns array and a values array with :commit_every option" do @ds.import([:x, :y], [[1, 2], [3, 4], [5, 6]], :commit_every => 3) @db.sqls.should == ['BEGIN', "INSERT INTO items (x, y) VALUES (1, 2)", "INSERT INTO items (x, y) VALUES (3, 4)", "INSERT INTO items (x, y) VALUES (5, 6)", 'COMMIT'] end specify "should accept a columns array and a values array with :slice option" do @ds.import([:x, :y], [[1, 2], [3, 4], [5, 6]], :slice => 2) @db.sqls.should == ['BEGIN', "INSERT INTO items (x, y) VALUES (1, 2)", "INSERT INTO items (x, y) VALUES (3, 4)", 'COMMIT', 'BEGIN', "INSERT INTO items (x, y) VALUES (5, 6)", 'COMMIT'] end end describe "Dataset#multi_insert" do before do @db = Sequel.mock(:servers=>{:s1=>{}}) @ds = @db[:items] @list = [{:name => 'abc'}, {:name => 'def'}, {:name => 'ghi'}] end specify "should return nil without a query if no values" do @ds.multi_insert([]).should == nil @db.sqls.should == [] end specify "should issue multiple insert statements inside a transaction" do @ds.multi_insert(@list) @db.sqls.should == ['BEGIN', "INSERT INTO items (name) VALUES ('abc')", "INSERT INTO items (name) VALUES ('def')", "INSERT INTO items (name) VALUES ('ghi')", 'COMMIT'] end specify "should respect :server option" do @ds.multi_insert(@list, :server=>:s1) @db.sqls.should == ['BEGIN -- s1', "INSERT INTO items (name) VALUES ('abc') -- s1", "INSERT INTO items (name) VALUES ('def') -- s1", "INSERT INTO items (name) VALUES ('ghi') -- s1", 'COMMIT -- s1'] end specify "should respect existing :server option on dataset" do @ds.server(:s1).multi_insert(@list) @db.sqls.should == ['BEGIN -- s1', "INSERT INTO items (name) VALUES ('abc') -- s1", "INSERT INTO items (name) VALUES ('def') -- s1", "INSERT INTO items (name) VALUES ('ghi') -- s1", 'COMMIT -- s1'] end specify "should respect :return=>:primary_key option" do @db.autoid = 1 @ds.multi_insert(@list, :return=>:primary_key).should == [1, 2, 3] @db.sqls.should == ['BEGIN', "INSERT INTO items (name) VALUES ('abc')", "INSERT INTO items (name) VALUES ('def')", "INSERT INTO items (name) VALUES ('ghi')", 
'COMMIT'] end specify "should handle different formats for tables" do @ds = @ds.from(:sch__tab) @ds.multi_insert(@list) @db.sqls.should == ['BEGIN', "INSERT INTO sch.tab (name) VALUES ('abc')", "INSERT INTO sch.tab (name) VALUES ('def')", "INSERT INTO sch.tab (name) VALUES ('ghi')", 'COMMIT'] @ds = @ds.from(Sequel.qualify(:sch, :tab)) @ds.multi_insert(@list) @db.sqls.should == ['BEGIN', "INSERT INTO sch.tab (name) VALUES ('abc')", "INSERT INTO sch.tab (name) VALUES ('def')", "INSERT INTO sch.tab (name) VALUES ('ghi')", 'COMMIT'] @ds = @ds.from(Sequel.identifier(:sch__tab)) @ds.multi_insert(@list) @db.sqls.should == ['BEGIN', "INSERT INTO sch__tab (name) VALUES ('abc')", "INSERT INTO sch__tab (name) VALUES ('def')", "INSERT INTO sch__tab (name) VALUES ('ghi')", 'COMMIT'] end specify "should accept the :commit_every option for committing every x records" do @ds.multi_insert(@list, :commit_every => 1) @db.sqls.should == ['BEGIN', "INSERT INTO items (name) VALUES ('abc')", 'COMMIT', 'BEGIN', "INSERT INTO items (name) VALUES ('def')", 'COMMIT', 'BEGIN', "INSERT INTO items (name) VALUES ('ghi')", 'COMMIT'] end specify "should accept the :slice option for committing every x records" do @ds.multi_insert(@list, :slice => 2) @db.sqls.should == ['BEGIN', "INSERT INTO items (name) VALUES ('abc')", "INSERT INTO items (name) VALUES ('def')", 'COMMIT', 'BEGIN', "INSERT INTO items (name) VALUES ('ghi')", 'COMMIT'] end specify "should accept string keys as column names" do @ds.multi_insert([{'x'=>1, 'y'=>2}, {'x'=>3, 'y'=>4}]) sqls = @db.sqls ["INSERT INTO items (x, y) VALUES (1, 2)", "INSERT INTO items (y, x) VALUES (2, 1)"].should include(sqls.slice!(1)) ["INSERT INTO items (x, y) VALUES (3, 4)", "INSERT INTO items (y, x) VALUES (4, 3)"].should include(sqls.slice!(1)) sqls.should == ['BEGIN', 'COMMIT'] end specify "should not do anything if no hashes are provided" do @ds.multi_insert([]) @db.sqls.should == [] end end describe "Dataset" do before do @d = Sequel.mock.dataset.from(:x) end specify "should support self-changing select!" do @d.select!(:y) @d.sql.should == "SELECT y FROM x" end specify "should support self-changing from!" do @d.from!(:y) @d.sql.should == "SELECT * FROM y" end specify "should support self-changing order!" do @d.order!(:y) @d.sql.should == "SELECT * FROM x ORDER BY y" end specify "should support self-changing filter!" do @d.filter!(:y => 1) @d.sql.should == "SELECT * FROM x WHERE (y = 1)" end specify "should support self-changing filter! with block" do @d.filter!{y < 2} @d.sql.should == "SELECT * FROM x WHERE (y < 2)" end specify "should raise for ! 
methods that don't return a dataset" do
    proc {@d.opts!}.should raise_error(NameError)
  end

  specify "should raise for missing methods" do
    proc {@d.xuyz}.should raise_error(NameError)
    proc {@d.xyz!}.should raise_error(NameError)
    proc {@d.xyz?}.should raise_error(NameError)
  end

  specify "should support chaining of bang methods" do
    @d.order!(:y).filter!(:y => 1).sql.should == "SELECT * FROM x WHERE (y = 1) ORDER BY y"
  end
end

describe "Dataset#update_sql" do
  before do
    @ds = Sequel.mock.dataset.from(:items)
  end

  specify "should accept strings" do
    @ds.update_sql("a = b").should == "UPDATE items SET a = b"
  end

  specify "should handle implicitly qualified symbols" do
    @ds.update_sql(:items__a=>:b).should == "UPDATE items SET items.a = b"
  end

  specify "should accept hash with string keys" do
    @ds.update_sql('c' => 'd').should == "UPDATE items SET c = 'd'"
  end

  specify "should accept array subscript references" do
    @ds.update_sql((Sequel.subscript(:day, 1)) => 'd').should == "UPDATE items SET day[1] = 'd'"
  end
end

describe "Dataset#insert_sql" do
  before do
    @ds = Sequel.mock.dataset.from(:items)
  end

  specify "should accept hash with symbol keys" do
    @ds.insert_sql(:c => 'd').should == "INSERT INTO items (c) VALUES ('d')"
  end

  specify "should accept hash with string keys" do
    @ds.insert_sql('c' => 'd').should == "INSERT INTO items (c) VALUES ('d')"
  end

  specify "should quote string keys" do
    @ds.quote_identifiers = true
    @ds.insert_sql('c' => 'd').should == "INSERT INTO \"items\" (\"c\") VALUES ('d')"
  end

  specify "should accept array subscript references" do
    @ds.insert_sql((Sequel.subscript(:day, 1)) => 'd').should == "INSERT INTO items (day[1]) VALUES ('d')"
  end

  specify "should raise an Error if the dataset has no sources" do
    proc{Sequel::Database.new.dataset.insert_sql}.should raise_error(Sequel::Error)
  end

  specify "should accept datasets" do
    @ds.insert_sql(@ds).should == "INSERT INTO items SELECT * FROM items"
  end

  specify "should accept datasets with columns" do
    @ds.insert_sql([:a, :b], @ds).should == "INSERT INTO items (a, b) SELECT * FROM items"
  end

  specify "should raise if given bad values" do
    proc{@ds.clone(:values=>'a').send(:_insert_sql)}.should raise_error(Sequel::Error)
  end

  specify "should accept separate values" do
    @ds.insert_sql(1).should == "INSERT INTO items VALUES (1)"
    @ds.insert_sql(1, 2).should == "INSERT INTO items VALUES (1, 2)"
    @ds.insert_sql(1, 2, 3).should == "INSERT INTO items VALUES (1, 2, 3)"
  end

  specify "should accept a single array of values" do
    @ds.insert_sql([1, 2, 3]).should == "INSERT INTO items VALUES (1, 2, 3)"
  end

  specify "should accept an array of columns and an array of values" do
    @ds.insert_sql([:a, :b, :c], [1, 2, 3]).should == "INSERT INTO items (a, b, c) VALUES (1, 2, 3)"
  end

  specify "should raise an Error if the columns and values differ in size" do
    proc{@ds.insert_sql([:a, :b], [1, 2, 3])}.should raise_error(Sequel::Error)
  end

  specify "should accept a single LiteralString" do
    @ds.insert_sql(Sequel.lit('VALUES (1, 2, 3)')).should == "INSERT INTO items VALUES (1, 2, 3)"
  end

  specify "should accept an array of columns and a LiteralString" do
    @ds.insert_sql([:a, :b, :c], Sequel.lit('VALUES (1, 2, 3)')).should == "INSERT INTO items (a, b, c) VALUES (1, 2, 3)"
  end
end

describe "Dataset#inspect" do
  before do
    class ::InspectDataset < Sequel::Dataset; end
  end

  after do
    Object.send(:remove_const, :InspectDataset) if defined?(::InspectDataset)
  end

  specify "should include the class name and the corresponding SQL statement" do
    Sequel::Dataset.new(Sequel.mock).from(:blah).inspect.should ==
'#<Sequel::Dataset: "SELECT * FROM blah">' InspectDataset.new(Sequel.mock).from(:blah).inspect.should == '#<InspectDataset: "SELECT * FROM blah">' end specify "should skip anonymous classes" do Class.new(Class.new(Sequel::Dataset)).new(Sequel.mock).from(:blah).inspect.should == '#<Sequel::Dataset: "SELECT * FROM blah">' Class.new(InspectDataset).new(Sequel.mock).from(:blah).inspect.should == '#<InspectDataset: "SELECT * FROM blah">' end end describe "Dataset#all" do before do @dataset = Sequel.mock(:fetch=>[{:x => 1, :y => 2}, {:x => 3, :y => 4}])[:items] end specify "should return an array with all records" do @dataset.all.should == [{:x => 1, :y => 2}, {:x => 3, :y => 4}] @dataset.db.sqls.should == ["SELECT * FROM items"] end specify "should iterate over the array if a block is given" do a = [] @dataset.all{|r| a << r.values_at(:x, :y)}.should == [{:x => 1, :y => 2}, {:x => 3, :y => 4}] a.should == [[1, 2], [3, 4]] @dataset.db.sqls.should == ["SELECT * FROM items"] end end describe "Dataset#grep" do before do @ds = Sequel.mock[:posts] end specify "should format a filter correctly" do @ds.grep(:title, 'ruby').sql.should == "SELECT * FROM posts WHERE ((title LIKE 'ruby' ESCAPE '\\'))" end specify "should support multiple columns" do @ds.grep([:title, :body], 'ruby').sql.should == "SELECT * FROM posts WHERE ((title LIKE 'ruby' ESCAPE '\\') OR (body LIKE 'ruby' ESCAPE '\\'))" end specify "should support multiple search terms" do @ds.grep(:title, ['abc', 'def']).sql.should == "SELECT * FROM posts WHERE ((title LIKE 'abc' ESCAPE '\\') OR (title LIKE 'def' ESCAPE '\\'))" end specify "should support multiple columns and search terms" do @ds.grep([:title, :body], ['abc', 'def']).sql.should == "SELECT * FROM posts WHERE ((title LIKE 'abc' ESCAPE '\\') OR (title LIKE 'def' ESCAPE '\\') OR (body LIKE 'abc' ESCAPE '\\') OR (body LIKE 'def' ESCAPE '\\'))" end specify "should support the :all_patterns option" do @ds.grep([:title, :body], ['abc', 'def'], :all_patterns=>true).sql.should == "SELECT * FROM posts WHERE (((title LIKE 'abc' ESCAPE '\\') OR (body LIKE 'abc' ESCAPE '\\')) AND ((title LIKE 'def' ESCAPE '\\') OR (body LIKE 'def' ESCAPE '\\')))" end specify "should support the :all_columns option" do @ds.grep([:title, :body], ['abc', 'def'], :all_columns=>true).sql.should == "SELECT * FROM posts WHERE (((title LIKE 'abc' ESCAPE '\\') OR (title LIKE 'def' ESCAPE '\\')) AND ((body LIKE 'abc' ESCAPE '\\') OR (body LIKE 'def' ESCAPE '\\')))" end specify "should support the :case_insensitive option" do @ds.grep([:title, :body], ['abc', 'def'], :case_insensitive=>true).sql.should == "SELECT * FROM posts WHERE ((UPPER(title) LIKE UPPER('abc') ESCAPE '\\') OR (UPPER(title) LIKE UPPER('def') ESCAPE '\\') OR (UPPER(body) LIKE UPPER('abc') ESCAPE '\\') OR (UPPER(body) LIKE UPPER('def') ESCAPE '\\'))" end specify "should support the :all_patterns and :all_columns options together" do @ds.grep([:title, :body], ['abc', 'def'], :all_patterns=>true, :all_columns=>true).sql.should == "SELECT * FROM posts WHERE ((title LIKE 'abc' ESCAPE '\\') AND (body LIKE 'abc' ESCAPE '\\') AND (title LIKE 'def' ESCAPE '\\') AND (body LIKE 'def' ESCAPE '\\'))" end specify "should support the :all_patterns and :case_insensitive options together" do @ds.grep([:title, :body], ['abc', 'def'], :all_patterns=>true, :case_insensitive=>true).sql.should == "SELECT * FROM posts WHERE (((UPPER(title) LIKE UPPER('abc') ESCAPE '\\') OR (UPPER(body) LIKE UPPER('abc') ESCAPE '\\')) AND ((UPPER(title) LIKE UPPER('def') ESCAPE '\\') OR 
(UPPER(body) LIKE UPPER('def') ESCAPE '\\')))"
  end

  specify "should support the :all_columns and :case_insensitive options together" do
    @ds.grep([:title, :body], ['abc', 'def'], :all_columns=>true, :case_insensitive=>true).sql.should == "SELECT * FROM posts WHERE (((UPPER(title) LIKE UPPER('abc') ESCAPE '\\') OR (UPPER(title) LIKE UPPER('def') ESCAPE '\\')) AND ((UPPER(body) LIKE UPPER('abc') ESCAPE '\\') OR (UPPER(body) LIKE UPPER('def') ESCAPE '\\')))"
  end

  specify "should support the :all_patterns, :all_columns, and :case_insensitive options together" do
    @ds.grep([:title, :body], ['abc', 'def'], :all_patterns=>true, :all_columns=>true, :case_insensitive=>true).sql.should == "SELECT * FROM posts WHERE ((UPPER(title) LIKE UPPER('abc') ESCAPE '\\') AND (UPPER(body) LIKE UPPER('abc') ESCAPE '\\') AND (UPPER(title) LIKE UPPER('def') ESCAPE '\\') AND (UPPER(body) LIKE UPPER('def') ESCAPE '\\'))"
  end

  specify "should not support regexps if the database doesn't support them" do
    proc{@ds.grep(:title, /ruby/).sql}.should raise_error(Sequel::InvalidOperation)
    proc{@ds.grep(:title, [/^ruby/, 'ruby']).sql}.should raise_error(Sequel::InvalidOperation)
  end

  specify "should support regexps if the database supports them" do
    def @ds.supports_regexp?; true end
    @ds.grep(:title, /ruby/).sql.should == "SELECT * FROM posts WHERE ((title ~ 'ruby'))"
    @ds.grep(:title, [/^ruby/, 'ruby']).sql.should == "SELECT * FROM posts WHERE ((title ~ '^ruby') OR (title LIKE 'ruby' ESCAPE '\\'))"
  end

  specify "should support searching against other columns" do
    @ds.grep(:title, :body).sql.should == "SELECT * FROM posts WHERE ((title LIKE body ESCAPE '\\'))"
  end
end

describe "Dataset default #fetch_rows, #insert, #update, #delete, #with_sql_delete, #truncate, #execute" do
  before do
    @db = Sequel::Database.new
    @ds = @db[:items]
  end

  specify "#delete should execute delete SQL" do
    @db.should_receive(:execute).once.with('DELETE FROM items', :server=>:default)
    @ds.delete
    @db.should_receive(:execute_dui).once.with('DELETE FROM items', :server=>:default)
    @ds.delete
  end

  specify "#with_sql_delete should execute delete SQL" do
    sql = 'DELETE FROM foo'
    @db.should_receive(:execute).once.with(sql, :server=>:default)
    @ds.with_sql_delete(sql)
    @db.should_receive(:execute_dui).once.with(sql, :server=>:default)
    @ds.with_sql_delete(sql)
  end

  specify "#insert should execute insert SQL" do
    @db.should_receive(:execute).once.with('INSERT INTO items DEFAULT VALUES', :server=>:default)
    @ds.insert([])
    @db.should_receive(:execute_insert).once.with('INSERT INTO items DEFAULT VALUES', :server=>:default)
    @ds.insert([])
  end

  specify "#update should execute update SQL" do
    @db.should_receive(:execute).once.with('UPDATE items SET number = 1', :server=>:default)
    @ds.update(:number=>1)
    @db.should_receive(:execute_dui).once.with('UPDATE items SET number = 1', :server=>:default)
    @ds.update(:number=>1)
  end

  specify "#truncate should execute truncate SQL" do
    @db.should_receive(:execute).once.with('TRUNCATE TABLE items', :server=>:default)
    @ds.truncate.should == nil
    @db.should_receive(:execute_ddl).once.with('TRUNCATE TABLE items', :server=>:default)
    @ds.truncate.should == nil
  end

  specify "#truncate should raise an InvalidOperation exception if the dataset is filtered" do
    proc{@ds.filter(:a=>1).truncate}.should raise_error(Sequel::InvalidOperation)
    proc{@ds.having(:a=>1).truncate}.should raise_error(Sequel::InvalidOperation)
  end

  specify "#execute should execute the SQL on the database" do
    @db.should_receive(:execute).once.with('SELECT 1', :server=>:read_only)
    @ds.send(:execute,
'SELECT 1') end end describe "Dataset prepared statements and bound variables " do before do @db = Sequel.mock @ds = @db[:items] meta_def(@ds, :insert_sql){|*v| "#{super(*v)}#{' RETURNING *' if opts.has_key?(:returning)}" } end specify "#call should take a type and bind hash and interpolate it" do @ds.filter(:num=>:$n).call(:each, :n=>1) @ds.filter(:num=>:$n).call(:select, :n=>1) @ds.filter(:num=>:$n).call([:map, :a], :n=>1) @ds.filter(:num=>:$n).call([:to_hash, :a, :b], :n=>1) @ds.filter(:num=>:$n).call([:to_hash_groups, :a, :b], :n=>1) @ds.filter(:num=>:$n).call(:first, :n=>1) @ds.filter(:num=>:$n).call(:delete, :n=>1) @ds.filter(:num=>:$n).call(:update, {:n=>1, :n2=>2}, :num=>:$n2) @ds.call(:insert, {:n=>1}, :num=>:$n) @ds.call(:insert_select, {:n=>1}, :num=>:$n) @db.sqls.should == [ 'SELECT * FROM items WHERE (num = 1)', 'SELECT * FROM items WHERE (num = 1)', 'SELECT * FROM items WHERE (num = 1)', 'SELECT * FROM items WHERE (num = 1)', 'SELECT * FROM items WHERE (num = 1)', 'SELECT * FROM items WHERE (num = 1) LIMIT 1', 'DELETE FROM items WHERE (num = 1)', 'UPDATE items SET num = 2 WHERE (num = 1)', 'INSERT INTO items (num) VALUES (1)', 'INSERT INTO items (num) VALUES (1) RETURNING *'] end specify "#prepare should take a type and name and store it in the database for later use with call" do pss = [] pss << @ds.filter(:num=>:$n).prepare(:each, :en) pss << @ds.filter(:num=>:$n).prepare(:select, :sn) pss << @ds.filter(:num=>:$n).prepare([:map, :a], :sm) pss << @ds.filter(:num=>:$n).prepare([:to_hash, :a, :b], :sh) pss << @ds.filter(:num=>:$n).prepare([:to_hash_groups, :a, :b], :shg) pss << @ds.filter(:num=>:$n).prepare(:first, :fn) pss << @ds.filter(:num=>:$n).prepare(:delete, :dn) pss << @ds.filter(:num=>:$n).prepare(:update, :un, :num=>:$n2) pss << @ds.prepare(:insert, :in, :num=>:$n) pss << @ds.prepare(:insert_select, :ins, :num=>:$n) @db.prepared_statements.keys.sort_by{|k| k.to_s}.should == [:dn, :en, :fn, :in, :ins, :sh, :shg, :sm, :sn, :un] [:en, :sn, :sm, :sh, :shg, :fn, :dn, :un, :in, :ins].each_with_index{|x, i| @db.prepared_statements[x].should == pss[i]} @db.call(:en, :n=>1){} @db.call(:sn, :n=>1) @db.call(:sm, :n=>1) @db.call(:sh, :n=>1) @db.call(:shg, :n=>1) @db.call(:fn, :n=>1) @db.call(:dn, :n=>1) @db.call(:un, :n=>1, :n2=>2) @db.call(:in, :n=>1) @db.call(:ins, :n=>1) @db.sqls.should == [ 'SELECT * FROM items WHERE (num = 1)', 'SELECT * FROM items WHERE (num = 1)', 'SELECT * FROM items WHERE (num = 1)', 'SELECT * FROM items WHERE (num = 1)', 'SELECT * FROM items WHERE (num = 1)', 'SELECT * FROM items WHERE (num = 1) LIMIT 1', 'DELETE FROM items WHERE (num = 1)', 'UPDATE items SET num = 2 WHERE (num = 1)', 'INSERT INTO items (num) VALUES (1)', 'INSERT INTO items (num) VALUES (1) RETURNING *'] end specify "#call should default to using :all if an invalid type is given" do @ds.filter(:num=>:$n).call(:select_all, :n=>1) @db.sqls.should == ['SELECT * FROM items WHERE (num = 1)'] end specify "#inspect should indicate it is a prepared statement with the prepared SQL" do @ds.filter(:num=>:$n).prepare(:select, :sn).inspect.should == \ '<Sequel::Mock::Dataset/PreparedStatement "SELECT * FROM items WHERE (num = $n)">' end specify "should handle literal strings" do @ds.filter("num = ?", :$n).call(:select, :n=>1) @db.sqls.should == ['SELECT * FROM items WHERE (num = 1)'] end specify "should handle columns on prepared statements correctly" do @db.columns = [:num] meta_def(@ds, :select_where_sql){|sql| super(sql); sql << " OR #{columns.first} = 1" if opts[:where]} 
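    # The singleton override above makes the generated WHERE clause depend on
    # the dataset's columns, so building the prepared statement's SQL has to
    # trigger one column-introspection query ('SELECT * FROM items LIMIT 1'),
    # which the assertions below verify.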
@ds.filter(:num=>:$n).prepare(:select, :sn).sql.should == 'SELECT * FROM items WHERE (num = $n) OR num = 1' @db.sqls.should == ['SELECT * FROM items LIMIT 1'] end specify "should handle datasets using static sql and placeholders" do @db["SELECT * FROM items WHERE (num = ?)", :$n].call(:select, :n=>1) @db.sqls.should == ['SELECT * FROM items WHERE (num = 1)'] end specify "should handle subselects" do @ds.filter(:$b).filter(:num=>@ds.select(:num).filter(:num=>:$n)).filter(:$c).call(:select, :n=>1, :b=>0, :c=>2) @db.sqls.should == ['SELECT * FROM items WHERE (0 AND (num IN (SELECT num FROM items WHERE (num = 1))) AND 2)'] end specify "should handle subselects in subselects" do @ds.filter(:$b).filter(:num=>@ds.select(:num).filter(:num=>@ds.select(:num).filter(:num=>:$n))).call(:select, :n=>1, :b=>0) @db.sqls.should == ['SELECT * FROM items WHERE (0 AND (num IN (SELECT num FROM items WHERE (num IN (SELECT num FROM items WHERE (num = 1))))))'] end specify "should handle subselects with literal strings" do @ds.filter(:$b).filter(:num=>@ds.select(:num).filter("num = ?", :$n)).call(:select, :n=>1, :b=>0) @db.sqls.should == ['SELECT * FROM items WHERE (0 AND (num IN (SELECT num FROM items WHERE (num = 1))))'] end specify "should handle subselects with static sql and placeholders" do @ds.filter(:$b).filter(:num=>@db["SELECT num FROM items WHERE (num = ?)", :$n]).call(:select, :n=>1, :b=>0) @db.sqls.should == ['SELECT * FROM items WHERE (0 AND (num IN (SELECT num FROM items WHERE (num = 1))))'] end end describe Sequel::Dataset::UnnumberedArgumentMapper do before do @db = Sequel.mock @ds = @db[:items].filter(:num=>:$n) def @ds.execute(sql, opts={}, &block) super(sql, opts.merge({:arguments=>bind_arguments}), &block) end def @ds.execute_dui(sql, opts={}, &block) super(sql, opts.merge({:arguments=>bind_arguments}), &block) end def @ds.execute_insert(sql, opts={}, &block) super(sql, opts.merge({:arguments=>bind_arguments}), &block) end @ps = [] @ps << @ds.prepare(:select, :s) @ps << @ds.prepare(:all, :a) @ps << @ds.prepare(:first, :f) @ps << @ds.prepare(:delete, :d) @ps << @ds.prepare(:insert, :i, :num=>:$n) @ps << @ds.prepare(:update, :u, :num=>:$n) @ps.each do |p| p.extend(Sequel::Dataset::ArgumentMapper) # Work around for old rbx p.extend(Sequel::Dataset::UnnumberedArgumentMapper) end end specify "#inspect should show the actual SQL submitted to the database" do @ps.first.inspect.should == '<Sequel::Mock::Dataset/PreparedStatement "SELECT * FROM items WHERE (num = ?)">' end specify "should submit the SQL to the database with placeholders and bind variables" do @ps.each{|p| p.prepared_sql; p.call(:n=>1)} @db.sqls.should == ["SELECT * FROM items WHERE (num = ?) -- args: [1]", "SELECT * FROM items WHERE (num = ?) -- args: [1]", "SELECT * FROM items WHERE (num = ?) LIMIT 1 -- args: [1]", "DELETE FROM items WHERE (num = ?) -- args: [1]", "INSERT INTO items (num) VALUES (?) -- args: [1]", "UPDATE items SET num = ? WHERE (num = ?) -- args: [1, 1]"] end specify "should handle unrecognized statement types as :all" do ps = @ds.prepare(:select_all, :s) ps.extend(Sequel::Dataset::ArgumentMapper) # Work around for old rbx ps.extend(Sequel::Dataset::UnnumberedArgumentMapper) ps.prepared_sql ps.call(:n=>1) @db.sqls.should == ["SELECT * FROM items WHERE (num = ?) 
-- args: [1]"] end end describe "Sequel::Dataset#server" do specify "should set the server to use for the dataset" do @db = Sequel.mock(:servers=>{:s=>{}, :i=>{}, :d=>{}, :u=>{}}) @ds = @db[:items].server(:s) @ds.all @ds.server(:i).insert(:a=>1) @ds.server(:d).delete @ds.server(:u).update(:a=>Sequel.expr(:a)+1) @db.sqls.should == ['SELECT * FROM items -- s', 'INSERT INTO items (a) VALUES (1) -- i', 'DELETE FROM items -- d', 'UPDATE items SET a = (a + 1) -- u'] end end describe "Sequel::Dataset#each_server" do specify "should yield a dataset for each server" do @db = Sequel.mock(:servers=>{:s=>{}, :i=>{}}) @ds = @db[:items] @ds.each_server do |ds| ds.should be_a_kind_of(Sequel::Dataset) ds.should_not == @ds ds.sql.should == @ds.sql ds.all end @db.sqls.sort.should == ['SELECT * FROM items', 'SELECT * FROM items -- i', 'SELECT * FROM items -- s'] end end describe "Sequel::Dataset#qualify" do before do @ds = Sequel::Database.new[:t] end specify "should qualify to the table if one is given" do @ds.filter{a<b}.qualify(:e).sql.should == 'SELECT e.* FROM t WHERE (e.a < e.b)' end specify "should handle the select, order, where, having, and group options/clauses" do @ds.select(:a).filter(:a=>1).order(:a).group(:a).having(:a).qualify.sql.should == 'SELECT t.a FROM t WHERE (t.a = 1) GROUP BY t.a HAVING t.a ORDER BY t.a' end specify "should handle the select using a table.* if all columns are currently selected" do @ds.filter(:a=>1).order(:a).group(:a).having(:a).qualify.sql.should == 'SELECT t.* FROM t WHERE (t.a = 1) GROUP BY t.a HAVING t.a ORDER BY t.a' end specify "should handle hashes in select option" do @ds.select(:a=>:b).qualify.sql.should == 'SELECT (t.a = t.b) FROM t' end specify "should handle symbols" do @ds.select(:a, :b__c, :d___e, :f__g___h).qualify.sql.should == 'SELECT t.a, b.c, t.d AS e, f.g AS h FROM t' end specify "should handle arrays" do @ds.filter(:a=>[:b, :c]).qualify.sql.should == 'SELECT t.* FROM t WHERE (t.a IN (t.b, t.c))' end specify "should handle hashes" do @ds.select(Sequel.case({:b=>{:c=>1}}, false)).qualify.sql.should == "SELECT (CASE WHEN t.b THEN (t.c = 1) ELSE 'f' END) FROM t" end specify "should handle SQL::Identifiers" do @ds.select{a}.qualify.sql.should == 'SELECT t.a FROM t' end specify "should handle SQL::OrderedExpressions" do @ds.order(Sequel.desc(:a), Sequel.asc(:b)).qualify.sql.should == 'SELECT t.* FROM t ORDER BY t.a DESC, t.b ASC' end specify "should handle SQL::AliasedExpressions" do @ds.select(Sequel.expr(:a).as(:b)).qualify.sql.should == 'SELECT t.a AS b FROM t' end specify "should handle SQL::CaseExpressions" do @ds.filter{Sequel.case({a=>b}, c, d)}.qualify.sql.should == 'SELECT t.* FROM t WHERE (CASE t.d WHEN t.a THEN t.b ELSE t.c END)' end specify "should handle SQL:Casts" do @ds.filter{a.cast(:boolean)}.qualify.sql.should == 'SELECT t.* FROM t WHERE CAST(t.a AS boolean)' end specify "should handle SQL::Functions" do @ds.filter{a(b, 1)}.qualify.sql.should == 'SELECT t.* FROM t WHERE a(t.b, 1)' end specify "should handle SQL::ComplexExpressions" do @ds.filter{(a+b)<(c-3)}.qualify.sql.should == 'SELECT t.* FROM t WHERE ((t.a + t.b) < (t.c - 3))' end specify "should handle SQL::ValueLists" do @ds.filter(:a=>Sequel.value_list([:b, :c])).qualify.sql.should == 'SELECT t.* FROM t WHERE (t.a IN (t.b, t.c))' end specify "should handle SQL::Subscripts" do @ds.filter{a.sql_subscript(b,3)}.qualify.sql.should == 'SELECT t.* FROM t WHERE t.a[t.b, 3]' end specify "should handle SQL::PlaceholderLiteralStrings" do @ds.filter('? 
> ?', :a, 1).qualify.sql.should == 'SELECT t.* FROM t WHERE (t.a > 1)' end specify "should handle SQL::PlaceholderLiteralStrings with named placeholders" do @ds.filter(':a > :b', :a=>:c, :b=>1).qualify.sql.should == 'SELECT t.* FROM t WHERE (t.c > 1)' end specify "should handle SQL::Wrappers" do @ds.filter(Sequel::SQL::Wrapper.new(:a)).qualify.sql.should == 'SELECT t.* FROM t WHERE t.a' end specify "should handle SQL::WindowFunctions" do meta_def(@ds, :supports_window_functions?){true} @ds.select{sum(:over, :args=>:a, :partition=>:b, :order=>:c){}}.qualify.sql.should == 'SELECT sum(t.a) OVER (PARTITION BY t.b ORDER BY t.c) FROM t' end specify "should handle all other objects by returning them unchanged" do @ds.select("a").filter{a(3)}.filter('blah').order(Sequel.lit('true')).group(Sequel.lit('a > ?', 1)).having(false).qualify.sql.should == "SELECT 'a' FROM t WHERE (a(3) AND (blah)) GROUP BY a > 1 HAVING 'f' ORDER BY true" end end describe "Sequel::Dataset#unbind" do before do @ds = Sequel::Database.new[:t] @u = proc{|ds| ds, bv = ds.unbind; [ds.sql, bv]} end specify "should unbind values assigned to equality and inequality statements" do @ds.filter(:foo=>1).unbind.first.sql.should == "SELECT * FROM t WHERE (foo = $foo)" @ds.exclude(:foo=>1).unbind.first.sql.should == "SELECT * FROM t WHERE (foo != $foo)" @ds.filter{foo > 1}.unbind.first.sql.should == "SELECT * FROM t WHERE (foo > $foo)" @ds.filter{foo >= 1}.unbind.first.sql.should == "SELECT * FROM t WHERE (foo >= $foo)" @ds.filter{foo < 1}.unbind.first.sql.should == "SELECT * FROM t WHERE (foo < $foo)" @ds.filter{foo <= 1}.unbind.first.sql.should == "SELECT * FROM t WHERE (foo <= $foo)" end specify "should return variables that could be used bound to recreate the previous query" do @ds.filter(:foo=>1).unbind.last.should == {:foo=>1} @ds.exclude(:foo=>1).unbind.last.should == {:foo=>1} end specify "should return variables as symbols" do @ds.filter(Sequel.expr(:foo)=>1).unbind.last.should == {:foo=>1} @ds.exclude(Sequel.expr(:foo__bar)=>1).unbind.last.should == {:"foo.bar"=>1} end specify "should handle numerics, strings, dates, times, and datetimes" do @u[@ds.filter(:foo=>1)].should == ["SELECT * FROM t WHERE (foo = $foo)", {:foo=>1}] @u[@ds.filter(:foo=>1.0)].should == ["SELECT * FROM t WHERE (foo = $foo)", {:foo=>1.0}] @u[@ds.filter(:foo=>BigDecimal.new('1.0'))].should == ["SELECT * FROM t WHERE (foo = $foo)", {:foo=>BigDecimal.new('1.0')}] @u[@ds.filter(:foo=>'a')].should == ["SELECT * FROM t WHERE (foo = $foo)", {:foo=>'a'}] @u[@ds.filter(:foo=>Date.today)].should == ["SELECT * FROM t WHERE (foo = $foo)", {:foo=>Date.today}] t = Time.now @u[@ds.filter(:foo=>t)].should == ["SELECT * FROM t WHERE (foo = $foo)", {:foo=>t}] dt = DateTime.now @u[@ds.filter(:foo=>dt)].should == ["SELECT * FROM t WHERE (foo = $foo)", {:foo=>dt}] end specify "should not unbind literal strings" do @u[@ds.filter(:foo=>Sequel.lit('a'))].should == ["SELECT * FROM t WHERE (foo = a)", {}] end specify "should not unbind Identifiers, QualifiedIdentifiers, or Symbols used as booleans" do @u[@ds.filter(:foo).filter{bar}.filter{foo__bar}].should == ["SELECT * FROM t WHERE (foo AND bar AND foo.bar)", {}] end specify "should not unbind for values it doesn't understand" do @u[@ds.filter(:foo=>Class.new{def sql_literal(ds) 'bar' end}.new)].should == ["SELECT * FROM t WHERE (foo = bar)", {}] end specify "should handle QualifiedIdentifiers" do @u[@ds.filter{foo__bar > 1}].should == ["SELECT * FROM t WHERE (foo.bar > $foo.bar)", {:"foo.bar"=>1}] end specify "should handle 
wrapped objects" do @u[@ds.filter{Sequel::SQL::Wrapper.new(foo__bar) > Sequel::SQL::Wrapper.new(1)}].should == ["SELECT * FROM t WHERE (foo.bar > $foo.bar)", {:"foo.bar"=>1}] end specify "should handle deep nesting" do @u[@ds.filter{foo > 1}.and{bar < 2}.or(:baz=>3).and(Sequel.case({~Sequel.expr(:x=>4)=>true}, false))].should == ["SELECT * FROM t WHERE ((((foo > $foo) AND (bar < $bar)) OR (baz = $baz)) AND (CASE WHEN (x != $x) THEN 't' ELSE 'f' END))", {:foo=>1, :bar=>2, :baz=>3, :x=>4}] end specify "should handle JOIN ON" do @u[@ds.cross_join(:x).join(:a, [:u]).join(:b, [[:c, :d], [:e,1]])].should == ["SELECT * FROM t CROSS JOIN x INNER JOIN a USING (u) INNER JOIN b ON ((b.c = a.d) AND (b.e = $b.e))", {:"b.e"=>1}] end specify "should raise an UnbindDuplicate exception if same variable is used with multiple different values" do proc{@ds.filter(:foo=>1).or(:foo=>2).unbind}.should raise_error(Sequel::UnbindDuplicate) end specify "should handle case where the same variable has the same value in multiple places " do @u[@ds.filter(:foo=>1).or(:foo=>1)].should == ["SELECT * FROM t WHERE ((foo = $foo) OR (foo = $foo))", {:foo=>1}] end specify "should raise Error for unhandled objects inside Identifiers and QualifiedIndentifiers" do proc{@ds.filter(Sequel::SQL::Identifier.new([]) > 1).unbind}.should raise_error(Sequel::Error) proc{@ds.filter{foo.qualify({}) > 1}.unbind}.should raise_error(Sequel::Error) end end describe "Sequel::Dataset #with and #with_recursive" do before do @db = Sequel::Database.new @ds = @db[:t] end specify "#with should take a name and dataset and use a WITH clause" do @ds.with(:t, @db[:x]).sql.should == 'WITH t AS (SELECT * FROM x) SELECT * FROM t' end specify "#with_recursive should take a name, nonrecursive dataset, and recursive dataset, and use a WITH clause" do @ds.with_recursive(:t, @db[:x], @db[:t]).sql.should == 'WITH t AS (SELECT * FROM x UNION ALL SELECT * FROM t) SELECT * FROM t' end specify "#with and #with_recursive should add to existing WITH clause if called multiple times" do @ds.with(:t, @db[:x]).with(:j, @db[:y]).sql.should == 'WITH t AS (SELECT * FROM x), j AS (SELECT * FROM y) SELECT * FROM t' @ds.with_recursive(:t, @db[:x], @db[:t]).with_recursive(:j, @db[:y], @db[:j]).sql.should == 'WITH t AS (SELECT * FROM x UNION ALL SELECT * FROM t), j AS (SELECT * FROM y UNION ALL SELECT * FROM j) SELECT * FROM t' @ds.with(:t, @db[:x]).with_recursive(:j, @db[:y], @db[:j]).sql.should == 'WITH t AS (SELECT * FROM x), j AS (SELECT * FROM y UNION ALL SELECT * FROM j) SELECT * FROM t' end specify "#with and #with_recursive should take an :args option" do @ds.with(:t, @db[:x], :args=>[:b]).sql.should == 'WITH t(b) AS (SELECT * FROM x) SELECT * FROM t' @ds.with_recursive(:t, @db[:x], @db[:t], :args=>[:b, :c]).sql.should == 'WITH t(b, c) AS (SELECT * FROM x UNION ALL SELECT * FROM t) SELECT * FROM t' end specify "#with and #with_recursive should quote the columns in the :args option" do @ds.quote_identifiers = true @ds.with(:t, @db[:x], :args=>[:b]).sql.should == 'WITH "t"("b") AS (SELECT * FROM x) SELECT * FROM "t"' @ds.with_recursive(:t, @db[:x], @db[:t], :args=>[:b, :c]).sql.should == 'WITH "t"("b", "c") AS (SELECT * FROM x UNION ALL SELECT * FROM t) SELECT * FROM "t"' end specify "#with_recursive should take an :union_all=>false option" do @ds.with_recursive(:t, @db[:x], @db[:t], :union_all=>false).sql.should == 'WITH t AS (SELECT * FROM x UNION SELECT * FROM t) SELECT * FROM t' end specify "#with and #with_recursive should raise an error unless the dataset supports 
CTEs" do meta_def(@ds, :supports_cte?){false} proc{@ds.with(:t, @db[:x], :args=>[:b])}.should raise_error(Sequel::Error) proc{@ds.with_recursive(:t, @db[:x], @db[:t], :args=>[:b, :c])}.should raise_error(Sequel::Error) end specify "#with should work on insert, update, and delete statements if they support it" do [:insert, :update, :delete].each do |m| meta_def(@ds, :"#{m}_clause_methods"){[:"#{m}_with_sql"] + super()} end @ds.with(:t, @db[:x]).insert_sql(1).should == 'WITH t AS (SELECT * FROM x) INSERT INTO t VALUES (1)' @ds.with(:t, @db[:x]).update_sql(:foo=>1).should == 'WITH t AS (SELECT * FROM x) UPDATE t SET foo = 1' @ds.with(:t, @db[:x]).delete_sql.should == 'WITH t AS (SELECT * FROM x) DELETE FROM t' end specify "should hoist WITH clauses in given dataset(s) if dataset doesn't support WITH in subselect" do meta_def(@ds, :supports_cte?){true} meta_def(@ds, :supports_cte_in_subselect?){false} @ds.with(:t, @ds.from(:s).with(:s, @ds.from(:r))).sql.should == 'WITH s AS (SELECT * FROM r), t AS (SELECT * FROM s) SELECT * FROM t' @ds.with_recursive(:t, @ds.from(:s).with(:s, @ds.from(:r)), @ds.from(:q).with(:q, @ds.from(:p))).sql.should == 'WITH s AS (SELECT * FROM r), q AS (SELECT * FROM p), t AS (SELECT * FROM s UNION ALL SELECT * FROM q) SELECT * FROM t' end end describe Sequel::SQL::Constants do before do @db = Sequel::Database.new end it "should have CURRENT_DATE" do @db.literal(Sequel::SQL::Constants::CURRENT_DATE).should == 'CURRENT_DATE' @db.literal(Sequel::CURRENT_DATE).should == 'CURRENT_DATE' end it "should have CURRENT_TIME" do @db.literal(Sequel::SQL::Constants::CURRENT_TIME).should == 'CURRENT_TIME' @db.literal(Sequel::CURRENT_TIME).should == 'CURRENT_TIME' end it "should have CURRENT_TIMESTAMP" do @db.literal(Sequel::SQL::Constants::CURRENT_TIMESTAMP).should == 'CURRENT_TIMESTAMP' @db.literal(Sequel::CURRENT_TIMESTAMP).should == 'CURRENT_TIMESTAMP' end it "should have NULL" do @db.literal(Sequel::SQL::Constants::NULL).should == 'NULL' @db.literal(Sequel::NULL).should == 'NULL' end it "should have NOTNULL" do @db.literal(Sequel::SQL::Constants::NOTNULL).should == 'NOT NULL' @db.literal(Sequel::NOTNULL).should == 'NOT NULL' end it "should have TRUE and SQLTRUE" do @db.literal(Sequel::SQL::Constants::TRUE).should == "'t'" @db.literal(Sequel::TRUE).should == "'t'" @db.literal(Sequel::SQL::Constants::SQLTRUE).should == "'t'" @db.literal(Sequel::SQLTRUE).should == "'t'" end it "should have FALSE and SQLFALSE" do @db.literal(Sequel::SQL::Constants::FALSE).should == "'f'" @db.literal(Sequel::FALSE).should == "'f'" @db.literal(Sequel::SQL::Constants::SQLFALSE).should == "'f'" @db.literal(Sequel::SQLFALSE).should == "'f'" end end describe "Sequel timezone support" do before do @db = Sequel::Database.new @dataset = @db.dataset meta_def(@dataset, :supports_timestamp_timezones?){true} meta_def(@dataset, :supports_timestamp_usecs?){false} @offset = sprintf("%+03i%02i", *(Time.now.utc_offset/60).divmod(60)) end after do Sequel.default_timezone = nil Sequel.datetime_class = Time end specify "should handle an database timezone of :utc when literalizing values" do Sequel.database_timezone = :utc t = Time.now s = t.getutc.strftime("'%Y-%m-%d %H:%M:%S") @dataset.literal(t).should == "#{s}+0000'" t = DateTime.now s = t.new_offset(0).strftime("'%Y-%m-%d %H:%M:%S") @dataset.literal(t).should == "#{s}+0000'" end specify "should handle an database timezone of :local when literalizing values" do Sequel.database_timezone = :local t = Time.now.utc s = t.getlocal.strftime("'%Y-%m-%d %H:%M:%S") 
@dataset.literal(t).should == "#{s}#{@offset}'" t = DateTime.now.new_offset(0) s = t.new_offset(DateTime.now.offset).strftime("'%Y-%m-%d %H:%M:%S") @dataset.literal(t).should == "#{s}#{@offset}'" end specify "should have Database#timezone override Sequel.database_timezone" do Sequel.database_timezone = :local @db.timezone = :utc t = Time.now s = t.getutc.strftime("'%Y-%m-%d %H:%M:%S") @dataset.literal(t).should == "#{s}+0000'" t = DateTime.now s = t.new_offset(0).strftime("'%Y-%m-%d %H:%M:%S") @dataset.literal(t).should == "#{s}+0000'" Sequel.database_timezone = :utc @db.timezone = :local t = Time.now.utc s = t.getlocal.strftime("'%Y-%m-%d %H:%M:%S") @dataset.literal(t).should == "#{s}#{@offset}'" t = DateTime.now.new_offset(0) s = t.new_offset(DateTime.now.offset).strftime("'%Y-%m-%d %H:%M:%S") @dataset.literal(t).should == "#{s}#{@offset}'" end specify "should handle converting database timestamps into application timestamps" do Sequel.database_timezone = :utc Sequel.application_timezone = :local t = Time.now.utc Sequel.database_to_application_timestamp(t).to_s.should == t.getlocal.to_s Sequel.database_to_application_timestamp(t.to_s).to_s.should == t.getlocal.to_s Sequel.database_to_application_timestamp(t.strftime('%Y-%m-%d %H:%M:%S')).to_s.should == t.getlocal.to_s Sequel.datetime_class = DateTime dt = DateTime.now dt2 = dt.new_offset(0) Sequel.database_to_application_timestamp(dt2).to_s.should == dt.to_s Sequel.database_to_application_timestamp(dt2.to_s).to_s.should == dt.to_s Sequel.database_to_application_timestamp(dt2.strftime('%Y-%m-%d %H:%M:%S')).to_s.should == dt.to_s Sequel.datetime_class = Time Sequel.database_timezone = :local Sequel.application_timezone = :utc Sequel.database_to_application_timestamp(t.getlocal).to_s.should == t.to_s Sequel.database_to_application_timestamp(t.getlocal.to_s).to_s.should == t.to_s Sequel.database_to_application_timestamp(t.getlocal.strftime('%Y-%m-%d %H:%M:%S')).to_s.should == t.to_s Sequel.datetime_class = DateTime Sequel.database_to_application_timestamp(dt).to_s.should == dt2.to_s Sequel.database_to_application_timestamp(dt.to_s).to_s.should == dt2.to_s Sequel.database_to_application_timestamp(dt.strftime('%Y-%m-%d %H:%M:%S')).to_s.should == dt2.to_s end specify "should handle typecasting timestamp columns" do Sequel.typecast_timezone = :utc Sequel.application_timezone = :local t = Time.now.utc @db.typecast_value(:datetime, t).to_s.should == t.getlocal.to_s @db.typecast_value(:datetime, t.to_s).to_s.should == t.getlocal.to_s @db.typecast_value(:datetime, t.strftime('%Y-%m-%d %H:%M:%S')).to_s.should == t.getlocal.to_s Sequel.datetime_class = DateTime dt = DateTime.now dt2 = dt.new_offset(0) @db.typecast_value(:datetime, dt2).to_s.should == dt.to_s @db.typecast_value(:datetime, dt2.to_s).to_s.should == dt.to_s @db.typecast_value(:datetime, dt2.strftime('%Y-%m-%d %H:%M:%S')).to_s.should == dt.to_s Sequel.datetime_class = Time Sequel.typecast_timezone = :local Sequel.application_timezone = :utc @db.typecast_value(:datetime, t.getlocal).to_s.should == t.to_s @db.typecast_value(:datetime, t.getlocal.to_s).to_s.should == t.to_s @db.typecast_value(:datetime, t.getlocal.strftime('%Y-%m-%d %H:%M:%S')).to_s.should == t.to_s Sequel.datetime_class = DateTime @db.typecast_value(:datetime, dt).to_s.should == dt2.to_s @db.typecast_value(:datetime, dt.to_s).to_s.should == dt2.to_s @db.typecast_value(:datetime, dt.strftime('%Y-%m-%d %H:%M:%S')).to_s.should == dt2.to_s end specify "should handle converting database timestamp columns from an array of values" 
do
    Sequel.database_timezone = :utc
    Sequel.application_timezone = :local
    t = Time.now.utc
    Sequel.database_to_application_timestamp([t.year, t.mon, t.day, t.hour, t.min, t.sec]).to_s.should == t.getlocal.to_s
    Sequel.datetime_class = DateTime
    dt = DateTime.now
    dt2 = dt.new_offset(0)
    Sequel.database_to_application_timestamp([dt2.year, dt2.mon, dt2.day, dt2.hour, dt2.min, dt2.sec]).to_s.should == dt.to_s
    Sequel.datetime_class = Time
    Sequel.database_timezone = :local
    Sequel.application_timezone = :utc
    t = t.getlocal
    Sequel.database_to_application_timestamp([t.year, t.mon, t.day, t.hour, t.min, t.sec]).to_s.should == t.getutc.to_s
    Sequel.datetime_class = DateTime
    Sequel.database_to_application_timestamp([dt.year, dt.mon, dt.day, dt.hour, dt.min, dt.sec]).to_s.should == dt2.to_s
  end

  specify "should raise an InvalidValue error when an error occurs while converting a timestamp" do
    proc{Sequel.database_to_application_timestamp([0, 0, 0, 0, 0, 0])}.should raise_error(Sequel::InvalidValue)
  end

  specify "should raise an error when attempting to typecast to a timestamp from an unsupported type" do
    proc{Sequel.database_to_application_timestamp(Object.new)}.should raise_error(Sequel::InvalidValue)
  end

  specify "should raise an InvalidValue error when the DateTime class is used and when a bad application timezone is used when attempting to convert timestamps" do
    Sequel.application_timezone = :blah
    Sequel.datetime_class = DateTime
    proc{Sequel.database_to_application_timestamp('2009-06-01 10:20:30')}.should raise_error(Sequel::InvalidValue)
  end

  specify "should raise an InvalidValue error when the DateTime class is used and when a bad database timezone is used when attempting to convert timestamps" do
    Sequel.database_timezone = :blah
    Sequel.datetime_class = DateTime
    proc{Sequel.database_to_application_timestamp('2009-06-01 10:20:30')}.should raise_error(Sequel::InvalidValue)
  end

  specify "Sequel.default_timezone= should set all other timezones" do
    Sequel.database_timezone.should == nil
    Sequel.application_timezone.should == nil
    Sequel.typecast_timezone.should == nil
    Sequel.default_timezone = :utc
    Sequel.database_timezone.should == :utc
    Sequel.application_timezone.should == :utc
    Sequel.typecast_timezone.should == :utc
  end
end

describe "Sequel::Dataset#select_map" do
  before do
    @ds = Sequel.mock(:fetch=>[{:c=>1}, {:c=>2}])[:t]
  end

  specify "should do select and map in one step" do
    @ds.select_map(:a).should == [1, 2]
    @ds.db.sqls.should == ['SELECT a FROM t']
  end

  specify "should handle implicit qualifiers in arguments" do
    @ds.select_map(:a__b).should == [1, 2]
    @ds.db.sqls.should == ['SELECT a.b FROM t']
  end

  specify "should raise if multiple arguments and can't determine alias" do
    proc{@ds.select_map([Sequel.function(:a), :b])}.should raise_error(Sequel::Error)
    proc{@ds.select_map(Sequel.function(:a)){b}}.should raise_error(Sequel::Error)
    proc{@ds.select_map{[a{}, b]}}.should raise_error(Sequel::Error)
  end

  specify "should handle implicit aliases in arguments" do
    @ds.select_map(:a___b).should == [1, 2]
    @ds.db.sqls.should == ['SELECT a AS b FROM t']
  end

  specify "should handle other objects" do
    @ds.select_map(Sequel.lit("a").as(:b)).should == [1, 2]
    @ds.db.sqls.should == ['SELECT a AS b FROM t']
  end

  specify "should handle identifiers with strings" do
    @ds.select_map([Sequel::SQL::Identifier.new('c'), :c]).should == [[1, 1], [2, 2]]
    @ds.db.sqls.should == ['SELECT c, c FROM t']
  end

  specify "should raise an error for plain strings" do
    proc{@ds.select_map(['c', :c])}.should raise_error(Sequel::Error)
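    # No query should be issued when the argument is rejected, hence the
    # empty sqls assertion below; literal SQL must be wrapped explicitly,
    # e.g. Sequel.lit('c').as(:c), as in the "should handle other objects"
    # spec above.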
@ds.db.sqls.should == [] end specify "should handle an expression without a determinable alias" do @ds.select_map{a(t__c)}.should == [1, 2] @ds.db.sqls.should == ['SELECT a(t.c) AS v FROM t'] end specify "should accept a block" do @ds.select_map{a(t__c).as(b)}.should == [1, 2] @ds.db.sqls.should == ['SELECT a(t.c) AS b FROM t'] end specify "should accept a block with an array of columns" do @ds.select_map{[a(t__c).as(c), a(t__c).as(c)]}.should == [[1, 1], [2, 2]] @ds.db.sqls.should == ['SELECT a(t.c) AS c, a(t.c) AS c FROM t'] end specify "should accept a block with a column" do @ds.select_map(:c){a(t__c).as(c)}.should == [[1, 1], [2, 2]] @ds.db.sqls.should == ['SELECT c, a(t.c) AS c FROM t'] end specify "should accept a block and array of arguments" do @ds.select_map([:c, :c]){[a(t__c).as(c), a(t__c).as(c)]}.should == [[1, 1, 1, 1], [2, 2, 2, 2]] @ds.db.sqls.should == ['SELECT c, c, a(t.c) AS c, a(t.c) AS c FROM t'] end specify "should handle an array of columns" do @ds.select_map([:c, :c]).should == [[1, 1], [2, 2]] @ds.db.sqls.should == ['SELECT c, c FROM t'] @ds.select_map([Sequel.expr(:d).as(:c), Sequel.qualify(:b, :c), Sequel.identifier(:c), Sequel.identifier(:c).qualify(:b), :a__c, :a__d___c]).should == [[1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 2]] @ds.db.sqls.should == ['SELECT d AS c, b.c, c, b.c, a.c, a.d AS c FROM t'] end specify "should handle an array with a single element" do @ds.select_map([:c]).should == [[1], [2]] @ds.db.sqls.should == ['SELECT c FROM t'] end end describe "Sequel::Dataset#select_order_map" do before do @ds = Sequel.mock(:fetch=>[{:c=>1}, {:c=>2}])[:t] end specify "should do select and map in one step" do @ds.select_order_map(:a).should == [1, 2] @ds.db.sqls.should == ['SELECT a FROM t ORDER BY a'] end specify "should handle implicit qualifiers in arguments" do @ds.select_order_map(:a__b).should == [1, 2] @ds.db.sqls.should == ['SELECT a.b FROM t ORDER BY a.b'] end specify "should raise if multiple arguments and can't determine alias" do proc{@ds.select_order_map([Sequel.function(:a), :b])}.should raise_error(Sequel::Error) proc{@ds.select_order_map(Sequel.function(:a)){b}}.should raise_error(Sequel::Error) proc{@ds.select_order_map{[a{}, b]}}.should raise_error(Sequel::Error) end specify "should handle implicit aliases in arguments" do @ds.select_order_map(:a___b).should == [1, 2] @ds.db.sqls.should == ['SELECT a AS b FROM t ORDER BY a'] end specify "should handle implicit qualifiers and aliases in arguments" do @ds.select_order_map(:t__a___b).should == [1, 2] @ds.db.sqls.should == ['SELECT t.a AS b FROM t ORDER BY t.a'] end specify "should handle AliasedExpressions" do @ds.select_order_map(Sequel.lit("a").as(:b)).should == [1, 2] @ds.db.sqls.should == ['SELECT a AS b FROM t ORDER BY a'] end specify "should handle OrderedExpressions" do @ds.select_order_map(Sequel.desc(:a)).should == [1, 2] @ds.db.sqls.should == ['SELECT a FROM t ORDER BY a DESC'] end specify "should handle an expression without a determinable alias" do @ds.select_order_map{a(t__c)}.should == [1, 2] @ds.db.sqls.should == ['SELECT a(t.c) AS v FROM t ORDER BY a(t.c)'] end specify "should accept a block" do @ds.select_order_map{a(t__c).as(b)}.should == [1, 2] @ds.db.sqls.should == ['SELECT a(t.c) AS b FROM t ORDER BY a(t.c)'] end specify "should accept a block with an array of columns" do @ds.select_order_map{[c.desc, a(t__c).as(c)]}.should == [[1, 1], [2, 2]] @ds.db.sqls.should == ['SELECT c, a(t.c) AS c FROM t ORDER BY c DESC, a(t.c)'] end specify "should accept a block with a column" do 
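    # When a plain column argument and a block are both given, the argument's
    # columns and the block's columns are all selected, and each contributes
    # its unaliased form to the ORDER BY, as the generated SQL below shows.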
@ds.select_order_map(:c){a(t__c).as(c)}.should == [[1, 1], [2, 2]] @ds.db.sqls.should == ['SELECT c, a(t.c) AS c FROM t ORDER BY c, a(t.c)'] end specify "should accept a block and array of arguments" do @ds.select_order_map([:c, :c]){[a(t__c).as(c), c.desc]}.should == [[1, 1, 1, 1], [2, 2, 2, 2]] @ds.db.sqls.should == ['SELECT c, c, a(t.c) AS c, c FROM t ORDER BY c, c, a(t.c), c DESC'] end specify "should handle an array of columns" do @ds.select_order_map([:c, :c]).should == [[1, 1], [2, 2]] @ds.db.sqls.should == ['SELECT c, c FROM t ORDER BY c, c'] @ds.select_order_map([Sequel.expr(:d).as(:c), Sequel.qualify(:b, :c), Sequel.identifier(:c), Sequel.identifier(:c).qualify(:b), Sequel.identifier(:c).qualify(:b).desc, :a__c, Sequel.desc(:a__d___c), Sequel.desc(Sequel.expr(:a__d___c))]).should == [[1, 1, 1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 2, 2, 2]] @ds.db.sqls.should == ['SELECT d AS c, b.c, c, b.c, b.c, a.c, a.d AS c, a.d AS c FROM t ORDER BY d, b.c, c, b.c, b.c DESC, a.c, a.d DESC, a.d DESC'] end specify "should handle an array with a single element" do @ds.select_order_map([:c]).should == [[1], [2]] @ds.db.sqls.should == ['SELECT c FROM t ORDER BY c'] end end describe "Sequel::Dataset#select_hash" do before do @db = Sequel.mock(:fetch=>[{:a=>1, :b=>2}, {:a=>3, :b=>4}]) @ds = @db[:t] end specify "should do select and to_hash in one step" do @ds.select_hash(:a, :b).should == {1=>2, 3=>4} @ds.db.sqls.should == ['SELECT a, b FROM t'] end specify "should handle implicit qualifiers in arguments" do @ds.select_hash(:t__a, :t__b).should == {1=>2, 3=>4} @ds.db.sqls.should == ['SELECT t.a, t.b FROM t'] end specify "should handle implicit aliases in arguments" do @ds.select_hash(:c___a, :d___b).should == {1=>2, 3=>4} @ds.db.sqls.should == ['SELECT c AS a, d AS b FROM t'] end specify "should handle implicit qualifiers and aliases in arguments" do @ds.select_hash(:t__c___a, :t__d___b).should == {1=>2, 3=>4} @ds.db.sqls.should == ['SELECT t.c AS a, t.d AS b FROM t'] end specify "should handle SQL::Identifiers in arguments" do @ds.select_hash(Sequel.identifier(:a), Sequel.identifier(:b)).should == {1=>2, 3=>4} @ds.db.sqls.should == ['SELECT a, b FROM t'] end specify "should handle SQL::QualifiedIdentifiers in arguments" do @ds.select_hash(Sequel.qualify(:t, :a), Sequel.identifier(:b).qualify(:t)).should == {1=>2, 3=>4} @ds.db.sqls.should == ['SELECT t.a, t.b FROM t'] end specify "should handle SQL::AliasedExpressions in arguments" do @ds.select_hash(Sequel.expr(:c).as(:a), Sequel.expr(:t).as(:b)).should == {1=>2, 3=>4} @ds.db.sqls.should == ['SELECT c AS a, t AS b FROM t'] end specify "should work with arrays of columns" do @db.fetch = [{:a=>1, :b=>2, :c=>3}, {:a=>4, :b=>5, :c=>6}] @ds.select_hash([:a, :c], :b).should == {[1, 3]=>2, [4, 6]=>5} @ds.db.sqls.should == ['SELECT a, c, b FROM t'] @ds.select_hash(:a, [:b, :c]).should == {1=>[2, 3], 4=>[5, 6]} @ds.db.sqls.should == ['SELECT a, b, c FROM t'] @ds.select_hash([:a, :b], [:b, :c]).should == {[1, 2]=>[2, 3], [4, 5]=>[5, 6]} @ds.db.sqls.should == ['SELECT a, b, b, c FROM t'] end specify "should raise an error if the resulting symbol cannot be determined" do proc{@ds.select_hash(Sequel.expr(:c).as(:a), Sequel.function(:b))}.should raise_error(Sequel::Error) end end describe "Sequel::Dataset#select_hash_groups" do before do @db = Sequel.mock(:fetch=>[{:a=>1, :b=>2}, {:a=>3, :b=>4}]) @ds = @db[:t] end specify "should do select and to_hash in one step" do @ds.select_hash_groups(:a, :b).should == {1=>[2], 3=>[4]} @ds.db.sqls.should == ['SELECT a, b FROM t'] 
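# Unlike select_hash, which keeps only the last row's value for a duplicated
# key, select_hash_groups collects every value for a key into an array. A
# small sketch (table and column names are illustrative):
#
#   DB[:t].select_hash(:a, :b)        # => {1=>2, 3=>4}
#   DB[:t].select_hash_groups(:a, :b) # => {1=>[2], 3=>[4]}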
end specify "should handle implicit qualifiers in arguments" do @ds.select_hash_groups(:t__a, :t__b).should == {1=>[2], 3=>[4]} @ds.db.sqls.should == ['SELECT t.a, t.b FROM t'] end specify "should handle implicit aliases in arguments" do @ds.select_hash_groups(:c___a, :d___b).should == {1=>[2], 3=>[4]} @ds.db.sqls.should == ['SELECT c AS a, d AS b FROM t'] end specify "should handle implicit qualifiers and aliases in arguments" do @ds.select_hash_groups(:t__c___a, :t__d___b).should == {1=>[2], 3=>[4]} @ds.db.sqls.should == ['SELECT t.c AS a, t.d AS b FROM t'] end specify "should handle SQL::Identifiers in arguments" do @ds.select_hash_groups(Sequel.identifier(:a), Sequel.identifier(:b)).should == {1=>[2], 3=>[4]} @ds.db.sqls.should == ['SELECT a, b FROM t'] end specify "should handle SQL::QualifiedIdentifiers in arguments" do @ds.select_hash_groups(Sequel.qualify(:t, :a), Sequel.identifier(:b).qualify(:t)).should == {1=>[2], 3=>[4]} @ds.db.sqls.should == ['SELECT t.a, t.b FROM t'] end specify "should handle SQL::AliasedExpressions in arguments" do @ds.select_hash_groups(Sequel.expr(:c).as(:a), Sequel.expr(:t).as(:b)).should == {1=>[2], 3=>[4]} @ds.db.sqls.should == ['SELECT c AS a, t AS b FROM t'] end specify "should work with arrays of columns" do @db.fetch = [{:a=>1, :b=>2, :c=>3}, {:a=>4, :b=>5, :c=>6}] @ds.select_hash_groups([:a, :c], :b).should == {[1, 3]=>[2], [4, 6]=>[5]} @ds.db.sqls.should == ['SELECT a, c, b FROM t'] @ds.select_hash_groups(:a, [:b, :c]).should == {1=>[[2, 3]], 4=>[[5, 6]]} @ds.db.sqls.should == ['SELECT a, b, c FROM t'] @ds.select_hash_groups([:a, :b], [:b, :c]).should == {[1, 2]=>[[2, 3]], [4, 5]=>[[5, 6]]} @ds.db.sqls.should == ['SELECT a, b, b, c FROM t'] end specify "should raise an error if the resulting symbol cannot be determined" do proc{@ds.select_hash_groups(Sequel.expr(:c).as(:a), Sequel.function(:b))}.should raise_error(Sequel::Error) end end describe "Modifying joined datasets" do before do @ds = Sequel.mock.from(:b, :c).join(:d, [:id]).where(:id => 2) meta_def(@ds, :supports_modifying_joins?){true} end specify "should allow deleting from joined datasets" do @ds.delete @ds.db.sqls.should == ['DELETE FROM b, c WHERE (id = 2)'] end specify "should allow updating joined datasets" do @ds.update(:a=>1) @ds.db.sqls.should == ['UPDATE b, c INNER JOIN d USING (id) SET a = 1 WHERE (id = 2)'] end end describe "Dataset#lock_style and for_update" do before do @ds = Sequel.mock.dataset.from(:t) end specify "#for_update should use FOR UPDATE" do @ds.for_update.sql.should == "SELECT * FROM t FOR UPDATE" end specify "#lock_style should accept symbols" do @ds.lock_style(:update).sql.should == "SELECT * FROM t FOR UPDATE" end specify "#lock_style should accept strings for arbitrary SQL" do @ds.lock_style("FOR SHARE").sql.should == "SELECT * FROM t FOR SHARE" end end describe "Custom ASTTransformer" do specify "should transform given objects" do c = Class.new(Sequel::ASTTransformer) do def v(s) (s.is_a?(Symbol) || s.is_a?(String)) ? 
:"#{s}#{s}" : super end end.new ds = Sequel.mock.dataset.from(:t).cross_join(:a___g).join(:b___h, [:c]).join(:d___i, :e=>:f) ds.sql.should == 'SELECT * FROM t CROSS JOIN a AS g INNER JOIN b AS h USING (c) INNER JOIN d AS i ON (i.e = h.f)' ds.clone(:from=>c.transform(ds.opts[:from]), :join=>c.transform(ds.opts[:join])).sql.should == 'SELECT * FROM tt CROSS JOIN aa AS gg INNER JOIN bb AS hh USING (cc) INNER JOIN dd AS ii ON (ii.ee = hh.ff)' end end describe "Dataset#returning" do before do @ds = Sequel.mock(:fetch=>proc{|s| {:foo=>s}})[:t].returning(:foo) @pr = proc do [:insert, :update, :delete].each do |m| meta_def(@ds, :"#{m}_clause_methods"){super() + [:"#{m}_returning_sql"]} end end end specify "should use RETURNING clause in the SQL if the dataset supports it" do @pr.call @ds.delete_sql.should == "DELETE FROM t RETURNING foo" @ds.insert_sql(1).should == "INSERT INTO t VALUES (1) RETURNING foo" @ds.update_sql(:foo=>1).should == "UPDATE t SET foo = 1 RETURNING foo" end specify "should not use RETURNING clause in the SQL if the dataset does not support it" do @ds.delete_sql.should == "DELETE FROM t" @ds.insert_sql(1).should == "INSERT INTO t VALUES (1)" @ds.update_sql(:foo=>1).should == "UPDATE t SET foo = 1" end specify "should have insert, update, and delete yield to blocks if RETURNING is used" do @pr.call h = {} @ds.delete{|r| h = r} h.should == {:foo=>"DELETE FROM t RETURNING foo"} @ds.insert(1){|r| h = r} h.should == {:foo=>"INSERT INTO t VALUES (1) RETURNING foo"} @ds.update(:foo=>1){|r| h = r} h.should == {:foo=>"UPDATE t SET foo = 1 RETURNING foo"} end specify "should have insert, update, and delete return arrays of hashes if RETURNING is used and a block is not given" do @pr.call @ds.delete.should == [{:foo=>"DELETE FROM t RETURNING foo"}] @ds.insert(1).should == [{:foo=>"INSERT INTO t VALUES (1) RETURNING foo"}] @ds.update(:foo=>1).should == [{:foo=>"UPDATE t SET foo = 1 RETURNING foo"}] end end describe "Dataset emulating bitwise operator support" do before do @ds = Sequel::Database.new.dataset @ds.quote_identifiers = true def @ds.complex_expression_sql_append(sql, op, args) sql << complex_expression_arg_pairs(args){|a, b| "bitand(#{literal(a)}, #{literal(b)})"} end end it "should work with any numbers of arguments for operators" do @ds.select(Sequel::SQL::ComplexExpression.new(:&, :x)).sql.should == 'SELECT "x"' @ds.select(Sequel.expr(:x) & 1).sql.should == 'SELECT bitand("x", 1)' @ds.select(Sequel.expr(:x) & 1 & 2).sql.should == 'SELECT bitand(bitand("x", 1), 2)' end end describe "Dataset feature defaults" do it "should not require aliases for recursive CTEs by default" do Sequel::Database.new.dataset.recursive_cte_requires_column_aliases?.should be_false end it "should not require placeholder type specifiers by default" do Sequel::Database.new.dataset.requires_placeholder_type_specifiers?.should be_false end end describe "Dataset extensions" do before(:all) do class << Sequel alias _extension extension remove_method :extension def extension(*) end end end after(:all) do class << Sequel remove_method :extension alias extension _extension remove_method :_extension end end before do @ds = Sequel.mock.dataset end specify "should be able to register an extension with a module Database#extension extend the module" do Sequel::Dataset.register_extension(:foo, Module.new{def a; 1; end}) @ds.extension(:foo).a.should == 1 end specify "should be able to register an extension with a block and Database#extension call the block" do @ds.quote_identifiers = false 
Sequel::Dataset.register_extension(:foo){|ds| ds.quote_identifiers = true} @ds.extension(:foo).quote_identifiers?.should be_true end specify "should be able to register an extension with a callable and have Dataset#extension call the callable" do @ds.quote_identifiers = false Sequel::Dataset.register_extension(:foo, proc{|ds| ds.quote_identifiers = true}) @ds.extension(:foo).quote_identifiers?.should be_true end specify "should be able to load multiple extensions in the same call" do @ds.quote_identifiers = false @ds.identifier_input_method = :downcase Sequel::Dataset.register_extension(:foo, proc{|ds| ds.quote_identifiers = true}) Sequel::Dataset.register_extension(:bar, proc{|ds| ds.identifier_input_method = nil}) ds = @ds.extension(:foo, :bar) ds.quote_identifiers?.should be_true ds.identifier_input_method.should be_nil end specify "should have #extension not modify the receiver" do Sequel::Dataset.register_extension(:foo, Module.new{def a; 1; end}) @ds.extension(:foo) proc{@ds.a}.should raise_error(NoMethodError) end specify "should have #extension return a cloned dataset that keeps existing singleton behavior" do @ds.extend(Module.new{def b; 2; end}) Sequel::Dataset.register_extension(:foo, Module.new{def a; 1; end}) v = @ds.extension(:foo) v.should_not equal(@ds) v.should be_a_kind_of(Sequel::Dataset) v.b.should == 2 end specify "should have #extension! modify the receiver" do Sequel::Dataset.register_extension(:foo, Module.new{def a; 1; end}) @ds.extension!(:foo) @ds.a.should == 1 end specify "should have #extension! return the receiver" do Sequel::Dataset.register_extension(:foo, Module.new{def a; 1; end}) @ds.extension!(:foo).should equal(@ds) end specify "should register a Database extension for modifying all datasets when registering with a module" do Sequel::Dataset.register_extension(:foo, Module.new{def a; 1; end}) Sequel.mock.extension(:foo).dataset.a.should == 1 end specify "should raise an Error if registering with both a module and a block" do proc{Sequel::Dataset.register_extension(:foo, Module.new){}}.should raise_error(Sequel::Error) end specify "should raise an Error if attempting to load an incompatible extension" do proc{@ds.extension(:foo2)}.should raise_error(Sequel::Error) end end describe "Dataset#schema_and_table" do before do @ds = Sequel.mock[:test] end it "should correctly handle symbols" do @ds.schema_and_table(:s).should == [nil, 's'] @ds.schema_and_table(:s___a).should == [nil, 's'] @ds.schema_and_table(:t__s).should == ['t', 's'] @ds.schema_and_table(:t__s___a).should == ['t', 's'] end it "should correctly handle strings" do @ds.schema_and_table('s').should == [nil, 's'] end it "should correctly handle literal strings" do s = Sequel.lit('s') @ds.schema_and_table(s).last.should equal(s) end it "should correctly handle identifiers" do @ds.schema_and_table(Sequel.identifier(:s)).should == [nil, 's'] end it "should correctly handle qualified identifiers" do @ds.schema_and_table(Sequel.qualify(:t, :s)).should == ['t', 's'] end end describe "Dataset#split_qualifiers" do before do @ds = Sequel.mock[:test] end it "should correctly handle symbols" do @ds.split_qualifiers(:s).should == ['s'] @ds.split_qualifiers(:s___a).should == ['s'] @ds.split_qualifiers(:t__s).should == ['t', 's'] @ds.split_qualifiers(:t__s___a).should == ['t', 's'] end it "should correctly handle strings" do @ds.split_qualifiers('s').should == ['s'] end it "should correctly handle identifiers" do @ds.split_qualifiers(Sequel.identifier(:s)).should == ['s'] end it "should correctly handle simple qualified identifiers" do
@ds.split_qualifiers(Sequel.qualify(:t, :s)).should == ['t', 's'] end it "should correctly handle complex qualified identifiers" do @ds.split_qualifiers(Sequel.qualify(:d__t, :s)).should == ['d', 't', 's'] @ds.split_qualifiers(Sequel.qualify(Sequel.qualify(:d, :t), :s)).should == ['d', 't', 's'] @ds.split_qualifiers(Sequel.qualify(:d, :t__s)).should == ['d', 't', 's'] @ds.split_qualifiers(Sequel.qualify(:d, Sequel.qualify(:t, :s))).should == ['d', 't', 's'] @ds.split_qualifiers(Sequel.qualify(:d__t, :s__s2)).should == ['d', 't', 's', 's2'] @ds.split_qualifiers(Sequel.qualify(Sequel.qualify(:d, :t), Sequel.qualify(:s, :s2))).should == ['d', 't', 's', 's2'] end end describe "Dataset#paged_each" do before do @ds = Sequel.mock[:test].order(:x) @db = (0...10).map{|i| {:x=>i}} @ds._fetch = @db @rows = [] @proc = lambda{|row| @rows << row} end it "should yield rows to the passed block" do @ds.paged_each(&@proc) @rows.should == @db end it "should respect the row_proc" do @ds.row_proc = lambda{|row| {:x=>row[:x]*2}} @ds.paged_each(&@proc) @rows.should == @db.map{|row| {:x=>row[:x]*2}} end it "should use a transaction to ensure consistent results" do @ds.paged_each(&@proc) sqls = @ds.db.sqls sqls[0].should == 'BEGIN' sqls[-1].should == 'COMMIT' end it "should use a limit and offset to go through the dataset in chunks" do @ds.paged_each(&@proc) @ds.db.sqls[1...-1].should == ['SELECT * FROM test ORDER BY x LIMIT 1000 OFFSET 0'] end it "should accept a :rows_per_fetch option to change the number of rows per fetch" do @ds._fetch = @db.each_slice(3).to_a @ds.paged_each(:rows_per_fetch=>3, &@proc) @rows.should == @db @ds.db.sqls[1...-1].should == ['SELECT * FROM test ORDER BY x LIMIT 3 OFFSET 0', 'SELECT * FROM test ORDER BY x LIMIT 3 OFFSET 3', 'SELECT * FROM test ORDER BY x LIMIT 3 OFFSET 6', 'SELECT * FROM test ORDER BY x LIMIT 3 OFFSET 9'] end it "should handle cases where the last query returns nothing" do @ds._fetch = @db.each_slice(5).to_a @ds.paged_each(:rows_per_fetch=>5, &@proc) @rows.should == @db @ds.db.sqls[1...-1].should == ['SELECT * FROM test ORDER BY x LIMIT 5 OFFSET 0', 'SELECT * FROM test ORDER BY x LIMIT 5 OFFSET 5', 'SELECT * FROM test ORDER BY x LIMIT 5 OFFSET 10'] end it "should respect an existing server option" do @ds = Sequel.mock(:servers=>{:foo=>{}})[:test].order(:x) @ds._fetch = @db @ds.server(:foo).paged_each(&@proc) @rows.should == @db @ds.db.sqls.should == ["BEGIN -- foo", "SELECT * FROM test ORDER BY x LIMIT 1000 OFFSET 0 -- foo", "COMMIT -- foo"] end it "should require an order" do lambda{@ds.unordered.paged_each(&@proc)}.should raise_error(Sequel::Error) end it "should handle an existing limit and/or offset" do @ds._fetch = @db.each_slice(3).to_a @ds.limit(5).paged_each(:rows_per_fetch=>3, &@proc) @ds.db.sqls[1...-1].should == ["SELECT * FROM test ORDER BY x LIMIT 3 OFFSET 0", "SELECT * FROM test ORDER BY x LIMIT 2 OFFSET 3"] @ds._fetch = @db.each_slice(3).to_a @ds.limit(5, 2).paged_each(:rows_per_fetch=>3, &@proc) @ds.db.sqls[1...-1].should == ["SELECT * FROM test ORDER BY x LIMIT 3 OFFSET 2", "SELECT * FROM test ORDER BY x LIMIT 2 OFFSET 5"] @ds._fetch = @db.each_slice(3).to_a @ds.limit(nil, 2).paged_each(:rows_per_fetch=>3, &@proc) @ds.db.sqls[1...-1].should == ["SELECT * FROM test ORDER BY x LIMIT 3 OFFSET 2", "SELECT * FROM test ORDER BY x LIMIT 3 OFFSET 5", "SELECT * FROM test ORDER BY x LIMIT 3 OFFSET 8", "SELECT * FROM test ORDER BY x LIMIT 3 OFFSET 11"] end end describe "Dataset#escape_like" do before do @ds = Sequel.mock[:test] end it "should
escape % and _ and \\ characters" do @ds.escape_like("foo\\%_bar").should == "foo\\\\\\%\\_bar" end end describe "Dataset#supports_replace?" do it "should be false by default" do Sequel::Dataset.new(nil).supports_replace?.should be_false end end

ruby-sequel-4.1.1/spec/core/deprecated_spec.rb

require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper') describe "Sequel::Deprecated" do before do @d = Sequel::Deprecation @prev_prefix = @d.prefix @prev_output = @d.output @prev_backtrace_filter = @d.backtrace_filter @output = [] def @output.puts(s) self << s end @d.prefix = false @d.output = @output @d.backtrace_filter = false end after do @d.prefix = @prev_prefix @d.output = @prev_output @d.backtrace_filter = @prev_backtrace_filter end specify "should output full messages to the given output" do @d.deprecate("foo") @output.should == ['foo'] end specify "should consider two arguments to be a method name and additional text" do @d.deprecate("foo", "Use bar instead") @output.should == ['foo is deprecated and will be removed in Sequel 4.0.
Use bar instead.'] end specify "should include a prefix if set" do @d.prefix = "DEPWARN: " @d.deprecate("foo") @output.should == ['DEPWARN: foo'] end specify "should not output anything if output is false" do @d.output = false proc{@d.deprecate("foo")}.should_not raise_error end specify "should include full backtrace if backtrace_filter is true" do @d.backtrace_filter = true @d.deprecate("foo") @output.first.should == 'foo' (4..100).should include(@output.count) end specify "should include given lines of backtrace if backtrace_filter is an integer" do @d.backtrace_filter = 1 @d.deprecate("foo") @output.first.should == 'foo' @output.count.should == 2 @output.clear @d.backtrace_filter = 3 @d.deprecate("foo") @output.first.should == 'foo' @output.count.should == 4 end specify "should select backtrace lines if backtrace_filter is a proc" do @d.backtrace_filter = lambda{|line, line_no| line_no < 3 && line =~ /./} @d.deprecate("foo") @output.first.should == 'foo' @output.count.should == 4 end end

ruby-sequel-4.1.1/spec/core/expression_filters_spec.rb

require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper') describe "Blockless Ruby Filters" do before do db = Sequel::Database.new @d = db[:items] def @d.l(*args, &block) literal(filter_expr(*args, &block)) end def @d.lit(*args) literal(*args) end end it "should support boolean columns directly" do @d.l(:x).should == 'x' end it "should support qualified columns" do @d.l(:x__y).should == 'x.y' end it "should support NOT with SQL functions" do @d.l(~Sequel.function(:is_blah)).should == 'NOT is_blah()' @d.l(~Sequel.function(:is_blah, :x)).should == 'NOT is_blah(x)' @d.l(~Sequel.function(:is_blah, :x__y)).should == 'NOT is_blah(x.y)' @d.l(~Sequel.function(:is_blah, :x, :x__y)).should == 'NOT is_blah(x, x.y)' end it "should handle multiple ~" do @d.l(~Sequel.~(:x)).should == 'x' @d.l(~~Sequel.~(:x)).should == 'NOT x' @d.l(~~Sequel.&(:x, :y)).should == '(x AND y)' @d.l(~~Sequel.|(:x, :y)).should == '(x OR y)' end it "should support = via Hash" do @d.l(:x => 100).should == '(x = 100)' @d.l(:x => 'a').should == '(x = \'a\')' @d.l(:x => true).should == '(x IS TRUE)' @d.l(:x => false).should == '(x IS FALSE)' @d.l(:x => nil).should == '(x IS NULL)' @d.l(:x => [1,2,3]).should == '(x IN (1, 2, 3))' end it "should use = 't' and != 't' OR IS NULL if IS TRUE is not supported" do meta_def(@d, :supports_is_true?){false} @d.l(:x => true).should == "(x = 't')" @d.l(~Sequel.expr(:x => true)).should == "((x != 't') OR (x IS NULL))" @d.l(:x => false).should == "(x = 'f')" @d.l(~Sequel.expr(:x => false)).should == "((x != 'f') OR (x IS NULL))" end it "should support != via inverted Hash" do @d.l(~Sequel.expr(:x => 100)).should == '(x != 100)' @d.l(~Sequel.expr(:x => 'a')).should == '(x != \'a\')' @d.l(~Sequel.expr(:x => true)).should == '(x IS NOT TRUE)' @d.l(~Sequel.expr(:x => false)).should == '(x IS NOT FALSE)' @d.l(~Sequel.expr(:x => nil)).should == '(x IS NOT NULL)' end it "should support ~ via Hash and Regexp (if supported by database)" do
def @d.supports_regexp?; true end @d.l(:x => /blah/).should == '(x ~ \'blah\')' end it "should support !~ via inverted Hash and Regexp" do def @d.supports_regexp?; true end @d.l(~Sequel.expr(:x => /blah/)).should == '(x !~ \'blah\')' end it "should support negating ranges" do @d.l(~Sequel.expr(:x => 1..5)).should == '((x < 1) OR (x > 5))' @d.l(~Sequel.expr(:x => 1...5)).should == '((x < 1) OR (x >= 5))' end it "should support negating IN with Dataset or Array" do @d.l(~Sequel.expr(:x => @d.select(:i))).should == '(x NOT IN (SELECT i FROM items))' @d.l(~Sequel.expr(:x => [1,2,3])).should == '(x NOT IN (1, 2, 3))' end it "should not add ~ method to string expressions" do proc{~Sequel.expr(:x).sql_string}.should raise_error(NoMethodError) end it "should allow mathematical or string operations on true, false, or nil" do @d.lit(Sequel.expr(:x) + 1).should == '(x + 1)' @d.lit(Sequel.expr(:x) - true).should == "(x - 't')" @d.lit(Sequel.expr(:x) / false).should == "(x / 'f')" @d.lit(Sequel.expr(:x) * nil).should == '(x * NULL)' @d.lit(Sequel.join([:x, nil])).should == '(x || NULL)' end it "should allow mathematical or string operations on boolean complex expressions" do @d.lit(Sequel.expr(:x) + (Sequel.expr(:y) + 1)).should == '(x + y + 1)' @d.lit(Sequel.expr(:x) - ~Sequel.expr(:y)).should == '(x - NOT y)' @d.lit(Sequel.expr(:x) / (Sequel.expr(:y) & :z)).should == '(x / (y AND z))' @d.lit(Sequel.expr(:x) * (Sequel.expr(:y) | :z)).should == '(x * (y OR z))' @d.lit(Sequel.expr(:x) + Sequel.expr(:y).like('a')).should == "(x + (y LIKE 'a' ESCAPE '\\'))" @d.lit(Sequel.expr(:x) - ~Sequel.expr(:y).like('a')).should == "(x - (y NOT LIKE 'a' ESCAPE '\\'))" @d.lit(Sequel.join([:x, ~Sequel.expr(:y).like('a')])).should == "(x || (y NOT LIKE 'a' ESCAPE '\\'))" end it "should support AND conditions via &" do @d.l(Sequel.expr(:x) & :y).should == '(x AND y)' @d.l(Sequel.expr(:x).sql_boolean & :y).should == '(x AND y)' @d.l(Sequel.expr(:x) & :y & :z).should == '(x AND y AND z)' @d.l(Sequel.expr(:x) & {:y => :z}).should == '(x AND (y = z))' @d.l((Sequel.expr(:x) + 200 < 0) & (Sequel.expr(:y) - 200 < 0)).should == '(((x + 200) < 0) AND ((y - 200) < 0))' @d.l(Sequel.expr(:x) & ~Sequel.expr(:y)).should == '(x AND NOT y)' @d.l(~Sequel.expr(:x) & :y).should == '(NOT x AND y)' @d.l(~Sequel.expr(:x) & ~Sequel.expr(:y)).should == '(NOT x AND NOT y)' end it "should support OR conditions via |" do @d.l(Sequel.expr(:x) | :y).should == '(x OR y)' @d.l(Sequel.expr(:x).sql_boolean | :y).should == '(x OR y)' @d.l(Sequel.expr(:x) | :y | :z).should == '(x OR y OR z)' @d.l(Sequel.expr(:x) | {:y => :z}).should == '(x OR (y = z))' @d.l((Sequel.expr(:x).sql_number > 200) | (Sequel.expr(:y).sql_number < 200)).should == '((x > 200) OR (y < 200))' end it "should support & | combinations" do @d.l((Sequel.expr(:x) | :y) & :z).should == '((x OR y) AND z)' @d.l(Sequel.expr(:x) | (Sequel.expr(:y) & :z)).should == '(x OR (y AND z))' @d.l((Sequel.expr(:x) & :w) | (Sequel.expr(:y) & :z)).should == '((x AND w) OR (y AND z))' end it "should support & | with ~" do @d.l(~((Sequel.expr(:x) | :y) & :z)).should == '((NOT x AND NOT y) OR NOT z)' @d.l(~(Sequel.expr(:x) | (Sequel.expr(:y) & :z))).should == '(NOT x AND (NOT y OR NOT z))' @d.l(~((Sequel.expr(:x) & :w) | (Sequel.expr(:y) & :z))).should == '((NOT x OR NOT w) AND (NOT y OR NOT z))' @d.l(~((Sequel.expr(:x).sql_number > 200) | (Sequel.expr(:y) & :z))).should == '((x <= 200) AND (NOT y OR NOT z))' end it "should support LiteralString" do @d.l(Sequel.lit('x')).should == '(x)' 
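# Sequel.lit marks a string as raw SQL, so the literalizer emits it verbatim
# instead of quoting it the way a plain ruby string would be, e.g.:
#
#   @d.lit(Sequel.lit('x'))  # => "x"
#   @d.lit('x')              # => "'x'"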
@d.l(~Sequel.lit('x')).should == 'NOT x' @d.l(~~Sequel.lit('x')).should == 'x' @d.l(~((Sequel.lit('x') | :y) & :z)).should == '((NOT x AND NOT y) OR NOT z)' @d.l(~(Sequel.expr(:x) | Sequel.lit('y'))).should == '(NOT x AND NOT y)' @d.l(~(Sequel.lit('x') & Sequel.lit('y'))).should == '(NOT x OR NOT y)' @d.l(Sequel.expr(Sequel.lit('y') => Sequel.lit('z')) & Sequel.lit('x')).should == '((y = z) AND x)' @d.l((Sequel.lit('x') > 200) & (Sequel.lit('y') < 200)).should == '((x > 200) AND (y < 200))' @d.l(~(Sequel.lit('x') + 1 > 100)).should == '((x + 1) <= 100)' @d.l(Sequel.lit('x').like('a')).should == '(x LIKE \'a\' ESCAPE \'\\\')' @d.l(Sequel.lit('x') + 1 > 100).should == '((x + 1) > 100)' @d.l((Sequel.lit('x') * :y) < 100.01).should == '((x * y) < 100.01)' @d.l((Sequel.lit('x') - Sequel.expr(:y)/2) >= 100000000000000000000000000000000000).should == '((x - (y / 2)) >= 100000000000000000000000000000000000)' @d.l((Sequel.lit('z') * ((Sequel.lit('x') / :y)/(Sequel.expr(:x) + :y))) <= 100).should == '((z * (x / y / (x + y))) <= 100)' @d.l(~((((Sequel.lit('x') - :y)/(Sequel.expr(:x) + :y))*:z) <= 100)).should == '((((x - y) / (x + y)) * z) > 100)' end it "should support hashes by ANDing the conditions" do @d.l(:x => 100, :y => 'a')[1...-1].split(' AND ').sort.should == ['(x = 100)', '(y = \'a\')'] @d.l(:x => true, :y => false)[1...-1].split(' AND ').sort.should == ['(x IS TRUE)', '(y IS FALSE)'] @d.l(:x => nil, :y => [1,2,3])[1...-1].split(' AND ').sort.should == ['(x IS NULL)', '(y IN (1, 2, 3))'] end it "should treat arrays of two-element pairs the same as hashes" do @d.l([[:x, 100],[:y, 'a']]).should == '((x = 100) AND (y = \'a\'))' @d.l([[:x, true], [:y, false]]).should == '((x IS TRUE) AND (y IS FALSE))' @d.l([[:x, nil], [:y, [1,2,3]]]).should == '((x IS NULL) AND (y IN (1, 2, 3)))' end it "should emulate columns for array values" do @d.l([:x, :y]=>Sequel.value_list([[1,2], [3,4]])).should == '((x, y) IN ((1, 2), (3, 4)))' @d.l([:x, :y, :z]=>[[1,2,5], [3,4,6]]).should == '((x, y, z) IN ((1, 2, 5), (3, 4, 6)))' end it "should emulate multiple-column IN if not supported" do meta_def(@d, :supports_multiple_column_in?){false} @d.l([:x, :y]=>Sequel.value_list([[1,2], [3,4]])).should == '(((x = 1) AND (y = 2)) OR ((x = 3) AND (y = 4)))' @d.l([:x, :y, :z]=>[[1,2,5], [3,4,6]]).should == '(((x = 1) AND (y = 2) AND (z = 5)) OR ((x = 3) AND (y = 4) AND (z = 6)))' end it "should support StringExpression#+ for concatenation of SQL strings" do @d.lit(Sequel.expr(:x).sql_string + :y).should == '(x || y)' @d.lit(Sequel.join([:x]) + :y).should == '(x || y)' @d.lit(Sequel.join([:x, :z], ' ') + :y).should == "(x || ' ' || z || y)" end it "should be supported inside blocks" do @d.l{Sequel.or([[:x, nil], [:y, [1,2,3]]])}.should == '((x IS NULL) OR (y IN (1, 2, 3)))' @d.l{Sequel.~([[:x, nil], [:y, [1,2,3]]])}.should == '((x IS NOT NULL) OR (y NOT IN (1, 2, 3)))' @d.l{~((((Sequel.lit('x') - :y)/(Sequel.expr(:x) + :y))*:z) <= 100)}.should == '((((x - y) / (x + y)) * z) > 100)' @d.l{Sequel.&({:x => :a}, {:y => :z})}.should == '((x = a) AND (y = z))' end it "should support &, |, ^, ~, <<, and >> for NumericExpressions" do @d.l(Sequel.expr(:x).sql_number & 1 > 100).should == '((x & 1) > 100)' @d.l(Sequel.expr(:x).sql_number | 1 > 100).should == '((x | 1) > 100)' @d.l(Sequel.expr(:x).sql_number ^ 1 > 100).should == '((x ^ 1) > 100)' @d.l(~Sequel.expr(:x).sql_number > 100).should == '(~x > 100)' @d.l(Sequel.expr(:x).sql_number << 1 > 100).should == '((x << 1) > 100)' @d.l(Sequel.expr(:x).sql_number >> 1 > 100).should ==
'((x >> 1) > 100)' @d.l((Sequel.expr(:x) + 1) & 1 > 100).should == '(((x + 1) & 1) > 100)' @d.l((Sequel.expr(:x) + 1) | 1 > 100).should == '(((x + 1) | 1) > 100)' @d.l((Sequel.expr(:x) + 1) ^ 1 > 100).should == '(((x + 1) ^ 1) > 100)' @d.l(~(Sequel.expr(:x) + 1) > 100).should == '(~(x + 1) > 100)' @d.l((Sequel.expr(:x) + 1) << 1 > 100).should == '(((x + 1) << 1) > 100)' @d.l((Sequel.expr(:x) + 1) >> 1 > 100).should == '(((x + 1) >> 1) > 100)' @d.l((Sequel.expr(:x) + 1) & (Sequel.expr(:x) + 2) > 100).should == '(((x + 1) & (x + 2)) > 100)' end it "should allow using a Bitwise method on a ComplexExpression that isn't a NumericExpression" do @d.lit((Sequel.expr(:x) + 1) & (Sequel.expr(:x) + '2')).should == "((x + 1) & (x || '2'))" end it "should allow using a Boolean method on a ComplexExpression that isn't a BooleanExpression" do @d.l(Sequel.expr(:x) & (Sequel.expr(:x) + '2')).should == "(x AND (x || '2'))" end it "should raise an error if attempting to invert a ComplexExpression that isn't a BooleanExpression" do proc{Sequel::SQL::BooleanExpression.invert(Sequel.expr(:x) + 2)}.should raise_error(Sequel::Error) end it "should return self on .lit" do y = Sequel.expr(:x) + 1 y.lit.should == y end it "should have .sql_literal return the literal SQL for the expression" do y = Sequel.expr(:x) + 1 y.sql_literal(@d).should == '(x + 1)' y.sql_literal(@d).should == @d.literal(y) end it "should support SQL::Constants" do @d.l({:x => Sequel::NULL}).should == '(x IS NULL)' @d.l({:x => Sequel::NOTNULL}).should == '(x IS NOT NULL)' @d.l({:x => Sequel::TRUE}).should == '(x IS TRUE)' @d.l({:x => Sequel::FALSE}).should == '(x IS FALSE)' @d.l({:x => Sequel::SQLTRUE}).should == '(x IS TRUE)' @d.l({:x => Sequel::SQLFALSE}).should == '(x IS FALSE)' end it "should support negation of SQL::Constants" do @d.l(Sequel.~(:x => Sequel::NULL)).should == '(x IS NOT NULL)' @d.l(Sequel.~(:x => Sequel::NOTNULL)).should == '(x IS NULL)' @d.l(Sequel.~(:x => Sequel::TRUE)).should == '(x IS NOT TRUE)' @d.l(Sequel.~(:x => Sequel::FALSE)).should == '(x IS NOT FALSE)' @d.l(Sequel.~(:x => Sequel::SQLTRUE)).should == '(x IS NOT TRUE)' @d.l(Sequel.~(:x => Sequel::SQLFALSE)).should == '(x IS NOT FALSE)' end it "should support direct negation of SQL::Constants" do @d.l({:x => ~Sequel::NULL}).should == '(x IS NOT NULL)' @d.l({:x => ~Sequel::NOTNULL}).should == '(x IS NULL)' @d.l({:x => ~Sequel::TRUE}).should == '(x IS FALSE)' @d.l({:x => ~Sequel::FALSE}).should == '(x IS TRUE)' @d.l({:x => ~Sequel::SQLTRUE}).should == '(x IS FALSE)' @d.l({:x => ~Sequel::SQLFALSE}).should == '(x IS TRUE)' end it "should raise an error if trying to invert an invalid SQL::Constant" do proc{~Sequel::CURRENT_DATE}.should raise_error(Sequel::Error) end it "should raise an error if trying to create an invalid complex expression" do proc{Sequel::SQL::ComplexExpression.new(:BANG, 1, 2)}.should raise_error(Sequel::Error) end it "should use a string concatenation for + if given a string" do @d.lit(Sequel.expr(:x) + '1').should == "(x || '1')" @d.lit(Sequel.expr(:x) + '1' + '1').should == "(x || '1' || '1')" end it "should use an addition for + if given a literal string" do @d.lit(Sequel.expr(:x) + Sequel.lit('1')).should == "(x + 1)" @d.lit(Sequel.expr(:x) + Sequel.lit('1') + Sequel.lit('1')).should == "(x + 1 + 1)" end it "should use a bitwise operator for & and | if given an integer" do @d.lit(Sequel.expr(:x) & 1).should == "(x & 1)" @d.lit(Sequel.expr(:x) | 1).should == "(x | 1)" @d.lit(Sequel.expr(:x) & 1 & 1).should == "(x & 1 & 1)"
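# & and | are context sensitive: applied to a generic or numeric expression
# with an integer argument they produce the bitwise operators shown here,
# while on a boolean expression they produce the AND/OR operators tested
# earlier in this file.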
@d.lit(Sequel.expr(:x) | 1 | 1).should == "(x | 1 | 1)" end it "should allow adding a string to an integer expression" do @d.lit(Sequel.expr(:x) + 1 + 'a').should == "(x + 1 + 'a')" end it "should allow adding an integer to a string expression" do @d.lit(Sequel.expr(:x) + 'a' + 1).should == "(x || 'a' || 1)" end it "should allow adding a boolean to an integer expression" do @d.lit(Sequel.expr(:x) + 1 + true).should == "(x + 1 + 't')" end it "should allow adding a boolean to a string expression" do @d.lit(Sequel.expr(:x) + 'a' + true).should == "(x || 'a' || 't')" end it "should allow using a boolean operation with an integer on a boolean expression" do @d.lit(Sequel.expr(:x) & :a & 1).should == "(x AND a AND 1)" end it "should allow using a boolean operation with a string on a boolean expression" do @d.lit(Sequel.expr(:x) & :a & 'a').should == "(x AND a AND 'a')" end it "should allow AND of boolean expression and literal string" do @d.lit(Sequel.expr(:x) & :a & Sequel.lit('a')).should == "(x AND a AND a)" end it "should allow + of integer expression and literal string" do @d.lit(Sequel.expr(:x) + :a + Sequel.lit('a')).should == "(x + a + a)" end it "should allow + of string expression and literal string" do @d.lit(Sequel.expr(:x) + 'a' + Sequel.lit('a')).should == "(x || 'a' || a)" end it "should allow sql_{string,boolean,number} methods on numeric expressions" do @d.lit((Sequel.expr(:x) + 1).sql_string + 'a').should == "((x + 1) || 'a')" @d.lit((Sequel.expr(:x) + 1).sql_boolean & 1).should == "((x + 1) AND 1)" @d.lit((Sequel.expr(:x) + 1).sql_number + 'a').should == "(x + 1 + 'a')" end it "should allow sql_{string,boolean,number} methods on string expressions" do @d.lit((Sequel.expr(:x) + 'a').sql_string + 'a').should == "(x || 'a' || 'a')" @d.lit((Sequel.expr(:x) + 'a').sql_boolean & 1).should == "((x || 'a') AND 1)" @d.lit((Sequel.expr(:x) + 'a').sql_number + 'a').should == "((x || 'a') + 'a')" end it "should allow sql_{string,boolean,number} methods on boolean expressions" do @d.lit((Sequel.expr(:x) & :y).sql_string + 'a').should == "((x AND y) || 'a')" @d.lit((Sequel.expr(:x) & :y).sql_boolean & 1).should == "(x AND y AND 1)" @d.lit((Sequel.expr(:x) & :y).sql_number + 'a').should == "((x AND y) + 'a')" end it "should raise an error if trying to literalize an invalid complex expression" do ce = Sequel.+(:x, 1) ce.instance_variable_set(:@op, :BANG) proc{@d.lit(ce)}.should raise_error(Sequel::Error) end it "should support equality comparison of two expressions" do e1 = ~Sequel.like(:comment, '%:hidden:%') e2 = ~Sequel.like(:comment, '%:hidden:%') e1.should == e2 end it "should support expression filter methods on Datasets" do d = @d.select(:a) @d.lit(d + 1).should == '((SELECT a FROM items) + 1)' @d.lit(d - 1).should == '((SELECT a FROM items) - 1)' @d.lit(d * 1).should == '((SELECT a FROM items) * 1)' @d.lit(d / 1).should == '((SELECT a FROM items) / 1)' @d.lit(d => 1).should == '((SELECT a FROM items) = 1)' @d.lit(Sequel.~(d => 1)).should == '((SELECT a FROM items) != 1)' @d.lit(d > 1).should == '((SELECT a FROM items) > 1)' @d.lit(d < 1).should == '((SELECT a FROM items) < 1)' @d.lit(d >= 1).should == '((SELECT a FROM items) >= 1)' @d.lit(d <= 1).should == '((SELECT a FROM items) <= 1)' @d.lit(d.as(:b)).should == '(SELECT a FROM items) AS b' @d.lit(d & :b).should == '((SELECT a FROM items) AND b)' @d.lit(d | :b).should == '((SELECT a FROM items) OR b)' @d.lit(~d).should == 'NOT (SELECT a FROM items)' @d.lit(d.cast(Integer)).should == 'CAST((SELECT a FROM items) AS
integer)' @d.lit(d.cast_numeric).should == 'CAST((SELECT a FROM items) AS integer)' @d.lit(d.cast_string).should == 'CAST((SELECT a FROM items) AS varchar(255))' @d.lit(d.cast_numeric << :b).should == '(CAST((SELECT a FROM items) AS integer) << b)' @d.lit(d.cast_string + :b).should == '(CAST((SELECT a FROM items) AS varchar(255)) || b)' @d.lit(d.extract(:year)).should == 'extract(year FROM (SELECT a FROM items))' @d.lit(d.sql_boolean & :b).should == '((SELECT a FROM items) AND b)' @d.lit(d.sql_number << :b).should == '((SELECT a FROM items) << b)' @d.lit(d.sql_string + :b).should == '((SELECT a FROM items) || b)' @d.lit(d.asc).should == '(SELECT a FROM items) ASC' @d.lit(d.desc).should == '(SELECT a FROM items) DESC' @d.lit(d.like(:b)).should == '((SELECT a FROM items) LIKE b ESCAPE \'\\\')' @d.lit(d.ilike(:b)).should == '(UPPER((SELECT a FROM items)) LIKE UPPER(b) ESCAPE \'\\\')' end it "should handle the emulated char_length function" do @d.lit(Sequel.char_length(:a)).should == 'char_length(a)' end it "should handle the emulated trim function" do @d.lit(Sequel.trim(:a)).should == 'trim(a)' end end describe Sequel::SQL::VirtualRow do before do db = Sequel::Database.new db.quote_identifiers = true @d = db[:items] meta_def(@d, :supports_window_functions?){true} def @d.l(*args, &block) literal(filter_expr(*args, &block)) end end it "should treat methods without arguments as identifiers" do @d.l{column}.should == '"column"' end it "should treat methods without arguments that have embedded double underscores as qualified identifiers" do @d.l{table__column}.should == '"table"."column"' end it "should treat methods with arguments as functions with the arguments" do @d.l{function(arg1, 10, 'arg3')}.should == 'function("arg1", 10, \'arg3\')' end it "should treat methods with a block and no arguments as a function call with no arguments" do @d.l{version{}}.should == 'version()' end it "should treat methods with a block and a leading argument :* as a function call with the SQL wildcard" do @d.l{count(:*){}}.should == 'count(*)' end it "should treat methods with a block and a leading argument :distinct as a function call with DISTINCT and the additional method arguments" do @d.l{count(:distinct, column1){}}.should == 'count(DISTINCT "column1")' @d.l{count(:distinct, column1, column2){}}.should == 'count(DISTINCT "column1", "column2")' end it "should raise an error if an unsupported argument is used with a block" do proc{@d.where{count(:blah){}}}.should raise_error(Sequel::Error) end it "should treat methods with a block and a leading argument :over as a window function call" do @d.l{rank(:over){}}.should == 'rank() OVER ()' end it "should support :partition options for window function calls" do @d.l{rank(:over, :partition=>column1){}}.should == 'rank() OVER (PARTITION BY "column1")' @d.l{rank(:over, :partition=>[column1, column2]){}}.should == 'rank() OVER (PARTITION BY "column1", "column2")' end it "should support :args options for window function calls" do @d.l{avg(:over, :args=>column1){}}.should == 'avg("column1") OVER ()' @d.l{avg(:over, :args=>[column1, column2]){}}.should == 'avg("column1", "column2") OVER ()' end it "should support :order option for window function calls" do @d.l{rank(:over, :order=>column1){}}.should == 'rank() OVER (ORDER BY "column1")' @d.l{rank(:over, :order=>[column1, column2]){}}.should == 'rank() OVER (ORDER BY "column1", "column2")' end it "should support :window option for window function calls" do @d.l{rank(:over, :window=>:win){}}.should == 'rank() OVER ("win")' end it
"should support :*=>true option for window function calls" do @d.l{count(:over, :* =>true){}}.should == 'count(*) OVER ()' end it "should support :frame=>:all option for window function calls" do @d.l{rank(:over, :frame=>:all){}}.should == 'rank() OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)' end it "should support :frame=>:rows option for window function calls" do @d.l{rank(:over, :frame=>:rows){}}.should == 'rank() OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)' end it "should support :frame=>'some string' option for window function calls" do @d.l{rank(:over, :frame=>'RANGE BETWEEN 3 PRECEDING AND CURRENT ROW'){}}.should == 'rank() OVER (RANGE BETWEEN 3 PRECEDING AND CURRENT ROW)' end it "should raise an error if an invalid :frame option is used" do proc{@d.l{rank(:over, :frame=>:blah){}}}.should raise_error(Sequel::Error) end it "should support all these options together" do @d.l{count(:over, :* =>true, :partition=>a, :order=>b, :window=>:win, :frame=>:rows){}}.should == 'count(*) OVER ("win" PARTITION BY "a" ORDER BY "b" ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)' end it "should raise an error if window functions are not supported" do class << @d; remove_method :supports_window_functions? end meta_def(@d, :supports_window_functions?){false} proc{@d.l{count(:over, :* =>true, :partition=>a, :order=>b, :window=>:win, :frame=>:rows){}}}.should raise_error(Sequel::Error) proc{Sequel.mock.dataset.filter{count(:over, :* =>true, :partition=>a, :order=>b, :window=>:win, :frame=>:rows){}}.sql}.should raise_error(Sequel::Error) end it "should deal with classes without requiring :: prefix" do @d.l{date < Date.today}.should == "(\"date\" < '#{Date.today}')" @d.l{date < Sequel::CURRENT_DATE}.should == "(\"date\" < CURRENT_DATE)" @d.l{num < Math::PI.to_i}.should == "(\"num\" < 3)" end it "should deal with methods added to Object after requiring Sequel" do class Object def adsoiwemlsdaf; 42; end end Sequel::BasicObject.remove_methods! @d.l{a > adsoiwemlsdaf}.should == '("a" > "adsoiwemlsdaf")' end it "should deal with private methods added to Kernel after requiring Sequel" do module Kernel private def adsoiwemlsdaf2; 42; end end Sequel::BasicObject.remove_methods! 
@d.l{a > adsoiwemlsdaf2}.should == '("a" > "adsoiwemlsdaf2")' end it "should have operator methods defined that produce Sequel expression objects" do @d.l{|o| o.&({:a=>1}, :b)}.should == '(("a" = 1) AND "b")' @d.l{|o| o.|({:a=>1}, :b)}.should == '(("a" = 1) OR "b")' @d.l{|o| o.+(1, :b) > 2}.should == '((1 + "b") > 2)' @d.l{|o| o.-(1, :b) < 2}.should == '((1 - "b") < 2)' @d.l{|o| o.*(1, :b) >= 2}.should == '((1 * "b") >= 2)' @d.l{|o| o./(1, :b) <= 2}.should == '((1 / "b") <= 2)' @d.l{|o| o.~(:a=>1)}.should == '("a" != 1)' @d.l{|o| o.~([[:a, 1], [:b, 2]])}.should == '(("a" != 1) OR ("b" != 2))' @d.l{|o| o.<(1, :b)}.should == '(1 < "b")' @d.l{|o| o.>(1, :b)}.should == '(1 > "b")' @d.l{|o| o.<=(1, :b)}.should == '(1 <= "b")' @d.l{|o| o.>=(1, :b)}.should == '(1 >= "b")' end it "should have ` produce literal strings" do @d.l{a > `some SQL`}.should == '("a" > some SQL)' @d.l{|o| o.a > o.`('some SQL')}.should == '("a" > some SQL)' #` end end describe "Sequel core extension replacements" do before do @db = Sequel::Database.new @ds = @db.dataset def @ds.supports_regexp?; true end @o = Object.new def @o.sql_literal(ds) 'foo' end end def l(arg, should) @ds.literal(arg).should == should end it "Sequel.expr should return items wrapped in Sequel objects" do Sequel.expr(1).should be_a_kind_of(Sequel::SQL::NumericExpression) Sequel.expr('a').should be_a_kind_of(Sequel::SQL::StringExpression) Sequel.expr(true).should be_a_kind_of(Sequel::SQL::BooleanExpression) Sequel.expr(nil).should be_a_kind_of(Sequel::SQL::Wrapper) Sequel.expr({1=>2}).should be_a_kind_of(Sequel::SQL::BooleanExpression) Sequel.expr([[1, 2]]).should be_a_kind_of(Sequel::SQL::BooleanExpression) Sequel.expr([1]).should be_a_kind_of(Sequel::SQL::Wrapper) Sequel.expr{|o| o.a}.should be_a_kind_of(Sequel::SQL::Identifier) Sequel.expr{a}.should be_a_kind_of(Sequel::SQL::Identifier) Sequel.expr(:a).should be_a_kind_of(Sequel::SQL::Identifier) Sequel.expr(:a__b).should be_a_kind_of(Sequel::SQL::QualifiedIdentifier) Sequel.expr(:a___c).should be_a_kind_of(Sequel::SQL::AliasedExpression) Sequel.expr(:a___c).expression.should be_a_kind_of(Sequel::SQL::Identifier) Sequel.expr(:a__b___c).should be_a_kind_of(Sequel::SQL::AliasedExpression) Sequel.expr(:a__b___c).expression.should be_a_kind_of(Sequel::SQL::QualifiedIdentifier) end it "Sequel.expr should return an appropriate wrapped object" do l(Sequel.expr(1) + 1, "(1 + 1)") l(Sequel.expr('a') + 'b', "('a' || 'b')") l(Sequel.expr(:b) & nil, "(b AND NULL)") l(Sequel.expr(nil) & true, "(NULL AND 't')") l(Sequel.expr(false) & true, "('f' AND 't')") l(Sequel.expr(true) | false, "('t' OR 'f')") l(Sequel.expr(@o) + 1, "(foo + 1)") end it "Sequel.expr should handle condition specifiers" do l(Sequel.expr(:a=>1) & nil, "((a = 1) AND NULL)") l(Sequel.expr([[:a, 1]]) & nil, "((a = 1) AND NULL)") l(Sequel.expr([[:a, 1], [:b, 2]]) & nil, "((a = 1) AND (b = 2) AND NULL)") end it "Sequel.expr should handle arrays that are not condition specifiers" do l(Sequel.expr([1]), "(1)") l(Sequel.expr([1, 2]), "(1, 2)") end it "Sequel.expr should treat blocks/procs as virtual rows and wrap the output" do l(Sequel.expr{1} + 1, "(1 + 1)") l(Sequel.expr{o__a} + 1, "(o.a + 1)") l(Sequel.expr{[[:a, 1]]} & nil, "((a = 1) AND NULL)") l(Sequel.expr{|v| @o} + 1, "(foo + 1)") l(Sequel.expr(proc{1}) + 1, "(1 + 1)") l(Sequel.expr(proc{o__a}) + 1, "(o.a + 1)") l(Sequel.expr(proc{[[:a, 1]]}) & nil, "((a = 1) AND NULL)") l(Sequel.expr(proc{|v| @o}) + 1, "(foo + 1)") end it "Sequel.expr should handle lambda proc virtual rows" do
l(Sequel.expr(&lambda{1}), "1") l(Sequel.expr(&lambda{|| 1}), "1") end it "Sequel.expr should raise an error if given an argument and a block" do proc{Sequel.expr(nil){}}.should raise_error(Sequel::Error) end it "Sequel.expr should raise an error if given neither an argument nor a block" do proc{Sequel.expr}.should raise_error(Sequel::Error) end it "Sequel.expr should return existing Sequel expressions directly" do o = Sequel.expr(1) Sequel.expr(o).should equal(o) o = Sequel.lit('1') Sequel.expr(o).should equal(o) end it "Sequel.~ should invert the given object" do l(Sequel.~(nil), 'NOT NULL') l(Sequel.~(:a=>1), "(a != 1)") l(Sequel.~([[:a, 1]]), "(a != 1)") l(Sequel.~([[:a, 1], [:b, 2]]), "((a != 1) OR (b != 2))") l(Sequel.~(Sequel.expr([[:a, 1], [:b, 2]]) & nil), "((a != 1) OR (b != 2) OR NOT NULL)") end it "Sequel.case should use a CASE expression" do l(Sequel.case({:a=>1}, 2), "(CASE WHEN a THEN 1 ELSE 2 END)") l(Sequel.case({:a=>1}, 2, :b), "(CASE b WHEN a THEN 1 ELSE 2 END)") l(Sequel.case([[:a, 1]], 2), "(CASE WHEN a THEN 1 ELSE 2 END)") l(Sequel.case([[:a, 1]], 2, :b), "(CASE b WHEN a THEN 1 ELSE 2 END)") l(Sequel.case([[:a, 1], [:c, 3]], 2), "(CASE WHEN a THEN 1 WHEN c THEN 3 ELSE 2 END)") l(Sequel.case([[:a, 1], [:c, 3]], 2, :b), "(CASE b WHEN a THEN 1 WHEN c THEN 3 ELSE 2 END)") end it "Sequel.case should raise an error if not given a condition specifier" do proc{Sequel.case(1, 2)}.should raise_error(Sequel::Error) end it "Sequel.value_list should use an SQL value list" do l(Sequel.value_list([[1, 2]]), "((1, 2))") end it "Sequel.value_list should raise an error if not given an array" do proc{Sequel.value_list(1)}.should raise_error(Sequel::Error) end it "Sequel.negate should negate all entries in conditions specifier and join with AND" do l(Sequel.negate(:a=>1), "(a != 1)") l(Sequel.negate([[:a, 1]]), "(a != 1)") l(Sequel.negate([[:a, 1], [:b, 2]]), "((a != 1) AND (b != 2))") end it "Sequel.negate should raise an error if not given a conditions specifier" do proc{Sequel.negate(1)}.should raise_error(Sequel::Error) end it "Sequel.or should join all entries in conditions specifier with OR" do l(Sequel.or(:a=>1), "(a = 1)") l(Sequel.or([[:a, 1]]), "(a = 1)") l(Sequel.or([[:a, 1], [:b, 2]]), "((a = 1) OR (b = 2))") end it "Sequel.or should raise an error if not given a conditions specifier" do proc{Sequel.or(1)}.should raise_error(Sequel::Error) end it "Sequel.join should use SQL string concatenation to join an array" do l(Sequel.join([]), "''") l(Sequel.join(['a']), "('a')") l(Sequel.join(['a', 'b']), "('a' || 'b')") l(Sequel.join(['a', 'b'], 'c'), "('a' || 'c' || 'b')") l(Sequel.join([true, :b], :c), "('t' || c || b)") l(Sequel.join([false, nil], Sequel.lit('c')), "('f' || c || NULL)") l(Sequel.join([Sequel.expr('a'), Sequel.lit('d')], 'c'), "('a' || 'c' || d)") end it "Sequel.join should raise an error if not given an array" do proc{Sequel.join(1)}.should raise_error(Sequel::Error) end it "Sequel.& should join all arguments given with AND" do l(Sequel.&(:a), "(a)") l(Sequel.&(:a, :b=>:c), "(a AND (b = c))") l(Sequel.&(:a, {:b=>:c}, Sequel.lit('d')), "(a AND (b = c) AND d)") end it "Sequel.& should raise an error if given no arguments" do proc{Sequel.&}.should raise_error(Sequel::Error) end it "Sequel.| should join all arguments given with OR" do l(Sequel.|(:a), "(a)") l(Sequel.|(:a, :b=>:c), "(a OR (b = c))") l(Sequel.|(:a, {:b=>:c}, Sequel.lit('d')), "(a OR (b = c) OR d)") end it "Sequel.| should raise an error if given no arguments" do proc{Sequel.|}.should
raise_error(Sequel::Error) end it "Sequel.as should return an aliased expression" do l(Sequel.as(:a, :b), "a AS b") end it "Sequel.cast should return a CAST expression" do l(Sequel.cast(:a, :int), "CAST(a AS int)") l(Sequel.cast(:a, Integer), "CAST(a AS integer)") end it "Sequel.cast_numeric should return a CAST expression treated as a number" do l(Sequel.cast_numeric(:a), "CAST(a AS integer)") l(Sequel.cast_numeric(:a, :int), "CAST(a AS int)") l(Sequel.cast_numeric(:a) << 2, "(CAST(a AS integer) << 2)") end it "Sequel.cast_string should return a CAST expression treated as a string" do l(Sequel.cast_string(:a), "CAST(a AS varchar(255))") l(Sequel.cast_string(:a, :text), "CAST(a AS text)") l(Sequel.cast_string(:a) + 'a', "(CAST(a AS varchar(255)) || 'a')") end it "Sequel.lit should return a literal string" do l(Sequel.lit('a'), "a") end it "Sequel.lit should return the argument if given a single literal string" do o = Sequel.lit('a') Sequel.lit(o).should equal(o) end it "Sequel.lit should accept multiple arguments for a placeholder literal string" do l(Sequel.lit('a = ?', 1), "a = 1") l(Sequel.lit('? = ?', :a, 1), "a = 1") l(Sequel.lit('a = :a', :a=>1), "a = 1") end it "Sequel.lit should work with an array for the placeholder string" do l(Sequel.lit(['a = '], 1), "a = 1") l(Sequel.lit(['', ' = '], :a, 1), "a = 1") end it "Sequel.blob should return an SQL::Blob" do l(Sequel.blob('a'), "'a'") Sequel.blob('a').should be_a_kind_of(Sequel::SQL::Blob) end it "Sequel.blob should return the given argument if given a blob" do o = Sequel.blob('a') Sequel.blob(o).should equal(o) end it "Sequel.deep_qualify should do a deep qualification into nested structures" do l(Sequel.deep_qualify(:t, Sequel.+(:c, 1)), "(t.c + 1)") end it "Sequel.qualify should return a qualified identifier" do l(Sequel.qualify(:t, :c), "t.c") end it "Sequel.identifier should return an identifier" do l(Sequel.identifier(:t__c), "t__c") end it "Sequel.asc should return an ASC ordered expression" do l(Sequel.asc(:a), "a ASC") l(Sequel.asc(:a, :nulls=>:first), "a ASC NULLS FIRST") end it "Sequel.desc should return a DESC ordered expression" do l(Sequel.desc(:a), "a DESC") l(Sequel.desc(:a, :nulls=>:last), "a DESC NULLS LAST") end it "Sequel.{+,-,*,/} should accept arguments and use the appropriate operator" do %w'+ - * /'.each do |op| l(Sequel.send(op, 1), '(1)') l(Sequel.send(op, 1, 2), "(1 #{op} 2)") l(Sequel.send(op, 1, 2, 3), "(1 #{op} 2 #{op} 3)") end end it "Sequel.{+,-,*,/} should raise if given no arguments" do %w'+ - * /'.each do |op| proc{Sequel.send(op)}.should raise_error(Sequel::Error) end end it "Sequel.like should use a LIKE expression" do l(Sequel.like('a', 'b'), "('a' LIKE 'b' ESCAPE '\\')") l(Sequel.like(:a, :b), "(a LIKE b ESCAPE '\\')") l(Sequel.like(:a, /b/), "(a ~ 'b')") l(Sequel.like(:a, 'c', /b/), "((a LIKE 'c' ESCAPE '\\') OR (a ~ 'b'))") end it "Sequel.ilike should use an ILIKE expression" do l(Sequel.ilike('a', 'b'), "(UPPER('a') LIKE UPPER('b') ESCAPE '\\')") l(Sequel.ilike(:a, :b), "(UPPER(a) LIKE UPPER(b) ESCAPE '\\')") l(Sequel.ilike(:a, /b/), "(a ~* 'b')") l(Sequel.ilike(:a, 'c', /b/), "((UPPER(a) LIKE UPPER('c') ESCAPE '\\') OR (a ~* 'b'))") end it "Sequel.subscript should use an SQL subscript" do l(Sequel.subscript(:a, 1), 'a[1]') l(Sequel.subscript(:a, 1, 2), 'a[1, 2]') l(Sequel.subscript(:a, [1, 2]), 'a[1, 2]') l(Sequel.subscript(:a, 1..2), 'a[1:2]') l(Sequel.subscript(:a, 1...3), 'a[1:2]') end it "Sequel.function should return an SQL function" do l(Sequel.function(:a), 'a()') l(Sequel.function(:a,
1), 'a(1)') l(Sequel.function(:a, :b, 2), 'a(b, 2)') end it "Sequel.extract should use a date/time extraction" do l(Sequel.extract(:year, :a), 'extract(year FROM a)') end it "#* with no arguments should use a ColumnAll for Identifier and QualifiedIdentifier" do l(Sequel.expr(:a).*, 'a.*') l(Sequel.expr(:a__b).*, 'a.b.*') end it "SQL::Blob should be aliasable and castable by default" do b = Sequel.blob('a') l(b.as(:a), "'a' AS a") l(b.cast(Integer), "CAST('a' AS integer)") end it "SQL::Blob should be convertible to a literal string by default" do b = Sequel.blob('a ?') l(b.lit, "a ?") l(b.lit(1), "a 1") end end describe "Sequel::SQL::Function#==" do specify "should be true for functions with the same name and arguments, false otherwise" do a = Sequel.function(:date, :t) b = Sequel.function(:date, :t) a.should == b (a == b).should == true c = Sequel.function(:date, :c) a.should_not == c (a == c).should == false d = Sequel.function(:time, :c) a.should_not == d c.should_not == d (a == d).should == false (c == d).should == false end end describe "Sequel::SQL::OrderedExpression" do specify "should #desc" do @oe = Sequel.asc(:column) @oe.descending.should == false @oe.desc.descending.should == true end specify "should #asc" do @oe = Sequel.desc(:column) @oe.descending.should == true @oe.asc.descending.should == false end specify "should #invert" do @oe = Sequel.desc(:column) @oe.invert.descending.should == false @oe.invert.invert.descending.should == true end end describe "Expression" do specify "should consider objects == only if they have the same attributes" do Sequel.qualify(:table, :column).cast(:type).*(:numeric_column).asc.should == Sequel.qualify(:table, :column).cast(:type).*(:numeric_column).asc Sequel.qualify(:table, :other_column).cast(:type).*(:numeric_column).asc.should_not == Sequel.qualify(:table, :column).cast(:type).*(:numeric_column).asc Sequel.qualify(:table, :column).cast(:type).*(:numeric_column).asc.should eql(Sequel.qualify(:table, :column).cast(:type).*(:numeric_column).asc) Sequel.qualify(:table, :other_column).cast(:type).*(:numeric_column).asc.should_not eql(Sequel.qualify(:table, :column).cast(:type).*(:numeric_column).asc) end specify "should use the same hash value for objects that have the same attributes" do Sequel.qualify(:table, :column).cast(:type).*(:numeric_column).asc.hash.should == Sequel.qualify(:table, :column).cast(:type).*(:numeric_column).asc.hash Sequel.qualify(:table, :other_column).cast(:type).*(:numeric_column).asc.hash.should_not == Sequel.qualify(:table, :column).cast(:type).*(:numeric_column).asc.hash h = {} a = Sequel.qualify(:table, :column).cast(:type).*(:numeric_column).asc b = Sequel.qualify(:table, :column).cast(:type).*(:numeric_column).asc h[a] = 1 h[b] = 2 h[a].should == 2 h[b].should == 2 end end describe "Sequel::SQLTime" do before do @db = Sequel.mock end specify ".create should create from hours, minutes, seconds and optional microseconds" do @db.literal(Sequel::SQLTime.create(1, 2, 3)).should == "'01:02:03.000000'" @db.literal(Sequel::SQLTime.create(1, 2, 3, 500000)).should == "'01:02:03.500000'" end end describe "Sequel::SQL::Wrapper" do before do @ds = Sequel.mock.dataset end specify "should wrap objects so they can be used by the Sequel DSL" do o = Object.new def o.sql_literal(ds) 'foo' end s = Sequel::SQL::Wrapper.new(o) @ds.literal(s).should == "foo" @ds.literal(s+1).should == "(foo + 1)" @ds.literal(s & true).should == "(foo AND 't')" @ds.literal(s < 1).should == "(foo < 1)" @ds.literal(s.sql_subscript(1)).should == "foo[1]"
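# Any object that responds to sql_literal(dataset) can be wrapped this way;
# the wrapper then participates in the full expression DSL, delegating the
# final SQL fragment to the object's sql_literal return value.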
@ds.literal(s.like('a')).should == "(foo LIKE 'a' ESCAPE '\\')" @ds.literal(s.as(:a)).should == "foo AS a" @ds.literal(s.cast(Integer)).should == "CAST(foo AS integer)" @ds.literal(s.desc).should == "foo DESC" @ds.literal(s.sql_string + '1').should == "(foo || '1')" end end describe "Sequel::SQL::Blob#to_sequel_blob" do specify "should return self" do c = Sequel::SQL::Blob.new('a') c.to_sequel_blob.should equal(c) end end describe Sequel::SQL::Subscript do before do @s = Sequel::SQL::Subscript.new(:a, [1]) @ds = Sequel.mock.dataset end specify "should have | return a new non-nested subscript" do s = (@s | 2) s.should_not equal(@s) @ds.literal(s).should == 'a[1, 2]' end specify "should have [] return a new nested subscript" do s = @s[2] s.should_not equal(@s) @ds.literal(s).should == 'a[1][2]' end end describe Sequel::SQL::CaseExpression, "#with_merged_expression" do specify "should return self if it has no expression" do c = Sequel.case({1=>0}, 3) c.with_merged_expression.should equal(c) end specify "should merge expression into conditions if it has an expression" do db = Sequel::Database.new c = Sequel.case({1=>0}, 3, 4) db.literal(c.with_merged_expression).should == db.literal(Sequel.case({{4=>1}=>0}, 3)) end end describe "Sequel.recursive_map" do specify "should recursively convert an array using a callable" do Sequel.recursive_map(['1'], proc{|s| s.to_i}).should == [1] Sequel.recursive_map([['1']], proc{|s| s.to_i}).should == [[1]] end specify "should not call callable if value is nil" do Sequel.recursive_map([nil], proc{|s| s.to_i}).should == [nil] Sequel.recursive_map([[nil]], proc{|s| s.to_i}).should == [[nil]] end end describe "Sequel.delay" do before do @o = Class.new do def a @a ||= 0 @a += 1 end def _a @a if defined?(@a) end attr_accessor :b end.new end specify "should delay calling the block until literalization" do ds = Sequel.mock[:b].where(:a=>Sequel.delay{@o.a}) @o._a.should be_nil ds.sql.should == "SELECT * FROM b WHERE (a = 1)" @o._a.should == 1 ds.sql.should == "SELECT * FROM b WHERE (a = 2)" @o._a.should == 2 end specify "should have the condition specifier handling respect delayed evaluations" do ds = Sequel.mock[:b].where(:a=>Sequel.delay{@o.b}) ds.sql.should == "SELECT * FROM b WHERE (a IS NULL)" @o.b = 1 ds.sql.should == "SELECT * FROM b WHERE (a = 1)" @o.b = [1, 2] ds.sql.should == "SELECT * FROM b WHERE (a IN (1, 2))" end specify "should raise if called without a block" do proc{Sequel.delay}.should raise_error(Sequel::Error) end end describe Sequel do before do Sequel::JSON = Class.new do self::ParserError = Sequel def self.parse(json, opts={}) [json, opts] end end end after do Sequel.send(:remove_const, :JSON) end specify ".parse_json should parse json correctly" do Sequel.parse_json('[]').should == ['[]', {:create_additions=>false}] end specify ".json_parser_error_class should return the related parser error class" do Sequel.json_parser_error_class.should == Sequel end specify ".object_to_json should return a json version of the object" do o = Object.new def o.to_json(*args); [1, args]; end Sequel.object_to_json(o, :foo).should == [1, [:foo]] end end describe "Sequel::LiteralString" do before do @s = Sequel::LiteralString.new("? 
= ?") end specify "should have lit return self if no arguments" do @s.lit.should equal(@s) end specify "should have lit return self if return a placeholder literal string if arguments" do @s.lit(1, 2).should be_a_kind_of(Sequel::SQL::PlaceholderLiteralString) Sequel.mock.literal(@s.lit(1, :a)).should == '1 = a' end specify "should have to_sequel_blob convert to blob" do @s.to_sequel_blob.should == @s @s.to_sequel_blob.should be_a_kind_of(Sequel::SQL::Blob) end end describe "Sequel core extensions" do specify "should have Sequel.core_extensions? be false by default" do Sequel.core_extensions?.should be_false end end �����������������ruby-sequel-4.1.1/spec/core/mock_adapter_spec.rb����������������������������������������������������0000664�0000000�0000000�00000043411�12201565355�0021653�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "Sequel Mock Adapter" do specify "should have an adapter method" do db = Sequel.mock db.should be_a_kind_of(Sequel::Mock::Database) db.adapter_scheme.should == :mock end specify "should have constructor accept no arguments" do Sequel::Mock::Database.new.should be_a_kind_of(Sequel::Mock::Database) end specify "should each not return any rows by default" do called = false Sequel.mock[:t].each{|r| called = true} called.should be_false end specify "should return 0 for update/delete/with_sql_delete/execute_dui by default" do Sequel.mock[:t].update(:a=>1).should == 0 Sequel.mock[:t].delete.should == 0 Sequel.mock[:t].with_sql_delete('DELETE FROM t').should == 0 Sequel.mock.execute_dui('DELETE FROM t').should == 0 end specify "should return nil for insert/execute_insert by default" do Sequel.mock[:t].insert(:a=>1).should be_nil Sequel.mock.execute_insert('INSERT INTO a () DEFAULT VALUES').should be_nil end specify "should be able to set the rows returned by each using :fetch option with a single hash" do rs = [] db = Sequel.mock(:fetch=>{:a=>1}) db[:t].each{|r| rs << r} rs.should == [{:a=>1}] db[:t].each{|r| rs << r} rs.should == [{:a=>1}, {:a=>1}] db[:t].each{|r| r[:a] = 2; rs << r} rs.should == [{:a=>1}, {:a=>1}, {:a=>2}] db[:t].each{|r| rs << r} rs.should == [{:a=>1}, {:a=>1}, {:a=>2}, {:a=>1}] end specify "should be able to set the rows returned by each using :fetch option with an array of hashes" do rs = [] db = Sequel.mock(:fetch=>[{:a=>1}, {:a=>2}]) db[:t].each{|r| rs << r} rs.should == [{:a=>1}, {:a=>2}] db[:t].each{|r| rs << r} rs.should == [{:a=>1}, {:a=>2}, {:a=>1}, {:a=>2}] db[:t].each{|r| r[:a] += 2; rs << r} rs.should == [{:a=>1}, {:a=>2}, {:a=>1}, {:a=>2}, {:a=>3}, {:a=>4}] db[:t].each{|r| rs << r} rs.should == [{:a=>1}, {:a=>2}, {:a=>1}, {:a=>2}, {:a=>3}, {:a=>4}, {:a=>1}, {:a=>2}] end specify "should be able to set the rows returned by each using :fetch option with an array or arrays of hashes" do rs = [] db = Sequel.mock(:fetch=>[[{:a=>1}, {:a=>2}], [{:a=>3}, {:a=>4}]]) db[:t].each{|r| rs << r} rs.should == [{:a=>1}, {:a=>2}] db[:t].each{|r| rs << r} rs.should == [{:a=>1}, {:a=>2}, {:a=>3}, {:a=>4}] db[:t].each{|r| rs << r} rs.should == [{:a=>1}, {:a=>2}, {:a=>3}, {:a=>4}] end specify "should be able to set the rows returned by each using :fetch option with a proc 
that takes sql" do rs = [] db = Sequel.mock(:fetch=>proc{|sql| sql =~ /FROM t/ ? {:b=>1} : [{:a=>1}, {:a=>2}]}) db[:t].each{|r| rs << r} rs.should == [{:b=>1}] db[:b].each{|r| rs << r} rs.should == [{:b=>1}, {:a=>1}, {:a=>2}] db[:t].each{|r| r[:b] += 1; rs << r} db[:b].each{|r| r[:a] += 2; rs << r} rs.should == [{:b=>1}, {:a=>1}, {:a=>2}, {:b=>2}, {:a=>3}, {:a=>4}] db[:t].each{|r| rs << r} db[:b].each{|r| rs << r} rs.should == [{:b=>1}, {:a=>1}, {:a=>2}, {:b=>2}, {:a=>3}, {:a=>4}, {:b=>1}, {:a=>1}, {:a=>2}] end specify "should have a fetch= method for setting rows returned by each after the fact" do rs = [] db = Sequel.mock db.fetch = {:a=>1} db[:t].each{|r| rs << r} rs.should == [{:a=>1}] db[:t].each{|r| rs << r} rs.should == [{:a=>1}] * 2 end specify "should be able to set an exception to raise by setting the :fetch option to an exception class " do db = Sequel.mock(:fetch=>ArgumentError) proc{db[:t].all}.should raise_error(Sequel::DatabaseError) begin db[:t].all rescue => e end e.should be_a_kind_of(Sequel::DatabaseError) e.wrapped_exception.should be_a_kind_of(ArgumentError) end specify "should be able to set separate kinds of results for fetch using an array" do rs = [] db = Sequel.mock(:fetch=>[{:a=>1}, [{:a=>2}, {:a=>3}], proc{|s| {:a=>4}}, proc{|s| }, nil, ArgumentError]) db[:t].each{|r| rs << r} rs.should == [{:a=>1}] db[:t].each{|r| rs << r} rs.should == [{:a=>1}, {:a=>2}, {:a=>3}] db[:t].each{|r| rs << r} rs.should == [{:a=>1}, {:a=>2}, {:a=>3}, {:a=>4}] db[:t].each{|r| rs << r} rs.should == [{:a=>1}, {:a=>2}, {:a=>3}, {:a=>4}] db[:t].each{|r| rs << r} rs.should == [{:a=>1}, {:a=>2}, {:a=>3}, {:a=>4}] proc{db[:t].all}.should raise_error(Sequel::DatabaseError) end specify "should be able to set the rows returned by each on a per dataset basis using _fetch" do rs = [] db = Sequel.mock(:fetch=>{:a=>1}) ds = db[:t] ds.each{|r| rs << r} rs.should == [{:a=>1}] ds._fetch = {:b=>2} ds.each{|r| rs << r} rs.should == [{:a=>1}, {:b=>2}] end specify "should raise Error if given an invalid object to fetch" do proc{Sequel.mock(:fetch=>Class.new).get(:a)}.should raise_error(Sequel::Error) proc{Sequel.mock(:fetch=>Object.new).get(:a)}.should raise_error(Sequel::Error) end specify "should be able to set the number of rows modified by update and delete using :numrows option as an integer" do db = Sequel.mock(:numrows=>2) db[:t].update(:a=>1).should == 2 db[:t].delete.should == 2 db[:t].update(:a=>1).should == 2 db[:t].delete.should == 2 end specify "should be able to set the number of rows modified by update and delete using :numrows option as an array of integers" do db = Sequel.mock(:numrows=>[2, 1]) db[:t].update(:a=>1).should == 2 db[:t].delete.should == 1 db[:t].update(:a=>1).should == 0 db[:t].delete.should == 0 end specify "should be able to set the number of rows modified by update and delete using :numrows option as a proc" do db = Sequel.mock(:numrows=>proc{|sql| sql =~ / t/ ? 
2 : 1}) db[:t].update(:a=>1).should == 2 db[:t].delete.should == 2 db[:b].update(:a=>1).should == 1 db[:b].delete.should == 1 end specify "should be able to set an exception to raise by setting the :numrows option to an exception class " do db = Sequel.mock(:numrows=>ArgumentError) proc{db[:t].update(:a=>1)}.should raise_error(Sequel::DatabaseError) begin db[:t].delete rescue => e end e.should be_a_kind_of(Sequel::DatabaseError) e.wrapped_exception.should be_a_kind_of(ArgumentError) end specify "should be able to set separate kinds of results for numrows using an array" do db = Sequel.mock(:numrows=>[1, proc{|s| 2}, nil, ArgumentError]) db[:t].delete.should == 1 db[:t].update(:a=>1).should == 2 db[:t].delete.should == 0 proc{db[:t].delete}.should raise_error(Sequel::DatabaseError) end specify "should have a numrows= method to set the number of rows modified by update and delete after the fact" do db = Sequel.mock db.numrows = 2 db[:t].update(:a=>1).should == 2 db[:t].delete.should == 2 db[:t].update(:a=>1).should == 2 db[:t].delete.should == 2 end specify "should be able to set the number of rows modified by update and delete on a per dataset basis" do db = Sequel.mock(:numrows=>2) ds = db[:t] ds.update(:a=>1).should == 2 ds.delete.should == 2 ds.numrows = 3 ds.update(:a=>1).should == 3 ds.delete.should == 3 end specify "should raise Error if given an invalid object for numrows or autoid" do proc{Sequel.mock(:numrows=>Class.new)[:a].delete}.should raise_error(Sequel::Error) proc{Sequel.mock(:numrows=>Object.new)[:a].delete}.should raise_error(Sequel::Error) proc{Sequel.mock(:autoid=>Class.new)[:a].insert}.should raise_error(Sequel::Error) proc{Sequel.mock(:autoid=>Object.new)[:a].insert}.should raise_error(Sequel::Error) end specify "should be able to set the autogenerated primary key returned by insert using :autoid option as an integer" do db = Sequel.mock(:autoid=>1) db[:t].insert(:a=>1).should == 1 db[:t].insert(:a=>1).should == 2 db[:t].insert(:a=>1).should == 3 end specify "should be able to set the autogenerated primary key returned by insert using :autoid option as an array of integers" do db = Sequel.mock(:autoid=>[1, 3, 5]) db[:t].insert(:a=>1).should == 1 db[:t].insert(:a=>1).should == 3 db[:t].insert(:a=>1).should == 5 db[:t].insert(:a=>1).should be_nil end specify "should be able to set the autogenerated primary key returned by insert using :autoid option as a proc" do db = Sequel.mock(:autoid=>proc{|sql| sql =~ /INTO t / ? 
2 : 1}) db[:t].insert(:a=>1).should == 2 db[:t].insert(:a=>1).should == 2 db[:b].insert(:a=>1).should == 1 db[:b].insert(:a=>1).should == 1 end specify "should be able to set an exception to raise by setting the :autoid option to an exception class " do db = Sequel.mock(:autoid=>ArgumentError) proc{db[:t].insert(:a=>1)}.should raise_error(Sequel::DatabaseError) begin db[:t].insert rescue => e end e.should be_a_kind_of(Sequel::DatabaseError) e.wrapped_exception.should be_a_kind_of(ArgumentError) end specify "should be able to set separate kinds of results for autoid using an array" do db = Sequel.mock(:autoid=>[1, proc{|s| 2}, nil, ArgumentError]) db[:t].insert.should == 1 db[:t].insert.should == 2 db[:t].insert.should == nil proc{db[:t].insert}.should raise_error(Sequel::DatabaseError) end specify "should have an autoid= method to set the autogenerated primary key returned by insert after the fact" do db = Sequel.mock db.autoid = 1 db[:t].insert(:a=>1).should == 1 db[:t].insert(:a=>1).should == 2 db[:t].insert(:a=>1).should == 3 end specify "should be able to set the autogenerated primary key returned by insert on a per dataset basis" do db = Sequel.mock(:autoid=>1) ds = db[:t] ds.insert(:a=>1).should == 1 ds.autoid = 5 ds.insert(:a=>1).should == 5 ds.insert(:a=>1).should == 6 db[:t].insert(:a=>1).should == 2 end specify "should be able to set the columns to set in the dataset as an array of symbols" do db = Sequel.mock(:columns=>[:a, :b]) db[:t].columns.should == [:a, :b] db.sqls.should == ["SELECT * FROM t LIMIT 1"] ds = db[:t] ds.all db.sqls.should == ["SELECT * FROM t"] ds.columns.should == [:a, :b] db.sqls.should == [] db[:t].columns.should == [:a, :b] end specify "should be able to set the columns to set in the dataset as an array of arrays of symbols" do db = Sequel.mock(:columns=>[[:a, :b], [:c, :d]]) db[:t].columns.should == [:a, :b] db[:t].columns.should == [:c, :d] end specify "should be able to set the columns to set in the dataset as a proc" do db = Sequel.mock(:columns=>proc{|sql| (sql =~ / t/) ? 
[:a, :b] : [:c, :d]}) db[:b].columns.should == [:c, :d] db[:t].columns.should == [:a, :b] end specify "should have a columns= method to set the columns to set after the fact" do db = Sequel.mock db.columns = [[:a, :b], [:c, :d]] db[:t].columns.should == [:a, :b] db[:t].columns.should == [:c, :d] end specify "should raise Error if given an invalid columns" do proc{Sequel.mock(:columns=>Object.new)[:a].columns}.should raise_error(Sequel::Error) end specify "should not quote identifiers by default" do Sequel.mock.send(:quote_identifiers_default).should be_false end specify "should allow overriding of server_version" do db = Sequel.mock db.server_version.should == nil db.server_version = 80102 db.server_version.should == 80102 end specify "should not have identifier input/output methods by default" do Sequel.mock.send(:identifier_input_method_default).should be_nil Sequel.mock.send(:identifier_output_method_default).should be_nil end specify "should keep a record of all executed SQL in #sqls" do db = Sequel.mock db[:t].all db[:b].delete db[:c].insert(:a=>1) db[:d].update(:a=>1) db.sqls.should == ['SELECT * FROM t', 'DELETE FROM b', 'INSERT INTO c (a) VALUES (1)', 'UPDATE d SET a = 1'] end specify "should clear sqls on retrieval" do db = Sequel.mock db[:t].all db.sqls.should == ['SELECT * FROM t'] db.sqls.should == [] end specify "should also log SQL executed to the given loggers" do a = [] def a.method_missing(m, *x) push(*x) end db = Sequel.mock(:loggers=>[a]) db[:t].all db[:b].delete db[:c].insert(:a=>1) db[:d].update(:a=>1) a.should == ['SELECT * FROM t', 'DELETE FROM b', 'INSERT INTO c (a) VALUES (1)', 'UPDATE d SET a = 1'] end specify "should correctly handle transactions" do db = Sequel.mock db.transaction{db[:a].all} db.sqls.should == ['BEGIN', 'SELECT * FROM a', 'COMMIT'] db.transaction{db[:a].all; raise Sequel::Rollback} db.sqls.should == ['BEGIN', 'SELECT * FROM a', 'ROLLBACK'] proc{db.transaction{db[:a].all; raise ArgumentError}}.should raise_error(ArgumentError) db.sqls.should == ['BEGIN', 'SELECT * FROM a', 'ROLLBACK'] proc{db.transaction(:rollback=>:reraise){db[:a].all; raise Sequel::Rollback}}.should raise_error(Sequel::Rollback) db.sqls.should == ['BEGIN', 'SELECT * FROM a', 'ROLLBACK'] db.transaction(:rollback=>:always){db[:a].all} db.sqls.should == ['BEGIN', 'SELECT * FROM a', 'ROLLBACK'] db.transaction{db.transaction{db[:a].all; raise Sequel::Rollback}} db.sqls.should == ['BEGIN', 'SELECT * FROM a', 'ROLLBACK'] db.transaction{db.transaction(:savepoint=>true){db[:a].all; raise Sequel::Rollback}} db.sqls.should == ['BEGIN', 'SAVEPOINT autopoint_1', 'SELECT * FROM a', 'ROLLBACK TO SAVEPOINT autopoint_1', 'COMMIT'] db.transaction{db.transaction(:savepoint=>true){db[:a].all}; raise Sequel::Rollback} db.sqls.should == ['BEGIN', 'SAVEPOINT autopoint_1', 'SELECT * FROM a', 'RELEASE SAVEPOINT autopoint_1', 'ROLLBACK'] end specify "should correctly handle transactions when sharding" do db = Sequel.mock(:servers=>{:test=>{}}) db.transaction{db.transaction(:server=>:test){db[:a].all; db[:t].server(:test).all}} db.sqls.should == ['BEGIN', 'BEGIN -- test', 'SELECT * FROM a', 'SELECT * FROM t -- test', 'COMMIT -- test', 'COMMIT'] end specify "should yield a mock connection object from synchronize" do c = Sequel.mock.synchronize{|conn| conn} c.should be_a_kind_of(Sequel::Mock::Connection) end specify "should deal correctly with sharding" do db = Sequel.mock(:servers=>{:test=>{}}) c1 = db.synchronize{|conn| conn} c2 = db.synchronize(:test){|conn| conn} c1.server.should == :default 
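# Connections checked out via synchronize for a given shard report that shard through #server: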
c2.server.should == :test end specify "should disconnect correctly" do db = Sequel.mock db.test_connection proc{db.disconnect}.should_not raise_error end specify "should accept :extend option for extending the object with a module" do Sequel.mock(:extend=>Module.new{def foo(v) v * 2 end}).foo(3).should == 6 end specify "should accept :sqls option for where to store the SQL queries" do a = [] Sequel.mock(:sqls=>a)[:t].all a.should == ['SELECT * FROM t'] end specify "should include :append option in SQL if it is given" do db = Sequel.mock(:append=>'a') db[:t].all db.sqls.should == ['SELECT * FROM t -- a'] end specify "should append :arguments option to execute to the SQL if present" do db = Sequel.mock db.execute('SELECT * FROM t', :arguments=>[1, 2]) db.sqls.should == ['SELECT * FROM t -- args: [1, 2]'] end specify "should have Dataset#columns take columns to set and return self" do db = Sequel.mock ds = db[:t].columns(:id, :a, :b) ds.should be_a_kind_of(Sequel::Mock::Dataset) ds.columns.should == [:id, :a, :b] end specify "should be able to load dialects based on the database name" do begin qi = class Sequel::Database; @quote_identifiers; end ii = class Sequel::Database; @identifier_input_method; end io = class Sequel::Database; @identifier_output_method; end Sequel.quote_identifiers = nil class Sequel::Database; @identifier_input_method=nil; end class Sequel::Database; @identifier_output_method=nil; end Sequel.mock(:host=>'access').select(Date.new(2011, 12, 13)).sql.should == 'SELECT #2011-12-13#' Sequel.mock(:host=>'db2').select(1).sql.should == 'SELECT 1 FROM "SYSIBM"."SYSDUMMY1"' Sequel.mock(:host=>'firebird')[:a].distinct.limit(1, 2).sql.should == 'SELECT DISTINCT FIRST 1 SKIP 2 * FROM "A"' Sequel.mock(:host=>'informix')[:a].distinct.limit(1, 2).sql.should == 'SELECT SKIP 2 FIRST 1 DISTINCT * FROM A' Sequel.mock(:host=>'mssql')[:a].full_text_search(:b, 'c').sql.should == "SELECT * FROM [A] WHERE (CONTAINS ([B], 'c'))" Sequel.mock(:host=>'mysql')[:a].full_text_search(:b, 'c').sql.should == "SELECT * FROM `a` WHERE (MATCH (`b`) AGAINST ('c'))" Sequel.mock(:host=>'oracle')[:a].limit(1).sql.should == 'SELECT * FROM (SELECT * FROM "A") "T1" WHERE (ROWNUM <= 1)' Sequel.mock(:host=>'postgres')[:a].full_text_search(:b, 'c').sql.should == "SELECT * FROM \"a\" WHERE (to_tsvector('simple'::regconfig, (COALESCE(\"b\", ''))) @@ to_tsquery('simple'::regconfig, 'c'))" Sequel.mock(:host=>'sqlite')[:a___b].sql.should == "SELECT * FROM `a` AS 'b'" ensure Sequel.quote_identifiers = qi Sequel::Database.send(:instance_variable_set, :@identifier_input_method, ii) Sequel::Database.send(:instance_variable_set, :@identifier_output_method, io) end end specify "should automatically set version for postgres and mssql" do Sequel.mock(:host=>'postgres').server_version.should == 90103 Sequel.mock(:host=>'mssql').server_version.should == 10000000 end specify "should stub out the primary_key method for postgres" do Sequel.mock(:host=>'postgres').primary_key(:t).should == :id end specify "should stub out the bound_variable_arg method for postgres" do Sequel.mock(:host=>'postgres').bound_variable_arg(:t, nil).should == :t end specify "should handle creating tables on oracle" do proc{Sequel.mock(:host=>'oracle').create_table(:a){String :b}}.should_not raise_error end end 
ruby-sequel-4.1.1/spec/core/object_graph_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper') describe Sequel::Dataset, " graphing" do before do @db = Sequel.mock(:columns=>proc do |sql| case sql when /points/ [:id, :x, :y] when /lines/ [:id, :x, :y, :graph_id] else [:id, :name, :x, :y, :lines_x] end end) @ds1 = @db.from(:points) @ds2 = @db.from(:lines) @ds3 = @db.from(:graphs) [@ds1, @ds2, @ds3].each{|ds| ds.columns} @db.sqls end it "#graph should not modify the current dataset's opts" do o1 = @ds1.opts o2 = o1.dup ds1 = @ds1.graph(@ds2, :x=>:id) @ds1.opts.should == o1 @ds1.opts.should == o2 ds1.opts.should_not == o1 end it "#graph should not modify the current dataset's opts if current dataset is already graphed" do ds2 = @ds1.graph(@ds2) proc{@ds1.graph(@ds2)}.should_not raise_error proc{ds2.graph(@ds3)}.should_not raise_error proc{ds2.graph(@ds3)}.should_not raise_error end it "#graph should accept a simple dataset and pass the table to join" do ds = @ds1.graph(@ds2, :x=>:id) ds.sql.should == 'SELECT points.id, points.x, points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)' end it "#graph should use currently selected columns as the basis for the selected columns in a new graph" do ds = @ds1.select(:id).graph(@ds2, :x=>:id) ds.sql.should == 'SELECT points.id, lines.id AS lines_id, lines.x, lines.y, lines.graph_id FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)' ds = @ds1.select(:id, :x).graph(@ds2, :x=>:id) ds.sql.should == 'SELECT points.id, points.x, lines.id AS lines_id, lines.x AS lines_x, lines.y, lines.graph_id FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)' ds = @ds1.select(Sequel.identifier(:id), Sequel.qualify(:points, :x)).graph(@ds2, :x=>:id) ds.sql.should == 'SELECT points.id, points.x, lines.id AS lines_id, lines.x AS lines_x, lines.y, lines.graph_id FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)' ds = @ds1.select(Sequel.identifier(:id).qualify(:points), Sequel.identifier(:x).as(:y)).graph(@ds2, :x=>:id) ds.sql.should == 'SELECT points.id, points.x AS y, lines.id AS lines_id, lines.x, lines.y AS lines_y, lines.graph_id FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)' ds = @ds1.select(:id, Sequel.identifier(:x).qualify(Sequel.identifier(:points)).as(Sequel.identifier(:y))).graph(@ds2, :x=>:id) ds.sql.should == 'SELECT points.id, points.x AS y, lines.id AS lines_id, lines.x, lines.y AS lines_y, lines.graph_id FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)' end it "#graph should raise error if currently selected expressions cannot be handled" do proc{@ds1.select(1).graph(@ds2, :x=>:id)}.should raise_error(Sequel::Error) end it "#graph should accept a complex dataset and pass it directly to join" do ds
= @ds1.graph(@ds2.filter(:x=>1), {:x=>:id}) ds.sql.should == 'SELECT points.id, points.x, points.y, t1.id AS t1_id, t1.x AS t1_x, t1.y AS t1_y, t1.graph_id FROM points LEFT OUTER JOIN (SELECT * FROM lines WHERE (x = 1)) AS t1 ON (t1.x = points.id)' end it "#graph should work on from_self datasets" do ds = @ds1.from_self.graph(@ds2, :x=>:id) ds.sql.should == 'SELECT t1.id, t1.x, t1.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id FROM (SELECT * FROM points) AS t1 LEFT OUTER JOIN lines ON (lines.x = t1.id)' ds = @ds1.graph(@ds2.from_self, :x=>:id) ds.sql.should == 'SELECT points.id, points.x, points.y, t1.id AS t1_id, t1.x AS t1_x, t1.y AS t1_y, t1.graph_id FROM points LEFT OUTER JOIN (SELECT * FROM (SELECT * FROM lines) AS t1) AS t1 ON (t1.x = points.id)' ds = @ds1.from_self.from_self.graph(@ds2.from_self.from_self, :x=>:id) ds.sql.should == 'SELECT t1.id, t1.x, t1.y, t2.id AS t2_id, t2.x AS t2_x, t2.y AS t2_y, t2.graph_id FROM (SELECT * FROM (SELECT * FROM points) AS t1) AS t1 LEFT OUTER JOIN (SELECT * FROM (SELECT * FROM (SELECT * FROM lines) AS t1) AS t1) AS t2 ON (t2.x = t1.id)' ds = @ds1.from(@ds1, @ds3).graph(@ds2.from_self, :x=>:id) ds.sql.should == 'SELECT t1.id, t1.x, t1.y, t3.id AS t3_id, t3.x AS t3_x, t3.y AS t3_y, t3.graph_id FROM (SELECT * FROM (SELECT * FROM points) AS t1, (SELECT * FROM graphs) AS t2) AS t1 LEFT OUTER JOIN (SELECT * FROM (SELECT * FROM lines) AS t1) AS t3 ON (t3.x = t1.id)' end it "#graph should accept a symbol table name as the dataset" do ds = @ds1.graph(:lines, :x=>:id) ds.sql.should == 'SELECT points.id, points.x, points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)' end it "#graph should raise an error if a symbol, dataset, or model is not used" do proc{@ds1.graph(Object.new, :x=>:id)}.should raise_error(Sequel::Error) end it "#graph should accept a :table_alias option" do ds = @ds1.graph(:lines, {:x=>:id}, :table_alias=>:planes) ds.sql.should == 'SELECT points.id, points.x, points.y, planes.id AS planes_id, planes.x AS planes_x, planes.y AS planes_y, planes.graph_id FROM points LEFT OUTER JOIN lines AS planes ON (planes.x = points.id)' end it "#graph should accept a :implicit_qualifier option" do ds = @ds1.graph(:lines, {:x=>:id}, :implicit_qualifier=>:planes) ds.sql.should == 'SELECT points.id, points.x, points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id FROM points LEFT OUTER JOIN lines ON (lines.x = planes.id)' end it "#graph should accept a :join_type option" do ds = @ds1.graph(:lines, {:x=>:id}, :join_type=>:inner) ds.sql.should == 'SELECT points.id, points.x, points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id FROM points INNER JOIN lines ON (lines.x = points.id)' end it "#graph should not select any columns from the graphed table if :select option is false" do ds = @ds1.graph(:lines, {:x=>:id}, :select=>false).graph(:graphs, :id=>:graph_id) ds.sql.should == 'SELECT points.id, points.x, points.y, graphs.id AS graphs_id, graphs.name, graphs.x AS graphs_x, graphs.y AS graphs_y, graphs.lines_x FROM points LEFT OUTER JOIN lines ON (lines.x = points.id) LEFT OUTER JOIN graphs ON (graphs.id = lines.graph_id)' end it "#graph should use the given columns if :select option is used" do ds = @ds1.graph(:lines, {:x=>:id}, :select=>[:x, :graph_id]).graph(:graphs, :id=>:graph_id) ds.sql.should == 'SELECT points.id, points.x, points.y, lines.x AS lines_x, lines.graph_id, 
graphs.id AS graphs_id, graphs.name, graphs.x AS graphs_x, graphs.y AS graphs_y, graphs.lines_x AS graphs_lines_x FROM points LEFT OUTER JOIN lines ON (lines.x = points.id) LEFT OUTER JOIN graphs ON (graphs.id = lines.graph_id)' end it "#graph should pass all join_conditions to join_table" do ds = @ds1.graph(@ds2, [[:x, :id], [:y, :id]]) ds.sql.should == 'SELECT points.id, points.x, points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id FROM points LEFT OUTER JOIN lines ON ((lines.x = points.id) AND (lines.y = points.id))' end it "#graph should accept a block instead of conditions and pass it to join_table" do ds = @ds1.graph(@ds2){|ja, lja, js| [[Sequel.qualify(ja, :x), Sequel.qualify(lja, :id)], [Sequel.qualify(ja, :y), Sequel.qualify(lja, :id)]]} ds.sql.should == 'SELECT points.id, points.x, points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id FROM points LEFT OUTER JOIN lines ON ((lines.x = points.id) AND (lines.y = points.id))' end it "#graph should not add columns if graph is called after set_graph_aliases" do ds = @ds1.set_graph_aliases([[:x,[:points, :x]], [:y,[:lines, :y]]]) ds.sql.should == 'SELECT points.x, lines.y FROM points' ds = ds.graph(:lines, :x=>:id) ds.sql.should == 'SELECT points.x, lines.y FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)' end it "#graph should allow graphing of multiple datasets" do ds = @ds1.graph(@ds2, :x=>:id).graph(@ds3, :id=>:graph_id) ds.sql.should == 'SELECT points.id, points.x, points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id, graphs.id AS graphs_id, graphs.name, graphs.x AS graphs_x, graphs.y AS graphs_y, graphs.lines_x AS graphs_lines_x FROM points LEFT OUTER JOIN lines ON (lines.x = points.id) LEFT OUTER JOIN graphs ON (graphs.id = lines.graph_id)' end it "#graph should allow graphing of the same dataset multiple times" do ds = @ds1.graph(@ds2, :x=>:id).graph(@ds2, {:y=>:points__id}, :table_alias=>:graph) ds.sql.should == 'SELECT points.id, points.x, points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id, graph.id AS graph_id_0, graph.x AS graph_x, graph.y AS graph_y, graph.graph_id AS graph_graph_id FROM points LEFT OUTER JOIN lines ON (lines.x = points.id) LEFT OUTER JOIN lines AS graph ON (graph.y = points.id)' end it "#graph should raise an error if the table/table alias has already been used" do proc{@ds1.graph(@ds1, :x=>:id)}.should raise_error(Sequel::Error) proc{@ds1.graph(@ds2, :x=>:id)}.should_not raise_error proc{@ds1.graph(@ds2, :x=>:id).graph(@ds2, :x=>:id)}.should raise_error(Sequel::Error) proc{@ds1.graph(@ds2, :x=>:id).graph(@ds2, {:x=>:id}, :table_alias=>:blah)}.should_not raise_error end it "#set_graph_aliases and #add_graph_aliases should not modify the current dataset's opts" do o1 = @ds1.opts o2 = o1.dup ds1 = @ds1.set_graph_aliases(:x=>[:graphs,:id]) @ds1.opts.should == o1 @ds1.opts.should == o2 ds1.opts.should_not == o1 o3 = ds1.opts ds2 = ds1.add_graph_aliases(:y=>[:blah,:id]) ds1.opts.should == o3 ds1.opts.should == o3 ds2.opts.should_not == o2 end it "#set_graph_aliases should specify the graph mapping" do ds = @ds1.graph(:lines, :x=>:id) ds.sql.should == 'SELECT points.id, points.x, points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)' ds = ds.set_graph_aliases(:x=>[:points, :x], :y=>[:lines, :y]) ['SELECT points.x, lines.y FROM points LEFT OUTER JOIN lines ON (lines.x = 
points.id)', 'SELECT lines.y, points.x FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)' ].should(include(ds.sql)) end it "#add_graph_aliases should add columns to the graph mapping" do @ds1.graph(:lines, :x=>:id).set_graph_aliases(:x=>[:points, :q]).add_graph_aliases(:y=>[:lines, :r]).sql.should == 'SELECT points.q AS x, lines.r AS y FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)' end it "#add_graph_aliases should raise an error if called without existing graph aliases" do proc{@ds1.add_graph_aliases(:y=>[:lines, :r])}.should raise_error(Sequel::Error) end it "#set_graph_aliases should allow a third entry to specify an expression to use other than the default" do ds = @ds1.graph(:lines, :x=>:id).set_graph_aliases(:x=>[:points, :x, 1], :y=>[:lines, :y, Sequel.function(:random)]) ['SELECT 1 AS x, random() AS y FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)', 'SELECT random() AS y, 1 AS x FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)' ].should(include(ds.sql)) end it "#set_graph_aliases should allow a single array entry to specify a table, assuming the same column as the key" do ds = @ds1.graph(:lines, :x=>:id).set_graph_aliases(:x=>[:points], :y=>[:lines]) ['SELECT points.x, lines.y FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)', 'SELECT lines.y, points.x FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)' ].should(include(ds.sql)) end it "#set_graph_aliases should allow hash values to be symbols specifying table, assuming the same column as the key" do ds = @ds1.graph(:lines, :x=>:id).set_graph_aliases(:x=>:points, :y=>:lines) ['SELECT points.x, lines.y FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)', 'SELECT lines.y, points.x FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)' ].should(include(ds.sql)) end it "#set_graph_aliases should only alias columns if necessary" do ds = @ds1.set_graph_aliases(:x=>[:points, :x], :y=>[:lines, :y]) ['SELECT points.x, lines.y FROM points', 'SELECT lines.y, points.x FROM points' ].should(include(ds.sql)) ds = @ds1.set_graph_aliases(:x1=>[:points, :x], :y=>[:lines, :y]) ['SELECT points.x AS x1, lines.y FROM points', 'SELECT lines.y, points.x AS x1 FROM points' ].should(include(ds.sql)) end it "#ungraphed should remove the splitting of result sets into component tables" do @db.fetch = {:id=>1,:x=>2,:y=>3,:lines_id=>4,:lines_x=>5,:lines_y=>6,:graph_id=>7} @ds1.graph(@ds2, :x=>:id).ungraphed.all.should == [{:id=>1,:x=>2,:y=>3,:lines_id=>4,:lines_x=>5,:lines_y=>6,:graph_id=>7}] end end
ruby-sequel-4.1.1/spec/core/schema_generator_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper') describe Sequel::Schema::Generator do before do @generator = Sequel::Schema::Generator.new(Sequel.mock) do string :title column :body, :text foreign_key :parent_id primary_key :id check 'price > 100'
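# Every declaration in this generator block is recorded, in order, into the generator's columns, indexes, and constraints collections, which the specs below assert against.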
constraint(:xxx) {{:yyy => :zzz}} index :title index [:title, :body], :unique => true foreign_key :node_id, :nodes foreign_key :deferrable_node_id, :nodes, :deferrable => true primary_key [:title, :parent_id], :name => :cpk foreign_key [:node_id, :prop_id], :nodes_props, :name => :cfk end @columns, @indexes, @constraints = @generator.columns, @generator.indexes, @generator.constraints end it "should respond to everything" do @generator.respond_to?(:foo).should be_true end if RUBY_VERSION >= '1.9' it "should put the primary key column first" do @columns.first[:name].should == :id @columns.first[:primary_key].should == true @columns[3][:name].should == :parent_id @columns[3][:primary_key].should == nil end it "counts definitions correctly" do @columns.size.should == 6 @indexes.size.should == 2 @constraints.size.should == 4 end it "retrieves primary key name" do @generator.primary_key_name.should == :id end it "keeps columns in order" do @columns[1][:name].should == :title @columns[1][:type].should == :string @columns[2][:name].should == :body @columns[2][:type].should == :text end it "creates foreign key column" do @columns[3][:name].should == :parent_id @columns[3][:type].should == Integer @columns[4][:name].should == :node_id @columns[4][:type].should == Integer end it "creates deferrable foreign key column" do @columns[5][:name].should == :deferrable_node_id @columns[5][:type].should == Integer @columns[5][:deferrable].should == true end it "uses table for foreign key columns, if specified" do @columns[3][:table].should == nil @columns[4][:table].should == :nodes @constraints[3][:table].should == :nodes_props end it "finds columns" do [:title, :body, :parent_id, :id].each do |col| @generator.has_column?(col).should be_true end @generator.has_column?(:foo).should_not be_true end it "creates constraints" do @constraints[0][:name].should == nil @constraints[0][:type].should == :check @constraints[0][:check].should == ['price > 100'] @constraints[1][:name].should == :xxx @constraints[1][:type].should == :check @constraints[1][:check].should be_a_kind_of(Proc) @constraints[2][:name].should == :cpk @constraints[2][:type].should == :primary_key @constraints[2][:columns].should == [ :title, :parent_id ] @constraints[3][:name].should == :cfk @constraints[3][:type].should == :foreign_key @constraints[3][:columns].should == [ :node_id, :prop_id ] @constraints[3][:table].should == :nodes_props end it "creates indexes" do @indexes[0][:columns].should include(:title) @indexes[1][:columns].should include(:title) @indexes[1][:columns].should include(:body) end end describe Sequel::Schema::AlterTableGenerator do before do @generator = Sequel::Schema::AlterTableGenerator.new(Sequel.mock) do add_column :aaa, :text drop_column :bbb rename_column :ccc, :ho set_column_type :ddd, :float set_column_default :eee, 1 add_index [:fff, :ggg] drop_index :hhh drop_index :hhh, :name=>:blah_blah add_full_text_index :blah add_spatial_index :geom add_index :blah, :type => :hash add_index :blah, :where => {:something => true} add_constraint :con1, 'fred > 100' drop_constraint :con2 add_unique_constraint [:aaa, :bbb, :ccc], :name => :con3 add_primary_key :id add_foreign_key :node_id, :nodes add_primary_key [:aaa, :bbb] add_foreign_key [:node_id, :prop_id], :nodes_props add_foreign_key [:node_id, :prop_id], :nodes_props, :name => :fkey drop_foreign_key :node_id drop_foreign_key [:node_id, :prop_id] drop_foreign_key [:node_id, :prop_id], :name => :fkey end end specify "should generate operation records" do
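# Each DSL call in the block above maps to exactly one canonical operation hash, emitted in call order: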
@generator.operations.should == [ {:op => :add_column, :name => :aaa, :type => :text}, {:op => :drop_column, :name => :bbb}, {:op => :rename_column, :name => :ccc, :new_name => :ho}, {:op => :set_column_type, :name => :ddd, :type => :float}, {:op => :set_column_default, :name => :eee, :default => 1}, {:op => :add_index, :columns => [:fff, :ggg]}, {:op => :drop_index, :columns => [:hhh]}, {:op => :drop_index, :columns => [:hhh], :name=>:blah_blah}, {:op => :add_index, :columns => [:blah], :type => :full_text}, {:op => :add_index, :columns => [:geom], :type => :spatial}, {:op => :add_index, :columns => [:blah], :type => :hash}, {:op => :add_index, :columns => [:blah], :where => {:something => true}}, {:op => :add_constraint, :type => :check, :name => :con1, :check => ['fred > 100']}, {:op => :drop_constraint, :name => :con2}, {:op => :add_constraint, :type => :unique, :name => :con3, :columns => [:aaa, :bbb, :ccc]}, {:op => :add_column, :name => :id, :type => Integer, :primary_key=>true, :auto_increment=>true}, {:op => :add_column, :name => :node_id, :type => Integer, :table=>:nodes}, {:op => :add_constraint, :type => :primary_key, :columns => [:aaa, :bbb]}, {:op => :add_constraint, :type => :foreign_key, :columns => [:node_id, :prop_id], :table => :nodes_props}, {:op => :add_constraint, :type => :foreign_key, :columns => [:node_id, :prop_id], :table => :nodes_props, :name => :fkey}, {:op => :drop_constraint, :type => :foreign_key, :columns => [:node_id]}, {:op => :drop_column, :name => :node_id}, {:op => :drop_constraint, :type => :foreign_key, :columns => [:node_id, :prop_id]}, {:op => :drop_constraint, :type => :foreign_key, :columns => [:node_id, :prop_id], :name => :fkey}, ] end end describe "Sequel::Schema::Generator generic type methods" do it "should store the type class in :type for each column" do Sequel::Schema::Generator.new(Sequel.mock) do String :a Integer :b Fixnum :c Bignum :d Float :e BigDecimal :f Date :g DateTime :h Time :i Numeric :j File :k TrueClass :l FalseClass :m end.columns.map{|c| c[:type]}.should == [String, Integer, Fixnum, Bignum, Float, BigDecimal, Date, DateTime, Time, Numeric, File, TrueClass, FalseClass] end end
ruby-sequel-4.1.1/spec/core/schema_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper') describe "DB#create_table" do before do @db = Sequel.mock end specify "should accept the table name" do @db.create_table(:cats) {} @db.sqls.should == ['CREATE TABLE cats ()'] end specify "should accept the table name in multiple formats" do @db.create_table(:cats__cats) {} @db.create_table("cats__cats1") {}
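# Double-underscore symbols are implicitly qualified (:cats__cats => cats.cats), while strings and identifiers are used verbatim and explicitly qualified names keep their qualifier: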
@db.create_table(Sequel.identifier(:cats__cats2)) {} @db.create_table(Sequel.qualify(:cats3, :cats)) {} @db.sqls.should == ['CREATE TABLE cats.cats ()', 'CREATE TABLE cats__cats1 ()', 'CREATE TABLE cats__cats2 ()', 'CREATE TABLE cats3.cats ()'] end specify "should raise an error if the table name argument is not valid" do proc{@db.create_table(1) {}}.should raise_error(Sequel::Error) proc{@db.create_table(Sequel.as(:cats, :c)) {}}.should raise_error(Sequel::Error) end specify "should remove cached schema entry" do @db.instance_variable_set(:@schemas, {'cats'=>[]}) @db.create_table(:cats){Integer :a} @db.instance_variable_get(:@schemas).should be_empty end specify "should accept multiple columns" do @db.create_table(:cats) do column :id, :integer column :name, :text end @db.sqls.should == ['CREATE TABLE cats (id integer, name text)'] end specify "should accept method calls as data types" do @db.create_table(:cats) do integer :id text :name end @db.sqls.should == ['CREATE TABLE cats (id integer, name text)'] end specify "should transform types given as ruby classes to database-specific types" do @db.create_table(:cats) do String :a Integer :b Fixnum :c Bignum :d Float :e BigDecimal :f Date :g DateTime :h Time :i Numeric :j File :k TrueClass :l FalseClass :m column :n, Fixnum primary_key :o, :type=>String foreign_key :p, :f, :type=>Date end @db.sqls.should == ['CREATE TABLE cats (o varchar(255) PRIMARY KEY AUTOINCREMENT, a varchar(255), b integer, c integer, d bigint, e double precision, f numeric, g date, h timestamp, i timestamp, j numeric, k blob, l boolean, m boolean, n integer, p date REFERENCES f)'] end specify "should transform types given as ruby classes to database-specific types, respecting the default string column size" do @db.default_string_column_size = 50 @db.create_table(:cats) do String :a String :a2, :size=>13 String :a3, :fixed=>true String :a4, :size=>13, :fixed=>true String :a5, :text=>true varchar :a6 varchar :a7, :size=>13 end @db.sqls.should == ['CREATE TABLE cats (a varchar(50), a2 varchar(13), a3 char(50), a4 char(13), a5 text, a6 varchar(50), a7 varchar(13))'] end specify "should allow the use of modifiers with ruby class types" do @db.create_table(:cats) do String :a, :size=>50 String :b, :text=>true String :c, :fixed=>true, :size=>40 Time :d, :only_time=>true BigDecimal :e, :size=>[11,2] end @db.sqls.should == ['CREATE TABLE cats (a varchar(50), b text, c char(40), d time, e numeric(11, 2))'] end specify "should raise an error if you use a ruby class that isn't handled" do proc{@db.create_table(:cats){column :a, Class}}.should raise_error(Sequel::Error) end specify "should accept primary key definition" do @db.create_table(:cats) do primary_key :id end @db.sqls.should == ['CREATE TABLE cats (id integer PRIMARY KEY AUTOINCREMENT)'] @db.create_table(:cats) do primary_key :id, :serial, :auto_increment => false end @db.sqls.should == ['CREATE TABLE cats (id serial PRIMARY KEY)'] @db.create_table(:cats) do primary_key :id, :type => :serial, :auto_increment => false end @db.sqls.should == ['CREATE TABLE cats (id serial PRIMARY KEY)'] end specify "should allow naming primary key constraint with :primary_key_constraint_name option" do @db.create_table(:cats) do primary_key :id, :primary_key_constraint_name=>:foo end @db.sqls.should == ['CREATE TABLE cats (id integer CONSTRAINT foo PRIMARY KEY AUTOINCREMENT)'] end specify "should handle splitting named column constraints into table constraints if unsupported" do def @db.supports_named_column_constraints?; false end @db.create_table(:cats) do primary_key :id,
:primary_key_constraint_name=>:foo foreign_key :cat_id, :cats, :unique=>true, :unique_constraint_name=>:bar, :foreign_key_constraint_name=>:baz, :deferrable=>true, :key=>:foo_id, :on_delete=>:cascade, :on_update=>:restrict end @db.sqls.should == ['CREATE TABLE cats (id integer AUTOINCREMENT, cat_id integer, CONSTRAINT foo PRIMARY KEY (id), CONSTRAINT baz FOREIGN KEY (cat_id) REFERENCES cats(foo_id) ON DELETE CASCADE ON UPDATE RESTRICT DEFERRABLE INITIALLY DEFERRED, CONSTRAINT bar UNIQUE (cat_id))'] end specify "should accept and literalize default values" do @db.create_table(:cats) do integer :id, :default => 123 text :name, :default => "abc'def" end @db.sqls.should == ["CREATE TABLE cats (id integer DEFAULT 123, name text DEFAULT 'abc''def')"] end specify "should accept not null definition" do @db.create_table(:cats) do integer :id text :name, :null => false text :name2, :allow_null => false end @db.sqls.should == ["CREATE TABLE cats (id integer, name text NOT NULL, name2 text NOT NULL)"] end specify "should accept null definition" do @db.create_table(:cats) do integer :id text :name, :null => true text :name2, :allow_null => true end @db.sqls.should == ["CREATE TABLE cats (id integer, name text NULL, name2 text NULL)"] end specify "should accept unique definition" do @db.create_table(:cats) do integer :id text :name, :unique => true end @db.sqls.should == ["CREATE TABLE cats (id integer, name text UNIQUE)"] end specify "should allow naming unique constraint with :unique_constraint_name option" do @db.create_table(:cats) do text :name, :unique => true, :unique_constraint_name=>:foo end @db.sqls.should == ["CREATE TABLE cats (name text CONSTRAINT foo UNIQUE)"] end specify "should handle not deferred unique constraints" do @db.create_table(:cats) do integer :id text :name unique :name, :deferrable=>false end @db.sqls.should == ["CREATE TABLE cats (id integer, name text, UNIQUE (name) NOT DEFERRABLE)"] end specify "should handle deferred unique constraints" do @db.create_table(:cats) do integer :id text :name unique :name, :deferrable=>true end @db.sqls.should == ["CREATE TABLE cats (id integer, name text, UNIQUE (name) DEFERRABLE INITIALLY DEFERRED)"] end specify "should handle deferred initially immediate unique constraints" do @db.create_table(:cats) do integer :id text :name unique :name, :deferrable=>:immediate end @db.sqls.should == ["CREATE TABLE cats (id integer, name text, UNIQUE (name) DEFERRABLE INITIALLY IMMEDIATE)"] end specify "should accept unsigned definition" do @db.create_table(:cats) do integer :value, :unsigned => true end @db.sqls.should == ["CREATE TABLE cats (value integer UNSIGNED)"] end specify "should accept [SET|ENUM](...) 
types" do @db.create_table(:cats) do set :color, :elements => ['black', 'tricolor', 'grey'] end @db.sqls.should == ["CREATE TABLE cats (color set('black', 'tricolor', 'grey'))"] end specify "should accept varchar size" do @db.create_table(:cats) do varchar :name end @db.sqls.should == ["CREATE TABLE cats (name varchar(255))"] @db.create_table(:cats) do varchar :name, :size => 51 end @db.sqls.should == ["CREATE TABLE cats (name varchar(51))"] end specify "should use double precision for double type" do @db.create_table(:cats) do double :name end @db.sqls.should == ["CREATE TABLE cats (name double precision)"] end specify "should accept foreign keys without options" do @db.create_table(:cats) do foreign_key :project_id end @db.sqls.should == ["CREATE TABLE cats (project_id integer)"] end specify "should accept foreign keys with options" do @db.create_table(:cats) do foreign_key :project_id, :table => :projects end @db.sqls.should == ["CREATE TABLE cats (project_id integer REFERENCES projects)"] end specify "should accept foreign keys with separate table argument" do @db.create_table(:cats) do foreign_key :project_id, :projects, :default=>3 end @db.sqls.should == ["CREATE TABLE cats (project_id integer DEFAULT 3 REFERENCES projects)"] end specify "should allowing naming foreign key constraint with :foreign_key_constraint_name option" do @db.create_table(:cats) do foreign_key :project_id, :projects, :foreign_key_constraint_name=>:foo end @db.sqls.should == ["CREATE TABLE cats (project_id integer CONSTRAINT foo REFERENCES projects)"] end specify "should raise an error if the table argument to foreign_key isn't a hash, symbol, or nil" do proc{@db.create_table(:cats){foreign_key :project_id, Object.new, :default=>3}}.should raise_error(Sequel::Error) end specify "should accept foreign keys with arbitrary keys" do @db.create_table(:cats) do foreign_key :project_id, :table => :projects, :key => :id end @db.sqls.should == ["CREATE TABLE cats (project_id integer REFERENCES projects(id))"] @db.create_table(:cats) do foreign_key :project_id, :table => :projects, :key => :zzz end @db.sqls.should == ["CREATE TABLE cats (project_id integer REFERENCES projects(zzz))"] end specify "should accept foreign keys with ON DELETE clause" do @db.create_table(:cats) do foreign_key :project_id, :table => :projects, :on_delete => :restrict end @db.sqls.should == ["CREATE TABLE cats (project_id integer REFERENCES projects ON DELETE RESTRICT)"] @db.create_table(:cats) do foreign_key :project_id, :table => :projects, :on_delete => :cascade end @db.sqls.should == ["CREATE TABLE cats (project_id integer REFERENCES projects ON DELETE CASCADE)"] @db.create_table(:cats) do foreign_key :project_id, :table => :projects, :on_delete => :no_action end @db.sqls.should == ["CREATE TABLE cats (project_id integer REFERENCES projects ON DELETE NO ACTION)"] @db.create_table(:cats) do foreign_key :project_id, :table => :projects, :on_delete => :set_null end @db.sqls.should == ["CREATE TABLE cats (project_id integer REFERENCES projects ON DELETE SET NULL)"] @db.create_table(:cats) do foreign_key :project_id, :table => :projects, :on_delete => :set_default end @db.sqls.should == ["CREATE TABLE cats (project_id integer REFERENCES projects ON DELETE SET DEFAULT)"] @db.create_table(:cats) do foreign_key :project_id, :table => :projects, :on_delete => 'NO ACTION FOO' end @db.sqls.should == ["CREATE TABLE cats (project_id integer REFERENCES projects ON DELETE NO ACTION FOO)"] end specify "should accept foreign keys with ON UPDATE clause" do 
@db.create_table(:cats) do foreign_key :project_id, :table => :projects, :on_update => :restrict end @db.sqls.should == ["CREATE TABLE cats (project_id integer REFERENCES projects ON UPDATE RESTRICT)"] @db.create_table(:cats) do foreign_key :project_id, :table => :projects, :on_update => :cascade end @db.sqls.should == ["CREATE TABLE cats (project_id integer REFERENCES projects ON UPDATE CASCADE)"] @db.create_table(:cats) do foreign_key :project_id, :table => :projects, :on_update => :no_action end @db.sqls.should == ["CREATE TABLE cats (project_id integer REFERENCES projects ON UPDATE NO ACTION)"] @db.create_table(:cats) do foreign_key :project_id, :table => :projects, :on_update => :set_null end @db.sqls.should == ["CREATE TABLE cats (project_id integer REFERENCES projects ON UPDATE SET NULL)"] @db.create_table(:cats) do foreign_key :project_id, :table => :projects, :on_update => :set_default end @db.sqls.should == ["CREATE TABLE cats (project_id integer REFERENCES projects ON UPDATE SET DEFAULT)"] @db.create_table(:cats) do foreign_key :project_id, :table => :projects, :on_update => 'SET DEFAULT FOO' end @db.sqls.should == ["CREATE TABLE cats (project_id integer REFERENCES projects ON UPDATE SET DEFAULT FOO)"] end specify "should accept foreign keys with deferrable option" do @db.create_table(:cats) do foreign_key :project_id, :projects, :deferrable=>true end @db.sqls.should == ["CREATE TABLE cats (project_id integer REFERENCES projects DEFERRABLE INITIALLY DEFERRED)"] end specify "should accept collation" do @db.create_table(:cats) do varchar :name, :collate => :utf8_bin end @db.sqls.should == ["CREATE TABLE cats (name varchar(255) COLLATE utf8_bin)"] end specify "should accept inline index definition" do @db.create_table(:cats) do integer :id, :index => true end @db.sqls.should == ["CREATE TABLE cats (id integer)", "CREATE INDEX cats_id_index ON cats (id)"] end specify "should accept inline index definition with a hash of options" do @db.create_table(:cats) do integer :id, :index => {:unique=>true} end @db.sqls.should == ["CREATE TABLE cats (id integer)", "CREATE UNIQUE INDEX cats_id_index ON cats (id)"] end specify "should accept inline index definition for foreign keys" do @db.create_table(:cats) do foreign_key :project_id, :table => :projects, :on_delete => :cascade, :index => true end @db.sqls.should == ["CREATE TABLE cats (project_id integer REFERENCES projects ON DELETE CASCADE)", "CREATE INDEX cats_project_id_index ON cats (project_id)"] end specify "should accept inline index definition for foreign keys with a hash of options" do @db.create_table(:cats) do foreign_key :project_id, :table => :projects, :on_delete => :cascade, :index => {:unique=>true} end @db.sqls.should == ["CREATE TABLE cats (project_id integer REFERENCES projects ON DELETE CASCADE)", "CREATE UNIQUE INDEX cats_project_id_index ON cats (project_id)"] end specify "should accept index definitions" do @db.create_table(:cats) do integer :id index :id end @db.sqls.should == ["CREATE TABLE cats (id integer)", "CREATE INDEX cats_id_index ON cats (id)"] end specify "should accept unique constraint definitions" do @db.create_table(:cats) do text :name unique :name end @db.sqls.should == ["CREATE TABLE cats (name text, UNIQUE (name))"] end specify "should not raise on index error for unsupported index definitions if ignore_index_errors is used" do proc { @db.create_table(:cats, :ignore_index_errors=>true) do text :name full_text_index :name end }.should_not raise_error end specify "should raise on full-text index 
definitions" do proc { @db.create_table(:cats) do text :name full_text_index :name end }.should raise_error(Sequel::Error) end specify "should raise on spatial index definitions" do proc { @db.create_table(:cats) do point :geom spatial_index :geom end }.should raise_error(Sequel::Error) end specify "should raise on partial index definitions" do proc { @db.create_table(:cats) do text :name index :name, :where => {:something => true} end }.should raise_error(Sequel::Error) end specify "should raise index definitions with type" do proc { @db.create_table(:cats) do text :name index :name, :type => :hash end }.should raise_error(Sequel::Error) end specify "should ignore errors if the database raises an error on an index creation statement and the :ignore_index_errors option is used" do meta_def(@db, :execute_ddl){|*a| raise Sequel::DatabaseError if /blah/.match(a.first); super(*a)} lambda{@db.create_table(:cats){Integer :id; index :blah; index :id}}.should raise_error(Sequel::DatabaseError) @db.sqls.should == ['CREATE TABLE cats (id integer)'] lambda{@db.create_table(:cats, :ignore_index_errors=>true){Integer :id; index :blah; index :id}}.should_not raise_error @db.sqls.should == ['CREATE TABLE cats (id integer)', 'CREATE INDEX cats_id_index ON cats (id)'] end specify "should accept multiple index definitions" do @db.create_table(:cats) do integer :id index :id index :name end @db.sqls.should == ["CREATE TABLE cats (id integer)", "CREATE INDEX cats_id_index ON cats (id)", "CREATE INDEX cats_name_index ON cats (name)"] end specify "should accept functional indexes" do @db.create_table(:cats) do integer :id index Sequel.function(:lower, :name) end @db.sqls.should == ["CREATE TABLE cats (id integer)", "CREATE INDEX cats_lower_name__index ON cats (lower(name))"] end specify "should accept indexes with identifiers" do @db.create_table(:cats) do integer :id index Sequel.identifier(:lower__name) end @db.sqls.should == ["CREATE TABLE cats (id integer)", "CREATE INDEX cats_lower__name_index ON cats (lower__name)"] end specify "should accept custom index names" do @db.create_table(:cats) do integer :id index :id, :name => 'abc' end @db.sqls.should == ["CREATE TABLE cats (id integer)", "CREATE INDEX abc ON cats (id)"] end specify "should accept unique index definitions" do @db.create_table(:cats) do integer :id index :id, :unique => true end @db.sqls.should == ["CREATE TABLE cats (id integer)", "CREATE UNIQUE INDEX cats_id_index ON cats (id)"] end specify "should accept composite index definitions" do @db.create_table(:cats) do integer :id index [:id, :name], :unique => true end @db.sqls.should == ["CREATE TABLE cats (id integer)", "CREATE UNIQUE INDEX cats_id_name_index ON cats (id, name)"] end specify "should accept unnamed constraint definitions with blocks" do @db.create_table(:cats) do integer :score check{(x > 0) & (y < 1)} end @db.sqls.should == ["CREATE TABLE cats (score integer, CHECK ((x > 0) AND (y < 1)))"] end specify "should accept unnamed constraint definitions with function calls" do @db.create_table(:cats) do integer :score check{f(x)} end @db.sqls.should == ["CREATE TABLE cats (score integer, CHECK (f(x)))"] end specify "should accept unnamed constraint definitions" do @db.create_table(:cats) do check 'price < ?', 100 end @db.sqls.should == ["CREATE TABLE cats (CHECK (price < 100))"] end specify "should accept arrays of pairs constraints" do @db.create_table(:cats) do check [[:price, 100]] end @db.sqls.should == ["CREATE TABLE cats (CHECK (price = 100))"] end specify "should accept hash 
constraints" do @db.create_table(:cats) do check :price=>100 end @db.sqls.should == ["CREATE TABLE cats (CHECK (price = 100))"] end specify "should accept array constraints" do @db.create_table(:cats) do check [Sequel.expr(:x) > 0, Sequel.expr(:y) < 1] end @db.sqls.should == ["CREATE TABLE cats (CHECK ((x > 0) AND (y < 1)))"] end specify "should accept named constraint definitions" do @db.create_table(:cats) do integer :score constraint :valid_score, 'score <= 100' end @db.sqls.should == ["CREATE TABLE cats (score integer, CONSTRAINT valid_score CHECK (score <= 100))"] end specify "should accept named constraint definitions with options" do @db.create_table(:cats) do integer :score constraint({:name=>:valid_score, :deferrable=>true}, 'score <= 100') end @db.sqls.should == ["CREATE TABLE cats (score integer, CONSTRAINT valid_score CHECK (score <= 100) DEFERRABLE INITIALLY DEFERRED)"] end specify "should accept named constraint definitions with block" do @db.create_table(:cats) do constraint(:blah_blah){(x.sql_number > 0) & (y.sql_number < 1)} end @db.sqls.should == ["CREATE TABLE cats (CONSTRAINT blah_blah CHECK ((x > 0) AND (y < 1)))"] end specify "should raise an error if an invalid constraint type is used" do proc{@db.create_table(:cats){unique [:a, :b], :type=>:bb}}.should raise_error(Sequel::Error) end specify "should accept composite primary keys" do @db.create_table(:cats) do integer :a integer :b primary_key [:a, :b] end @db.sqls.should == ["CREATE TABLE cats (a integer, b integer, PRIMARY KEY (a, b))"] end specify "should accept named composite primary keys" do @db.create_table(:cats) do integer :a integer :b primary_key [:a, :b], :name => :cpk end @db.sqls.should == ["CREATE TABLE cats (a integer, b integer, CONSTRAINT cpk PRIMARY KEY (a, b))"] end specify "should accept composite foreign keys" do @db.create_table(:cats) do integer :a integer :b foreign_key [:a, :b], :abc end @db.sqls.should == ["CREATE TABLE cats (a integer, b integer, FOREIGN KEY (a, b) REFERENCES abc)"] end specify "should accept named composite foreign keys" do @db.create_table(:cats) do integer :a integer :b foreign_key [:a, :b], :abc, :name => :cfk end @db.sqls.should == ["CREATE TABLE cats (a integer, b integer, CONSTRAINT cfk FOREIGN KEY (a, b) REFERENCES abc)"] end specify "should accept composite foreign keys with arbitrary keys" do @db.create_table(:cats) do integer :a integer :b foreign_key [:a, :b], :abc, :key => [:real_a, :real_b] end @db.sqls.should == ["CREATE TABLE cats (a integer, b integer, FOREIGN KEY (a, b) REFERENCES abc(real_a, real_b))"] @db.create_table(:cats) do integer :a integer :b foreign_key [:a, :b], :abc, :key => [:z, :x] end @db.sqls.should == ["CREATE TABLE cats (a integer, b integer, FOREIGN KEY (a, b) REFERENCES abc(z, x))"] end specify "should accept composite foreign keys with on delete and on update clauses" do @db.create_table(:cats) do integer :a integer :b foreign_key [:a, :b], :abc, :on_delete => :cascade end @db.sqls.should == ["CREATE TABLE cats (a integer, b integer, FOREIGN KEY (a, b) REFERENCES abc ON DELETE CASCADE)"] @db.create_table(:cats) do integer :a integer :b foreign_key [:a, :b], :abc, :on_update => :no_action end @db.sqls.should == ["CREATE TABLE cats (a integer, b integer, FOREIGN KEY (a, b) REFERENCES abc ON UPDATE NO ACTION)"] @db.create_table(:cats) do integer :a integer :b foreign_key [:a, :b], :abc, :on_delete => :restrict, :on_update => :set_default end @db.sqls.should == ["CREATE TABLE cats (a integer, b integer, FOREIGN KEY (a, b) REFERENCES abc ON 
DELETE RESTRICT ON UPDATE SET DEFAULT)"] @db.create_table(:cats) do integer :a integer :b foreign_key [:a, :b], :abc, :key => [:x, :y], :on_delete => :set_null, :on_update => :set_null end @db.sqls.should == ["CREATE TABLE cats (a integer, b integer, FOREIGN KEY (a, b) REFERENCES abc(x, y) ON DELETE SET NULL ON UPDATE SET NULL)"] end specify "should accept an :as option to create a table from the results of a dataset" do @db.create_table(:cats, :as=>@db[:a]) @db.sqls.should == ['CREATE TABLE cats AS SELECT * FROM a'] end specify "should accept an :as option to create a table from a SELECT string" do @db.create_table(:cats, :as=>'SELECT * FROM a') @db.sqls.should == ['CREATE TABLE cats AS SELECT * FROM a'] end specify "should raise an Error if both a block and an :as argument are given" do proc{@db.create_table(:cats, :as=>@db[:a]){}}.should raise_error(Sequel::Error) end end describe "DB#create_table!" do before do @db = Sequel.mock end specify "should create the table if it does not exist" do meta_def(@db, :table_exists?){|a| false} @db.create_table!(:cats){|*a|} @db.sqls.should == ['CREATE TABLE cats ()'] end specify "should drop the table before creating it if it already exists" do meta_def(@db, :table_exists?){|a| true} @db.create_table!(:cats){|*a|} @db.sqls.should == ['DROP TABLE cats', 'CREATE TABLE cats ()'] end end describe "DB#create_table?" do before do @db = Sequel.mock end specify "should not create the table if the table already exists" do meta_def(@db, :table_exists?){|a| true} @db.create_table?(:cats){|*a|} @db.sqls.should == [] end specify "should create the table if the table doesn't already exist" do meta_def(@db, :table_exists?){|a| false} @db.create_table?(:cats){|*a|} @db.sqls.should == ['CREATE TABLE cats ()'] end specify "should use IF NOT EXISTS if the database supports that" do meta_def(@db, :supports_create_table_if_not_exists?){true} @db.create_table?(:cats){|*a|} @db.sqls.should == ['CREATE TABLE IF NOT EXISTS cats ()'] end end describe "DB#create_join_table" do before do @db = Sequel.mock end specify "should take a hash with foreign keys and table name values" do @db.create_join_table(:cat_id=>:cats, :dog_id=>:dogs) @db.sqls.should == ['CREATE TABLE cats_dogs (cat_id integer NOT NULL REFERENCES cats, dog_id integer NOT NULL REFERENCES dogs, PRIMARY KEY (cat_id, dog_id))', 'CREATE INDEX cats_dogs_dog_id_cat_id_index ON cats_dogs (dog_id, cat_id)'] end specify "should be able to have values be a hash of options" do @db.create_join_table(:cat_id=>{:table=>:cats, :null=>true}, :dog_id=>{:table=>:dogs, :default=>0}) @db.sqls.should == ['CREATE TABLE cats_dogs (cat_id integer NULL REFERENCES cats, dog_id integer DEFAULT 0 NOT NULL REFERENCES dogs, PRIMARY KEY (cat_id, dog_id))', 'CREATE INDEX cats_dogs_dog_id_cat_id_index ON cats_dogs (dog_id, cat_id)'] end specify "should be able to pass a second hash of table options" do @db.create_join_table({:cat_id=>:cats, :dog_id=>:dogs}, :temp=>true) @db.sqls.should == ['CREATE TEMPORARY TABLE cats_dogs (cat_id integer NOT NULL REFERENCES cats, dog_id integer NOT NULL REFERENCES dogs, PRIMARY KEY (cat_id, dog_id))', 'CREATE INDEX cats_dogs_dog_id_cat_id_index ON cats_dogs (dog_id, cat_id)'] end specify "should recognize :name option in table options" do @db.create_join_table({:cat_id=>:cats, :dog_id=>:dogs}, :name=>:f) @db.sqls.should == ['CREATE TABLE f (cat_id integer NOT NULL REFERENCES cats, dog_id integer NOT NULL REFERENCES dogs, PRIMARY KEY (cat_id, dog_id))', 'CREATE INDEX f_dog_id_cat_id_index ON f (dog_id, cat_id)'] 
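# A minimal sketch of the naming conventions this group exercises, against a
# mock database so the generated SQL is visible (expected output shown in =>):
#   db = Sequel.mock
#   db.create_join_table(:cat_id=>:cats, :dog_id=>:dogs)
#   # => CREATE TABLE cats_dogs (cat_id integer NOT NULL REFERENCES cats,
#   #    dog_id integer NOT NULL REFERENCES dogs, PRIMARY KEY (cat_id, dog_id))
#   # => CREATE INDEX cats_dogs_dog_id_cat_id_index ON cats_dogs (dog_id, cat_id)
# The default join table name concatenates the referenced table names, and the
# :name, :index_options, :no_index, and :no_primary_key options covered below
# each override one piece of that default.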
end specify "should recognize :index_options option in table options" do @db.create_join_table({:cat_id=>:cats, :dog_id=>:dogs}, :index_options=>{:name=>:foo_index}) @db.sqls.should == ['CREATE TABLE cats_dogs (cat_id integer NOT NULL REFERENCES cats, dog_id integer NOT NULL REFERENCES dogs, PRIMARY KEY (cat_id, dog_id))', 'CREATE INDEX foo_index ON cats_dogs (dog_id, cat_id)'] end specify "should recognize :no_index option in table options" do @db.create_join_table({:cat_id=>:cats, :dog_id=>:dogs}, :no_index=>true) @db.sqls.should == ['CREATE TABLE cats_dogs (cat_id integer NOT NULL REFERENCES cats, dog_id integer NOT NULL REFERENCES dogs, PRIMARY KEY (cat_id, dog_id))'] end specify "should recognize :no_primary_key option in table options" do @db.create_join_table({:cat_id=>:cats, :dog_id=>:dogs}, :no_primary_key=>true) @db.sqls.should == ['CREATE TABLE cats_dogs (cat_id integer NOT NULL REFERENCES cats, dog_id integer NOT NULL REFERENCES dogs)', 'CREATE INDEX cats_dogs_dog_id_cat_id_index ON cats_dogs (dog_id, cat_id)'] end specify "should raise an error if the hash doesn't have 2 entries with table names" do proc{@db.create_join_table({})}.should raise_error(Sequel::Error) proc{@db.create_join_table({:cat_id=>:cats})}.should raise_error(Sequel::Error) proc{@db.create_join_table({:cat_id=>:cats, :human_id=>:humans, :dog_id=>:dog})}.should raise_error(Sequel::Error) proc{@db.create_join_table({:cat_id=>:cats, :dog_id=>{}})}.should raise_error(Sequel::Error) end end describe "DB#drop_join_table" do before do @db = Sequel.mock end specify "should take a hash with foreign keys and table name values and drop the table" do @db.drop_join_table(:cat_id=>:cats, :dog_id=>:dogs) @db.sqls.should == ['DROP TABLE cats_dogs'] end specify "should be able to have values be a hash of options" do @db.drop_join_table(:cat_id=>{:table=>:cats, :null=>true}, :dog_id=>{:table=>:dogs, :default=>0}) @db.sqls.should == ['DROP TABLE cats_dogs'] end specify "should respect a second hash of table options" do @db.drop_join_table({:cat_id=>:cats, :dog_id=>:dogs}, :cascade=>true) @db.sqls.should == ['DROP TABLE cats_dogs CASCADE'] end specify "should respect :name option for table name" do @db.drop_join_table({:cat_id=>:cats, :dog_id=>:dogs}, :name=>:f) @db.sqls.should == ['DROP TABLE f'] end specify "should raise an error if the hash doesn't have 2 entries with table names" do proc{@db.drop_join_table({})}.should raise_error(Sequel::Error) proc{@db.drop_join_table({:cat_id=>:cats})}.should raise_error(Sequel::Error) proc{@db.drop_join_table({:cat_id=>:cats, :human_id=>:humans, :dog_id=>:dog})}.should raise_error(Sequel::Error) proc{@db.drop_join_table({:cat_id=>:cats, :dog_id=>{}})}.should raise_error(Sequel::Error) end end describe "DB#drop_table" do before do @db = Sequel.mock end specify "should generate a DROP TABLE statement" do @db.drop_table :cats @db.sqls.should == ['DROP TABLE cats'] end specify "should drop multiple tables at once" do @db.drop_table :cats, :dogs @db.sqls.should == ['DROP TABLE cats', 'DROP TABLE dogs'] end specify "should take an options hash and support the :cascade option" do @db.drop_table :cats, :dogs, :cascade=>true @db.sqls.should == ['DROP TABLE cats CASCADE', 'DROP TABLE dogs CASCADE'] end end describe "DB#drop_table?" 
do before do @db = Sequel.mock end specify "should drop the table if it exists" do meta_def(@db, :table_exists?){|a| true} @db.drop_table?(:cats) @db.sqls.should == ["DROP TABLE cats"] end specify "should do nothing if the table does not exist" do meta_def(@db, :table_exists?){|a| false} @db.drop_table?(:cats) @db.sqls.should == [] end specify "should operate on multiple tables at once" do meta_def(@db, :table_exists?){|a| a == :cats} @db.drop_table? :cats, :dogs @db.sqls.should == ['DROP TABLE cats'] end specify "should take an options hash and support the :cascade option" do meta_def(@db, :table_exists?){|a| true} @db.drop_table? :cats, :dogs, :cascade=>true @db.sqls.should == ['DROP TABLE cats CASCADE', 'DROP TABLE dogs CASCADE'] end specify "should use IF EXISTS if the database supports that" do meta_def(@db, :supports_drop_table_if_exists?){true} @db.drop_table? :cats, :dogs @db.sqls.should == ['DROP TABLE IF EXISTS cats', 'DROP TABLE IF EXISTS dogs'] end specify "should use IF EXISTS with CASCADE if the database supports that" do meta_def(@db, :supports_drop_table_if_exists?){true} @db.drop_table? :cats, :dogs, :cascade=>true @db.sqls.should == ['DROP TABLE IF EXISTS cats CASCADE', 'DROP TABLE IF EXISTS dogs CASCADE'] end end describe "DB#alter_table" do before do @db = Sequel.mock end specify "should allow adding not null constraint via set_column_allow_null with false argument" do @db.alter_table(:cats) do set_column_allow_null :score, false end @db.sqls.should == ["ALTER TABLE cats ALTER COLUMN score SET NOT NULL"] end specify "should allow removing not null constraint via set_column_allow_null with true argument" do @db.alter_table(:cats) do set_column_allow_null :score, true end @db.sqls.should == ["ALTER TABLE cats ALTER COLUMN score DROP NOT NULL"] end specify "should allow adding not null constraint via set_column_not_null" do @db.alter_table(:cats) do set_column_not_null :score end @db.sqls.should == ["ALTER TABLE cats ALTER COLUMN score SET NOT NULL"] end specify "should allow removing not null constraint via set_column_allow_null without argument" do @db.alter_table(:cats) do set_column_allow_null :score end @db.sqls.should == ["ALTER TABLE cats ALTER COLUMN score DROP NOT NULL"] end specify "should support add_column" do @db.alter_table(:cats) do add_column :score, :integer end @db.sqls.should == ["ALTER TABLE cats ADD COLUMN score integer"] end specify "should support add_constraint" do @db.alter_table(:cats) do add_constraint :valid_score, 'score <= 100' end @db.sqls.should == ["ALTER TABLE cats ADD CONSTRAINT valid_score CHECK (score <= 100)"] end specify "should support add_constraint with options" do @db.alter_table(:cats) do add_constraint({:name=>:valid_score, :deferrable=>true}, 'score <= 100') end @db.sqls.should == ["ALTER TABLE cats ADD CONSTRAINT valid_score CHECK (score <= 100) DEFERRABLE INITIALLY DEFERRED"] end specify "should support add_constraint with block" do @db.alter_table(:cats) do add_constraint(:blah_blah){(x.sql_number > 0) & (y.sql_number < 1)} end @db.sqls.should == ["ALTER TABLE cats ADD CONSTRAINT blah_blah CHECK ((x > 0) AND (y < 1))"] end specify "should support add_unique_constraint" do @db.alter_table(:cats) do add_unique_constraint [:a, :b] end @db.sqls.should == ["ALTER TABLE cats ADD UNIQUE (a, b)"] @db.alter_table(:cats) do add_unique_constraint [:a, :b], :name => :ab_uniq end @db.sqls.should == ["ALTER TABLE cats ADD CONSTRAINT ab_uniq UNIQUE (a, b)"] end specify "should support add_foreign_key" do @db.alter_table(:cats) do
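# A minimal sketch of the two add_foreign_key forms covered by this spec and
# the composite one that follows (mock DB; expected SQL shown in =>):
#   db = Sequel.mock
#   db.alter_table(:cats){add_foreign_key :node_id, :nodes}
#   # => ALTER TABLE cats ADD COLUMN node_id integer REFERENCES nodes
#   db.alter_table(:cats){add_foreign_key [:node_id, :prop_id], :nodes_props}
#   # => ALTER TABLE cats ADD FOREIGN KEY (node_id, prop_id) REFERENCES nodes_props
# A symbol adds a new foreign key column; an array of existing columns adds a
# composite foreign key constraint instead.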
add_foreign_key :node_id, :nodes end @db.sqls.should == ["ALTER TABLE cats ADD COLUMN node_id integer REFERENCES nodes"] end specify "should support add_foreign_key with composite foreign keys" do @db.alter_table(:cats) do add_foreign_key [:node_id, :prop_id], :nodes_props end @db.sqls.should == ["ALTER TABLE cats ADD FOREIGN KEY (node_id, prop_id) REFERENCES nodes_props"] @db.alter_table(:cats) do add_foreign_key [:node_id, :prop_id], :nodes_props, :name => :cfk end @db.sqls.should == ["ALTER TABLE cats ADD CONSTRAINT cfk FOREIGN KEY (node_id, prop_id) REFERENCES nodes_props"] @db.alter_table(:cats) do add_foreign_key [:node_id, :prop_id], :nodes_props, :key => [:nid, :pid] end @db.sqls.should == ["ALTER TABLE cats ADD FOREIGN KEY (node_id, prop_id) REFERENCES nodes_props(nid, pid)"] @db.alter_table(:cats) do add_foreign_key [:node_id, :prop_id], :nodes_props, :on_delete => :restrict, :on_update => :cascade end @db.sqls.should == ["ALTER TABLE cats ADD FOREIGN KEY (node_id, prop_id) REFERENCES nodes_props ON DELETE RESTRICT ON UPDATE CASCADE"] end specify "should support add_index" do @db.alter_table(:cats) do add_index :name end @db.sqls.should == ["CREATE INDEX cats_name_index ON cats (name)"] end specify "should ignore errors if the database raises an error on an add_index call and the :ignore_errors option is used" do meta_def(@db, :execute_ddl){|*a| raise Sequel::DatabaseError} lambda{@db.add_index(:cats, :id)}.should raise_error(Sequel::DatabaseError) lambda{@db.add_index(:cats, :id, :ignore_errors=>true)}.should_not raise_error @db.sqls.should == [] end specify "should support add_primary_key" do @db.alter_table(:cats) do add_primary_key :id end @db.sqls.should == ["ALTER TABLE cats ADD COLUMN id integer PRIMARY KEY AUTOINCREMENT"] end specify "should support add_primary_key with composite primary keys" do @db.alter_table(:cats) do add_primary_key [:id, :type] end @db.sqls.should == ["ALTER TABLE cats ADD PRIMARY KEY (id, type)"] @db.alter_table(:cats) do add_primary_key [:id, :type], :name => :cpk end @db.sqls.should == ["ALTER TABLE cats ADD CONSTRAINT cpk PRIMARY KEY (id, type)"] end specify "should support drop_column" do @db.alter_table(:cats) do drop_column :score end @db.sqls.should == ["ALTER TABLE cats DROP COLUMN score"] end specify "should support drop_column with :cascade=>true option" do @db.alter_table(:cats) do drop_column :score, :cascade=>true end @db.sqls.should == ["ALTER TABLE cats DROP COLUMN score CASCADE"] end specify "should support drop_constraint" do @db.alter_table(:cats) do drop_constraint :valid_score end @db.sqls.should == ["ALTER TABLE cats DROP CONSTRAINT valid_score"] end specify "should support drop_constraint with :cascade=>true option" do @db.alter_table(:cats) do drop_constraint :valid_score, :cascade=>true end @db.sqls.should == ["ALTER TABLE cats DROP CONSTRAINT valid_score CASCADE"] end specify "should support drop_foreign_key" do def @db.foreign_key_list(table_name) [{:name=>:cats_node_id_fkey, :columns=>[:node_id]}] end @db.alter_table(:cats) do drop_foreign_key :node_id end @db.sqls.should == ["ALTER TABLE cats DROP CONSTRAINT cats_node_id_fkey", "ALTER TABLE cats DROP COLUMN node_id"] end specify "should support drop_foreign_key with composite foreign keys" do def @db.foreign_key_list(table_name) [{:name=>:cats_node_id_prop_id_fkey, :columns=>[:node_id, :prop_id]}] end @db.alter_table(:cats) do drop_foreign_key [:node_id, :prop_id] end @db.sqls.should == ["ALTER TABLE cats DROP CONSTRAINT cats_node_id_prop_id_fkey"] @db.alter_table(:cats) 
do drop_foreign_key [:node_id, :prop_id], :name => :cfk end @db.sqls.should == ["ALTER TABLE cats DROP CONSTRAINT cfk"] end specify "should have drop_foreign_key raise Error if no name is found" do def @db.foreign_key_list(table_name) [{:name=>:cats_node_id_fkey, :columns=>[:foo_id]}] end lambda{@db.alter_table(:cats){drop_foreign_key :node_id}}.should raise_error(Sequel::Error) end specify "should have drop_foreign_key raise Error if multiple foreign keys found" do def @db.foreign_key_list(table_name) [{:name=>:cats_node_id_fkey, :columns=>[:node_id]}, {:name=>:cats_node_id_fkey2, :columns=>[:node_id]}] end lambda{@db.alter_table(:cats){drop_foreign_key :node_id}}.should raise_error(Sequel::Error) end specify "should support drop_index" do @db.alter_table(:cats) do drop_index :name end @db.sqls.should == ["DROP INDEX cats_name_index"] end specify "should support drop_index with a given name" do @db.alter_table(:cats) do drop_index :name, :name=>:blah_blah end @db.sqls.should == ["DROP INDEX blah_blah"] end specify "should support rename_column" do @db.alter_table(:cats) do rename_column :name, :old_name end @db.sqls.should == ["ALTER TABLE cats RENAME COLUMN name TO old_name"] end specify "should support set_column_default" do @db.alter_table(:cats) do set_column_default :score, 3 end @db.sqls.should == ["ALTER TABLE cats ALTER COLUMN score SET DEFAULT 3"] end specify "should support set_column_type" do @db.alter_table(:cats) do set_column_type :score, :real end @db.sqls.should == ["ALTER TABLE cats ALTER COLUMN score TYPE real"] end specify "should support set_column_type with options" do @db.alter_table(:cats) do set_column_type :score, :integer, :unsigned=>true set_column_type :score, :varchar, :size=>30 set_column_type :score, :enum, :elements=>['a', 'b'] end @db.sqls.should == ["ALTER TABLE cats ALTER COLUMN score TYPE integer UNSIGNED", "ALTER TABLE cats ALTER COLUMN score TYPE varchar(30)", "ALTER TABLE cats ALTER COLUMN score TYPE enum('a', 'b')"] end specify "should combine operations into a single query if the database supports it" do meta_def(@db, :supports_combining_alter_table_ops?){true} @db.alter_table(:cats) do add_column :a, Integer drop_column :b set_column_not_null :c rename_column :d, :e set_column_default :f, 'g' set_column_type :h, Integer add_constraint(:i){a > 1} drop_constraint :j end @db.sqls.should == ["ALTER TABLE cats ADD COLUMN a integer, DROP COLUMN b, ALTER COLUMN c SET NOT NULL, RENAME COLUMN d TO e, ALTER COLUMN f SET DEFAULT 'g', ALTER COLUMN h TYPE integer, ADD CONSTRAINT i CHECK (a > 1), DROP CONSTRAINT j"] end specify "should combine operations into consecutive groups of combinable operations if the database supports combining operations" do meta_def(@db, :supports_combining_alter_table_ops?){true} @db.alter_table(:cats) do add_column :a, Integer drop_column :b set_column_not_null :c rename_column :d, :e add_index :e set_column_default :f, 'g' set_column_type :h, Integer add_constraint(:i){a > 1} drop_constraint :j end @db.sqls.should == ["ALTER TABLE cats ADD COLUMN a integer, DROP COLUMN b, ALTER COLUMN c SET NOT NULL, RENAME COLUMN d TO e", "CREATE INDEX cats_e_index ON cats (e)", "ALTER TABLE cats ALTER COLUMN f SET DEFAULT 'g', ALTER COLUMN h TYPE integer, ADD CONSTRAINT i CHECK (a > 1), DROP CONSTRAINT j"] end end describe "Database#create_table" do before do @db = Sequel.mock end specify "should construct proper SQL" do @db.create_table :test do primary_key :id, :integer, :null => false column :name, :text index :name, :unique => true end 
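# For comparison, a minimal sketch of the same generator without the explicit
# :integer type and :null option (mock DB; primary_key defaults to an
# auto-incrementing integer, and the index is emitted as a separate statement):
#   db = Sequel.mock
#   db.create_table(:test) do
#     primary_key :id
#     column :name, :text
#     index :name, :unique=>true
#   end
#   # => CREATE TABLE test (id integer PRIMARY KEY AUTOINCREMENT, name text)
#   # => CREATE UNIQUE INDEX test_name_index ON test (name)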
@db.sqls.should == ['CREATE TABLE test (id integer NOT NULL PRIMARY KEY AUTOINCREMENT, name text)', 'CREATE UNIQUE INDEX test_name_index ON test (name)'] end specify "should create a temporary table" do @db.create_table :test_tmp, :temp => true do primary_key :id, :integer, :null => false column :name, :text index :name, :unique => true end @db.sqls.should == ['CREATE TEMPORARY TABLE test_tmp (id integer NOT NULL PRIMARY KEY AUTOINCREMENT, name text)', 'CREATE UNIQUE INDEX test_tmp_name_index ON test_tmp (name)'] end end describe "Database#alter_table" do before do @db = Sequel.mock end specify "should construct proper SQL" do @db.alter_table :xyz do add_column :aaa, :text, :null => false, :unique => true drop_column :bbb rename_column :ccc, :ddd set_column_type :eee, :integer set_column_default :hhh, 'abcd' add_index :fff, :unique => true drop_index :ggg end @db.sqls.should == ['ALTER TABLE xyz ADD COLUMN aaa text NOT NULL UNIQUE', 'ALTER TABLE xyz DROP COLUMN bbb', 'ALTER TABLE xyz RENAME COLUMN ccc TO ddd', 'ALTER TABLE xyz ALTER COLUMN eee TYPE integer', "ALTER TABLE xyz ALTER COLUMN hhh SET DEFAULT 'abcd'", 'CREATE UNIQUE INDEX xyz_fff_index ON xyz (fff)', 'DROP INDEX xyz_ggg_index'] end end describe "Database#add_column" do specify "should construct proper SQL" do db = Sequel.mock db.add_column :test, :name, :text, :unique => true db.sqls.should == ['ALTER TABLE test ADD COLUMN name text UNIQUE'] end end describe "Database#drop_column" do before do @db = Sequel.mock end specify "should construct proper SQL" do @db.drop_column :test, :name @db.sqls.should == ['ALTER TABLE test DROP COLUMN name'] end specify "should use CASCADE for :cascade=>true option" do @db.drop_column :test, :name, :cascade=>true @db.sqls.should == ['ALTER TABLE test DROP COLUMN name CASCADE'] end end describe "Database#rename_column" do before do @db = Sequel.mock end specify "should construct proper SQL" do @db.rename_column :test, :abc, :def @db.sqls.should == ['ALTER TABLE test RENAME COLUMN abc TO def'] end end describe "Database#set_column_type" do before do @db = Sequel.mock end specify "should construct proper SQL" do @db.set_column_type :test, :name, :integer @db.sqls.should == ['ALTER TABLE test ALTER COLUMN name TYPE integer'] end end describe "Database#set_column_default" do before do @db = Sequel.mock end specify "should construct proper SQL" do @db.set_column_default :test, :name, 'zyx' @db.sqls.should == ["ALTER TABLE test ALTER COLUMN name SET DEFAULT 'zyx'"] end end describe "Database#add_index" do before do @db = Sequel.mock end specify "should construct proper SQL" do @db.add_index :test, :name, :unique => true @db.sqls.should == ['CREATE UNIQUE INDEX test_name_index ON test (name)'] end specify "should accept multiple columns" do @db.add_index :test, [:one, :two] @db.sqls.should == ['CREATE INDEX test_one_two_index ON test (one, two)'] end end describe "Database#drop_index" do before do @db = Sequel.mock end specify "should construct proper SQL" do @db.drop_index :test, :name @db.sqls.should == ['DROP INDEX test_name_index'] end end describe "Database#drop_table" do before do @db = Sequel.mock end specify "should construct proper SQL" do @db.drop_table :test @db.sqls.should == ['DROP TABLE test'] end specify "should accept multiple table names" do @db.drop_table :a, :bb, :ccc @db.sqls.should == ['DROP TABLE a', 'DROP TABLE bb', 'DROP TABLE ccc'] end end describe "Database#rename_table" do before do @db = Sequel.mock end specify "should construct proper SQL" do @db.rename_table :abc, :xyz 
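# Contrast with Database#rename_column above: rename_table renames the table
# itself, while rename_column renames a single column within a table. A
# minimal sketch (mock DB; expected SQL shown in =>):
#   db.rename_table :abc, :xyz      # => ALTER TABLE abc RENAME TO xyz
#   db.rename_column :abc, :a, :b   # => ALTER TABLE abc RENAME COLUMN a TO b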
@db.sqls.should == ['ALTER TABLE abc RENAME TO xyz'] end end describe "Database#create_view" do before do @db = Sequel.mock end specify "should construct proper SQL with raw SQL" do @db.create_view :test, "SELECT * FROM xyz" @db.sqls.should == ['CREATE VIEW test AS SELECT * FROM xyz'] @db.create_view Sequel.identifier(:test), "SELECT * FROM xyz" @db.sqls.should == ['CREATE VIEW test AS SELECT * FROM xyz'] end specify "should construct proper SQL with dataset" do @db.create_view :test, @db[:items].select(:a, :b).order(:c) @db.sqls.should == ['CREATE VIEW test AS SELECT a, b FROM items ORDER BY c'] end specify "should handle :columns option" do @db.create_view :test, @db[:items].select(:a, :b).order(:c), :columns=>[:d, :e] @db.sqls.should == ['CREATE VIEW test (d, e) AS SELECT a, b FROM items ORDER BY c'] @db.create_view :test, @db[:items].select(:a, :b).order(:c), :columns=>%w'd e' @db.sqls.should == ['CREATE VIEW test (d, e) AS SELECT a, b FROM items ORDER BY c'] @db.create_view :test, @db[:items].select(:a, :b).order(:c), :columns=>[Sequel.identifier('d'), Sequel.lit('e')] @db.sqls.should == ['CREATE VIEW test (d, e) AS SELECT a, b FROM items ORDER BY c'] end specify "should handle create_or_replace_view" do @db.create_or_replace_view :sch__test, "SELECT * FROM xyz" @db.sqls.should == ['DROP VIEW sch.test', 'CREATE VIEW sch.test AS SELECT * FROM xyz'] @db.create_or_replace_view :test, @db[:items].select(:a, :b).order(:c) @db.sqls.should == ['DROP VIEW test', 'CREATE VIEW test AS SELECT a, b FROM items ORDER BY c'] @db.create_or_replace_view Sequel.identifier(:test), @db[:items].select(:a, :b).order(:c) @db.sqls.should == ['DROP VIEW test', 'CREATE VIEW test AS SELECT a, b FROM items ORDER BY c'] end specify "should use CREATE OR REPLACE VIEW if such syntax is supported" do def @db.supports_create_or_replace_view?() true end @db.create_or_replace_view :test, @db[:items] @db.sqls.should == ['CREATE OR REPLACE VIEW test AS SELECT * FROM items'] end end describe "Database#drop_view" do before do @db = Sequel.mock end specify "should construct proper SQL" do @db.drop_view :test @db.drop_view Sequel.identifier(:test) @db.drop_view :sch__test @db.drop_view Sequel.qualify(:sch, :test) @db.sqls.should == ['DROP VIEW test', 'DROP VIEW test', 'DROP VIEW sch.test', 'DROP VIEW sch.test'] end specify "should drop multiple views at once" do @db.drop_view :cats, :dogs @db.sqls.should == ['DROP VIEW cats', 'DROP VIEW dogs'] end specify "should take an options hash and support the :cascade option" do @db.drop_view :cats, :dogs, :cascade=>true @db.sqls.should == ['DROP VIEW cats CASCADE', 'DROP VIEW dogs CASCADE'] end end describe "Database#alter_table_sql" do specify "should raise error for an invalid op" do proc {Sequel.mock.send(:alter_table_sql, :mau, :op => :blah)}.should raise_error(Sequel::Error) end end describe "Schema Parser" do before do @sqls = [] @db = Sequel::Database.new end specify "should raise an error if there are no columns" do meta_def(@db, :schema_parse_table) do |t, opts| [] end proc{@db.schema(:x)}.should raise_error(Sequel::Error) end specify "should cache data by default" do meta_def(@db, :schema_parse_table) do |t, opts| [[:a, {}]] end @db.schema(:x).should equal(@db.schema(:x)) end specify "should not cache data if :reload=>true is given" do meta_def(@db, :schema_parse_table) do |t, opts| [[:a, {}]] end @db.schema(:x).should_not equal(@db.schema(:x, :reload=>true)) end specify "should not cache schema metadata if cache_schema is false" do @db.cache_schema = false meta_def(@db, 
:schema_parse_table) do |t, opts| [[:a, {}]] end @db.schema(:x).should_not equal(@db.schema(:x)) end specify "should provide options if given a table name" do c = nil meta_def(@db, :schema_parse_table) do |t, opts| c = [t, opts] [[:a, {:db_type=>t.to_s}]] end @db.schema(:x) c.should == ["x", {}] @db.schema(:s__x) c.should == ["x", {:schema=>"s"}] ds = @db[:s__y] @db.schema(ds) c.should == ["y", {:schema=>"s", :dataset=>ds}] end specify "should parse the schema correctly for a single table" do sqls = @sqls proc{@db.schema(:x)}.should raise_error(Sequel::Error) meta_def(@db, :schema_parse_table) do |t, opts| sqls << t [[:a, {:db_type=>t.to_s}]] end @db.schema(:x).should == [[:a, {:db_type=>"x", :ruby_default=>nil}]] @sqls.should == ['x'] @db.schema(:x).should == [[:a, {:db_type=>"x", :ruby_default=>nil}]] @sqls.should == ['x'] @db.schema(:x, :reload=>true).should == [[:a, {:db_type=>"x", :ruby_default=>nil}]] @sqls.should == ['x', 'x'] end specify "should convert various types of table name arguments" do meta_def(@db, :schema_parse_table) do |t, opts| [[t, opts]] end s1 = @db.schema(:x) s1.should == [['x', {:ruby_default=>nil}]] @db.schema(:x).object_id.should == s1.object_id @db.schema(Sequel.identifier(:x)).object_id.should == s1.object_id s2 = @db.schema(:x__y) s2.should == [['y', {:schema=>'x', :ruby_default=>nil}]] @db.schema(:x__y).object_id.should == s2.object_id @db.schema(Sequel.qualify(:x, :y)).object_id.should == s2.object_id s2 = @db.schema(Sequel.qualify(:v, :x__y)) s2.should == [['y', {:schema=>'x', :ruby_default=>nil, :information_schema_schema=>Sequel.identifier('v')}]] @db.schema(Sequel.qualify(:v, :x__y)).object_id.should == s2.object_id @db.schema(Sequel.qualify(:v__x, :y)).object_id.should == s2.object_id s2 = @db.schema(Sequel.qualify(:u__v, :x__y)) s2.should == [['y', {:schema=>'x', :ruby_default=>nil, :information_schema_schema=>Sequel.qualify('u', 'v')}]] @db.schema(Sequel.qualify(:u__v, :x__y)).object_id.should == s2.object_id @db.schema(Sequel.qualify(Sequel.qualify(:u, :v), Sequel.qualify(:x, :y))).object_id.should == s2.object_id end specify "should correctly parse all supported data types" do sm = Module.new do def schema_parse_table(t, opts) [[:x, {:type=>schema_column_type(t.to_s)}]] end end @db.extend(sm) @db.schema(:tinyint).first.last[:type].should == :integer @db.schema(:int).first.last[:type].should == :integer @db.schema(:integer).first.last[:type].should == :integer @db.schema(:bigint).first.last[:type].should == :integer @db.schema(:smallint).first.last[:type].should == :integer @db.schema(:character).first.last[:type].should == :string @db.schema(:"character varying").first.last[:type].should == :string @db.schema(:varchar).first.last[:type].should == :string @db.schema(:"varchar(255)").first.last[:type].should == :string @db.schema(:text).first.last[:type].should == :string @db.schema(:date).first.last[:type].should == :date @db.schema(:datetime).first.last[:type].should == :datetime @db.schema(:timestamp).first.last[:type].should == :datetime @db.schema(:"timestamp with time zone").first.last[:type].should == :datetime @db.schema(:"timestamp without time zone").first.last[:type].should == :datetime @db.schema(:time).first.last[:type].should == :time @db.schema(:"time with time zone").first.last[:type].should == :time @db.schema(:"time without time zone").first.last[:type].should == :time @db.schema(:boolean).first.last[:type].should == :boolean @db.schema(:real).first.last[:type].should == :float @db.schema(:float).first.last[:type].should == :float 
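# The adapter contract this group pins down, as a minimal sketch: an adapter
# implements schema_parse_table(table, opts) to return [column_name, info_hash]
# pairs, and Database#schema layers caching and db_type-to-symbol mapping on
# top of that:
#   @db.schema(:x).equal?(@db.schema(:x))   # => true (result is cached)
#   @db.schema(:x, :reload=>true)           # forces a fresh parse
#   @db.schema(:varchar).first.last[:type]  # => :string (with the stub above)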
@db.schema(:double).first.last[:type].should == :float @db.schema(:"double(1,2)").first.last[:type].should == :float @db.schema(:"double precision").first.last[:type].should == :float @db.schema(:number).first.last[:type].should == :decimal @db.schema(:numeric).first.last[:type].should == :decimal @db.schema(:decimal).first.last[:type].should == :decimal @db.schema(:"number(10,0)").first.last[:type].should == :integer @db.schema(:"numeric(10, 10)").first.last[:type].should == :decimal @db.schema(:"decimal(10,1)").first.last[:type].should == :decimal @db.schema(:bytea).first.last[:type].should == :blob @db.schema(:blob).first.last[:type].should == :blob @db.schema(:image).first.last[:type].should == :blob @db.schema(:nchar).first.last[:type].should == :string @db.schema(:nvarchar).first.last[:type].should == :string @db.schema(:ntext).first.last[:type].should == :string @db.schema(:smalldatetime).first.last[:type].should == :datetime @db.schema(:binary).first.last[:type].should == :blob @db.schema(:varbinary).first.last[:type].should == :blob @db.schema(:enum).first.last[:type].should == :enum @db = Sequel.mock(:host=>'postgres') @db.extend(sm) @db.schema(:interval).first.last[:type].should == :interval @db = Sequel.mock(:host=>'mysql') @db.extend(sm) @db.schema(:set).first.last[:type].should == :set @db.schema(:mediumint).first.last[:type].should == :integer @db.schema(:mediumtext).first.last[:type].should == :string end end
ruby-sequel-4.1.1/spec/core/spec_helper.rb
require 'rubygems' if ENV['COVERAGE'] require File.join(File.dirname(File.expand_path(__FILE__)), "../sequel_coverage") SimpleCov.sequel_coverage(:filter=>%r{lib/sequel/(\w+\.rb|(dataset|database|model|connection_pool)/\w+\.rb|adapters/mock\.rb)\z}) end unless Object.const_defined?('Sequel') $:.unshift(File.join(File.dirname(File.expand_path(__FILE__)), "../../lib/")) require 'sequel/core' end Sequel::Deprecation.backtrace_filter = lambda{|line, lineno| lineno < 4 || line =~ /_spec\.rb/} (defined?(RSpec) ?
RSpec::Core::ExampleGroup : Spec::Example::ExampleGroup).class_eval do def meta_def(obj, name, &block) (class << obj; self end).send(:define_method, name, &block) end if ENV['SEQUEL_DEPRECATION_WARNINGS'] class << self alias qspecify specify end else def self.qspecify(*a, &block) specify(*a) do begin output = Sequel::Deprecation.output Sequel::Deprecation.output = false instance_exec(&block) ensure Sequel::Deprecation.output = output end end end end end if ENV['SEQUEL_COLUMNS_INTROSPECTION'] Sequel.extension :columns_introspection Sequel::Database.extension :columns_introspection Sequel.require 'adapters/mock' Sequel::Mock::Dataset.send(:include, Sequel::ColumnsIntrospection) end Sequel.quote_identifiers = false Sequel.identifier_input_method = nil Sequel.identifier_output_method = nil
ruby-sequel-4.1.1/spec/core/version_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper') describe "Sequel.version" do specify "should be in the form X.Y.Z with all being numbers" do Sequel.version.should =~ /\A\d+\.\d+\.\d+\z/ end end
ruby-sequel-4.1.1/spec/core_extensions_spec.rb
require 'rubygems' if ENV['COVERAGE'] require File.join(File.dirname(File.expand_path(__FILE__)), "sequel_coverage") SimpleCov.sequel_coverage(:filter=>%r{lib/sequel/extensions/core_extensions\.rb\z}) end unless Object.const_defined?('Sequel') && Sequel.const_defined?('Model') $:.unshift(File.join(File.dirname(File.expand_path(__FILE__)), "../../lib/")) require 'sequel' Sequel::Deprecation.backtrace_filter = true end Sequel.quote_identifiers = false Sequel.identifier_input_method = nil Sequel.identifier_output_method = nil Regexp.send(:include, Sequel::SQL::StringMethods) String.send(:include, Sequel::SQL::StringMethods) Sequel.extension :core_extensions if RUBY_VERSION < '1.9.0' Sequel.extension :ruby18_symbol_extensions end describe "Sequel core extensions" do specify "should have Sequel.core_extensions?
be true if enabled" do Sequel.core_extensions?.should be_true end end describe "Core extensions" do before do db = Sequel::Database.new @d = db[:items] def @d.supports_regexp?; true end def @d.l(*args, &block) literal(filter_expr(*args, &block)) end def @d.lit(*args) literal(*args) end end if RUBY_VERSION < '1.9.0' it "should not allow inequality operations on true, false, or nil" do @d.lit(:x > 1).should == "(x > 1)" @d.lit(:x < true).should == "(x < 't')" @d.lit(:x >= false).should == "(x >= 'f')" @d.lit(:x <= nil).should == "(x <= NULL)" end it "should not allow inequality operations on boolean complex expressions" do @d.lit(:x > (:y > 5)).should == "(x > (y > 5))" @d.lit(:x < (:y < 5)).should == "(x < (y < 5))" @d.lit(:x >= (:y >= 5)).should == "(x >= (y >= 5))" @d.lit(:x <= (:y <= 5)).should == "(x <= (y <= 5))" @d.lit(:x > {:y => nil}).should == "(x > (y IS NULL))" @d.lit(:x < ~{:y => nil}).should == "(x < (y IS NOT NULL))" @d.lit(:x >= {:y => 5}).should == "(x >= (y = 5))" @d.lit(:x <= ~{:y => 5}).should == "(x <= (y != 5))" @d.lit(:x >= {:y => [1,2,3]}).should == "(x >= (y IN (1, 2, 3)))" @d.lit(:x <= ~{:y => [1,2,3]}).should == "(x <= (y NOT IN (1, 2, 3)))" end it "should support >, <, >=, and <= via Symbol#>,<,>=,<=" do @d.l(:x > 100).should == '(x > 100)' @d.l(:x < 100.01).should == '(x < 100.01)' @d.l(:x >= 100000000000000000000000000000000000).should == '(x >= 100000000000000000000000000000000000)' @d.l(:x <= 100).should == '(x <= 100)' end it "should support negation of >, <, >=, and <= via Symbol#~" do @d.l(~(:x > 100)).should == '(x <= 100)' @d.l(~(:x < 100.01)).should == '(x >= 100.01)' @d.l(~(:x >= 100000000000000000000000000000000000)).should == '(x < 100000000000000000000000000000000000)' @d.l(~(:x <= 100)).should == '(x > 100)' end it "should support double negation via ~" do @d.l(~~(:x > 100)).should == '(x > 100)' end end it "should support NOT via Symbol#~" do @d.l(~:x).should == 'NOT x' @d.l(~:x__y).should == 'NOT x.y' end it "should support + - * / via Symbol#+,-,*,/" do @d.l(:x + 1 > 100).should == '((x + 1) > 100)' @d.l((:x * :y) < 100.01).should == '((x * y) < 100.01)' @d.l((:x - :y/2) >= 100000000000000000000000000000000000).should == '((x - (y / 2)) >= 100000000000000000000000000000000000)' @d.l((((:x - :y)/(:x + :y))*:z) <= 100).should == '((((x - y) / (x + y)) * z) <= 100)' @d.l(~((((:x - :y)/(:x + :y))*:z) <= 100)).should == '((((x - y) / (x + y)) * z) > 100)' end it "should support LIKE via Symbol#like" do @d.l(:x.like('a')).should == '(x LIKE \'a\' ESCAPE \'\\\')' @d.l(:x.like(/a/)).should == '(x ~ \'a\')' @d.l(:x.like('a', 'b')).should == '((x LIKE \'a\' ESCAPE \'\\\') OR (x LIKE \'b\' ESCAPE \'\\\'))' @d.l(:x.like(/a/, /b/i)).should == '((x ~ \'a\') OR (x ~* \'b\'))' @d.l(:x.like('a', /b/)).should == '((x LIKE \'a\' ESCAPE \'\\\') OR (x ~ \'b\'))' @d.l('a'.like(:x)).should == "('a' LIKE x ESCAPE '\\')" @d.l('a'.like(:x, 'b')).should == "(('a' LIKE x ESCAPE '\\') OR ('a' LIKE 'b' ESCAPE '\\'))" @d.l('a'.like(:x, /b/)).should == "(('a' LIKE x ESCAPE '\\') OR ('a' ~ 'b'))" @d.l('a'.like(:x, /b/i)).should == "(('a' LIKE x ESCAPE '\\') OR ('a' ~* 'b'))" @d.l(/a/.like(:x)).should == "('a' ~ x)" @d.l(/a/.like(:x, 'b')).should == "(('a' ~ x) OR ('a' ~ 'b'))" @d.l(/a/.like(:x, /b/)).should == "(('a' ~ x) OR ('a' ~ 'b'))" @d.l(/a/.like(:x, /b/i)).should == "(('a' ~ x) OR ('a' ~* 'b'))" @d.l(/a/i.like(:x)).should == "('a' ~* x)" @d.l(/a/i.like(:x, 'b')).should == "(('a' ~* x) OR ('a' ~* 'b'))" @d.l(/a/i.like(:x, /b/)).should == "(('a' ~* x) OR ('a' ~* 'b'))" 
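# A minimal summary of the mapping these assertions pin down (the mock dataset
# claims supports_regexp?, so PostgreSQL-style operators appear): string
# patterns become LIKE with a backslash ESCAPE clause, Regexps become ~,
# case-insensitive Regexps become ~*, and multiple patterns are ORed together:
#   @d.l(:x.like('a'))   # => (x LIKE 'a' ESCAPE '\')
#   @d.l(:x.like(/a/))   # => (x ~ 'a')
#   @d.l(:x.like(/a/i))  # => (x ~* 'a')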
@d.l(/a/i.like(:x, /b/i)).should == "(('a' ~* x) OR ('a' ~* 'b'))" end it "should support NOT LIKE via Symbol#like and Symbol#~" do @d.l(~:x.like('a')).should == '(x NOT LIKE \'a\' ESCAPE \'\\\')' @d.l(~:x.like(/a/)).should == '(x !~ \'a\')' @d.l(~:x.like('a', 'b')).should == '((x NOT LIKE \'a\' ESCAPE \'\\\') AND (x NOT LIKE \'b\' ESCAPE \'\\\'))' @d.l(~:x.like(/a/, /b/i)).should == '((x !~ \'a\') AND (x !~* \'b\'))' @d.l(~:x.like('a', /b/)).should == '((x NOT LIKE \'a\' ESCAPE \'\\\') AND (x !~ \'b\'))' @d.l(~'a'.like(:x)).should == "('a' NOT LIKE x ESCAPE '\\')" @d.l(~'a'.like(:x, 'b')).should == "(('a' NOT LIKE x ESCAPE '\\') AND ('a' NOT LIKE 'b' ESCAPE '\\'))" @d.l(~'a'.like(:x, /b/)).should == "(('a' NOT LIKE x ESCAPE '\\') AND ('a' !~ 'b'))" @d.l(~'a'.like(:x, /b/i)).should == "(('a' NOT LIKE x ESCAPE '\\') AND ('a' !~* 'b'))" @d.l(~/a/.like(:x)).should == "('a' !~ x)" @d.l(~/a/.like(:x, 'b')).should == "(('a' !~ x) AND ('a' !~ 'b'))" @d.l(~/a/.like(:x, /b/)).should == "(('a' !~ x) AND ('a' !~ 'b'))" @d.l(~/a/.like(:x, /b/i)).should == "(('a' !~ x) AND ('a' !~* 'b'))" @d.l(~/a/i.like(:x)).should == "('a' !~* x)" @d.l(~/a/i.like(:x, 'b')).should == "(('a' !~* x) AND ('a' !~* 'b'))" @d.l(~/a/i.like(:x, /b/)).should == "(('a' !~* x) AND ('a' !~* 'b'))" @d.l(~/a/i.like(:x, /b/i)).should == "(('a' !~* x) AND ('a' !~* 'b'))" end it "should support ILIKE via Symbol#ilike" do @d.l(:x.ilike('a')).should == '(UPPER(x) LIKE UPPER(\'a\') ESCAPE \'\\\')' @d.l(:x.ilike(/a/)).should == '(x ~* \'a\')' @d.l(:x.ilike('a', 'b')).should == '((UPPER(x) LIKE UPPER(\'a\') ESCAPE \'\\\') OR (UPPER(x) LIKE UPPER(\'b\') ESCAPE \'\\\'))' @d.l(:x.ilike(/a/, /b/i)).should == '((x ~* \'a\') OR (x ~* \'b\'))' @d.l(:x.ilike('a', /b/)).should == '((UPPER(x) LIKE UPPER(\'a\') ESCAPE \'\\\') OR (x ~* \'b\'))' @d.l('a'.ilike(:x)).should == "(UPPER('a') LIKE UPPER(x) ESCAPE '\\')" @d.l('a'.ilike(:x, 'b')).should == "((UPPER('a') LIKE UPPER(x) ESCAPE '\\') OR (UPPER('a') LIKE UPPER('b') ESCAPE '\\'))" @d.l('a'.ilike(:x, /b/)).should == "((UPPER('a') LIKE UPPER(x) ESCAPE '\\') OR ('a' ~* 'b'))" @d.l('a'.ilike(:x, /b/i)).should == "((UPPER('a') LIKE UPPER(x) ESCAPE '\\') OR ('a' ~* 'b'))" @d.l(/a/.ilike(:x)).should == "('a' ~* x)" @d.l(/a/.ilike(:x, 'b')).should == "(('a' ~* x) OR ('a' ~* 'b'))" @d.l(/a/.ilike(:x, /b/)).should == "(('a' ~* x) OR ('a' ~* 'b'))" @d.l(/a/.ilike(:x, /b/i)).should == "(('a' ~* x) OR ('a' ~* 'b'))" @d.l(/a/i.ilike(:x)).should == "('a' ~* x)" @d.l(/a/i.ilike(:x, 'b')).should == "(('a' ~* x) OR ('a' ~* 'b'))" @d.l(/a/i.ilike(:x, /b/)).should == "(('a' ~* x) OR ('a' ~* 'b'))" @d.l(/a/i.ilike(:x, /b/i)).should == "(('a' ~* x) OR ('a' ~* 'b'))" end it "should support NOT ILIKE via Symbol#ilike and Symbol#~" do @d.l(~:x.ilike('a')).should == '(UPPER(x) NOT LIKE UPPER(\'a\') ESCAPE \'\\\')' @d.l(~:x.ilike(/a/)).should == '(x !~* \'a\')' @d.l(~:x.ilike('a', 'b')).should == '((UPPER(x) NOT LIKE UPPER(\'a\') ESCAPE \'\\\') AND (UPPER(x) NOT LIKE UPPER(\'b\') ESCAPE \'\\\'))' @d.l(~:x.ilike(/a/, /b/i)).should == '((x !~* \'a\') AND (x !~* \'b\'))' @d.l(~:x.ilike('a', /b/)).should == '((UPPER(x) NOT LIKE UPPER(\'a\') ESCAPE \'\\\') AND (x !~* \'b\'))' @d.l(~'a'.ilike(:x)).should == "(UPPER('a') NOT LIKE UPPER(x) ESCAPE '\\')" @d.l(~'a'.ilike(:x, 'b')).should == "((UPPER('a') NOT LIKE UPPER(x) ESCAPE '\\') AND (UPPER('a') NOT LIKE UPPER('b') ESCAPE '\\'))" @d.l(~'a'.ilike(:x, /b/)).should == "((UPPER('a') NOT LIKE UPPER(x) ESCAPE '\\') AND ('a' !~* 'b'))" @d.l(~'a'.ilike(:x, /b/i)).should == "((UPPER('a') 
NOT LIKE UPPER(x) ESCAPE '\\') AND ('a' !~* 'b'))" @d.l(~/a/.ilike(:x)).should == "('a' !~* x)" @d.l(~/a/.ilike(:x, 'b')).should == "(('a' !~* x) AND ('a' !~* 'b'))" @d.l(~/a/.ilike(:x, /b/)).should == "(('a' !~* x) AND ('a' !~* 'b'))" @d.l(~/a/.ilike(:x, /b/i)).should == "(('a' !~* x) AND ('a' !~* 'b'))" @d.l(~/a/i.ilike(:x)).should == "('a' !~* x)" @d.l(~/a/i.ilike(:x, 'b')).should == "(('a' !~* x) AND ('a' !~* 'b'))" @d.l(~/a/i.ilike(:x, /b/)).should == "(('a' !~* x) AND ('a' !~* 'b'))" @d.l(~/a/i.ilike(:x, /b/i)).should == "(('a' !~* x) AND ('a' !~* 'b'))" end it "should support sql_expr on arrays with all two pairs" do @d.l([[:x, 100],[:y, 'a']].sql_expr).should == '((x = 100) AND (y = \'a\'))' @d.l([[:x, true], [:y, false]].sql_expr).should == '((x IS TRUE) AND (y IS FALSE))' @d.l([[:x, nil], [:y, [1,2,3]]].sql_expr).should == '((x IS NULL) AND (y IN (1, 2, 3)))' end it "should support sql_negate on arrays with all two pairs" do @d.l([[:x, 100],[:y, 'a']].sql_negate).should == '((x != 100) AND (y != \'a\'))' @d.l([[:x, true], [:y, false]].sql_negate).should == '((x IS NOT TRUE) AND (y IS NOT FALSE))' @d.l([[:x, nil], [:y, [1,2,3]]].sql_negate).should == '((x IS NOT NULL) AND (y NOT IN (1, 2, 3)))' end it "should support ~ on arrays with all two pairs" do @d.l(~[[:x, 100],[:y, 'a']]).should == '((x != 100) OR (y != \'a\'))' @d.l(~[[:x, true], [:y, false]]).should == '((x IS NOT TRUE) OR (y IS NOT FALSE))' @d.l(~[[:x, nil], [:y, [1,2,3]]]).should == '((x IS NOT NULL) OR (y NOT IN (1, 2, 3)))' end it "should support sql_or on arrays with all two pairs" do @d.l([[:x, 100],[:y, 'a']].sql_or).should == '((x = 100) OR (y = \'a\'))' @d.l([[:x, true], [:y, false]].sql_or).should == '((x IS TRUE) OR (y IS FALSE))' @d.l([[:x, nil], [:y, [1,2,3]]].sql_or).should == '((x IS NULL) OR (y IN (1, 2, 3)))' end it "should support Array#sql_string_join for concatenation of SQL strings" do @d.lit([:x].sql_string_join).should == '(x)' @d.lit([:x].sql_string_join(', ')).should == '(x)' @d.lit([:x, :y].sql_string_join).should == '(x || y)' @d.lit([:x, :y].sql_string_join(', ')).should == "(x || ', ' || y)" @d.lit([:x.sql_function(1), :y.sql_subscript(1)].sql_string_join).should == '(x(1) || y[1])' @d.lit([:x.sql_function(1), 'y.z'.lit].sql_string_join(', ')).should == "(x(1) || ', ' || y.z)" @d.lit([:x, 1, :y].sql_string_join).should == "(x || '1' || y)" @d.lit([:x, 1, :y].sql_string_join(', ')).should == "(x || ', ' || '1' || ', ' || y)" @d.lit([:x, 1, :y].sql_string_join(:y__z)).should == "(x || y.z || '1' || y.z || y)" @d.lit([:x, 1, :y].sql_string_join(1)).should == "(x || '1' || '1' || '1' || y)" @d.lit([:x, :y].sql_string_join('y.x || x.y'.lit)).should == "(x || y.x || x.y || y)" @d.lit([[:x, :y].sql_string_join, [:a, :b].sql_string_join].sql_string_join).should == "(x || y || a || b)" end it "should support sql_expr on hashes" do @d.l({:x => 100, :y => 'a'}.sql_expr)[1...-1].split(' AND ').sort.should == ['(x = 100)', '(y = \'a\')'] @d.l({:x => true, :y => false}.sql_expr)[1...-1].split(' AND ').sort.should == ['(x IS TRUE)', '(y IS FALSE)'] @d.l({:x => nil, :y => [1,2,3]}.sql_expr)[1...-1].split(' AND ').sort.should == ['(x IS NULL)', '(y IN (1, 2, 3))'] end it "should support sql_negate on hashes" do @d.l({:x => 100, :y => 'a'}.sql_negate)[1...-1].split(' AND ').sort.should == ['(x != 100)', '(y != \'a\')'] @d.l({:x => true, :y => false}.sql_negate)[1...-1].split(' AND ').sort.should == ['(x IS NOT TRUE)', '(y IS NOT FALSE)'] @d.l({:x => nil, :y => [1,2,3]}.sql_negate)[1...-1].split(' AND 
').sort.should == ['(x IS NOT NULL)', '(y NOT IN (1, 2, 3))'] end it "should support ~ on hashes" do @d.l(~{:x => 100, :y => 'a'})[1...-1].split(' OR ').sort.should == ['(x != 100)', '(y != \'a\')'] @d.l(~{:x => true, :y => false})[1...-1].split(' OR ').sort.should == ['(x IS NOT TRUE)', '(y IS NOT FALSE)'] @d.l(~{:x => nil, :y => [1,2,3]})[1...-1].split(' OR ').sort.should == ['(x IS NOT NULL)', '(y NOT IN (1, 2, 3))'] end it "should support sql_or on hashes" do @d.l({:x => 100, :y => 'a'}.sql_or)[1...-1].split(' OR ').sort.should == ['(x = 100)', '(y = \'a\')'] @d.l({:x => true, :y => false}.sql_or)[1...-1].split(' OR ').sort.should == ['(x IS TRUE)', '(y IS FALSE)'] @d.l({:x => nil, :y => [1,2,3]}.sql_or)[1...-1].split(' OR ').sort.should == ['(x IS NULL)', '(y IN (1, 2, 3))'] end it "should support Hash#& and Hash#|" do @d.l({:y => :z} & :x).should == '((y = z) AND x)' @d.l({:x => :a} & {:y => :z}).should == '((x = a) AND (y = z))' @d.l({:y => :z} | :x).should == '((y = z) OR x)' @d.l({:x => :a} | {:y => :z}).should == '((x = a) OR (y = z))' end end describe "Array#case and Hash#case" do before do @d = Sequel.mock.dataset end specify "should return SQL CASE expression" do @d.literal({:x=>:y}.case(:z)).should == '(CASE WHEN x THEN y ELSE z END)' @d.literal({:x=>:y}.case(:z, :exp)).should == '(CASE exp WHEN x THEN y ELSE z END)' ['(CASE WHEN x THEN y WHEN a THEN b ELSE z END)', '(CASE WHEN a THEN b WHEN x THEN y ELSE z END)'].should(include(@d.literal({:x=>:y, :a=>:b}.case(:z)))) @d.literal([[:x, :y]].case(:z)).should == '(CASE WHEN x THEN y ELSE z END)' @d.literal([[:x, :y], [:a, :b]].case(:z)).should == '(CASE WHEN x THEN y WHEN a THEN b ELSE z END)' @d.literal([[:x, :y], [:a, :b]].case(:z, :exp)).should == '(CASE exp WHEN x THEN y WHEN a THEN b ELSE z END)' @d.literal([[:x, :y], [:a, :b]].case(:z, :exp__w)).should == '(CASE exp.w WHEN x THEN y WHEN a THEN b ELSE z END)' end specify "should return SQL CASE expression with expression even if nil" do @d.literal({:x=>:y}.case(:z, nil)).should == '(CASE NULL WHEN x THEN y ELSE z END)' end specify "should raise an error if an array that isn't all two pairs is used" do proc{[:b].case(:a)}.should raise_error(Sequel::Error) proc{[:b, :c].case(:a)}.should raise_error(Sequel::Error) proc{[[:b, :c], :d].case(:a)}.should raise_error(Sequel::Error) end specify "should raise an error if an empty array/hash is used" do proc{[].case(:a)}.should raise_error(Sequel::Error) proc{{}.case(:a)}.should raise_error(Sequel::Error) end end describe "Array#sql_value_list and #sql_array" do before do @d = Sequel.mock.dataset end specify "should treat the array as an SQL value list instead of conditions when used as a placeholder value" do @d.filter("(a, b) IN ?", [[:x, 1], [:y, 2]]).sql.should == 'SELECT * WHERE ((a, b) IN ((x = 1) AND (y = 2)))' @d.filter("(a, b) IN ?", [[:x, 1], [:y, 2]].sql_value_list).sql.should == 'SELECT * WHERE ((a, b) IN ((x, 1), (y, 2)))' @d.filter("(a, b) IN ?", [[:x, 1], [:y, 2]].sql_array).sql.should == 'SELECT * WHERE ((a, b) IN ((x, 1), (y, 2)))' end specify "should make no difference when used as a hash value" do @d.filter([:a, :b]=>[[:x, 1], [:y, 2]]).sql.should == 'SELECT * WHERE ((a, b) IN ((x, 1), (y, 2)))' @d.filter([:a, :b]=>[[:x, 1], [:y, 2]].sql_value_list).sql.should == 'SELECT * WHERE ((a, b) IN ((x, 1), (y, 2)))' @d.filter([:a, :b]=>[[:x, 1], [:y, 2]].sql_array).sql.should == 'SELECT * WHERE ((a, b) IN ((x, 1), (y, 2)))' end end describe "String#lit" do before do @ds = ds = Sequel::Database.new[:t] end specify "should return a
LiteralString object" do 'xyz'.lit.should be_a_kind_of(Sequel::LiteralString) 'xyz'.lit.to_s.should == 'xyz' end specify "should inhibit string literalization" do @ds.update_sql(:stamp => "NOW()".lit).should == "UPDATE t SET stamp = NOW()" end specify "should return a PlaceholderLiteralString object if args are given" do a = 'DISTINCT ?'.lit(:a) a.should be_a_kind_of(Sequel::SQL::PlaceholderLiteralString) @ds.literal(a).should == 'DISTINCT a' @ds.quote_identifiers = true @ds.literal(a).should == 'DISTINCT "a"' end specify "should handle named placeholders if given a single argument hash" do a = 'DISTINCT :b'.lit(:b=>:a) a.should be_a_kind_of(Sequel::SQL::PlaceholderLiteralString) @ds.literal(a).should == 'DISTINCT a' @ds.quote_identifiers = true @ds.literal(a).should == 'DISTINCT "a"' end specify "should treat placeholder literal strings as generic expressions" do a = ':b'.lit(:b=>:a) @ds.literal(a + 1).should == "(a + 1)" @ds.literal(a & :b).should == "(a AND b)" @ds.literal(a.sql_string + :b).should == "(a || b)" end end describe "String#to_sequel_blob" do specify "should return a Blob object" do 'xyz'.to_sequel_blob.should be_a_kind_of(::Sequel::SQL::Blob) 'xyz'.to_sequel_blob.should == 'xyz' end specify "should retain binary data" do "\1\2\3\4".to_sequel_blob.should == "\1\2\3\4" end end describe "String cast methods" do before do @ds = Sequel.mock.dataset end specify "should support cast method" do @ds.literal('abc'.cast(:integer)).should == "CAST('abc' AS integer)" end specify "should support cast_numeric and cast_string" do x = 'abc'.cast_numeric x.should be_a_kind_of(Sequel::SQL::NumericExpression) @ds.literal(x).should == "CAST('abc' AS integer)" x = 'abc'.cast_numeric(:real) x.should be_a_kind_of(Sequel::SQL::NumericExpression) @ds.literal(x).should == "CAST('abc' AS real)" x = 'abc'.cast_string x.should be_a_kind_of(Sequel::SQL::StringExpression) @ds.literal(x).should == "CAST('abc' AS varchar(255))" x = 'abc'.cast_string(:varchar) x.should be_a_kind_of(Sequel::SQL::StringExpression) @ds.literal(x).should == "CAST('abc' AS varchar(255))" end end describe "#desc" do before do @ds = Sequel.mock.dataset end specify "should format a DESC clause for a column ref" do @ds.literal(:test.desc).should == 'test DESC' @ds.literal(:items__price.desc).should == 'items.price DESC' end specify "should format a DESC clause for a function" do @ds.literal(:avg.sql_function(:test).desc).should == 'avg(test) DESC' end end describe "#asc" do before do @ds = Sequel.mock.dataset end specify "should format an ASC clause for a column ref" do @ds.literal(:test.asc).should == 'test ASC' @ds.literal(:items__price.asc).should == 'items.price ASC' end specify "should format an ASC clause for a function" do @ds.literal(:avg.sql_function(:test).asc).should == 'avg(test) ASC' end end describe "#as" do before do @ds = Sequel.mock.dataset end specify "should format an AS clause for a column ref" do @ds.literal(:test.as(:t)).should == 'test AS t' @ds.literal(:items__price.as(:p)).should == 'items.price AS p' end specify "should format an AS clause for a function" do @ds.literal(:avg.sql_function(:test).as(:avg)).should == 'avg(test) AS avg' end specify "should format an AS clause for a literal value" do @ds.literal('abc'.as(:abc)).should == "'abc' AS abc" end end describe "Column references" do before do @ds = Sequel::Database.new.dataset def @ds.quoted_identifier_append(sql, c) sql << "`#{c}`" end @ds.quote_identifiers = true end specify "should be quoted properly" do @ds.literal(:xyz).should == "`xyz`"
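# The implicit-qualification conventions checked next, as a minimal sketch
# (using the backtick-quoting dataset set up above): a double underscore in a
# symbol qualifies (table__column), a triple underscore aliases
# (column___alias), and the two can be combined:
#   @ds.literal(:xyz__abc)     # => `xyz`.`abc`
#   @ds.literal(:xyz___x)      # => `xyz` AS `x`
#   @ds.literal(:xyz__abc___x) # => `xyz`.`abc` AS `x`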
@ds.literal(:xyz__abc).should == "`xyz`.`abc`" @ds.literal(:xyz.as(:x)).should == "`xyz` AS `x`" @ds.literal(:xyz__abc.as(:x)).should == "`xyz`.`abc` AS `x`" @ds.literal(:xyz___x).should == "`xyz` AS `x`" @ds.literal(:xyz__abc___x).should == "`xyz`.`abc` AS `x`" end specify "should be quoted properly in SQL functions" do @ds.literal(:avg.sql_function(:xyz)).should == "avg(`xyz`)" @ds.literal(:avg.sql_function(:xyz, 1)).should == "avg(`xyz`, 1)" @ds.literal(:avg.sql_function(:xyz).as(:a)).should == "avg(`xyz`) AS `a`" end specify "should be quoted properly in ASC/DESC clauses" do @ds.literal(:xyz.asc).should == "`xyz` ASC" @ds.literal(:avg.sql_function(:xyz, 1).desc).should == "avg(`xyz`, 1) DESC" end specify "should be quoted properly in a cast function" do @ds.literal(:x.cast(:integer)).should == "CAST(`x` AS integer)" @ds.literal(:x__y.cast('varchar(20)')).should == "CAST(`x`.`y` AS varchar(20))" end end describe "Blob" do specify "#to_sequel_blob should return self" do blob = "x".to_sequel_blob blob.to_sequel_blob.object_id.should == blob.object_id end end if RUBY_VERSION < '1.9.0' describe "Symbol#[]" do specify "should format an SQL Function" do ds = Sequel.mock.dataset ds.literal(:xyz[]).should == 'xyz()' ds.literal(:xyz[1]).should == 'xyz(1)' ds.literal(:xyz[1, 2, :abc[3]]).should == 'xyz(1, 2, abc(3))' end end end describe "Symbol#*" do before do @ds = Sequel.mock.dataset end specify "should format a qualified wildcard if no argument" do @ds.literal(:xyz.*).should == 'xyz.*' @ds.literal(:abc.*).should == 'abc.*' end specify "should format a filter expression if an argument" do @ds.literal(:xyz.*(3)).should == '(xyz * 3)' @ds.literal(:abc.*(5)).should == '(abc * 5)' end specify "should support qualified symbols if no argument" do @ds.literal(:xyz__abc.*).should == 'xyz.abc.*' end end describe "Symbol" do before do @ds = Sequel.mock.dataset @ds.quote_identifiers = true @ds.identifier_input_method = :upcase end specify "#identifier should format an identifier" do @ds.literal(:xyz__abc.identifier).should == '"XYZ__ABC"' end specify "#qualify should format a qualified column" do @ds.literal(:xyz.qualify(:abc)).should == '"ABC"."XYZ"' end specify "#qualify should work on QualifiedIdentifiers" do @ds.literal(:xyz.qualify(:abc).qualify(:def)).should == '"DEF"."ABC"."XYZ"' end specify "should be able to qualify an identifier" do @ds.literal(:xyz.identifier.qualify(:xyz__abc)).should == '"XYZ"."ABC"."XYZ"' end specify "should be able to specify a schema.table.column" do @ds.literal(:column.qualify(:table.qualify(:schema))).should == '"SCHEMA"."TABLE"."COLUMN"' @ds.literal(:column.qualify(:table__name.identifier.qualify(:schema))).should == '"SCHEMA"."TABLE__NAME"."COLUMN"' end specify "should be able to specify order" do @oe = :xyz.desc @oe.class.should == Sequel::SQL::OrderedExpression @oe.descending.should == true @oe = :xyz.asc @oe.class.should == Sequel::SQL::OrderedExpression @oe.descending.should == false end specify "should work correctly with objects" do o = Object.new def o.sql_literal(ds) "(foo)" end @ds.literal(:column.qualify(o)).should == '(foo)."COLUMN"' end end describe "Symbol" do before do @ds = Sequel::Database.new.dataset end specify "should support sql_function method" do @ds.literal(:COUNT.sql_function('1')).should == "COUNT('1')" @ds.select(:COUNT.sql_function('1')).sql.should == "SELECT COUNT('1')" end specify "should support cast method" do @ds.literal(:abc.cast(:integer)).should == "CAST(abc AS integer)" end specify "should support sql array accesses via 
sql_subscript" do @ds.literal(:abc.sql_subscript(1)).should == "abc[1]" @ds.literal(:abc__def.sql_subscript(1)).should == "abc.def[1]" @ds.literal(:abc.sql_subscript(1)|2).should == "abc[1, 2]" @ds.literal(:abc.sql_subscript(1)[2]).should == "abc[1][2]" end specify "should support cast_numeric and cast_string" do x = :abc.cast_numeric x.should be_a_kind_of(Sequel::SQL::NumericExpression) @ds.literal(x).should == "CAST(abc AS integer)" x = :abc.cast_numeric(:real) x.should be_a_kind_of(Sequel::SQL::NumericExpression) @ds.literal(x).should == "CAST(abc AS real)" x = :abc.cast_string x.should be_a_kind_of(Sequel::SQL::StringExpression) @ds.literal(x).should == "CAST(abc AS varchar(255))" x = :abc.cast_string(:varchar) x.should be_a_kind_of(Sequel::SQL::StringExpression) @ds.literal(x).should == "CAST(abc AS varchar(255))" end specify "should support boolean methods" do @ds.literal(~:x).should == "NOT x" @ds.literal(:x & :y).should == "(x AND y)" @ds.literal(:x | :y).should == "(x OR y)" end specify "should support complex expression methods" do @ds.literal(:x.sql_boolean & 1).should == "(x AND 1)" @ds.literal(:x.sql_number & :y).should == "(x & y)" @ds.literal(:x.sql_string + :y).should == "(x || y)" end specify "should allow database independent types when casting" do db = @ds.db def db.cast_type_literal(type) return :foo if type == Integer return :bar if type == String type end @ds.literal(:abc.cast(String)).should == "CAST(abc AS bar)" @ds.literal(:abc.cast(String)).should == "CAST(abc AS bar)" @ds.literal(:abc.cast_string).should == "CAST(abc AS bar)" @ds.literal(:abc.cast_string(Integer)).should == "CAST(abc AS foo)" @ds.literal(:abc.cast_numeric).should == "CAST(abc AS foo)" @ds.literal(:abc.cast_numeric(String)).should == "CAST(abc AS bar)" end specify "should support SQL EXTRACT function via #extract " do @ds.literal(:abc.extract(:year)).should == "extract(year FROM abc)" end end describe "Postgres extensions integration" do before do @db = Sequel.mock Sequel.extension(:pg_array, :pg_array_ops, :pg_hstore, :pg_hstore_ops, :pg_json, :pg_json_ops, :pg_range, :pg_range_ops, :pg_row, :pg_row_ops) end it "Symbol#pg_array should return an ArrayOp" do @db.literal(:a.pg_array.unnest).should == "unnest(a)" end it "Symbol#pg_row should return a PGRowOp" do @db.literal(:a.pg_row[:a]).should == "(a).a" end it "Symbol#hstore should return an HStoreOp" do @db.literal(:a.hstore['a']).should == "(a -> 'a')" end it "Symbol#pg_json should return an JSONOp" do @db.literal(:a.pg_json[%w'a b']).should == "(a #> ARRAY['a','b'])" end it "Symbol#pg_range should return a RangeOp" do @db.literal(:a.pg_range.lower).should == "lower(a)" end it "Array#pg_array should return a PGArray" do @db.literal([1].pg_array.op.unnest).should == "unnest(ARRAY[1])" @db.literal([1].pg_array(:int4).op.unnest).should == "unnest(ARRAY[1]::int4[])" end it "Array#pg_json should return a JSONArray" do @db.literal([1].pg_json).should == "'[1]'::json" end it "Array#pg_row should return a ArrayRow" do @db.literal([1].pg_row).should == "ROW(1)" end it "Hash#hstore should return an HStore" do @db.literal({'a'=>1}.hstore.op['a']).should == '(\'"a"=>"1"\'::hstore -> \'a\')' end it "Hash#pg_json should return an JSONHash" do @db.literal({'a'=>'b'}.pg_json).should == "'{\"a\":\"b\"}'::json" end it "Range#pg_range should return an PGRange" do @db.literal((1..2).pg_range).should == "'[1,2]'" @db.literal((1..2).pg_range(:int4range)).should == "'[1,2]'::int4range" end end 
# ===== ruby-sequel-4.1.1/spec/extensions/active_model_spec.rb =====

require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")

begin
  require 'active_model'
  require 'test/unit'
  if Test::Unit.respond_to?(:run=)
    Test::Unit.run = false
    require 'test/unit/testresult'
  elsif defined?(MiniTest::Unit)
    class << MiniTest::Unit
      def autorun; end
    end
  end
rescue LoadError => e
  skip_warn "active_model plugin: can't load active_model (#{e.class}: #{e})"
else
describe "ActiveModel plugin" do
  specify "should be compliant to the ActiveModel spec" do
    tc = Class.new(Test::Unit::TestCase)
    tc.class_eval do
      define_method(:setup) do
        class ::AMLintTest < Sequel::Model
          set_primary_key :id
          columns :id, :id2
          def delete; end
        end
        module ::Blog
          class Post < Sequel::Model
            plugin :active_model
          end
        end
        @c = AMLintTest
        @c.plugin :active_model
        @m = @model = @c.new
        @o = @c.load({})
        super()
      end

      def teardown
        super
        Object.send(:remove_const, :AMLintTest)
        Object.send(:remove_const, :Blog)
      end

      include ActiveModel::Lint::Tests

      # Should return self, not a proxy object
      def test__to_model
        assert_equal @m.to_model.object_id, @m.object_id
      end

      def test__to_key
        assert_equal nil, @m.to_key
        @o.id = 1
        assert_equal [1], @o.to_key
        @o.id = nil
        assert_equal nil, @o.to_key
        @c.set_primary_key [:id2, :id]
        assert_equal nil, @o.to_key
        @o.id = 1
        @o.id2 = 2
        assert_equal [2, 1], @o.to_key
        @o.destroy
        assert_equal [2, 1], @o.to_key
        @o.id = nil
        assert_equal nil, @o.to_key
      end

      def test__to_param
        assert_equal nil, @m.to_param
        @o.id = 1
        assert_equal '1', @o.to_param
        @c.set_primary_key [:id2, :id]
        @o.id2 = 2
        assert_equal '2-1', @o.to_param
        @o.meta_def(:to_param_joiner){'|'}
        assert_equal '2|1', @o.to_param
        @o.destroy
        assert_equal nil, @o.to_param
      end

      def test__persisted?
        assert_equal false, @m.persisted?
        assert_equal true, @o.persisted?
        @m.destroy
        @o.destroy
        assert_equal false, @m.persisted?
        assert_equal false, @o.persisted?
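        # Per the ActiveModel lint expectations exercised here, a destroyed
        # instance must report persisted? as false, whether it started out
        # as a new object or one loaded from the database.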
      end

      # Should return self, not a proxy object
      def test__to_partial_path
        assert_equal 'am_lint_tests/am_lint_test', @m.to_partial_path
        assert_equal 'blog/posts/post', Blog::Post.new.to_partial_path
      end
    end

    if defined?(MiniTest::Unit)
      tc.instance_methods.map{|x| x.to_s}.reject{|n| n !~ /\Atest_/}.each do |m|
        i = tc.new(m)
        i.setup
        i.send(m)
        i.teardown
      end
    else
      res = ::Test::Unit::TestResult.new
      tc.suite.run(res){}
      if res.failure_count > 0
        puts res.instance_variable_get(:@failures)
      end
      res.failure_count.should == 0
    end
  end
end
end

# ===== ruby-sequel-4.1.1/spec/extensions/after_initialize_spec.rb =====

require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")

describe "Sequel::Plugins::AfterInitialize" do
  before do
    @db = Sequel.mock(:host=>'mysql', :numrows=>1)
    @c = Class.new(Sequel::Model(@db[:test]))
    @c.class_eval do
      columns :id, :name
      plugin :after_initialize
      def after_initialize
        self.name *= 2
        self.id *= 3 if id
      end
    end
  end

  it "should have after_initialize hook be called for new objects" do
    @c.new(:name=>'foo').values.should == {:name=>'foofoo'}
  end

  it "should have after_initialize hook be called for objects loaded from the database" do
    @c.call(:id=>1, :name=>'foo').values.should == {:id=>3, :name=>'foofoo'}
  end
end

# ===== ruby-sequel-4.1.1/spec/extensions/arbitrary_servers_spec.rb =====

require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")

describe "arbitrary servers" do
  before do
    @db = Sequel.mock(:servers=>{})
    @db.extension :arbitrary_servers
  end

  specify "should allow arbitrary server options using a hash" do
    @db.synchronize(:host=>'host1', :database=>'db1') do |c|
      c.opts[:host].should == 'host1'
      c.opts[:database].should == 'db1'
    end
  end

  specify "should not cache connections to arbitrary servers" do
    x = nil
    @db.synchronize(:host=>'host1', :database=>'db1') do |c|
      x = c
    end
    @db.synchronize(:host=>'host1', :database=>'db1') do |c2|
      c2.should_not equal(x)
    end
  end

  specify "should yield same connection correctly when nesting" do
    @db.synchronize(:host=>'host1', :database=>'db1') do |c|
      @db.synchronize(:host=>'host1', :database=>'db1') do |c2|
        c2.should equal(c)
      end
    end
  end

  specify "should disconnect when connection is finished" do
    x, x1 = nil, nil
    @db.meta_def(:disconnect_connection){|c| x = c}
    @db.synchronize(:host=>'host1', :database=>'db1') do |c|
      x1 = c
      @db.synchronize(:host=>'host1', :database=>'db1') do |c2|
        c2.should equal(c)
      end
      x.should equal(nil)
    end
    x.should equal(x1)
  end

  specify "should yield different connection correctly when nesting" do
    @db.synchronize(:host=>'host1', :database=>'db1') do |c|
      c.opts[:host].should == 'host1'
      @db.synchronize(:host=>'host2', :database=>'db1') do |c2|
        c2.opts[:host].should == 'host2'
        c2.should_not equal(c)
      end
    end
  end

  specify "should respect multithreaded access" do
    @db.synchronize(:host=>'host1', :database=>'db1') do |c|
      Thread.new do
        @db.synchronize(:host=>'host1', :database=>'db1') do |c2|
          c2.should_not equal(c)
        end
      end.join
    end
  end

  specify "should work correctly with server_block plugin" do
    @db.extension :server_block
    @db.with_server(:host=>'host1', :database=>'db1') do
      @db.synchronize do |c|
        c.opts[:host].should == 'host1'
        c.opts[:database].should == 'db1'
        @db.synchronize do |c2|
          c2.should equal(c)
        end
      end
    end
  end

  specify "should respect multithreaded access with server block plugin" do
    @db.extension :server_block
    q, q1 = Queue.new, Queue.new
    t = nil
    @db[:t].all
    @db.with_server(:host=>'a') do
      @db[:t].all
      t = Thread.new do
        @db[:t].all
        @db.with_server(:host=>'c') do
          @db[:t].all
          @db.with_server(:host=>'d'){@db[:t].all}
          q.push nil
          q1.pop
          @db[:t].all
        end
        @db[:t].all
      end
      q.pop
      @db.with_server(:host=>'b'){@db[:t].all}
      @db[:t].all
    end
    @db[:t].all
    q1.push nil
    t.join
    @db.sqls.should == ['SELECT * FROM t', 'SELECT * FROM t -- {:host=>"a"}', 'SELECT * FROM t', 'SELECT * FROM t -- {:host=>"c"}', 'SELECT * FROM t -- {:host=>"d"}', 'SELECT * FROM t -- {:host=>"b"}', 'SELECT * FROM t -- {:host=>"a"}', 'SELECT * FROM t', 'SELECT * FROM t -- {:host=>"c"}', 'SELECT * FROM t']
  end
end

# ===== ruby-sequel-4.1.1/spec/extensions/association_dependencies_spec.rb =====

require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")

describe "AssociationDependencies plugin" do
  before do
    @mods = []
    @c = Class.new(Sequel::Model)
    @c.plugin :association_dependencies
    @Artist = Class.new(@c).set_dataset(:artists)
    @Artist.dataset._fetch = {:id=>2, :name=>'Ar'}
    @Album = Class.new(@c).set_dataset(:albums)
    @Album.dataset._fetch = {:id=>1, :name=>'Al', :artist_id=>2}
    @Artist.columns :id, :name
    @Album.columns :id, :name, :artist_id
    @Artist.one_to_many :albums, :class=>@Album, :key=>:artist_id
    @Artist.one_to_one :first_album, :class=>@Album, :key=>:artist_id, :conditions=>{:position=>1}
    @Artist.many_to_many :other_artists, :class=>@Artist, :join_table=>:aoa, :left_key=>:l, :right_key=>:r
    @Album.many_to_one :artist, :class=>@Artist
    DB.reset
  end

  specify "should allow destroying associated many_to_one associated object" do
    @Album.add_association_dependencies :artist=>:destroy
    @Album.load(:id=>1, :name=>'Al', :artist_id=>2).destroy
    DB.sqls.should == ['DELETE FROM albums WHERE id = 1', 'SELECT * FROM artists WHERE (artists.id = 2) LIMIT 1', 'DELETE FROM artists WHERE id = 2']
  end

  specify "should allow deleting associated many_to_one associated object" do
    @Album.add_association_dependencies :artist=>:delete
    @Album.load(:id=>1, :name=>'Al', :artist_id=>2).destroy
    DB.sqls.should == ['DELETE FROM albums WHERE id = 1', 'DELETE FROM artists WHERE (artists.id = 2)']
  end

  specify "should allow destroying associated one_to_one associated object" do
    @Artist.add_association_dependencies :first_album=>:destroy
    @Artist.load(:id=>2, :name=>'Ar').destroy
    DB.sqls.should == ['SELECT * FROM albums WHERE ((position = 1) AND (albums.artist_id = 2)) LIMIT 1', 'DELETE FROM albums WHERE id = 1', 'DELETE FROM artists WHERE id = 2']
  end

  specify "should allow deleting associated one_to_one associated object" do
    @Artist.add_association_dependencies :first_album=>:delete
    @Artist.load(:id=>2, :name=>'Ar').destroy
    DB.sqls.should == ['DELETE FROM albums WHERE ((position = 1) AND (albums.artist_id = 2))', 'DELETE FROM artists WHERE id = 2']
  end

  specify "should allow destroying associated one_to_many objects" do
    @Artist.add_association_dependencies :albums=>:destroy
    @Artist.load(:id=>2, :name=>'Ar').destroy
    DB.sqls.should == ['SELECT * FROM albums WHERE (albums.artist_id = 2)', 'DELETE FROM albums WHERE id = 1', 'DELETE FROM artists WHERE id = 2']
  end

  specify "should allow deleting associated one_to_many objects" do
    @Artist.add_association_dependencies :albums=>:delete
    @Artist.load(:id=>2, :name=>'Ar').destroy
    DB.sqls.should == ['DELETE FROM albums WHERE (albums.artist_id = 2)', 'DELETE FROM artists WHERE id = 2']
  end

  specify "should allow nullifying associated one_to_one objects" do
    @Artist.add_association_dependencies :first_album=>:nullify
    @Artist.load(:id=>2, :name=>'Ar').destroy
    DB.sqls.should == ['UPDATE albums SET artist_id = NULL WHERE ((position = 1) AND (artist_id = 2))', 'DELETE FROM artists WHERE id = 2']
  end

  specify "should allow nullifying associated one_to_many objects" do
    @Artist.add_association_dependencies :albums=>:nullify
    @Artist.load(:id=>2, :name=>'Ar').destroy
    DB.sqls.should == ['UPDATE albums SET artist_id = NULL WHERE (artist_id = 2)', 'DELETE FROM artists WHERE id = 2']
  end

  specify "should allow nullifying associated many_to_many associations" do
    @Artist.add_association_dependencies :other_artists=>:nullify
    @Artist.load(:id=>2, :name=>'Ar').destroy
    DB.sqls.should == ['DELETE FROM aoa WHERE (l = 2)', 'DELETE FROM artists WHERE id = 2']
  end

  specify "should raise an error if attempting to nullify a many_to_one association" do
    proc{@Album.add_association_dependencies :artist=>:nullify}.should raise_error(Sequel::Error)
  end

  specify "should raise an error if using an unrecognized dependence action" do
    proc{@Album.add_association_dependencies :artist=>:blah}.should raise_error(Sequel::Error)
  end

  specify "should raise an error if a nonexistent association is used" do
    proc{@Album.add_association_dependencies :blah=>:delete}.should raise_error(Sequel::Error)
  end

  specify "should raise an error if an invalid association type is used" do
    @Artist.plugin :many_through_many
    @Artist.many_through_many :other_albums, [[:id, :id, :id]]
    proc{@Artist.add_association_dependencies :other_albums=>:nullify}.should raise_error(Sequel::Error)
  end

  specify "should raise an error if using a many_to_many association type without nullify" do
    proc{@Artist.add_association_dependencies :other_artists=>:delete}.should raise_error(Sequel::Error)
  end

  specify "should allow specifying association dependencies in the plugin call" do
    @Album.plugin :association_dependencies, :artist=>:destroy
    @Album.load(:id=>1, :name=>'Al', :artist_id=>2).destroy
    DB.sqls.should == ['DELETE FROM albums WHERE id = 1', 'SELECT * FROM artists WHERE (artists.id = 2) LIMIT 1', 'DELETE FROM artists WHERE id = 2']
  end

  specify "should work with subclasses" do
    c = Class.new(@Album)
    c.add_association_dependencies :artist=>:destroy
    c.load(:id=>1, :name=>'Al', :artist_id=>2).destroy
    DB.sqls.should == ['DELETE FROM albums WHERE id = 1', 'SELECT * FROM artists WHERE (artists.id = 2) LIMIT 1', 'DELETE FROM artists WHERE id = 2']

    @Album.load(:id=>1, :name=>'Al', :artist_id=>2).destroy
    DB.sqls.should == ['DELETE FROM albums WHERE id = 1']

    @Album.add_association_dependencies :artist=>:destroy
    c2 = Class.new(@Album)
    c2.load(:id=>1, :name=>'Al', :artist_id=>2).destroy
    DB.sqls.should == ['DELETE FROM albums WHERE id = 1', 'SELECT * FROM artists WHERE (artists.id = 2) LIMIT 1', 'DELETE FROM artists WHERE id = 2']
  end
end

# ===== ruby-sequel-4.1.1/spec/extensions/association_pks_spec.rb =====

require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")

describe "Sequel::Plugins::AssociationPks" do
  before do
    @db = Sequel.mock(:fetch=>proc do |sql|
      case sql
      when "SELECT id FROM albums WHERE (albums.artist_id = 1)"
        [{:id=>1}, {:id=>2}, {:id=>3}]
      when /SELECT tag_id FROM albums_tags WHERE \(album_id = (\d)\)/
        a = []
        a << {:tag_id=>1} if $1 == '1'
        a << {:tag_id=>2} if $1 != '3'
        a << {:tag_id=>3} if $1 == '2'
        a
      when "SELECT first, last FROM vocalists WHERE (vocalists.album_id = 1)"
        [{:first=>"F1", :last=>"L1"}, {:first=>"F2", :last=>"L2"}]
      when /SELECT first, last FROM albums_vocalists WHERE \(album_id = (\d)\)/
        a = []
        a << {:first=>"F1", :last=>"L1"} if $1 == '1'
        a << {:first=>"F2", :last=>"L2"} if $1 != '3'
        a << {:first=>"F3", :last=>"L3"} if $1 == '2'
        a
      when "SELECT id FROM instruments WHERE ((instruments.first = 'F1') AND (instruments.last = 'L1'))"
        [{:id=>1}, {:id=>2}]
      when /SELECT instrument_id FROM vocalists_instruments WHERE \(\((?:first|last) = '?[FL1](\d)/
        a = []
        a << {:instrument_id=>1} if $1 == "1"
        a << {:instrument_id=>2} if $1 != "3"
        a << {:instrument_id=>3} if $1 == "2"
        a
      when "SELECT year, week FROM hits WHERE ((hits.first = 'F1') AND (hits.last = 'L1'))"
        [{:year=>1997, :week=>1}, {:year=>1997, :week=>2}]
      when /SELECT year, week FROM vocalists_hits WHERE \(\((?:first|last) = '?[FL1](\d)/
        a = []
        a << {:year=>1997, :week=>1} if $1 == "1"
        a << {:year=>1997, :week=>2} if $1 != "3"
        a << {:year=>1997, :week=>3} if $1 == "2"
        a
      end
    end)
    @Artist = Class.new(Sequel::Model(@db[:artists]))
    @Artist.columns :id
    @Album = Class.new(Sequel::Model(@db[:albums]))
    @Album.columns :id, :artist_id
    @Tag = Class.new(Sequel::Model(@db[:tags]))
    @Tag.columns :id
    @Vocalist = Class.new(Sequel::Model(@db[:vocalists]))
    @Vocalist.columns :first, :last, :album_id
    @Vocalist.set_primary_key [:first, :last]
    @Instrument = Class.new(Sequel::Model(@db[:instruments]))
    @Instrument.columns :id, :first, :last
    @Hit = Class.new(Sequel::Model(@db[:hits]))
    @Hit.columns :year, :week, :first, :last
    @Hit.set_primary_key [:year, :week]
    @Artist.plugin :association_pks
    @Album.plugin :association_pks
    @Vocalist.plugin :association_pks
    @Artist.one_to_many :albums, :class=>@Album, :key=>:artist_id
    @Album.many_to_many :tags, :class=>@Tag, :join_table=>:albums_tags, :left_key=>:album_id
    @db.sqls
  end

  specify "should return correct associated pks for one_to_many associations" do
    @Artist.load(:id=>1).album_pks.should == [1,2,3]
    @Artist.load(:id=>2).album_pks.should == []
  end

  specify "should return correct associated pks for many_to_many associations" do
    @Album.load(:id=>1).tag_pks.should == [1, 2]
    @Album.load(:id=>2).tag_pks.should == [2, 3]
    @Album.load(:id=>3).tag_pks.should == []
  end

  specify "should set associated pks correctly for a one_to_many association" do
    @Artist.load(:id=>1).album_pks = [1, 2]
    @db.sqls.should == ["UPDATE albums SET artist_id = 1 WHERE (id IN (1, 2))", "UPDATE albums SET artist_id = NULL WHERE ((albums.artist_id = 1) AND (id NOT IN (1, 2)))"]
  end

  specify "should use associated class's primary key for a one_to_many association" do
    @Album.set_primary_key :foo
    @Artist.load(:id=>1).album_pks = [1, 2]
    @db.sqls.should == ["UPDATE albums SET artist_id = 1 WHERE (foo IN (1, 2))", "UPDATE albums SET artist_id = NULL WHERE ((albums.artist_id = 1) AND (foo NOT IN (1, 2)))"]
  end

  specify "should set associated pks correctly for a many_to_many association" do
    @Album.load(:id=>2).tag_pks = [1, 3]
    sqls = @db.sqls
    sqls[0].should == "DELETE FROM albums_tags WHERE ((album_id = 2) AND (tag_id NOT IN (1, 3)))"
    sqls[1].should == 'SELECT tag_id FROM albums_tags WHERE (album_id = 2)'
    sqls[2].should =~ /INSERT INTO albums_tags \((album_id, tag_id|tag_id, album_id)\) VALUES \((2, 1|1, 2)\)/
    sqls.length.should == 3
  end

  specify "should return correct right-side associated cpks for one_to_many associations" do
    @Album.one_to_many :vocalists, :class=>@Vocalist, :key=>:album_id
    @Album.load(:id=>1).vocalist_pks.should == [["F1", "L1"], ["F2", "L2"]]
    @Album.load(:id=>2).vocalist_pks.should == []
  end

  specify "should return correct right-side associated cpks for many_to_many associations" do
    @Album.many_to_many :vocalists, :class=>@Vocalist, :join_table=>:albums_vocalists, :left_key=>:album_id, :right_key=>[:first, :last]
    @Album.load(:id=>1).vocalist_pks.should == [["F1", "L1"], ["F2", "L2"]]
    @Album.load(:id=>2).vocalist_pks.should == [["F2", "L2"], ["F3", "L3"]]
    @Album.load(:id=>3).vocalist_pks.should == []
  end

  specify "should set associated right-side cpks correctly for a one_to_many association" do
    @Album.one_to_many :vocalists, :class=>@Vocalist, :key=>:album_id
    @Album.load(:id=>1).vocalist_pks = [["F1", "L1"], ["F2", "L2"]]
    @db.sqls.should == ["UPDATE vocalists SET album_id = 1 WHERE ((first, last) IN (('F1', 'L1'), ('F2', 'L2')))", "UPDATE vocalists SET album_id = NULL WHERE ((vocalists.album_id = 1) AND ((first, last) NOT IN (('F1', 'L1'), ('F2', 'L2'))))"]
  end

  specify "should set associated right-side cpks correctly for a many_to_many association" do
    @Album.many_to_many :vocalists, :class=>@Vocalist, :join_table=>:albums_vocalists, :left_key=>:album_id, :right_key=>[:first, :last]
    @Album.load(:id=>2).vocalist_pks = [["F1", "L1"], ["F2", "L2"]]
    sqls = @db.sqls
    sqls[0].should == "DELETE FROM albums_vocalists WHERE ((album_id = 2) AND ((first, last) NOT IN (('F1', 'L1'), ('F2', 'L2'))))"
    sqls[1].should == 'SELECT first, last FROM albums_vocalists WHERE (album_id = 2)'
    match = sqls[2].match(/INSERT INTO albums_vocalists \((.*)\) VALUES \((.*)\)/)
    Hash[match[1].split(', ').zip(match[2].split(', '))].should == {"first"=>"'F1'", "last"=>"'L1'", "album_id"=>"2"}
    sqls.length.should == 3
  end

  specify "should return correct associated pks for left-side cpks for one_to_many associations" do
    @Vocalist.one_to_many :instruments, :class=>@Instrument, :key=>[:first, :last]
    @Vocalist.load(:first=>'F1', :last=>'L1').instrument_pks.should == [1, 2]
    @Vocalist.load(:first=>'F2', :last=>'L2').instrument_pks.should == []
  end

  specify "should return correct associated pks for left-side cpks for many_to_many associations" do
    @Vocalist.many_to_many :instruments, :class=>@Instrument, :join_table=>:vocalists_instruments, :left_key=>[:first, :last]
    @Vocalist.load(:first=>'F1', :last=>'L1').instrument_pks.should == [1, 2]
    @Vocalist.load(:first=>'F2', :last=>'L2').instrument_pks.should == [2, 3]
    @Vocalist.load(:first=>'F3', :last=>'L3').instrument_pks.should == []
  end

  specify "should set associated pks correctly for left-side cpks for a one_to_many association" do
    @Vocalist.one_to_many :instruments, :class=>@Instrument, :key=>[:first, :last]
    @Vocalist.load(:first=>'F1', :last=>'L1').instrument_pks = [1, 2]
    sqls = @db.sqls
    sqls[0].should =~ /UPDATE instruments SET (first = 'F1', last = 'L1'|last = 'L1', first = 'F1') WHERE \(id IN \(1, 2\)\)/
    sqls[1].should =~ /UPDATE instruments SET (first = NULL, last = NULL|last = NULL, first = NULL) WHERE \(\(instruments.first = 'F1'\) AND \(instruments.last = 'L1'\) AND \(id NOT IN \(1, 2\)\)\)/
    sqls.length.should == 2
  end

  specify "should set associated pks correctly for left-side cpks for a many_to_many association" do
    @Vocalist.many_to_many :instruments, :class=>@Instrument, :join_table=>:vocalists_instruments, :left_key=>[:first, :last]
    @Vocalist.load(:first=>'F2', :last=>'L2').instrument_pks = [1, 2]
    sqls = @db.sqls
    sqls[0].should == "DELETE FROM vocalists_instruments WHERE ((first = 'F2') AND (last = 'L2') AND (instrument_id NOT IN (1, 2)))"
    sqls[1].should == "SELECT instrument_id FROM vocalists_instruments WHERE ((first = 'F2') AND (last = 'L2'))"
    match = sqls[2].match(/INSERT INTO vocalists_instruments \((.*)\) VALUES \((.*)\)/)
    Hash[match[1].split(', ').zip(match[2].split(', '))].should == {"first"=>"'F2'", "last"=>"'L2'", "instrument_id"=>"1"}
    sqls.length.should == 3
  end

  specify "should return correct right-side associated cpks for left-side cpks for one_to_many associations" do
    @Vocalist.one_to_many :hits, :class=>@Hit, :key=>[:first, :last]
    @Vocalist.load(:first=>'F1', :last=>'L1').hit_pks.should == [[1997, 1], [1997, 2]]
    @Vocalist.load(:first=>'F2', :last=>'L2').hit_pks.should == []
  end

  specify "should return correct right-side associated cpks for left-side cpks for many_to_many associations" do
    @Vocalist.many_to_many :hits, :class=>@Hit, :join_table=>:vocalists_hits, :left_key=>[:first, :last], :right_key=>[:year, :week]
    @Vocalist.load(:first=>'F1', :last=>'L1').hit_pks.should == [[1997, 1], [1997, 2]]
    @Vocalist.load(:first=>'F2', :last=>'L2').hit_pks.should == [[1997, 2], [1997, 3]]
    @Vocalist.load(:first=>'F3', :last=>'L3').hit_pks.should == []
  end

  specify "should set associated right-side cpks correctly for left-side cpks for a one_to_many association" do
    @Vocalist.one_to_many :hits, :class=>@Hit, :key=>[:first, :last], :order=>:week
    @Vocalist.load(:first=>'F1', :last=>'L1').hit_pks = [[1997, 1], [1997, 2]]
    sqls = @db.sqls
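    # The SET and INSERT clauses checked below are built from hashes, so
    # their column order is not guaranteed; the assertions therefore use
    # alternation regexps (and Hash comparisons for the INSERTs) rather
    # than exact string matches.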
    sqls[0].should =~ /UPDATE hits SET (first = 'F1', last = 'L1'|last = 'L1', first = 'F1') WHERE \(\(year, week\) IN \(\(1997, 1\), \(1997, 2\)\)\)/
    sqls[1].should =~ /UPDATE hits SET (first = NULL, last = NULL|last = NULL, first = NULL) WHERE \(\(hits.first = 'F1'\) AND \(hits.last = 'L1'\) AND \(\(year, week\) NOT IN \(\(1997, 1\), \(1997, 2\)\)\)\)/
    sqls.length.should == 2
  end

  specify "should set associated right-side cpks correctly for left-side cpks for a many_to_many association" do
    @Vocalist.many_to_many :hits, :class=>@Hit, :join_table=>:vocalists_hits, :left_key=>[:first, :last], :right_key=>[:year, :week]
    @Vocalist.load(:first=>'F2', :last=>'L2').hit_pks = [[1997, 1], [1997, 2]]
    sqls = @db.sqls
    sqls[0].should == "DELETE FROM vocalists_hits WHERE ((first = 'F2') AND (last = 'L2') AND ((year, week) NOT IN ((1997, 1), (1997, 2))))"
    sqls[1].should == "SELECT year, week FROM vocalists_hits WHERE ((first = 'F2') AND (last = 'L2'))"
    match = sqls[2].match(/INSERT INTO vocalists_hits \((.*)\) VALUES \((.*)\)/)
    Hash[match[1].split(', ').zip(match[2].split(', '))].should == {"first"=>"'F2'", "last"=>"'L2'", "year"=>"1997", "week"=>"1"}
    sqls.length.should == 3
  end

  specify "should use transactions if the object is configured to use transactions" do
    artist = @Artist.load(:id=>1)
    artist.use_transactions = true
    artist.album_pks = [1, 2]
    @db.sqls.should == ["BEGIN", "UPDATE albums SET artist_id = 1 WHERE (id IN (1, 2))", "UPDATE albums SET artist_id = NULL WHERE ((albums.artist_id = 1) AND (id NOT IN (1, 2)))", "COMMIT"]

    album = @Album.load(:id=>2)
    album.use_transactions = true
    album.tag_pks = [1, 3]
    sqls = @db.sqls
    sqls[0].should == "BEGIN"
    sqls[1].should == "DELETE FROM albums_tags WHERE ((album_id = 2) AND (tag_id NOT IN (1, 3)))"
    sqls[2].should == 'SELECT tag_id FROM albums_tags WHERE (album_id = 2)'
    sqls[3].should =~ /INSERT INTO albums_tags \((album_id, tag_id|tag_id, album_id)\) VALUES \((2, 1|1, 2)\)/
    sqls[4].should == "COMMIT"
    sqls.length.should == 5
  end

  specify "should automatically convert keys to numbers if the primary key is an integer for one_to_many associations" do
    @Album.db_schema[:id][:type] = :integer
    @Artist.load(:id=>1).album_pks = %w'1 2'
    @db.sqls.should == ["UPDATE albums SET artist_id = 1 WHERE (id IN (1, 2))", "UPDATE albums SET artist_id = NULL WHERE ((albums.artist_id = 1) AND (id NOT IN (1, 2)))"]
  end

  specify "should not automatically convert keys if the primary key is not an integer for one_to_many associations" do
    @Album.db_schema[:id][:type] = :string
    @Artist.load(:id=>1).album_pks = %w'1 2'
    @db.sqls.should == ["UPDATE albums SET artist_id = 1 WHERE (id IN ('1', '2'))", "UPDATE albums SET artist_id = NULL WHERE ((albums.artist_id = 1) AND (id NOT IN ('1', '2')))"]
  end

  specify "should automatically convert keys to numbers if the primary key is an integer for many_to_many associations" do
    @Tag.db_schema[:id][:type] = :integer
    @Album.load(:id=>2).tag_pks = %w'1 3'
    sqls = @db.sqls
    sqls[0].should == "DELETE FROM albums_tags WHERE ((album_id = 2) AND (tag_id NOT IN (1, 3)))"
    sqls[1].should == 'SELECT tag_id FROM albums_tags WHERE (album_id = 2)'
    sqls[2].should =~ /INSERT INTO albums_tags \((album_id, tag_id|tag_id, album_id)\) VALUES \((2, 1|1, 2)\)/
    sqls.length.should == 3
  end

  specify "should not automatically convert keys to numbers if the primary key is not an integer for many_to_many associations" do
    @Tag.db_schema[:id][:type] = :string
    @Album.load(:id=>2).tag_pks = %w'1 3'
    sqls = @db.sqls
    sqls[0].should == "DELETE FROM albums_tags WHERE ((album_id = 2) AND (tag_id NOT IN ('1', '3')))"
    sqls[1].should == 'SELECT tag_id FROM albums_tags WHERE (album_id = 2)'
    sqls[2].should =~ /INSERT INTO albums_tags \((album_id, tag_id|tag_id, album_id)\) VALUES \((2, '1'|'1', 2)\)/
    sqls[3].should =~ /INSERT INTO albums_tags \((album_id, tag_id|tag_id, album_id)\) VALUES \((2, '3'|'3', 2)\)/
    sqls.length.should == 4
  end

  specify "should automatically convert keys to numbers for appropriate integer primary key for composite key associations" do
    @Hit.db_schema[:year][:type] = :integer
    @Hit.db_schema[:week][:type] = :integer
    @Vocalist.many_to_many :hits, :class=>@Hit, :join_table=>:vocalists_hits, :left_key=>[:first, :last], :right_key=>[:year, :week]
    @Vocalist.load(:first=>'F2', :last=>'L2').hit_pks = [['1997', '1'], ['1997', '2']]
    sqls = @db.sqls
    sqls[0].should == "DELETE FROM vocalists_hits WHERE ((first = 'F2') AND (last = 'L2') AND ((year, week) NOT IN ((1997, 1), (1997, 2))))"
    sqls[1].should == "SELECT year, week FROM vocalists_hits WHERE ((first = 'F2') AND (last = 'L2'))"
    match = sqls[2].match(/INSERT INTO vocalists_hits \((.*)\) VALUES \((.*)\)/)
    Hash[match[1].split(', ').zip(match[2].split(', '))].should == {"first"=>"'F2'", "last"=>"'L2'", "year"=>"1997", "week"=>"1"}
    sqls.length.should == 3

    @Vocalist.db_schema[:first][:type] = :integer
    @Vocalist.db_schema[:last][:type] = :integer
    @Album.one_to_many :vocalists, :class=>@Vocalist, :key=>:album_id
    @Album.load(:id=>1).vocalist_pks = [["11", "11"], ["12", "12"]]
    @db.sqls.should == ["UPDATE vocalists SET album_id = 1 WHERE ((first, last) IN ((11, 11), (12, 12)))", "UPDATE vocalists SET album_id = NULL WHERE ((vocalists.album_id = 1) AND ((first, last) NOT IN ((11, 11), (12, 12))))"]

    @Album.many_to_many :vocalists, :class=>@Vocalist, :join_table=>:albums_vocalists, :left_key=>:album_id, :right_key=>[:first, :last]
    @Album.load(:id=>2).vocalist_pks = [["11", "11"], ["12", "12"]]
    sqls = @db.sqls
    sqls[0].should == "DELETE FROM albums_vocalists WHERE ((album_id = 2) AND ((first, last) NOT IN ((11, 11), (12, 12))))"
    sqls[1].should == 'SELECT first, last FROM albums_vocalists WHERE (album_id = 2)'
    match = sqls[2].match(/INSERT INTO albums_vocalists \((.*)\) VALUES \((.*)\)/)
    Hash[match[1].split(', ').zip(match[2].split(', '))].should == {"first"=>"11", "last"=>"11", "album_id"=>"2"}
    match = sqls[3].match(/INSERT INTO albums_vocalists \((.*)\) VALUES \((.*)\)/)
    Hash[match[1].split(', ').zip(match[2].split(', '))].should == {"first"=>"12", "last"=>"12", "album_id"=>"2"}
    sqls.length.should == 4
  end
end

# ===== ruby-sequel-4.1.1/spec/extensions/association_proxies_spec.rb =====

require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")

describe "Sequel::Plugins::AssociationProxies" do
  before do
    class ::Tag < Sequel::Model
    end
    class ::Item < Sequel::Model
      plugin :association_proxies
      many_to_many :tags, :extend=>Module.new{def size; count end}
    end
    @i = Item.load(:id=>1)
    @t = @i.tags
    Item.db.reset
  end

  after do
    Object.send(:remove_const, :Tag)
    Object.send(:remove_const, :Item)
  end

  it "should send method calls to the associated object array if sent an array method" do
    @i.associations.has_key?(:tags).should == false
    @t.select{|x| false}.should == []
    @i.associations.has_key?(:tags).should == true
  end

  it "should send method calls to the association dataset if sent a non-array method" do
    @i.associations.has_key?(:tags).should == false
    @t.filter(:a=>1).sql.should == "SELECT tags.* FROM tags INNER JOIN items_tags ON ((items_tags.tag_id = tags.id) AND (items_tags.item_id = 1)) WHERE (a = 1)"
    @i.associations.has_key?(:tags).should == false
  end

  it "should accept block to plugin to specify which methods to proxy to dataset" do
    Item.plugin :association_proxies do |opts|
      opts[:method] == :where || opts[:arguments].length == 2 || opts[:block]
    end
    @i.associations.has_key?(:tags).should == false
    @t.where(:a=>1).sql.should == "SELECT tags.* FROM tags INNER JOIN items_tags ON ((items_tags.tag_id = tags.id) AND (items_tags.item_id = 1)) WHERE (a = 1)"
    @t.filter('a = ?', 1).sql.should == "SELECT tags.* FROM tags INNER JOIN items_tags ON ((items_tags.tag_id = tags.id) AND (items_tags.item_id = 1)) WHERE (a = 1)"
    @t.filter{{:a=>1}}.sql.should == "SELECT tags.* FROM tags INNER JOIN items_tags ON ((items_tags.tag_id = tags.id) AND (items_tags.item_id = 1)) WHERE (a = 1)"
    @i.associations.has_key?(:tags).should == false

    Item.plugin :association_proxies do |opts|
      proxy_arg = opts[:proxy_argument]
      proxy_block = opts[:proxy_block]
      cached = opts[:instance].associations[opts[:reflection][:name]]
      is_size = opts[:method] == :size
      is_size && !cached && !proxy_arg && !proxy_block
    end
    @t.size.should == 1
    Item.db.sqls.should == ["SELECT count(*) AS count FROM tags INNER JOIN items_tags ON ((items_tags.tag_id = tags.id) AND (items_tags.item_id = 1)) LIMIT 1"]
    @i.tags{|ds| ds}.size.should == 1
    Item.db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN items_tags ON ((items_tags.tag_id = tags.id) AND (items_tags.item_id = 1))"]
    @i.tags(true).size.should == 1
    Item.db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN items_tags ON ((items_tags.tag_id = tags.id) AND (items_tags.item_id = 1))"]
    @t.size.should == 1
    Item.db.sqls.should == []
  end

  it "should reload the cached association if sent an array method and the reload flag was given" do
    @t.select{|x| false}.should == []
    Item.db.sqls.length.should == 1
    @t.select{|x| false}.should == []
    Item.db.sqls.length.should == 0
    @i.tags(true).select{|x| false}.should == []
    Item.db.sqls.length.should == 1
    @t.filter(:a=>1).sql.should == "SELECT tags.* FROM tags INNER JOIN items_tags ON ((items_tags.tag_id = tags.id) AND (items_tags.item_id = 1)) WHERE (a = 1)"
    Item.db.sqls.length.should == 0
  end

  it "should not return a proxy object for associations that do not return an array" do
    Item.many_to_one :tag
    proc{@i.tag.filter(:a=>1)}.should raise_error(NoMethodError)
    Tag.one_to_one :item
    proc{Tag.load(:id=>1, :item_id=>2).item.filter(:a=>1)}.should raise_error(NoMethodError)
  end

  it "should work correctly in subclasses" do
    i = Class.new(Item).load(:id=>1)
    i.associations.has_key?(:tags).should == false
    i.tags.select{|x| false}.should == []
    i.associations.has_key?(:tags).should == true
    i.tags.filter(:a=>1).sql.should == "SELECT tags.* FROM tags INNER JOIN items_tags ON ((items_tags.tag_id = tags.id) AND (items_tags.item_id = 1)) WHERE (a = 1)"
  end
end

# ===== ruby-sequel-4.1.1/spec/extensions/auto_validations_spec.rb =====

require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")

describe "Sequel::Plugins::AutoValidations" do
  before do
    db = Sequel.mock(:fetch=>{:v=>1})
    def db.schema_parse_table(*) true; end
    def db.schema(t, *)
      t = t.first_source if t.is_a?(Sequel::Dataset)
      return [] if t != :test
      [[:id, {:primary_key=>true, :type=>:integer, :allow_null=>false}],
       [:name, {:primary_key=>false, :type=>:string, :allow_null=>false}],
       [:num, {:primary_key=>false, :type=>:integer, :allow_null=>true}],
       [:d, {:primary_key=>false, :type=>:date, :allow_null=>false}],
       [:nnd, {:primary_key=>false, :type=>:string, :allow_null=>false, :ruby_default=>'nnd'}]]
    end
    def db.supports_index_parsing?() true end
    def db.indexes(t, *)
      return [] if t != :test
      {:a=>{:columns=>[:name, :num], :unique=>true}, :b=>{:columns=>[:num], :unique=>false}}
    end
    @c = Class.new(Sequel::Model(db[:test]))
    @c.send(:def_column_accessor, :id, :name, :num, :d, :nnd)
    @c.raise_on_typecast_failure = false
    @c.plugin :auto_validations
    @m = @c.new
    db.sqls
  end

  it "should have automatically created validations" do
    @m.valid?.should be_false
    @m.errors.should == {:d=>["is not present"], :name=>["is not present"]}
    @m.name = ''
    @m.valid?.should be_false
    @m.errors.should == {:d=>["is not present"]}
    @m.set(:d=>'/', :num=>'a', :name=>'1')
    @m.valid?.should be_false
    @m.errors.should == {:d=>["is not a valid date"], :num=>["is not a valid integer"]}
    @m.set(:d=>Date.today, :num=>1)
    @m.valid?.should be_false
    @m.errors.should == {[:name, :num]=>["is already taken"]}
  end

  it "should handle databases that don't support index parsing" do
    def (@m.db).supports_index_parsing?() false end
    @m.model.send(:setup_auto_validations)
    @m.set(:d=>Date.today, :num=>1, :name=>'1')
    @m.valid?.should be_true
  end

  it "should support :not_null=>:presence option" do
    @c.plugin :auto_validations, :not_null=>:presence
    @m.set(:d=>Date.today, :num=>'')
    @m.valid?.should be_false
    @m.errors.should == {:name=>["is not present"]}
  end

  it "should automatically validate explicit nil values for columns with not nil defaults" do
    @m.set(:d=>Date.today, :name=>1, :nnd=>nil)
    @m.id = nil
    @m.valid?.should be_false
    @m.errors.should == {:id=>["is not present"], :nnd=>["is not present"]}
  end

  it "should allow skipping validations by type" do
    @c = Class.new(@c)
    @m = @c.new
    @c.skip_auto_validations(:not_null)
    @m.valid?.should be_true
    @m.set(:d=>'/', :num=>'a', :name=>'1')
    @m.valid?.should be_false
    @m.errors.should == {:d=>["is not a valid date"], :num=>["is not a valid integer"]}
    @c.skip_auto_validations(:types)
    @m.valid?.should be_false
    @m.errors.should == {[:name, :num]=>["is already taken"]}
    @c.skip_auto_validations(:unique)
    @m.valid?.should be_true
  end

  it "should allow skipping all auto validations" do
    @c = Class.new(@c)
    @m = @c.new
    @c.skip_auto_validations(:all)
    @m.valid?.should be_true
    @m.set(:d=>'/', :num=>'a', :name=>'1')
    @m.valid?.should be_true
  end

  it "should work correctly in subclasses" do
    @c = Class.new(@c)
    @m = @c.new
    @m.valid?.should be_false
    @m.errors.should == {:d=>["is not present"], :name=>["is not present"]}
    @m.set(:d=>'/', :num=>'a', :name=>'1')
    @m.valid?.should be_false
    @m.errors.should == {:d=>["is not a valid date"], :num=>["is not a valid integer"]}
    @m.set(:d=>Date.today, :num=>1)
    @m.valid?.should be_false
    @m.errors.should == {[:name, :num]=>["is already taken"]}
  end

  it "should work correctly when changing the dataset" do
    @c.set_dataset(@c.db[:foo])
    @c.new.valid?.should be_true
  end
end

# ===== ruby-sequel-4.1.1/spec/extensions/blacklist_security_spec.rb =====

require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")

describe Sequel::Model, "#(set|update)_except" do
  before do
    @c = Class.new(Sequel::Model(:items))
    @c.class_eval do
      plugin :blacklist_security
      set_primary_key :id
      columns :x, :y, :z, :id
      set_restricted_columns :y
    end
    @c.strict_param_setting = false
    @o1 = @c.new
    DB.reset
  end

  it "should raise errors if not all hash fields can be set and strict_param_setting is true" do
    @c.strict_param_setting = true
    proc{@c.new.set_except({:x => 1, :y => 2, :z=>3, :id=>4}, :x, :y)}.should raise_error(Sequel::Error)
    proc{@c.new.set_except({:x => 1, :y => 2, :z=>3}, :x, :y)}.should raise_error(Sequel::Error)
    (o = @c.new).set_except({:z => 3}, :x, :y)
    o.values.should == {:z=>3}
  end

  it "#set_except should not set given attributes or the primary key" do
    @o1.set_except({:x => 1, :y => 2, :z=>3, :id=>4}, [:y, :z])
    @o1.values.should == {:x => 1}
    @o1.set_except({:x => 4, :y => 2, :z=>3, :id=>4}, :y, :z)
    @o1.values.should == {:x => 4}
  end

  it "#update_except should not update given attributes" do
    @o1.update_except({:x => 1, :y => 2, :z=>3, :id=>4}, [:y, :z])
    DB.sqls.should == ["INSERT INTO items (x) VALUES (1)", "SELECT * FROM items WHERE (id = 10) LIMIT 1"]
    @c.new.update_except({:x => 1, :y => 2, :z=>3, :id=>4}, :y, :z)
    DB.sqls.should == ["INSERT INTO items (x) VALUES (1)", "SELECT * FROM items WHERE (id = 10) LIMIT 1"]
  end
end

describe Sequel::Model, ".restricted_columns" do
  before do
    @c = Class.new(Sequel::Model(:blahblah))
    @c.class_eval do
      plugin :blacklist_security
      columns :x, :y, :z
    end
    @c.strict_param_setting = false
    @c.instance_variable_set(:@columns, [:x, :y, :z])
  end

  it "should set the restricted columns correctly" do
    @c.restricted_columns.should == nil
    @c.set_restricted_columns :x
    @c.restricted_columns.should == [:x]
    @c.set_restricted_columns :x, :y
    @c.restricted_columns.should == [:x, :y]
  end

  it "should not set restricted columns by default" do
    @c.set_restricted_columns :z
    i = @c.new(:x => 1, :y => 2, :z => 3)
    i.values.should == {:x => 1, :y => 2}
    i.set(:x => 4, :y => 5, :z => 6)
    i.values.should == {:x => 4, :y => 5}
    @c.instance_dataset._fetch = @c.dataset._fetch = {:x => 7}
    i = @c.new
    i.update(:x => 7, :z => 9)
    i.values.should == {:x => 7}
    DB.sqls.should == ["INSERT INTO blahblah (x) VALUES (7)", "SELECT * FROM blahblah WHERE (id = 10) LIMIT 1"]
  end

  it "should have allowed take precedence over restricted" do
    @c.set_allowed_columns :x, :y
    @c.set_restricted_columns :y, :z
    i = @c.new(:x => 1, :y => 2, :z => 3)
    i.values.should == {:x => 1, :y => 2}
    i.set(:x => 4, :y => 5, :z => 6)
    i.values.should == {:x => 4, :y => 5}
    @c.instance_dataset._fetch = @c.dataset._fetch = {:y => 7}
    i = @c.new
    i.update(:y => 7, :z => 9)
    i.values.should == {:y => 7}
    DB.sqls.should == ["INSERT INTO blahblah (y) VALUES (7)", "SELECT * FROM blahblah WHERE (id = 10) LIMIT 1"]
  end
end

# ===== ruby-sequel-4.1.1/spec/extensions/blank_spec.rb =====

require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper')

Sequel.extension :blank

describe "Object#blank?" do
  specify "it should be true if the object responds true to empty?" do
    [].blank?.should == true
    {}.blank?.should == true
    o = Object.new
    def o.empty?; true; end
    o.blank?.should == true
  end

  specify "it should be false if the object doesn't respond true to empty?" do
    [2].blank?.should == false
    {1=>2}.blank?.should == false
    Object.new.blank?.should == false
  end
end

describe "Numeric#blank?" do
  specify "it should always be false" do
    1.blank?.should == false
    0.blank?.should == false
    -1.blank?.should == false
    1.0.blank?.should == false
    0.0.blank?.should == false
    -1.0.blank?.should == false
    10000000000000000.blank?.should == false
    -10000000000000000.blank?.should == false
    10000000000000000.0.blank?.should == false
    -10000000000000000.0.blank?.should == false
  end
end

describe "NilClass#blank?" do
  specify "it should always be true" do
    nil.blank?.should == true
  end
end

describe "TrueClass#blank?" do
  specify "it should always be false" do
    true.blank?.should == false
  end
end

describe "FalseClass#blank?" do
  specify "it should always be true" do
    false.blank?.should == true
  end
end

describe "String#blank?" do
  specify "it should be true if the string is empty" do
    ''.blank?.should == true
  end

  specify "it should be true if the string is composed of just whitespace" do
    ' '.blank?.should == true
    "\r\n\t".blank?.should == true
    (' '*4000).blank?.should == true
    ("\r\n\t"*4000).blank?.should == true
  end

  specify "it should be false if the string has any non whitespace characters" do
    '1'.blank?.should == false
    ("\r\n\t"*4000 + 'a').blank?.should == false
    ("\r\na\t"*4000).blank?.should == false
  end
end

# ===== ruby-sequel-4.1.1/spec/extensions/boolean_readers_spec.rb =====

require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")

describe Sequel::Model, "BooleanReaders plugin" do
  before do
    @db = Sequel::Database.new
    def @db.supports_schema_parsing?() true end
    def @db.schema(*args)
      [[:id, {}], [:z, {:type=>:integer, :db_type=>'tinyint(1)'}], [:b, {:type=>:boolean, :db_type=>'boolean'}]]
    end
    @c = Class.new(Sequel::Model(@db[:items]))
    @p = proc do
      @columns = [:id, :b, :z]
      def columns; @columns; end
    end
    @c.instance_eval(&@p)
  end

  specify "should create attribute? readers for all boolean attributes" do
    @c.plugin(:boolean_readers)
    o = @c.new
    o.b?.should == nil
    o.b = '1'
    o.b?.should == true
    o.b = '0'
    o.b?.should == false
    o.b = ''
    o.b?.should == nil
  end

  specify "should not create attribute? readers for non-boolean attributes" do
    @c.plugin(:boolean_readers)
    proc{@c.new.z?}.should raise_error(NoMethodError)
    proc{@c.new.id?}.should raise_error(NoMethodError)
  end

  specify "should accept a block to determine if an attribute is boolean" do
    @c.plugin(:boolean_readers){|c| db_schema[c][:db_type] == 'tinyint(1)'}
    proc{@c.new.b?}.should raise_error(NoMethodError)
    o = @c.new
    o.z.should == nil
    o.z?.should == nil
    o.z = '1'
    o.z.should == 1
    o.z?.should == true
    o.z = '0'
    o.z.should == 0
    o.z?.should == false
    o.z = ''
    o.z.should == nil
    o.z?.should == nil
  end

  specify "should create boolean readers when set_dataset is defined" do
    c = Class.new(Sequel::Model(@db))
    c.instance_eval(&@p)
    c.plugin(:boolean_readers)
    c.set_dataset(@db[:a])
    o = c.new
    o.b?.should == nil
    o.b = '1'
    o.b?.should == true
    o.b = '0'
    o.b?.should == false
    o.b = ''
    o.b?.should == nil
    proc{o.i?}.should raise_error(NoMethodError)

    c = Class.new(Sequel::Model(@db))
    c.instance_eval(&@p)
    c.plugin(:boolean_readers){|x| db_schema[x][:db_type] == 'tinyint(1)'}
    c.set_dataset(@db[:a])
    o = c.new
    o.z.should == nil
    o.z?.should == nil
    o.z = '1'
    o.z.should == 1
    o.z?.should == true
    o.z = '0'
    o.z.should == 0
    o.z?.should == false
    o.z = ''
    o.z.should == nil
    o.z?.should == nil
    proc{o.b?}.should raise_error(NoMethodError)
  end

  specify "should handle cases where getting the columns raises an error" do
    @c.meta_def(:columns){raise Sequel::Error}
    proc{@c.plugin(:boolean_readers)}.should_not raise_error
    proc{@c.new.b?}.should raise_error(NoMethodError)
  end
end

# ===== ruby-sequel-4.1.1/spec/extensions/caching_spec.rb =====

require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")

describe Sequel::Model, "caching" do
  before do
    @cache_class = Class.new(Hash) do
      attr_accessor :ttl
      def set(k, v, ttl); self[k] = v; @ttl = ttl; end
      def get(k); self[k]; end
    end
    cache = @cache_class.new
    @cache = cache

    @memcached_class = Class.new(Hash) do
      attr_accessor :ttl
      def set(k, v, ttl); self[k] = v; @ttl = ttl; end
      def get(k); if self[k] then return self[k]; else raise ArgumentError; end end
      def delete(k); if self[k] then super; else raise ArgumentError; end end
    end
    cache2 = @memcached_class.new
    @memcached = cache2

    @c = Class.new(Sequel::Model(:items))
    @c.class_eval do
      plugin :caching, cache
      def self.name; 'Item' end
      columns :name, :id
    end

    @c3 = Class.new(Sequel::Model(:items))
    @c3.class_eval do
      plugin :caching, cache2
      def self.name; 'Item' end
      columns :name, :id
    end

    @c4 = Class.new(Sequel::Model(:items))
    @c4.class_eval do
      plugin :caching, cache2, :ignore_exceptions => true
      def self.name; 'Item' end
      columns :name, :id
    end

    @dataset = @c.dataset = @c3.dataset = @c4.dataset
    @dataset._fetch = {:name => 'sharon', :id => 1}
    @dataset.numrows = 1
    @c2 = Class.new(@c) do
      def self.name; 'SubItem' end
    end
    @c.db.reset
  end

  it "should set the model's cache store" do
    @c.cache_store.should be(@cache)
    @c2.cache_store.should be(@cache)
  end

  it "should have a default ttl of 3600" do
    @c.cache_ttl.should == 3600
    @c2.cache_ttl.should == 3600
  end

  it "should take a ttl option" do
    c = Class.new(Sequel::Model(:items))
    c.plugin :caching, @cache, :ttl => 1234
    c.cache_ttl.should == 1234
    Class.new(c).cache_ttl.should == 1234
  end

  it "should allow overriding the ttl option via a plugin :caching call" do
    @c.plugin :caching, @cache, :ttl => 1234
    @c.cache_ttl.should == 1234
    Class.new(@c).cache_ttl.should == 1234
  end

  it "should offer a set_cache_ttl method for setting the ttl" do
    @c.cache_ttl.should == 3600
    @c.set_cache_ttl 1234
    @c.cache_ttl.should == 1234
    Class.new(@c).cache_ttl.should == 1234
  end

  it "should generate a cache key appropriate to the class via the Model#cache_key" do
    m = @c.new
    m.values[:id] = 1
    m.cache_key.should == "#{m.class}:1"
    m = @c2.new
    m.values[:id] = 1
    m.cache_key.should == "#{m.class}:1"

    # custom primary key
    @c.set_primary_key :ttt
    m = @c.new
    m.values[:ttt] = 333
    m.cache_key.should == "#{m.class}:333"
    c = Class.new(@c)
    m = c.new
    m.values[:ttt] = 333
    m.cache_key.should == "#{m.class}:333"

    # composite primary key
    @c.set_primary_key [:a, :b, :c]
    m = @c.new
    m.values[:a] = 123
    m.values[:c] = 456
    m.values[:b] = 789
    m.cache_key.should == "#{m.class}:123,789,456"
    c = Class.new(@c)
    m = c.new
    m.values[:a] = 123
    m.values[:c] = 456
    m.values[:b] = 789
    m.cache_key.should == "#{m.class}:123,789,456"
  end

  it "should generate a cache key via the Model.cache_key method" do
    @c.cache_key(1).should == "#{@c}:1"
    @c.cache_key([1, 2]).should == "#{@c}:1,2"
  end

  it "should raise error if attempting to generate cache_key and primary key value is null" do
    m = @c.new
    proc {m.cache_key}.should raise_error(Sequel::Error)
    m.values[:id] = 1
    proc {m.cache_key}.should_not raise_error

    m = @c2.new
    proc {m.cache_key}.should raise_error(Sequel::Error)
    m.values[:id] = 1
    proc {m.cache_key}.should_not raise_error
  end

  it "should not raise error if trying to save a new record" do
    proc {@c.new(:name=>'blah').save}.should_not raise_error
    proc {@c.create(:name=>'blah')}.should_not raise_error
    proc {@c2.new(:name=>'blah').save}.should_not raise_error
    proc {@c2.create(:name=>'blah')}.should_not raise_error
  end

  it "should set the cache when reading from the database" do
    @c.db.sqls.should == []
    @cache.should be_empty
    m = @c[1]
    @c.db.sqls.should == ['SELECT * FROM items WHERE id = 1']
    m.values.should == {:name=>"sharon", :id=>1}
    @cache[m.cache_key].should == m
    m2 = @c[1]
    @c.db.sqls.should == []
    m2.should == m
    m2.values.should == {:name=>"sharon", :id=>1}

    m = @c2[1]
    @c.db.sqls.should == ['SELECT * FROM items WHERE id = 1']
    m.values.should == {:name=>"sharon", :id=>1}
    @cache[m.cache_key].should == m
    m2 = @c2[1]
    @c.db.sqls.should == []
    m2.should == m
    m2.values.should == {:name=>"sharon", :id=>1}
  end

  it "should handle lookups by nil primary keys" do
    @c[nil].should == nil
    @c.db.sqls.should == []
  end

  it "should delete the cache when writing to the database" do
    m = @c[1]
    @cache[m.cache_key].should == m
    m.name = 'hey'
    m.save
    @cache.has_key?(m.cache_key).should be_false
    @c.db.sqls.should == ["SELECT * FROM items WHERE id = 1", "UPDATE items SET name = 'hey' WHERE (id = 1)"]

    m = @c2[1]
    @cache[m.cache_key].should == m
    m.name = 'hey'
    m.save
    @cache.has_key?(m.cache_key).should be_false
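    # As the assertions above show, the plugin invalidates by deleting the
    # cached entry when a record is saved, rather than rewriting it; the SQL
    # check below confirms the UPDATE still goes to the database.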
@c.db.sqls.should == ["SELECT * FROM items WHERE id = 1", "UPDATE items SET name = 'hey' WHERE (id = 1)"] end it "should delete the cache when deleting the record" do m = @c[1] @cache[m.cache_key].should == m m.delete @cache.has_key?(m.cache_key).should be_false @c.db.sqls.should == ["SELECT * FROM items WHERE id = 1", "DELETE FROM items WHERE id = 1"] m = @c2[1] @cache[m.cache_key].should == m m.delete @cache.has_key?(m.cache_key).should be_false @c.db.sqls.should == ["SELECT * FROM items WHERE id = 1", "DELETE FROM items WHERE id = 1"] end it "should support #[] as a shortcut to #find with hash" do m = @c[:id => 3] @cache[m.cache_key].should be_nil @c.db.sqls.should == ["SELECT * FROM items WHERE (id = 3) LIMIT 1"] m = @c[1] @cache[m.cache_key].should == m @c.db.sqls.should == ["SELECT * FROM items WHERE id = 1"] @c[:id => 4] @c.db.sqls.should == ["SELECT * FROM items WHERE (id = 4) LIMIT 1"] m = @c2[:id => 3] @cache[m.cache_key].should be_nil @c.db.sqls.should == ["SELECT * FROM items WHERE (id = 3) LIMIT 1"] m = @c2[1] @cache[m.cache_key].should == m @c.db.sqls.should == ["SELECT * FROM items WHERE id = 1"] @c2[:id => 4] @c.db.sqls.should == ["SELECT * FROM items WHERE (id = 4) LIMIT 1"] end it "should support ignore_exception option" do c = Class.new(Sequel::Model(:items)) c.plugin :caching, @cache, :ignore_exceptions => true Class.new(c).cache_ignore_exceptions.should == true end it "should raise an exception if cache_store is memcached and ignore_exception is not enabled" do proc{@c3[1]}.should raise_error m = @c3.new.save proc{m.update({:name=>'blah'})}.should raise_error end it "should rescue an exception if cache_store is memcached and ignore_exception is enabled" do @c4[1].values.should == {:name => 'sharon', :id => 1} m = @c4.new.save proc{m.update({:name=>'blah'})}.should_not raise_error m.values.should == {:name => 'blah', :id => 1, :x => 1} end it "should support Model.cache_get_pk for getting a value from the cache by primary key" do @c.cache_get_pk(1).should == nil m = @c[1] @c.cache_get_pk(1).should == m end it "should support Model.cache_delete_pk for removing a value from the cache by primary key" do @c[1] @c.cache_get_pk(1).should_not == nil @c.cache_delete_pk(1).should == nil @c.cache_get_pk(1).should == nil end end ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/spec/extensions/class_table_inheritance_spec.rb�����������������������������������0000664�0000000�0000000�00000024413�12201565355�0025317�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "class_table_inheritance plugin" do before do @db = Sequel.mock(:autoid=>proc{|sql| 1}) def @db.supports_schema_parsing?() true end def @db.schema(table, opts={}) {:employees=>[[:id, {:primary_key=>true, :type=>:integer}], [:name, {:type=>:string}], [:kind, 
{:type=>:string}]], :managers=>[[:id, {:type=>:integer}], [:num_staff, {:type=>:integer}]], :executives=>[[:id, {:type=>:integer}], [:num_managers, {:type=>:integer}]], :staff=>[[:id, {:type=>:integer}], [:manager_id, {:type=>:integer}]], }[table.is_a?(Sequel::Dataset) ? table.first_source_table : table] end @db.extend_datasets do def columns {[:employees]=>[:id, :name, :kind], [:managers]=>[:id, :num_staff], [:executives]=>[:id, :num_managers], [:staff]=>[:id, :manager_id], [:employees, :managers]=>[:id, :name, :kind, :num_staff], [:employees, :managers, :executives]=>[:id, :name, :kind, :num_staff, :num_managers], [:employees, :staff]=>[:id, :name, :kind, :manager_id], }[opts[:from] + (opts[:join] || []).map{|x| x.table}] end end class ::Employee < Sequel::Model(@db) def _save_refresh; @values[:id] = 1 end def self.columns dataset.columns end plugin :class_table_inheritance, :key=>:kind, :table_map=>{:Staff=>:staff} end class ::Manager < Employee one_to_many :staff_members, :class=>:Staff end class ::Executive < Manager end class ::Staff < Employee many_to_one :manager end @ds = Employee.dataset @db.sqls end after do Object.send(:remove_const, :Executive) Object.send(:remove_const, :Manager) Object.send(:remove_const, :Staff) Object.send(:remove_const, :Employee) end specify "should have simple_table = nil for subclasses" do Employee.simple_table.should == "employees" Manager.simple_table.should == nil Executive.simple_table.should == nil Staff.simple_table.should == nil end specify "should have working row_proc if using set_dataset in subclass to remove columns" do Manager.set_dataset(Manager.dataset.select(*(Manager.columns - [:blah]))) Manager.dataset._fetch = {:id=>1, :kind=>'Executive'} Manager[1].should == Executive.load(:id=>1, :kind=>'Executive') end specify "should use a joined dataset in subclasses" do Employee.dataset.sql.should == 'SELECT * FROM employees' Manager.dataset.sql.should == 'SELECT * FROM employees INNER JOIN managers USING (id)' Executive.dataset.sql.should == 'SELECT * FROM employees INNER JOIN managers USING (id) INNER JOIN executives USING (id)' Staff.dataset.sql.should == 'SELECT * FROM employees INNER JOIN staff USING (id)' end it "should return rows with the correct class based on the polymorphic_key value" do @ds._fetch = [{:kind=>'Employee'}, {:kind=>'Manager'}, {:kind=>'Executive'}, {:kind=>'Staff'}] Employee.all.collect{|x| x.class}.should == [Employee, Manager, Executive, Staff] end it "should return rows with the correct class based on the polymorphic_key value for subclasses" do Manager.dataset._fetch = [{:kind=>'Manager'}, {:kind=>'Executive'}] Manager.all.collect{|x| x.class}.should == [Manager, Executive] end it "should return rows with the current class if cti_key is nil" do Employee.plugin(:class_table_inheritance) @ds._fetch = [{:kind=>'Employee'}, {:kind=>'Manager'}, {:kind=>'Executive'}, {:kind=>'Staff'}] Employee.all.collect{|x| x.class}.should == [Employee, Employee, Employee, Employee] end it "should return rows with the current class if cti_key is nil in subclasses" do Employee.plugin(:class_table_inheritance) Object.send(:remove_const, :Executive) Object.send(:remove_const, :Manager) class ::Manager < Employee; end class ::Executive < Manager; end Manager.dataset._fetch = [{:kind=>'Manager'}, {:kind=>'Executive'}] Manager.all.collect{|x| x.class}.should == [Manager, Manager] end it "should fallback to the main class if the given class does not exist" do @ds._fetch = [{:kind=>'Employee'}, {:kind=>'Manager'}, {:kind=>'Blah'}, 
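# (aside) A sketch of how an application opts into this joined-table layout;
# the classes and tables mirror the Employee/Manager/Executive setup in the
# before block above, and the SQL matches the "joined dataset" spec:
#
#   class Employee < Sequel::Model
#     plugin :class_table_inheritance, :key=>:kind, :table_map=>{:Staff=>:staff}
#   end
#   class Manager < Employee; end     # rows join employees + managers
#   class Executive < Manager; end    # joins executives as well
#   Employee.dataset.sql  # => "SELECT * FROM employees"
#   Manager.dataset.sql   # => "SELECT * FROM employees INNER JOIN managers USING (id)"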
{:kind=>'Staff'}] Employee.all.collect{|x| x.class}.should == [Employee, Manager, Employee, Staff] end it "should fallback to the main class if the given class does not exist in subclasses" do Manager.dataset._fetch = [{:kind=>'Manager'}, {:kind=>'Executive'}, {:kind=>'Blah'}] Manager.all.collect{|x| x.class}.should == [Manager, Executive, Manager] end it "should add a before_create hook that sets the model class name for the key" do Employee.create @db.sqls.should == ["INSERT INTO employees (kind) VALUES ('Employee')"] end it "should add a before_create hook that sets the model class name for the key in subclasses" do Executive.create @db.sqls.should == ["INSERT INTO employees (kind) VALUES ('Executive')", "INSERT INTO managers (id) VALUES (1)", "INSERT INTO executives (id) VALUES (1)"] end it "should ignore existing cti_key value" do Employee.create(:kind=>'Manager') @db.sqls.should == ["INSERT INTO employees (kind) VALUES ('Employee')"] end it "should ignore existing cti_key value in subclasses" do Manager.create(:kind=>'Executive') @db.sqls.should == ["INSERT INTO employees (kind) VALUES ('Manager')", "INSERT INTO managers (id) VALUES (1)"] end it "should raise an error if attempting to create an anonymous subclass" do proc{Class.new(Manager)}.should raise_error(Sequel::Error) end it "should allow specifying a map of names to tables to override implicit mapping" do Manager.dataset.sql.should == 'SELECT * FROM employees INNER JOIN managers USING (id)' Staff.dataset.sql.should == 'SELECT * FROM employees INNER JOIN staff USING (id)' end it "should lazily load attributes for columns in subclass tables" do Manager.instance_dataset._fetch = Manager.dataset._fetch = {:id=>1, :name=>'J', :kind=>'Executive', :num_staff=>2} m = Manager[1] @db.sqls.should == ['SELECT * FROM employees INNER JOIN managers USING (id) WHERE (id = 1) LIMIT 1'] Executive.instance_dataset._fetch = Executive.dataset._fetch = {:num_managers=>3} m.num_managers.should == 3 @db.sqls.should == ['SELECT num_managers FROM employees INNER JOIN managers USING (id) INNER JOIN executives USING (id) WHERE (id = 1) LIMIT 1'] m.values.should == {:id=>1, :name=>'J', :kind=>'Executive', :num_staff=>2, :num_managers=>3} end it "should include schema for columns for tables for ancestor classes" do Employee.db_schema.should == {:id=>{:primary_key=>true, :type=>:integer}, :name=>{:type=>:string}, :kind=>{:type=>:string}} Manager.db_schema.should == {:id=>{:primary_key=>true, :type=>:integer}, :name=>{:type=>:string}, :kind=>{:type=>:string}, :num_staff=>{:type=>:integer}} Executive.db_schema.should == {:id=>{:primary_key=>true, :type=>:integer}, :name=>{:type=>:string}, :kind=>{:type=>:string}, :num_staff=>{:type=>:integer}, :num_managers=>{:type=>:integer}} Staff.db_schema.should == {:id=>{:primary_key=>true, :type=>:integer}, :name=>{:type=>:string}, :kind=>{:type=>:string}, :manager_id=>{:type=>:integer}} end it "should use the correct primary key (which should have the same name in all subclasses)" do [Employee, Manager, Executive, Staff].each{|c| c.primary_key.should == :id} end it "should have table_name return the table name of the most specific table" do Employee.table_name.should == :employees Manager.table_name.should == :managers Executive.table_name.should == :executives Staff.table_name.should == :staff end it "should delete the correct rows from all tables when deleting" do Executive.load(:id=>1).delete @db.sqls.should == ["DELETE FROM executives WHERE (id = 1)", "DELETE FROM managers WHERE (id = 1)", "DELETE FROM employees 
WHERE (id = 1)"] end it "should not allow deletion of frozen object" do o = Executive.load(:id=>1) o.freeze proc{o.delete}.should raise_error(Sequel::Error) @db.sqls.should == [] end it "should insert the correct rows into all tables when inserting" do Executive.create(:num_managers=>3, :num_staff=>2, :name=>'E') sqls = @db.sqls sqls.length.should == 3 sqls[0].should =~ /INSERT INTO employees \((name|kind), (name|kind)\) VALUES \('(E|Executive)', '(E|Executive)'\)/ sqls[1].should =~ /INSERT INTO managers \((num_staff|id), (num_staff|id)\) VALUES \([12], [12]\)/ sqls[2].should =~ /INSERT INTO executives \((num_managers|id), (num_managers|id)\) VALUES \([13], [13]\)/ end it "should insert the correct rows into all tables with a given primary key" do e = Executive.new(:num_managers=>3, :num_staff=>2, :name=>'E') e.id = 2 e.save sqls = @db.sqls sqls.length.should == 3 sqls[0].should =~ /INSERT INTO employees \((name|kind|id), (name|kind|id), (name|kind|id)\) VALUES \(('E'|'Executive'|2), ('E'|'Executive'|2), ('E'|'Executive'|2)\)/ sqls[1].should =~ /INSERT INTO managers \((num_staff|id), (num_staff|id)\) VALUES \(2, 2\)/ sqls[2].should =~ /INSERT INTO executives \((num_managers|id), (num_managers|id)\) VALUES \([23], [23]\)/ end it "should update the correct rows in all tables when updating" do Executive.load(:id=>2).update(:num_managers=>3, :num_staff=>2, :name=>'E') @db.sqls.should == ["UPDATE employees SET name = 'E' WHERE (id = 2)", "UPDATE managers SET num_staff = 2 WHERE (id = 2)", "UPDATE executives SET num_managers = 3 WHERE (id = 2)"] end it "should handle many_to_one relationships correctly" do Manager.dataset._fetch = {:id=>3, :name=>'E', :kind=>'Executive', :num_managers=>3} Staff.load(:manager_id=>3).manager.should == Executive.load(:id=>3, :name=>'E', :kind=>'Executive', :num_managers=>3) @db.sqls.should == ['SELECT * FROM employees INNER JOIN managers USING (id) WHERE (id = 3) LIMIT 1'] end it "should handle one_to_many relationships correctly" do Staff.dataset._fetch = {:id=>1, :name=>'S', :kind=>'Staff', :manager_id=>3} Executive.load(:id=>3).staff_members.should == [Staff.load(:id=>1, :name=>'S', :kind=>'Staff', :manager_id=>3)] @db.sqls.should == ['SELECT * FROM employees INNER JOIN staff USING (id) WHERE (staff.manager_id = 3)'] end end �����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/spec/extensions/columns_introspection_spec.rb�������������������������������������0000664�0000000�0000000�00000005762�12201565355�0025140�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper') Sequel.extension :columns_introspection describe "columns_introspection extension" do before do @db = Sequel.mock.extension(:columns_introspection) @ds = @db[:a] @db.sqls end specify "should not issue a database query if the columns are already loaded" do @ds.instance_variable_set(:@columns, [:x]) @ds.columns.should == [:x] @db.sqls.length.should == 0 end specify "should handle plain symbols without a database query" do 
@ds.select(:x).columns.should == [:x] @db.sqls.length.should == 0 end specify "should handle qualified symbols without a database query" do @ds.select(:t__x).columns.should == [:x] @db.sqls.length.should == 0 end specify "should handle aliased symbols without a database query" do @ds.select(:x___a).columns.should == [:a] @db.sqls.length.should == 0 end specify "should handle qualified and aliased symbols without a database query" do @ds.select(:t__x___a).columns.should == [:a] @db.sqls.length.should == 0 end specify "should handle SQL::Identifiers " do @ds.select(Sequel.identifier(:x)).columns.should == [:x] @db.sqls.length.should == 0 end specify "should handle SQL::QualifiedIdentifiers" do @ds.select(Sequel.qualify(:t, :x)).columns.should == [:x] @ds.select(Sequel.identifier(:x).qualify(:t)).columns.should == [:x] @db.sqls.length.should == 0 end specify "should handle SQL::AliasedExpressions" do @ds.select(Sequel.as(:x, :a)).columns.should == [:a] @ds.select(Sequel.as(:x, Sequel.identifier(:a))).columns.should == [:a] @db.sqls.length.should == 0 end specify "should handle selecting * from a single subselect with no joins without a database query if the subselect's columns can be handled" do @ds.select(:x).from_self.columns.should == [:x] @db.sqls.length.should == 0 @ds.select(:x).from_self.from_self.columns.should == [:x] @db.sqls.length.should == 0 end specify "should handle selecting * from a single table with no joins without a database query if the database has cached schema columns for the table" do @db.instance_variable_set(:@schemas, "a"=>[[:x, {}]]) @ds.columns.should == [:x] @db.sqls.length.should == 0 end specify "should issue a database query for multiple subselects or joins" do @ds.from(@ds.select(:x), @ds.select(:y)).columns @db.sqls.length.should == 1 @ds.select(:x).from_self.natural_join(:a).columns @db.sqls.length.should == 1 end specify "should issue a database query when common table expressions are used" do @db.instance_variable_set(:@schemas, "a"=>[[:x, {}]]) @ds.with(:a, @ds).columns @db.sqls.length.should == 1 end specify "should issue a database query if the wildcard is selected" do @ds.columns @db.sqls.length.should == 1 end specify "should issue a database query if an unsupported type is used" do @ds.select(1).columns @db.sqls.length.should == 1 end end

ruby-sequel-4.1.1/spec/extensions/composition_spec.rb

require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "Composition plugin" do before do @c = Class.new(Sequel::Model(:items)) @c.plugin :composition @c.columns :id, :year, :month, :day @o = @c.load(:id=>1, :year=>1, :month=>2, :day=>3) DB.reset end it ".composition should add compositions" do @o.should_not respond_to(:date) @c.composition :date, :mapping=>[:year, :month, :day] @o.date.should == Date.new(1, 2, 3) end it "loading the plugin twice should not remove existing compositions" do @c.composition :date, :mapping=>[:year, :month, :day] @c.plugin :composition @c.compositions.keys.should == [:date] end it ".composition should raise an error if :composer and
:decomposer options are not present and :mapping option is not provided" do proc{@c.composition :date}.should raise_error(Sequel::Error) proc{@c.composition :date, :composer=>proc{}, :decomposer=>proc{}}.should_not raise_error proc{@c.composition :date, :mapping=>[]}.should_not raise_error end it ".compositions should return the reflection hash of compositions" do @c.compositions.should == {} @c.composition :date, :mapping=>[:year, :month, :day] @c.compositions.keys.should == [:date] r = @c.compositions.values.first r[:mapping].should == [:year, :month, :day] r[:composer].should be_a_kind_of(Proc) r[:decomposer].should be_a_kind_of(Proc) end it "#compositions should be a hash of cached values of compositions" do @o.compositions.should == {} @c.composition :date, :mapping=>[:year, :month, :day] @o.date @o.compositions.should == {:date=>Date.new(1, 2, 3)} end it "should work with custom :composer and :decomposer options" do @c.composition :date, :composer=>proc{Date.new(year+1, month+2, day+3)}, :decomposer=>proc{[:year, :month, :day].each{|s| self.send("#{s}=", date.send(s) * 2)}} @o.date.should == Date.new(2, 4, 6) @o.save sql = DB.sqls.last sql.should include("year = 4") sql.should include("month = 8") sql.should include("day = 12") end it "should allow call super in composition getter and setter method definition in class" do @c.composition :date, :mapping=>[:year, :month, :day] @c.class_eval do def date super + 1 end def date=(v) super(v - 3) end end @o.date.should == Date.new(1, 2, 4) @o.compositions[:date].should == Date.new(1, 2, 3) @o.date = Date.new(1, 3, 5) @o.compositions[:date].should == Date.new(1, 3, 2) @o.date.should == Date.new(1, 3, 3) end it "should mark the object as modified whenever the composition is set" do @c.composition :date, :mapping=>[:year, :month, :day] @o.modified?.should == false @o.date = Date.new(3, 4, 5) @o.modified?.should == true end it "should only decompose existing compositions" do called = false @c.composition :date, :composer=>proc{}, :decomposer=>proc{called = true} called.should == false @o.save called.should == false @o.date = Date.new(1,2,3) called.should == false @o.save_changes called.should == true end it "should clear compositions cache when refreshing" do @c.composition :date, :composer=>proc{}, :decomposer=>proc{} @o.date = Date.new(3, 4, 5) @o.refresh @o.compositions.should == {} end it "should not clear compositions cache when refreshing after save" do @c.composition :date, :composer=>proc{}, :decomposer=>proc{} @c.create(:date=>Date.new(3, 4, 5)).compositions.should == {:date=>Date.new(3, 4, 5)} end it "should not clear compositions cache when saving with insert_select" do def (@c.instance_dataset).supports_insert_select?() true end def (@c.instance_dataset).insert_select(*) {:id=>1} end @c.composition :date, :composer=>proc{}, :decomposer=>proc{} @c.create(:date=>Date.new(3, 4, 5)).compositions.should == {:date=>Date.new(3, 4, 5)} end it "should instantiate compositions lazily" do @c.composition :date, :mapping=>[:year, :month, :day] @o.compositions.should == {} @o.date @o.compositions.should == {:date=>Date.new(1,2,3)} end it "should cache value of composition" do times = 0 @c.composition :date, :composer=>proc{times+=1}, :decomposer=>proc{} times.should == 0 @o.date times.should == 1 @o.date times.should == 1 end it ":class option should take an string, symbol, or class" do @c.composition :date1, :class=>'Date', :mapping=>[:year, :month, :day] @c.composition :date2, :class=>:Date, :mapping=>[:year, :month, :day] @c.composition 
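# (aside) A sketch of the composition lifecycle these specs cover, assuming
# an items table with year/month/day columns as in the before block; the
# composition name :date implies the Date class here:
#
#   class Item < Sequel::Model
#     plugin :composition
#     composition :date, :mapping=>[:year, :month, :day]
#   end
#   item.date               # built lazily from the columns, then cached
#   item.date = Date.today  # marks the object modified; the decomposer
#                           # writes the value back to the columns on save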
:date3, :class=>Date, :mapping=>[:year, :month, :day] @o.date1.should == Date.new(1, 2, 3) @o.date2.should == Date.new(1, 2, 3) @o.date3.should == Date.new(1, 2, 3) end it ":mapping option should work with a single array of symbols" do c = Class.new do def initialize(y, m) @y, @m = y, m end def year @y * 2 end def month @m * 3 end end @c.composition :date, :class=>c, :mapping=>[:year, :month] @o.date.year.should == 2 @o.date.month.should == 6 @o.date = c.new(3, 4) @o.save sql = DB.sqls.last sql.should include("year = 6") sql.should include("month = 12") end it ":mapping option should work with an array of two pairs of symbols" do c = Class.new do def initialize(y, m) @y, @m = y, m end def y @y * 2 end def m @m * 3 end end @c.composition :date, :class=>c, :mapping=>[[:year, :y], [:month, :m]] @o.date.y.should == 2 @o.date.m.should == 6 @o.date = c.new(3, 4) @o.save sql = DB.sqls.last sql.should include("year = 6") sql.should include("month = 12") end it ":mapping option :composer should return nil if all values are nil" do @c.composition :date, :mapping=>[:year, :month, :day] @c.new.date.should == nil end it ":mapping option :decomposer should set all related fields to nil if nil" do @c.composition :date, :mapping=>[:year, :month, :day] @o.date = nil @o.save sql = DB.sqls.last sql.should include("year = NULL") sql.should include("month = NULL") sql.should include("day = NULL") end it "should work with frozen instances" do @c.composition :date, :mapping=>[:year, :month, :day] @o.freeze @o.date.should == Date.new(1, 2, 3) proc{@o.date = Date.today}.should raise_error end it "should work correctly with subclasses" do @c.composition :date, :mapping=>[:year, :month, :day] c = Class.new(@c) o = c.load(:id=>1, :year=>1, :month=>2, :day=>3) o.date.should == Date.new(1, 2, 3) o.save sql = DB.sqls.last sql.should include("year = 1") sql.should include("month = 2") sql.should include("day = 3") end end

ruby-sequel-4.1.1/spec/extensions/connection_validator_spec.rb

require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") shared_examples_for "Sequel::ConnectionValidator" do before do @db.extend(Module.new do def disconnect_connection(conn) @sqls << 'disconnect' end def valid_connection?(conn) super conn.valid end def connect(server) conn = super conn.extend(Module.new do attr_accessor :valid end) conn.valid = true conn end end) @db.extension(:connection_validator) end it "should still allow new connections" do @db.synchronize{|c| c}.should be_a_kind_of(Sequel::Mock::Connection) end it "should only validate if connection idle longer than timeout" do c1 = @db.synchronize{|c| c} @db.sqls.should == [] @db.synchronize{|c| c}.should equal(c1) @db.sqls.should == [] @db.pool.connection_validation_timeout = -1 @db.synchronize{|c| c}.should equal(c1) @db.sqls.should == ['SELECT NULL'] @db.pool.connection_validation_timeout
= 1 @db.synchronize{|c| c}.should equal(c1) @db.sqls.should == [] @db.synchronize{|c| c}.should equal(c1) @db.sqls.should == [] end it "should disconnect connection if not valid" do c1 = @db.synchronize{|c| c} @db.sqls.should == [] c1.valid = false @db.pool.connection_validation_timeout = -1 c2 = @db.synchronize{|c| c} @db.sqls.should == ['SELECT NULL', 'disconnect'] c2.should_not equal(c1) end it "should disconnect multiple connections repeatedly if they are not valid" do q, q1 = Queue.new, Queue.new c1 = nil c2 = nil @db.pool.connection_validation_timeout = -1 @db.synchronize do |c| Thread.new do @db.synchronize do |cc| c2 = cc end q1.pop q.push nil end q1.push nil q.pop c1 = c end c1.valid = false c2.valid = false c3 = @db.synchronize{|c| c} @db.sqls.should == ['SELECT NULL', 'disconnect', 'SELECT NULL', 'disconnect'] c3.should_not equal(c1) c3.should_not equal(c2) end it "should not leak connection references" do c1 = @db.synchronize do |c| @db.pool.instance_variable_get(:@connection_timestamps).should == {} c end @db.pool.instance_variable_get(:@connection_timestamps).should have_key(c1) c1.valid = false @db.pool.connection_validation_timeout = -1 c2 = @db.synchronize do |c| @db.pool.instance_variable_get(:@connection_timestamps).should == {} c end c2.should_not equal(c1) @db.pool.instance_variable_get(:@connection_timestamps).should_not have_key(c1) @db.pool.instance_variable_get(:@connection_timestamps).should have_key(c2) end it "should handle case where determining validity requires a connection" do @db.meta_def(:valid_connection?){|c| synchronize{}; true} @db.pool.connection_validation_timeout = -1 c1 = @db.synchronize{|c| c} @db.synchronize{|c| c}.should equal(c1) end end describe "Sequel::ConnectionValidator with threaded pool" do before do @db = Sequel.mock end it_should_behave_like "Sequel::ConnectionValidator" end describe "Sequel::ConnectionValidator with sharded threaded pool" do before do @db = Sequel.mock(:servers=>{}) end it_should_behave_like "Sequel::ConnectionValidator" end

ruby-sequel-4.1.1/spec/extensions/constraint_validations_plugin_spec.rb

require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "Sequel::Plugins::ConstraintValidations" do def model_class(opts={}) return @c if @c @c = Class.new(Sequel::Model(@db[:items])) @c.columns :name @db.sqls set_fetch(opts) @c.plugin :constraint_validations @c end def set_fetch(opts) @db.fetch = {:table=>'items', :message=>nil, :allow_nil=>nil, :constraint_name=>nil, :validation_type=>'presence', :argument=>nil, :column=>'name'}.merge(opts) end before do @db = Sequel.mock set_fetch({}) @ds = @db[:items] @ds.instance_variable_set(:@columns, [:name]) @ds2 = Sequel.mock[:items2] @ds2.instance_variable_set(:@columns, [:name]) end it "should load the validation_helpers plugin into the class" do model_class.new.should respond_to(:validates_presence) end it "should
parse constraint validations when loading plugin" do @c = model_class @db.sqls.should == ["SELECT * FROM sequel_constraint_validations"] @db.constraint_validations.should == {"items"=>[{:allow_nil=>nil, :constraint_name=>nil, :message=>nil, :validation_type=>"presence", :column=>"name", :argument=>nil, :table=>"items"}]} @c.constraint_validations.should == [[:validates_presence, :name]] @c.constraint_validation_reflections.should == {:name=>[[:presence, {}]]} end it "should parse constraint validations with a custom constraint validations table" do c = Class.new(Sequel::Model(@db[:items])) @db.sqls c.plugin :constraint_validations, :constraint_validations_table=>:foo @db.sqls.should == ["SELECT * FROM foo"] @db.constraint_validations.should == {"items"=>[{:allow_nil=>nil, :constraint_name=>nil, :message=>nil, :validation_type=>"presence", :column=>"name", :argument=>nil, :table=>"items"}]} c.constraint_validations.should == [[:validates_presence, :name]] c.constraint_validation_reflections.should == {:name=>[[:presence, {}]]} end it "should populate constraint_validations when subclassing" do c = Class.new(Sequel::Model(@db)) c.plugin :constraint_validations @db.sqls.should == ["SELECT * FROM sequel_constraint_validations"] sc = Class.new(c) sc.set_dataset @ds @db.sqls.should == [] sc.constraint_validations.should == [[:validates_presence, :name]] sc.constraint_validation_reflections.should == {:name=>[[:presence, {}]]} end it "should handle plugin being loaded in subclass when superclass uses a custom constraint validations table" do c = Class.new(Sequel::Model(@db)) c.plugin :constraint_validations, :constraint_validations_table=>:foo @db.sqls.should == ["SELECT * FROM foo"] sc = Class.new(c) sc.plugin :constraint_validations sc.constraint_validations_table.should == :foo sc.set_dataset @ds @db.sqls.should == [] sc.constraint_validations.should == [[:validates_presence, :name]] sc.constraint_validation_reflections.should == {:name=>[[:presence, {}]]} end it "should populate constraint_validations when changing the model's dataset" do c = Class.new(Sequel::Model(@db[:foo])) c.columns :name @db.sqls c.plugin :constraint_validations @db.sqls.should == ["SELECT * FROM sequel_constraint_validations"] sc = Class.new(c) sc.set_dataset @ds @db.sqls.should == [] sc.constraint_validations.should == [[:validates_presence, :name]] sc.constraint_validation_reflections.should == {:name=>[[:presence, {}]]} end it "should reparse constraint validations when changing the model's database" do c = Class.new(Sequel::Model(@ds2)) c.plugin :constraint_validations @ds2.db.sqls.should == ["SELECT * FROM sequel_constraint_validations"] sc = Class.new(c) sc.set_dataset @ds @db.sqls.should == ["SELECT * FROM sequel_constraint_validations"] sc.constraint_validations.should == [[:validates_presence, :name]] sc.constraint_validation_reflections.should == {:name=>[[:presence, {}]]} end it "should reparse constraint validations when changing the model's database with a custom constraint validations table" do c = Class.new(Sequel::Model(@ds2)) c.plugin :constraint_validations, :constraint_validations_table=>:foo @ds2.db.sqls.should == ["SELECT * FROM foo"] sc = Class.new(c) sc.set_dataset @ds @db.sqls.should == ["SELECT * FROM foo"] sc.constraint_validations.should == [[:validates_presence, :name]] sc.constraint_validation_reflections.should == {:name=>[[:presence, {}]]} end it "should correctly retrieve :message option from constraint validations table" do model_class(:message=>'foo').constraint_validations.should == 
[[:validates_presence, :name, {:message=>'foo'}]] @c.constraint_validation_reflections.should == {:name=>[[:presence, {:message=>'foo'}]]} end it "should correctly retrieve :allow_nil option from constraint validations table" do model_class(:allow_nil=>true).constraint_validations.should == [[:validates_presence, :name, {:allow_nil=>true}]] @c.constraint_validation_reflections.should == {:name=>[[:presence, {:allow_nil=>true}]]} end it "should handle presence validation" do model_class(:validation_type=>'presence').constraint_validations.should == [[:validates_presence, :name]] @c.constraint_validation_reflections.should == {:name=>[[:presence, {}]]} end it "should handle exact_length validation" do model_class(:validation_type=>'exact_length', :argument=>'5').constraint_validations.should == [[:validates_exact_length, 5, :name]] @c.constraint_validation_reflections.should == {:name=>[[:exact_length, {:argument=>5}]]} end it "should handle min_length validation" do model_class(:validation_type=>'min_length', :argument=>'5').constraint_validations.should == [[:validates_min_length, 5, :name]] @c.constraint_validation_reflections.should == {:name=>[[:min_length, {:argument=>5}]]} end it "should handle max_length validation" do model_class(:validation_type=>'max_length', :argument=>'5').constraint_validations.should == [[:validates_max_length, 5, :name]] @c.constraint_validation_reflections.should == {:name=>[[:max_length, {:argument=>5}]]} end it "should handle length_range validation" do model_class(:validation_type=>'length_range', :argument=>'3..5').constraint_validations.should == [[:validates_length_range, 3..5, :name]] @c.constraint_validation_reflections.should == {:name=>[[:length_range, {:argument=>3..5}]]} end it "should handle length_range validation with an exclusive end" do model_class(:validation_type=>'length_range', :argument=>'3...5').constraint_validations.should == [[:validates_length_range, 3...5, :name]] @c.constraint_validation_reflections.should == {:name=>[[:length_range, {:argument=>3...5}]]} end it "should handle format validation" do model_class(:validation_type=>'format', :argument=>'^foo.*').constraint_validations.should == [[:validates_format, /^foo.*/, :name]] @c.constraint_validation_reflections.should == {:name=>[[:format, {:argument=>/^foo.*/}]]} end it "should handle format validation with case insensitive format" do model_class(:validation_type=>'iformat', :argument=>'^foo.*').constraint_validations.should == [[:validates_format, /^foo.*/i, :name]] @c.constraint_validation_reflections.should == {:name=>[[:format, {:argument=>/^foo.*/i}]]} end it "should handle includes validation with array of strings" do model_class(:validation_type=>'includes_str_array', :argument=>'a,b,c').constraint_validations.should == [[:validates_includes, %w'a b c', :name]] @c.constraint_validation_reflections.should == {:name=>[[:includes, {:argument=>%w'a b c'}]]} end it "should handle includes validation with array of integers" do model_class(:validation_type=>'includes_int_array', :argument=>'1,2,3').constraint_validations.should == [[:validates_includes, [1, 2, 3], :name]] @c.constraint_validation_reflections.should == {:name=>[[:includes, {:argument=>[1, 2, 3]}]]} end it "should handle includes validation with inclusive range of integers" do model_class(:validation_type=>'includes_int_range', :argument=>'3..5').constraint_validations.should == [[:validates_includes, 3..5, :name]] @c.constraint_validation_reflections.should == {:name=>[[:includes, {:argument=>3..5}]]} end it 
"should handle includes validation with exclusive range of integers" do model_class(:validation_type=>'includes_int_range', :argument=>'3...5').constraint_validations.should == [[:validates_includes, 3...5, :name]] @c.constraint_validation_reflections.should == {:name=>[[:includes, {:argument=>3...5}]]} end it "should handle like validation" do model_class(:validation_type=>'like', :argument=>'foo').constraint_validations.should == [[:validates_format, /\Afoo\z/, :name]] @c.constraint_validation_reflections.should == {:name=>[[:format, {:argument=>/\Afoo\z/}]]} end it "should handle ilike validation" do model_class(:validation_type=>'ilike', :argument=>'foo').constraint_validations.should == [[:validates_format, /\Afoo\z/i, :name]] @c.constraint_validation_reflections.should == {:name=>[[:format, {:argument=>/\Afoo\z/i}]]} end it "should handle like validation with % metacharacter" do model_class(:validation_type=>'like', :argument=>'%foo%').constraint_validations.should == [[:validates_format, /\A.*foo.*\z/, :name]] @c.constraint_validation_reflections.should == {:name=>[[:format, {:argument=>/\A.*foo.*\z/}]]} end it "should handle like validation with %% metacharacter" do model_class(:validation_type=>'like', :argument=>'%%foo%%').constraint_validations.should == [[:validates_format, /\A%foo%\z/, :name]] @c.constraint_validation_reflections.should == {:name=>[[:format, {:argument=>/\A%foo%\z/}]]} end it "should handle like validation with _ metacharacter" do model_class(:validation_type=>'like', :argument=>'f_o').constraint_validations.should == [[:validates_format, /\Af.o\z/, :name]] @c.constraint_validation_reflections.should == {:name=>[[:format, {:argument=>/\Af.o\z/}]]} end it "should handle like validation with Regexp metacharacter" do model_class(:validation_type=>'like', :argument=>'\wfoo\d').constraint_validations.should == [[:validates_format, /\A\\wfoo\\d\z/, :name]] @c.constraint_validation_reflections.should == {:name=>[[:format, {:argument=>/\A\\wfoo\\d\z/}]]} end it "should handle unique validation" do model_class(:validation_type=>'unique').constraint_validations.should == [[:validates_unique, [:name]]] @c.constraint_validation_reflections.should == {:name=>[[:unique, {}]]} end it "should handle unique validation with multiple columns" do model_class(:validation_type=>'unique', :column=>'name,id').constraint_validations.should == [[:validates_unique, [:name, :id]]] @c.constraint_validation_reflections.should == {[:name, :id]=>[[:unique, {}]]} end it "should handle :validation_options" do c = model_class(:validation_type=>'unique', :column=>'name') c.plugin :constraint_validations, :validation_options=>{:unique=>{:message=>'is bad'}} c.constraint_validations.should == [[:validates_unique, [:name], {:message=>'is bad'}]] c.constraint_validation_reflections.should == {:name=>[[:unique, {:message=>'is bad'}]]} c.dataset._fetch = {:count=>1} o = c.new(:name=>'a') o.valid?.should == false o.errors.full_messages.should == ['name is bad'] end it "should handle :validation_options merging with constraint validation options" do c = model_class(:validation_type=>'unique', :column=>'name', :allow_nil=>true) c.plugin :constraint_validations, :validation_options=>{:unique=>{:message=>'is bad'}} c.constraint_validations.should == [[:validates_unique, [:name], {:message=>'is bad', :allow_nil=>true}]] c.constraint_validation_reflections.should == {:name=>[[:unique, {:message=>'is bad', :allow_nil=>true}]]} c.dataset._fetch = {:count=>1} o = c.new(:name=>'a') o.valid?.should == false 
o.errors.full_messages.should == ['name is bad'] end it "should handle :validation_options merging with subclasses" do c = model_class(:validation_type=>'unique', :column=>'name') c.plugin :constraint_validations, :validation_options=>{:unique=>{:message=>'is bad', :allow_nil=>true}} sc = Class.new(c) sc.plugin :constraint_validations, :validation_options=>{:unique=>{:allow_missing=>true, :allow_nil=>false}} sc.constraint_validations.should == [[:validates_unique, [:name], {:message=>'is bad', :allow_missing=>true, :allow_nil=>false}]] sc.constraint_validation_reflections.should == {:name=>[[:unique, {:message=>'is bad', :allow_missing=>true, :allow_nil=>false}]]} sc.dataset._fetch = {:count=>1} o = sc.new(:name=>'a') o.valid?.should == false o.errors.full_messages.should == ['name is bad'] end it "should use parsed constraint validations when validating" do o = model_class.new o.valid?.should == false o.errors.full_messages.should == ['name is not present'] end it "should handle a table name specified as SQL::Identifier" do set_fetch(:table=>'sch__items') c = Class.new(Sequel::Model(@db[Sequel.identifier(:sch__items)])) c.plugin :constraint_validations c.constraint_validations.should == [[:validates_presence, :name]] c.constraint_validation_reflections.should == {:name=>[[:presence, {}]]} end it "should handle a table name specified as SQL::QualifiedIdentifier" do set_fetch(:table=>'sch.items') c = Class.new(Sequel::Model(@db[Sequel.qualify(:sch, :items)])) c.plugin :constraint_validations c.constraint_validations.should == [[:validates_presence, :name]] c.constraint_validation_reflections.should == {:name=>[[:presence, {}]]} end end

ruby-sequel-4.1.1/spec/extensions/constraint_validations_spec.rb

require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "constraint_validations extension" do def parse_insert(s) m = /\AINSERT INTO sequel_constraint_validations \((.*)\) VALUES \((.*)\)\z/.match(s) Hash[*m[1].split(', ').map{|v| v.to_sym}.zip(m[2].split(', ').map{|v| parse_insert_value(v)}).reject{|k, v| v.nil?}.flatten] end def parse_insert_value(s) case s when 'NULL' nil when /\A'(.*)'\z/ $1 else raise Sequel::Error, "unhandled insert value: #{s.inspect}" end end before do @db = Sequel.mock @db.extend(Module.new{attr_writer :schema; def schema(table, *) execute("parse schema for #{table}"); @schema; end}) @db.extension(:constraint_validations) end it "should allow creating the sequel_constraint_validations table" do @db.create_constraint_validations_table @db.sqls.should == ["CREATE TABLE sequel_constraint_validations (table varchar(255) NOT NULL, constraint_name varchar(255), validation_type varchar(255) NOT NULL, column
varchar(255) NOT NULL, argument varchar(255), message varchar(255), allow_nil boolean)"] end it "should allow creating the sequel_constraint_validations table with a non-default table name" do @db.constraint_validations_table = :foo @db.create_constraint_validations_table @db.sqls.should == ["CREATE TABLE foo (table varchar(255) NOT NULL, constraint_name varchar(255), validation_type varchar(255) NOT NULL, column varchar(255) NOT NULL, argument varchar(255), message varchar(255), allow_nil boolean)"] end it "should allow dropping the sequel_constraint_validations table" do @db.drop_constraint_validations_table @db.sqls.should == ["DROP TABLE sequel_constraint_validations"] end it "should allow dropping the sequel_constraint_validations table with a non-default table name" do @db.constraint_validations_table = :foo @db.drop_constraint_validations_table @db.sqls.should == ["DROP TABLE foo"] end it "should allow dropping validations for a given table" do @db.drop_constraint_validations_for(:table=>:foo) @db.sqls.should == ["DELETE FROM sequel_constraint_validations WHERE (table = 'foo')"] end it "should allow dropping validations for a given table and column" do @db.drop_constraint_validations_for(:table=>:foo, :column=>:bar) @db.sqls.should == ["DELETE FROM sequel_constraint_validations WHERE ((table = 'foo') AND (column = 'bar'))"] end it "should allow dropping validations for a given table and constraint" do @db.drop_constraint_validations_for(:table=>:foo, :constraint=>:bar) @db.sqls.should == ["DELETE FROM sequel_constraint_validations WHERE ((table = 'foo') AND (constraint_name = 'bar'))"] end it "should allow dropping validations for a non-default constraint_validations table" do @db.constraint_validations_table = :cv @db.drop_constraint_validations_for(:table=>:foo) @db.sqls.should == ["DELETE FROM cv WHERE (table = 'foo')"] end it "should raise an error without deleting if attempting to drop validations without table, column, or constraint" do proc{@db.drop_constraint_validations_for({})}.should raise_error(Sequel::Error) @db.sqls.should == [] end it "should allow adding constraint validations via create_table validate" do @db.create_table(:foo){String :name; validate{presence :name}} sqls = @db.sqls parse_insert(sqls.slice!(1)).should == {:validation_type=>"presence", :column=>"name", :table=>"foo"} sqls.should == ["BEGIN", "COMMIT", "CREATE TABLE foo (name varchar(255), CHECK ((name IS NOT NULL) AND (trim(name) != '')))"] end it "should allow adding constraint validations via alter_table validate" do @db.schema = [[:name, {:type=>:string}]] @db.alter_table(:foo){validate{presence :name}} sqls = @db.sqls parse_insert(sqls.slice!(2)).should == {:validation_type=>"presence", :column=>"name", :table=>"foo"} sqls.should == ["parse schema for foo", "BEGIN", "COMMIT", "ALTER TABLE foo ADD CHECK ((name IS NOT NULL) AND (trim(name) != ''))"] end it "should handle :message option when adding validations" do @db.create_table(:foo){String :name; validate{presence :name, :message=>'not there'}} sqls = @db.sqls parse_insert(sqls.slice!(1)).should == {:validation_type=>"presence", :column=>"name", :table=>"foo", :message=>'not there'} sqls.should == ["BEGIN", "COMMIT", "CREATE TABLE foo (name varchar(255), CHECK ((name IS NOT NULL) AND (trim(name) != '')))"] end it "should handle :allow_nil option when adding validations" do @db.create_table(:foo){String :name; validate{presence :name, :allow_nil=>true}} sqls = @db.sqls parse_insert(sqls.slice!(1)).should == {:validation_type=>"presence", 
:column=>"name", :table=>"foo", :allow_nil=>'t'} sqls.should == ["BEGIN", "COMMIT", "CREATE TABLE foo (name varchar(255), CHECK ((name IS NULL) OR (trim(name) != '')))"] end it "should handle :name option when adding validations" do @db.create_table(:foo){String :name; validate{presence :name, :name=>'cons'}} sqls = @db.sqls parse_insert(sqls.slice!(1)).should == {:validation_type=>"presence", :column=>"name", :table=>"foo", :constraint_name=>'cons'} sqls.should == ["BEGIN", "COMMIT", "CREATE TABLE foo (name varchar(255), CONSTRAINT cons CHECK ((name IS NOT NULL) AND (trim(name) != '')))"] end it "should handle multiple columns when adding validations" do @db.create_table(:foo){String :name; String :bar; validate{presence [:name, :bar]}} sqls = @db.sqls parse_insert(sqls.slice!(1)).should == {:validation_type=>"presence", :column=>"name", :table=>"foo"} parse_insert(sqls.slice!(1)).should == {:validation_type=>"presence", :column=>"bar", :table=>"foo"} sqls.should == ["BEGIN", "COMMIT", "CREATE TABLE foo (name varchar(255), bar varchar(255), CHECK ((name IS NOT NULL) AND (bar IS NOT NULL) AND (trim(name) != '') AND (trim(bar) != '')))"] end it "should handle presence validation on non-String columns" do @db.create_table(:foo){Integer :name; validate{presence :name}} sqls = @db.sqls parse_insert(sqls.slice!(1)).should == {:validation_type=>"presence", :column=>"name", :table=>"foo"} sqls.should == ["BEGIN", "COMMIT", "CREATE TABLE foo (name integer, CHECK (name IS NOT NULL))"] @db.schema = [[:name, {:type=>:integer}]] @db.alter_table(:foo){validate{presence :name}} sqls = @db.sqls parse_insert(sqls.slice!(2)).should == {:validation_type=>"presence", :column=>"name", :table=>"foo"} sqls.should == ["parse schema for foo", "BEGIN", "COMMIT", "ALTER TABLE foo ADD CHECK (name IS NOT NULL)"] end it "should handle presence validation on Oracle with IS NOT NULL instead of != ''" do @db = Sequel.mock(:host=>'oracle') @db.extension(:constraint_validations) @db.create_table(:foo){String :name; validate{presence :name}} sqls = @db.sqls parse_insert(sqls.slice!(1)).should == {:validation_type=>"presence", :column=>"name", :table=>"foo"} sqls.should == ["BEGIN", "COMMIT", "CREATE TABLE foo (name varchar(255), CHECK ((name IS NOT NULL) AND (trim(name) IS NOT NULL)))"] end it "should assume column is not a String if it can't determine the type" do @db.create_table(:foo){Integer :name; validate{presence :bar}} sqls = @db.sqls parse_insert(sqls.slice!(1)).should == {:validation_type=>"presence", :column=>"bar", :table=>"foo"} sqls.should == ["BEGIN", "COMMIT", "CREATE TABLE foo (name integer, CHECK (bar IS NOT NULL))"] @db.schema = [[:name, {:type=>:integer}]] @db.alter_table(:foo){validate{presence :bar}} sqls = @db.sqls parse_insert(sqls.slice!(2)).should == {:validation_type=>"presence", :column=>"bar", :table=>"foo"} sqls.should == ["parse schema for foo", "BEGIN", "COMMIT", "ALTER TABLE foo ADD CHECK (bar IS NOT NULL)"] end it "should handle presence validation on non-String columns with :allow_nil option" do @db.create_table(:foo){Integer :name; validate{presence :name, :allow_nil=>true}} sqls = @db.sqls parse_insert(sqls.slice!(1)).should == {:validation_type=>"presence", :column=>"name", :table=>"foo", :allow_nil=>'t'} sqls.should == ["BEGIN", "COMMIT", "CREATE TABLE foo (name integer)"] end it "should support :exact_length constraint validation" do @db.create_table(:foo){String :name; validate{exact_length 5, :name}} sqls = @db.sqls parse_insert(sqls.slice!(1)).should == 
{:validation_type=>"exact_length", :column=>"name", :table=>"foo", :argument=>'5'} sqls.should == ["BEGIN", "COMMIT", "CREATE TABLE foo (name varchar(255), CHECK ((name IS NOT NULL) AND (char_length(name) = 5)))"] end it "should support :min_length constraint validation" do @db.create_table(:foo){String :name; validate{min_length 5, :name}} sqls = @db.sqls parse_insert(sqls.slice!(1)).should == {:validation_type=>"min_length", :column=>"name", :table=>"foo", :argument=>'5'} sqls.should == ["BEGIN", "COMMIT", "CREATE TABLE foo (name varchar(255), CHECK ((name IS NOT NULL) AND (char_length(name) >= 5)))"] end it "should support :max_length constraint validation" do @db.create_table(:foo){String :name; validate{max_length 5, :name}} sqls = @db.sqls parse_insert(sqls.slice!(1)).should == {:validation_type=>"max_length", :column=>"name", :table=>"foo", :argument=>'5'} sqls.should == ["BEGIN", "COMMIT", "CREATE TABLE foo (name varchar(255), CHECK ((name IS NOT NULL) AND (char_length(name) <= 5)))"] end it "should support :length_range constraint validation" do @db.create_table(:foo){String :name; validate{length_range 3..5, :name}} sqls = @db.sqls parse_insert(sqls.slice!(1)).should == {:validation_type=>"length_range", :column=>"name", :table=>"foo", :argument=>'3..5'} sqls.should == ["BEGIN", "COMMIT", "CREATE TABLE foo (name varchar(255), CHECK ((name IS NOT NULL) AND (char_length(name) >= 3) AND (char_length(name) <= 5)))"] @db.create_table(:foo){String :name; validate{length_range 3...5, :name}} sqls = @db.sqls parse_insert(sqls.slice!(1)).should == {:validation_type=>"length_range", :column=>"name", :table=>"foo", :argument=>'3...5'} sqls.should == ["BEGIN", "COMMIT", "CREATE TABLE foo (name varchar(255), CHECK ((name IS NOT NULL) AND (char_length(name) >= 3) AND (char_length(name) < 5)))"] end it "should support :format constraint validation" do @db = Sequel.mock(:host=>'postgres') @db.extension(:constraint_validations) @db.create_table(:foo){String :name; validate{format(/^foo.*/, :name)}} sqls = @db.sqls parse_insert(sqls.slice!(1)).should == {:validation_type=>"format", :column=>"name", :table=>"foo", :argument=>'^foo.*'} sqls.should == ["BEGIN", "COMMIT", "CREATE TABLE foo (name text, CHECK ((name IS NOT NULL) AND (name ~ '^foo.*')))"] end it "should support :format constraint validation with case insensitive format" do @db = Sequel.mock(:host=>'postgres') @db.extension(:constraint_validations) @db.create_table(:foo){String :name; validate{format(/^foo.*/i, :name)}} sqls = @db.sqls parse_insert(sqls.slice!(1)).should == {:validation_type=>"iformat", :column=>"name", :table=>"foo", :argument=>'^foo.*'} sqls.should == ["BEGIN", "COMMIT", "CREATE TABLE foo (name text, CHECK ((name IS NOT NULL) AND (name ~* '^foo.*')))"] end it "should support :includes constraint validation with an array of strings" do @db.create_table(:foo){String :name; validate{includes %w'a b c', :name}} sqls = @db.sqls parse_insert(sqls.slice!(1)).should == {:validation_type=>"includes_str_array", :column=>"name", :table=>"foo", :argument=>'a,b,c'} sqls.should == ["BEGIN", "COMMIT", "CREATE TABLE foo (name varchar(255), CHECK ((name IS NOT NULL) AND (name IN ('a', 'b', 'c'))))"] end it "should support :includes constraint validation with an array of integers" do @db.create_table(:foo){String :name; validate{includes [1, 2, 3], :name}} sqls = @db.sqls parse_insert(sqls.slice!(1)).should == {:validation_type=>"includes_int_array", :column=>"name", :table=>"foo", :argument=>'1,2,3'} sqls.should == ["BEGIN", "COMMIT", 
"CREATE TABLE foo (name varchar(255), CHECK ((name IS NOT NULL) AND (name IN (1, 2, 3))))"] end it "should support :includes constraint validation with a inclusive range of integers" do @db.create_table(:foo){String :name; validate{includes 3..5, :name}} sqls = @db.sqls parse_insert(sqls.slice!(1)).should == {:validation_type=>"includes_int_range", :column=>"name", :table=>"foo", :argument=>'3..5'} sqls.should == ["BEGIN", "COMMIT", "CREATE TABLE foo (name varchar(255), CHECK ((name IS NOT NULL) AND (name >= 3) AND (name <= 5)))"] end it "should support :includes constraint validation with a exclusive range of integers" do @db.create_table(:foo){String :name; validate{includes 3...5, :name}} sqls = @db.sqls parse_insert(sqls.slice!(1)).should == {:validation_type=>"includes_int_range", :column=>"name", :table=>"foo", :argument=>'3...5'} sqls.should == ["BEGIN", "COMMIT", "CREATE TABLE foo (name varchar(255), CHECK ((name IS NOT NULL) AND (name >= 3) AND (name < 5)))"] end it "should support :like constraint validation" do @db.create_table(:foo){String :name; validate{like 'foo%', :name}} sqls = @db.sqls parse_insert(sqls.slice!(1)).should == {:validation_type=>"like", :column=>"name", :table=>"foo", :argument=>'foo%'} sqls.should == ["BEGIN", "COMMIT", "CREATE TABLE foo (name varchar(255), CHECK ((name IS NOT NULL) AND (name LIKE 'foo%' ESCAPE '\\')))"] end it "should support :ilike constraint validation" do @db.create_table(:foo){String :name; validate{ilike 'foo%', :name}} sqls = @db.sqls parse_insert(sqls.slice!(1)).should == {:validation_type=>"ilike", :column=>"name", :table=>"foo", :argument=>'foo%'} sqls.should == ["BEGIN", "COMMIT", "CREATE TABLE foo (name varchar(255), CHECK ((name IS NOT NULL) AND (UPPER(name) LIKE UPPER('foo%') ESCAPE '\\')))"] end it "should support :unique constraint validation" do @db.create_table(:foo){String :name; validate{unique :name}} sqls = @db.sqls parse_insert(sqls.slice!(1)).should == {:validation_type=>"unique", :column=>"name", :table=>"foo"} sqls.should == ["BEGIN", "COMMIT", "CREATE TABLE foo (name varchar(255), UNIQUE (name))"] end it "should support :unique constraint validation with multiple columns" do @db.create_table(:foo){String :name; Integer :id; validate{unique [:name, :id]}} sqls = @db.sqls parse_insert(sqls.slice!(1)).should == {:validation_type=>"unique", :column=>"name,id", :table=>"foo"} sqls.should == ["BEGIN", "COMMIT", "CREATE TABLE foo (name varchar(255), id integer, UNIQUE (name, id))"] end it "should support :unique constraint validation in alter_table" do @db.alter_table(:foo){validate{unique :name}} sqls = @db.sqls parse_insert(sqls.slice!(1)).should == {:validation_type=>"unique", :column=>"name", :table=>"foo"} sqls.should == ["BEGIN", "COMMIT", "ALTER TABLE foo ADD UNIQUE (name)"] end it "should drop constraints and validations when dropping a constraint validation" do @db.alter_table(:foo){String :name; validate{drop :bar}} @db.sqls.should == ["DELETE FROM sequel_constraint_validations WHERE ((table, constraint_name) IN (('foo', 'bar')))", "ALTER TABLE foo DROP CONSTRAINT bar"] end it "should raise an error if attempting to validate inclusion with a range of non-integers" do proc{@db.create_table(:foo){String :name; validate{includes 'a'..'z', :name}}}.should raise_error(Sequel::Error) end it "should raise an error if attempting to validate inclusion with a range of non-integers or strings" do proc{@db.create_table(:foo){String :name; validate{includes [1.0, 2.0], :name}}}.should raise_error(Sequel::Error) end it 
"should raise an error if attempting to validate inclusion with a unsupported object" do proc{@db.create_table(:foo){String :name; validate{includes 'a', :name}}}.should raise_error(Sequel::Error) end it "should raise an error if attempting to drop a constraint validation in a create_table generator" do proc{@db.create_table(:foo){String :name; validate{drop :foo}}}.should raise_error(Sequel::Error) end it "should raise an error if attempting to drop a constraint validation without a name" do proc{@db.alter_table(:foo){String :name; validate{drop nil}}}.should raise_error(Sequel::Error) end it "should raise an error if attempting attempting to process a constraint validation with an unsupported type" do proc{@db.alter_table(:foo){String :name; validations << {:type=>:foo}}}.should raise_error(Sequel::Error) end it "should allow adding constraint validations for tables specified as a SQL::Identifier" do @db.create_table(Sequel.identifier(:sch__foo)){String :name; validate{presence :name}} sqls = @db.sqls parse_insert(sqls.slice!(1)).should == {:validation_type=>"presence", :column=>"name", :table=>"sch__foo"} sqls.should == ["BEGIN", "COMMIT", "CREATE TABLE sch__foo (name varchar(255), CHECK ((name IS NOT NULL) AND (trim(name) != '')))"] end it "should allow adding constraint validations for tables specified as a SQL::QualifiedIdentifier" do @db.create_table(Sequel.qualify(:sch, :foo)){String :name; validate{presence :name}} sqls = @db.sqls parse_insert(sqls.slice!(1)).should == {:validation_type=>"presence", :column=>"name", :table=>"sch.foo"} sqls.should == ["BEGIN", "COMMIT", "CREATE TABLE sch.foo (name varchar(255), CHECK ((name IS NOT NULL) AND (trim(name) != '')))"] end end ����������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/spec/extensions/core_refinements_spec.rb������������������������������������������0000664�0000000�0000000�00000046241�12201565355�0024024�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") if RUBY_VERSION >= '2.0.0' Sequel.extension :core_refinements, :pg_array, :pg_hstore, :pg_row, :pg_range, :pg_row_ops, :pg_range_ops, :pg_array_ops, :pg_hstore_ops, :pg_json, :pg_json_ops using Sequel::CoreRefinements describe "Core refinements" do before do db = Sequel::Database.new @d = db[:items] def @d.supports_regexp?; true end def @d.l(*args, &block) literal(filter_expr(*args, &block)) end def @d.lit(*args) literal(*args) end end it "should support NOT via Symbol#~" do @d.l(~:x).should == 'NOT x' @d.l(~:x__y).should == 'NOT x.y' end it "should support + - * / via Symbol#+,-,*,/" do @d.l(:x + 1 > 100).should == '((x + 1) > 100)' @d.l((:x * :y) < 100.01).should == '((x * y) < 100.01)' @d.l((:x - :y/2) >= 100000000000000000000000000000000000).should == '((x - (y / 2)) >= 100000000000000000000000000000000000)' @d.l((((:x - :y)/(:x + :y))*:z) <= 100).should == '((((x - y) / (x + y)) * z) <= 100)' @d.l(~((((:x - :y)/(:x + :y))*:z) <= 100)).should == '((((x - y) / (x + y)) * z) > 100)' end it "should support LIKE via Symbol#like" do @d.l(:x.like('a')).should == '(x LIKE \'a\' ESCAPE \'\\\')' 
@d.l(:x.like(/a/)).should == '(x ~ \'a\')' @d.l(:x.like('a', 'b')).should == '((x LIKE \'a\' ESCAPE \'\\\') OR (x LIKE \'b\' ESCAPE \'\\\'))' @d.l(:x.like(/a/, /b/i)).should == '((x ~ \'a\') OR (x ~* \'b\'))' @d.l(:x.like('a', /b/)).should == '((x LIKE \'a\' ESCAPE \'\\\') OR (x ~ \'b\'))' end it "should support NOT LIKE via Symbol#like and Symbol#~" do @d.l(~:x.like('a')).should == '(x NOT LIKE \'a\' ESCAPE \'\\\')' @d.l(~:x.like(/a/)).should == '(x !~ \'a\')' @d.l(~:x.like('a', 'b')).should == '((x NOT LIKE \'a\' ESCAPE \'\\\') AND (x NOT LIKE \'b\' ESCAPE \'\\\'))' @d.l(~:x.like(/a/, /b/i)).should == '((x !~ \'a\') AND (x !~* \'b\'))' @d.l(~:x.like('a', /b/)).should == '((x NOT LIKE \'a\' ESCAPE \'\\\') AND (x !~ \'b\'))' end it "should support ILIKE via Symbol#ilike" do @d.l(:x.ilike('a')).should == '(UPPER(x) LIKE UPPER(\'a\') ESCAPE \'\\\')' @d.l(:x.ilike(/a/)).should == '(x ~* \'a\')' @d.l(:x.ilike('a', 'b')).should == '((UPPER(x) LIKE UPPER(\'a\') ESCAPE \'\\\') OR (UPPER(x) LIKE UPPER(\'b\') ESCAPE \'\\\'))' @d.l(:x.ilike(/a/, /b/i)).should == '((x ~* \'a\') OR (x ~* \'b\'))' @d.l(:x.ilike('a', /b/)).should == '((UPPER(x) LIKE UPPER(\'a\') ESCAPE \'\\\') OR (x ~* \'b\'))' end it "should support NOT ILIKE via Symbol#ilike and Symbol#~" do @d.l(~:x.ilike('a')).should == '(UPPER(x) NOT LIKE UPPER(\'a\') ESCAPE \'\\\')' @d.l(~:x.ilike(/a/)).should == '(x !~* \'a\')' @d.l(~:x.ilike('a', 'b')).should == '((UPPER(x) NOT LIKE UPPER(\'a\') ESCAPE \'\\\') AND (UPPER(x) NOT LIKE UPPER(\'b\') ESCAPE \'\\\'))' @d.l(~:x.ilike(/a/, /b/i)).should == '((x !~* \'a\') AND (x !~* \'b\'))' @d.l(~:x.ilike('a', /b/)).should == '((UPPER(x) NOT LIKE UPPER(\'a\') ESCAPE \'\\\') AND (x !~* \'b\'))' end it "should support sql_expr on arrays with all two pairs" do @d.l([[:x, 100],[:y, 'a']].sql_expr).should == '((x = 100) AND (y = \'a\'))' @d.l([[:x, true], [:y, false]].sql_expr).should == '((x IS TRUE) AND (y IS FALSE))' @d.l([[:x, nil], [:y, [1,2,3]]].sql_expr).should == '((x IS NULL) AND (y IN (1, 2, 3)))' end it "should support sql_negate on arrays with all two pairs" do @d.l([[:x, 100],[:y, 'a']].sql_negate).should == '((x != 100) AND (y != \'a\'))' @d.l([[:x, true], [:y, false]].sql_negate).should == '((x IS NOT TRUE) AND (y IS NOT FALSE))' @d.l([[:x, nil], [:y, [1,2,3]]].sql_negate).should == '((x IS NOT NULL) AND (y NOT IN (1, 2, 3)))' end it "should support ~ on arrays with all two pairs" do @d.l(~[[:x, 100],[:y, 'a']]).should == '((x != 100) OR (y != \'a\'))' @d.l(~[[:x, true], [:y, false]]).should == '((x IS NOT TRUE) OR (y IS NOT FALSE))' @d.l(~[[:x, nil], [:y, [1,2,3]]]).should == '((x IS NOT NULL) OR (y NOT IN (1, 2, 3)))' end it "should support sql_or on arrays with all two pairs" do @d.l([[:x, 100],[:y, 'a']].sql_or).should == '((x = 100) OR (y = \'a\'))' @d.l([[:x, true], [:y, false]].sql_or).should == '((x IS TRUE) OR (y IS FALSE))' @d.l([[:x, nil], [:y, [1,2,3]]].sql_or).should == '((x IS NULL) OR (y IN (1, 2, 3)))' end it "should support Array#sql_string_join for concatenation of SQL strings" do @d.lit([:x].sql_string_join).should == '(x)' @d.lit([:x].sql_string_join(', ')).should == '(x)' @d.lit([:x, :y].sql_string_join).should == '(x || y)' @d.lit([:x, :y].sql_string_join(', ')).should == "(x || ', ' || y)" @d.lit([:x.sql_function(1), :y.sql_subscript(1)].sql_string_join).should == '(x(1) || y[1])' @d.lit([:x.sql_function(1), 'y.z'.lit].sql_string_join(', ')).should == "(x(1) || ', ' || y.z)" @d.lit([:x, 1, :y].sql_string_join).should == "(x || '1' || y)" @d.lit([:x, 1, 
:y].sql_string_join(', ')).should == "(x || ', ' || '1' || ', ' || y)" @d.lit([:x, 1, :y].sql_string_join(:y__z)).should == "(x || y.z || '1' || y.z || y)" @d.lit([:x, 1, :y].sql_string_join(1)).should == "(x || '1' || '1' || '1' || y)" @d.lit([:x, :y].sql_string_join('y.x || x.y'.lit)).should == "(x || y.x || x.y || y)" @d.lit([[:x, :y].sql_string_join, [:a, :b].sql_string_join].sql_string_join).should == "(x || y || a || b)" end it "should support sql_expr on hashes" do @d.l({:x => 100, :y => 'a'}.sql_expr)[1...-1].split(' AND ').sort.should == ['(x = 100)', '(y = \'a\')'] @d.l({:x => true, :y => false}.sql_expr)[1...-1].split(' AND ').sort.should == ['(x IS TRUE)', '(y IS FALSE)'] @d.l({:x => nil, :y => [1,2,3]}.sql_expr)[1...-1].split(' AND ').sort.should == ['(x IS NULL)', '(y IN (1, 2, 3))'] end it "should support sql_negate on hashes" do @d.l({:x => 100, :y => 'a'}.sql_negate)[1...-1].split(' AND ').sort.should == ['(x != 100)', '(y != \'a\')'] @d.l({:x => true, :y => false}.sql_negate)[1...-1].split(' AND ').sort.should == ['(x IS NOT TRUE)', '(y IS NOT FALSE)'] @d.l({:x => nil, :y => [1,2,3]}.sql_negate)[1...-1].split(' AND ').sort.should == ['(x IS NOT NULL)', '(y NOT IN (1, 2, 3))'] end it "should support ~ on hashes" do @d.l(~{:x => 100, :y => 'a'})[1...-1].split(' OR ').sort.should == ['(x != 100)', '(y != \'a\')'] @d.l(~{:x => true, :y => false})[1...-1].split(' OR ').sort.should == ['(x IS NOT TRUE)', '(y IS NOT FALSE)'] @d.l(~{:x => nil, :y => [1,2,3]})[1...-1].split(' OR ').sort.should == ['(x IS NOT NULL)', '(y NOT IN (1, 2, 3))'] end it "should support sql_or on hashes" do @d.l({:x => 100, :y => 'a'}.sql_or)[1...-1].split(' OR ').sort.should == ['(x = 100)', '(y = \'a\')'] @d.l({:x => true, :y => false}.sql_or)[1...-1].split(' OR ').sort.should == ['(x IS TRUE)', '(y IS FALSE)'] @d.l({:x => nil, :y => [1,2,3]}.sql_or)[1...-1].split(' OR ').sort.should == ['(x IS NULL)', '(y IN (1, 2, 3))'] end it "should support Hash#& and Hash#|" do @d.l({:y => :z} & :x).should == '((y = z) AND x)' @d.l({:x => :a} & {:y => :z}).should == '((x = a) AND (y = z))' @d.l({:y => :z} | :x).should == '((y = z) OR x)' @d.l({:x => :a} | {:y => :z}).should == '((x = a) OR (y = z))' end end describe "Array#case and Hash#case" do before do @d = Sequel.mock.dataset end specify "should return SQL CASE expression" do @d.literal({:x=>:y}.case(:z)).should == '(CASE WHEN x THEN y ELSE z END)' @d.literal({:x=>:y}.case(:z, :exp)).should == '(CASE exp WHEN x THEN y ELSE z END)' ['(CASE WHEN x THEN y WHEN a THEN b ELSE z END)', '(CASE WHEN a THEN b WHEN x THEN y ELSE z END)'].should(include(@d.literal({:x=>:y, :a=>:b}.case(:z)))) @d.literal([[:x, :y]].case(:z)).should == '(CASE WHEN x THEN y ELSE z END)' @d.literal([[:x, :y], [:a, :b]].case(:z)).should == '(CASE WHEN x THEN y WHEN a THEN b ELSE z END)' @d.literal([[:x, :y], [:a, :b]].case(:z, :exp)).should == '(CASE exp WHEN x THEN y WHEN a THEN b ELSE z END)' @d.literal([[:x, :y], [:a, :b]].case(:z, :exp__w)).should == '(CASE exp.w WHEN x THEN y WHEN a THEN b ELSE z END)' end specify "should return SQL CASE expression with expression even if nil" do @d.literal({:x=>:y}.case(:z, nil)).should == '(CASE NULL WHEN x THEN y ELSE z END)' end specify "should raise an error if an array that isn't all two pairs is used" do proc{[:b].case(:a)}.should raise_error(Sequel::Error) proc{[:b, :c].case(:a)}.should raise_error(Sequel::Error) proc{[[:b, :c], :d].case(:a)}.should raise_error(Sequel::Error) end specify "should raise an error if an empty array/hash is used" do
proc{[].case(:a)}.should raise_error(Sequel::Error) proc{{}.case(:a)}.should raise_error(Sequel::Error) end end describe "Array#sql_value_list and #sql_array" do before do @d = Sequel.mock.dataset end specify "should treat the array as an SQL value list instead of conditions when used as a placeholder value" do @d.filter("(a, b) IN ?", [[:x, 1], [:y, 2]]).sql.should == 'SELECT * WHERE ((a, b) IN ((x = 1) AND (y = 2)))' @d.filter("(a, b) IN ?", [[:x, 1], [:y, 2]].sql_value_list).sql.should == 'SELECT * WHERE ((a, b) IN ((x, 1), (y, 2)))' end specify "should make no difference when used as a hash value" do @d.filter([:a, :b]=>[[:x, 1], [:y, 2]]).sql.should == 'SELECT * WHERE ((a, b) IN ((x, 1), (y, 2)))' @d.filter([:a, :b]=>[[:x, 1], [:y, 2]].sql_value_list).sql.should == 'SELECT * WHERE ((a, b) IN ((x, 1), (y, 2)))' end end describe "String#lit" do before do @ds = Sequel::Database.new[:t] end specify "should return a LiteralString object" do 'xyz'.lit.should be_a_kind_of(Sequel::LiteralString) 'xyz'.lit.to_s.should == 'xyz' end specify "should inhibit string literalization" do @ds.update_sql(:stamp => "NOW()".lit).should == "UPDATE t SET stamp = NOW()" end specify "should return a PlaceholderLiteralString object if args are given" do a = 'DISTINCT ?'.lit(:a) a.should be_a_kind_of(Sequel::SQL::PlaceholderLiteralString) @ds.literal(a).should == 'DISTINCT a' @ds.quote_identifiers = true @ds.literal(a).should == 'DISTINCT "a"' end specify "should handle named placeholders if given a single argument hash" do a = 'DISTINCT :b'.lit(:b=>:a) a.should be_a_kind_of(Sequel::SQL::PlaceholderLiteralString) @ds.literal(a).should == 'DISTINCT a' @ds.quote_identifiers = true @ds.literal(a).should == 'DISTINCT "a"' end specify "should treat placeholder literal strings as generic expressions" do a = ':b'.lit(:b=>:a) @ds.literal(a + 1).should == "(a + 1)" @ds.literal(a & :b).should == "(a AND b)" @ds.literal(a.sql_string + :b).should == "(a || b)" end end describe "String#to_sequel_blob" do specify "should return a Blob object" do 'xyz'.to_sequel_blob.should be_a_kind_of(::Sequel::SQL::Blob) 'xyz'.to_sequel_blob.should == 'xyz' end specify "should retain binary data" do "\1\2\3\4".to_sequel_blob.should == "\1\2\3\4" end end describe "#desc" do before do @ds = Sequel.mock.dataset end specify "should format a DESC clause for a column ref" do @ds.literal(:test.desc).should == 'test DESC' @ds.literal(:items__price.desc).should == 'items.price DESC' end specify "should format a DESC clause for a function" do @ds.literal(:avg.sql_function(:test).desc).should == 'avg(test) DESC' end end describe "#asc" do before do @ds = Sequel.mock.dataset end specify "should format an ASC clause for a column ref" do @ds.literal(:test.asc).should == 'test ASC' @ds.literal(:items__price.asc).should == 'items.price ASC' end specify "should format an ASC clause for a function" do @ds.literal(:avg.sql_function(:test).asc).should == 'avg(test) ASC' end end describe "#as" do before do @ds = Sequel.mock.dataset end specify "should format an AS clause for a column ref" do @ds.literal(:test.as(:t)).should == 'test AS t' @ds.literal(:items__price.as(:p)).should == 'items.price AS p' end specify "should format an AS clause for a function" do @ds.literal(:avg.sql_function(:test).as(:avg)).should == 'avg(test) AS avg' end specify "should format an AS clause for a literal value" do @ds.literal('abc'.as(:abc)).should == "'abc' AS abc" end end describe "Column references" do before do @ds = Sequel::Database.new.dataset def
@ds.quoted_identifier_append(sql, c) sql << "`#{c}`" end @ds.quote_identifiers = true end specify "should be quoted properly" do @ds.literal(:xyz).should == "`xyz`" @ds.literal(:xyz__abc).should == "`xyz`.`abc`" @ds.literal(:xyz.as(:x)).should == "`xyz` AS `x`" @ds.literal(:xyz__abc.as(:x)).should == "`xyz`.`abc` AS `x`" @ds.literal(:xyz___x).should == "`xyz` AS `x`" @ds.literal(:xyz__abc___x).should == "`xyz`.`abc` AS `x`" end specify "should be quoted properly in SQL functions" do @ds.literal(:avg.sql_function(:xyz)).should == "avg(`xyz`)" @ds.literal(:avg.sql_function(:xyz, 1)).should == "avg(`xyz`, 1)" @ds.literal(:avg.sql_function(:xyz).as(:a)).should == "avg(`xyz`) AS `a`" end specify "should be quoted properly in ASC/DESC clauses" do @ds.literal(:xyz.asc).should == "`xyz` ASC" @ds.literal(:avg.sql_function(:xyz, 1).desc).should == "avg(`xyz`, 1) DESC" end specify "should be quoted properly in a cast function" do @ds.literal(:x.cast(:integer)).should == "CAST(`x` AS integer)" @ds.literal(:x__y.cast('varchar(20)')).should == "CAST(`x`.`y` AS varchar(20))" end end describe "Blob" do specify "#to_sequel_blob should return self" do blob = "x".to_sequel_blob blob.to_sequel_blob.object_id.should == blob.object_id end end describe "Symbol#*" do before do @ds = Sequel.mock.dataset end specify "should format a qualified wildcard if no argument" do @ds.literal(:xyz.*).should == 'xyz.*' @ds.literal(:abc.*).should == 'abc.*' end specify "should format a filter expression if an argument" do @ds.literal(:xyz.*(3)).should == '(xyz * 3)' @ds.literal(:abc.*(5)).should == '(abc * 5)' end specify "should support qualified symbols if no argument" do @ds.literal(:xyz__abc.*).should == 'xyz.abc.*' end end describe "Symbol" do before do @ds = Sequel.mock.dataset @ds.quote_identifiers = true @ds.identifier_input_method = :upcase end specify "#identifier should format an identifier" do @ds.literal(:xyz__abc.identifier).should == '"XYZ__ABC"' end specify "#qualify should format a qualified column" do @ds.literal(:xyz.qualify(:abc)).should == '"ABC"."XYZ"' end specify "#qualify should work on QualifiedIdentifiers" do @ds.literal(:xyz.qualify(:abc).qualify(:def)).should == '"DEF"."ABC"."XYZ"' end specify "should be able to qualify an identifier" do @ds.literal(:xyz.identifier.qualify(:xyz__abc)).should == '"XYZ"."ABC"."XYZ"' end specify "should be able to specify a schema.table.column" do @ds.literal(:column.qualify(:table.qualify(:schema))).should == '"SCHEMA"."TABLE"."COLUMN"' @ds.literal(:column.qualify(:table__name.identifier.qualify(:schema))).should == '"SCHEMA"."TABLE__NAME"."COLUMN"' end specify "should be able to specify order" do @oe = :xyz.desc @oe.class.should == Sequel::SQL::OrderedExpression @oe.descending.should == true @oe = :xyz.asc @oe.class.should == Sequel::SQL::OrderedExpression @oe.descending.should == false end specify "should work correctly with objects" do o = Object.new def o.sql_literal(ds) "(foo)" end @ds.literal(:column.qualify(o)).should == '(foo)."COLUMN"' end end describe "Symbol" do before do @ds = Sequel::Database.new.dataset end specify "should support sql_function method" do @ds.literal(:COUNT.sql_function('1')).should == "COUNT('1')" @ds.select(:COUNT.sql_function('1')).sql.should == "SELECT COUNT('1')" end specify "should support cast method" do @ds.literal(:abc.cast(:integer)).should == "CAST(abc AS integer)" end specify "should support sql array accesses via sql_subscript" do @ds.literal(:abc.sql_subscript(1)).should == "abc[1]" 
@ds.literal(:abc__def.sql_subscript(1)).should == "abc.def[1]" @ds.literal(:abc.sql_subscript(1)|2).should == "abc[1, 2]" @ds.literal(:abc.sql_subscript(1)[2]).should == "abc[1][2]" end specify "should support cast_numeric and cast_string" do x = :abc.cast_numeric x.should be_a_kind_of(Sequel::SQL::NumericExpression) @ds.literal(x).should == "CAST(abc AS integer)" x = :abc.cast_numeric(:real) x.should be_a_kind_of(Sequel::SQL::NumericExpression) @ds.literal(x).should == "CAST(abc AS real)" x = :abc.cast_string x.should be_a_kind_of(Sequel::SQL::StringExpression) @ds.literal(x).should == "CAST(abc AS varchar(255))" x = :abc.cast_string(:varchar) x.should be_a_kind_of(Sequel::SQL::StringExpression) @ds.literal(x).should == "CAST(abc AS varchar(255))" end specify "should allow database independent types when casting" do db = @ds.db def db.cast_type_literal(type) return :foo if type == Integer return :bar if type == String type end @ds.literal(:abc.cast(String)).should == "CAST(abc AS bar)" @ds.literal(:abc.cast_string).should == "CAST(abc AS bar)" @ds.literal(:abc.cast_string(Integer)).should == "CAST(abc AS foo)" @ds.literal(:abc.cast_numeric).should == "CAST(abc AS foo)" @ds.literal(:abc.cast_numeric(String)).should == "CAST(abc AS bar)" end specify "should support SQL EXTRACT function via #extract" do @ds.literal(:abc.extract(:year)).should == "extract(year FROM abc)" end end describe "Postgres extensions integration" do before do @db = Sequel.mock end it "Symbol#pg_array should return an ArrayOp" do @db.literal(:a.pg_array.unnest).should == "unnest(a)" end it "Symbol#pg_row should return a PGRowOp" do @db.literal(:a.pg_row[:a]).should == "(a).a" end it "Symbol#hstore should return an HStoreOp" do @db.literal(:a.hstore['a']).should == "(a -> 'a')" end it "Symbol#pg_json should return a JSONOp" do @db.literal(:a.pg_json[%w'a b']).should == "(a #> ARRAY['a','b'])" end it "Symbol#pg_range should return a RangeOp" do @db.literal(:a.pg_range.lower).should == "lower(a)" end it "Array#pg_array should return a PGArray" do @db.literal([1].pg_array.op.unnest).should == "unnest(ARRAY[1])" @db.literal([1].pg_array(:int4).op.unnest).should == "unnest(ARRAY[1]::int4[])" end it "Array#pg_json should return a JSONArray" do @db.literal([1].pg_json).should == "'[1]'::json" end it "Array#pg_row should return an ArrayRow" do @db.literal([1].pg_row).should == "ROW(1)" end it "Hash#hstore should return an HStore" do @db.literal({'a'=>1}.hstore.op['a']).should == '(\'"a"=>"1"\'::hstore -> \'a\')' end it "Hash#pg_json should return a JSONHash" do @db.literal({'a'=>'b'}.pg_json).should == "'{\"a\":\"b\"}'::json" end it "Range#pg_range should return a PGRange" do @db.literal((1..2).pg_range).should == "'[1,2]'" @db.literal((1..2).pg_range(:int4range)).should == "'[1,2]'::int4range" end end else skip_warn "core_refinements extension: only works on ruby 2.0+" end
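A minimal usage sketch of the refinements exercised above (an illustration, not part of the original spec file; the mock database and :items table are hypothetical, and Ruby 2.0+ is assumed):

# `using` scopes the Symbol/Array/Hash SQL helpers to this file only, unlike
# the global monkey patching done by the older core_extensions extension.
require 'sequel'
Sequel.extension :core_refinements
using Sequel::CoreRefinements

db = Sequel.mock
db[:items].where(:name.like('Sequel%')).sql
# => "SELECT * FROM items WHERE (name LIKE 'Sequel%' ESCAPE '\')"
db[:items].order(:price.desc, :name.asc).sql
# => "SELECT * FROM items ORDER BY price DESC, name ASC"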
ruby-sequel-4.1.1/spec/extensions/dataset_associations_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "Sequel::Plugins::DatasetAssociations" do before do @db = Sequel.mock @Base = Class.new(Sequel::Model) @Base.plugin :dataset_associations @Artist = Class.new(@Base) @Album = Class.new(@Base) @Tag = Class.new(@Base) @Artist.meta_def(:name){'Artist'} @Album.meta_def(:name){'Album'} @Tag.meta_def(:name){'Tag'} @Artist.dataset = @db[:artists] @Album.dataset = @db[:albums] @Tag.dataset = @db[:tags] @Artist.columns :id, :name @Album.columns :id, :name, :artist_id @Tag.columns :id, :name @Artist.plugin :many_through_many @Artist.plugin :pg_array_associations @Tag.plugin :pg_array_associations @Artist.one_to_many :albums, :class=>@Album @Artist.one_to_one :first_album, :class=>@Album @Album.many_to_one :artist, :class=>@Artist @Album.many_to_many :tags, :class=>@Tag @Tag.many_to_many :albums, :class=>@Album @Artist.pg_array_to_many :artist_tags, :class=>@Tag, :key=>:tag_ids @Tag.many_to_pg_array :artists, :class=>@Artist @Artist.many_through_many :tags, [[:albums, :artist_id, :id], [:albums_tags, :album_id, :tag_id]], :class=>@Tag end it "should work for many_to_one associations" do ds = @Album.artists ds.should be_a_kind_of(Sequel::Dataset) ds.model.should == @Artist ds.sql.should == "SELECT * FROM artists WHERE (artists.id IN (SELECT albums.artist_id FROM albums))" end it "should work for one_to_many associations" do ds = @Artist.albums ds.should be_a_kind_of(Sequel::Dataset) ds.model.should == @Album ds.sql.should == "SELECT * FROM albums WHERE (albums.artist_id IN (SELECT artists.id FROM artists))" end it "should work for one_to_one associations" do ds = @Artist.first_albums ds.should be_a_kind_of(Sequel::Dataset) ds.model.should == @Album ds.sql.should == "SELECT * FROM albums WHERE (albums.artist_id IN (SELECT artists.id FROM artists))" end it "should work for many_to_many associations" do ds = @Album.tags ds.should be_a_kind_of(Sequel::Dataset) ds.model.should == @Tag ds.sql.should == "SELECT tags.* FROM tags WHERE (tags.id IN (SELECT albums_tags.tag_id FROM albums INNER JOIN albums_tags ON (albums_tags.album_id = albums.id)))" end it "should work for many_through_many associations" do ds = @Artist.tags ds.should be_a_kind_of(Sequel::Dataset) ds.model.should == @Tag ds.sql.should == "SELECT tags.* FROM tags WHERE (tags.id IN (SELECT albums_tags.tag_id FROM artists INNER JOIN albums ON (albums.artist_id = artists.id) INNER JOIN albums_tags ON (albums_tags.album_id = albums.id)))" end it "should work for pg_array_to_many associations" do ds = @Artist.artist_tags ds.should be_a_kind_of(Sequel::Dataset) ds.model.should == @Tag ds.sql.should == "SELECT * FROM tags
WHERE (id IN (SELECT unnest(artists.tag_ids) FROM artists))" end it "should work for many_to_pg_array associations" do ds = @Tag.artists ds.should be_a_kind_of(Sequel::Dataset) ds.model.should == @Artist ds.sql.should == "SELECT * FROM artists WHERE coalesce((tag_ids && (SELECT array_agg(tags.id) FROM tags)), 'f')" end it "should have an associated method that takes an association symbol" do ds = @Album.associated(:artist) ds.should be_a_kind_of(Sequel::Dataset) ds.model.should == @Artist ds.sql.should == "SELECT * FROM artists WHERE (artists.id IN (SELECT albums.artist_id FROM albums))" end it "should raise an Error if an invalid association is given to associated" do proc{@Album.associated(:foo)}.should raise_error(Sequel::Error) end it "should raise an Error if an unrecognized association type is used" do @Album.association_reflection(:artist)[:type] = :foo proc{@Album.artists}.should raise_error(Sequel::Error) end it "should work correctly when chaining" do ds = @Artist.albums.tags ds.should be_a_kind_of(Sequel::Dataset) ds.model.should == @Tag ds.sql.should == "SELECT tags.* FROM tags WHERE (tags.id IN (SELECT albums_tags.tag_id FROM albums INNER JOIN albums_tags ON (albums_tags.album_id = albums.id) WHERE (albums.artist_id IN (SELECT artists.id FROM artists))))" end it "should deal correctly with filters before the association method" do @Artist.filter(:id=>1).albums.sql.should == "SELECT * FROM albums WHERE (albums.artist_id IN (SELECT artists.id FROM artists WHERE (id = 1)))" end it "should deal correctly with filters after the association method" do @Artist.albums.filter(:id=>1).sql.should == "SELECT * FROM albums WHERE ((albums.artist_id IN (SELECT artists.id FROM artists)) AND (id = 1))" end it "should deal correctly with block on the association" do @Artist.one_to_many :albums, :clone=>:albums do |ds| ds.filter(:id=>1..100) end @Artist.albums.sql.should == "SELECT * FROM albums WHERE ((albums.artist_id IN (SELECT artists.id FROM artists)) AND (id >= 1) AND (id <= 100))" end it "should deal correctly with :conditions option on the association" do @Artist.one_to_many :albums, :clone=>:albums, :conditions=>{:id=>1..100} @Artist.albums.sql.should == "SELECT * FROM albums WHERE ((albums.artist_id IN (SELECT artists.id FROM artists)) AND (id >= 1) AND (id <= 100))" end it "should deal correctly with :distinct option on the association" do @Artist.one_to_many :albums, :clone=>:albums, :distinct=>true @Artist.albums.sql.should == "SELECT DISTINCT * FROM albums WHERE (albums.artist_id IN (SELECT artists.id FROM artists))" end it "should deal correctly with :eager option on the association" do @Artist.one_to_many :albums, :clone=>:albums, :eager=>:tags @Artist.albums.opts[:eager].should == {:tags=>nil} end it "should deal correctly with :eager_block option on the association, ignoring the association block" do @Artist.one_to_many :albums, :clone=>:albums, :eager_block=>proc{|ds| ds.filter(:id=>1..100)} do |ds| ds.filter(:id=>2..200) end @Artist.albums.sql.should == "SELECT * FROM albums WHERE ((albums.artist_id IN (SELECT artists.id FROM artists)) AND (id >= 1) AND (id <= 100))" end it "should deal correctly with :extend option on the association" do @Artist.one_to_many :albums, :clone=>:albums, :extend=>Module.new{def foo(x) filter(:id=>x) end} @Artist.albums.foo(1).sql.should == "SELECT * FROM albums WHERE ((albums.artist_id IN (SELECT artists.id FROM artists)) AND (id = 1))" end it "should deal correctly with :order option on the association" do @Artist.one_to_many :albums, 
:clone=>:albums, :order=>:name @Artist.albums.sql.should == "SELECT * FROM albums WHERE (albums.artist_id IN (SELECT artists.id FROM artists)) ORDER BY name" end it "should deal correctly with :select option on the association" do @Artist.one_to_many :albums, :clone=>:albums, :select=>[:id, :name] @Artist.albums.sql.should == "SELECT id, name FROM albums WHERE (albums.artist_id IN (SELECT artists.id FROM artists))" end end describe "Sequel::Plugins::DatasetAssociations with composite keys" do before do @db = Sequel.mock @Base = Class.new(Sequel::Model) @Base.plugin :dataset_associations @Artist = Class.new(@Base) @Album = Class.new(@Base) @Tag = Class.new(@Base) @Artist.meta_def(:name){'Artist'} @Album.meta_def(:name){'Album'} @Tag.meta_def(:name){'Tag'} @Artist.dataset = @db[:artists] @Album.dataset = @db[:albums] @Tag.dataset = @db[:tags] @Artist.set_primary_key([:id1, :id2]) @Album.set_primary_key([:id1, :id2]) @Tag.set_primary_key([:id1, :id2]) @Artist.columns :id1, :id2, :name @Album.columns :id1, :id2, :name, :artist_id1, :artist_id2 @Tag.columns :id1, :id2, :name @Artist.plugin :many_through_many @Artist.one_to_many :albums, :class=>@Album, :key=>[:artist_id1, :artist_id2] @Artist.one_to_one :first_album, :class=>@Album, :key=>[:artist_id1, :artist_id2] @Album.many_to_one :artist, :class=>@Artist, :key=>[:artist_id1, :artist_id2] @Album.many_to_many :tags, :class=>@Tag, :left_key=>[:album_id1, :album_id2], :right_key=>[:tag_id1, :tag_id2] @Tag.many_to_many :albums, :class=>@Album, :right_key=>[:album_id1, :album_id2], :left_key=>[:tag_id1, :tag_id2] @Artist.many_through_many :tags, [[:albums, [:artist_id1, :artist_id2], [:id1, :id2]], [:albums_tags, [:album_id1, :album_id2], [:tag_id1, :tag_id2]]], :class=>@Tag end it "should work for many_to_one associations" do @Album.artists.sql.should == "SELECT * FROM artists WHERE ((artists.id1, artists.id2) IN (SELECT albums.artist_id1, albums.artist_id2 FROM albums))" end it "should work for one_to_many associations" do @Artist.albums.sql.should == "SELECT * FROM albums WHERE ((albums.artist_id1, albums.artist_id2) IN (SELECT artists.id1, artists.id2 FROM artists))" end it "should work for one_to_one associations" do @Artist.first_albums.sql.should == "SELECT * FROM albums WHERE ((albums.artist_id1, albums.artist_id2) IN (SELECT artists.id1, artists.id2 FROM artists))" end it "should work for many_to_many associations" do @Album.tags.sql.should == "SELECT tags.* FROM tags WHERE ((tags.id1, tags.id2) IN (SELECT albums_tags.tag_id1, albums_tags.tag_id2 FROM albums INNER JOIN albums_tags ON ((albums_tags.album_id1 = albums.id1) AND (albums_tags.album_id2 = albums.id2))))" end it "should work for many_through_many associations" do @Artist.tags.sql.should == "SELECT tags.* FROM tags WHERE ((tags.id1, tags.id2) IN (SELECT albums_tags.tag_id1, albums_tags.tag_id2 FROM artists INNER JOIN albums ON ((albums.artist_id1 = artists.id1) AND (albums.artist_id2 = artists.id2)) INNER JOIN albums_tags ON ((albums_tags.album_id1 = albums.id1) AND (albums_tags.album_id2 = albums.id2))))" end it "should work correctly when chaining" do @Artist.albums.tags.sql.should == "SELECT tags.* FROM tags WHERE ((tags.id1, tags.id2) IN (SELECT albums_tags.tag_id1, albums_tags.tag_id2 FROM albums INNER JOIN albums_tags ON ((albums_tags.album_id1 = albums.id1) AND (albums_tags.album_id2 = albums.id2)) WHERE ((albums.artist_id1, albums.artist_id2) IN (SELECT artists.id1, artists.id2 FROM artists))))" end end 
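A brief usage sketch of the plugin behavior specified above (illustrative only; the Artist/Album models and their keys mirror the hypothetical setup in the before blocks):

# dataset_associations defines association methods on datasets themselves,
# so they can be called after arbitrary filtering and chained further.
require 'sequel'
db = Sequel.mock
class Artist < Sequel::Model(db[:artists])
  plugin :dataset_associations
  one_to_many :albums, :class=>:Album, :key=>:artist_id
end
class Album < Sequel::Model(db[:albums]); end

Artist.where{id > 10}.albums.where(:name=>'Blue').sql
# => roughly "SELECT * FROM albums WHERE ((albums.artist_id IN
#    (SELECT artists.id FROM artists WHERE (id > 10))) AND (name = 'Blue'))"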
ruby-sequel-4.1.1/spec/extensions/date_arithmetic_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") asd = begin require 'active_support/duration' true rescue LoadError => e skip_warn "date_arithmetic extension (partial): can't load active_support/duration (#{e.class}: #{e})" false end Sequel.extension :date_arithmetic describe "date_arithmetic extension" do dbf = lambda do |db_type| db = Sequel.connect("mock://#{db_type}") db.extension :date_arithmetic db end before do @h0 = {:days=>0} @h1 = {:days=>1, :years=>nil, :hours=>0} @h2 = {:years=>1, :months=>1, :days=>1, :hours=>1, :minutes=>1, :seconds=>1} end it "should have Sequel.date_add with an interval hash return an appropriate Sequel::SQL::DateAdd expression" do da = Sequel.date_add(:a, :days=>1) da.should be_a_kind_of(Sequel::SQL::DateAdd) da.expr.should == :a da.interval.should == {:days=>1} Sequel.date_add(:a, :years=>1, :months=>2, :days=>3, :hours=>1, :minutes=>1, :seconds=>1).interval.should == {:years=>1, :months=>2, :days=>3, :hours=>1, :minutes=>1, :seconds=>1} end it "should have Sequel.date_sub with an interval hash return an appropriate Sequel::SQL::DateAdd expression" do da = Sequel.date_sub(:a, :days=>1) da.should be_a_kind_of(Sequel::SQL::DateAdd) da.expr.should == :a da.interval.should == {:days=>-1} Sequel.date_sub(:a, :years=>1, :months=>2, :days=>3, :hours=>1, :minutes=>1, :seconds=>1).interval.should == {:years=>-1, :months=>-2, :days=>-3, :hours=>-1, :minutes=>-1, :seconds=>-1} end it "should have Sequel.date_* with an interval hash handle nil values" do Sequel.date_sub(:a, :days=>1, :hours=>nil).interval.should == {:days=>-1} end it "should raise an error if given string values in an interval hash" do lambda{Sequel.date_add(:a, :days=>'1')}.should raise_error(Sequel::InvalidValue) end if asd it "should have Sequel.date_add with an ActiveSupport::Duration return an appropriate Sequel::SQL::DateAdd expression" do da = Sequel.date_add(:a, ActiveSupport::Duration.new(1, [[:days, 1]])) da.should be_a_kind_of(Sequel::SQL::DateAdd) da.expr.should == :a da.interval.should == {:days=>1} Sequel.date_add(:a, ActiveSupport::Duration.new(1, [[:years, 1], [:months, 1], [:days, 1], [:minutes, 61], [:seconds, 1]])).interval.should == {:years=>1, :months=>1, :days=>1, :minutes=>61, :seconds=>1} end it "should have Sequel.date_sub with an ActiveSupport::Duration return an appropriate Sequel::SQL::DateAdd expression" do da = Sequel.date_sub(:a, ActiveSupport::Duration.new(1, [[:days, 1]])) da.should be_a_kind_of(Sequel::SQL::DateAdd) da.expr.should == :a da.interval.should == {:days=>-1} Sequel.date_sub(:a, ActiveSupport::Duration.new(1,
[[:years, 1], [:months, 1], [:days, 1], [:minutes, 61], [:seconds, 1]])).interval.should == {:years=>-1, :months=>-1, :days=>-1, :minutes=>-61, :seconds=>-1} end end it "should correctly literalize on Postgres" do db = dbf.call(:postgres) db.literal(Sequel.date_add(:a, @h0)).should == "CAST(a AS timestamp)" db.literal(Sequel.date_add(:a, @h1)).should == "(CAST(a AS timestamp) + CAST('1 days ' AS interval))" db.literal(Sequel.date_add(:a, @h2)).should == "(CAST(a AS timestamp) + CAST('1 years 1 months 1 days 1 hours 1 minutes 1 seconds ' AS interval))" end it "should correctly literalize on SQLite" do db = dbf.call(:sqlite) db.literal(Sequel.date_add(:a, @h0)).should == "datetime(a)" db.literal(Sequel.date_add(:a, @h1)).should == "datetime(a, '1 days')" db.literal(Sequel.date_add(:a, @h2)).should == "datetime(a, '1 years', '1 months', '1 days', '1 hours', '1 minutes', '1 seconds')" end it "should correctly literalize on MySQL" do db = dbf.call(:mysql) db.literal(Sequel.date_add(:a, @h0)).should == "CAST(a AS DATETIME)" db.literal(Sequel.date_add(:a, @h1)).should == "DATE_ADD(a, INTERVAL 1 DAY)" db.literal(Sequel.date_add(:a, @h2)).should == "DATE_ADD(DATE_ADD(DATE_ADD(DATE_ADD(DATE_ADD(DATE_ADD(a, INTERVAL 1 YEAR), INTERVAL 1 MONTH), INTERVAL 1 DAY), INTERVAL 1 HOUR), INTERVAL 1 MINUTE), INTERVAL 1 SECOND)" end it "should correctly literalize on HSQLDB" do db = Sequel.mock def db.database_type; :hsqldb end db.extension :date_arithmetic db.literal(Sequel.date_add(:a, @h0)).should == "CAST(CAST(a AS timestamp) AS timestamp)" db.literal(Sequel.date_add(:a, @h1)).should == "DATE_ADD(CAST(a AS timestamp), INTERVAL 1 DAY)" db.literal(Sequel.date_add(:a, @h2)).should == "DATE_ADD(DATE_ADD(DATE_ADD(DATE_ADD(DATE_ADD(DATE_ADD(CAST(a AS timestamp), INTERVAL 1 YEAR), INTERVAL 1 MONTH), INTERVAL 1 DAY), INTERVAL 1 HOUR), INTERVAL 1 MINUTE), INTERVAL 1 SECOND)" end it "should correctly literalize on MSSQL" do db = dbf.call(:mssql) db.literal(Sequel.date_add(:a, @h0)).should == "CAST(a AS datetime)" db.literal(Sequel.date_add(:a, @h1)).should == "DATEADD(day, 1, a)" db.literal(Sequel.date_add(:a, @h2)).should == "DATEADD(second, 1, DATEADD(minute, 1, DATEADD(hour, 1, DATEADD(day, 1, DATEADD(month, 1, DATEADD(year, 1, a))))))" end it "should correctly literalize on H2" do db = Sequel.mock def db.database_type; :h2 end db.extension :date_arithmetic db.literal(Sequel.date_add(:a, @h0)).should == "CAST(a AS timestamp)" db.literal(Sequel.date_add(:a, @h1)).should == "DATEADD('day', 1, a)" db.literal(Sequel.date_add(:a, @h2)).should == "DATEADD('second', 1, DATEADD('minute', 1, DATEADD('hour', 1, DATEADD('day', 1, DATEADD('month', 1, DATEADD('year', 1, a))))))" end it "should correctly literalize on access" do db = dbf.call(:access) db.literal(Sequel.date_add(:a, @h0)).should == "CDate(a)" db.literal(Sequel.date_add(:a, @h1)).should == "DATEADD('d', 1, a)" db.literal(Sequel.date_add(:a, @h2)).should == "DATEADD('s', 1, DATEADD('n', 1, DATEADD('h', 1, DATEADD('d', 1, DATEADD('m', 1, DATEADD('yyyy', 1, a))))))" end it "should correctly literalize on Derby" do db = Sequel.mock def db.database_type; :derby end db.extension :date_arithmetic db.literal(Sequel.date_add(:a, @h0)).should == "CAST(a AS timestamp)" db.literal(Sequel.date_add(:a, @h1)).should == "{fn timestampadd(SQL_TSI_DAY, 1, timestamp(a))}" db.literal(Sequel.date_add(:a, @h2)).should == "{fn timestampadd(SQL_TSI_SECOND, 1, timestamp({fn timestampadd(SQL_TSI_MINUTE, 1, timestamp({fn timestampadd(SQL_TSI_HOUR, 1, timestamp({fn timestampadd(SQL_TSI_DAY, 
1, timestamp({fn timestampadd(SQL_TSI_MONTH, 1, timestamp({fn timestampadd(SQL_TSI_YEAR, 1, timestamp(a))}))}))}))}))}))}" db.literal(Sequel.date_add(Date.civil(2012, 11, 12), @h1)).should == "{fn timestampadd(SQL_TSI_DAY, 1, timestamp((CAST('2012-11-12' AS varchar(255)) || ' 00:00:00')))}" end it "should correctly literalize on Oracle" do db = dbf.call(:oracle) db.literal(Sequel.date_add(:a, @h0)).should == "CAST(a AS timestamp)" db.literal(Sequel.date_add(:a, @h1)).should == "(a + INTERVAL '1' DAY)" db.literal(Sequel.date_add(:a, @h2)).should == "(a + INTERVAL '1' YEAR + INTERVAL '1' MONTH + INTERVAL '1' DAY + INTERVAL '1' HOUR + INTERVAL '1' MINUTE + INTERVAL '1' SECOND)" end it "should correctly literalize on DB2" do db = dbf.call(:db2) db.literal(Sequel.date_add(:a, @h0)).should == "CAST(a AS timestamp)" db.literal(Sequel.date_add(:a, @h1)).should == "(CAST(a AS timestamp) + 1 days)" db.literal(Sequel.date_add(:a, @h2)).should == "(CAST(a AS timestamp) + 1 years + 1 months + 1 days + 1 hours + 1 minutes + 1 seconds)" end specify "should raise an error if literalizing on an unsupported database" do db = Sequel.mock db.extension :date_arithmetic lambda{db.literal(Sequel.date_add(:a, @h0))}.should raise_error(Sequel::Error) end end
ruby-sequel-4.1.1/spec/extensions/defaults_setter_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "Sequel::Plugins::DefaultsSetter" do before do @db = db = Sequel.mock def db.supports_schema_parsing?() true end @c = c = Class.new(Sequel::Model(db[:foo])) @c.instance_variable_set(:@db_schema, {:a=>{}}) @c.plugin :defaults_setter @c.columns :a @pr = proc{|x| db.meta_def(:schema){|*| [[:a, {:ruby_default => x}]]}; c.dataset = c.dataset; c} end after do Sequel.datetime_class = Time end it "should set default value upon initialization" do @pr.call(2).new.a.should == 2 end it "should not mark the column as modified" do @pr.call(2).new.changed_columns.should == [] end it "should not set a default of nil" do @pr.call(nil).new.class.default_values.should == {} end it "should set a default of false" do @pr.call(false).new.a.should == false end it "should handle Sequel::CURRENT_DATE default by using the current Date" do @pr.call(Sequel::CURRENT_DATE).new.a.should == Date.today end it "should handle Sequel::CURRENT_TIMESTAMP default by using the current Time" do t = @pr.call(Sequel::CURRENT_TIMESTAMP).new.a t.should be_a_kind_of(Time) (t - Time.now).should < 1 end it "should handle Sequel::CURRENT_TIMESTAMP default by using the current DateTime if Sequel.datetime_class is DateTime" do Sequel.datetime_class = DateTime t = @pr.call(Sequel::CURRENT_TIMESTAMP).new.a t.should be_a_kind_of(DateTime) (t - DateTime.now).should < 1/86400.0 end it "should not override a given
value" do @pr.call(2) @c.new('a'=>3).a.should == 3 @c.new('a'=>nil).a.should == nil end it "should work correctly when subclassing" do Class.new(@pr.call(2)).new.a.should == 2 end it "should contain the default values in default_values" do @pr.call(2).default_values.should == {:a=>2} @pr.call(nil).default_values.should == {} end it "should allow modifications of default values" do @pr.call(2) @c.default_values[:a] = 3 @c.new.a.should == 3 end it "should allow proc default values" do @pr.call(2) @c.default_values[:a] = proc{3} @c.new.a.should == 3 end it "should have procs that set default values set them to nil" do @pr.call(2) @c.default_values[:a] = proc{nil} @c.new.a.should == nil end it "should work correctly on a model without a dataset" do @pr.call(2) c = Class.new(Sequel::Model(@db[:bar])) c.plugin :defaults_setter c.default_values.should == {:a=>2} end end ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/spec/extensions/dirty_spec.rb�����������������������������������������������������0000664�0000000�0000000�00000013560�12201565355�0021626�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "Sequel::Plugins::Dirty" do before do @db = Sequel.mock(:fetch=>{:initial=>'i', :initial_changed=>'ic'}, :numrows=>1) @c = Class.new(Sequel::Model(@db[:c])) @c.plugin :dirty @c.columns :initial, :initial_changed, :missing, :missing_changed end shared_examples_for "dirty plugin" do it "initial_value should be the current value if value has not changed" do @o.initial_value(:initial).should == 'i' @o.initial_value(:missing).should == nil end it "initial_value should be the intial value if value has changed" do @o.initial_value(:initial_changed).should == 'ic' @o.initial_value(:missing_changed).should == nil end it "initial_value should handle case where initial value is reassigned later" do @o.initial_changed = 'ic' @o.initial_value(:initial_changed).should == 'ic' @o.missing_changed = nil @o.initial_value(:missing_changed).should == nil end it "changed_columns should handle case where initial value is reassigned later" do @o.changed_columns.should == [:initial_changed, :missing_changed] @o.initial_changed = 'ic' @o.changed_columns.should == [:missing_changed] @o.missing_changed = nil @o.changed_columns.should == [:missing_changed] end it "column_change should give initial and current values if there has been a change made" do @o.column_change(:initial_changed).should == ['ic', 'ic2'] @o.column_change(:missing_changed).should == [nil, 'mc2'] end it "column_change should be nil if no change has been made" do @o.column_change(:initial).should == nil @o.column_change(:missing).should == nil end it "column_changed? 
should return whether the column has changed" do @o.column_changed?(:initial).should == false @o.column_changed?(:initial_changed).should == true @o.column_changed?(:missing).should == false @o.column_changed?(:missing_changed).should == true end it "column_changed? should handle case where initial value is reassigned later" do @o.initial_changed = 'ic' @o.column_changed?(:initial_changed).should == false @o.missing_changed = nil @o.column_changed?(:missing_changed).should == false end it "column_changes should give initial and current values" do @o.column_changes.should == {:initial_changed=>['ic', 'ic2'], :missing_changed=>[nil, 'mc2']} end it "reset_column should reset the column to its initial value" do @o.reset_column(:initial) @o.initial.should == 'i' @o.reset_column(:initial_changed) @o.initial_changed.should == 'ic' @o.reset_column(:missing) @o.missing.should == nil @o.reset_column(:missing_changed) @o.missing_changed.should == nil end it "reset_column should remove missing values from the values" do @o.reset_column(:missing) @o.values.has_key?(:missing).should == false @o.reset_column(:missing_changed) @o.values.has_key?(:missing_changed).should == false end it "refresh should clear the cached initial values" do @o.refresh @o.column_changes.should == {} end it "will_change_column should be used to signal in-place modification to column" do @o.will_change_column(:initial) @o.initial << 'b' @o.column_change(:initial).should == ['i', 'ib'] @o.will_change_column(:initial_changed) @o.initial_changed << 'b' @o.column_change(:initial_changed).should == ['ic', 'ic2b'] @o.will_change_column(:missing) @o.values[:missing] = 'b' @o.column_change(:missing).should == [nil, 'b'] @o.will_change_column(:missing_changed) @o.missing_changed << 'b' @o.column_change(:missing_changed).should == [nil, 'mc2b'] end it "will_change_column should handle different types of existing objects" do [nil, true, false, Class.new{undef_method :clone}.new, Class.new{def clone; raise TypeError; end}.new].each do |v| o = @c.new(:initial=>v) o.will_change_column(:initial) o.initial = 'a' o.column_change(:initial).should == [v, 'a'] end end it "should work when freezing objects" do @o.freeze @o.initial_value(:initial).should == 'i' proc{@o.initial = 'b'}.should raise_error end end describe "with new instance" do before do @o = @c.new(:initial=>'i', :initial_changed=>'ic') @o.initial_changed = 'ic2' @o.missing_changed = 'mc2' end it_should_behave_like "dirty plugin" it "save should clear the cached initial values" do @o.save @o.column_changes.should == {} end it "save_changes should clear the cached initial values" do def (@c.instance_dataset).supports_insert_select?() true end def (@c.instance_dataset).insert_select(*) {:id=>1} end @o.save @o.column_changes.should == {} end end describe "with existing instance" do before do @o = @c[1] @o.initial_changed = 'ic2' @o.missing_changed = 'mc2' end it_should_behave_like "dirty plugin" it "previous_changes should be the previous changes after saving" do @o.save @o.previous_changes.should == {:initial_changed=>['ic', 'ic2'], :missing_changed=>[nil, 'mc2']} end it "should work when freezing objects after saving" do @o.initial = 'a' @o.save @o.freeze @o.previous_changes[:initial].should ==
['i', 'a'] proc{@o.initial = 'b'}.should raise_error end end end
ruby-sequel-4.1.1/spec/extensions/eager_each_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "Sequel::Plugins::EagerEach" do before do @c = Class.new(Sequel::Model(:items)) @c.columns :id, :parent_id @c.plugin :eager_each @c.one_to_many :children, :class=>@c, :key=>:parent_id @c.db.sqls end it "should make #each on an eager dataset do eager loading" do a = [] ds = @c.eager(:children) ds._fetch = [{:id=>1, :parent_id=>nil}, {:id=>2, :parent_id=>nil}] @c.dataset._fetch = [{:id=>3, :parent_id=>1}, {:id=>4, :parent_id=>1}, {:id=>5, :parent_id=>2}, {:id=>6, :parent_id=>2}] ds.each{|c| a << c} a.should == [@c.load(:id=>1, :parent_id=>nil), @c.load(:id=>2, :parent_id=>nil)] a.map{|c| c.associations[:children]}.should == [[@c.load(:id=>3, :parent_id=>1), @c.load(:id=>4, :parent_id=>1)], [@c.load(:id=>5, :parent_id=>2), @c.load(:id=>6, :parent_id=>2)]] sqls = @c.db.sqls sqls.shift.should == 'SELECT * FROM items' ['SELECT * FROM items WHERE (items.parent_id IN (1, 2))', 'SELECT * FROM items WHERE (items.parent_id IN (2, 1))'].should include(sqls.pop) end it "should make #each on an eager_graph dataset do eager loading" do a = [] ds = @c.eager_graph(:children) ds._fetch = [] ds._fetch = [{:id=>1, :parent_id=>nil, :children_id=>3, :children_parent_id=>1}, {:id=>1, :parent_id=>nil, :children_id=>4, :children_parent_id=>1}, {:id=>2, :parent_id=>nil, :children_id=>5, :children_parent_id=>2}, {:id=>2, :parent_id=>nil, :children_id=>6, :children_parent_id=>2}] ds.each{|c| a << c} a.should == [@c.load(:id=>1, :parent_id=>nil), @c.load(:id=>2, :parent_id=>nil)] a.map{|c| c.associations[:children]}.should == [[@c.load(:id=>3, :parent_id=>1), @c.load(:id=>4, :parent_id=>1)], [@c.load(:id=>5, :parent_id=>2), @c.load(:id=>6, :parent_id=>2)]] @c.db.sqls.should == ['SELECT items.id, items.parent_id, children.id AS children_id, children.parent_id AS children_parent_id FROM items LEFT OUTER JOIN items AS children ON (children.parent_id = items.id)'] end end
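A short usage sketch of the behavior specified above (illustrative; the Item model and its table are hypothetical):

# eager_each makes #each on an eager dataset perform the eager load first,
# so association access inside the block issues no additional queries.
require 'sequel'
db = Sequel.mock
class Item < Sequel::Model(db[:items])
  plugin :eager_each
  one_to_many :children, :class=>self, :key=>:parent_id
end
Item.eager(:children).each do |item|
  item.associations[:children] # already populated by the single eager query
end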
ruby-sequel-4.1.1/spec/extensions/empty_array_ignore_nulls_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "empty_array_ignore_nulls extension" do before do @dataset = Sequel.mock[:test].extension(:empty_array_ignore_nulls) end specify "should handle all types of IN/NOT IN queries with empty arrays" do @dataset.filter(:id => []).sql.should == "SELECT * FROM test WHERE (1 = 0)" @dataset.filter([:id1, :id2] => []).sql.should == "SELECT * FROM test WHERE (1 = 0)" @dataset.exclude(:id => []).sql.should == "SELECT * FROM test WHERE (1 = 1)" @dataset.exclude([:id1, :id2] => []).sql.should == "SELECT * FROM test WHERE (1 = 1)" end specify "should handle IN/NOT IN queries with multiple columns and an empty dataset where the database doesn't support it" do @dataset.meta_def(:supports_multiple_column_in?){false} db = Sequel.mock d1 = db[:test].select(:id1, :id2).filter(:region=>'Asia').columns(:id1, :id2) @dataset.filter([:id1, :id2] => d1).sql.should == "SELECT * FROM test WHERE (1 = 0)" db.sqls.should == ["SELECT id1, id2 FROM test WHERE (region = 'Asia')"] @dataset.exclude([:id1, :id2] => d1).sql.should == "SELECT * FROM test WHERE (1 = 1)" db.sqls.should == ["SELECT id1, id2 FROM test WHERE (region = 'Asia')"] end end
ruby-sequel-4.1.1/spec/extensions/error_splitter_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "Sequel::Plugins::ErrorSplitter" do before do @c = Class.new(Sequel::Model) @c.plugin :error_splitter @m = @c.new def @m.validate errors.add([:a, :b], 'is bad') end end it "should split errors for multiple columns and assign them to each column" do @m.valid?.should be_false @m.errors.should == {:a=>['is bad'], :b=>['is bad']} end end
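A minimal model sketch of the behavior specified above (the Interval model, its columns, and the validation are hypothetical):

# error_splitter turns one error added for several columns into a separate
# entry per column, which suits form helpers that render errors per field.
require 'sequel'
class Interval < Sequel::Model(Sequel.mock[:intervals])
  plugin :error_splitter
  def validate
    super
    errors.add([:lower, :upper], 'must differ') if values[:lower] == values[:upper]
  end
end
i = Interval.load(:lower=>1, :upper=>1)
i.valid?  # => false
i.errors  # => {:lower=>['must differ'], :upper=>['must differ']}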
ruby-sequel-4.1.1/spec/extensions/eval_inspect_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") Sequel.extension :eval_inspect describe "eval_inspect extension" do before do @ds = Sequel.mock.dataset @ds.meta_def(:supports_window_functions?){true} @ds.meta_def(:literal_blob_append){|sql, s| sql << "X'#{s}'"} end specify "should make eval(obj.inspect) == obj for all Sequel::SQL::Expression subclasses" do [ # Objects with components where eval(inspect) == self Sequel::SQL::AliasedExpression.new(:b, :a), Sequel::SQL::CaseExpression.new({:b=>:a}, :c), Sequel::SQL::CaseExpression.new({:b=>:a}, :c, :d), Sequel::SQL::Cast.new(:a, :b), Sequel::SQL::ColumnAll.new(:a), Sequel::SQL::ComplexExpression.new(:'=', :b, :a), Sequel::SQL::Constant.new(:a), Sequel::CURRENT_DATE, Sequel::CURRENT_TIMESTAMP, Sequel::CURRENT_TIME, Sequel::SQLTRUE, Sequel::SQLFALSE, Sequel::NULL, Sequel::NOTNULL, Sequel::SQL::Function.new(:a, :b, :c), Sequel::SQL::Identifier.new(:a), Sequel::SQL::JoinClause.new(:inner, :b, :c), Sequel::SQL::JoinOnClause.new({:d=>:a}, :inner, :b, :c), Sequel::SQL::JoinUsingClause.new([:a], :inner, :b, :c), Sequel::SQL::PlaceholderLiteralString.new('? = ?', [:a, :b]), Sequel::SQL::PlaceholderLiteralString.new(':a = :b', [{:a=>:b, :b=>42}]), Sequel::SQL::OrderedExpression.new(:a), Sequel::SQL::OrderedExpression.new(:a, false), Sequel::SQL::OrderedExpression.new(:a, false, :nulls=>:first), Sequel::SQL::OrderedExpression.new(:a, false, :nulls=>:last), Sequel::SQL::QualifiedIdentifier.new(:b, :a), Sequel::SQL::Subscript.new(:a, [1, 2]), Sequel::SQL::Window.new(:order=>:a, :partition=>:b), Sequel::SQL::WindowFunction.new(Sequel::SQL::Function.new(:a, :b, :c), Sequel::SQL::Window.new(:order=>:a, :partition=>:b)), Sequel::SQL::Wrapper.new(:a), # Objects with components where eval(inspect) != self Sequel::SQL::AliasedExpression.new(Sequel::SQL::Blob.new('s'), :a), Sequel::SQL::AliasedExpression.new(Sequel::LiteralString.new('s'), :a), Sequel::SQL::PlaceholderLiteralString.new('(a, b) IN ?', [Sequel::SQL::ValueList.new([[1, 2]])]), Sequel::SQL::CaseExpression.new({{:d=>Sequel::LiteralString.new('e')}=>:a}, :c, :d), Sequel::SQL::AliasedExpression.new(Date.new(2011, 10, 11), :a), Sequel::SQL::AliasedExpression.new(Sequel::SQLTime.create(10, 20, 30, 500000.125), :a), Sequel::SQL::AliasedExpression.new(DateTime.new(2011, 9, 11, 10, 20, 30), :a), Sequel::SQL::AliasedExpression.new(DateTime.new(2011, 9, 11, 10, 20, 30, 0.25), :a), Sequel::SQL::AliasedExpression.new(DateTime.new(2011, 9, 11, 10, 20, 30, -0.25), :a), Sequel::SQL::AliasedExpression.new(Time.local(2011, 9, 11, 10, 20, 30), :a), Sequel::SQL::AliasedExpression.new(Time.local(2011, 9, 11, 10, 20, 30, 500000.125), :a), Sequel::SQL::AliasedExpression.new(Time.utc(2011, 9, 11, 10, 20, 30), :a), Sequel::SQL::AliasedExpression.new(Time.utc(2011, 9, 11, 10, 20, 30, 500000.125), :a), Sequel::SQL::AliasedExpression.new(BigDecimal.new('1.000000000000000000000000000000000000000000000001'), :a),
Sequel::SQL::AliasedExpression.new(Sequel::CURRENT_DATE, :a), Sequel::SQL::AliasedExpression.new(Sequel::CURRENT_TIMESTAMP, :a), ].each do |o| v = eval(o.inspect) v.should == o @ds.literal(v).should == @ds.literal(o) end end end
ruby-sequel-4.1.1/spec/extensions/filter_having_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "filter_having extension" do before do @ds = Sequel.mock[:t].extension(:filter_having) @dsh = @ds.having(:a) end it "should make filter operate on HAVING clause if dataset has a HAVING clause" do @dsh.filter(:b).sql.should == 'SELECT * FROM t HAVING (a AND b)' end it "should make filter operate on WHERE clause if dataset does not have a HAVING clause" do @ds.filter(:b).sql.should == 'SELECT * FROM t WHERE b' end it "should make and operate on HAVING clause if dataset has a HAVING clause" do @dsh.and(:b).sql.should == 'SELECT * FROM t HAVING (a AND b)' end it "should make and operate on WHERE clause if dataset does not have a HAVING clause" do @ds.where(:a).and(:b).sql.should == 'SELECT * FROM t WHERE (a AND b)' end it "should make or operate on HAVING clause if dataset has a HAVING clause" do @dsh.or(:b).sql.should == 'SELECT * FROM t HAVING (a OR b)' end it "should make or operate on WHERE clause if dataset does not have a HAVING clause" do @ds.where(:a).or(:b).sql.should == 'SELECT * FROM t WHERE (a OR b)' end it "should make exclude operate on HAVING clause if dataset has a HAVING clause" do @dsh.exclude(:b).sql.should == 'SELECT * FROM t HAVING (a AND NOT b)' end it "should make exclude operate on WHERE clause if dataset does not have a HAVING clause" do @ds.exclude(:b).sql.should == 'SELECT * FROM t WHERE NOT b' end end
ruby-sequel-4.1.1/spec/extensions/force_encoding_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") if RUBY_VERSION >= '1.9.0' describe "force_encoding plugin" do before do @c = Class.new(Sequel::Model) do end @c.columns :id, :x @c.plugin :force_encoding, 'UTF-8' @e1 = Encoding.find('UTF-8') end specify "should force encoding to given encoding on load" do s = 'blah' s.force_encoding('US-ASCII') o = @c.load(:id=>1, :x=>s) o.x.should == 'blah' o.x.encoding.should == @e1 end specify "should force encoding to given encoding when setting column values" do s = 'blah' s.force_encoding('US-ASCII') o = @c.new(:x=>s) o.x.should == 'blah' o.x.encoding.should == @e1 end specify "should work correctly when given a frozen string" do s = 'blah' s.force_encoding('US-ASCII') s.freeze o = @c.new(:x=>s) o.x.should == 'blah'
o.x.encoding.should == @e1 end specify "should have a forced_encoding class accessor" do s = 'blah' s.force_encoding('US-ASCII') @c.forced_encoding = 'Windows-1258' o = @c.load(:id=>1, :x=>s) o.x.should == 'blah' o.x.encoding.should == Encoding.find('Windows-1258') end specify "should not force encoding if forced_encoding is nil" do s = 'blah' s.force_encoding('US-ASCII') @c.forced_encoding = nil o = @c.load(:id=>1, :x=>s) o.x.should == 'blah' o.x.encoding.should == Encoding.find('US-ASCII') end specify "should work correctly when subclassing" do c = Class.new(@c) s = 'blah' s.force_encoding('US-ASCII') o = c.load(:id=>1, :x=>s) o.x.should == 'blah' o.x.encoding.should == @e1 c.plugin :force_encoding, 'UTF-16LE' s = '' s.force_encoding('US-ASCII') o = c.load(:id=>1, :x=>s) o.x.should == '' o.x.encoding.should == Encoding.find('UTF-16LE') @c.plugin :force_encoding, 'UTF-32LE' s = '' s.force_encoding('US-ASCII') o = @c.load(:id=>1, :x=>s) o.x.should == '' o.x.encoding.should == Encoding.find('UTF-32LE') s = '' s.force_encoding('US-ASCII') o = c.load(:id=>1, :x=>s) o.x.should == '' o.x.encoding.should == Encoding.find('UTF-16LE') end specify "should work when saving new model instances" do o = @c.new ds = DB[:a] def ds.first s = 'blah' s.force_encoding('US-ASCII') {:id=>1, :x=>s} end @c.dataset = ds o.save o.x.should == 'blah' o.x.encoding.should == @e1 end specify "should work when refreshing model instances" do o = @c.load(:id=>1, :x=>'as') ds = DB[:a] def ds.first s = 'blah' s.force_encoding('US-ASCII') {:id=>1, :x=>s} end @c.dataset = ds o.refresh o.x.should == 'blah' o.x.encoding.should == @e1 end end else skip_warn "force_encoding plugin: only works on ruby 1.9+" end
ruby-sequel-4.1.1/spec/extensions/graph_each_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe Sequel::Dataset, " graphing" do before do @db = Sequel.mock(:columns=>proc do |sql| case sql when /points/ [:id, :x, :y] when /lines/ [:id, :x, :y, :graph_id] else [:id, :name, :x, :y, :lines_x] end end).extension(:graph_each) @ds1 = @db.from(:points) @ds2 = @db.from(:lines) @ds3 = @db.from(:graphs) [@ds1, @ds2, @ds3].each{|ds| ds.columns} @db.sqls end it "#graph_each should handle graph using currently selected columns as the basis for the selected columns in a new graph" do ds = @ds1.select(:id).graph(@ds2, :x=>:id) ds._fetch = {:id=>1, :lines_id=>2, :x=>3, :y=>4, :graph_id=>5} ds.all.should == [{:points=>{:id=>1}, :lines=>{:id=>2, :x=>3, :y=>4, :graph_id=>5}}] ds = @ds1.select(:id, :x).graph(@ds2, :x=>:id) ds._fetch = {:id=>1, :x=>-1, :lines_id=>2, :lines_x=>3, :y=>4, :graph_id=>5} ds.all.should == [{:points=>{:id=>1, :x=>-1}, :lines=>{:id=>2, :x=>3, :y=>4, :graph_id=>5}}] ds = @ds1.select(Sequel.identifier(:id), Sequel.qualify(:points, :x)).graph(@ds2, :x=>:id) ds._fetch = {:id=>1, :x=>-1, :lines_id=>2, :lines_x=>3, :y=>4, :graph_id=>5} ds.all.should ==

# ruby-sequel-4.1.1/spec/extensions/graph_each_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")

describe Sequel::Dataset, " graphing" do
  before do
    @db = Sequel.mock(:columns=>proc do |sql|
      case sql
      when /points/
        [:id, :x, :y]
      when /lines/
        [:id, :x, :y, :graph_id]
      else
        [:id, :name, :x, :y, :lines_x]
      end
    end).extension(:graph_each)
    @ds1 = @db.from(:points)
    @ds2 = @db.from(:lines)
    @ds3 = @db.from(:graphs)
    [@ds1, @ds2, @ds3].each{|ds| ds.columns}
    @db.sqls
  end

  it "#graph_each should handle graph using currently selected columns as the basis for the selected columns in a new graph" do
    ds = @ds1.select(:id).graph(@ds2, :x=>:id)
    ds._fetch = {:id=>1, :lines_id=>2, :x=>3, :y=>4, :graph_id=>5}
    ds.all.should == [{:points=>{:id=>1}, :lines=>{:id=>2, :x=>3, :y=>4, :graph_id=>5}}]

    ds = @ds1.select(:id, :x).graph(@ds2, :x=>:id)
    ds._fetch = {:id=>1, :x=>-1, :lines_id=>2, :lines_x=>3, :y=>4, :graph_id=>5}
    ds.all.should == [{:points=>{:id=>1, :x=>-1}, :lines=>{:id=>2, :x=>3, :y=>4, :graph_id=>5}}]

    ds = @ds1.select(Sequel.identifier(:id), Sequel.qualify(:points, :x)).graph(@ds2, :x=>:id)
    ds._fetch = {:id=>1, :x=>-1, :lines_id=>2, :lines_x=>3, :y=>4, :graph_id=>5}
    ds.all.should == [{:points=>{:id=>1, :x=>-1}, :lines=>{:id=>2, :x=>3, :y=>4, :graph_id=>5}}]

    ds = @ds1.select(Sequel.identifier(:id).qualify(:points), Sequel.identifier(:x).as(:y)).graph(@ds2, :x=>:id)
    ds._fetch = {:id=>1, :y=>-1, :lines_id=>2, :x=>3, :lines_y=>4, :graph_id=>5}
    ds.all.should == [{:points=>{:id=>1, :y=>-1}, :lines=>{:id=>2, :x=>3, :y=>4, :graph_id=>5}}]

    ds = @ds1.select(:id, Sequel.identifier(:x).qualify(Sequel.identifier(:points)).as(Sequel.identifier(:y))).graph(@ds2, :x=>:id)
    ds._fetch = {:id=>1, :y=>-1, :lines_id=>2, :x=>3, :lines_y=>4, :graph_id=>5}
    ds.all.should == [{:points=>{:id=>1, :y=>-1}, :lines=>{:id=>2, :x=>3, :y=>4, :graph_id=>5}}]
  end

  it "#graph_each should split the result set into component tables" do
    @db.fetch = [[{:id=>1,:x=>2,:y=>3,:lines_id=>4,:lines_x=>5,:lines_y=>6,:graph_id=>7}],
      [{:id=>1,:x=>2,:y=>3,:lines_id=>4,:lines_x=>5,:lines_y=>6,:graph_id=>7, :graphs_id=>8, :name=>9, :graphs_x=>10, :graphs_y=>11, :graphs_lines_x=>12}],
      [{:id=>1,:x=>2,:y=>3,:lines_id=>4,:lines_x=>5,:lines_y=>6,:graph_id=>7, :graph_id_0=>8, :graph_x=>9, :graph_y=>10, :graph_graph_id=>11}]]
    @ds1.graph(@ds2, :x=>:id).all.should == [{:points=>{:id=>1, :x=>2, :y=>3}, :lines=>{:id=>4, :x=>5, :y=>6, :graph_id=>7}}]
    @ds1.graph(@ds2, :x=>:id).graph(@ds3, :id=>:graph_id).all.should == [{:points=>{:id=>1, :x=>2, :y=>3}, :lines=>{:id=>4, :x=>5, :y=>6, :graph_id=>7}, :graphs=>{:id=>8, :name=>9, :x=>10, :y=>11, :lines_x=>12}}]
    @ds1.graph(@ds2, :x=>:id).graph(@ds2, {:y=>:points__id}, :table_alias=>:graph).all.should == [{:points=>{:id=>1, :x=>2, :y=>3}, :lines=>{:id=>4, :x=>5, :y=>6, :graph_id=>7}, :graph=>{:id=>8, :x=>9, :y=>10, :graph_id=>11}}]
  end

  it "#graph_each should give a nil value instead of a hash when all values for a table are nil" do
    @db.fetch = [[{:id=>1,:x=>2,:y=>3,:lines_id=>nil,:lines_x=>nil,:lines_y=>nil,:graph_id=>nil}],
      [{:id=>1,:x=>2,:y=>3,:lines_id=>4,:lines_x=>5,:lines_y=>6,:graph_id=>7, :graphs_id=>nil, :name=>nil, :graphs_x=>nil, :graphs_y=>nil, :graphs_lines_x=>nil},
       {:id=>2,:x=>4,:y=>5,:lines_id=>nil,:lines_x=>nil,:lines_y=>nil,:graph_id=>nil, :graphs_id=>nil, :name=>nil, :graphs_x=>nil, :graphs_y=>nil, :graphs_lines_x=>nil},
       {:id=>3,:x=>5,:y=>6,:lines_id=>4,:lines_x=>5,:lines_y=>6,:graph_id=>7, :graphs_id=>7, :name=>8, :graphs_x=>9, :graphs_y=>10, :graphs_lines_x=>11},
       {:id=>3,:x=>5,:y=>6,:lines_id=>7,:lines_x=>5,:lines_y=>8,:graph_id=>9, :graphs_id=>9, :name=>10, :graphs_x=>10, :graphs_y=>11, :graphs_lines_x=>12}]]
    @ds1.graph(@ds2, :x=>:id).all.should == [{:points=>{:id=>1, :x=>2, :y=>3}, :lines=>nil}]
    @ds1.graph(@ds2, :x=>:id).graph(@ds3, :id=>:graph_id).all.should == [{:points=>{:id=>1, :x=>2, :y=>3}, :lines=>{:id=>4, :x=>5, :y=>6, :graph_id=>7}, :graphs=>nil},
      {:points=>{:id=>2, :x=>4, :y=>5}, :lines=>nil, :graphs=>nil},
      {:points=>{:id=>3, :x=>5, :y=>6}, :lines=>{:id=>4, :x=>5, :y=>6, :graph_id=>7}, :graphs=>{:id=>7, :name=>8, :x=>9, :y=>10, :lines_x=>11}},
      {:points=>{:id=>3, :x=>5, :y=>6}, :lines=>{:id=>7, :x=>5, :y=>8, :graph_id=>9}, :graphs=>{:id=>9, :name=>10, :x=>10, :y=>11, :lines_x=>12}}]
  end

  it "#graph_each should not give a nil value instead of a hash when any value for a table is false" do
    @db.fetch = {:id=>1,:x=>2,:y=>3,:lines_id=>nil,:lines_x=>false,:lines_y=>nil,:graph_id=>nil}
    @ds1.graph(@ds2, :x=>:id).all.should == [{:points=>{:id=>1, :x=>2, :y=>3}, :lines=>{:id=>nil, :x=>false, :y=>nil, :graph_id=>nil}}]
  end

  it "#graph_each should not included tables graphed with the :select => false option in the result set" do
    @db.fetch = {:id=>1,:x=>2,:y=>3,:graphs_id=>8, :name=>9, :graphs_x=>10, :graphs_y=>11, :lines_x=>12}
    @ds1.graph(:lines, {:x=>:id}, :select=>false).graph(:graphs, :id=>:graph_id).all.should == [{:points=>{:id=>1, :x=>2, :y=>3}, :graphs=>{:id=>8, :name=>9, :x=>10, :y=>11, :lines_x=>12}}]
  end

  it "#graph_each should only include the columns selected with #set_graph_aliases and #add_graph_aliases, if called" do
    @db.fetch = [nil, [{:x=>2,:y=>3}], nil, [{:x=>2}], [{:x=>2, :q=>18}]]
    @ds1.graph(:lines, :x=>:id).set_graph_aliases(:x=>[:points, :x], :y=>[:lines, :y]).all.should == [{:points=>{:x=>2}, :lines=>{:y=>3}}]
    ds = @ds1.graph(:lines, :x=>:id).set_graph_aliases(:x=>[:points, :x])
    ds.all.should == [{:points=>{:x=>2}, :lines=>nil}]
    ds = ds.add_graph_aliases(:q=>[:points, :r, 18])
    ds.all.should == [{:points=>{:x=>2, :r=>18}, :lines=>nil}]
  end

  it "#graph_each should correctly map values when #set_graph_aliases is used with a third argument for each entry" do
    @db.fetch = [nil, {:x=>2,:y=>3}]
    @ds1.graph(:lines, :x=>:id).set_graph_aliases(:x=>[:points, :z1, 2], :y=>[:lines, :z2, Sequel.function(:random)]).all.should == [{:points=>{:z1=>2}, :lines=>{:z2=>3}}]
  end

  it "#graph_each should correctly map values when #set_graph_aliases is used with a single argument for each entry" do
    @db.fetch = [nil, {:x=>2,:y=>3}]
    @ds1.graph(:lines, :x=>:id).set_graph_aliases(:x=>[:points], :y=>[:lines]).all.should == [{:points=>{:x=>2}, :lines=>{:y=>3}}]
  end

  it "#graph_each should correctly map values when #set_graph_aliases is used with a symbol for each entry" do
    @db.fetch = [nil, {:x=>2,:y=>3}]
    @ds1.graph(:lines, :x=>:id).set_graph_aliases(:x=>:points, :y=>:lines).all.should == [{:points=>{:x=>2}, :lines=>{:y=>3}}]
  end

  it "#graph_each should run the row_proc for graphed datasets" do
    @ds1.row_proc = proc{|h| h.keys.each{|k| h[k] *= 2}; h}
    @ds2.row_proc = proc{|h| h.keys.each{|k| h[k] *= 3}; h}
    @db.fetch = {:id=>1,:x=>2,:y=>3,:lines_id=>4,:lines_x=>5,:lines_y=>6,:graph_id=>7}
    @ds1.graph(@ds2, :x=>:id).all.should == [{:points=>{:id=>2, :x=>4, :y=>6}, :lines=>{:id=>12, :x=>15, :y=>18, :graph_id=>21}}]
  end
end
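
# A sketch of graph_each mirroring the spec's mock setup (the fetch row and
# column lists are illustrative mock data): each joined row comes back split
# into one sub-hash per table, keyed by table alias.
require 'sequel'
db = Sequel.mock(:fetch=>{:id=>1, :x=>2, :y=>3, :lines_id=>4, :lines_x=>5, :lines_y=>6, :graph_id=>7},
  :columns=>proc{|sql| sql =~ /lines/ ? [:id, :x, :y, :graph_id] : [:id, :x, :y]}).extension(:graph_each)
db[:points].graph(db[:lines], :x=>:id).all
# => [{:points=>{:id=>1, :x=>2, :y=>3}, :lines=>{:id=>4, :x=>5, :y=>6, :graph_id=>7}}]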

# ruby-sequel-4.1.1/spec/extensions/hash_aliases_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")

describe "hash_aliases extension" do
  before do
    @ds = Sequel.mock.dataset.extension(:hash_aliases)
  end

  it "should make from treat hash arguments as alias specifiers" do
    @ds.from(:a=>:b).sql.should == "SELECT * FROM a AS b"
  end

  it "should not affect other arguments to from" do
    @ds.from(:a, :b).sql.should == "SELECT * FROM a, b"
  end

  it "should make select treat hash arguments as alias specifiers" do
    @ds.select(:a=>:b).sql.should == "SELECT a AS b"
    @ds.select{{:a=>:b}}.sql.should == "SELECT a AS b"
  end

  it "should not affect other arguments to select" do
    @ds.select(:a, :b).sql.should == "SELECT a, b"
  end
end
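
# A minimal sketch of hash_aliases on a mock dataset, per the spec above:
# hash arguments to #from and #select are treated as alias specifications.
require 'sequel'
ds = Sequel.mock.dataset.extension(:hash_aliases)
ds.from(:a=>:b).sql   # => "SELECT * FROM a AS b"
ds.select(:a=>:b).sql # => "SELECT a AS b"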
# ruby-sequel-4.1.1/spec/extensions/hook_class_methods_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")

model_class = proc do |klass, &block|
  c = Class.new(klass)
  c.plugin :hook_class_methods
  c.class_eval(&block) if block
  c
end

describe Sequel::Model, "hook_class_methods plugin" do
  before do
    DB.reset
  end

  specify "should be definable using a block" do
    adds = []
    c = model_class.call Sequel::Model do
      before_save{adds << 'hi'}
    end
    c.new.before_save
    adds.should == ['hi']
  end

  specify "should be definable using a method name" do
    adds = []
    c = model_class.call Sequel::Model do
      define_method(:bye){adds << 'bye'}
      before_save :bye
    end
    c.new.before_save
    adds.should == ['bye']
  end

  specify "should be additive" do
    adds = []
    c = model_class.call Sequel::Model do
      after_save{adds << 'hyiyie'}
      after_save{adds << 'byiyie'}
    end
    c.new.after_save
    adds.should == ['hyiyie', 'byiyie']
  end

  specify "before hooks should run in reverse order" do
    adds = []
    c = model_class.call Sequel::Model do
      before_save{adds << 'hyiyie'}
      before_save{adds << 'byiyie'}
    end
    c.new.before_save
    adds.should == ['byiyie', 'hyiyie']
  end

  specify "should not be additive if the method or tag already exists" do
    adds = []
    c = model_class.call Sequel::Model do
      define_method(:bye){adds << 'bye'}
      before_save :bye
      before_save :bye
    end
    c.new.before_save
    adds.should == ['bye']

    adds = []
    d = model_class.call Sequel::Model do
      before_save(:bye){adds << 'hyiyie'}
      before_save(:bye){adds << 'byiyie'}
    end
    d.new.before_save
    adds.should == ['byiyie']

    adds = []
    e = model_class.call Sequel::Model do
      define_method(:bye){adds << 'bye'}
      before_save :bye
      before_save(:bye){adds << 'byiyie'}
    end
    e.new.before_save
    adds.should == ['byiyie']

    adds = []
    e = model_class.call Sequel::Model do
      define_method(:bye){adds << 'bye'}
      before_save(:bye){adds << 'byiyie'}
      before_save :bye
    end
    e.new.before_save
    adds.should == ['bye']
  end

  specify "should be inheritable" do
    adds = []
    a = model_class.call Sequel::Model do
      after_save{adds << '123'}
    end
    b = Class.new(a)
    b.class_eval do
      after_save{adds << '456'}
      after_save{adds << '789'}
    end
    b.new.after_save
    adds.should == ['123', '456', '789']
  end

  specify "should be overridable in descendant classes" do
    adds = []
    a = model_class.call Sequel::Model do
      before_save{adds << '123'}
    end
    b = Class.new(a)
    b.class_eval do
      define_method(:before_save){adds << '456'}
    end
    a.new.before_save
    adds.should == ['123']
    adds = []
    b.new.before_save
    adds.should == ['456']
  end

  specify "should stop processing if a before hook returns false" do
    flag = true
    adds = []
    a = model_class.call Sequel::Model do
      before_save{adds << 'cruel'; flag}
      before_save{adds << 'blah'; flag}
    end
    a.new.before_save
    adds.should == ['blah', 'cruel']

    # chain should not break on nil
    adds = []
    flag = nil
    a.new.before_save
    adds.should == ['blah', 'cruel']

    adds = []
    flag = false
    a.new.before_save
    adds.should == ['blah']

    b = Class.new(a)
    b.class_eval do
      before_save{adds << 'mau'}
    end
    adds = []
    b.new.before_save
    adds.should == ['mau', 'blah']
  end
end

describe "Model#before_create && Model#after_create" do
  before do
    DB.reset
    @c = model_class.call Sequel::Model(:items) do
      columns :x
      no_primary_key
      after_create {DB << "BLAH after"}
    end
  end

  specify "should be called around new record creation" do
    @c.before_create {DB << "BLAH before"}
    @c.create(:x => 2)
    DB.sqls.should == ['BLAH before', 'INSERT INTO items (x) VALUES (2)', 'BLAH after']
  end

  specify ".create should cancel the save and raise an error if before_create returns false and raise_on_save_failure is true" do
    @c.before_create{false}
    proc{@c.create(:x => 2)}.should raise_error(Sequel::BeforeHookFailed)
    DB.sqls.should == []
  end

  specify ".create should cancel the save and return nil if before_create returns false and raise_on_save_failure is false" do
    @c.before_create{false}
    @c.raise_on_save_failure = false
    @c.create(:x => 2).should == nil
    DB.sqls.should == []
  end
end

describe "Model#before_update && Model#after_update" do
  before do
    DB.reset
    @c = model_class.call(Sequel::Model(:items)) do
      after_update {DB << "BLAH after"}
    end
  end

  specify "should be called around record update" do
    @c.before_update {DB << "BLAH before"}
    m = @c.load(:id => 2233, :x=>123)
    m.save
    DB.sqls.should == ['BLAH before', 'UPDATE items SET x = 123 WHERE (id = 2233)', 'BLAH after']
  end

  specify "#save should cancel the save and raise an error if before_update returns false and raise_on_save_failure is true" do
    @c.before_update{false}
    proc{@c.load(:id => 2233).save}.should raise_error(Sequel::BeforeHookFailed)
    DB.sqls.should == []
  end

  specify "#save should cancel the save and return nil if before_update returns false and raise_on_save_failure is false" do
    @c.before_update{false}
    @c.raise_on_save_failure = false
    @c.load(:id => 2233).save.should == nil
    DB.sqls.should == []
  end
end

describe "Model#before_save && Model#after_save" do
  before do
    DB.reset
    @c = model_class.call(Sequel::Model(:items)) do
      columns :x
      after_save {DB << "BLAH after"}
    end
  end

  specify "should be called around record update" do
    @c.before_save {DB << "BLAH before"}
    m = @c.load(:id => 2233, :x=>123)
    m.save
    DB.sqls.should == ['BLAH before', 'UPDATE items SET x = 123 WHERE (id = 2233)', 'BLAH after']
  end

  specify "should be called around record creation" do
    @c.before_save {DB << "BLAH before"}
    @c.no_primary_key
    @c.create(:x => 2)
    DB.sqls.should == ['BLAH before', 'INSERT INTO items (x) VALUES (2)', 'BLAH after']
  end

  specify "#save should cancel the save and raise an error if before_save returns false and raise_on_save_failure is true" do
    @c.before_save{false}
    proc{@c.load(:id => 2233).save}.should raise_error(Sequel::BeforeHookFailed)
    DB.sqls.should == []
  end

  specify "#save should cancel the save and return nil if before_save returns false and raise_on_save_failure is false" do
    @c.before_save{false}
    @c.raise_on_save_failure = false
    @c.load(:id => 2233).save.should == nil
    DB.sqls.should == []
  end
end

describe "Model#before_destroy && Model#after_destroy" do
  before do
    DB.reset
    @c = model_class.call(Sequel::Model(:items)) do
      after_destroy {DB << "BLAH after"}
    end
  end

  specify "should be called around record destruction" do
    @c.before_destroy {DB << "BLAH before"}
    m = @c.load(:id => 2233)
    m.destroy
    DB.sqls.should == ['BLAH before', "DELETE FROM items WHERE id = 2233", 'BLAH after']
  end

  specify "#destroy should cancel the destroy and raise an error if before_destroy returns false and raise_on_save_failure is true" do
    @c.before_destroy{false}
    proc{@c.load(:id => 2233).destroy}.should raise_error(Sequel::BeforeHookFailed)
    DB.sqls.should == []
  end

  specify "#destroy should cancel the destroy and return nil if before_destroy returns false and raise_on_save_failure is false" do
    @c.before_destroy{false}
    @c.raise_on_save_failure = false
    @c.load(:id => 2233).destroy.should == nil
    DB.sqls.should == []
  end
end

describe "Model#before_validation && Model#after_validation" do
  before do
    DB.reset
    @c = model_class.call(Sequel::Model(:items)) do
      plugin :validation_class_methods
      after_validation{DB << "BLAH after"}

      def self.validate(o)
        o.errors.add(:id, 'not valid') unless o[:id] == 2233
      end
      columns :id
    end
  end

  specify "should be called around validation" do
    @c.before_validation{DB << "BLAH before"}
    m = @c.load(:id => 2233)
    m.should be_valid
    DB.sqls.should == ['BLAH before', 'BLAH after']

    DB.sqls.clear
    m = @c.load(:id => 22)
    m.should_not be_valid
    DB.sqls.should == ['BLAH before', 'BLAH after']
  end

  specify "should be called when calling save" do
    @c.before_validation{DB << "BLAH before"}
    m = @c.load(:id => 2233, :x=>123)
    m.save.should == m
    DB.sqls.should == ['BLAH before', 'BLAH after', 'UPDATE items SET x = 123 WHERE (id = 2233)']

    DB.sqls.clear
    m = @c.load(:id => 22)
    m.raise_on_save_failure = false
    m.save.should == nil
    DB.sqls.should == ['BLAH before', 'BLAH after']
  end

  specify "#save should cancel the save and raise an error if before_validation returns false and raise_on_save_failure is true" do
    @c.before_validation{false}
    proc{@c.load(:id => 2233).save}.should raise_error(Sequel::BeforeHookFailed)
    DB.sqls.should == []
  end

  specify "#save should cancel the save and return nil if before_validation returns false and raise_on_save_failure is false" do
    @c.before_validation{false}
    @c.raise_on_save_failure = false
    @c.load(:id => 2233).save.should == nil
    DB.sqls.should == []
  end
end

describe "Model.has_hooks?" do
  before do
    @c = model_class.call(Sequel::Model(:items))
  end

  specify "should return false if no hooks are defined" do
    @c.has_hooks?(:before_save).should be_false
  end

  specify "should return true if hooks are defined" do
    @c.before_save {'blah'}
    @c.has_hooks?(:before_save).should be_true
  end

  specify "should return false if no hooks are defined, even in a subclass" do
    @d = Class.new(@c)
    @d.has_hooks?(:before_save).should be_false
  end
end

describe "Model#add_hook_type" do
  before do
    class ::Foo < Sequel::Model(:items)
      plugin :hook_class_methods
      add_hook_type :before_bar, :after_bar

      def bar
        return :b if before_bar == false
        return :a if after_bar == false
        true
      end
    end
    @f = Class.new(Foo)
  end

  after do
    Object.send(:remove_const, :Foo)
  end

  specify "should have before_bar and after_bar class methods" do
    @f.should respond_to(:before_bar)
    @f.should respond_to(:after_bar)
  end

  specify "should have before_bar and after_bar instance methods" do
    @f.new.should respond_to(:before_bar)
    @f.new.should respond_to(:after_bar)
  end

  specify "it should return true for bar when before_bar and after_bar hooks are returning true" do
    a = 1
    @f.before_bar { a += 1}
    @f.new.bar.should be_true
    a.should == 2
    @f.after_bar { a *= 2}
    @f.new.bar.should be_true
    a.should == 6
  end

  specify "it should return nil for bar when before_bar and after_bar hooks are returning false" do
    @f.new.bar.should be_true
    @f.after_bar { false }
    @f.new.bar.should == :a
    @f.before_bar { false }
    @f.new.bar.should == :b
  end
end
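
# A sketch of hook_class_methods against a mock database (the Order model and
# table are illustrative): hooks are declared via class methods with blocks,
# and before hooks run in reverse order of definition, as the specs assert.
require 'sequel'
log = []
Order = Class.new(Sequel::Model(Sequel.mock[:orders]))
Order.plugin :hook_class_methods
Order.before_save{log << 'first'}
Order.before_save{log << 'second'}
Order.new.before_save
log # => ['second', 'first']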

# ruby-sequel-4.1.1/spec/extensions/inflector_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper')

Sequel.extension :inflector

describe String do
  it "#camelize and #camelcase should transform the word to CamelCase" do
    "egg_and_hams".camelize.should == "EggAndHams"
    "egg_and_hams".camelize(false).should == "eggAndHams"
    "post".camelize.should == "Post"
    "post".camelcase.should == "Post"
  end

  it "#constantize should eval the string to get a constant" do
    "String".constantize.should == String
    "String::Inflections".constantize.should == String::Inflections
    proc{"BKSDDF".constantize}.should raise_error
    proc{"++A++".constantize}.should raise_error
  end

  it "#dasherize should transform underscores to dashes" do
    "egg_and_hams".dasherize.should == "egg-and-hams"
    "post".dasherize.should == "post"
  end

  it "#demodulize should remove any preceding modules" do
    "String::Inflections::Blah".demodulize.should == "Blah"
    "String::Inflections".demodulize.should == "Inflections"
    "String".demodulize.should == "String"
  end

  it "#humanize should remove _id, transform underscores to spaces, and capitalize" do
    "egg_and_hams".humanize.should == "Egg and hams"
    "post".humanize.should == "Post"
    "post_id".humanize.should == "Post"
  end

  it "#titleize and #titlecase should underscore, humanize, and capitalize all words" do
    "egg-and: hams".titleize.should == "Egg And: Hams"
    "post".titleize.should == "Post"
    "post".titlecase.should == "Post"
  end

  it "#underscore should add underscores between CamelCased words, change :: to / and - to _, and downcase" do
    "EggAndHams".underscore.should == "egg_and_hams"
    "EGGAndHams".underscore.should == "egg_and_hams"
    "Egg::And::Hams".underscore.should == "egg/and/hams"
    "post".underscore.should == "post"
    "post-id".underscore.should == "post_id"
  end

  it "#pluralize should transform words from singular to plural" do
    "post".pluralize.should == "posts"
    "octopus".pluralize.should == "octopuses"
    "the blue mailman".pluralize.should == "the blue mailmen"
    "CamelOctopus".pluralize.should == "CamelOctopuses"
  end

  it "#singularize should transform words from plural to singular" do
    "posts".singularize.should == "post"
    "octopuses".singularize.should == "octopus"
    "the blue mailmen".singularize.should == "the blue mailman"
    "CamelOctopuses".singularize.should == "CamelOctopus"
  end

  it "#tableize should transform class names to table names" do
    "RawScaledScorer".tableize.should == "raw_scaled_scorers"
    "egg_and_ham".tableize.should == "egg_and_hams"
    "fancyCategory".tableize.should == "fancy_categories"
  end

  it "#classify should transform table names to class names" do
    "egg_and_hams".classify.should == "EggAndHam"
    "post".classify.should == "Post"
  end

  it "#foreign_key should create a foreign key name from a class name" do
    "Message".foreign_key.should == "message_id"
    "Message".foreign_key(false).should == "messageid"
    "Admin::Post".foreign_key.should == "post_id"
  end
end

describe String::Inflections do
  before do
    @plurals, @singulars, @uncountables = String.inflections.plurals.dup, String.inflections.singulars.dup, String.inflections.uncountables.dup
  end

  after do
    String.inflections.plurals.replace(@plurals)
    String.inflections.singulars.replace(@singulars)
    String.inflections.uncountables.replace(@uncountables)
  end

  it "should be possible to clear the list of singulars, plurals, and uncountables" do
    String.inflections.clear(:plurals)
    String.inflections.plurals.should == []
    String.inflections.plural('blah', 'blahs')
    String.inflections.clear
    String.inflections.plurals.should == []
    String.inflections.singulars.should == []
    String.inflections.uncountables.should == []
  end

  it "should be able to specify new inflection rules" do
    String.inflections do |i|
      i.plural(/xx$/i, 'xxx')
      i.singular(/ttt$/i, 'tt')
      i.irregular('yy', 'yyy')
      i.uncountable(%w'zz')
    end
    'roxx'.pluralize.should == 'roxxx'
    'rottt'.singularize.should == 'rott'
    'yy'.pluralize.should == 'yyy'
    'yyy'.singularize.should == 'yy'
    'zz'.pluralize.should == 'zz'
    'zz'.singularize.should == 'zz'
  end

  it "should be yielded and returned by String.inflections" do
    String.inflections{|i| i.should == String::Inflections}.should == String::Inflections
  end
end

describe 'Default inflections' do
  it "should support the default inflection rules" do
    {
      :test=>:tests, :ax=>:axes, :testis=>:testes, :octopus=>:octopuses, :virus=>:viruses, :alias=>:aliases,
      :status=>:statuses, :bus=>:buses, :buffalo=>:buffaloes, :tomato=>:tomatoes, :datum=>:data, :bacterium=>:bacteria,
      :analysis=>:analyses, :basis=>:bases, :diagnosis=>:diagnoses, :parenthesis=>:parentheses, :prognosis=>:prognoses,
      :synopsis=>:synopses, :thesis=>:theses, :wife=>:wives, :giraffe=>:giraffes, :self=>:selves, :dwarf=>:dwarves,
      :hive=>:hives, :fly=>:flies, :buy=>:buys, :soliloquy=>:soliloquies, :day=>:days, :attorney=>:attorneys,
      :boy=>:boys, :hoax=>:hoaxes, :lunch=>:lunches, :princess=>:princesses, :matrix=>:matrices, :vertex=>:vertices,
      :index=>:indices, :mouse=>:mice, :louse=>:lice, :quiz=>:quizzes, :motive=>:motives, :movie=>:movies,
      :series=>:series, :crisis=>:crises, :person=>:people, :man=>:men, :woman=>:women, :child=>:children,
      :sex=>:sexes, :move=>:moves
    }.each do |k, v|
      k.to_s.pluralize.should == v.to_s
      v.to_s.singularize.should == k.to_s
    end
    [:equipment, :information, :rice, :money, :species, :series, :fish, :sheep, :news].each do |a|
      a.to_s.pluralize.should == a.to_s.singularize
    end
  end
end
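
# A sketch of the inflector extension, grounded in the assertions above: it
# adds inflection methods directly to String.
require 'sequel'
Sequel.extension :inflector
"egg_and_hams".camelize  # => "EggAndHams"
"post".pluralize         # => "posts"
"octopuses".singularize  # => "octopus"
"Message".foreign_key    # => "message_id"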

# ruby-sequel-4.1.1/spec/extensions/input_transformer_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")

describe "Sequel::Plugins::InputTransformer" do
  before do
    @c = Class.new(Sequel::Model)
    @c.columns :name, :b
    @c.plugin(:input_transformer, :reverser){|v| v.is_a?(String) ? v.reverse : v}
    @o = @c.new
  end

  it "should apply transformation to input" do
    @o.name = ' name '
    @o.name.should == ' eman '
    @o.name = [1, 2, 3]
    @o.name.should == [1, 2, 3]
  end

  it "should not apply any transformers by default" do
    c = Class.new(Sequel::Model)
    c.columns :name, :b
    c.plugin :input_transformer
    c.new(:name => ' name ').name.should == ' name '
  end

  it "should allow skipping of columns using .skip_input_transformer" do
    @c.skip_input_transformer :reverser, :name
    v = ' name '
    @o.name = v
    @o.name.should equal(v)
  end

  it "should work correctly in subclasses" do
    o = Class.new(@c).new
    o.name = ' name '
    o.name.should == ' eman '
  end

  it "should raise an error if adding input filter without name" do
    proc{@c.add_input_transformer(nil){}}.should raise_error(Sequel::Error)
    proc{@c.plugin(:input_transformer){}}.should raise_error(Sequel::Error)
  end

  it "should raise an error if adding input filter without block" do
    proc{@c.add_input_transformer(:foo)}.should raise_error(Sequel::Error)
    proc{@c.plugin(:input_transformer, :foo)}.should raise_error(Sequel::Error)
  end

  it "should apply multiple input transformers in reverse order of their call" do
    @c.add_input_transformer(:add_bar){|v| v << 'bar'}
    @c.add_input_transformer(:add_foo){|v| v << 'foo'}
    @o.name = ' name '
    @o.name.should == 'raboof eman '
  end
end
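
# A sketch of input_transformer using the :reverser transformer from the spec
# above; the Note model, table, and mock column list are illustrative. Every
# value passed to a setter flows through the named transformer block.
require 'sequel'
Note = Class.new(Sequel::Model(Sequel.mock(:columns=>[:id, :name])[:notes]))
Note.plugin(:input_transformer, :reverser){|v| v.is_a?(String) ? v.reverse : v}
n = Note.new
n.name = ' name '
n.name # => ' eman '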

# ruby-sequel-4.1.1/spec/extensions/instance_filters_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")

describe "instance_filters plugin" do
  before do
    @c = Class.new(Sequel::Model(:people))
    @c.dataset.quote_identifiers = false
    @c.columns :id, :name, :num
    @c.plugin :instance_filters
    @p = @c.load(:id=>1, :name=>'John', :num=>1)
    DB.sqls
  end

  specify "should raise an error when updating a stale record" do
    @p.update(:name=>'Bob')
    DB.sqls.should == ["UPDATE people SET name = 'Bob' WHERE (id = 1)"]
    @p.instance_filter(:name=>'Jim')
    @p.this.numrows = 0
    proc{@p.update(:name=>'Joe')}.should raise_error(Sequel::Plugins::InstanceFilters::Error)
    DB.sqls.should == ["UPDATE people SET name = 'Joe' WHERE ((id = 1) AND (name = 'Jim'))"]
  end

  specify "should raise an error when destroying a stale record" do
    @p.destroy
    DB.sqls.should == ["DELETE FROM people WHERE id = 1"]
    @p.instance_filter(:name=>'Jim')
    @p.this.numrows = 0
    proc{@p.destroy}.should raise_error(Sequel::Plugins::InstanceFilters::Error)
    DB.sqls.should == ["DELETE FROM people WHERE ((id = 1) AND (name = 'Jim'))"]
  end

  specify "should work when using the prepared_statements plugin" do
    @c.plugin :prepared_statements
    @p.update(:name=>'Bob')
    DB.sqls.should == ["UPDATE people SET name = 'Bob' WHERE (id = 1)"]
    @p.instance_filter(:name=>'Jim')
    @p.this.numrows = 0
    proc{@p.update(:name=>'Joe')}.should raise_error(Sequel::Plugins::InstanceFilters::Error)
    DB.sqls.should == ["UPDATE people SET name = 'Joe' WHERE ((id = 1) AND (name = 'Jim'))"]

    @p = @c.load(:id=>1, :name=>'John', :num=>1)
    @p.this.numrows = 1
    @p.destroy
    DB.sqls.should == ["DELETE FROM people WHERE (id = 1)"]
    @p.instance_filter(:name=>'Jim')
    @p.this.numrows = 0
    proc{@p.destroy}.should raise_error(Sequel::Plugins::InstanceFilters::Error)
    DB.sqls.should == ["DELETE FROM people WHERE ((id = 1) AND (name = 'Jim'))"]

    @c.create.should be_a_kind_of(@c)
  end

  specify "should apply all instance filters" do
    @p.instance_filter(:name=>'Jim')
    @p.instance_filter{num > 2}
    @p.update(:name=>'Bob')
    DB.sqls.should == ["UPDATE people SET name = 'Bob' WHERE ((id = 1) AND (name = 'Jim') AND (num > 2))"]
  end

  specify "should drop instance filters after updating" do
    @p.instance_filter(:name=>'Joe')
    @p.update(:name=>'Joe')
    DB.sqls.should == ["UPDATE people SET name = 'Joe' WHERE ((id = 1) AND (name = 'Joe'))"]
    @p.update(:name=>'Bob')
    DB.sqls.should == ["UPDATE people SET name = 'Bob' WHERE (id = 1)"]
  end

  specify "shouldn't allow instance filters on frozen objects" do
    @p.instance_filter(:name=>'Joe')
    @p.freeze
    proc{@p.instance_filter(:name=>'Jim')}.should raise_error
  end
end
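
# A sketch of instance_filters, following the spec above (the Person model,
# table, and mock setup are illustrative): per-instance conditions are ANDed
# into this instance's UPDATE/DELETE, and a non-matching statement raises
# Sequel::Plugins::InstanceFilters::Error.
require 'sequel'
Person = Class.new(Sequel::Model(Sequel.mock(:numrows=>1, :columns=>[:id, :name])[:people]))
Person.dataset.quote_identifiers = false
Person.plugin :instance_filters
person = Person.load(:id=>1, :name=>'John')
person.instance_filter(:name=>'John')
person.update(:name=>'Bob')
# UPDATE people SET name = 'Bob' WHERE ((id = 1) AND (name = 'John'))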

# ruby-sequel-4.1.1/spec/extensions/instance_hooks_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")

describe "InstanceHooks plugin" do
  def r(x)
    @r << x
    x
  end

  before do
    @c = Class.new(Sequel::Model(:items))
    @c.plugin :instance_hooks
    @c.raise_on_save_failure = false
    @o = @c.new
    @x = @c.load({:id=>1})
    @r = []
  end

  it "should support before_create_hook and after_create_hook" do
    @o.after_create_hook{r 1}
    @o.before_create_hook{r 2}
    @o.after_create_hook{r 3}
    @o.before_create_hook{r 4}
    @o.save.should_not == nil
    @r.should == [4, 2, 1, 3]
  end

  it "should cancel the save if before_create_hook block returns false" do
    @o.after_create_hook{r 1}
    @o.before_create_hook{r false}
    @o.before_create_hook{r 4}
    @o.save.should == nil
    @r.should == [4, false]
    @r.clear
    @o.save.should == nil
    @r.should == [4, false]
  end

  it "should support before_update_hook and after_update_hook" do
    @x.after_update_hook{r 1}
    @x.before_update_hook{r 2}
    @x.after_update_hook{r 3}
    @x.before_update_hook{r 4}
    @x.save.should_not == nil
    @r.should == [4, 2, 1, 3]
    @x.save.should_not == nil
    @r.should == [4, 2, 1, 3]
  end

  it "should cancel the save if before_update_hook block returns false" do
    @x.after_update_hook{r 1}
    @x.before_update_hook{r false}
    @x.before_update_hook{r 4}
    @x.save.should == nil
    @r.should == [4, false]
    @r.clear
    @x.save.should == nil
    @r.should == [4, false]
  end

  it "should support before_save_hook and after_save_hook" do
    @o.after_save_hook{r 1}
    @o.before_save_hook{r 2}
    @o.after_save_hook{r 3}
    @o.before_save_hook{r 4}
    @o.save.should_not == nil
    @r.should == [4, 2, 1, 3]
    @r.clear

    @x.after_save_hook{r 1}
    @x.before_save_hook{r 2}
    @x.after_save_hook{r 3}
    @x.before_save_hook{r 4}
    @x.save.should_not == nil
    @r.should == [4, 2, 1, 3]
    @x.save.should_not == nil
    @r.should == [4, 2, 1, 3]
  end

  it "should cancel the save if before_save_hook block returns false" do
    @x.after_save_hook{r 1}
    @x.before_save_hook{r false}
    @x.before_save_hook{r 4}
    @x.save.should == nil
    @r.should == [4, false]
    @r.clear

    @x.after_save_hook{r 1}
    @x.before_save_hook{r false}
    @x.before_save_hook{r 4}
    @x.save.should == nil
    @r.should == [4, false]
    @r.clear
    @x.save.should == nil
    @r.should == [4, false]
  end

  it "should support before_destroy_hook and after_destroy_hook" do
    @x.after_destroy_hook{r 1}
    @x.before_destroy_hook{r 2}
    @x.after_destroy_hook{r 3}
    @x.before_destroy_hook{r 4}
    @x.destroy.should_not == nil
    @r.should == [4, 2, 1, 3]
  end

  it "should cancel the destroy if before_destroy_hook block returns false" do
    @x.after_destroy_hook{r 1}
    @x.before_destroy_hook{r false}
    @x.before_destroy_hook{r 4}
    @x.destroy.should == nil
    @r.should == [4, false]
  end

  it "should support before_validation_hook and after_validation_hook" do
    @o.after_validation_hook{r 1}
    @o.before_validation_hook{r 2}
    @o.after_validation_hook{r 3}
    @o.before_validation_hook{r 4}
    @o.valid?.should == true
    @r.should == [4, 2, 1, 3]
  end

  it "should cancel the save if before_validation_hook block returns false" do
    @o.after_validation_hook{r 1}
    @o.before_validation_hook{r false}
    @o.before_validation_hook{r 4}
    @o.valid?.should == false
    @r.should == [4, false]
    @r.clear
    @o.valid?.should == false
    @r.should == [4, false]
  end

  it "should clear only related hooks on successful create" do
    @o.after_destroy_hook{r 1}
    @o.before_destroy_hook{r 2}
    @o.after_update_hook{r 3}
    @o.before_update_hook{r 4}
    @o.before_save_hook{r 5}
    @o.after_save_hook{r 6}
    @o.before_create_hook{r 7}
    @o.after_create_hook{r 8}
    @o.save.should_not == nil
    @r.should == [5, 7, 8, 6]
    @o.instance_variable_set(:@new, false)
    @o.save.should_not == nil
    @r.should == [5, 7, 8, 6, 4, 3]
    @o.save.should_not == nil
    @r.should == [5, 7, 8, 6, 4, 3]
    @o.destroy
    @r.should == [5, 7, 8, 6, 4, 3, 2, 1]
  end

  it "should clear only related hooks on successful update" do
    @x.after_destroy_hook{r 1}
    @x.before_destroy_hook{r 2}
    @x.before_update_hook{r 3}
    @x.after_update_hook{r 4}
    @x.before_save_hook{r 5}
    @x.after_save_hook{r 6}
    @x.save.should_not == nil
    @r.should == [5, 3, 4, 6]
    @x.save.should_not == nil
    @r.should == [5, 3, 4, 6]
    @x.destroy
    @r.should == [5, 3, 4, 6, 2, 1]
  end

  it "should clear only related hooks on successful destroy" do
    @x.after_destroy_hook{r 1}
    @x.before_destroy_hook{r 2}
    @x.before_update_hook{r 3}
    @x.before_save_hook{r 4}
    @x.destroy
    @r.should == [2, 1]
    @x.save.should_not == nil
    @r.should == [2, 1, 4, 3]
  end

  it "should not allow addition of instance hooks to frozen instances" do
    @x.after_destroy_hook{r 1}
    @x.before_destroy_hook{r 2}
    @x.before_update_hook{r 3}
    @x.before_save_hook{r 4}
    @x.freeze
    proc{@x.after_destroy_hook{r 1}}.should raise_error(Sequel::Error)
    proc{@x.before_destroy_hook{r 2}}.should raise_error(Sequel::Error)
    proc{@x.before_update_hook{r 3}}.should raise_error(Sequel::Error)
    proc{@x.before_save_hook{r 4}}.should raise_error(Sequel::Error)
  end
end

describe "InstanceHooks plugin with transactions" do
  before do
    @db = Sequel.mock(:numrows=>1)
    @c = Class.new(Sequel::Model(@db[:items])) do
      attr_accessor :rb

      def after_save
        db.execute('as')
        raise Sequel::Rollback if rb
      end

      def after_destroy
        db.execute('ad')
        raise Sequel::Rollback if rb
      end
    end
    @c.use_transactions = true
    @c.plugin :instance_hooks
    @o = @c.load({:id=>1})
    @or = @c.load({:id=>1})
    @or.rb = true
    @r = []
    @db.sqls
  end

  it "should support after_commit_hook" do
    @o.after_commit_hook{@db.execute('ac1')}
    @o.after_commit_hook{@db.execute('ac2')}
    @o.save.should_not be_nil
    @db.sqls.should == ['BEGIN', 'as', 'COMMIT', 'ac1', 'ac2']
  end

  it "should support after_rollback_hook" do
    @or.after_rollback_hook{@db.execute('ar1')}
    @or.after_rollback_hook{@db.execute('ar2')}
    @or.save.should be_nil
    @db.sqls.should == ['BEGIN', 'as', 'ROLLBACK', 'ar1', 'ar2']
  end

  it "should support after_destroy_commit_hook" do
    @o.after_destroy_commit_hook{@db.execute('adc1')}
    @o.after_destroy_commit_hook{@db.execute('adc2')}
    @o.destroy.should_not be_nil
    @db.sqls.should == ['BEGIN', "DELETE FROM items WHERE (id = 1)", 'ad', 'COMMIT', 'adc1', 'adc2']
  end

  it "should support after_destroy_rollback_hook" do
    @or.after_destroy_rollback_hook{@db.execute('adr1')}
    @or.after_destroy_rollback_hook{@db.execute('adr2')}
    @or.destroy.should be_nil
    @db.sqls.should == ['BEGIN', "DELETE FROM items WHERE (id = 1)", 'ad', 'ROLLBACK', 'adr1', 'adr2']
  end

  it "should have *_hook methods return self" do
    @o.before_destroy_hook{r 1}.should equal(@o)
    @o.before_validation_hook{r 1}.should equal(@o)
    @o.before_save_hook{r 1}.should equal(@o)
    @o.before_update_hook{r 1}.should equal(@o)
    @o.before_create_hook{r 1}.should equal(@o)
    @o.after_destroy_hook{r 1}.should equal(@o)
    @o.after_validation_hook{r 1}.should equal(@o)
    @o.after_save_hook{r 1}.should equal(@o)
    @o.after_update_hook{r 1}.should equal(@o)
    @o.after_create_hook{r 1}.should equal(@o)
    @o.after_commit_hook{r 1}.should equal(@o)
    @o.after_rollback_hook{r 1}.should equal(@o)
    @o.after_destroy_commit_hook{r 1}.should equal(@o)
    @o.after_destroy_rollback_hook{r 1}.should equal(@o)
  end
end
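
# A hedged sketch of instance_hooks (the Task model, table, and mock options
# are illustrative assumptions): hooks attach to a single instance rather than
# the class, and run around the next matching action.
require 'sequel'
Task = Class.new(Sequel::Model(Sequel.mock(:autoid=>1)[:tasks]))
Task.plugin :instance_hooks
events = []
t = Task.new
t.before_create_hook{events << :before}
t.after_create_hook{events << :after}
t.save     # behavior with a bare mock may vary; the spec uses DB from spec_helper
events     # => [:before, :after]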
# ruby-sequel-4.1.1/spec/extensions/json_serializer_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")

describe "Sequel::Plugins::JsonSerializer" do
  before do
    class ::Artist < Sequel::Model
      unrestrict_primary_key
      plugin :json_serializer, :naked=>true
      columns :id, :name
      def_column_accessor :id, :name
      @db_schema = {:id=>{:type=>:integer}}
      one_to_many :albums
    end
    class ::Album < Sequel::Model
      unrestrict_primary_key
      attr_accessor :blah
      plugin :json_serializer, :naked=>true
      columns :id, :name, :artist_id
      def_column_accessor :id, :name, :artist_id
      many_to_one :artist
    end
    @artist = Artist.load(:id=>2, :name=>'YJM')
    @artist.associations[:albums] = []
    @album = Album.load(:id=>1, :name=>'RF')
    @album.artist = @artist
    @album.blah = 'Blah'
  end

  after do
    Object.send(:remove_const, :Artist)
    Object.send(:remove_const, :Album)
  end

  it "should round trip successfully" do
    Artist.from_json(@artist.to_json).should == @artist
    Album.from_json(@album.to_json).should == @album
  end

  it "should handle ruby objects in values" do
    class ::Artist
      def name=(v)
        super(Date.parse(v))
      end
    end
    Artist.from_json(Artist.load(:name=>Date.today).to_json).should == Artist.load(:name=>Date.today)
  end

  it "should handle the :only option" do
    Artist.from_json(@artist.to_json(:only=>:name)).should == Artist.load(:name=>@artist.name)
    Album.from_json(@album.to_json(:only=>[:id, :name])).should == Album.load(:id=>@album.id, :name=>@album.name)
  end

  it "should handle the :except option" do
    Artist.from_json(@artist.to_json(:except=>:id)).should == Artist.load(:name=>@artist.name)
    Album.from_json(@album.to_json(:except=>[:id, :artist_id])).should == Album.load(:name=>@album.name)
  end

  it "should handle the :include option for associations" do
    Artist.from_json(@artist.to_json(:include=>:albums), :associations=>:albums).albums.should == [@album]
    Album.from_json(@album.to_json(:include=>:artist), :associations=>:artist).artist.should == @artist
  end

  it "should raise an error if attempting to parse json when providing array to non-array association or vice-versa" do
    proc{Artist.from_json('{"albums":{"id":1,"name":"RF","artist_id":2,"json_class":"Album"},"id":2,"name":"YJM"}', :associations=>:albums)}.should raise_error(Sequel::Error)
    proc{Album.from_json('{"artist":[{"id":2,"name":"YJM","json_class":"Artist"}],"id":1,"name":"RF","artist_id":2}', :associations=>:artist)}.should raise_error(Sequel::Error)
  end

  it "should raise an error if attempting to parse an array containing non-hashes" do
    proc{Artist.from_json('[{"id":2,"name":"YJM","json_class":"Artist"}, 2]')}.should raise_error(Sequel::Error)
  end

  it "should raise an error if attempting to parse invalid JSON" do
    begin
      Sequel.instance_eval do
        alias pj parse_json
        def parse_json(v)
          v
        end
      end
      proc{Album.from_json('1')}.should raise_error(Sequel::Error)
    ensure
      Sequel.instance_eval do
        alias parse_json pj
      end
    end
  end

  it "should handle case where Sequel.parse_json already returns an instance" do
    begin
      Sequel.instance_eval do
        alias pj parse_json
        def parse_json(v)
          Album.load(:id=>3)
        end
      end
      ::Album.from_json('1').should == Album.load(:id=>3)
    ensure
      Sequel.instance_eval do
        alias parse_json pj
      end
    end
  end

  it "should handle the :include option for arbitrary attributes" do
    Album.from_json(@album.to_json(:include=>:blah)).blah.should == @album.blah
  end

  it "should handle multiple inclusions using an array for the :include option" do
    a = Album.from_json(@album.to_json(:include=>[:blah, :artist]), :associations=>:artist)
    a.blah.should == @album.blah
    a.artist.should == @artist
  end

  it "should handle cascading using a hash for the :include option" do
    Artist.from_json(@artist.to_json(:include=>{:albums=>{:include=>:artist}}), :associations=>{:albums=>{:associations=>:artist}}).albums.map{|a| a.artist}.should == [@artist]
    Album.from_json(@album.to_json(:include=>{:artist=>{:include=>:albums}}), :associations=>{:artist=>{:associations=>:albums}}).artist.albums.should == [@album]
    Artist.from_json(@artist.to_json(:include=>{:albums=>{:only=>:name}}), :associations=>[:albums]).albums.should == [Album.load(:name=>@album.name)]
    Album.from_json(@album.to_json(:include=>{:artist=>{:except=>:name}}), :associations=>[:artist]).artist.should == Artist.load(:id=>@artist.id)
    Artist.from_json(@artist.to_json(:include=>{:albums=>{:include=>{:artist=>{:include=>:albums}}}}), :associations=>{:albums=>{:associations=>{:artist=>{:associations=>:albums}}}}).albums.map{|a| a.artist.albums}.should == [[@album]]
    Album.from_json(@album.to_json(:include=>{:artist=>{:include=>{:albums=>{:only=>:name}}}}), :associations=>{:artist=>{:associations=>:albums}}).artist.albums.should == [Album.load(:name=>@album.name)]
  end

  it "should handle the :include option cascading with an empty hash" do
    Album.from_json(@album.to_json(:include=>{:artist=>{}}), :associations=>:artist).artist.should == @artist
    Album.from_json(@album.to_json(:include=>{:blah=>{}})).blah.should == @album.blah
  end

  it "should accept a :naked option to not include the JSON.create_id, so parsing yields a plain hash" do
    Sequel.parse_json(@album.to_json(:naked=>true)).should == @album.values.inject({}){|h, (k, v)| h[k.to_s] = v; h}
  end

  it "should support #from_json to set column values" do
    @artist.from_json('{"name": "AS"}')
    @artist.name.should == 'AS'
    @artist.id.should == 2
  end

  it "should support #from_json to support specific :fields" do
    @album.from_json('{"name": "AS", "artist_id": 3}', :fields=>['name'])
    @album.name.should == 'AS'
    @album.artist_id.should == 2
  end

  it "should support #from_json to support :missing=>:skip option" do
    @album.from_json('{"artist_id": 3}', :fields=>['name'], :missing=>:skip)
    @album.name.should == 'RF'
    @album.artist_id.should == 2
  end

  it "should support #from_json to support :missing=>:raise option" do
    proc{@album.from_json('{"artist_id": 3}', :fields=>['name'], :missing=>:raise)}.should raise_error(Sequel::Error)
  end

  it "should have #from_json raise an error if parsed json isn't a hash" do
    proc{@artist.from_json('[]')}.should raise_error(Sequel::Error)
  end

  it "should raise an exception for json keys that aren't associations, columns, or setter methods" do
    Album.send(:undef_method, :blah=)
    proc{Album.from_json(@album.to_json(:include=>:blah))}.should raise_error(Sequel::Error)
  end

  it "should support a to_json class and dataset method" do
    Album.dataset._fetch = {:id=>1, :name=>'RF', :artist_id=>2}
    Artist.dataset._fetch = {:id=>2, :name=>'YJM'}
    Album.array_from_json(Album.to_json).should == [@album]
    Album.array_from_json(Album.to_json(:include=>:artist), :associations=>:artist).map{|x| x.artist}.should == [@artist]
    Album.array_from_json(Album.dataset.to_json(:only=>:name)).should == [Album.load(:name=>@album.name)]
  end

  it "should have dataset to_json method work with naked datasets" do
    ds = Album.dataset.naked
    ds._fetch = {:id=>1, :name=>'RF', :artist_id=>2}
    Sequel.parse_json(ds.to_json).should == [@album.values.inject({}){|h, (k, v)| h[k.to_s] = v; h}]
  end

  it "should have dataset to_json method respect :array option for the array to use" do
    a = Album.new(:name=>'RF', :artist_id=>3)
    Album.array_from_json(Album.to_json(:array=>[a])).should == [a]

    a.associations[:artist] = artist = Artist.load(:id=>3, :name=>'YJM')
    Album.array_from_json(Album.to_json(:array=>[a], :include=>:artist), :associations=>:artist).first.artist.should == artist

    artist.associations[:albums] = [a]
    x = Artist.array_from_json(Artist.to_json(:array=>[artist], :include=>:albums), :associations=>[:albums])
    x.should == [artist]
    x.first.albums.should == [a]
  end

  it "should propagate class default options to instance to_json output" do
    class ::Album2 < Sequel::Model
      attr_accessor :blah
      plugin :json_serializer, :naked => true, :except => :id
      columns :id, :name, :artist_id
      many_to_one :artist
    end
    @album2 = Album2.load(:id=>2, :name=>'JK')
    @album2.artist = @artist
    @album2.blah = 'Gak'

    JSON.parse(@album2.to_json).should == @album2.values.reject{|k,v| k.to_s == 'id'}.inject({}){|h, (k, v)| h[k.to_s] = v; h}
    JSON.parse(@album2.to_json(:only => :name)).should == @album2.values.reject{|k,v| k.to_s != 'name'}.inject({}){|h, (k, v)| h[k.to_s] = v; h}
    JSON.parse(@album2.to_json(:except => :artist_id)).should == @album2.values.reject{|k,v| k.to_s == 'artist_id'}.inject({}){|h, (k, v)| h[k.to_s] = v; h}
  end

  it "should handle the :root option to qualify single records" do
    @album.to_json(:root=>true, :except => [:name, :artist_id]).to_s.should == '{"album":{"id":1}}'
    @album.to_json(:root=>true, :only => :name).to_s.should == '{"album":{"name":"RF"}}'
  end

  it "should handle the :root=>:both option to qualify a dataset of records" do
    Album.dataset._fetch = [{:id=>1, :name=>'RF'}, {:id=>1, :name=>'RF'}]
    Album.dataset.to_json(:root=>:both, :only => :id).to_s.should == '{"albums":[{"album":{"id":1}},{"album":{"id":1}}]}'
  end

  it "should handle the :root=>:collection option to qualify just the collection" do
    Album.dataset._fetch = [{:id=>1, :name=>'RF'}, {:id=>1, :name=>'RF'}]
    Album.dataset.to_json(:root=>:collection, :only => :id).to_s.should == '{"albums":[{"id":1},{"id":1}]}'
    Album.dataset.to_json(:root=>true, :only => :id).to_s.should == '{"albums":[{"id":1},{"id":1}]}'
  end

  it "should handle the :root=>:instance option to qualify just the instances" do
    Album.dataset._fetch = [{:id=>1, :name=>'RF'}, {:id=>1, :name=>'RF'}]
    Album.dataset.to_json(:root=>:instance, :only => :id).to_s.should == '[{"album":{"id":1}},{"album":{"id":1}}]'
  end

  it "should store the default options in json_serializer_opts" do
    Album.json_serializer_opts.should == {:naked=>true}
    c = Class.new(Album)
    c.plugin :json_serializer, :naked=>false
    c.json_serializer_opts.should == {:naked=>false}
  end
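
  # A hedged usage sketch of json_serializer, mirroring the spec's Artist
  # setup against a mock database (model, table, and mock columns are
  # illustrative); :naked=>true omits the JSON.create_id and from_json
  # round-trips an instance:
  #
  #   db = Sequel.mock(:columns=>[:id, :name])
  #   SketchArtist = Class.new(Sequel::Model(db[:artists]))
  #   SketchArtist.unrestrict_primary_key
  #   SketchArtist.plugin :json_serializer, :naked=>true
  #   a = SketchArtist.load(:id=>2, :name=>'YJM')
  #   a.to_json(:only=>:name)            # => '{"name":"YJM"}'
  #   SketchArtist.from_json(a.to_json) == a  # => true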
  it "should work correctly when subclassing" do
    class ::Artist2 < Artist
      plugin :json_serializer, :only=>:name
    end
    Artist2.from_json(Artist2.load(:id=>2, :name=>'YYY').to_json).should == Artist2.load(:name=>'YYY')
    class ::Artist3 < Artist2
      plugin :json_serializer, :naked=>:true
    end
    Sequel.parse_json(Artist3.load(:id=>2, :name=>'YYY').to_json).should == {"name"=>'YYY'}
    Object.send(:remove_const, :Artist2)
    Object.send(:remove_const, :Artist3)
  end

  it "should raise an error if attempting to set a restricted column and :all_columns is not used" do
    Artist.restrict_primary_key
    proc{Artist.from_json(@artist.to_json)}.should raise_error(Sequel::Error)
  end

  it "should raise an error if an unsupported association is passed in the :associations option" do
    Artist.association_reflections.delete(:albums)
    proc{Artist.from_json(@artist.to_json(:include=>:albums), :associations=>:albums)}.should raise_error(Sequel::Error)
  end

  it "should raise an error if using from_json and JSON parsing returns an array" do
    proc{Artist.from_json([@artist].to_json)}.should raise_error(Sequel::Error)
  end

  it "should raise an error if using array_from_json and JSON parsing does not return an array" do
    proc{Artist.array_from_json(@artist.to_json)}.should raise_error(Sequel::Error)
  end

  it "should raise an error if using an unsupported :associations option" do
    proc{Artist.from_json(@artist.to_json, :associations=>'')}.should raise_error(Sequel::Error)
  end
end

# ruby-sequel-4.1.1/spec/extensions/lazy_attributes_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")
require 'yaml'

describe "Sequel::Plugins::LazyAttributes" do
  before do
    @db = Sequel.mock
    def @db.supports_schema_parsing?() true end
    @db.meta_def(:schema){|*a| [[:id, {:type=>:integer}], [:name,{:type=>:string}]]}
    class ::LazyAttributesModel < Sequel::Model(@db[:la])
      plugin :lazy_attributes
      set_columns([:id, :name])
      meta_def(:columns){[:id, :name]}
      lazy_attributes :name
      meta_def(:columns){[:id]}
      instance_dataset._fetch = dataset._fetch = proc do |sql|
        if sql !~ /WHERE/
          if sql =~ /name/
            [{:id=>1, :name=>'1'}, {:id=>2, :name=>'2'}]
          else
            [{:id=>1}, {:id=>2}]
          end
        else
          if sql =~ /id IN \(([\d, ]+)\)/
            $1.split(', ')
          elsif sql =~ /id = (\d)/
            [$1]
          end.map do |x|
            if sql =~ /SELECT name FROM/
              {:name=>x.to_s}
            else
              {:id=>x.to_i, :name=>x.to_s}
            end
          end
        end
      end
    end
    @c = ::LazyAttributesModel
    @ds = LazyAttributesModel.dataset
    @db.sqls
  end

  after do
    Object.send(:remove_const, :LazyAttributesModel)
  end

  it "should allow adding additional lazy attributes via plugin :lazy_attributes" do
    @c.set_dataset(@ds.select(:id, :blah))
    @c.dataset.sql.should == 'SELECT id, blah FROM la'
    @c.plugin :lazy_attributes, :blah
    @c.dataset.opts[:select].should == [:id]
    @c.dataset.sql.should == 'SELECT id FROM la'
  end

  it "should allow adding additional lazy attributes via lazy_attributes" do
    @c.set_dataset(@ds.select(:id, :blah))
    @c.dataset.sql.should == 'SELECT id, blah FROM la'
    @c.lazy_attributes :blah
    @c.dataset.opts[:select].should == [:id]
    @c.dataset.sql.should == 'SELECT id FROM la'
  end

  it "should remove the attributes given from the SELECT columns of the model's dataset" do
    @ds.opts[:select].should == [:id]
    @ds.sql.should == 'SELECT id FROM la'
  end

  it "should still typecast correctly in lazy loaded column setters" do
    m = @c.new
    m.name = 1
    m.name.should == '1'
  end

  it "should raise error if the model has no primary key" do
    m = @c.first
    @c.no_primary_key
    proc{m.name}.should raise_error(Sequel::Error)
  end

  it "should lazily load the attribute for a single model object" do
    m = @c.first
    m.values.should == {:id=>1}
    m.name.should == '1'
    m.values.should == {:id=>1, :name=>'1'}
    @db.sqls.should == ['SELECT id FROM la LIMIT 1', 'SELECT name FROM la WHERE (id = 1) LIMIT 1']
  end

  it "should lazily load the attribute for a frozen model object" do
    m = @c.first
    m.freeze
    m.name.should == '1'
    @db.sqls.should == ['SELECT id FROM la LIMIT 1', 'SELECT name FROM la WHERE (id = 1) LIMIT 1']
    m.name.should == '1'
    @db.sqls.should == ['SELECT name FROM la WHERE (id = 1) LIMIT 1']
  end

  it "should not lazily load the attribute for a single model object if the value already exists" do
    m = @c.first
    m.values.should == {:id=>1}
    m[:name] = '1'
    m.name.should == '1'
    m.values.should == {:id=>1, :name=>'1'}
    @db.sqls.should == ['SELECT id FROM la LIMIT 1']
  end

  it "should not lazily load the attribute for a single model object if it is a new record" do
    m = @c.new
    m.values.should == {}
    m.name.should == nil
    @db.sqls.should == []
  end

  it "should eagerly load the attribute for all model objects retrieved with it" do
    ms = @c.all
    ms.map{|m| m.values}.should == [{:id=>1}, {:id=>2}]
    ms.map{|m| m.name}.should == %w'1 2'
    ms.map{|m| m.values}.should == [{:id=>1, :name=>'1'}, {:id=>2, :name=>'2'}]
    sqls = @db.sqls
    ['SELECT id, name FROM la WHERE (id IN (1, 2))', 'SELECT id, name FROM la WHERE (id IN (2, 1))'].should include(sqls.pop)
    sqls.should == ['SELECT id FROM la']
  end

  it "should not eagerly load the attribute if model instance is frozen, and deal with other frozen instances if not frozen" do
    ms = @c.all
    ms.first.freeze
    ms.map{|m| m.name}.should == %w'1 2'
    @db.sqls.should == ['SELECT id FROM la', 'SELECT name FROM la WHERE (id = 1) LIMIT 1', 'SELECT id, name FROM la WHERE (id IN (2))']
  end

  it "should add the accessors to a module included in the class, so they can be easily overridden" do
    @c.class_eval do
      def name
        "#{super}-blah"
      end
    end
    ms = @c.all
    ms.map{|m| m.values}.should == [{:id=>1}, {:id=>2}]
    ms.map{|m| m.name}.should == %w'1-blah 2-blah'
    ms.map{|m| m.values}.should == [{:id=>1, :name=>'1'}, {:id=>2, :name=>'2'}]
    sqls = @db.sqls
    ['SELECT id, name FROM la WHERE (id IN (1, 2))', 'SELECT id, name FROM la WHERE (id IN (2, 1))'].should include(sqls.pop)
    sqls.should == ['SELECT id FROM la']
  end

  it "should work with the serialization plugin" do
    @c.plugin :serialization, :yaml, :name
    @c.instance_dataset._fetch = @ds._fetch = [[{:id=>1}, {:id=>2}], [{:id=>1, :name=>"--- 3\n"}, {:id=>2, :name=>"--- 6\n"}], [{:id=>1}], [{:name=>"--- 3\n"}]]
    ms = @ds.all
    ms.map{|m| m.values}.should == [{:id=>1}, {:id=>2}]
    ms.map{|m| m.name}.should == [3,6]
    ms.map{|m| m.values}.should == [{:id=>1, :name=>"--- 3\n"}, {:id=>2, :name=>"--- 6\n"}]
    ms.map{|m| m.deserialized_values}.should == [{:name=>3}, {:name=>6}]
    ms.map{|m| m.name}.should == [3,6]
    sqls = @db.sqls
    ['SELECT id, name FROM la WHERE (id IN (1, 2))', 'SELECT id, name FROM la WHERE (id IN (2, 1))'].should include(sqls.pop)
    sqls.should == ['SELECT id FROM la']

    m = @ds.first
    m.values.should == {:id=>1}
    m.name.should == 3
    m.values.should == {:id=>1, :name=>"--- 3\n"}
    m.deserialized_values.should == {:name=>3}
    m.name.should == 3
    @db.sqls.should == ["SELECT id FROM la LIMIT 1", "SELECT name FROM la WHERE (id = 1) LIMIT 1"]
  end
end
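
# A hedged sketch of lazy_attributes (the Doc model, table, and mock column
# list are illustrative stand-ins for a parsed schema): the named column is
# dropped from the default SELECT and fetched on first access by primary key.
require 'sequel'
db = Sequel.mock(:columns=>[:id, :body])
Doc = Class.new(Sequel::Model(db[:docs]))
Doc.plugin :lazy_attributes, :body
Doc.dataset.sql # => "SELECT id FROM docs"
# Doc.first.body would then issue: SELECT body FROM docs WHERE (id = ...) LIMIT 1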
2))', 'SELECT id, name FROM la WHERE (id IN (2, 1))'].should include(sqls.pop) sqls.should == ['SELECT id FROM la'] m = @ds.first m.values.should == {:id=>1} m.name.should == 3 m.values.should == {:id=>1, :name=>"--- 3\n"} m.deserialized_values.should == {:name=>3} m.name.should == 3 @db.sqls.should == ["SELECT id FROM la LIMIT 1", "SELECT name FROM la WHERE (id = 1) LIMIT 1"] end end �������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/spec/extensions/list_spec.rb������������������������������������������������������0000664�0000000�0000000�00000027630�12201565355�0021451�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper') describe "List plugin" do def klass(opts={}) @db = DB c = Class.new(Sequel::Model(@db[:items])) c.class_eval do columns :id, :position, :scope_id, :pos plugin :list, opts self.use_transactions = false end c end before do @c = klass @o = @c.load(:id=>7, :position=>3) @sc = klass(:scope=>:scope_id) @so = @sc.load(:id=>7, :position=>3, :scope_id=>5) @db.reset end it "should default to using :position as the position field" do @c.position_field.should == :position @c.new.list_dataset.sql.should == 'SELECT * FROM items ORDER BY position' end it "should accept a :field option to modify the position field" do klass(:field=>:pos).position_field.should == :pos end it "should accept a :scope option with a symbol for a single scope column" do @sc.new(:scope_id=>4).list_dataset.sql.should == 'SELECT * FROM items WHERE (scope_id = 4) ORDER BY scope_id, position' end it "should accept a :scope option with an array of symbols for multiple scope columns" do ['SELECT * FROM items WHERE ((scope_id = 4) AND (pos = 3)) ORDER BY scope_id, pos, position', 'SELECT * FROM items WHERE ((pos = 3) AND (scope_id = 4)) ORDER BY scope_id, pos, position']. 
should include(klass(:scope=>[:scope_id, :pos]).new(:scope_id=>4, :pos=>3).list_dataset.sql) end it "should accept a :scope option with a proc for a custom list scope" do klass(:scope=>proc{|o| o.model.dataset.filter(:active).filter(:scope_id=>o.scope_id)}).new(:scope_id=>4).list_dataset.sql.should == 'SELECT * FROM items WHERE (active AND (scope_id = 4)) ORDER BY position' end it "should modify the order when using the plugin" do c = Class.new(Sequel::Model(:items)) c.dataset.sql.should == 'SELECT * FROM items' c.plugin :list c.dataset.sql.should == 'SELECT * FROM items ORDER BY position' end it "should be able to access the position field as a class attribute" do @c.position_field.should == :position klass(:field=>:pos).position_field.should == :pos end it "should be able to access the scope proc as a class attribute" do @c.scope_proc.should == nil @sc.scope_proc[@sc.new(:scope_id=>4)].sql.should == 'SELECT * FROM items WHERE (scope_id = 4) ORDER BY scope_id, position' end it "should work correctly in subclasses" do c = Class.new(klass(:scope=>:scope_id)) c.position_field.should == :position c.scope_proc[c.new(:scope_id=>4)].sql.should == 'SELECT * FROM items WHERE (scope_id = 4) ORDER BY scope_id, position' end it "should have at_position return the model object at the given position" do @c.dataset._fetch = {:id=>1, :position=>1} @o.at_position(10).should == @c.load(:id=>1, :position=>1) @sc.dataset._fetch = {:id=>2, :position=>2, :scope_id=>5} @so.at_position(20).should == @sc.load(:id=>2, :position=>2, :scope_id=>5) @db.sqls.should == ["SELECT * FROM items WHERE (position = 10) ORDER BY position LIMIT 1", "SELECT * FROM items WHERE ((scope_id = 5) AND (position = 20)) ORDER BY scope_id, position LIMIT 1"] end it "should have position field set to max+1 when creating if not already set" do @c.instance_dataset._fetch = @c.dataset._fetch = [[{:pos=>nil}], [{:id=>1, :position=>1}], [{:pos=>1}], [{:id=>2, :position=>2}]] @c.instance_dataset.autoid = @c.dataset.autoid = 1 @c.create.values.should == {:id=>1, :position=>1} @c.create.values.should == {:id=>2, :position=>2} @db.sqls.should == ["SELECT max(position) AS max FROM items LIMIT 1", "INSERT INTO items (position) VALUES (1)", "SELECT * FROM items WHERE (id = 1) ORDER BY position LIMIT 1", "SELECT max(position) AS max FROM items LIMIT 1", "INSERT INTO items (position) VALUES (2)", "SELECT * FROM items WHERE (id = 2) ORDER BY position LIMIT 1"] end it "should have position field set to max+1 in scope when creating if not already set" do @sc.instance_dataset._fetch = @sc.dataset._fetch = [[{:pos=>nil}], [{:id=>1, :scope_id=>1, :position=>1}], [{:pos=>1}], [{:id=>2, :scope_id=>1, :position=>2}], [{:pos=>nil}], [{:id=>3, :scope_id=>2, :position=>1}]] @sc.instance_dataset.autoid = @sc.dataset.autoid = 1 @sc.create(:scope_id=>1).values.should == {:id=>1, :scope_id=>1, :position=>1} @sc.create(:scope_id=>1).values.should == {:id=>2, :scope_id=>1, :position=>2} @sc.create(:scope_id=>2).values.should == {:id=>3, :scope_id=>2, :position=>1} sqls = @db.sqls sqls.slice!(7).should =~ /INSERT INTO items \((scope_id|position), (scope_id|position)\) VALUES \([12], [12]\)/ sqls.slice!(4).should =~ /INSERT INTO items \((scope_id|position), (scope_id|position)\) VALUES \([12], [12]\)/ sqls.slice!(1).should =~ /INSERT INTO items \((scope_id|position), (scope_id|position)\) VALUES \(1, 1\)/ sqls.should == ["SELECT max(position) AS max FROM items WHERE (scope_id = 1) LIMIT 1", "SELECT * FROM items WHERE (id = 1) ORDER BY scope_id, position LIMIT 1", "SELECT 
max(position) AS max FROM items WHERE (scope_id = 1) LIMIT 1", "SELECT * FROM items WHERE (id = 2) ORDER BY scope_id, position LIMIT 1", "SELECT max(position) AS max FROM items WHERE (scope_id = 2) LIMIT 1", "SELECT * FROM items WHERE (id = 3) ORDER BY scope_id, position LIMIT 1"] end it "should have last_position return the last position in the list" do @c.dataset._fetch = {:max=>10} @o.last_position.should == 10 @sc.dataset._fetch = {:max=>20} @so.last_position.should == 20 @db.sqls.should == ["SELECT max(position) AS max FROM items LIMIT 1", "SELECT max(position) AS max FROM items WHERE (scope_id = 5) LIMIT 1"] end it "should have list_dataset return the model's dataset for non scoped lists" do @o.list_dataset.sql.should == 'SELECT * FROM items ORDER BY position' end it "should have list_dataset return a scoped dataset for scoped lists" do @so.list_dataset.sql.should == 'SELECT * FROM items WHERE (scope_id = 5) ORDER BY scope_id, position' end it "should have move_down without an argument move down a single position" do @c.dataset._fetch = {:max=>10} @o.move_down.should == @o @o.position.should == 4 @db.sqls.should == ["SELECT max(position) AS max FROM items LIMIT 1", "UPDATE items SET position = (position - 1) WHERE ((position >= 4) AND (position <= 4))", "UPDATE items SET position = 4 WHERE (id = 7)"] end it "should have move_down with an argument move down the given number of positions" do @c.dataset._fetch = {:max=>10} @o.move_down(3).should == @o @o.position.should == 6 @db.sqls.should == ["SELECT max(position) AS max FROM items LIMIT 1", "UPDATE items SET position = (position - 1) WHERE ((position >= 4) AND (position <= 6))", "UPDATE items SET position = 6 WHERE (id = 7)"] end it "should have move_down with a negative argument move up the given number of positions" do @o.move_down(-1).should == @o @o.position.should == 2 @db.sqls.should == ["UPDATE items SET position = (position + 1) WHERE ((position >= 2) AND (position < 3))", "UPDATE items SET position = 2 WHERE (id = 7)"] end it "should have move_to raise an error if an invalid target is used" do proc{@o.move_to(0)}.should raise_error(Sequel::Error) @c.dataset._fetch = {:max=>10} proc{@o.move_to(11)}.should raise_error(Sequel::Error) end it "should have move_to use a transaction if the instance is configured to use transactions" do @o.use_transactions = true @o.move_to(2) @db.sqls.should == ["BEGIN", "UPDATE items SET position = (position + 1) WHERE ((position >= 2) AND (position < 3))", "UPDATE items SET position = 2 WHERE (id = 7)", "COMMIT"] end it "should have move_to do nothing if the target position is the same as the current position" do @o.use_transactions = true @o.move_to(@o.position).should == @o @o.position.should == 3 @db.sqls.should == [] end it "should have move_to shift entries correctly between current and target if moving up" do @o.move_to(2) @db.sqls.first.should == "UPDATE items SET position = (position + 1) WHERE ((position >= 2) AND (position < 3))" end it "should have move_to shift entries correctly between current and target if moving down" do @c.dataset._fetch = {:max=>10} @o.move_to(4) @db.sqls[1].should == "UPDATE items SET position = (position - 1) WHERE ((position >= 4) AND (position <= 4))" end it "should have move_to_bottom move the item to the last position" do @c.dataset._fetch = {:max=>10} @o.move_to_bottom @db.sqls.should == ["SELECT max(position) AS max FROM items LIMIT 1", "UPDATE items SET position = (position - 1) WHERE ((position >= 4) AND (position <= 10))", "UPDATE items SET position = 
10 WHERE (id = 7)"] end it "should have move_to_top move the item to the first position" do @o.move_to_top @db.sqls.should == ["UPDATE items SET position = (position + 1) WHERE ((position >= 1) AND (position < 3))", "UPDATE items SET position = 1 WHERE (id = 7)"] end it "should have move_up without an argument move up a single position" do @o.move_up.should == @o @o.position.should == 2 @db.sqls.should == ["UPDATE items SET position = (position + 1) WHERE ((position >= 2) AND (position < 3))", "UPDATE items SET position = 2 WHERE (id = 7)"] end it "should have move_up with an argument move up the given number of positions" do @o.move_up(2).should == @o @o.position.should == 1 @db.sqls.should == ["UPDATE items SET position = (position + 1) WHERE ((position >= 1) AND (position < 3))", "UPDATE items SET position = 1 WHERE (id = 7)"] end it "should have move_up with a negative argument move down the given number of positions" do @c.dataset._fetch = {:max=>10} @o.move_up(-1).should == @o @o.position.should == 4 @db.sqls.should == ["SELECT max(position) AS max FROM items LIMIT 1", "UPDATE items SET position = (position - 1) WHERE ((position >= 4) AND (position <= 4))", "UPDATE items SET position = 4 WHERE (id = 7)"] end it "should have next return the next entry in the list if not given an argument" do @c.dataset._fetch = {:id=>9, :position=>4} @o.next.should == @c.load(:id=>9, :position=>4) @db.sqls.should == ["SELECT * FROM items WHERE (position = 4) ORDER BY position LIMIT 1"] end it "should have next return the entry the given number of positions below the instance if given an argument" do @c.dataset._fetch = {:id=>9, :position=>5} @o.next(2).should == @c.load(:id=>9, :position=>5) @db.sqls.should == ["SELECT * FROM items WHERE (position = 5) ORDER BY position LIMIT 1"] end it "should have next return a previous entry if given a negative argument" do @c.dataset._fetch = {:id=>9, :position=>2} @o.next(-1).should == @c.load(:id=>9, :position=>2) @db.sqls.should == ["SELECT * FROM items WHERE (position = 2) ORDER BY position LIMIT 1"] end it "should have position_value return the value of the position field" do @o.position_value.should == 3 end it "should have prev return the previous entry in the list if not given an argument" do @c.dataset._fetch = {:id=>9, :position=>2} @o.prev.should == @c.load(:id=>9, :position=>2) @db.sqls.should == ["SELECT * FROM items WHERE (position = 2) ORDER BY position LIMIT 1"] end it "should have prev return the entry the given number of positions above the instance if given an argument" do @c.dataset._fetch = {:id=>9, :position=>1} @o.prev(2).should == @c.load(:id=>9, :position=>1) @db.sqls.should == ["SELECT * FROM items WHERE (position = 1) ORDER BY position LIMIT 1"] end it "should have prev return a following entry if given a negative argument" do @c.dataset._fetch = {:id=>9, :position=>4} @o.prev(-1).should == @c.load(:id=>9, :position=>4) @db.sqls.should == ["SELECT * FROM items WHERE (position = 4) ORDER BY position LIMIT 1"] end end 
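# ----------------------------------------------------------------------
# Editor's sketch (not part of the original archive): minimal usage of
# the list plugin whose behavior the specs above exercise. Assumes the
# sqlite3 gem for an in-memory database; EXAMPLE_DB, the :items table,
# and the Item model are hypothetical names mirroring the :position
# field and :scope_id scope used in the specs.
require 'sequel'

EXAMPLE_DB = Sequel.sqlite # in-memory SQLite database

EXAMPLE_DB.create_table(:items) do
  primary_key :id
  Integer :position
  Integer :scope_id
end

class Item < Sequel::Model(EXAMPLE_DB[:items])
  plugin :list, :field=>:position, :scope=>:scope_id
end

a = Item.create(:scope_id=>1) # position defaults to max+1 within scope => 1
b = Item.create(:scope_id=>1) # => position 2
Item.create(:scope_id=>2)     # different scope, so numbering restarts at 1

b.move_to_top    # shifts a down; b.position => 1, a.position => 2
a.last_position  # => 2, the highest position in a's scope
b.next           # => a, the entry one position below b
b.at_position(2) # => a, lookup by position within b's scope
# ----------------------------------------------------------------------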
ruby-sequel-4.1.1/spec/extensions/looser_typecasting_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "LooserTypecasting Extension" do before do @db = Sequel::Database.new def @db.supports_schema_parsing?() true end def @db.schema(*args) [[:id, {}], [:z, {:type=>:float}], [:b, {:type=>:integer}], [:d, {:type=>:decimal}], [:s, {:type=>:string}]] end @c = Class.new(Sequel::Model(@db[:items])) @db.extension(:looser_typecasting) @c.instance_eval do @columns = [:id, :b, :z, :d, :s] def columns; @columns; end end end specify "should not raise errors for invalid strings in integer columns" do @c.new(:b=>'a').b.should == 0 @c.new(:b=>'a').b.should be_a_kind_of(Integer) end specify "should not raise errors for invalid strings in float columns" do @c.new(:z=>'a').z.should == 0.0 @c.new(:z=>'a').z.should be_a_kind_of(Float) end specify "should not raise errors for hash or array input to string columns" do @c.new(:s=>'a').s.should == 'a' @c.new(:s=>[]).s.should be_a_kind_of(String) @c.new(:s=>{}).s.should be_a_kind_of(String) end specify "should not raise errors for invalid strings in decimal columns" do @c.new(:d=>'a').d.should == 0.0 @c.new(:d=>'a').d.should be_a_kind_of(BigDecimal) end specify "should not affect conversions of other types in decimal columns" do @c.new(:d=>1).d.should == 1 @c.new(:d=>1).d.should be_a_kind_of(BigDecimal) end end
ruby-sequel-4.1.1/spec/extensions/many_through_many_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe Sequel::Model, "many_through_many" do before do class ::Artist < Sequel::Model attr_accessor :yyy columns :id plugin :many_through_many end class ::Tag < Sequel::Model columns :id, :h1, :h2 end @c1 = Artist @c2 = Tag @dataset = @c2.dataset @dataset._fetch = {:id=>1} DB.reset end after do Object.send(:remove_const, :Artist) Object.send(:remove_const, :Tag) end it "should populate :key_hash and :id_map option correctly for custom eager loaders" do khs = [] pr = proc{|h| khs << [h[:key_hash], h[:id_map]]} @c1.many_through_many :tags, :through=>[[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :eager_loader=>pr @c1.eager(:tags).all khs.should == [[{:id=>{1=>[Artist.load(:x=>1, :id=>1)]}}, {1=>[Artist.load(:x=>1, :id=>1)]}]] khs.clear @c1.many_through_many :tags, :through=>[[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], 
:left_primary_key=>:id, :left_primary_key_column=>:i, :eager_loader=>pr @c1.eager(:tags).all khs.should == [[{:id=>{1=>[Artist.load(:x=>1, :id=>1)]}}, {1=>[Artist.load(:x=>1, :id=>1)]}]] end it "should support using a custom :left_primary_key option when eager loading many_to_many associations" do @c1.send(:define_method, :id3){id*3} @c1.dataset._fetch = {:id=>1} @c2.dataset._fetch = {:id=>4, :x_foreign_key_x=>3} @c1.many_through_many :tags, :through=>[[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :left_primary_key=>:id3 a = @c1.eager(:tags).all a.should == [@c1.load(:id => 1)] DB.sqls.should == ['SELECT * FROM artists', "SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (3)))"] a.first.tags.should == [@c2.load(:id=>4)] DB.sqls.should == [] end it "should handle a :eager_loading_predicate_key option to change the SQL used in the lookup" do @c1.dataset._fetch = {:id=>1} @c2.dataset._fetch = {:id=>4, :x_foreign_key_x=>1} @c1.many_through_many :tags, :through=>[[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :eager_loading_predicate_key=>Sequel./(:albums_artists__artist_id, 3) a = @c1.eager(:tags).all a.should == [@c1.load(:id => 1)] DB.sqls.should == ['SELECT * FROM artists', "SELECT tags.*, (albums_artists.artist_id / 3) AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND ((albums_artists.artist_id / 3) IN (1)))"] a.first.tags.should == [@c2.load(:id=>4)] end it "should default to associating to other models in the same scope" do begin class ::AssociationModuleTest class Artist < Sequel::Model plugin :many_through_many many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]] end class Tag < Sequel::Model end end ::AssociationModuleTest::Artist.association_reflection(:tags).associated_class.should == ::AssociationModuleTest::Tag ensure Object.send(:remove_const, :AssociationModuleTest) end end it "should raise an error if an invalid form of through is used" do proc{@c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id]]}.should raise_error(Sequel::Error) proc{@c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], {:table=>:album_tags, :left=>:album_id}]}.should raise_error(Sequel::Error) proc{@c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], :album_tags]}.should raise_error(Sequel::Error) end it "should allow only two arguments with the :through option" do @c1.many_through_many :tags, :through=>[[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]] n = @c1.load(:id => 1234) n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1234))' n.tags.should == [@c2.load(:id=>1)] end it "should be clonable" do @c1.many_through_many :tags, 
[[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]] @c1.many_through_many :other_tags, :clone=>:tags n = @c1.load(:id => 1234) n.other_tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1234))' n.tags.should == [@c2.load(:id=>1)] end it "should use join tables given" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]] n = @c1.load(:id => 1234) n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1234))' n.tags.should == [@c2.load(:id=>1)] end it "should handle multiple aliasing of tables" do class ::Album < Sequel::Model end @c1.many_through_many :albums, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_artists, :album_id, :artist_id], [:artists, :id, :id], [:albums_artists, :artist_id, :album_id]] n = @c1.load(:id => 1234) n.albums_dataset.sql.should == 'SELECT albums.* FROM albums INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) INNER JOIN artists ON (artists.id = albums_artists.artist_id) INNER JOIN albums_artists AS albums_artists_0 ON (albums_artists_0.artist_id = artists.id) INNER JOIN albums AS albums_0 ON (albums_0.id = albums_artists_0.album_id) INNER JOIN albums_artists AS albums_artists_1 ON ((albums_artists_1.album_id = albums_0.id) AND (albums_artists_1.artist_id = 1234))' n.albums.should == [Album.load(:id=>1, :x=>1)] Object.send(:remove_const, :Album) end it "should use explicit class if given" do @c1.many_through_many :albums_tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :class=>Tag n = @c1.load(:id => 1234) n.albums_tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1234))' n.albums_tags.should == [@c2.load(:id=>1)] end it "should accept :left_primary_key and :right_primary_key option for primary keys to use in current and associated table" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :right_primary_key=>:tag_id, :left_primary_key=>:yyy n = @c1.load(:id => 1234) n.yyy = 85 n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.tag_id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 85))' n.tags.should == [@c2.load(:id=>1)] end it "should handle composite keys" do @c1.many_through_many :tags, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:h1, :h2], :left_primary_key=>[:id, :yyy] n = @c1.load(:id => 1234) n.yyy = 85 n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON ((albums_tags.g1 = tags.h1) AND (albums_tags.g2 = tags.h2)) INNER JOIN albums ON 
((albums.e1 = albums_tags.f1) AND (albums.e2 = albums_tags.f2)) INNER JOIN albums_artists ON ((albums_artists.c1 = albums.d1) AND (albums_artists.c2 = albums.d2) AND (albums_artists.b1 = 1234) AND (albums_artists.b2 = 85))' n.tags.should == [@c2.load(:id=>1)] end it "should allow filtering by many_through_many associations" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]] @c1.filter(:tags=>@c2.load(:id=>1234)).sql.should == 'SELECT * FROM artists WHERE (artists.id IN (SELECT albums_artists.artist_id FROM albums_artists INNER JOIN albums ON (albums.id = albums_artists.album_id) INNER JOIN albums_tags ON (albums_tags.album_id = albums.id) WHERE ((albums_tags.tag_id = 1234) AND (albums_artists.artist_id IS NOT NULL))))' end it "should allow filtering by many_through_many associations with a single through table" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id]] @c1.filter(:tags=>@c2.load(:id=>1234)).sql.should == 'SELECT * FROM artists WHERE (artists.id IN (SELECT albums_artists.artist_id FROM albums_artists WHERE ((albums_artists.album_id = 1234) AND (albums_artists.artist_id IS NOT NULL))))' end it "should allow filtering by many_through_many associations with aliased tables" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums_artists, :id, :id], [:albums_artists, :album_id, :tag_id]] @c1.filter(:tags=>@c2.load(:id=>1234)).sql.should == 'SELECT * FROM artists WHERE (artists.id IN (SELECT albums_artists.artist_id FROM albums_artists INNER JOIN albums_artists AS albums_artists_0 ON (albums_artists_0.id = albums_artists.album_id) INNER JOIN albums_artists AS albums_artists_1 ON (albums_artists_1.album_id = albums_artists_0.id) WHERE ((albums_artists_1.tag_id = 1234) AND (albums_artists.artist_id IS NOT NULL))))' end it "should allow filtering by many_through_many associations with composite keys" do @c1.many_through_many :tags, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:h1, :h2], :left_primary_key=>[:id, :yyy] @c1.filter(:tags=>@c2.load(:h1=>1234, :h2=>85)).sql.should == 'SELECT * FROM artists WHERE ((artists.id, artists.yyy) IN (SELECT albums_artists.b1, albums_artists.b2 FROM albums_artists INNER JOIN albums ON ((albums.d1 = albums_artists.c1) AND (albums.d2 = albums_artists.c2)) INNER JOIN albums_tags ON ((albums_tags.f1 = albums.e1) AND (albums_tags.f2 = albums.e2)) WHERE ((albums_tags.g1 = 1234) AND (albums_tags.g2 = 85) AND (albums_artists.b1 IS NOT NULL) AND (albums_artists.b2 IS NOT NULL))))' end it "should allow excluding by many_through_many associations" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]] @c1.exclude(:tags=>@c2.load(:id=>1234)).sql.should == 'SELECT * FROM artists WHERE ((artists.id NOT IN (SELECT albums_artists.artist_id FROM albums_artists INNER JOIN albums ON (albums.id = albums_artists.album_id) INNER JOIN albums_tags ON (albums_tags.album_id = albums.id) WHERE ((albums_tags.tag_id = 1234) AND (albums_artists.artist_id IS NOT NULL)))) OR (artists.id IS NULL))' end it "should allow excluding by many_through_many associations with composite keys" do @c1.many_through_many :tags, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:h1, :h2], :left_primary_key=>[:id, :yyy] @c1.exclude(:tags=>@c2.load(:h1=>1234, :h2=>85)).sql.should == 'SELECT * FROM artists WHERE (((artists.id, artists.yyy) NOT IN (SELECT albums_artists.b1, albums_artists.b2 FROM albums_artists INNER JOIN albums ON ((albums.d1 = albums_artists.c1) AND (albums.d2 = albums_artists.c2)) INNER JOIN albums_tags ON ((albums_tags.f1 = albums.e1) AND (albums_tags.f2 = albums.e2)) WHERE ((albums_tags.g1 = 1234) AND (albums_tags.g2 = 85) AND (albums_artists.b1 IS NOT NULL) AND (albums_artists.b2 IS NOT NULL)))) OR (artists.id IS NULL) OR (artists.yyy IS NULL))' end it "should allow filtering by multiple many_through_many associations" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]] @c1.filter(:tags=>[@c2.load(:id=>1234), @c2.load(:id=>2345)]).sql.should == 'SELECT * FROM artists WHERE (artists.id IN (SELECT albums_artists.artist_id FROM albums_artists INNER JOIN albums ON (albums.id = albums_artists.album_id) INNER JOIN albums_tags ON (albums_tags.album_id = albums.id) WHERE ((albums_tags.tag_id IN (1234, 2345)) AND (albums_artists.artist_id IS NOT NULL))))' end it "should allow filtering by multiple many_through_many associations with composite keys" do @c1.many_through_many :tags, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:h1, :h2], :left_primary_key=>[:id, :yyy] @c1.filter(:tags=>[@c2.load(:h1=>1234, :h2=>85), @c2.load(:h1=>2345, :h2=>95)]).sql.should == 'SELECT * FROM artists WHERE ((artists.id, artists.yyy) IN (SELECT albums_artists.b1, albums_artists.b2 FROM albums_artists INNER JOIN albums ON ((albums.d1 = albums_artists.c1) AND (albums.d2 = albums_artists.c2)) INNER JOIN albums_tags ON ((albums_tags.f1 = albums.e1) AND (albums_tags.f2 = albums.e2)) WHERE (((albums_tags.g1, albums_tags.g2) IN ((1234, 85), (2345, 95))) AND (albums_artists.b1 IS NOT NULL) AND (albums_artists.b2 IS NOT NULL))))' end it "should allow excluding by multiple many_through_many associations" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]] @c1.exclude(:tags=>[@c2.load(:id=>1234), @c2.load(:id=>2345)]).sql.should == 'SELECT * FROM artists WHERE ((artists.id NOT IN (SELECT albums_artists.artist_id FROM albums_artists INNER JOIN albums ON (albums.id = albums_artists.album_id) INNER JOIN albums_tags ON (albums_tags.album_id = albums.id) WHERE ((albums_tags.tag_id IN (1234, 2345)) AND (albums_artists.artist_id IS NOT NULL)))) OR (artists.id IS NULL))' end it "should allow excluding by multiple many_through_many associations with composite keys" do @c1.many_through_many :tags, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:h1, :h2], :left_primary_key=>[:id, :yyy] @c1.exclude(:tags=>[@c2.load(:h1=>1234, :h2=>85), @c2.load(:h1=>2345, :h2=>95)]).sql.should == 'SELECT * FROM artists WHERE (((artists.id, artists.yyy) NOT IN (SELECT albums_artists.b1, albums_artists.b2 FROM albums_artists INNER JOIN albums ON ((albums.d1 = albums_artists.c1) AND (albums.d2 = albums_artists.c2)) INNER JOIN albums_tags ON ((albums_tags.f1 = albums.e1) AND (albums_tags.f2 = albums.e2)) WHERE (((albums_tags.g1, albums_tags.g2) IN ((1234, 85), (2345, 95))) AND (albums_artists.b1 IS NOT NULL) AND (albums_artists.b2 IS NOT NULL)))) OR (artists.id IS NULL) OR (artists.yyy IS NULL))' end it "should allow filtering/excluding many_through_many associations with NULL values" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]] @c1.filter(:tags=>@c2.new).sql.should == 'SELECT * FROM artists WHERE \'f\'' @c1.exclude(:tags=>@c2.new).sql.should == 'SELECT * FROM artists WHERE \'t\'' end it "should allow filtering by many_through_many association datasets" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]] @c1.filter(:tags=>@c2.filter(:x=>1)).sql.should == 'SELECT * FROM artists WHERE (artists.id IN (SELECT albums_artists.artist_id FROM albums_artists INNER JOIN albums ON (albums.id = albums_artists.album_id) INNER JOIN albums_tags ON (albums_tags.album_id = albums.id) WHERE ((albums_tags.tag_id IN (SELECT tags.id FROM tags WHERE ((x = 1) AND (tags.id IS NOT NULL)))) AND (albums_artists.artist_id IS NOT NULL))))' end it "should allow filtering by many_through_many association datasets with composite keys" do @c1.many_through_many :tags, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:h1, :h2], :left_primary_key=>[:id, :yyy] @c1.filter(:tags=>@c2.filter(:x=>1)).sql.should == 'SELECT * FROM artists WHERE ((artists.id, artists.yyy) IN (SELECT albums_artists.b1, albums_artists.b2 FROM albums_artists INNER JOIN albums ON ((albums.d1 = albums_artists.c1) AND (albums.d2 = albums_artists.c2)) INNER JOIN albums_tags ON ((albums_tags.f1 = albums.e1) AND (albums_tags.f2 = albums.e2)) WHERE (((albums_tags.g1, albums_tags.g2) IN (SELECT tags.h1, tags.h2 FROM tags WHERE ((x = 1) AND (tags.h1 IS NOT NULL) AND (tags.h2 IS NOT NULL)))) AND (albums_artists.b1 IS NOT NULL) AND (albums_artists.b2 IS NOT NULL))))' end it "should allow excluding by many_through_many association datasets" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]] @c1.exclude(:tags=>@c2.filter(:x=>1)).sql.should == 'SELECT * FROM artists WHERE ((artists.id NOT IN (SELECT albums_artists.artist_id FROM albums_artists INNER JOIN albums ON (albums.id = albums_artists.album_id) INNER JOIN albums_tags ON (albums_tags.album_id = albums.id) WHERE ((albums_tags.tag_id IN (SELECT tags.id FROM tags WHERE ((x = 1) AND (tags.id IS NOT NULL)))) AND (albums_artists.artist_id IS NOT NULL)))) OR (artists.id IS NULL))' end it "should allow excluding by many_through_many association datasets with composite keys" do @c1.many_through_many :tags, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:h1, :h2], :left_primary_key=>[:id, :yyy] @c1.exclude(:tags=>@c2.filter(:x=>1)).sql.should == 'SELECT * FROM artists WHERE (((artists.id, artists.yyy) NOT IN (SELECT albums_artists.b1, albums_artists.b2 FROM albums_artists INNER JOIN albums ON ((albums.d1 = albums_artists.c1) AND (albums.d2 = albums_artists.c2)) INNER JOIN albums_tags ON ((albums_tags.f1 = albums.e1) AND (albums_tags.f2 = albums.e2)) WHERE (((albums_tags.g1, albums_tags.g2) IN (SELECT tags.h1, tags.h2 FROM tags WHERE ((x = 1) AND (tags.h1 IS NOT NULL) AND (tags.h2 IS NOT NULL)))) AND (albums_artists.b1 IS NOT NULL) AND (albums_artists.b2 IS NOT NULL)))) OR (artists.id IS NULL) OR (artists.yyy IS NULL))' end it "should support a 
:conditions option" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :conditions=>{:a=>32} n = @c1.load(:id => 1234) n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1234)) WHERE (a = 32)' n.tags.should == [@c2.load(:id=>1)] @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :conditions=>['a = ?', 42] n = @c1.load(:id => 1234) n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1234)) WHERE (a = 42)' n.tags.should == [@c2.load(:id=>1)] end it "should support an :order option" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :order=>:blah n = @c1.load(:id => 1234) n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1234)) ORDER BY blah' n.tags.should == [@c2.load(:id=>1)] end it "should support an array for the :order option" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :order=>[:blah1, :blah2] n = @c1.load(:id => 1234) n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1234)) ORDER BY blah1, blah2' n.tags.should == [@c2.load(:id=>1)] end it "should support a select option" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :select=>:blah n = @c1.load(:id => 1234) n.tags_dataset.sql.should == 'SELECT blah FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1234))' n.tags.should == [@c2.load(:id=>1)] end it "should support an array for the select option" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :select=>[Sequel::SQL::ColumnAll.new(:tags), :albums__name] n = @c1.load(:id => 1234) n.tags_dataset.sql.should == 'SELECT tags.*, albums.name FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1234))' n.tags.should == [@c2.load(:id=>1)] end it "should accept a block" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]] do |ds| ds.filter(:yyy=>@yyy) end n = @c1.load(:id => 1234) n.yyy = 85 n.tags_dataset.sql.should == 'SELECT tags.* FROM 
tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1234)) WHERE (yyy = 85)' n.tags.should == [@c2.load(:id=>1)] end it "should allow the :order option while accepting a block" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :order=>:blah do |ds| ds.filter(:yyy=>@yyy) end n = @c1.load(:id => 1234) n.yyy = 85 n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1234)) WHERE (yyy = 85) ORDER BY blah' n.tags.should == [@c2.load(:id=>1)] end it "should support a :dataset option that is used instead of the default" do @c1.many_through_many :tags, [[:a, :b, :c]], :dataset=>proc{Tag.join(:albums_tags, [:tag_id]).join(:albums, [:album_id]).join(:albums_artists, [:album_id]).filter(:albums_artists__artist_id=>id)} n = @c1.load(:id => 1234) n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags USING (tag_id) INNER JOIN albums USING (album_id) INNER JOIN albums_artists USING (album_id) WHERE (albums_artists.artist_id = 1234)' n.tags.should == [@c2.load(:id=>1)] end it "should support a :limit option" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :limit=>10 n = @c1.load(:id => 1234) n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1234)) LIMIT 10' n.tags.should == [@c2.load(:id=>1)] @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :limit=>[10, 10] n = @c1.load(:id => 1234) n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1234)) LIMIT 10 OFFSET 10' n.tags.should == [@c2.load(:id=>1)] end it "should have the :eager option affect the _dataset method" do @c2.many_to_many :fans @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :eager=>:fans @c1.load(:id => 1234).tags_dataset.opts[:eager].should == {:fans=>nil} end it "should provide an array with all members of the association" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]] @c1.load(:id => 1234).tags.should == [@c2.load(:id=>1)] DB.sqls.should == ['SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1234))'] end it "should populate cache when accessed" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]] n = @c1.load(:id => 1234) n.associations[:tags].should == nil 
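# editor's note (added annotation): Model.load does not query the database, so the association cache starts empty and the assertions below show that SQL is only issued on first access.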
DB.sqls.should == [] n.tags.should == [@c2.load(:id=>1)] DB.sqls.should == ['SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1234))'] n.associations[:tags].should == n.tags DB.sqls.length.should == 0 end it "should use cache if available" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]] n = @c1.load(:id => 1234) n.associations[:tags] = [] n.tags.should == [] DB.sqls.should == [] end it "should not use cache if asked to reload" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]] n = @c1.load(:id => 1234) n.associations[:tags] = [] DB.sqls.should == [] n.tags(true).should == [@c2.load(:id=>1)] DB.sqls.should == ['SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1234))'] n.associations[:tags].should == n.tags DB.sqls.length.should == 0 end it "should not add associations methods directly to class" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]] im = @c1.instance_methods.collect{|x| x.to_s} im.should(include('tags')) im.should(include('tags_dataset')) im2 = @c1.instance_methods(false).collect{|x| x.to_s} im2.should_not(include('tags')) im2.should_not(include('tags_dataset')) end it "should support after_load association callback" do h = [] @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :after_load=>:al @c1.class_eval do self::Foo = h def al(v) v.each{|x| model::Foo << x.pk * 20} end end @c2.dataset._fetch = [{:id=>20}, {:id=>30}] p = @c1.load(:id=>10, :parent_id=>20) p.tags h.should == [400, 600] p.tags.collect{|a| a.pk}.should == [20, 30] end it "should support a :uniq option that removes duplicates from the association" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :uniq=>true @c2.dataset._fetch = [{:id=>20}, {:id=>30}, {:id=>20}, {:id=>30}] @c1.load(:id=>10).tags.should == [@c2.load(:id=>20), @c2.load(:id=>30)] end end describe 'Sequel::Plugins::ManyThroughMany::ManyThroughManyAssociationReflection' do before do class ::Artist < Sequel::Model plugin :many_through_many many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]] end class ::Tag < Sequel::Model end DB.reset @ar = Artist.association_reflection(:tags) end after do Object.send(:remove_const, :Artist) Object.send(:remove_const, :Tag) end it "#edges should be an array of joins to make when eager graphing" do @ar.edges.should == [{:conditions=>[], :left=>:id, :right=>:artist_id, :table=>:albums_artists, :join_type=>:left_outer, :block=>nil}, {:conditions=>[], :left=>:album_id, :right=>:id, :table=>:albums, :join_type=>:left_outer, :block=>nil}, {:conditions=>[], :left=>:id, :right=>:album_id, :table=>:albums_tags, :join_type=>:left_outer, :block=>nil}] end it "#edges should handle composite keys" do Artist.many_through_many :tags, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, 
:d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:h1, :h2], :left_primary_key=>[:id, :yyy] Artist.association_reflection(:tags).edges.should == [{:conditions=>[], :left=>[:id, :yyy], :right=>[:b1, :b2], :table=>:albums_artists, :join_type=>:left_outer, :block=>nil}, {:conditions=>[], :left=>[:c1, :c2], :right=>[:d1, :d2], :table=>:albums, :join_type=>:left_outer, :block=>nil}, {:conditions=>[], :left=>[:e1, :e2], :right=>[:f1, :f2], :table=>:albums_tags, :join_type=>:left_outer, :block=>nil}] end it "#reverse_edges should be an array of joins to make when lazy loading or eager loading" do @ar.reverse_edges.should == [{:alias=>:albums_tags, :left=>:tag_id, :right=>:id, :table=>:albums_tags}, {:alias=>:albums, :left=>:id, :right=>:album_id, :table=>:albums}] end it "#reverse_edges should handle composite keys" do Artist.many_through_many :tags, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:h1, :h2], :left_primary_key=>[:id, :yyy] Artist.association_reflection(:tags).reverse_edges.should == [{:alias=>:albums_tags, :left=>[:g1, :g2], :right=>[:h1, :h2], :table=>:albums_tags}, {:alias=>:albums, :left=>[:e1, :e2], :right=>[:f1, :f2], :table=>:albums}] end it "#reciprocal should be nil" do @ar.reciprocal.should == nil end end describe "Sequel::Plugins::ManyThroughMany eager loading methods" do before do class ::Artist < Sequel::Model plugin :many_through_many many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]] many_through_many :other_tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :class=>:Tag many_through_many :albums, [[:albums_artists, :artist_id, :album_id]] many_through_many :artists, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_artists, :album_id, :artist_id]] end class ::Tag < Sequel::Model plugin :many_through_many many_through_many :tracks, [[:albums_tags, :tag_id, :album_id], [:albums, :id, :id]], :right_primary_key=>:album_id end class ::Album < Sequel::Model end class ::Track < Sequel::Model end Artist.dataset.columns(:id)._fetch = proc do |sql| h = {:id => 1} if sql =~ /FROM artists LEFT OUTER JOIN albums_artists/ h[:tags_id] = 2 h[:albums_0_id] = 3 if sql =~ /LEFT OUTER JOIN albums AS albums_0/ h[:tracks_id] = 4 if sql =~ /LEFT OUTER JOIN tracks/ h[:other_tags_id] = 9 if sql =~ /other_tags\.id AS other_tags_id/ h[:artists_0_id] = 10 if sql =~ /artists_0\.id AS artists_0_id/ end h end Tag.dataset._fetch = proc do |sql| h = {:id => 2} if sql =~ /albums_artists.artist_id IN \(([18])\)/ h[:x_foreign_key_x] = $1.to_i elsif sql =~ /\(\(albums_artists.b1, albums_artists.b2\) IN \(\(1, 8\)\)\)/ h.merge!(:x_foreign_key_0_x=>1, :x_foreign_key_1_x=>8) end h[:tag_id] = h.delete(:id) if sql =~ /albums_artists.artist_id IN \(8\)/ h end Album.dataset._fetch = proc do |sql| h = {:id => 3} h[:x_foreign_key_x] = 1 if sql =~ /albums_artists.artist_id IN \(1\)/ h end Track.dataset._fetch = proc do |sql| h = {:id => 4} h[:x_foreign_key_x] = 2 if sql =~ /albums_tags.tag_id IN \(2\)/ h end @c1 = Artist DB.reset end after do [:Artist, :Tag, :Album, :Track].each{|x| Object.send(:remove_const, x)} end it "should eagerly load a single many_through_many association" do a = @c1.eager(:tags).all a.should == [@c1.load(:id=>1)] DB.sqls.should == ['SELECT * FROM artists', 'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x 
FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1)))'] a.first.tags.should == [Tag.load(:id=>2)] DB.sqls.length.should == 0 end it "should eagerly load multiple associations in a single call" do a = @c1.eager(:tags, :albums).all a.should == [@c1.load(:id=>1)] sqls = DB.sqls sqls.length.should == 3 sqls[0].should == 'SELECT * FROM artists' sqls[1..-1].should(include('SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1)))')) sqls[1..-1].should(include('SELECT albums.*, albums_artists.artist_id AS x_foreign_key_x FROM albums INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1)))')) a = a.first a.tags.should == [Tag.load(:id=>2)] a.albums.should == [Album.load(:id=>3)] DB.sqls.length.should == 0 end it "should eagerly load multiple associations in separate calls" do a = @c1.eager(:tags).eager(:albums).all a.should == [@c1.load(:id=>1)] sqls = DB.sqls sqls.length.should == 3 sqls[0].should == 'SELECT * FROM artists' sqls[1..-1].should(include('SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1)))')) sqls[1..-1].should(include('SELECT albums.*, albums_artists.artist_id AS x_foreign_key_x FROM albums INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1)))')) a = a.first a.tags.should == [Tag.load(:id=>2)] a.albums.should == [Album.load(:id=>3)] DB.sqls.length.should == 0 end it "should allow cascading of eager loading for associations of associated models" do a = @c1.eager(:tags=>:tracks).all a.should == [@c1.load(:id=>1)] DB.sqls.should == ['SELECT * FROM artists', 'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1)))', 'SELECT tracks.*, albums_tags.tag_id AS x_foreign_key_x FROM tracks INNER JOIN albums ON (albums.id = tracks.album_id) INNER JOIN albums_tags ON ((albums_tags.album_id = albums.id) AND (albums_tags.tag_id IN (2)))'] a = a.first a.tags.should == [Tag.load(:id=>2)] a.tags.first.tracks.should == [Track.load(:id=>4)] DB.sqls.length.should == 0 end it "should cascade eager loading when the :eager association option is used" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :eager=>:tracks a = @c1.eager(:tags).all a.should == [@c1.load(:id=>1)] DB.sqls.should == ['SELECT * FROM artists', 'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1)))', 'SELECT tracks.*, albums_tags.tag_id AS x_foreign_key_x FROM tracks 
INNER JOIN albums ON (albums.id = tracks.album_id) INNER JOIN albums_tags ON ((albums_tags.album_id = albums.id) AND (albums_tags.tag_id IN (2)))'] a = a.first a.tags.should == [Tag.load(:id=>2)] a.tags.first.tracks.should == [Track.load(:id=>4)] DB.sqls.length.should == 0 end it "should respect :eager when lazily loading an association" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :eager=>:tracks a = @c1.load(:id=>1) a.tags.should == [Tag.load(:id=>2)] DB.sqls.should == ['SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1))', 'SELECT tracks.*, albums_tags.tag_id AS x_foreign_key_x FROM tracks INNER JOIN albums ON (albums.id = tracks.album_id) INNER JOIN albums_tags ON ((albums_tags.album_id = albums.id) AND (albums_tags.tag_id IN (2)))'] a.tags.first.tracks.should == [Track.load(:id=>4)] DB.sqls.length.should == 0 end it "should raise error if attempting to eagerly load an association using :eager_graph option" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :eager_graph=>:tracks proc{@c1.eager(:tags).all}.should raise_error(Sequel::Error) end it "should respect :eager_graph when lazily loading an association" do Tag.dataset._fetch = {:id=>2, :tracks_id=>4} Tag.dataset.extend(Module.new { def columns [:id] end }) @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :eager_graph=>:tracks a = @c1.load(:id=>1) a.tags DB.sqls.should == [ 'SELECT tags.id, tracks.id AS tracks_id FROM (SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1))) AS tags LEFT OUTER JOIN albums_tags AS albums_tags_0 ON (albums_tags_0.tag_id = tags.id) LEFT OUTER JOIN albums ON (albums.id = albums_tags_0.album_id) LEFT OUTER JOIN tracks ON (tracks.album_id = albums.id)'] a.tags.should == [Tag.load(:id=>2)] a.tags.first.tracks.should == [Track.load(:id=>4)] DB.sqls.length.should == 0 end it "should respect :conditions when eagerly loading" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :conditions=>{:a=>32} a = @c1.eager(:tags).all a.should == [@c1.load(:id=>1)] DB.sqls.should == ['SELECT * FROM artists', 'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1))) WHERE (a = 32)'] a.first.tags.should == [Tag.load(:id=>2)] DB.sqls.length.should == 0 end it "should respect :order when eagerly loading" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :order=>:blah a = @c1.eager(:tags).all a.should == [@c1.load(:id=>1)] DB.sqls.should == ['SELECT * FROM artists', 'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = 
albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1))) ORDER BY blah'] a.first.tags.should == [Tag.load(:id=>2)] DB.sqls.length.should == 0 end it "should use the association's block when eager loading by default" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]] do |ds| ds.filter(:a) end a = @c1.eager(:tags).all a.should == [@c1.load(:id=>1)] DB.sqls.should == ['SELECT * FROM artists', 'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1))) WHERE a'] a.first.tags.should == [Tag.load(:id=>2)] DB.sqls.length.should == 0 end it "should use the :eager_block option when eager loading if given" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :eager_block=>proc{|ds| ds.filter(:b)} do |ds| ds.filter(:a) end a = @c1.eager(:tags).all a.should == [@c1.load(:id=>1)] DB.sqls.should == ['SELECT * FROM artists', 'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1))) WHERE b'] a.first.tags.should == [Tag.load(:id=>2)] DB.sqls.length.should == 0 end it "should respect the :limit option on a many_through_many association" do @c1.many_through_many :first_two_tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :class=>Tag, :limit=>2 Tag.dataset._fetch = [{:x_foreign_key_x=>1, :id=>5},{:x_foreign_key_x=>1, :id=>6}, {:x_foreign_key_x=>1, :id=>7}] a = @c1.eager(:first_two_tags).all a.should == [@c1.load(:id=>1)] DB.sqls.should == ['SELECT * FROM artists', 'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1)))'] a.first.first_two_tags.should == [Tag.load(:id=>5), Tag.load(:id=>6)] DB.sqls.length.should == 0 @c1.many_through_many :first_two_tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :class=>Tag, :limit=>[1,1] a = @c1.eager(:first_two_tags).all a.should == [@c1.load(:id=>1)] DB.sqls.should == ['SELECT * FROM artists', 'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1)))'] a.first.first_two_tags.should == [Tag.load(:id=>6)] DB.sqls.length.should == 0 @c1.many_through_many :first_two_tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :class=>Tag, :limit=>[nil,1] a = @c1.eager(:first_two_tags).all a.should == [@c1.load(:id=>1)] DB.sqls.should == ['SELECT * FROM artists', 'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = 
tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1)))'] a.first.first_two_tags.should == [Tag.load(:id=>6), Tag.load(:id=>7)] DB.sqls.length.should == 0 end it "should respect the :limit option on a many_through_many association using a :window_function strategy" do Tag.dataset.meta_def(:supports_window_functions?){true} @c1.many_through_many :first_two_tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :class=>Tag, :limit=>2, :order=>:name Tag.dataset._fetch = [{:x_foreign_key_x=>1, :id=>5},{:x_foreign_key_x=>1, :id=>6}] a = @c1.eager(:first_two_tags).all a.should == [@c1.load(:id=>1)] DB.sqls.should == ['SELECT * FROM artists', 'SELECT * FROM (SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x, row_number() OVER (PARTITION BY albums_artists.artist_id ORDER BY name) AS x_sequel_row_number_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1)))) AS t1 WHERE (x_sequel_row_number_x <= 2)'] a.first.first_two_tags.should == [Tag.load(:id=>5), Tag.load(:id=>6)] DB.sqls.length.should == 0 @c1.many_through_many :first_two_tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :class=>Tag, :limit=>[2,1], :order=>:name a = @c1.eager(:first_two_tags).all a.should == [@c1.load(:id=>1)] DB.sqls.should == ['SELECT * FROM artists', 'SELECT * FROM (SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x, row_number() OVER (PARTITION BY albums_artists.artist_id ORDER BY name) AS x_sequel_row_number_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1)))) AS t1 WHERE ((x_sequel_row_number_x >= 2) AND (x_sequel_row_number_x < 4))'] a.first.first_two_tags.should == [Tag.load(:id=>5), Tag.load(:id=>6)] DB.sqls.length.should == 0 @c1.many_through_many :first_two_tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :class=>Tag, :limit=>[nil,1], :order=>:name a = @c1.eager(:first_two_tags).all a.should == [@c1.load(:id=>1)] DB.sqls.should == ['SELECT * FROM artists', 'SELECT * FROM (SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x, row_number() OVER (PARTITION BY albums_artists.artist_id ORDER BY name) AS x_sequel_row_number_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1)))) AS t1 WHERE (x_sequel_row_number_x >= 2)'] a.first.first_two_tags.should == [Tag.load(:id=>5), Tag.load(:id=>6)] DB.sqls.length.should == 0 end it "should respect the :limit option on a many_through_many association with composite primary keys on the main table using a :window_function strategy" do Tag.dataset.meta_def(:supports_window_functions?){true} @c1.set_primary_key([:id1, :id2]) @c1.columns :id1, :id2 @c1.many_through_many :first_two_tags, [[:albums_artists, [:artist_id1, :artist_id2], :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :class=>Tag, :limit=>2, :order=>:name @c1.dataset._fetch = 
[{:id1=>1, :id2=>2}] Tag.dataset._fetch = [{:x_foreign_key_0_x=>1, :x_foreign_key_1_x=>2, :id=>5}, {:x_foreign_key_0_x=>1, :x_foreign_key_1_x=>2, :id=>6}] a = @c1.eager(:first_two_tags).all a.should == [@c1.load(:id1=>1, :id2=>2)] DB.sqls.should == ['SELECT * FROM artists', 'SELECT * FROM (SELECT tags.*, albums_artists.artist_id1 AS x_foreign_key_0_x, albums_artists.artist_id2 AS x_foreign_key_1_x, row_number() OVER (PARTITION BY albums_artists.artist_id1, albums_artists.artist_id2 ORDER BY name) AS x_sequel_row_number_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND ((albums_artists.artist_id1, albums_artists.artist_id2) IN ((1, 2))))) AS t1 WHERE (x_sequel_row_number_x <= 2)'] a.first.first_two_tags.should == [Tag.load(:id=>5), Tag.load(:id=>6)] DB.sqls.length.should == 0 @c1.many_through_many :first_two_tags, [[:albums_artists, [:artist_id1, :artist_id2], :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :class=>Tag, :limit=>[2,1], :order=>:name a = @c1.eager(:first_two_tags).all a.should == [@c1.load(:id1=>1, :id2=>2)] DB.sqls.should == ['SELECT * FROM artists', 'SELECT * FROM (SELECT tags.*, albums_artists.artist_id1 AS x_foreign_key_0_x, albums_artists.artist_id2 AS x_foreign_key_1_x, row_number() OVER (PARTITION BY albums_artists.artist_id1, albums_artists.artist_id2 ORDER BY name) AS x_sequel_row_number_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND ((albums_artists.artist_id1, albums_artists.artist_id2) IN ((1, 2))))) AS t1 WHERE ((x_sequel_row_number_x >= 2) AND (x_sequel_row_number_x < 4))'] a.first.first_two_tags.should == [Tag.load(:id=>5), Tag.load(:id=>6)] DB.sqls.length.should == 0 end it "should raise an error when attempting to eagerly load an association with the :allow_eager option set to false" do proc{@c1.eager(:tags).all}.should_not raise_error @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :allow_eager=>false proc{@c1.eager(:tags).all}.should raise_error(Sequel::Error) end it "should respect the association's :select option" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :select=>:tags__name a = @c1.eager(:tags).all a.should == [@c1.load(:id=>1)] DB.sqls.should == ['SELECT * FROM artists', 'SELECT tags.name, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1)))'] a.first.tags.should == [Tag.load(:id=>2)] DB.sqls.length.should == 0 end it "should respect many_through_many association's :left_primary_key and :right_primary_key options" do @c1.send(:define_method, :yyy){values[:yyy]} @c1.dataset._fetch = {:id=>1, :yyy=>8} @c1.dataset.meta_def(:columns){[:id, :yyy]} @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :left_primary_key=>:yyy, :right_primary_key=>:tag_id a = @c1.eager(:tags).all a.should == [@c1.load(:id=>1, :yyy=>8)] DB.sqls.should == ['SELECT * FROM artists', 'SELECT tags.*, 
albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.tag_id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (8)))'] a.first.tags.should == [Tag.load(:tag_id=>2)] DB.sqls.length.should == 0 end it "should handle composite keys" do @c1.send(:define_method, :yyy){values[:yyy]} @c1.dataset._fetch = {:id=>1, :yyy=>8} @c1.dataset.meta_def(:columns){[:id, :yyy]} @c1.many_through_many :tags, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:h1, :h2], :left_primary_key=>[:id, :yyy] a = @c1.eager(:tags).all a.should == [@c1.load(:id=>1, :yyy=>8)] DB.sqls.should == ['SELECT * FROM artists', 'SELECT tags.*, albums_artists.b1 AS x_foreign_key_0_x, albums_artists.b2 AS x_foreign_key_1_x FROM tags INNER JOIN albums_tags ON ((albums_tags.g1 = tags.h1) AND (albums_tags.g2 = tags.h2)) INNER JOIN albums ON ((albums.e1 = albums_tags.f1) AND (albums.e2 = albums_tags.f2)) INNER JOIN albums_artists ON ((albums_artists.c1 = albums.d1) AND (albums_artists.c2 = albums.d2) AND ((albums_artists.b1, albums_artists.b2) IN ((1, 8))))'] a.first.tags.should == [Tag.load(:id=>2)] DB.sqls.length.should == 0 end it "should respect :after_load callbacks on associations when eager loading" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :after_load=>lambda{|o, as| o[:id] *= 2; as.each{|a| a[:id] *= 3}} a = @c1.eager(:tags).all a.should == [@c1.load(:id=>2)] DB.sqls.should == ['SELECT * FROM artists', 'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1)))'] a.first.tags.should == [Tag.load(:id=>6)] DB.sqls.length.should == 0 end it "should raise an error if called without a symbol or hash" do proc{@c1.eager_graph(Object.new)}.should raise_error(Sequel::Error) end it "should eagerly graph a single many_through_many association" do a = @c1.eager_graph(:tags).all a.should == [@c1.load(:id=>1)] DB.sqls.should == ['SELECT artists.id, tags.id AS tags_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags ON (tags.id = albums_tags.tag_id)'] a.first.tags.should == [Tag.load(:id=>2)] DB.sqls.length.should == 0 end it "should eagerly graph multiple associations in a single call" do a = @c1.eager_graph(:tags, :albums).all a.should == [@c1.load(:id=>1)] DB.sqls.should == ['SELECT artists.id, tags.id AS tags_id, albums_0.id AS albums_0_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags ON (tags.id = albums_tags.tag_id) LEFT OUTER JOIN albums_artists AS albums_artists_0 ON (albums_artists_0.artist_id = artists.id) LEFT OUTER JOIN albums AS albums_0 ON (albums_0.id = albums_artists_0.album_id)'] a = a.first a.tags.should == [Tag.load(:id=>2)] a.albums.should == [Album.load(:id=>3)] DB.sqls.length.should == 0 
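# Note that the eager_graph call above produces a single query: every
# association is folded into one statement, with repeated traversals of the
# same join tables given numbered aliases (albums_artists_0, albums_0) so the
# rows for each association stay distinguishable. A minimal usage sketch,
# assuming an Artist model backed by the artists dataset (the model name is
# illustrative, not part of this spec):
#
#   Artist.eager_graph(:tags, :albums).all  # one query built from LEFT OUTER JOINs
#   Artist.eager(:tags, :albums).all        # one query per association instead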
end it "should eagerly graph multiple associations in separate calls" do a = @c1.eager_graph(:tags).eager_graph(:albums).all a.should == [@c1.load(:id=>1)] DB.sqls.should == ['SELECT artists.id, tags.id AS tags_id, albums_0.id AS albums_0_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags ON (tags.id = albums_tags.tag_id) LEFT OUTER JOIN albums_artists AS albums_artists_0 ON (albums_artists_0.artist_id = artists.id) LEFT OUTER JOIN albums AS albums_0 ON (albums_0.id = albums_artists_0.album_id)'] a = a.first a.tags.should == [Tag.load(:id=>2)] a.albums.should == [Album.load(:id=>3)] DB.sqls.length.should == 0 end it "should allow cascading of eager graphing for associations of associated models" do a = @c1.eager_graph(:tags=>:tracks).all a.should == [@c1.load(:id=>1)] DB.sqls.should == ['SELECT artists.id, tags.id AS tags_id, tracks.id AS tracks_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags ON (tags.id = albums_tags.tag_id) LEFT OUTER JOIN albums_tags AS albums_tags_0 ON (albums_tags_0.tag_id = tags.id) LEFT OUTER JOIN albums AS albums_0 ON (albums_0.id = albums_tags_0.album_id) LEFT OUTER JOIN tracks ON (tracks.album_id = albums_0.id)'] a = a.first a.tags.should == [Tag.load(:id=>2)] a.tags.first.tracks.should == [Track.load(:id=>4)] DB.sqls.length.should == 0 end it "eager graphing should eliminate duplicates caused by cartesian products" do ds = @c1.eager_graph(:tags) # Assume artist has 2 albums each with 2 tags ds._fetch = [{:id=>1, :tags_id=>2}, {:id=>1, :tags_id=>3}, {:id=>1, :tags_id=>2}, {:id=>1, :tags_id=>3}] a = ds.all a.should == [@c1.load(:id=>1)] DB.sqls.should == ['SELECT artists.id, tags.id AS tags_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags ON (tags.id = albums_tags.tag_id)'] a.first.tags.should == [Tag.load(:id=>2), Tag.load(:id=>3)] DB.sqls.length.should == 0 end it "should eager graph multiple associations from the same table" do a = @c1.eager_graph(:tags, :other_tags).all a.should == [@c1.load(:id=>1)] DB.sqls.should == ['SELECT artists.id, tags.id AS tags_id, other_tags.id AS other_tags_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags ON (tags.id = albums_tags.tag_id) LEFT OUTER JOIN albums_artists AS albums_artists_0 ON (albums_artists_0.artist_id = artists.id) LEFT OUTER JOIN albums AS albums_0 ON (albums_0.id = albums_artists_0.album_id) LEFT OUTER JOIN albums_tags AS albums_tags_0 ON (albums_tags_0.album_id = albums_0.id) LEFT OUTER JOIN tags AS other_tags ON (other_tags.id = albums_tags_0.tag_id)'] a = a.first a.tags.should == [Tag.load(:id=>2)] a.other_tags.should == [Tag.load(:id=>9)] DB.sqls.length.should == 0 end it "should eager graph a self_referential association" do a = @c1.eager_graph(:tags, :artists).all a.should == [@c1.load(:id=>1)] DB.sqls.should == ['SELECT artists.id, tags.id AS tags_id, 
artists_0.id AS artists_0_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags ON (tags.id = albums_tags.tag_id) LEFT OUTER JOIN albums_artists AS albums_artists_0 ON (albums_artists_0.artist_id = artists.id) LEFT OUTER JOIN albums AS albums_0 ON (albums_0.id = albums_artists_0.album_id) LEFT OUTER JOIN albums_artists AS albums_artists_1 ON (albums_artists_1.album_id = albums_0.id) LEFT OUTER JOIN artists AS artists_0 ON (artists_0.id = albums_artists_1.artist_id)'] a = a.first a.tags.should == [Tag.load(:id=>2)] a.artists.should == [@c1.load(:id=>10)] DB.sqls.length.should == 0 end it "eager graphing should give you a plain hash when called without .all" do @c1.eager_graph(:tags, :artists).first.should == {:albums_0_id=>3, :artists_0_id=>10, :id=>1, :tags_id=>2} end it "should be able to use eager and eager_graph together" do a = @c1.eager_graph(:tags).eager(:albums).all a.should == [@c1.load(:id=>1)] DB.sqls.should == ['SELECT artists.id, tags.id AS tags_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags ON (tags.id = albums_tags.tag_id)', 'SELECT albums.*, albums_artists.artist_id AS x_foreign_key_x FROM albums INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1)))'] a = a.first a.tags.should == [Tag.load(:id=>2)] a.albums.should == [Album.load(:id=>3)] DB.sqls.length.should == 0 end it "should handle no associated records when eagerly graphing a single many_through_many association" do ds = @c1.eager_graph(:tags) ds._fetch = {:id=>1, :tags_id=>nil} a = ds.all a.should == [@c1.load(:id=>1)] DB.sqls.should == ['SELECT artists.id, tags.id AS tags_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags ON (tags.id = albums_tags.tag_id)'] a.first.tags.should == [] DB.sqls.length.should == 0 end it "should handle no associated records when eagerly graphing multiple many_through_many associations" do ds = @c1.eager_graph(:tags, :albums) ds._fetch = [{:id=>1, :tags_id=>nil, :albums_0_id=>3}, {:id=>1, :tags_id=>2, :albums_0_id=>nil}, {:id=>1, :tags_id=>5, :albums_0_id=>6}, {:id=>7, :tags_id=>nil, :albums_0_id=>nil}] a = ds.all a.should == [@c1.load(:id=>1), @c1.load(:id=>7)] DB.sqls.should == ['SELECT artists.id, tags.id AS tags_id, albums_0.id AS albums_0_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags ON (tags.id = albums_tags.tag_id) LEFT OUTER JOIN albums_artists AS albums_artists_0 ON (albums_artists_0.artist_id = artists.id) LEFT OUTER JOIN albums AS albums_0 ON (albums_0.id = albums_artists_0.album_id)'] a.first.tags.should == [Tag.load(:id=>2), Tag.load(:id=>5)] a.first.albums.should == [Album.load(:id=>3), Album.load(:id=>6)] a.last.tags.should == [] a.last.albums.should == [] DB.sqls.length.should == 0 end it "should handle missing associated records when cascading eager graphing for 
associations of associated models" do ds = @c1.eager_graph(:tags=>:tracks) ds._fetch = [{:id=>1, :tags_id=>2, :tracks_id=>4}, {:id=>1, :tags_id=>3, :tracks_id=>nil}, {:id=>2, :tags_id=>nil, :tracks_id=>nil}] a = ds.all a.should == [@c1.load(:id=>1), @c1.load(:id=>2)] DB.sqls.should == ['SELECT artists.id, tags.id AS tags_id, tracks.id AS tracks_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags ON (tags.id = albums_tags.tag_id) LEFT OUTER JOIN albums_tags AS albums_tags_0 ON (albums_tags_0.tag_id = tags.id) LEFT OUTER JOIN albums AS albums_0 ON (albums_0.id = albums_tags_0.album_id) LEFT OUTER JOIN tracks ON (tracks.album_id = albums_0.id)'] a.last.tags.should == [] a = a.first a.tags.should == [Tag.load(:id=>2), Tag.load(:id=>3)] a.tags.first.tracks.should == [Track.load(:id=>4)] a.tags.last.tracks.should == [] DB.sqls.length.should == 0 end it "eager graphing should respect :left_primary_key and :right_primary_key options" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :left_primary_key=>:yyy, :right_primary_key=>:tag_id @c1.dataset.meta_def(:columns){[:id, :yyy]} Tag.dataset.meta_def(:columns){[:id, :tag_id]} ds = @c1.eager_graph(:tags) ds._fetch = {:id=>1, :yyy=>8, :tags_id=>2, :tag_id=>4} a = ds.all a.should == [@c1.load(:id=>1, :yyy=>8)] DB.sqls.should == ['SELECT artists.id, artists.yyy, tags.id AS tags_id, tags.tag_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.yyy) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags ON (tags.tag_id = albums_tags.tag_id)'] a.first.tags.should == [Tag.load(:id=>2, :tag_id=>4)] DB.sqls.length.should == 0 end it "eager graphing should respect composite keys" do @c1.many_through_many :tags, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:id, :tag_id], :left_primary_key=>[:id, :yyy] @c1.dataset.meta_def(:columns){[:id, :yyy]} Tag.dataset.meta_def(:columns){[:id, :tag_id]} ds = @c1.eager_graph(:tags) ds._fetch = {:id=>1, :yyy=>8, :tags_id=>2, :tag_id=>4} a = ds.all a.should == [@c1.load(:id=>1, :yyy=>8)] DB.sqls.should == ['SELECT artists.id, artists.yyy, tags.id AS tags_id, tags.tag_id FROM artists LEFT OUTER JOIN albums_artists ON ((albums_artists.b1 = artists.id) AND (albums_artists.b2 = artists.yyy)) LEFT OUTER JOIN albums ON ((albums.d1 = albums_artists.c1) AND (albums.d2 = albums_artists.c2)) LEFT OUTER JOIN albums_tags ON ((albums_tags.f1 = albums.e1) AND (albums_tags.f2 = albums.e2)) LEFT OUTER JOIN tags ON ((tags.id = albums_tags.g1) AND (tags.tag_id = albums_tags.g2))'] a.first.tags.should == [Tag.load(:id=>2, :tag_id=>4)] DB.sqls.length.should == 0 end it "should respect the association's :graph_select option" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :graph_select=>:b ds = @c1.eager_graph(:tags) ds._fetch = {:id=>1, :b=>2} a = ds.all a.should == [@c1.load(:id=>1)] DB.sqls.should == ['SELECT artists.id, tags.b FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = 
albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags ON (tags.id = albums_tags.tag_id)'] a.first.tags.should == [Tag.load(:b=>2)] DB.sqls.length.should == 0 end it "should respect the association's :graph_join_type option" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :graph_join_type=>:inner @c1.eager_graph(:tags).sql.should == 'SELECT artists.id, tags.id AS tags_id FROM artists INNER JOIN albums_artists ON (albums_artists.artist_id = artists.id) INNER JOIN albums ON (albums.id = albums_artists.album_id) INNER JOIN albums_tags ON (albums_tags.album_id = albums.id) INNER JOIN tags ON (tags.id = albums_tags.tag_id)' end it "should respect the association's :join_type option on through" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], {:table=>:albums, :left=>:id, :right=>:id, :join_type=>:natural}, [:albums_tags, :album_id, :tag_id]], :graph_join_type=>:inner @c1.eager_graph(:tags).sql.should == 'SELECT artists.id, tags.id AS tags_id FROM artists INNER JOIN albums_artists ON (albums_artists.artist_id = artists.id) NATURAL JOIN albums ON (albums.id = albums_artists.album_id) INNER JOIN albums_tags ON (albums_tags.album_id = albums.id) INNER JOIN tags ON (tags.id = albums_tags.tag_id)' end it "should respect the association's :conditions option" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], {:table=>:albums, :left=>:id, :right=>:id}, [:albums_tags, :album_id, :tag_id]], :conditions=>{:a=>32} @c1.eager_graph(:tags).sql.should == 'SELECT artists.id, tags.id AS tags_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags ON ((tags.id = albums_tags.tag_id) AND (tags.a = 32))' end it "should respect the association's :graph_conditions option" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], {:table=>:albums, :left=>:id, :right=>:id}, [:albums_tags, :album_id, :tag_id]], :graph_conditions=>{:a=>42} @c1.eager_graph(:tags).sql.should == 'SELECT artists.id, tags.id AS tags_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags ON ((tags.id = albums_tags.tag_id) AND (tags.a = 42))' @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], {:table=>:albums, :left=>:id, :right=>:id}, [:albums_tags, :album_id, :tag_id]], :graph_conditions=>{:a=>42}, :conditions=>{:a=>32} @c1.eager_graph(:tags).sql.should == 'SELECT artists.id, tags.id AS tags_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags ON ((tags.id = albums_tags.tag_id) AND (tags.a = 42))' end it "should respect the association's :conditions option on through" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], {:table=>:albums, :left=>:id, :right=>:id, :conditions=>{:a=>42}}, [:albums_tags, :album_id, :tag_id]] @c1.eager_graph(:tags).sql.should == 'SELECT artists.id, tags.id AS tags_id FROM artists LEFT OUTER JOIN albums_artists ON 
(albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON ((albums.id = albums_artists.album_id) AND (albums.a = 42)) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags ON (tags.id = albums_tags.tag_id)' end it "should respect the association's :graph_block option" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], {:table=>:albums, :left=>:id, :right=>:id}, [:albums_tags, :album_id, :tag_id]], :graph_block=>proc{|ja,lja,js| {Sequel.qualify(ja, :active)=>true}} @c1.eager_graph(:tags).sql.should == 'SELECT artists.id, tags.id AS tags_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags ON ((tags.id = albums_tags.tag_id) AND (tags.active IS TRUE))' end it "should respect the association's :block option on through" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], {:table=>:albums, :left=>:id, :right=>:id, :block=>proc{|ja,lja,js| {Sequel.qualify(ja, :active)=>true}}}, [:albums_tags, :album_id, :tag_id]] @c1.eager_graph(:tags).sql.should == 'SELECT artists.id, tags.id AS tags_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON ((albums.id = albums_artists.album_id) AND (albums.active IS TRUE)) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags ON (tags.id = albums_tags.tag_id)' end it "should respect the association's :graph_only_conditions option" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], {:table=>:albums, :left=>:id, :right=>:id}, [:albums_tags, :album_id, :tag_id]], :graph_only_conditions=>{:a=>32} @c1.eager_graph(:tags).sql.should == 'SELECT artists.id, tags.id AS tags_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags ON (tags.a = 32)' end it "should respect the association's :only_conditions option on through" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], {:table=>:albums, :left=>:id, :right=>:id, :only_conditions=>{:a=>42}}, [:albums_tags, :album_id, :tag_id]] @c1.eager_graph(:tags).sql.should == 'SELECT artists.id, tags.id AS tags_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.a = 42) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags ON (tags.id = albums_tags.tag_id)' end it "should create unique table aliases for all associations" do @c1.eager_graph(:artists=>{:artists=>:artists}).sql.should == "SELECT artists.id, artists_0.id AS artists_0_id, artists_1.id AS artists_1_id, artists_2.id AS artists_2_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_artists AS albums_artists_0 ON (albums_artists_0.album_id = albums.id) LEFT OUTER JOIN artists AS artists_0 ON (artists_0.id = albums_artists_0.artist_id) LEFT OUTER JOIN albums_artists AS albums_artists_1 ON (albums_artists_1.artist_id = artists_0.id) LEFT OUTER JOIN albums AS albums_0 ON (albums_0.id = albums_artists_1.album_id) LEFT OUTER JOIN albums_artists AS albums_artists_2 ON 
(albums_artists_2.album_id = albums_0.id) LEFT OUTER JOIN artists AS artists_1 ON (artists_1.id = albums_artists_2.artist_id) LEFT OUTER JOIN albums_artists AS albums_artists_3 ON (albums_artists_3.artist_id = artists_1.id) LEFT OUTER JOIN albums AS albums_1 ON (albums_1.id = albums_artists_3.album_id) LEFT OUTER JOIN albums_artists AS albums_artists_4 ON (albums_artists_4.album_id = albums_1.id) LEFT OUTER JOIN artists AS artists_2 ON (artists_2.id = albums_artists_4.artist_id)" end it "should respect the association's :order" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], {:table=>:albums, :left=>:id, :right=>:id}, [:albums_tags, :album_id, :tag_id]], :order=>[:blah1, :blah2] @c1.order(:artists__blah2, :artists__blah3).eager_graph(:tags).sql.should == 'SELECT artists.id, tags.id AS tags_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags ON (tags.id = albums_tags.tag_id) ORDER BY artists.blah2, artists.blah3, tags.blah1, tags.blah2' end it "should only qualify unqualified symbols, identifiers, or ordered versions in association's :order" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], {:table=>:albums, :left=>:id, :right=>:id}, [:albums_tags, :album_id, :tag_id]], :order=>[Sequel.identifier(:blah__id), Sequel.identifier(:blah__id).desc, Sequel.desc(:blah__id), :blah__id, :album_id, Sequel.desc(:album_id), 1, Sequel.lit('RANDOM()'), Sequel.qualify(:b, :a)] @c1.order(:artists__blah2, :artists__blah3).eager_graph(:tags).sql.should == 'SELECT artists.id, tags.id AS tags_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags ON (tags.id = albums_tags.tag_id) ORDER BY artists.blah2, artists.blah3, tags.blah__id, tags.blah__id DESC, blah.id DESC, blah.id, tags.album_id, tags.album_id DESC, 1, RANDOM(), b.a' end it "should not respect the association's :order if :order_eager_graph is false" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], {:table=>:albums, :left=>:id, :right=>:id}, [:albums_tags, :album_id, :tag_id]], :order=>[:blah1, :blah2], :order_eager_graph=>false @c1.order(:artists__blah2, :artists__blah3).eager_graph(:tags).sql.should == 'SELECT artists.id, tags.id AS tags_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags ON (tags.id = albums_tags.tag_id) ORDER BY artists.blah2, artists.blah3' end it "should add the associations' :order for multiple associations" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], {:table=>:albums, :left=>:id, :right=>:id}, [:albums_tags, :album_id, :tag_id]], :order=>[:blah1, :blah2] @c1.many_through_many :albums, [[:albums_artists, :artist_id, :album_id]], :order=>[:blah3, :blah4] @c1.eager_graph(:tags, :albums).sql.should == 'SELECT artists.id, tags.id AS tags_id, albums_0.id AS albums_0_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id =
albums.id) LEFT OUTER JOIN tags ON (tags.id = albums_tags.tag_id) LEFT OUTER JOIN albums_artists AS albums_artists_0 ON (albums_artists_0.artist_id = artists.id) LEFT OUTER JOIN albums AS albums_0 ON (albums_0.id = albums_artists_0.album_id) ORDER BY tags.blah1, tags.blah2, albums_0.blah3, albums_0.blah4' end it "should add the association's :order for cascading associations" do @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], {:table=>:albums, :left=>:id, :right=>:id}, [:albums_tags, :album_id, :tag_id]], :order=>[:blah1, :blah2] Tag.many_through_many :tracks, [[:albums_tags, :tag_id, :album_id], [:albums, :id, :id]], :right_primary_key=>:album_id, :order=>[:blah3, :blah4] @c1.eager_graph(:tags=>:tracks).sql.should == 'SELECT artists.id, tags.id AS tags_id, tracks.id AS tracks_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags ON (tags.id = albums_tags.tag_id) LEFT OUTER JOIN albums_tags AS albums_tags_0 ON (albums_tags_0.tag_id = tags.id) LEFT OUTER JOIN albums AS albums_0 ON (albums_0.id = albums_tags_0.album_id) LEFT OUTER JOIN tracks ON (tracks.album_id = albums_0.id) ORDER BY tags.blah1, tags.blah2, tracks.blah3, tracks.blah4' end it "should use the correct qualifier when graphing multiple tables with extra conditions" do @c1.many_through_many :tags, [{:table=>:albums_artists, :left=>:artist_id, :right=>:album_id, :conditions=>{:a=>:b}}, {:table=>:albums, :left=>:id, :right=>:id}, [:albums_tags, :album_id, :tag_id]] @c1.many_through_many :albums, [{:table=>:albums_artists, :left=>:artist_id, :right=>:album_id, :conditions=>{:c=>:d}}] @c1.eager_graph(:tags, :albums).sql.should == 'SELECT artists.id, tags.id AS tags_id, albums_0.id AS albums_0_id FROM artists LEFT OUTER JOIN albums_artists ON ((albums_artists.artist_id = artists.id) AND (albums_artists.a = artists.b)) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags ON (tags.id = albums_tags.tag_id) LEFT OUTER JOIN albums_artists AS albums_artists_0 ON ((albums_artists_0.artist_id = artists.id) AND (albums_artists_0.c = artists.d)) LEFT OUTER JOIN albums AS albums_0 ON (albums_0.id = albums_artists_0.album_id)' end end describe "many_through_many associations with non-column expression keys" do before do @db = Sequel.mock(:fetch=>{:id=>1, :object_ids=>[2]}) @Foo = Class.new(Sequel::Model(@db[:foos])) @Foo.columns :id, :object_ids @Foo.plugin :many_through_many m = Module.new{def obj_id; object_ids[0]; end} @Foo.include m @Foo.many_through_many :foos, [ [:f, Sequel.subscript(:l, 0), Sequel.subscript(:r, 0)], [:f, Sequel.subscript(:l, 1), Sequel.subscript(:r, 1)] ], :class=>@Foo, :left_primary_key=>:obj_id, :left_primary_key_column=>Sequel.subscript(:object_ids, 0), :right_primary_key=>Sequel.subscript(:object_ids, 0), :right_primary_key_method=>:obj_id @foo = @Foo.load(:id=>1, :object_ids=>[2]) @db.sqls end it "should have working regular association methods" do @Foo.first.foos.should == [@foo] @db.sqls.should == ["SELECT * FROM foos LIMIT 1", "SELECT foos.* FROM foos INNER JOIN f ON (f.r[1] = foos.object_ids[0]) INNER JOIN f AS f_0 ON ((f_0.r[0] = f.l[1]) AND (f_0.l[0] = 2))"] end it "should have working eager loading methods" do @db.fetch = [[{:id=>1, :object_ids=>[2]}], [{:id=>1, :object_ids=>[2], :x_foreign_key_x=>2}]] 
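# The nested arrays assigned to @db.fetch above are served by the mock
# database one result set per query: the first inner array answers the
# initial SELECT * FROM foos, and the second, which carries the synthetic
# :x_foreign_key_x column, answers the eager load query that follows, letting
# Sequel match the associated rows back to their parent objects.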
@Foo.eager(:foos).all.map{|o| [o, o.foos]}.should == [[@foo, [@foo]]] @db.sqls.should == ["SELECT * FROM foos", "SELECT foos.*, f_0.l[0] AS x_foreign_key_x FROM foos INNER JOIN f ON (f.r[1] = foos.object_ids[0]) INNER JOIN f AS f_0 ON ((f_0.r[0] = f.l[1]) AND (f_0.l[0] IN (2)))"] end it "should have working eager graphing methods" do @db.fetch = {:id=>1, :object_ids=>[2], :foos_0_id=>1, :foos_0_object_ids=>[2]} @Foo.eager_graph(:foos).all.map{|o| [o, o.foos]}.should == [[@foo, [@foo]]] @db.sqls.should == ["SELECT foos.id, foos.object_ids, foos_0.id AS foos_0_id, foos_0.object_ids AS foos_0_object_ids FROM foos LEFT OUTER JOIN f ON (f.l[0] = foos.object_ids[0]) LEFT OUTER JOIN f AS f_0 ON (f_0.l[1] = f.r[0]) LEFT OUTER JOIN foos AS foos_0 ON (foos_0.object_ids[0] = f_0.r[1])"] end it "should have working filter by associations with model instances" do @Foo.first(:foos=>@foo).should == @foo @db.sqls.should == ["SELECT * FROM foos WHERE (foos.object_ids[0] IN (SELECT f.l[0] FROM f INNER JOIN f AS f_0 ON (f_0.l[1] = f.r[0]) WHERE ((f_0.r[1] = 2) AND (f.l[0] IS NOT NULL)))) LIMIT 1"] end it "should have working filter by associations with model datasets" do @Foo.first(:foos=>@Foo.where(:id=>@foo.id)).should == @foo @db.sqls.should == ["SELECT * FROM foos WHERE (foos.object_ids[0] IN (SELECT f.l[0] FROM f INNER JOIN f AS f_0 ON (f_0.l[1] = f.r[0]) WHERE ((f_0.r[1] IN (SELECT foos.object_ids[0] FROM foos WHERE ((id = 1) AND (foos.object_ids[0] IS NOT NULL)))) AND (f.l[0] IS NOT NULL)))) LIMIT 1"] end end
ruby-sequel-4.1.1/spec/extensions/meta_def_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "Sequel::Metaprogramming" do specify "should add meta_def method to Database, Dataset, and Model classes and instances" do Sequel::Database.meta_def(:foo){1} Sequel::Database.foo.should == 1 Sequel::Dataset.meta_def(:foo){2} Sequel::Dataset.foo.should == 2 Sequel::Model.meta_def(:foo){3} Sequel::Model.foo.should == 3 o = Sequel::Database.new o.meta_def(:foo){4} o.foo.should == 4 o = o[:a] o.meta_def(:foo){5} o.foo.should == 5 o = Sequel::Model.new o.meta_def(:foo){6} o.foo.should == 6 end end
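# meta_def, provided by Sequel::Metaprogramming, defines a method on its
# receiver's singleton class, which is why the spec above can call it both on
# the Database, Dataset, and Model classes and on individual instances. A
# minimal sketch of the idea (an illustration, not necessarily Sequel's exact
# implementation):
#
#   module Metaprogramming
#     def meta_def(name, &block)
#       # defining the method on the singleton class affects only this object
#       (class << self; self; end).send(:define_method, name, &block)
#     end
#   end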
ruby-sequel-4.1.1/spec/extensions/migration_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper') Sequel.extension :migration describe "Migration.descendants" do before do Sequel::Migration.descendants.clear end specify "should include Migration subclasses" do @class = Class.new(Sequel::Migration) Sequel::Migration.descendants.should == [@class] end specify "should include Migration subclasses in order of creation" do @c1 = Class.new(Sequel::Migration) @c2 = Class.new(Sequel::Migration) @c3 = Class.new(Sequel::Migration) Sequel::Migration.descendants.should == [@c1, @c2, @c3] end specify "should include SimpleMigration instances created by migration DSL" do i1 = Sequel.migration{} i2 = Sequel.migration{} i3 = Sequel.migration{} Sequel::Migration.descendants.should == [i1, i2, i3] end end describe "Migration.apply" do before do @c = Class.new do define_method(:one) {|x| [1111, x]} define_method(:two) {|x| [2222, x]} end @db = @c.new end specify "should raise for an invalid direction" do proc {Sequel::Migration.apply(@db, :hahaha)}.should raise_error(ArgumentError) end specify "should apply the up and down directions correctly" do m = Class.new(Sequel::Migration) do define_method(:up) {one(3333)} define_method(:down) {two(4444)} end m.apply(@db, :up).should == [1111, 3333] m.apply(@db, :down).should == [2222, 4444] end specify "should have default up and down actions that do nothing" do m = Class.new(Sequel::Migration) m.apply(@db, :up).should == nil m.apply(@db, :down).should == nil end specify "should respond to the methods the database responds to" do m = Sequel::Migration.new(Sequel.mock) m.respond_to?(:foo).should be_false m.respond_to?(:execute).should be_true end if RUBY_VERSION >= '1.9' end describe "SimpleMigration#apply" do before do @c = Class.new do define_method(:one) {|x| [1111, x]} define_method(:two) {|x| [2222, x]} end @db = @c.new end specify "should raise for an invalid direction" do proc {Sequel.migration{}.apply(@db, :hahaha)}.should raise_error(ArgumentError) end specify "should apply the up and down directions correctly" do m = Sequel.migration do up{one(3333)} down{two(4444)} end m.apply(@db, :up).should == [1111, 3333] m.apply(@db, :down).should == [2222, 4444] end specify "should have default up and down actions that do nothing" do m = Sequel.migration{} m.apply(@db, :up).should == nil m.apply(@db, :down).should == nil end end describe "Reversible Migrations with Sequel.migration{change{}}" do before do @c = Class.new do self::AT = Class.new do attr_reader :actions def initialize(&block) @actions = [] instance_eval(&block) end def method_missing(*args) @actions << args end self end attr_reader :actions def initialize
@actions = [] end def method_missing(*args) @actions << args end def alter_table(*args, &block) @actions << [:alter_table, self.class::AT.new(&block).actions] end end @db = @c.new @p = Proc.new do create_table(:a){Integer :a} add_column :a, :b, String add_index :a, :b rename_column :a, :b, :c rename_table :a, :b alter_table(:b) do add_column :d, String add_constraint :blah, 'd IS NOT NULL' add_foreign_key :e, :b add_foreign_key [:e], :b, :name=>'e_fk' add_foreign_key [:e, :a], :b add_primary_key :f, :b add_index :e, :name=>'e_n' add_full_text_index :e, :name=>'e_ft' add_spatial_index :e, :name=>'e_s' rename_column :e, :g end create_view(:c, 'SELECT * FROM b') create_join_table(:cat_id=>:cats, :dog_id=>:dogs) end end specify "should apply up with normal actions in normal order" do p = @p Sequel.migration{change(&p)}.apply(@db, :up) @db.actions.should == [[:create_table, :a], [:add_column, :a, :b, String], [:add_index, :a, :b], [:rename_column, :a, :b, :c], [:rename_table, :a, :b], [:alter_table, [ [:add_column, :d, String], [:add_constraint, :blah, "d IS NOT NULL"], [:add_foreign_key, :e, :b], [:add_foreign_key, [:e], :b, {:name=>"e_fk"}], [:add_foreign_key, [:e, :a], :b], [:add_primary_key, :f, :b], [:add_index, :e, {:name=>"e_n"}], [:add_full_text_index, :e, {:name=>"e_ft"}], [:add_spatial_index, :e, {:name=>"e_s"}], [:rename_column, :e, :g]] ], [:create_view, :c, "SELECT * FROM b"], [:create_join_table, {:cat_id=>:cats, :dog_id=>:dogs}]] end specify "should execute down with reversing actions in reverse order" do p = @p Sequel.migration{change(&p)}.apply(@db, :down) @db.actions.should == [ [:drop_join_table, {:cat_id=>:cats, :dog_id=>:dogs}], [:drop_view, :c], [:alter_table, [ [:rename_column, :g, :e], [:drop_index, :e, {:name=>"e_s"}], [:drop_index, :e, {:name=>"e_ft"}], [:drop_index, :e, {:name=>"e_n"}], [:drop_column, :f], [:drop_foreign_key, [:e, :a]], [:drop_foreign_key, [:e], {:name=>"e_fk"}], [:drop_foreign_key, :e], [:drop_constraint, :blah], [:drop_column, :d]] ], [:rename_table, :b, :a], [:rename_column, :a, :c, :b], [:drop_index, :a, :b], [:drop_column, :a, :b], [:drop_table, :a]] end specify "should raise in the down direction if migration uses unsupported method" do m = Sequel.migration{change{run 'SQL'}} proc{m.apply(@db, :up)}.should_not raise_error proc{m.apply(@db, :down)}.should raise_error(Sequel::Error) end specify "should raise in the down direction if migration uses add_primary_key with an array" do m = Sequel.migration{change{alter_table(:a){add_primary_key [:b]}}} proc{m.apply(@db, :up)}.should_not raise_error proc{m.apply(@db, :down)}.should raise_error(Sequel::Error) end specify "should raise in the down direction if migration uses add_foreign_key with an array" do m = Sequel.migration{change{alter_table(:a){add_foreign_key [:b]}}} proc{m.apply(@db, :up)}.should_not raise_error proc{m.apply(@db, :down)}.should raise_error(Sequel::Error) end end describe "Sequel::IntegerMigrator" do before do dbc = Class.new(Sequel::Mock::Database) do attr_reader :drops, :tables_created, :columns_created, :versions def initialize(*args) super @drops = [] @tables_created = [] @columns_created = [] @versions = Hash.new{|h,k| h[k.to_sym]} end def version; versions.values.first || 0; end def creates; @tables_created.map{|x| y = x.to_s; y !~ /\Asm(\d+)/; $1.to_i if $1}.compact; end def drop_table(*a); super; @drops.concat(a.map{|x| y = x.to_s; y !~ /\Asm(\d+)/; $1.to_i if $1}.compact); end def create_table(name, opts={}, &block) super @columns_created << / \(?(\w+) 
integer.*\)?\z/.match(@sqls.last)[1].to_sym @tables_created << name.to_sym end def dataset ds = super ds.extend(Module.new do def count; 1; end def columns; db.columns_created end def insert(h); db.versions.merge!(h); db.run insert_sql(h) end def update(h); db.versions.merge!(h); db.run update_sql(h) end def fetch_rows(sql); db.execute(sql); yield(db.versions) unless db.versions.empty? end end) ds end def table_exists?(name) @tables_created.include?(name.to_sym) end end @db = dbc.new @dirname = "spec/files/integer_migrations" end after do Object.send(:remove_const, "CreateSessions") if Object.const_defined?("CreateSessions") end specify "should raise an error if there is a missing integer migration version" do proc{Sequel::Migrator.apply(@db, "spec/files/missing_integer_migrations")}.should raise_error(Sequel::Migrator::Error) end specify "should not raise an error if there is a missing integer migration version and allow_missing_migration_files is true" do proc{Sequel::Migrator.run(@db, "spec/files/missing_integer_migrations", :allow_missing_migration_files => true)}.should_not raise_error end specify "should raise an error if there is a duplicate integer migration version" do proc{Sequel::Migrator.apply(@db, "spec/files/duplicate_integer_migrations")}.should raise_error(Sequel::Migrator::Error) end specify "should add a column name if it doesn't already exist in the schema_info table" do @db.create_table(:schema_info){Integer :v} @db.should_receive(:alter_table).with('schema_info') Sequel::Migrator.apply(@db, @dirname) end specify "should automatically create the schema_info table with the version column" do @db.table_exists?(:schema_info).should be_false Sequel::Migrator.run(@db, @dirname, :target=>0) @db.table_exists?(:schema_info).should be_true @db.dataset.columns.should == [:version] end specify "should allow specifying the table and columns" do @db.table_exists?(:si).should be_false Sequel::Migrator.run(@db, @dirname, :target=>0, :table=>:si, :column=>:sic) @db.table_exists?(:si).should be_true @db.dataset.columns.should == [:sic] end specify "should apply migrations correctly in the up direction if no target is given" do Sequel::Migrator.apply(@db, @dirname) @db.creates.should == [1111, 2222, 3333] @db.version.should == 3 @db.sqls.map{|x| x =~ /\AUPDATE.*(\d+)/ ? $1.to_i : nil}.compact.should == [1, 2, 3] end specify "should be able to tell whether there are outstanding migrations" do Sequel::Migrator.is_current?(@db, @dirname).should be_false Sequel::Migrator.apply(@db, @dirname) Sequel::Migrator.is_current?(@db, @dirname).should be_true end specify "should have #check_current raise an exception if the migrator is not current" do proc{Sequel::Migrator.check_current(@db, @dirname)}.should raise_error(Sequel::Migrator::NotCurrentError) Sequel::Migrator.apply(@db, @dirname) proc{Sequel::Migrator.check_current(@db, @dirname)}.should_not raise_error end specify "should apply migrations correctly in the up direction with target" do Sequel::Migrator.apply(@db, @dirname, 2) @db.creates.should == [1111, 2222] @db.version.should == 2 @db.sqls.map{|x| x =~ /\AUPDATE.*(\d+)/ ? $1.to_i : nil}.compact.should == [1, 2] end specify "should apply migrations correctly in the up direction with target and existing" do Sequel::Migrator.apply(@db, @dirname, 2, 1) @db.creates.should == [2222] @db.version.should == 2 @db.sqls.map{|x| x =~ /\AUPDATE.*(\d+)/ ?
$1.to_i : nil}.compact.should == [2] end specify "should apply migrations correctly in the down direction with target" do @db.create_table(:schema_info){Integer :version, :default=>0} @db[:schema_info].insert(:version=>3) @db.version.should == 3 Sequel::Migrator.apply(@db, @dirname, 0) @db.drops.should == [3333, 2222, 1111] @db.version.should == 0 @db.sqls.map{|x| x =~ /\AUPDATE.*(\d+)/ ? $1.to_i : nil}.compact.should == [2, 1, 0] end specify "should apply migrations correctly in the down direction with target and existing" do Sequel::Migrator.apply(@db, @dirname, 1, 2) @db.drops.should == [2222] @db.version.should == 1 @db.sqls.map{|x| x =~ /\AUPDATE.*(\d+)/ ? $1.to_i : nil}.compact.should == [1] end specify "should return the target version" do Sequel::Migrator.apply(@db, @dirname, 3, 2).should == 3 Sequel::Migrator.apply(@db, @dirname, 0).should == 0 Sequel::Migrator.apply(@db, @dirname).should == 3 end specify "should use IntegerMigrator if IntegerMigrator.apply is called, even for timestamped migration directory" do proc{Sequel::IntegerMigrator.apply(@db, "spec/files/timestamped_migrations")}.should raise_error(Sequel::Migrator::Error) end specify "should not use transactions by default" do Sequel::Migrator.apply(@db, "spec/files/transaction_unspecified_migrations") @db.sqls.should == ["CREATE TABLE schema_info (version integer DEFAULT 0 NOT NULL)", "SELECT 1 AS one FROM schema_info LIMIT 1", "INSERT INTO schema_info (version) VALUES (0)", "SELECT version FROM schema_info LIMIT 1", "CREATE TABLE sm11111 (smc1 integer)", "UPDATE schema_info SET version = 1", "CREATE TABLE sm (smc1 integer)", "UPDATE schema_info SET version = 2"] end specify "should use transactions by default if the database supports transactional ddl" do @db.meta_def(:supports_transactional_ddl?){true} Sequel::Migrator.apply(@db, "spec/files/transaction_unspecified_migrations") @db.sqls.should == ["CREATE TABLE schema_info (version integer DEFAULT 0 NOT NULL)", "SELECT 1 AS one FROM schema_info LIMIT 1", "INSERT INTO schema_info (version) VALUES (0)", "SELECT version FROM schema_info LIMIT 1", "BEGIN", "CREATE TABLE sm11111 (smc1 integer)", "UPDATE schema_info SET version = 1", "COMMIT", "BEGIN", "CREATE TABLE sm (smc1 integer)", "UPDATE schema_info SET version = 2", "COMMIT"] end specify "should respect transaction use on a per migration basis" do @db.meta_def(:supports_transactional_ddl?){true} Sequel::Migrator.apply(@db, "spec/files/transaction_specified_migrations") @db.sqls.should == ["CREATE TABLE schema_info (version integer DEFAULT 0 NOT NULL)", "SELECT 1 AS one FROM schema_info LIMIT 1", "INSERT INTO schema_info (version) VALUES (0)", "SELECT version FROM schema_info LIMIT 1", "BEGIN", "CREATE TABLE sm11111 (smc1 integer)", "UPDATE schema_info SET version = 1", "COMMIT", "CREATE TABLE sm (smc1 integer)", "UPDATE schema_info SET version = 2"] end specify "should force transactions if enabled in the migrator" do Sequel::Migrator.run(@db, "spec/files/transaction_specified_migrations", :use_transactions=>true) @db.sqls.should == ["CREATE TABLE schema_info (version integer DEFAULT 0 NOT NULL)", "SELECT 1 AS one FROM schema_info LIMIT 1", "INSERT INTO schema_info (version) VALUES (0)", "SELECT version FROM schema_info LIMIT 1", "BEGIN", "CREATE TABLE sm11111 (smc1 integer)", "UPDATE schema_info SET version = 1", "COMMIT", "BEGIN", "CREATE TABLE sm (smc1 integer)", "UPDATE schema_info SET version = 2", "COMMIT"] end specify "should not use transactions if disabled in the migrator" do Sequel::Migrator.run(@db,
"spec/files/transaction_unspecified_migrations", :use_transactions=>false) @db.sqls.should == ["CREATE TABLE schema_info (version integer DEFAULT 0 NOT NULL)", "SELECT 1 AS one FROM schema_info LIMIT 1", "INSERT INTO schema_info (version) VALUES (0)", "SELECT version FROM schema_info LIMIT 1", "CREATE TABLE sm11111 (smc1 integer)", "UPDATE schema_info SET version = 1", "CREATE TABLE sm (smc1 integer)", "UPDATE schema_info SET version = 2"] end end describe "Sequel::TimestampMigrator" do before do sequel_migration_version = 0 @dsc = dsc = Class.new(Sequel::Mock::Dataset) do self::FILES =[] define_method(:sequel_migration_version){sequel_migration_version} define_method(:sequel_migration_version=){|v| sequel_migration_version = v} def columns super case opts[:from].first when :schema_info, 'schema_info' [:version] when :schema_migrations, 'schema_migrations' [:filename] when :sm, 'sm' [:fn] end end def fetch_rows(sql) super case opts[:from].first when :schema_info, 'schema_info' yield({:version=>sequel_migration_version}) when :schema_migrations, 'schema_migrations' self.class::FILES.sort.each{|f| yield(:filename=>f)} when :sm, 'sm' self.class::FILES.sort.each{|f| yield(:fn=>f)} end end def insert(h={}) super case opts[:from].first when :schema_info, 'schema_info' self.sequel_migration_version = h.values.first when :schema_migrations, :sm, 'schema_migrations', 'sm' self.class::FILES << h.values.first end end def update(h={}) super case opts[:from].first when :schema_info, 'schema_info' self.sequel_migration_version = h.values.first end end def delete super case opts[:from].first when :schema_migrations, :sm, 'schema_migrations', 'sm' self.class::FILES.delete(opts[:where].args.last) end end end dbc = Class.new(Sequel::Mock::Database) do self::Tables = tables= {} define_method(:dataset){|*a| dsc.new(self, *a)} def create_table(name, *args, &block) super self.class::Tables[name.to_sym] = true end define_method(:drop_table){|*names| super(*names); names.each{|n| tables.delete(n.to_sym)}} define_method(:table_exists?){|name| super(name); tables.has_key?(name.to_sym)} end @db = dbc.new @m = Sequel::Migrator end after do Object.send(:remove_const, "CreateSessions") if Object.const_defined?("CreateSessions") Object.send(:remove_const, "CreateArtists") if Object.const_defined?("CreateArtists") Object.send(:remove_const, "CreateAlbums") if Object.const_defined?("CreateAlbums") end specify "should handle migrating up or down all the way" do @dir = 'spec/files/timestamped_migrations' @m.apply(@db, @dir) [:schema_migrations, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_true} @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253851_create_nodes.rb 1273253853_3_create_users.rb' @m.apply(@db, @dir, 0) [:sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_false} @db[:schema_migrations].select_order_map(:filename).should == [] end specify "should handle migrating up or down to specific timestamps" do @dir = 'spec/files/timestamped_migrations' @m.apply(@db, @dir, 1273253851) [:schema_migrations, :sm1111, :sm2222].each{|n| @db.table_exists?(n).should be_true} @db.table_exists?(:sm3333).should be_false @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253851_create_nodes.rb' @m.apply(@db, @dir, 1273253849) [:sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_false} @db.table_exists?(:sm1111).should be_true @db[:schema_migrations].select_order_map(:filename).should == 
%w'1273253849_create_sessions.rb' end specify "should not be current when there are migrations to apply" do @dir = 'spec/files/timestamped_migrations' @m.apply(@db, @dir) @m.is_current?(@db, @dir).should be_true @dir = 'spec/files/interleaved_timestamped_migrations' @m.is_current?(@db, @dir).should be_false end specify "should raise an exception if the migrator is not current" do @dir = 'spec/files/timestamped_migrations' @m.apply(@db, @dir) proc{@m.check_current(@db, @dir)}.should_not raise_error @dir = 'spec/files/interleaved_timestamped_migrations' proc{@m.check_current(@db, @dir)}.should raise_error(Sequel::Migrator::NotCurrentError) end specify "should apply all missing files when migrating up" do @dir = 'spec/files/timestamped_migrations' @m.apply(@db, @dir) @dir = 'spec/files/interleaved_timestamped_migrations' @m.apply(@db, @dir) [:schema_migrations, :sm1111, :sm1122, :sm2222, :sm2233, :sm3333].each{|n| @db.table_exists?(n).should be_true} @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253850_create_artists.rb 1273253851_create_nodes.rb 1273253852_create_albums.rb 1273253853_3_create_users.rb' end specify "should not apply down action to migrations where up action hasn't been applied" do @dir = 'spec/files/timestamped_migrations' @m.apply(@db, @dir) @dir = 'spec/files/interleaved_timestamped_migrations' @m.apply(@db, @dir, 0) [:sm1111, :sm1122, :sm2222, :sm2233, :sm3333].each{|n| @db.table_exists?(n).should be_false} @db[:schema_migrations].select_order_map(:filename).should == [] end specify "should handle updating to a specific timestamp when interleaving migrations" do @dir = 'spec/files/timestamped_migrations' @m.apply(@db, @dir) @dir = 'spec/files/interleaved_timestamped_migrations' @m.apply(@db, @dir, 1273253851) [:schema_migrations, :sm1111, :sm1122, :sm2222].each{|n| @db.table_exists?(n).should be_true} [:sm2233, :sm3333].each{|n| @db.table_exists?(n).should be_false} @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253850_create_artists.rb 1273253851_create_nodes.rb' end specify "should correctly update schema_migrations table when an error occurs when migrating up or down" do @dir = 'spec/files/bad_timestamped_migrations' proc{@m.apply(@db, @dir)}.should raise_error [:schema_migrations, :sm1111, :sm2222].each{|n| @db.table_exists?(n).should be_true} @db.table_exists?(:sm3333).should be_false @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253851_create_nodes.rb' proc{@m.apply(@db, @dir, 0)}.should raise_error [:sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_false} @db.table_exists?(:sm1111).should be_true @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb' end specify "should handle multiple migrations with the same timestamp correctly" do @dir = 'spec/files/duplicate_timestamped_migrations' @m.apply(@db, @dir) [:schema_migrations, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_true} @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253853_create_nodes.rb 1273253853_create_users.rb' @m.apply(@db, @dir, 1273253853) [:sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_true} @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253853_create_nodes.rb 1273253853_create_users.rb' @m.apply(@db, @dir, 1273253849) [:sm1111].each{|n| 
@db.table_exists?(n).should be_true} [:sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_false} @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb' @m.apply(@db, @dir, 1273253848) [:sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_false} @db[:schema_migrations].select_order_map(:filename).should == [] end specify "should convert schema_info table to schema_migrations table" do @dir = 'spec/files/integer_migrations' @m.apply(@db, @dir) [:schema_info, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_true} [:schema_migrations, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should be_false} @dir = 'spec/files/convert_to_timestamp_migrations' @m.apply(@db, @dir) [:schema_info, :sm1111, :sm2222, :sm3333, :schema_migrations, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should be_true} @db[:schema_migrations].select_order_map(:filename).should == %w'001_create_sessions.rb 002_create_nodes.rb 003_3_create_users.rb 1273253850_create_artists.rb 1273253852_create_albums.rb' @m.apply(@db, @dir, 4) [:schema_info, :schema_migrations, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_true} [:sm1122, :sm2233].each{|n| @db.table_exists?(n).should be_false} @db[:schema_migrations].select_order_map(:filename).should == %w'001_create_sessions.rb 002_create_nodes.rb 003_3_create_users.rb' @m.apply(@db, @dir, 0) [:schema_info, :schema_migrations].each{|n| @db.table_exists?(n).should be_true} [:sm1111, :sm2222, :sm3333, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should be_false} @db[:schema_migrations].select_order_map(:filename).should == [] end specify "should handle unapplied migrations when migrating schema_info table to schema_migrations table" do @dir = 'spec/files/integer_migrations' @m.apply(@db, @dir, 2) [:schema_info, :sm1111, :sm2222].each{|n| @db.table_exists?(n).should be_true} [:schema_migrations, :sm3333, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should be_false} @dir = 'spec/files/convert_to_timestamp_migrations' @m.apply(@db, @dir, 1273253850) [:schema_info, :sm1111, :sm2222, :sm3333, :schema_migrations, :sm1122].each{|n| @db.table_exists?(n).should be_true} [:sm2233].each{|n| @db.table_exists?(n).should be_false} @db[:schema_migrations].select_order_map(:filename).should == %w'001_create_sessions.rb 002_create_nodes.rb 003_3_create_users.rb 1273253850_create_artists.rb' end specify "should handle unapplied migrations when migrating schema_info table to schema_migrations table and target is less than last integer migration version" do @dir = 'spec/files/integer_migrations' @m.apply(@db, @dir, 1) [:schema_info, :sm1111].each{|n| @db.table_exists?(n).should be_true} [:schema_migrations, :sm2222, :sm3333, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should be_false} @dir = 'spec/files/convert_to_timestamp_migrations' @m.apply(@db, @dir, 2) [:schema_info, :sm1111, :sm2222, :schema_migrations].each{|n| @db.table_exists?(n).should be_true} [:sm3333, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should be_false} @db[:schema_migrations].select_order_map(:filename).should == %w'001_create_sessions.rb 002_create_nodes.rb' @m.apply(@db, @dir) [:schema_info, :sm1111, :sm2222, :schema_migrations, :sm3333, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should be_true} @db[:schema_migrations].select_order_map(:filename).should == %w'001_create_sessions.rb 002_create_nodes.rb 003_3_create_users.rb 1273253850_create_artists.rb 1273253852_create_albums.rb' end specify "should raise error for 
applied migrations not in file system" do @dir = 'spec/files/timestamped_migrations' @m.apply(@db, @dir) [:schema_migrations, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_true} @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253851_create_nodes.rb 1273253853_3_create_users.rb' @dir = 'spec/files/missing_timestamped_migrations' proc{@m.apply(@db, @dir, 0)}.should raise_error(Sequel::Migrator::Error) [:schema_migrations, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_true} @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253851_create_nodes.rb 1273253853_3_create_users.rb' end specify "should not raise error for applied migrations not in file system if :allow_missing_migration_files is true" do @dir = 'spec/files/timestamped_migrations' @m.apply(@db, @dir) [:schema_migrations, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_true} @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253851_create_nodes.rb 1273253853_3_create_users.rb' @dir = 'spec/files/missing_timestamped_migrations' proc{@m.run(@db, @dir, :allow_missing_migration_files => true)}.should_not raise_error [:schema_migrations, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_true} @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253851_create_nodes.rb 1273253853_3_create_users.rb' end specify "should raise error for missing column name in existing schema_migrations table" do @dir = 'spec/files/timestamped_migrations' @m.apply(@db, @dir) proc{@m.run(@db, @dir, :column=>:fn)}.should raise_error(Sequel::Migrator::Error) end specify "should handle migration filenames in a case insensitive manner" do @dir = 'spec/files/uppercase_timestamped_migrations' @m.apply(@db, @dir) [:schema_migrations, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_true} @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253851_create_nodes.rb 1273253853_3_create_users.rb' @dir = 'spec/files/timestamped_migrations' @m.apply(@db, @dir, 0) [:sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_false} @db[:schema_migrations].select_order_map(:filename).should == [] end specify "should respect :table and :column options" do @dir = 'spec/files/timestamped_migrations' @m.run(@db, @dir, :table=>:sm, :column=>:fn) [:sm, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_true} @db[:sm].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253851_create_nodes.rb 1273253853_3_create_users.rb' @m.run(@db, @dir, :target=>0, :table=>:sm, :column=>:fn) [:sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_false} @db[:sm].select_order_map(:fn).should == [] end specify "should return nil" do @dir = 'spec/files/timestamped_migrations' @m.apply(@db, @dir, 1273253850).should == nil @m.apply(@db, @dir, 0).should == nil @m.apply(@db, @dir).should == nil end specify "should use TimestampMigrator if TimestampMigrator.apply is called even for integer migrations directory" do Sequel::TimestampMigrator.apply(@db, "spec/files/integer_migrations") @db.sqls.should == ["SELECT NULL AS nil FROM schema_migrations LIMIT 1", "CREATE TABLE schema_migrations (filename varchar(255) PRIMARY KEY)", "SELECT NULL AS nil FROM schema_info LIMIT 1", "SELECT filename FROM schema_migrations ORDER BY filename", "CREATE
TABLE sm1111 (smc1 integer)", "INSERT INTO schema_migrations (filename) VALUES ('001_create_sessions.rb')", "CREATE TABLE sm2222 (smc2 integer)", "INSERT INTO schema_migrations (filename) VALUES ('002_create_nodes.rb')", "CREATE TABLE sm3333 (smc3 integer)", "INSERT INTO schema_migrations (filename) VALUES ('003_3_create_users.rb')"] end specify "should not use transactions by default" do Sequel::TimestampMigrator.apply(@db, "spec/files/transaction_unspecified_migrations") @db.sqls.should == ["SELECT NULL AS nil FROM schema_migrations LIMIT 1", "CREATE TABLE schema_migrations (filename varchar(255) PRIMARY KEY)", "SELECT NULL AS nil FROM schema_info LIMIT 1", "SELECT filename FROM schema_migrations ORDER BY filename", "CREATE TABLE sm11111 (smc1 integer)", "INSERT INTO schema_migrations (filename) VALUES ('001_create_alt_basic.rb')", "CREATE TABLE sm (smc1 integer)", "INSERT INTO schema_migrations (filename) VALUES ('002_create_basic.rb')"] end specify "should use transactions by default if database supports transactional ddl" do @db.meta_def(:supports_transactional_ddl?){true} Sequel::TimestampMigrator.apply(@db, "spec/files/transaction_unspecified_migrations") @db.sqls.should == ["SELECT NULL AS nil FROM schema_migrations LIMIT 1", "CREATE TABLE schema_migrations (filename varchar(255) PRIMARY KEY)", "SELECT NULL AS nil FROM schema_info LIMIT 1", "SELECT filename FROM schema_migrations ORDER BY filename", "BEGIN", "CREATE TABLE sm11111 (smc1 integer)", "INSERT INTO schema_migrations (filename) VALUES ('001_create_alt_basic.rb')", "COMMIT", "BEGIN", "CREATE TABLE sm (smc1 integer)", "INSERT INTO schema_migrations (filename) VALUES ('002_create_basic.rb')", "COMMIT"] end specify "should support transaction use on a per migration basis" do Sequel::TimestampMigrator.apply(@db, "spec/files/transaction_specified_migrations") @db.sqls.should == ["SELECT NULL AS nil FROM schema_migrations LIMIT 1", "CREATE TABLE schema_migrations (filename varchar(255) PRIMARY KEY)", "SELECT NULL AS nil FROM schema_info LIMIT 1", "SELECT filename FROM schema_migrations ORDER BY filename", "BEGIN", "CREATE TABLE sm11111 (smc1 integer)", "INSERT INTO schema_migrations (filename) VALUES ('001_create_alt_basic.rb')", "COMMIT", "CREATE TABLE sm (smc1 integer)", "INSERT INTO schema_migrations (filename) VALUES ('002_create_basic.rb')"] end specify "should force transactions if enabled by the migrator" do Sequel::TimestampMigrator.run(@db, "spec/files/transaction_specified_migrations", :use_transactions=>true) @db.sqls.should == ["SELECT NULL AS nil FROM schema_migrations LIMIT 1", "CREATE TABLE schema_migrations (filename varchar(255) PRIMARY KEY)", "SELECT NULL AS nil FROM schema_info LIMIT 1", "SELECT filename FROM schema_migrations ORDER BY filename", "BEGIN", "CREATE TABLE sm11111 (smc1 integer)", "INSERT INTO schema_migrations (filename) VALUES ('001_create_alt_basic.rb')", "COMMIT", "BEGIN", "CREATE TABLE sm (smc1 integer)", "INSERT INTO schema_migrations (filename) VALUES ('002_create_basic.rb')", "COMMIT"] end specify "should not use transactions if disabled in the migrator" do Sequel::TimestampMigrator.run(@db, "spec/files/transaction_unspecified_migrations", :use_transactions=>false) @db.sqls.should == ["SELECT NULL AS nil FROM schema_migrations LIMIT 1", "CREATE TABLE schema_migrations (filename varchar(255) PRIMARY KEY)", "SELECT NULL AS nil FROM schema_info LIMIT 1", "SELECT filename FROM schema_migrations ORDER BY filename", "CREATE TABLE sm11111 (smc1 integer)", "INSERT INTO schema_migrations (filename) 
VALUES ('001_create_alt_basic.rb')", "CREATE TABLE sm (smc1 integer)", "INSERT INTO schema_migrations (filename) VALUES ('002_create_basic.rb')"] end end ��������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/spec/extensions/named_timezones_spec.rb�������������������������������������������0000664�0000000�0000000�00000007374�12201565355�0023662�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") begin require 'tzinfo' rescue LoadError => e skip_warn "named_timezones_spec: can't load tzinfo (#{e.class}: #{e})" else Sequel.extension :thread_local_timezones Sequel.extension :named_timezones Sequel.datetime_class = Time describe "Sequel named_timezones extension" do before do @tz_in = TZInfo::Timezone.get('America/Los_Angeles') @tz_out = TZInfo::Timezone.get('America/New_York') @db = Sequel.mock @dt = DateTime.civil(2009,6,1,10,20,30,0) Sequel.application_timezone = 'America/Los_Angeles' Sequel.database_timezone = 'America/New_York' Sequel.datetime_class = DateTime end after do Sequel.tzinfo_disambiguator = nil Sequel.default_timezone = nil Sequel.datetime_class = Time end it "should convert string arguments to *_timezone= to TZInfo::Timezone instances" do Sequel.application_timezone.should == @tz_in Sequel.database_timezone.should == @tz_out end it "should convert string arguments for Database#timezone= to TZInfo::Timezone instances for database-specific timezones" do @db.extension :named_timezones @db.timezone = 'America/Los_Angeles' @db.timezone.should == @tz_in end it "should accept TZInfo::Timezone instances in *_timezone=" do Sequel.application_timezone = @tz_in Sequel.database_timezone = @tz_out Sequel.application_timezone.should == @tz_in Sequel.database_timezone.should == @tz_out end it "should convert datetimes going into the database to named database_timezone" do ds = @db[:a] def ds.supports_timestamp_timezones?; true; end def ds.supports_timestamp_usecs?; false; end ds.insert([@dt, DateTime.civil(2009,6,1,3,20,30,-7/24.0), DateTime.civil(2009,6,1,6,20,30,-1/6.0)]) @db.sqls.should == ["INSERT INTO a VALUES ('2009-06-01 06:20:30-0400', '2009-06-01 06:20:30-0400', '2009-06-01 06:20:30-0400')"] end it "should convert datetimes coming out of the database from database_timezone to application_timezone" do dt = Sequel.database_to_application_timestamp('2009-06-01 06:20:30-0400') dt.should == @dt dt.offset.should == -7/24.0 dt = Sequel.database_to_application_timestamp('2009-06-01 10:20:30+0000') dt.should == @dt dt.offset.should == -7/24.0 end it "should raise an error for ambiguous timezones by default" do proc{Sequel.database_to_application_timestamp('2004-10-31T01:30:00')}.should raise_error(Sequel::InvalidValue) end it "should support tzinfo_disambiguator= to handle ambiguous timezones automatically" do Sequel.tzinfo_disambiguator = 
it "should support tzinfo_disambiguator= to handle ambiguous timezones automatically" do Sequel.tzinfo_disambiguator = proc{|datetime, periods| periods.first} Sequel.database_to_application_timestamp('2004-10-31T01:30:00').should == DateTime.parse('2004-10-30T22:30:00-07:00') end
it "should assume datetimes coming out of the database that don't have an offset as coming from database_timezone" do dt = Sequel.database_to_application_timestamp('2009-06-01 06:20:30') dt.should == @dt dt.offset.should == -7/24.0 dt = Sequel.database_to_application_timestamp('2009-06-01 10:20:30') dt.should == @dt + 1/6.0 dt.offset.should == -7/24.0 end
it "should work with the thread_local_timezones extension" do q, q1, q2 = Queue.new, Queue.new, Queue.new tz1, tz2 = nil, nil t1 = Thread.new do Sequel.thread_application_timezone = 'America/New_York' q2.push nil q.pop tz1 = Sequel.application_timezone end t2 = Thread.new do Sequel.thread_application_timezone = 'America/Los_Angeles' q2.push nil q1.pop tz2 = Sequel.application_timezone end q2.pop q2.pop q.push nil q1.push nil t1.join t2.join tz1.should == @tz_out tz2.should == @tz_in end end end
ruby-sequel-4.1.1/spec/extensions/nested_attributes_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "NestedAttributes plugin" do def check_sqls(should, is) if should.is_a?(Array) should.should include(is) else should.should == is end end def check_sql_array(*shoulds) sqls = @db.sqls sqls.length.should == shoulds.length shoulds.zip(sqls){|s, i| check_sqls(s, i)} end before do @db = Sequel.mock(:autoid=>1, :numrows=>1) @c = Class.new(Sequel::Model(@db)) @c.plugin :nested_attributes @Artist = Class.new(@c).set_dataset(:artists) @Album = Class.new(@c).set_dataset(:albums) @Tag = Class.new(@c).set_dataset(:tags) @Concert = Class.new(@c).set_dataset(:concerts) @Artist.plugin :skip_create_refresh @Album.plugin :skip_create_refresh @Tag.plugin :skip_create_refresh @Concert.plugin :skip_create_refresh @Artist.columns :id, :name @Album.columns :id, :name, :artist_id @Tag.columns :id, :name @Concert.columns :tour, :date, :artist_id, :playlist @Concert.set_primary_key([:tour, :date]) @Concert.unrestrict_primary_key @Artist.one_to_many :albums, :class=>@Album, :key=>:artist_id @Artist.one_to_many :concerts, :class=>@Concert, :key=>:artist_id @Artist.one_to_one :first_album, :class=>@Album, :key=>:artist_id @Album.many_to_one :artist, :class=>@Artist, :reciprocal=>:albums @Album.many_to_many :tags, :class=>@Tag, :left_key=>:album_id, :right_key=>:tag_id, :join_table=>:at @Artist.nested_attributes :albums, :first_album, :destroy=>true, :remove=>true @Artist.nested_attributes :concerts, :destroy=>true, :remove=>true @Album.nested_attributes :artist, :tags, :destroy=>true, :remove=>true @db.sqls end
it "should support creating new many_to_one objects" do a = @Album.new({:name=>'Al', :artist_attributes=>{:name=>'Ar'}}) @db.sqls.should == [] a.save
check_sql_array("INSERT INTO artists (name) VALUES ('Ar')", ["INSERT INTO albums (name, artist_id) VALUES ('Al', 1)", "INSERT INTO albums (artist_id, name) VALUES (1, 'Al')"]) end it "should support creating new one_to_one objects" do a = @Artist.new(:name=>'Ar') a.id = 1 a.first_album_attributes = {:name=>'Al'} @db.sqls.should == [] a.save check_sql_array(["INSERT INTO artists (name, id) VALUES ('Ar', 1)", "INSERT INTO artists (id, name) VALUES (1, 'Ar')"], "UPDATE albums SET artist_id = NULL WHERE (artist_id = 1)", ["INSERT INTO albums (artist_id, name) VALUES (1, 'Al')", "INSERT INTO albums (name, artist_id) VALUES ('Al', 1)"]) end it "should support creating new one_to_many objects" do a = @Artist.new({:name=>'Ar', :albums_attributes=>[{:name=>'Al'}]}) @db.sqls.should == [] a.save check_sql_array("INSERT INTO artists (name) VALUES ('Ar')", ["INSERT INTO albums (artist_id, name) VALUES (1, 'Al')", "INSERT INTO albums (name, artist_id) VALUES ('Al', 1)"]) end it "should support creating new many_to_many objects" do a = @Album.new({:name=>'Al', :tags_attributes=>[{:name=>'T'}]}) @db.sqls.should == [] a.save check_sql_array("INSERT INTO albums (name) VALUES ('Al')", "INSERT INTO tags (name) VALUES ('T')", ["INSERT INTO at (album_id, tag_id) VALUES (1, 2)", "INSERT INTO at (tag_id, album_id) VALUES (2, 1)"]) end it "should add new objects to the cached association array as soon as the *_attributes= method is called" do a = @Artist.new({:name=>'Ar', :albums_attributes=>[{:name=>'Al', :tags_attributes=>[{:name=>'T'}]}]}) a.albums.should == [@Album.new(:name=>'Al')] a.albums.first.tags.should == [@Tag.new(:name=>'T')] end it "should support creating new objects with composite primary keys" do insert = nil @Concert.class_eval do define_method :_insert do insert = values end def before_create # Have to define the CPK somehow. 
self.tour = 'To' self.date = '2004-04-05' super end end a = @Artist.new({:name=>'Ar', :concerts_attributes=>[{:playlist=>'Pl'}]}) @db.sqls.should == [] a.save @db.sqls.should == ["INSERT INTO artists (name) VALUES ('Ar')"] insert.should == {:tour=>'To', :date=>'2004-04-05', :artist_id=>1, :playlist=>'Pl'} end it "should support creating new objects with specific primary keys if :unmatched_pk => :create is set" do @Artist.nested_attributes :albums, :unmatched_pk=>:create insert = nil @Album.class_eval do unrestrict_primary_key define_method :_insert do insert = values end end a = @Artist.new({:name=>'Ar', :albums_attributes=>[{:id=>7, :name=>'Al'}]}) @db.sqls.should == [] a.save @db.sqls.should == ["INSERT INTO artists (name) VALUES ('Ar')"] insert.should == {:artist_id=>1, :name=>'Al', :id=>7} end it "should support creating new objects with specific composite primary keys if :unmatched_pk => :create is set" do insert = nil @Artist.nested_attributes :concerts, :unmatched_pk=>:create @Concert.class_eval do define_method :_insert do insert = values end end a = @Artist.new({:name=>'Ar', :concerts_attributes=>[{:tour=>'To', :date=>'2004-04-05', :playlist=>'Pl'}]}) @db.sqls.should == [] a.save @db.sqls.should == ["INSERT INTO artists (name) VALUES ('Ar')"] insert.should == {:tour=>'To', :date=>'2004-04-05', :artist_id=>1, :playlist=>'Pl'} end it "should support updating many_to_one objects" do al = @Album.load(:id=>10, :name=>'Al') ar = @Artist.load(:id=>20, :name=>'Ar') al.associations[:artist] = ar al.set(:artist_attributes=>{:id=>'20', :name=>'Ar2'}) @db.sqls.should == [] al.save @db.sqls.should == ["UPDATE albums SET name = 'Al' WHERE (id = 10)", "UPDATE artists SET name = 'Ar2' WHERE (id = 20)"] end it "should support updating one_to_one objects" do al = @Album.load(:id=>10, :name=>'Al') ar = @Artist.load(:id=>20, :name=>'Ar') ar.associations[:first_album] = al ar.set(:first_album_attributes=>{:id=>10, :name=>'Al2'}) @db.sqls.should == [] ar.save @db.sqls.should == ["UPDATE artists SET name = 'Ar' WHERE (id = 20)", "UPDATE albums SET name = 'Al2' WHERE (id = 10)"] end it "should support updating one_to_many objects" do al = @Album.load(:id=>10, :name=>'Al') ar = @Artist.load(:id=>20, :name=>'Ar') ar.associations[:albums] = [al] ar.set(:albums_attributes=>[{:id=>10, :name=>'Al2'}]) @db.sqls.should == [] ar.save @db.sqls.should == ["UPDATE artists SET name = 'Ar' WHERE (id = 20)", "UPDATE albums SET name = 'Al2' WHERE (id = 10)"] end it "should support updating one_to_many objects with _delete/_remove flags set to false" do al = @Album.load(:id=>10, :name=>'Al') ar = @Artist.load(:id=>20, :name=>'Ar') ar.associations[:albums] = [al] ar.set(:albums_attributes=>[{:id=>10, :name=>'Al2', :_delete => 'f', :_remove => '0'}]) @db.sqls.should == [] ar.save @db.sqls.should == ["UPDATE artists SET name = 'Ar' WHERE (id = 20)", "UPDATE albums SET name = 'Al2' WHERE (id = 10)"] end it "should support updating many_to_many objects" do a = @Album.load(:id=>10, :name=>'Al') t = @Tag.load(:id=>20, :name=>'T') a.associations[:tags] = [t] a.set(:tags_attributes=>[{:id=>20, :name=>'T2'}]) @db.sqls.should == [] a.save @db.sqls.should == ["UPDATE albums SET name = 'Al' WHERE (id = 10)", "UPDATE tags SET name = 'T2' WHERE (id = 20)"] end it "should support updating many_to_many objects with _delete/_remove flags set to false" do a = @Album.load(:id=>10, :name=>'Al') t = @Tag.load(:id=>20, :name=>'T') a.associations[:tags] = [t] a.set(:tags_attributes=>[{:id=>20, :name=>'T2', '_delete' => false, '_remove' => 
'F'}]) @db.sqls.should == [] a.save @db.sqls.should == ["UPDATE albums SET name = 'Al' WHERE (id = 10)", "UPDATE tags SET name = 'T2' WHERE (id = 20)"] end it "should support updating objects with composite primary keys" do ar = @Artist.load(:id=>10, :name=>'Ar') co = @Concert.load(:tour=>'To', :date=>'2004-04-05', :playlist=>'Pl') ar.associations[:concerts] = [co] ar.set(:concerts_attributes=>[{:tour=>'To', :date=>'2004-04-05', :playlist=>'Pl2'}]) @db.sqls.should == [] ar.save check_sql_array("UPDATE artists SET name = 'Ar' WHERE (id = 10)", ["UPDATE concerts SET playlist = 'Pl2' WHERE ((tour = 'To') AND (date = '2004-04-05'))", "UPDATE concerts SET playlist = 'Pl2' WHERE ((date = '2004-04-05') AND (tour = 'To'))"]) end it "should support removing many_to_one objects" do al = @Album.load(:id=>10, :name=>'Al') ar = @Artist.load(:id=>20, :name=>'Ar') al.associations[:artist] = ar al.set(:artist_attributes=>{:id=>'20', :_remove=>'1'}) @db.sqls.should == [] al.save check_sql_array(["UPDATE albums SET artist_id = NULL, name = 'Al' WHERE (id = 10)", "UPDATE albums SET name = 'Al', artist_id = NULL WHERE (id = 10)"]) end it "should support removing one_to_one objects" do al = @Album.load(:id=>10, :name=>'Al') ar = @Artist.load(:id=>20, :name=>'Ar') ar.associations[:first_album] = al ar.set(:first_album_attributes=>{:id=>10, :_remove=>'t'}) @db.sqls.should == [] ar.save @db.sqls.should == ["UPDATE albums SET artist_id = NULL WHERE (artist_id = 20)", "UPDATE artists SET name = 'Ar' WHERE (id = 20)"] end it "should support removing one_to_many objects" do al = @Album.load(:id=>10, :name=>'Al') ar = @Artist.load(:id=>20, :name=>'Ar') ar.associations[:albums] = [al] ar.set(:albums_attributes=>[{:id=>10, :_remove=>'t'}]) @db.sqls.should == [] @Album.dataset._fetch = {:id=>1} ar.save check_sql_array("SELECT 1 AS one FROM albums WHERE ((albums.artist_id = 20) AND (id = 10)) LIMIT 1", ["UPDATE albums SET artist_id = NULL, name = 'Al' WHERE (id = 10)", "UPDATE albums SET name = 'Al', artist_id = NULL WHERE (id = 10)"], "UPDATE artists SET name = 'Ar' WHERE (id = 20)") end it "should support removing many_to_many objects" do a = @Album.load(:id=>10, :name=>'Al') t = @Tag.load(:id=>20, :name=>'T') a.associations[:tags] = [t] a.set(:tags_attributes=>[{:id=>20, :_remove=>true}]) @db.sqls.should == [] a.save @db.sqls.should == ["DELETE FROM at WHERE ((album_id = 10) AND (tag_id = 20))", "UPDATE albums SET name = 'Al' WHERE (id = 10)"] end it "should support removing objects with composite primary keys" do ar = @Artist.load(:id=>10, :name=>'Ar') co = @Concert.load(:tour=>'To', :date=>'2004-04-05', :playlist=>'Pl') ar.associations[:concerts] = [co] ar.set(:concerts_attributes=>[{:tour=>'To', :date=>'2004-04-05', :_remove=>'t'}]) @db.sqls.should == [] @Concert.dataset._fetch = {:id=>1} ar.save check_sql_array(["SELECT 1 AS one FROM concerts WHERE ((concerts.artist_id = 10) AND (tour = 'To') AND (date = '2004-04-05')) LIMIT 1", "SELECT 1 AS one FROM concerts WHERE ((concerts.artist_id = 10) AND (date = '2004-04-05') AND (tour = 'To')) LIMIT 1"], ["UPDATE concerts SET artist_id = NULL, playlist = 'Pl' WHERE ((tour = 'To') AND (date = '2004-04-05'))", "UPDATE concerts SET playlist = 'Pl', artist_id = NULL WHERE ((tour = 'To') AND (date = '2004-04-05'))", "UPDATE concerts SET artist_id = NULL, playlist = 'Pl' WHERE ((date = '2004-04-05') AND (tour = 'To'))", "UPDATE concerts SET playlist = 'Pl', artist_id = NULL WHERE ((date = '2004-04-05') AND (tour = 'To'))"], "UPDATE artists SET name = 'Ar' WHERE (id = 10)") end 
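# Illustrative sketch, not part of the original spec file: in application
# code, a nested attributes hash with a matching primary key and a :_delete
# flag destroys the associated row when the parent is saved, provided
# nested_attributes was declared with :destroy=>true (as the specs below
# exercise):
#
#   Album.nested_attributes :tags, :destroy=>true
#   album.set(:tags_attributes=>[{:id=>20, :_delete=>true}])
#   album.save  # removes the join table row and deletes the tag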
it "should support destroying many_to_one objects" do al = @Album.load(:id=>10, :name=>'Al') ar = @Artist.load(:id=>20, :name=>'Ar') al.associations[:artist] = ar al.set(:artist_attributes=>{:id=>'20', :_delete=>'1'}) @db.sqls.should == [] al.save check_sql_array(["UPDATE albums SET artist_id = NULL, name = 'Al' WHERE (id = 10)", "UPDATE albums SET name = 'Al', artist_id = NULL WHERE (id = 10)"], "DELETE FROM artists WHERE (id = 20)") end it "should support destroying one_to_one objects" do al = @Album.load(:id=>10, :name=>'Al') ar = @Artist.load(:id=>20, :name=>'Ar') ar.associations[:first_album] = al ar.set(:first_album_attributes=>{:id=>10, :_delete=>'t'}) @db.sqls.should == [] ar.save @db.sqls.should == ["UPDATE artists SET name = 'Ar' WHERE (id = 20)", "DELETE FROM albums WHERE (id = 10)"] end it "should support destroying one_to_many objects" do al = @Album.load(:id=>10, :name=>'Al') ar = @Artist.load(:id=>20, :name=>'Ar') ar.associations[:albums] = [al] ar.set(:albums_attributes=>[{:id=>10, :_delete=>'t'}]) @db.sqls.should == [] ar.save @db.sqls.should == ["UPDATE artists SET name = 'Ar' WHERE (id = 20)", "DELETE FROM albums WHERE (id = 10)"] end it "should support destroying many_to_many objects" do a = @Album.load(:id=>10, :name=>'Al') t = @Tag.load(:id=>20, :name=>'T') a.associations[:tags] = [t] a.set(:tags_attributes=>[{:id=>20, :_delete=>true}]) @db.sqls.should == [] a.save @db.sqls.should == ["DELETE FROM at WHERE ((album_id = 10) AND (tag_id = 20))", "UPDATE albums SET name = 'Al' WHERE (id = 10)", "DELETE FROM tags WHERE (id = 20)"] end it "should support destroying objects with composite primary keys" do ar = @Artist.load(:id=>10, :name=>'Ar') co = @Concert.load(:tour=>'To', :date=>'2004-04-05', :playlist=>'Pl') ar.associations[:concerts] = [co] ar.set(:concerts_attributes=>[{:tour=>'To', :date=>'2004-04-05', :_delete=>'t'}]) @db.sqls.should == [] ar.save check_sql_array("UPDATE artists SET name = 'Ar' WHERE (id = 10)", ["DELETE FROM concerts WHERE ((tour = 'To') AND (date = '2004-04-05'))", "DELETE FROM concerts WHERE ((date = '2004-04-05') AND (tour = 'To'))"]) end it "should support both string and symbol keys in nested attribute hashes" do a = @Album.load(:id=>10, :name=>'Al') t = @Tag.load(:id=>20, :name=>'T') a.associations[:tags] = [t] a.set('tags_attributes'=>[{'id'=>20, '_delete'=>true}]) @db.sqls.should == [] a.save @db.sqls.should == ["DELETE FROM at WHERE ((album_id = 10) AND (tag_id = 20))", "UPDATE albums SET name = 'Al' WHERE (id = 10)", "DELETE FROM tags WHERE (id = 20)"] end it "should support using a hash instead of an array for to_many nested attributes" do a = @Album.load(:id=>10, :name=>'Al') t = @Tag.load(:id=>20, :name=>'T') a.associations[:tags] = [t] a.set('tags_attributes'=>{'1'=>{'id'=>20, '_delete'=>true}}) @db.sqls.should == [] a.save @db.sqls.should == ["DELETE FROM at WHERE ((album_id = 10) AND (tag_id = 20))", "UPDATE albums SET name = 'Al' WHERE (id = 10)", "DELETE FROM tags WHERE (id = 20)"] end it "should only allow destroying associated objects if :destroy option is used in the nested_attributes call" do a = @Album.load(:id=>10, :name=>'Al') ar = @Artist.load(:id=>20, :name=>'Ar') a.associations[:artist] = ar @Album.nested_attributes :artist proc{a.set(:artist_attributes=>{:id=>'20', :_delete=>'1'})}.should raise_error(Sequel::Error) @Album.nested_attributes :artist, :destroy=>true proc{a.set(:artist_attributes=>{:id=>'20', :_delete=>'1'})}.should_not raise_error end it "should only allow removing associated objects if :remove option is 
used in the nested_attributes call" do a = @Album.load(:id=>10, :name=>'Al') ar = @Artist.load(:id=>20, :name=>'Ar') a.associations[:artist] = ar @Album.nested_attributes :artist proc{a.set(:artist_attributes=>{:id=>'20', :_remove=>'1'})}.should raise_error(Sequel::Error) @Album.nested_attributes :artist, :remove=>true proc{a.set(:artist_attributes=>{:id=>'20', :_remove=>'1'})}.should_not raise_error end it "should raise an Error if a primary key is given in a nested attribute hash, but no matching associated object exists" do al = @Album.load(:id=>10, :name=>'Al') ar = @Artist.load(:id=>20, :name=>'Ar') ar.associations[:albums] = [al] proc{ar.set(:albums_attributes=>[{:id=>30, :_delete=>'t'}])}.should raise_error(Sequel::Error) proc{ar.set(:albums_attributes=>[{:id=>10, :_delete=>'t'}])}.should_not raise_error end it "should not raise an Error if an unmatched primary key is given, if the :strict=>false option is used" do @Artist.nested_attributes :albums, :strict=>false al = @Album.load(:id=>10, :name=>'Al') ar = @Artist.load(:id=>20, :name=>'Ar') ar.associations[:albums] = [al] ar.set(:albums_attributes=>[{:id=>30, :_delete=>'t'}]) @db.sqls.should == [] ar.save @db.sqls.should == ["UPDATE artists SET name = 'Ar' WHERE (id = 20)"] end it "should not raise an Error if an unmatched primary key is given, if the :unmatched_pk=>:ignore option is used" do @Artist.nested_attributes :albums, :unmatched_pk=>:ignore al = @Album.load(:id=>10, :name=>'Al') ar = @Artist.load(:id=>20, :name=>'Ar') ar.associations[:albums] = [al] ar.set(:albums_attributes=>[{:id=>30, :_delete=>'t'}]) @db.sqls.should == [] ar.save @db.sqls.should == ["UPDATE artists SET name = 'Ar' WHERE (id = 20)"] end it "should raise an Error if a composite primary key is given in a nested attribute hash, but no matching associated object exists" do ar = @Artist.load(:id=>10, :name=>'Ar') co = @Concert.load(:tour=>'To', :date=>'2004-04-05', :playlist=>'Pl') ar.associations[:concerts] = [co] proc{ar.set(:concerts_attributes=>[{:tour=>'To', :date=>'2004-04-04', :_delete=>'t'}])}.should raise_error(Sequel::Error) proc{ar.set(:concerts_attributes=>[{:tour=>'To', :date=>'2004-04-05', :_delete=>'t'}])}.should_not raise_error end it "should not raise an Error if an unmatched composite primary key is given, if the :strict=>false option is used" do @Artist.nested_attributes :concerts, :strict=>false ar = @Artist.load(:id=>10, :name=>'Ar') co = @Concert.load(:tour=>'To', :date=>'2004-04-05', :playlist=>'Pl') ar.associations[:concerts] = [co] ar.set(:concerts_attributes=>[{:tour=>'To', :date=>'2004-04-06', :_delete=>'t'}]) @db.sqls.should == [] ar.save @db.sqls.should == ["UPDATE artists SET name = 'Ar' WHERE (id = 10)"] end it "should not save if nested attribute is not valid and should include nested attribute validation errors in the main object's validation errors" do @Artist.class_eval do def validate super errors.add(:name, 'cannot be Ar') if name == 'Ar' end end a = @Album.new(:name=>'Al', :artist_attributes=>{:name=>'Ar'}) @db.sqls.should == [] proc{a.save}.should raise_error(Sequel::ValidationFailed) a.errors.full_messages.should == ['artist name cannot be Ar'] @db.sqls.should == [] # Should preserve attributes a.artist.name.should == 'Ar' end it "should not attempt to validate nested attributes if the :validate=>false association option is used" do @Album.many_to_one :artist, :class=>@Artist, :validate=>false, :reciprocal=>nil @Album.nested_attributes :artist, :tags, :destroy=>true, :remove=>true @Artist.class_eval do def validate 
super errors.add(:name, 'cannot be Ar') if name == 'Ar' end end a = @Album.new(:name=>'Al', :artist_attributes=>{:name=>'Ar'}) @db.sqls.should == [] a.save check_sql_array("INSERT INTO artists (name) VALUES ('Ar')", ["INSERT INTO albums (artist_id, name) VALUES (1, 'Al')", "INSERT INTO albums (name, artist_id) VALUES ('Al', 1)"]) end it "should not attempt to validate nested attributes if the :validate=>false option is passed to save" do @Artist.class_eval do def validate super errors.add(:name, 'cannot be Ar') if name == 'Ar' end end a = @Album.new(:name=>'Al', :artist_attributes=>{:name=>'Ar'}) @db.sqls.should == [] a.save(:validate=>false) check_sql_array("INSERT INTO artists (name) VALUES ('Ar')", ["INSERT INTO albums (artist_id, name) VALUES (1, 'Al')", "INSERT INTO albums (name, artist_id) VALUES ('Al', 1)"]) end it "should not accept nested attributes unless explicitly specified" do @Artist.many_to_many :tags, :class=>@Tag, :left_key=>:album_id, :right_key=>:tag_id, :join_table=>:at proc{@Artist.create({:name=>'Ar', :tags_attributes=>[{:name=>'T'}]})}.should raise_error(Sequel::Error) @db.sqls.should == [] end it "should save when save_changes or update is called if nested attribute associated objects changed but there are no changes to the main object" do al = @Album.load(:id=>10, :name=>'Al') ar = @Artist.load(:id=>20, :name=>'Ar') al.associations[:artist] = ar @db.sqls.should == [] al.update(:artist_attributes=>{:id=>'20', :name=>'Ar2'}) @db.sqls.should == ["UPDATE artists SET name = 'Ar2' WHERE (id = 20)"] end it "should have a :limit option limiting the amount of entries" do @Album.nested_attributes :tags, :limit=>2 arr = [{:name=>'T'}] proc{@Album.new({:name=>'Al', :tags_attributes=>arr*3})}.should raise_error(Sequel::Error) a = @Album.new({:name=>'Al', :tags_attributes=>arr*2}) @db.sqls.should == [] a.save check_sql_array("INSERT INTO albums (name) VALUES ('Al')", "INSERT INTO tags (name) VALUES ('T')", ["INSERT INTO at (album_id, tag_id) VALUES (1, 2)", "INSERT INTO at (tag_id, album_id) VALUES (2, 1)"], "INSERT INTO tags (name) VALUES ('T')", ["INSERT INTO at (album_id, tag_id) VALUES (1, 4)", "INSERT INTO at (tag_id, album_id) VALUES (4, 1)"]) end it "should accept a block that each hash gets passed to determine if it should be processed" do @Album.nested_attributes(:tags){|h| h[:name].empty?} a = @Album.new({:name=>'Al', :tags_attributes=>[{:name=>'T'}, {:name=>''}, {:name=>'T2'}]}) @db.sqls.should == [] a.save check_sql_array("INSERT INTO albums (name) VALUES ('Al')", "INSERT INTO tags (name) VALUES ('T')", ["INSERT INTO at (album_id, tag_id) VALUES (1, 2)", "INSERT INTO at (tag_id, album_id) VALUES (2, 1)"], "INSERT INTO tags (name) VALUES ('T2')", ["INSERT INTO at (album_id, tag_id) VALUES (1, 4)", "INSERT INTO at (tag_id, album_id) VALUES (4, 1)"]) end it "should accept a :transform block that returns a changed attributes hash" do @Album.nested_attributes :tags, :transform=>proc{|parent, hash| hash[:name] << parent.name; hash } a = @Album.new(:name => 'Al') a.set(:tags_attributes=>[{:name=>'T'}, {:name=>'T2'}]) @db.sqls.should == [] a.save check_sql_array("INSERT INTO albums (name) VALUES ('Al')", "INSERT INTO tags (name) VALUES ('TAl')", ["INSERT INTO at (album_id, tag_id) VALUES (1, 2)", "INSERT INTO at (tag_id, album_id) VALUES (2, 1)"], "INSERT INTO tags (name) VALUES ('T2Al')", ["INSERT INTO at (album_id, tag_id) VALUES (1, 4)", "INSERT INTO at (tag_id, album_id) VALUES (4, 1)"]) end it "should return objects created/modified in the internal methods" do 
@Album.nested_attributes :tags, :remove=>true, :strict=>false objs = [] @Album.class_eval do define_method(:nested_attributes_create){|*a| objs << [super(*a), :create]} define_method(:nested_attributes_remove){|*a| objs << [super(*a), :remove]} define_method(:nested_attributes_update){|*a| objs << [super(*a), :update]} end a = @Album.new(:name=>'Al') a.associations[:tags] = [@Tag.load(:id=>6, :name=>'A'), @Tag.load(:id=>7, :name=>'A2')] a.tags_attributes = [{:id=>6, :name=>'T'}, {:id=>7, :name=>'T2', :_remove=>true}, {:name=>'T3'}, {:id=>8, :name=>'T4'}, {:id=>9, :name=>'T5', :_remove=>true}] objs.should == [[@Tag.load(:id=>6, :name=>'T'), :update], [@Tag.load(:id=>7, :name=>'A2'), :remove], [@Tag.new(:name=>'T3'), :create]] end it "should raise an error if updating modifies the associated objects keys" do @Artist.columns :id, :name, :artist_id @Album.columns :id, :name, :artist_id @Tag.columns :id, :name, :tag_id @Artist.one_to_many :albums, :class=>@Album, :key=>:artist_id, :primary_key=>:artist_id @Album.many_to_one :artist, :class=>@Artist, :primary_key=>:artist_id @Album.many_to_many :tags, :class=>@Tag, :left_key=>:album_id, :right_key=>:tag_id, :join_table=>:at, :right_primary_key=>:tag_id @Artist.nested_attributes :albums, :destroy=>true, :remove=>true @Album.nested_attributes :artist, :tags, :destroy=>true, :remove=>true al = @Album.load(:id=>10, :name=>'Al', :artist_id=>25) ar = @Artist.load(:id=>20, :name=>'Ar', :artist_id=>25) t = @Tag.load(:id=>30, :name=>'T', :tag_id=>15) al.associations[:artist] = ar al.associations[:tags] = [t] ar.associations[:albums] = [al] proc{ar.set(:albums_attributes=>[{:id=>10, :name=>'Al2', :artist_id=>'3'}])}.should raise_error(Sequel::Error) proc{al.set(:artist_attributes=>{:id=>20, :name=>'Ar2', :artist_id=>'3'})}.should raise_error(Sequel::Error) proc{al.set(:tags_attributes=>[{:id=>30, :name=>'T2', :tag_id=>'3'}])}.should raise_error(Sequel::Error) end it "should accept a :fields option and only allow modification of those fields" do @Tag.columns :id, :name, :number @Album.nested_attributes :tags, :destroy=>true, :remove=>true, :fields=>[:name] al = @Album.load(:id=>10, :name=>'Al') t = @Tag.load(:id=>30, :name=>'T', :number=>10) al.associations[:tags] = [t] al.set(:tags_attributes=>[{:id=>30, :name=>'T2'}, {:name=>'T3'}]) @db.sqls.should == [] al.save check_sql_array("UPDATE albums SET name = 'Al' WHERE (id = 10)", "UPDATE tags SET name = 'T2' WHERE (id = 30)", "INSERT INTO tags (name) VALUES ('T3')", ["INSERT INTO at (album_id, tag_id) VALUES (10, 1)", "INSERT INTO at (tag_id, album_id) VALUES (1, 10)"]) proc{al.set(:tags_attributes=>[{:id=>30, :name=>'T2', :number=>3}])}.should raise_error(Sequel::Error) proc{al.set(:tags_attributes=>[{:name=>'T2', :number=>3}])}.should raise_error(Sequel::Error) end it "should accept a proc for the :fields option that accepts the associated object and returns an array of fields" do @Tag.columns :id, :name, :number @Album.nested_attributes :tags, :destroy=>true, :remove=>true, :fields=>proc{|object| object.is_a?(@Tag) ? 
[:name] : []} al = @Album.load(:id=>10, :name=>'Al') t = @Tag.load(:id=>30, :name=>'T', :number=>10) al.associations[:tags] = [t] al.set(:tags_attributes=>[{:id=>30, :name=>'T2'}, {:name=>'T3'}]) @db.sqls.should == [] al.save check_sql_array("UPDATE albums SET name = 'Al' WHERE (id = 10)", "UPDATE tags SET name = 'T2' WHERE (id = 30)", "INSERT INTO tags (name) VALUES ('T3')", ["INSERT INTO at (album_id, tag_id) VALUES (10, 1)", "INSERT INTO at (tag_id, album_id) VALUES (1, 10)"]) proc{al.set(:tags_attributes=>[{:id=>30, :name=>'T2', :number=>3}])}.should raise_error(Sequel::Error) proc{al.set(:tags_attributes=>[{:name=>'T2', :number=>3}])}.should raise_error(Sequel::Error) end end
ruby-sequel-4.1.1/spec/extensions/null_dataset_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "null_dataset extension" do before do @db = Sequel::mock(:fetch=>{:id=>1}, :autoid=>1, :numrows=>1, :columns=>[:id]).extension(:null_dataset) @ds = @db[:table].nullify @i = 0 @pr = proc{|*a| @i += 1} end after do @db.sqls.should == [] unless @skip_check end
it "should make each be a noop" do @ds.each(&@pr).should equal(@ds) @i.should == 0 end
it "should make fetch_rows be a noop" do @ds.fetch_rows("SELECT 1", &@pr).should == nil @i.should == 0 end
it "should make insert be a noop" do @ds.insert(1).should == nil end
it "should make update be a noop" do @ds.update(:a=>1).should == 0 end
it "should make delete be a noop" do @ds.delete.should == 0 end
it "should make truncate be a noop" do @ds.truncate.should == nil end
it "should make execute_* be a noop" do @ds.send(:execute_ddl,'FOO').should == nil @ds.send(:execute_insert,'FOO').should == nil @ds.send(:execute_dui,'FOO').should == nil @ds.send(:execute,'FOO').should == nil end
it "should have working columns" do @skip_check = true @ds.columns.should == [:id] @db.sqls.should == ['SELECT * FROM table LIMIT 1'] end
it "should have count return 0" do @ds.count.should == 0 end
it "should have empty return true" do @ds.empty?.should == true end
it "should make import a noop" do @ds.import([:id], [[1], [2], [3]]).should == nil end
it "should have nullify method return a modified receiver" do @skip_check = true ds = @db[:table] ds.nullify.should_not equal(ds) ds.each(&@pr) @db.sqls.should == ['SELECT * FROM table'] @i.should == 1 end
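# Illustrative sketch, not part of the original spec file: nullify gives a
# dataset copy that never contacts the database, which is useful for dry-run
# or "disabled" code paths:
#
#   ds = DB[:table].extension(:null_dataset).nullify
#   ds.insert(:a=>1)  # => nil, no SQL issued
#   ds.all            # => [], no SQL issued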
method modify receiver" do ds = @db[:table] ds.nullify!.should equal(ds) ds.each(&@pr) @i.should == 0 end it "should work with method chaining" do @ds.where(:a=>1).select(:b).each(&@pr) @i.should == 0 end end ����������ruby-sequel-4.1.1/spec/extensions/optimistic_locking_spec.rb����������������������������������������0000664�0000000�0000000�00000007660�12201565355�0024371�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "optimistic_locking plugin" do before do @c = Class.new(Sequel::Model(:people)) do end h = {1=>{:id=>1, :name=>'John', :lock_version=>2}} lv = @lv = "lock_version" @c.instance_dataset.numrows = @c.dataset.numrows = proc do |sql| case sql when /UPDATE people SET (name|#{lv}) = ('Jim'|'Bob'|\d+), (?:name|#{lv}) = ('Jim'|'Bob'|\d+) WHERE \(\(id = (\d+)\) AND \(#{lv} = (\d+)\)\)/ name, nlv = $1 == 'name' ? [$2, $3] : [$3, $2] m = h[$4.to_i] if m && m[:lock_version] == $5.to_i m.merge!(:name=>name.gsub("'", ''), :lock_version=>nlv.to_i) 1 else 0 end when /UPDATE people SET #{lv} = (\d+) WHERE \(\(id = (\d+)\) AND \(#{lv} = (\d+)\)\)/ m = h[$2.to_i] if m && m[:lock_version] == $3.to_i m.merge!(:lock_version=>$1.to_i) 1 else 0 end when /DELETE FROM people WHERE \(\(id = (\d+)\) AND \(#{lv} = (\d+)\)\)/ m = h[$1.to_i] if m && m[lv.to_sym] == $2.to_i h.delete[$1.to_i] 1 else 0 end else puts sql end end @c.instance_dataset._fetch = @c.dataset._fetch = proc do |sql| m = h[1].dup v = m.delete(:lock_version) m[lv.to_sym] = v m end @c.columns :id, :name, :lock_version @c.plugin :optimistic_locking end specify "should raise an error when updating a stale record" do p1 = @c[1] p2 = @c[1] p1.update(:name=>'Jim') proc{p2.update(:name=>'Bob')}.should raise_error(Sequel::Plugins::OptimisticLocking::Error) end specify "should raise an error when destroying a stale record" do p1 = @c[1] p2 = @c[1] p1.update(:name=>'Jim') proc{p2.destroy}.should raise_error(Sequel::Plugins::OptimisticLocking::Error) end specify "should not raise an error when updating the same record twice" do p1 = @c[1] p1.update(:name=>'Jim') proc{p1.update(:name=>'Bob')}.should_not raise_error end specify "should allow changing the lock column via model.lock_column=" do @lv.replace('lv') @c.columns :id, :name, :lv @c.lock_column = :lv p1 = @c[1] p2 = @c[1] p1.update(:name=>'Jim') proc{p2.update(:name=>'Bob')}.should raise_error(Sequel::Plugins::OptimisticLocking::Error) end specify "should allow changing the lock column via plugin option" do @lv.replace('lv') @c.columns :id, :name, :lv @c.plugin :optimistic_locking, :lock_column=>:lv p1 = @c[1] p2 = @c[1] p1.update(:name=>'Jim') proc{p2.destroy}.should raise_error(Sequel::Plugins::OptimisticLocking::Error) end specify "should work when subclassing" do c = Class.new(@c) p1 = c[1] p2 = c[1] p1.update(:name=>'Jim') proc{p2.update(:name=>'Bob')}.should raise_error(Sequel::Plugins::OptimisticLocking::Error) end specify "should work correctly if attempting to refresh and save again after a failed save" do p1 = @c[1] p2 = @c[1] p1.update(:name=>'Jim') begin p2.update(:name=>'Bob') rescue Sequel::Plugins::OptimisticLocking::Error p2.refresh @c.db.sqls proc{p2.update(:name=>'Bob')}.should_not 
specify "should work correctly if attempting to refresh and save again after a failed save" do p1 = @c[1] p2 = @c[1] p1.update(:name=>'Jim') begin p2.update(:name=>'Bob') rescue Sequel::Plugins::OptimisticLocking::Error p2.refresh @c.db.sqls proc{p2.update(:name=>'Bob')}.should_not raise_error end @c.db.sqls.first.should =~ /UPDATE people SET (name = 'Bob', lock_version = 4|lock_version = 4, name = 'Bob') WHERE \(\(id = 1\) AND \(lock_version = 3\)\)/ end
specify "should increment the lock column when #modified! even if no columns are changed" do p1 = @c[1] p1.modified! lv = p1.lock_version p1.save_changes p1.lock_version.should == lv + 1 end
specify "should not increment the lock column when the update fails" do @c.instance_dataset.meta_def(:update) { raise Exception } p1 = @c[1] p1.modified! lv = p1.lock_version proc{p1.save_changes}.should raise_error(Exception) p1.lock_version.should == lv end end
ruby-sequel-4.1.1/spec/extensions/pagination_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "A paginated dataset" do before do @d = Sequel.mock.dataset.extension(:pagination) @d.meta_def(:count) {153} @paginated = @d.paginate(1, 20) end
specify "should raise an error if the dataset already has a limit" do proc{@d.limit(10).paginate(1,10)}.should raise_error(Sequel::Error) proc{@paginated.paginate(2,20)}.should raise_error(Sequel::Error) end
specify "should set the limit and offset options correctly" do @paginated.opts[:limit].should == 20 @paginated.opts[:offset].should == 0 end
specify "should set the page count correctly" do @paginated.page_count.should == 8 @d.paginate(1, 50).page_count.should == 4 end
specify "should set the current page number correctly" do @paginated.current_page.should == 1 @d.paginate(3, 50).current_page.should == 3 end
specify "should return the next page number or nil if we're on the last" do @paginated.next_page.should == 2 @d.paginate(4, 50).next_page.should be_nil end
specify "should return the previous page number or nil if we're on the first" do @paginated.prev_page.should be_nil @d.paginate(4, 50).prev_page.should == 3 end
specify "should return the page range" do @paginated.page_range.should == (1..8) @d.paginate(4, 50).page_range.should == (1..4) end
specify "should return the record range for the current page" do @paginated.current_page_record_range.should == (1..20) @d.paginate(4, 50).current_page_record_range.should == (151..153) @d.paginate(5, 50).current_page_record_range.should == (0..0) end
specify "should return the record count for the current page" do @paginated.current_page_record_count.should == 20 @d.paginate(3, 50).current_page_record_count.should == 50 @d.paginate(4, 50).current_page_record_count.should == 3 @d.paginate(5, 50).current_page_record_count.should == 0 end
specify "should know if current page is last page" do @paginated.last_page?.should be_false @d.paginate(2, 20).last_page?.should be_false @d.paginate(5, 30).last_page?.should be_false @d.paginate(6, 30).last_page?.should be_true end
specify "should know if current page is first page" do @paginated.first_page?.should be_true @d.paginate(1, 20).first_page?.should be_true @d.paginate(2, 20).first_page?.should be_false end
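# Illustrative sketch, not part of the original spec file: paginate(page,
# per_page) maps straight to LIMIT/OFFSET, with OFFSET = (page - 1) * per_page:
#
#   ds = DB[:items].extension(:pagination)
#   ds.paginate(3, 20).sql  # => "SELECT * FROM items LIMIT 20 OFFSET 40"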
specify "should work with fixed sql" do ds = @d.clone(:sql => 'select * from blah') ds.meta_def(:count) {150} ds.paginate(2, 50).sql.should == 'SELECT * FROM (select * from blah) AS t1 LIMIT 50 OFFSET 50' end end
describe "Dataset#each_page" do before do @d = Sequel.mock.dataset.from(:items).extension(:pagination) @d.meta_def(:count) {153} end
specify "should raise an error if the dataset already has a limit" do proc{@d.limit(10).each_page(10){}}.should raise_error(Sequel::Error) end
specify "should iterate over each page in the resultset as a paginated dataset" do a = [] @d.each_page(50) {|p| a << p} a.map {|p| p.sql}.should == [ 'SELECT * FROM items LIMIT 50 OFFSET 0', 'SELECT * FROM items LIMIT 50 OFFSET 50', 'SELECT * FROM items LIMIT 50 OFFSET 100', 'SELECT * FROM items LIMIT 50 OFFSET 150', ] end end
ruby-sequel-4.1.1/spec/extensions/pg_array_associations_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe Sequel::Model, "pg_array_associations" do before do class ::Artist < Sequel::Model attr_accessor :yyy columns :id, :tag_ids plugin :pg_array_associations pg_array_to_many :tags end class ::Tag < Sequel::Model columns :id plugin :pg_array_associations many_to_pg_array :artists def id3 id*3 end end @c1 = Artist @c2 = Tag @c1.dataset._fetch = {:id=>1, :tag_ids=>Sequel.pg_array([1,2,3])} @c2.dataset._fetch = {:id=>2} @o1 = @c1.first @o2 = @c2.first @n1 = @c1.new @n2 = @c2.new DB.reset end after do Object.send(:remove_const, :Artist) Object.send(:remove_const, :Tag) end
it "should populate :key_hash and :id_map option correctly for custom eager loaders" do khs = [] pr = proc{|h| khs << [h[:key_hash], h[:id_map]]} @c1.pg_array_to_many :tags, :clone=>:tags, :eager_loader=>pr @c2.many_to_pg_array :artists, :clone=>:artists, :eager_loader=>pr @c1.eager(:tags).all @c2.eager(:artists).all khs.should == [[{}, nil], [{:id=>{2=>[Tag.load(:id=>2)]}}, {2=>[Tag.load(:id=>2)]}]] end
it "should not issue queries if the object cannot have associated objects" do @n1.tags.should == [] @c1.load(:tag_ids=>[]).tags.should == [] @n2.artists.should == [] DB.sqls.should == [] end
it "should use correct SQL when loading associations lazily" do @o1.tags.should == [@o2] @o2.artists.should == [@o1] DB.sqls.should == ["SELECT * FROM tags WHERE (tags.id IN (1, 2, 3))", "SELECT * FROM artists WHERE (artists.tag_ids @> ARRAY[2])"] end
it "should accept :primary_key option for primary keys to use in current and associated table" do @c1.pg_array_to_many :tags, :clone=>:tags, :primary_key=>Sequel./(:id, 3) @c2.many_to_pg_array :artists, :clone=>:artists, :primary_key=>:id3 @o1.tags_dataset.sql.should == "SELECT * FROM tags WHERE ((tags.id / 3) IN (1, 2, 3))" @o2.artists_dataset.sql.should == "SELECT * FROM artists WHERE (artists.tag_ids @> ARRAY[6])" end
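# Illustrative sketch, not part of the original spec file: the two association
# types are mirror images around a PostgreSQL integer[] foreign key column
# (artists.tag_ids here):
#
#   Artist.pg_array_to_many :tags   # artist.tags: tags whose id appears in artist.tag_ids
#   Tag.many_to_pg_array :artists   # tag.artists: artists whose tag_ids contains tag.id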
ARRAY[2])" @c2.filter(:artists=>@o1).sql.should == "SELECT * FROM tags WHERE (tags.id IN (1, 2, 3))" end it "should allowing excluding by associations" do @c1.exclude(:tags=>@o2).sql.should == "SELECT * FROM artists WHERE (NOT (artists.tag_ids @> ARRAY[2]) OR (artists.tag_ids IS NULL))" @c2.exclude(:artists=>@o1).sql.should == "SELECT * FROM tags WHERE ((tags.id NOT IN (1, 2, 3)) OR (tags.id IS NULL))" end it "should allowing filtering by multiple associations" do @c1.filter(:tags=>[@c2.load(:id=>1), @c2.load(:id=>2)]).sql.should == "SELECT * FROM artists WHERE (artists.tag_ids && ARRAY[1,2])" @c2.filter(:artists=>[@c1.load(:tag_ids=>Sequel.pg_array([3, 4])), @c1.load(:tag_ids=>Sequel.pg_array([4, 5]))]).sql.should == "SELECT * FROM tags WHERE (tags.id IN (3, 4, 5))" end it "should allowing excluding by multiple associations" do @c1.exclude(:tags=>[@c2.load(:id=>1), @c2.load(:id=>2)]).sql.should == "SELECT * FROM artists WHERE (NOT (artists.tag_ids && ARRAY[1,2]) OR (artists.tag_ids IS NULL))" @c2.exclude(:artists=>[@c1.load(:tag_ids=>Sequel.pg_array([3, 4])), @c1.load(:tag_ids=>Sequel.pg_array([4, 5]))]).sql.should == "SELECT * FROM tags WHERE ((tags.id NOT IN (3, 4, 5)) OR (tags.id IS NULL))" end it "should allowing filtering/excluding associations with NULL or empty values" do @c1.filter(:tags=>@c2.new).sql.should == 'SELECT * FROM artists WHERE \'f\'' @c1.exclude(:tags=>@c2.new).sql.should == 'SELECT * FROM artists WHERE \'t\'' @c2.filter(:artists=>@c1.new).sql.should == 'SELECT * FROM tags WHERE \'f\'' @c2.exclude(:artists=>@c1.new).sql.should == 'SELECT * FROM tags WHERE \'t\'' @c2.filter(:artists=>@c1.load(:tag_ids=>[])).sql.should == 'SELECT * FROM tags WHERE \'f\'' @c2.exclude(:artists=>@c1.load(:tag_ids=>[])).sql.should == 'SELECT * FROM tags WHERE \'t\'' @c1.filter(:tags=>[@c2.new, @c2.load(:id=>2)]).sql.should == "SELECT * FROM artists WHERE (artists.tag_ids && ARRAY[2])" @c2.filter(:artists=>[@c1.load(:tag_ids=>Sequel.pg_array([3, 4])), @c1.new]).sql.should == "SELECT * FROM tags WHERE (tags.id IN (3, 4))" end it "should allowing filtering by association datasets" do @c1.filter(:tags=>@c2.where(:id=>1)).sql.should == "SELECT * FROM artists WHERE coalesce((artists.tag_ids && (SELECT array_agg(tags.id) FROM tags WHERE (id = 1))), 'f')" @c2.filter(:artists=>@c1.where(:id=>1)).sql.should == "SELECT * FROM tags WHERE (tags.id IN (SELECT unnest(artists.tag_ids) FROM artists WHERE (id = 1)))" end it "should allowing excluding by association datasets" do @c1.exclude(:tags=>@c2.where(:id=>1)).sql.should == "SELECT * FROM artists WHERE (NOT coalesce((artists.tag_ids && (SELECT array_agg(tags.id) FROM tags WHERE (id = 1))), 'f') OR (artists.tag_ids IS NULL))" @c2.exclude(:artists=>@c1.where(:id=>1)).sql.should == "SELECT * FROM tags WHERE ((tags.id NOT IN (SELECT unnest(artists.tag_ids) FROM artists WHERE (id = 1))) OR (tags.id IS NULL))" end it "filter by associations should respect key options" do @c1.class_eval{def tag3_ids; tag_ids.map{|x| x*3} end} @c1.pg_array_to_many :tags, :clone=>:tags, :primary_key=>Sequel.*(:id, 3), :primary_key_method=>:id3, :key=>:tag3_ids, :key_column=>Sequel.pg_array(:tag_ids)[1..2] @c2.many_to_pg_array :artists, :clone=>:artists, :primary_key=>Sequel.*(:id, 3), :primary_key_method=>:id3, :key=>:tag3_ids, :key_column=>Sequel.pg_array(:tag_ids)[1..2] @c1.filter(:tags=>@o2).sql.should == "SELECT * FROM artists WHERE (artists.tag_ids[1:2] @> ARRAY[6])" @c2.filter(:artists=>@o1).sql.should == "SELECT * FROM tags WHERE ((tags.id * 3) IN (3, 6, 9))" 
@c1.filter(:tags=>@c2.where(:id=>1)).sql.should == "SELECT * FROM artists WHERE coalesce((artists.tag_ids[1:2] && (SELECT array_agg((tags.id * 3)) FROM tags WHERE (id = 1))), 'f')" @c2.filter(:artists=>@c1.where(:id=>1)).sql.should == "SELECT * FROM tags WHERE ((tags.id * 3) IN (SELECT unnest(artists.tag_ids[1:2]) FROM artists WHERE (id = 1)))" end it "should support a :key option" do @c1.pg_array_to_many :tags, :clone=>:tags, :key=>:tag2_ids @c2.many_to_pg_array :artists, :clone=>:artists, :key=>:tag2_ids @c1.class_eval{def tag2_ids; tag_ids.map{|x| x * 2} end} @o1.tags_dataset.sql.should == "SELECT * FROM tags WHERE (tags.id IN (2, 4, 6))" @o2.artists_dataset.sql.should == "SELECT * FROM artists WHERE (artists.tag2_ids @> ARRAY[2])" end it "should support a :key_column option" do @c2.many_to_pg_array :artists, :clone=>:artists, :key_column=>Sequel.pg_array(:tag_ids)[1..2], :key=>:tag2_ids @o2.artists_dataset.sql.should == "SELECT * FROM artists WHERE (artists.tag_ids[1:2] @> ARRAY[2])" end it "should support a :primary_key option" do @c1.pg_array_to_many :tags, :clone=>:tags, :primary_key=>:id2 @c2.many_to_pg_array :artists, :clone=>:artists, :primary_key=>:id2 @o1.tags_dataset.sql.should == "SELECT * FROM tags WHERE (tags.id2 IN (1, 2, 3))" @c2.class_eval{def id2; id*2 end} @o2.artists_dataset.sql.should == "SELECT * FROM artists WHERE (artists.tag_ids @> ARRAY[4])" end it "should support a :conditions option" do @c1.pg_array_to_many :tags, :clone=>:tags, :conditions=>{:a=>1} @c2.many_to_pg_array :artists, :clone=>:artists, :conditions=>{:a=>1} @o1.tags_dataset.sql.should == "SELECT * FROM tags WHERE ((a = 1) AND (tags.id IN (1, 2, 3)))" @o2.artists_dataset.sql.should == "SELECT * FROM artists WHERE ((a = 1) AND (artists.tag_ids @> ARRAY[2]))" end it "should support an :order option" do @c1.pg_array_to_many :tags, :clone=>:tags, :order=>[:a, :b] @c2.many_to_pg_array :artists, :clone=>:artists, :order=>[:a, :b] @o1.tags_dataset.sql.should == "SELECT * FROM tags WHERE (tags.id IN (1, 2, 3)) ORDER BY a, b" @o2.artists_dataset.sql.should == "SELECT * FROM artists WHERE (artists.tag_ids @> ARRAY[2]) ORDER BY a, b" end it "should support a select option" do @c1.pg_array_to_many :tags, :clone=>:tags, :select=>[:a, :b] @c2.many_to_pg_array :artists, :clone=>:artists, :select=>[:a, :b] @c1.load(:tag_ids=>Sequel.pg_array([1,2,3])).tags_dataset.sql.should == "SELECT a, b FROM tags WHERE (tags.id IN (1, 2, 3))" @c2.load(:id=>1).artists_dataset.sql.should == "SELECT a, b FROM artists WHERE (artists.tag_ids @> ARRAY[1])" end it "should accept a block" do @c1.pg_array_to_many :tags, :clone=>:tags do |ds| ds.filter(:yyy=>@yyy) end @c2.many_to_pg_array :artists, :clone=>:artists do |ds| ds.filter(:a=>1) end @c1.new(:yyy=>6, :tag_ids=>Sequel.pg_array([1,2,3])).tags_dataset.sql.should == "SELECT * FROM tags WHERE ((tags.id IN (1, 2, 3)) AND (yyy = 6))" @o2.artists_dataset.sql.should == "SELECT * FROM artists WHERE ((artists.tag_ids @> ARRAY[2]) AND (a = 1))" end it "should support a :dataset option that is used instead of the default" do @c1.pg_array_to_many :tags, :clone=>:tags, :dataset=>proc{Tag.where(:id=>tag_ids.map{|x| x*2})} @c2.many_to_pg_array :artists, :clone=>:artists, :dataset=>proc{Artist.where(Sequel.pg_array(Sequel.pg_array(:tag_ids)[1..2]).contains([id]))} @o1.tags_dataset.sql.should == "SELECT * FROM tags WHERE (id IN (2, 4, 6))" @o2.artists_dataset.sql.should == "SELECT * FROM artists WHERE (tag_ids[1:2] @> ARRAY[2])" end it "should support a :limit option" do @c1.pg_array_to_many :tags, 
:clone=>:tags, :limit=>[2, 3] @c2.many_to_pg_array :artists, :clone=>:artists, :limit=>[3, 2] @o1.tags_dataset.sql.should == "SELECT * FROM tags WHERE (tags.id IN (1, 2, 3)) LIMIT 2 OFFSET 3" @o2.artists_dataset.sql.should == "SELECT * FROM artists WHERE (artists.tag_ids @> ARRAY[2]) LIMIT 3 OFFSET 2" end it "should support a :uniq option that removes duplicates from the association" do @c1.pg_array_to_many :tags, :clone=>:tags, :uniq=>true @c2.many_to_pg_array :artists, :clone=>:artists, :uniq=>true @c1.dataset._fetch = [{:id=>20}, {:id=>30}, {:id=>20}, {:id=>30}] @c2.dataset._fetch = [{:id=>20}, {:id=>30}, {:id=>20}, {:id=>30}] @o1.tags.should == [@c2.load(:id=>20), @c2.load(:id=>30)] @o2.artists.should == [@c1.load(:id=>20), @c1.load(:id=>30)] end it "reflection associated_object_keys should return correct values" do @c1.association_reflection(:tags).associated_object_keys.should == [:id] @c2.association_reflection(:artists).associated_object_keys.should == [:tag_ids] end it "reflection remove_before_destroy? should return correct values" do @c1.association_reflection(:tags).remove_before_destroy?.should be_true @c2.association_reflection(:artists).remove_before_destroy?.should be_false end it "reflection reciprocal should be correct" do @c1.association_reflection(:tags).reciprocal.should == :artists @c2.association_reflection(:artists).reciprocal.should == :tags end it "should eagerly load correctly" do a = @c1.eager(:tags).all a.should == [@o1] sqls = DB.sqls sqls.pop.should =~ /SELECT \* FROM tags WHERE \(tags\.id IN \([123], [123], [123]\)\)/ sqls.should == ["SELECT * FROM artists"] a.first.tags.should == [@o2] DB.sqls.should == [] a = @c2.eager(:artists).all a.should == [@o2] DB.sqls.should == ['SELECT * FROM tags', "SELECT * FROM artists WHERE (artists.tag_ids && ARRAY[2])"] a.first.artists.should == [@o1] DB.sqls.should == [] end it "should support using custom key options when eager loading associations" do @c1.class_eval{def tag3_ids; tag_ids.map{|x| x*3} end} @c1.pg_array_to_many :tags, :clone=>:tags, :primary_key=>Sequel.*(:id, 3), :primary_key_method=>:id3, :key=>:tag3_ids @c2.many_to_pg_array :artists, :clone=>:artists, :primary_key=>:id3, :key=>:tag3_ids, :key_column=>Sequel.pg_array(:tag_ids)[1..2] a = @c1.eager(:tags).all a.should == [@o1] sqls = DB.sqls sqls.pop.should =~ /SELECT \* FROM tags WHERE \(\(tags\.id \* 3\) IN \([369], [369], [369]\)\)/ sqls.should == ["SELECT * FROM artists"] a.first.tags.should == [@o2] DB.sqls.should == [] a = @c2.eager(:artists).all a.should == [@o2] DB.sqls.should == ["SELECT * FROM tags", "SELECT * FROM artists WHERE (artists.tag_ids[1:2] && ARRAY[6])"] a.first.artists.should == [@o1] DB.sqls.should == [] end it "should allow cascading of eager loading for associations of associated models" do a = @c1.eager(:tags=>:artists).all a.should == [@o1] sqls = DB.sqls sqls.slice!(1).should =~ /SELECT \* FROM tags WHERE \(tags\.id IN \([123], [123], [123]\)\)/ sqls.should == ['SELECT * FROM artists', "SELECT * FROM artists WHERE (artists.tag_ids && ARRAY[2])"] a.first.tags.should == [@o2] a.first.tags.first.artists.should == [@o1] DB.sqls.should == [] end it "should respect :eager when lazily loading an association" do @c1.pg_array_to_many :tags2, :clone=>:tags, :eager=>:artists, :key=>:tag_ids @c2.many_to_pg_array :artists2, :clone=>:artists, :eager=>:tags @o1.tags2.should == [@o2] DB.sqls.should == ["SELECT * FROM tags WHERE (tags.id IN (1, 2, 3))", "SELECT * FROM artists WHERE (artists.tag_ids && ARRAY[2])"] @o1.tags2.first.artists.should == 
    @o1.tags2.first.artists.should == [@o1]
    DB.sqls.should == []
    @o2.artists2.should == [@o1]
    sqls = DB.sqls
    sqls.pop.should =~ /SELECT \* FROM tags WHERE \(tags\.id IN \([123], [123], [123]\)\)/
    sqls.should == ["SELECT * FROM artists WHERE (artists.tag_ids @> ARRAY[2])"]
    @o2.artists2.first.tags.should == [@o2]
    DB.sqls.should == []
  end

  it "should cascade eagerly loading when the :eager_graph association option is used" do
    @c1.pg_array_to_many :tags2, :clone=>:tags, :eager_graph=>:artists, :key=>:tag_ids
    @c2.many_to_pg_array :artists2, :clone=>:artists, :eager_graph=>:tags
    @c2.dataset._fetch = {:id=>2, :artists_id=>1, :tag_ids=>Sequel.pg_array([1,2,3])}
    @c1.dataset._fetch = {:id=>1, :tags_id=>2, :tag_ids=>Sequel.pg_array([1,2,3])}
    @o1.tags2.should == [@o2]
    DB.sqls.first.should =~ /SELECT tags\.id, artists\.id AS artists_id, artists\.tag_ids FROM tags LEFT OUTER JOIN artists ON \(artists.tag_ids @> ARRAY\[tags.id\]\) WHERE \(tags\.id IN \([123], [123], [123]\)\)/
    @o1.tags2.first.artists.should == [@o1]
    DB.sqls.should == []
    @o2.artists2.should == [@o1]
    DB.sqls.should == ["SELECT artists.id, artists.tag_ids, tags.id AS tags_id FROM artists LEFT OUTER JOIN tags ON (artists.tag_ids @> ARRAY[tags.id]) WHERE (artists.tag_ids @> ARRAY[2])"]
    @o2.artists2.first.tags.should == [@o2]
    DB.sqls.should == []
    @c2.dataset._fetch = {:id=>2, :artists_id=>1, :tag_ids=>Sequel.pg_array([1,2,3])}
    @c1.dataset._fetch = {:id=>1, :tag_ids=>Sequel.pg_array([1,2,3])}
    a = @c1.eager(:tags2).all
    sqls = DB.sqls
    sqls.pop.should =~ /SELECT tags\.id, artists\.id AS artists_id, artists\.tag_ids FROM tags LEFT OUTER JOIN artists ON \(artists.tag_ids @> ARRAY\[tags.id\]\) WHERE \(tags\.id IN \([123], [123], [123]\)\)/
    sqls.should == ["SELECT * FROM artists"]
    a.should == [@o1]
    a.first.tags2.should == [@o2]
    a.first.tags2.first.artists.should == [@o1]
    DB.sqls.should == []
    @c2.dataset._fetch = {:id=>2}
    @c1.dataset._fetch = {:id=>1, :tags_id=>2, :tag_ids=>Sequel.pg_array([1,2,3])}
    a = @c2.eager(:artists2).all
    DB.sqls.should == ["SELECT * FROM tags", "SELECT artists.id, artists.tag_ids, tags.id AS tags_id FROM artists LEFT OUTER JOIN tags ON (artists.tag_ids @> ARRAY[tags.id]) WHERE (artists.tag_ids && ARRAY[2])"]
    a.should == [@o2]
    a.first.artists2.should == [@o1]
    a.first.artists2.first.tags.should == [@o2]
    DB.sqls.should == []
  end

  it "should respect the :limit option when eager loading" do
    @c2.dataset._fetch = [{:id=>1},{:id=>2}, {:id=>3}]
    @c1.pg_array_to_many :tags, :clone=>:tags, :limit=>2
    a = @c1.eager(:tags).all
    a.should == [@o1]
    sqls = DB.sqls
    sqls.pop.should =~ /SELECT \* FROM tags WHERE \(tags\.id IN \([123], [123], [123]\)\)/
    sqls.should == ["SELECT * FROM artists"]
    a.first.tags.should == [@c2.load(:id=>1), @c2.load(:id=>2)]
    DB.sqls.should == []
    @c1.pg_array_to_many :tags, :clone=>:tags, :limit=>[1, 1]
    a = @c1.eager(:tags).all
    a.should == [@o1]
    sqls = DB.sqls
    sqls.pop.should =~ /SELECT \* FROM tags WHERE \(tags\.id IN \([123], [123], [123]\)\)/
    sqls.should == ["SELECT * FROM artists"]
    a.first.tags.should == [@c2.load(:id=>2)]
    DB.sqls.should == []
    @c1.pg_array_to_many :tags, :clone=>:tags, :limit=>[nil, 1]
    a = @c1.eager(:tags).all
    a.should == [@o1]
    sqls = DB.sqls
    sqls.pop.should =~ /SELECT \* FROM tags WHERE \(tags\.id IN \([123], [123], [123]\)\)/
    sqls.should == ["SELECT * FROM artists"]
    a.first.tags.should == [@c2.load(:id=>2), @c2.load(:id=>3)]
    DB.sqls.length.should == 0
    @c2.dataset._fetch = [{:id=>2}]
    @c1.dataset._fetch = [{:id=>5, :tag_ids=>Sequel.pg_array([1,2,3])},{:id=>6, :tag_ids=>Sequel.pg_array([2,3])}, {:id=>7, :tag_ids=>Sequel.pg_array([1,2])}]
    @c2.many_to_pg_array :artists, :clone=>:artists, :limit=>2
    a = @c2.eager(:artists).all
    a.should == [@o2]
    DB.sqls.should == ['SELECT * FROM tags', "SELECT * FROM artists WHERE (artists.tag_ids && ARRAY[2])"]
    a.first.artists.should == [@c1.load(:id=>5, :tag_ids=>Sequel.pg_array([1,2,3])), @c1.load(:id=>6, :tag_ids=>Sequel.pg_array([2,3]))]
    DB.sqls.should == []
    @c2.many_to_pg_array :artists, :clone=>:artists, :limit=>[1, 1]
    a = @c2.eager(:artists).all
    a.should == [@o2]
    DB.sqls.should == ['SELECT * FROM tags', "SELECT * FROM artists WHERE (artists.tag_ids && ARRAY[2])"]
    a.first.artists.should == [@c1.load(:id=>6, :tag_ids=>Sequel.pg_array([2,3]))]
    DB.sqls.should == []
    @c2.many_to_pg_array :artists, :clone=>:artists, :limit=>[nil, 1]
    a = @c2.eager(:artists).all
    a.should == [@o2]
    DB.sqls.should == ['SELECT * FROM tags', "SELECT * FROM artists WHERE (artists.tag_ids && ARRAY[2])"]
    a.first.artists.should == [@c1.load(:id=>6, :tag_ids=>Sequel.pg_array([2,3])), @c1.load(:id=>7, :tag_ids=>Sequel.pg_array([1,2]))]
    DB.sqls.should == []
  end

  it "should eagerly graph associations" do
    @c2.dataset._fetch = {:id=>2, :artists_id=>1, :tag_ids=>Sequel.pg_array([1,2,3])}
    @c1.dataset._fetch = {:id=>1, :tags_id=>2, :tag_ids=>Sequel.pg_array([1,2,3])}
    a = @c1.eager_graph(:tags).all
    DB.sqls.should == ["SELECT artists.id, artists.tag_ids, tags.id AS tags_id FROM artists LEFT OUTER JOIN tags ON (artists.tag_ids @> ARRAY[tags.id])"]
    a.should == [@o1]
    a.first.tags.should == [@o2]
    DB.sqls.should == []
    a = @c2.eager_graph(:artists).all
    DB.sqls.should == ["SELECT tags.id, artists.id AS artists_id, artists.tag_ids FROM tags LEFT OUTER JOIN artists ON (artists.tag_ids @> ARRAY[tags.id])"]
    a.should == [@o2]
    a.first.artists.should == [@o1]
    DB.sqls.should == []
  end

  it "should allow cascading of eager graphing for associations of associated models" do
    @c2.dataset._fetch = {:id=>2, :artists_id=>1, :tag_ids=>Sequel.pg_array([1,2,3]), :tags_0_id=>2}
    @c1.dataset._fetch = {:id=>1, :tags_id=>2, :tag_ids=>Sequel.pg_array([1,2,3]), :artists_0_id=>1, :artists_0_tag_ids=>Sequel.pg_array([1,2,3])}
    a = @c1.eager_graph(:tags=>:artists).all
    DB.sqls.should == ["SELECT artists.id, artists.tag_ids, tags.id AS tags_id, artists_0.id AS artists_0_id, artists_0.tag_ids AS artists_0_tag_ids FROM artists LEFT OUTER JOIN tags ON (artists.tag_ids @> ARRAY[tags.id]) LEFT OUTER JOIN artists AS artists_0 ON (artists_0.tag_ids @> ARRAY[tags.id])"]
    a.should == [@o1]
    a.first.tags.should == [@o2]
    a.first.tags.first.artists.should == [@o1]
    DB.sqls.should == []
    a = @c2.eager_graph(:artists=>:tags).all
    DB.sqls.should == ["SELECT tags.id, artists.id AS artists_id, artists.tag_ids, tags_0.id AS tags_0_id FROM tags LEFT OUTER JOIN artists ON (artists.tag_ids @> ARRAY[tags.id]) LEFT OUTER JOIN tags AS tags_0 ON (artists.tag_ids @> ARRAY[tags_0.id])"]
    a.should == [@o2]
    a.first.artists.should == [@o1]
    a.first.artists.first.tags.should == [@o2]
    DB.sqls.should == []
  end

  it "eager graphing should respect key options" do
    @c1.class_eval{def tag3_ids; tag_ids.map{|x| x*3} end}
    @c1.pg_array_to_many :tags, :clone=>:tags, :primary_key=>Sequel.*(:id, 3), :primary_key_method=>:id3, :key=>:tag3_ids, :key_column=>Sequel.pg_array(:tag_ids)[1..2]
    @c2.many_to_pg_array :artists, :clone=>:artists, :primary_key=>:id3, :key=>:tag3_ids, :key_column=>Sequel.pg_array(:tag_ids)[1..2]
    @c2.dataset._fetch = {:id=>2, :artists_id=>1, :tag_ids=>Sequel.pg_array([1,2,3]), :tags_0_id=>2}
    @c1.dataset._fetch = {:id=>1, :tags_id=>2, :tag_ids=>Sequel.pg_array([1,2,3]), :artists_0_id=>1, :artists_0_tag_ids=>Sequel.pg_array([1,2,3])}
    a = @c1.eager_graph(:tags).all
    a.should == [@o1]
    DB.sqls.should == ["SELECT artists.id, artists.tag_ids, tags.id AS tags_id FROM artists LEFT OUTER JOIN tags ON (artists.tag_ids[1:2] @> ARRAY[(tags.id * 3)])"]
    a.first.tags.should == [@o2]
    DB.sqls.should == []
    a = @c2.eager_graph(:artists).all
    a.should == [@o2]
    DB.sqls.should == ["SELECT tags.id, artists.id AS artists_id, artists.tag_ids FROM tags LEFT OUTER JOIN artists ON (artists.tag_ids[1:2] @> ARRAY[tags.id3])"]
    a.first.artists.should == [@o1]
    DB.sqls.should == []
  end

  it "should respect the association's :graph_select option" do
    @c1.pg_array_to_many :tags, :clone=>:tags, :graph_select=>:id2
    @c2.many_to_pg_array :artists, :clone=>:artists, :graph_select=>:id
    @c2.dataset._fetch = {:id=>2, :artists_id=>1}
    @c1.dataset._fetch = {:id=>1, :id2=>2, :tag_ids=>Sequel.pg_array([1,2,3])}
    a = @c1.eager_graph(:tags).all
    DB.sqls.should == ["SELECT artists.id, artists.tag_ids, tags.id2 FROM artists LEFT OUTER JOIN tags ON (artists.tag_ids @> ARRAY[tags.id])"]
    a.should == [@o1]
    a.first.tags.should == [@c2.load(:id2=>2)]
    DB.sqls.should == []
    a = @c2.eager_graph(:artists).all
    DB.sqls.should == ["SELECT tags.id, artists.id AS artists_id FROM tags LEFT OUTER JOIN artists ON (artists.tag_ids @> ARRAY[tags.id])"]
    a.should == [@o2]
    a.first.artists.should == [@c1.load(:id=>1)]
    DB.sqls.should == []
  end

  it "should respect the association's :graph_join_type option" do
    @c1.pg_array_to_many :tags, :clone=>:tags, :graph_join_type=>:inner
    @c2.many_to_pg_array :artists, :clone=>:artists, :graph_join_type=>:inner
    @c1.eager_graph(:tags).sql.should == "SELECT artists.id, artists.tag_ids, tags.id AS tags_id FROM artists INNER JOIN tags ON (artists.tag_ids @> ARRAY[tags.id])"
    @c2.eager_graph(:artists).sql.should == "SELECT tags.id, artists.id AS artists_id, artists.tag_ids FROM tags INNER JOIN artists ON (artists.tag_ids @> ARRAY[tags.id])"
  end

  it "should respect the association's :conditions option" do
    @c1.pg_array_to_many :tags, :clone=>:tags, :conditions=>{:a=>1}
    @c2.many_to_pg_array :artists, :clone=>:artists, :conditions=>{:a=>1}
    @c1.eager_graph(:tags).sql.should == "SELECT artists.id, artists.tag_ids, tags.id AS tags_id FROM artists LEFT OUTER JOIN tags ON ((tags.a = 1) AND (artists.tag_ids @> ARRAY[tags.id]))"
    @c2.eager_graph(:artists).sql.should == "SELECT tags.id, artists.id AS artists_id, artists.tag_ids FROM tags LEFT OUTER JOIN artists ON ((artists.a = 1) AND (artists.tag_ids @> ARRAY[tags.id]))"
  end

  it "should respect the association's :graph_conditions option" do
    @c1.pg_array_to_many :tags, :clone=>:tags, :graph_conditions=>{:a=>1}
    @c2.many_to_pg_array :artists, :clone=>:artists, :graph_conditions=>{:a=>1}
    @c1.eager_graph(:tags).sql.should == "SELECT artists.id, artists.tag_ids, tags.id AS tags_id FROM artists LEFT OUTER JOIN tags ON ((tags.a = 1) AND (artists.tag_ids @> ARRAY[tags.id]))"
    @c2.eager_graph(:artists).sql.should == "SELECT tags.id, artists.id AS artists_id, artists.tag_ids FROM tags LEFT OUTER JOIN artists ON ((artists.a = 1) AND (artists.tag_ids @> ARRAY[tags.id]))"
  end

  it "should respect the association's :graph_block option" do
    @c1.pg_array_to_many :tags, :clone=>:tags, :graph_block=>proc{|ja,lja,js| {Sequel.qualify(ja, :a)=>1}}
    @c2.many_to_pg_array :artists, :clone=>:artists, :graph_block=>proc{|ja,lja,js| {Sequel.qualify(ja, :a)=>1}}
    @c1.eager_graph(:tags).sql.should == "SELECT artists.id, artists.tag_ids, tags.id AS tags_id FROM artists LEFT OUTER JOIN tags ON ((tags.a = 1) AND (artists.tag_ids @> ARRAY[tags.id]))"
    @c2.eager_graph(:artists).sql.should == "SELECT tags.id, artists.id AS artists_id, artists.tag_ids FROM tags LEFT OUTER JOIN artists ON ((artists.a = 1) AND (artists.tag_ids @> ARRAY[tags.id]))"
  end

  it "should respect the association's :graph_only_conditions option" do
    @c1.pg_array_to_many :tags, :clone=>:tags, :graph_only_conditions=>{:a=>1}
    @c2.many_to_pg_array :artists, :clone=>:artists, :graph_only_conditions=>{:a=>1}
    @c1.eager_graph(:tags).sql.should == "SELECT artists.id, artists.tag_ids, tags.id AS tags_id FROM artists LEFT OUTER JOIN tags ON (tags.a = 1)"
    @c2.eager_graph(:artists).sql.should == "SELECT tags.id, artists.id AS artists_id, artists.tag_ids FROM tags LEFT OUTER JOIN artists ON (artists.a = 1)"
  end

  it "should respect the association's :graph_only_conditions with :graph_block option" do
    @c1.pg_array_to_many :tags, :clone=>:tags, :graph_only_conditions=>{:a=>1}, :graph_block=>proc{|ja,lja,js| {Sequel.qualify(lja, :b)=>1}}
    @c2.many_to_pg_array :artists, :clone=>:artists, :graph_only_conditions=>{:a=>1}, :graph_block=>proc{|ja,lja,js| {Sequel.qualify(lja, :b)=>1}}
    @c1.eager_graph(:tags).sql.should == "SELECT artists.id, artists.tag_ids, tags.id AS tags_id FROM artists LEFT OUTER JOIN tags ON ((tags.a = 1) AND (artists.b = 1))"
    @c2.eager_graph(:artists).sql.should == "SELECT tags.id, artists.id AS artists_id, artists.tag_ids FROM tags LEFT OUTER JOIN artists ON ((artists.a = 1) AND (tags.b = 1))"
  end

  it "should define an add_ method for adding associated objects" do
    @o1.add_tag(@c2.load(:id=>4))
    @o1.tag_ids.should == [1,2,3,4]
    DB.sqls.should == []
    @o1.save_changes
    DB.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[1,2,3,4] WHERE (id = 1)"]
    @o2.add_artist(@c1.load(:id=>1, :tag_ids=>Sequel.pg_array([4])))
    DB.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[4,2] WHERE (id = 1)"]
  end

  it "should define a remove_ method for removing associated objects" do
    @o1.remove_tag(@o2)
    @o1.tag_ids.should == [1,3]
    DB.sqls.should == []
    @o1.save_changes
    DB.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[1,3] WHERE (id = 1)"]
    @o2.remove_artist(@c1.load(:id=>1, :tag_ids=>Sequel.pg_array([1,2,3,4])))
    DB.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[1,3,4] WHERE (id = 1)"]
  end

  it "should define a remove_all_ method for removing all associated objects" do
    @o1.remove_all_tags
    @o1.tag_ids.should == []
    DB.sqls.should == []
    @o1.save_changes
    DB.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[] WHERE (id = 1)"]
    @o2.remove_all_artists
    DB.sqls.should == ["UPDATE artists SET tag_ids = array_remove(tag_ids, 2) WHERE (tag_ids @> ARRAY[2])"]
  end

  it "should have pg_array_to_many association modification methods save if :save_after_modify option is used" do
    @c1.pg_array_to_many :tags, :clone=>:tags, :save_after_modify=>true
    @o1.add_tag(@c2.load(:id=>4))
    @o1.tag_ids.should == [1,2,3,4]
    DB.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[1,2,3,4] WHERE (id = 1)"]
    @o1.remove_tag(@o2)
    @o1.tag_ids.should == [1,3,4]
    DB.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[1,3,4] WHERE (id = 1)"]
    @o1.remove_all_tags
    @o1.tag_ids.should == []
    DB.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[] WHERE (id = 1)"]
  end

  it "should have association modification methods deal with nil values" do
    v = @c1.load(:id=>1)
    v.add_tag(@c2.load(:id=>4))
    v.tag_ids.should == [4]
    DB.sqls.should == []
    v.save_changes
    DB.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[4]::integer[] WHERE (id = 1)"]
    @o2.add_artist(@c1.load(:id=>1))
    DB.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[2]::integer[] WHERE (id = 1)"]
    v = @c1.load(:id=>1)
    v.remove_tag(@c2.load(:id=>4))
    v.tag_ids.should == nil
    DB.sqls.should == []
    v.save_changes
    DB.sqls.should == []
    @o2.remove_artist(@c1.load(:id=>1))
    DB.sqls.should == []
    v = @c1.load(:id=>1)
    v.remove_all_tags
    v.tag_ids.should == nil
    DB.sqls.should == []
    v.save_changes
    DB.sqls.should == []
  end

  it "should have association modification methods deal with empty array values" do
    v = @c1.load(:id=>1, :tag_ids=>Sequel.pg_array([]))
    v.add_tag(@c2.load(:id=>4))
    v.tag_ids.should == [4]
    DB.sqls.should == []
    v.save_changes
    DB.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[4] WHERE (id = 1)"]
    @o2.add_artist(@c1.load(:id=>1, :tag_ids=>Sequel.pg_array([])))
    DB.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[2] WHERE (id = 1)"]
    v = @c1.load(:id=>1, :tag_ids=>Sequel.pg_array([]))
    v.remove_tag(@c2.load(:id=>4))
    v.tag_ids.should == []
    DB.sqls.should == []
    v.save_changes
    DB.sqls.should == []
    @o2.remove_artist(@c1.load(:id=>1, :tag_ids=>Sequel.pg_array([])))
    DB.sqls.should == []
    v = @c1.load(:id=>1, :tag_ids=>Sequel.pg_array([]))
    v.remove_all_tags
    v.tag_ids.should == []
    DB.sqls.should == []
    v.save_changes
    DB.sqls.should == []
  end

  it "should respect the :array_type option when manually creating arrays" do
    @c1.pg_array_to_many :tags, :clone=>:tags, :array_type=>:int8
    @c2.many_to_pg_array :artists, :clone=>:artists, :array_type=>:int8
    v = @c1.load(:id=>1)
    v.add_tag(@c2.load(:id=>4))
    v.tag_ids.should == [4]
    DB.sqls.should == []
    v.save_changes
    DB.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[4]::int8[] WHERE (id = 1)"]
    @o2.add_artist(@c1.load(:id=>1))
    DB.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[2]::int8[] WHERE (id = 1)"]
  end
end
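# A minimal usage sketch of the pg_array_associations plugin exercised by the
# spec above. The Artist/Tag models, the DB constant, and the artists.tag_ids
# integer[] column are hypothetical, not part of the spec file:
class Artist < Sequel::Model
  plugin :pg_array_associations
  pg_array_to_many :tags      # keys for tags taken from artists.tag_ids
end
class Tag < Sequel::Model
  plugin :pg_array_associations
  many_to_pg_array :artists   # reciprocal side, matched via artists.tag_ids
end
Artist.first.tags_dataset.sql   # SELECT * FROM tags WHERE (tags.id IN (...))
Tag.first.artists_dataset.sql   # SELECT * FROM artists WHERE (artists.tag_ids @> ARRAY[...])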
ruby-sequel-4.1.1/spec/extensions/pg_array_ops_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")

Sequel.extension :pg_array, :pg_array_ops, :pg_hstore, :pg_hstore_ops

describe "Sequel::Postgres::ArrayOp" do
  before do
    @db = Sequel.connect('mock://postgres', :quote_identifiers=>false)
    @a = Sequel.pg_array_op(:a)
  end

  it "should support the standard mathematical operators" do
    @db.literal(@a < @a).should == "(a < a)"
    @db.literal(@a <= @a).should == "(a <= a)"
    @db.literal(@a > @a).should == "(a > a)"
    @db.literal(@a >= @a).should == "(a >= a)"
  end

  it "#[] should support subscript access" do
    @db.literal(@a[1]).should == "a[1]"
    @db.literal(@a[1][2]).should == "a[1][2]"
  end

  it "#[] with a range should return an ArrayOp" do
    @db.literal(@a[1..2].any).should == "ANY(a[1:2])"
  end

  it "#any should use the ANY method" do
    @db.literal(1=>@a.any).should == "(1 = ANY(a))"
  end

  it "#all should use the ALL method" do
    @db.literal(1=>@a.all).should == "(1 = ALL(a))"
  end

  it "#contains should use the @> operator" do
    @db.literal(@a.contains(:b)).should == "(a @> b)"
  end

  it "#contained_by should use the <@ operator" do
    @db.literal(@a.contained_by(:b)).should == "(a <@ b)"
  end

  it "#overlaps should use the && operator" do
    @db.literal(@a.overlaps(:b)).should == "(a && b)"
  end

  it "#push/concat should use the || operator in append mode" do
    @db.literal(@a.push(:b)).should == "(a || b)"
    @db.literal(@a.concat(:b)).should == "(a || b)"
  end

  it "#remove should remove the element from the array" do
    @db.literal(@a.remove(1)).should == "array_remove(a, 1)"
    @db.literal(@a.remove(1)[2]).should == "array_remove(a, 1)[2]"
  end

  it "#replace should replace the element in the array with another" do
    @db.literal(@a.replace(1, 2)).should == "array_replace(a, 1, 2)"
    @db.literal(@a.replace(1, 2)[3]).should == "array_replace(a, 1, 2)[3]"
  end

  it "#unshift should use the || operator in prepend mode" do
    @db.literal(@a.unshift(:b)).should == "(b || a)"
  end

  it "#dims should use the array_dims function" do
    @db.literal(@a.dims).should == "array_dims(a)"
  end

  it "#length should use the array_length function" do
    @db.literal(@a.length).should == "array_length(a, 1)"
    @db.literal(@a.length(2)).should == "array_length(a, 2)"
  end

  it "#lower should use the array_lower function" do
    @db.literal(@a.lower).should == "array_lower(a, 1)"
    @db.literal(@a.lower(2)).should == "array_lower(a, 2)"
  end

  it "#to_string/join should use the array_to_string function" do
    @db.literal(@a.to_string).should == "array_to_string(a, '', NULL)"
    @db.literal(@a.join).should == "array_to_string(a, '', NULL)"
    @db.literal(@a.join(':')).should == "array_to_string(a, ':', NULL)"
    @db.literal(@a.join(':', '*')).should == "array_to_string(a, ':', '*')"
  end

  it "#hstore should convert the item to an hstore using the hstore function" do
    @db.literal(@a.hstore).should == "hstore(a)"
    @db.literal(@a.hstore['a']).should == "(hstore(a) -> 'a')"
    @db.literal(@a.hstore(:b)).should == "hstore(a, b)"
    @db.literal(@a.hstore(:b)['a']).should == "(hstore(a, b) -> 'a')"
    @db.literal(@a.hstore(%w'1')).should == "hstore(a, ARRAY['1'])"
    @db.literal(@a.hstore(%w'1')['a']).should == "(hstore(a, ARRAY['1']) -> 'a')"
  end

  it "#unnest should use the unnest function" do
    @db.literal(@a.unnest).should == "unnest(a)"
  end

  it "#pg_array should return self" do
    @a.pg_array.should equal(@a)
  end

  it "Sequel.pg_array_op should return arg for ArrayOp" do
    Sequel.pg_array_op(@a).should equal(@a)
  end

  it "should be able to turn expressions into array ops using pg_array" do
    @db.literal(Sequel.qualify(:b, :a).pg_array.push(3)).should == "(b.a || 3)"
    @db.literal(Sequel.function(:a, :b).pg_array.push(3)).should == "(a(b) || 3)"
  end

  it "should be able to turn literal strings into array ops using pg_array" do
    @db.literal(Sequel.lit('a').pg_array.unnest).should == "unnest(a)"
  end

  it "should be able to turn symbols into array ops using Sequel.pg_array_op" do
    @db.literal(Sequel.pg_array_op(:a).unnest).should == "unnest(a)"
  end

  it "should be able to turn symbols into array ops using Sequel.pg_array" do
    @db.literal(Sequel.pg_array(:a).unnest).should == "unnest(a)"
  end

  it "should allow transforming PGArray instances into ArrayOp instances" do
    @db.literal(Sequel.pg_array([1,2]).op.push(3)).should == "(ARRAY[1,2] || 3)"
  end

  it "should wrap array arguments in PGArrays" do
    @db.literal(@a.contains([1, 2])).should == "(a @> ARRAY[1,2])"
    @db.literal(@a.contained_by([1, 2])).should == "(a <@ ARRAY[1,2])"
    @db.literal(@a.overlaps([1, 2])).should == "(a && ARRAY[1,2])"
    @db.literal(@a.push([1, 2])).should == "(a || ARRAY[1,2])"
    @db.literal(@a.concat([1, 2])).should == "(a || ARRAY[1,2])"
    @db.literal(@a.unshift([1, 2])).should == "(ARRAY[1,2] || a)"
  end
end
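# A minimal usage sketch for the pg_array_ops DSL specced above. The DB
# constant and the items table with an integer[] tags column are hypothetical:
Sequel.extension :pg_array, :pg_array_ops
tags = Sequel.pg_array_op(:tags)
DB[:items].where(tags.contains([1, 2])).sql  # ... WHERE (tags @> ARRAY[1,2])
DB[:items].where(tags.overlaps([1, 2])).sql  # ... WHERE (tags && ARRAY[1,2])
DB[:items].select(tags.length).sql           # SELECT array_length(tags, 1) ...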
ruby-sequel-4.1.1/spec/extensions/pg_array_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")

describe "pg_array extension" do
  before(:all) do
    Sequel.extension :pg_array
    @pg_types = Sequel::Postgres::PG_TYPES.dup
    @pg_named_types = Sequel::Postgres::PG_NAMED_TYPES.dup
  end
  after(:all) do
    Sequel::Postgres::PG_TYPES.replace(@pg_types)
    Sequel::Postgres::PG_NAMED_TYPES.replace(@pg_named_types)
  end
  before do
    @db = Sequel.connect('mock://postgres', :quote_identifiers=>false)
    @db.extend_datasets(Module.new{def supports_timestamp_timezones?; false; end; def supports_timestamp_usecs?; false; end})
    @db.extension(:pg_array)
    @m = Sequel::Postgres
    @converter = @m::PG_TYPES
    @db.sqls
  end

  it "should parse single dimensional text arrays" do
    c = @converter[1009]
    c.call("{a}").to_a.first.should be_a_kind_of(String)
    c.call("{}").to_a.should == []
    c.call("{a}").to_a.should == ['a']
    c.call('{"a b"}').to_a.should == ['a b']
    c.call('{a,b}').to_a.should == ['a', 'b']
  end

  it "should parse multi-dimensional text arrays" do
    c = @converter[1009]
    c.call("{{}}").to_a.should == [[]]
    c.call("{{a},{b}}").to_a.should == [['a'], ['b']]
    c.call('{{"a b"},{c}}').to_a.should == [['a b'], ['c']]
    c.call('{{{a},{b}},{{c},{d}}}').to_a.should == [[['a'], ['b']], [['c'], ['d']]]
    c.call('{{{a,e},{b,f}},{{c,g},{d,h}}}').to_a.should == [[['a', 'e'], ['b', 'f']], [['c', 'g'], ['d', 'h']]]
  end

  it "should parse text arrays with embedded delimiters" do
    c = @converter[1009]
    c.call('{{"{},","\\",\\,\\\\\\"\\""}}').to_a.should == [['{},', '",,\\""']]
  end

  it "should parse single dimensional integer arrays" do
    c = @converter[1007]
    c.call("{1}").to_a.first.should be_a_kind_of(Integer)
    c.call("{}").to_a.should == []
    c.call("{1}").to_a.should == [1]
    c.call('{2,3}').to_a.should == [2, 3]
    c.call('{3,4,5}').to_a.should == [3, 4, 5]
  end

  it "should parse multiple dimensional integer arrays" do
    c = @converter[1007]
    c.call("{{}}").to_a.should == [[]]
    c.call("{{1}}").to_a.should == [[1]]
    c.call('{{2},{3}}').to_a.should == [[2], [3]]
    c.call('{{{1,2},{3,4}},{{5,6},{7,8}}}').to_a.should == [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
  end

  it "should parse single dimensional float arrays" do
    c = @converter[1022]
    c.call("{}").to_a.should == []
    c.call("{1.5}").to_a.should == [1.5]
    c.call('{2.5,3.5}').to_a.should == [2.5, 3.5]
    c.call('{3.5,4.5,5.5}').to_a.should == [3.5, 4.5, 5.5]
  end

  it "should parse multiple dimensional float arrays" do
    c = @converter[1022]
    c.call("{{}}").to_a.should == [[]]
    c.call("{{1.5}}").to_a.should == [[1.5]]
    c.call('{{2.5},{3.5}}').to_a.should == [[2.5], [3.5]]
    c.call('{{{1.5,2.5},{3.5,4.5}},{{5.5,6.5},{7.5,8.5}}}').to_a.should == [[[1.5, 2.5], [3.5, 4.5]], [[5.5, 6.5], [7.5, 8.5]]]
  end

  it "should parse integers in float arrays as floats" do
    c = @converter[1022]
    c.call("{1}").to_a.first.should be_a_kind_of(Float)
    c.call("{1}").to_a.should == [1.0]
    c.call('{{{1,2},{3,4}},{{5,6},{7,8}}}').to_a.should == [[[1.0, 2.0], [3.0, 4.0]],
      [[5.0, 6.0], [7.0, 8.0]]]
  end

  it "should parse single dimensional decimal arrays" do
    c = @converter[1231]
    c.call("{}").to_a.should == []
    c.call("{1.5}").to_a.should == [BigDecimal.new('1.5')]
    c.call('{2.5,3.5}').to_a.should == [BigDecimal.new('2.5'), BigDecimal.new('3.5')]
    c.call('{3.5,4.5,5.5}').to_a.should == [BigDecimal.new('3.5'), BigDecimal.new('4.5'), BigDecimal.new('5.5')]
  end

  it "should parse multiple dimensional decimal arrays" do
    c = @converter[1231]
    c.call("{{}}").to_a.should == [[]]
    c.call("{{1.5}}").to_a.should == [[BigDecimal.new('1.5')]]
    c.call('{{2.5},{3.5}}').to_a.should == [[BigDecimal.new('2.5')], [BigDecimal.new('3.5')]]
    c.call('{{{1.5,2.5},{3.5,4.5}},{{5.5,6.5},{7.5,8.5}}}').to_a.should == [[[BigDecimal.new('1.5'), BigDecimal.new('2.5')], [BigDecimal.new('3.5'), BigDecimal.new('4.5')]], [[BigDecimal.new('5.5'), BigDecimal.new('6.5')], [BigDecimal.new('7.5'), BigDecimal.new('8.5')]]]
  end

  it "should parse decimal values with arbitrary precision" do
    c = @converter[1231]
    c.call("{1.000000000000000000005}").to_a.should == [BigDecimal.new('1.000000000000000000005')]
    c.call("{{1.000000000000000000005,2.000000000000000000005},{3.000000000000000000005,4.000000000000000000005}}").to_a.should == [[BigDecimal.new('1.000000000000000000005'), BigDecimal.new('2.000000000000000000005')], [BigDecimal.new('3.000000000000000000005'), BigDecimal.new('4.000000000000000000005')]]
  end

  it "should parse integers in decimal arrays as BigDecimals" do
    c = @converter[1231]
    c.call("{1}").to_a.first.should be_a_kind_of(BigDecimal)
    c.call("{1}").to_a.should == [BigDecimal.new('1')]
    c.call('{{{1,2},{3,4}},{{5,6},{7,8}}}').to_a.should == [[[BigDecimal.new('1'), BigDecimal.new('2')], [BigDecimal.new('3'), BigDecimal.new('4')]], [[BigDecimal.new('5'), BigDecimal.new('6')], [BigDecimal.new('7'), BigDecimal.new('8')]]]
  end

  it "should parse arrays with NULL values" do
    @converter.values_at(1007, 1009, 1022, 1231).each do |c|
      c.call("{NULL}").should == [nil]
      c.call("{NULL,NULL}").should == [nil,nil]
      c.call("{{NULL,NULL},{NULL,NULL}}").should == [[nil,nil],[nil,nil]]
    end
  end

  it 'should parse arrays with "NULL" values' do
    c = @converter[1009]
    c.call('{NULL,"NULL",NULL}').to_a.should == [nil, "NULL", nil]
    c.call('{NULLA,"NULL",NULL}').to_a.should == ["NULLA", "NULL", nil]
  end

  it "should literalize arrays without types correctly" do
    @db.literal(@m::PGArray.new([])).should == 'ARRAY[]'
    @db.literal(@m::PGArray.new([1])).should == 'ARRAY[1]'
    @db.literal(@m::PGArray.new([nil])).should == 'ARRAY[NULL]'
    @db.literal(@m::PGArray.new([nil, 1])).should == 'ARRAY[NULL,1]'
    @db.literal(@m::PGArray.new([1.0, 2.5])).should == 'ARRAY[1.0,2.5]'
    @db.literal(@m::PGArray.new([BigDecimal.new('1'), BigDecimal.new('2.000000000000000000005')])).should == 'ARRAY[1.0,2.000000000000000000005]'
    @db.literal(@m::PGArray.new([nil, "NULL"])).should == "ARRAY[NULL,'NULL']"
    @db.literal(@m::PGArray.new([nil, "{},[]'\""])).should == "ARRAY[NULL,'{},[]''\"']"
  end

  it "should literalize multidimensional arrays correctly" do
    @db.literal(@m::PGArray.new([[]])).should == 'ARRAY[[]]'
    @db.literal(@m::PGArray.new([[1, 2]])).should == 'ARRAY[[1,2]]'
    @db.literal(@m::PGArray.new([[3], [5]])).should == 'ARRAY[[3],[5]]'
    @db.literal(@m::PGArray.new([[[1.0]], [[2.5]]])).should == 'ARRAY[[[1.0]],[[2.5]]]'
    @db.literal(@m::PGArray.new([[[["NULL"]]]])).should == "ARRAY[[[['NULL']]]]"
    @db.literal(@m::PGArray.new([["a", "b"], ["{},[]'\"", nil]])).should == "ARRAY[['a','b'],['{},[]''\"',NULL]]"
  end

  it "should literalize with types correctly" do
    @db.literal(@m::PGArray.new([1], :int4)).should == 'ARRAY[1]::int4[]'
    @db.literal(@m::PGArray.new([nil], :text)).should == 'ARRAY[NULL]::text[]'
    @db.literal(@m::PGArray.new([nil, 1], :int8)).should == 'ARRAY[NULL,1]::int8[]'
    @db.literal(@m::PGArray.new([1.0, 2.5], :real)).should == 'ARRAY[1.0,2.5]::real[]'
    @db.literal(@m::PGArray.new([BigDecimal.new('1'), BigDecimal.new('2.000000000000000000005')], :decimal)).should == 'ARRAY[1.0,2.000000000000000000005]::decimal[]'
    @db.literal(@m::PGArray.new([nil, "NULL"], :varchar)).should == "ARRAY[NULL,'NULL']::varchar[]"
    @db.literal(@m::PGArray.new([nil, "{},[]'\""], :"varchar(255)")).should == "ARRAY[NULL,'{},[]''\"']::varchar(255)[]"
  end

  it "should have Sequel.pg_array method for easy PGArray creation" do
    @db.literal(Sequel.pg_array([1])).should == 'ARRAY[1]'
    @db.literal(Sequel.pg_array([1, 2], :int4)).should == 'ARRAY[1,2]::int4[]'
    @db.literal(Sequel.pg_array([[[1], [2]], [[3], [4]]], :real)).should == 'ARRAY[[[1],[2]],[[3],[4]]]::real[]'
  end

  it "should have Sequel.pg_array return existing PGArrays as-is" do
    a = Sequel.pg_array([1])
    Sequel.pg_array(a).should equal(a)
  end

  it "should have Sequel.pg_array create a new PGArray if type of existing does not match" do
    a = Sequel.pg_array([1], :int4)
    b = Sequel.pg_array(a, :int8)
    a.should == b
    a.should_not equal(b)
    a.array_type.should == :int4
    b.array_type.should == :int8
  end

  it "should support using arrays as bound variables" do
    @db.bound_variable_arg(1, nil).should == 1
    @db.bound_variable_arg(Sequel.pg_array([1,2]), nil).should == '{1,2}'
    @db.bound_variable_arg([1,2], nil).should == '{1,2}'
    @db.bound_variable_arg([[1,2]], nil).should == '{{1,2}}'
    @db.bound_variable_arg([1.0,2.0], nil).should == '{1.0,2.0}'
    @db.bound_variable_arg([Sequel.lit('a'), Sequel.blob("a\0'\"")], nil).should == '{a,"a\\\\000\\\\047\\""}'
    @db.bound_variable_arg(["\\ \"", 'NULL', nil], nil).should == '{"\\\\ \\"","NULL",NULL}'
  end

  it "should parse array types from the schema correctly" do
    @db.fetch = [{:name=>'id', :db_type=>'integer'}, {:name=>'i', :db_type=>'integer[]'}, {:name=>'f', :db_type=>'real[]'}, {:name=>'d', :db_type=>'numeric[]'}, {:name=>'t', :db_type=>'text[]'}]
    @db.schema(:items).map{|e| e[1][:type]}.should == [:integer, :integer_array, :float_array, :decimal_array, :string_array]
  end

  it "should support typecasting of the various array types" do
    {
      :integer=>{:class=>Integer, :convert=>['1', 1, '1']},
      :float=>{:db_type=>'double precision', :class=>Float, :convert=>['1.1', 1.1, '1.1']},
      :decimal=>{:db_type=>'numeric', :class=>BigDecimal, :convert=>['1.00000000000000000000000001', BigDecimal.new('1.00000000000000000000000001'), '1.00000000000000000000000001']},
      :string=>{:db_type=>'text', :class=>String, :convert=>[1, '1', "'1'"]},
      :bigint=>{:class=>Integer, :convert=>['1', 1, '1']},
      :boolean=>{:class=>TrueClass, :convert=>['t', true, 'true']},
      :blob=>{:db_type=>'bytea', :class=>Sequel::SQL::Blob, :convert=>['1', '1', "'1'"]},
      :date=>{:class=>Date, :convert=>['2011-10-12', Date.new(2011, 10, 12), "'2011-10-12'"]},
      :time=>{:db_type=>'time without time zone', :class=>Sequel::SQLTime, :convert=>['01:02:03', Sequel::SQLTime.create(1, 2, 3), "'01:02:03'"]},
      :datetime=>{:db_type=>'timestamp without time zone', :class=>Time, :convert=>['2011-10-12 01:02:03', Time.local(2011, 10, 12, 1, 2, 3), "'2011-10-12 01:02:03'"]},
      :time_timezone=>{:db_type=>'time with time zone', :class=>Sequel::SQLTime, :convert=>['01:02:03', Sequel::SQLTime.create(1, 2, 3), "'01:02:03'"]},
      :datetime_timezone=>{:db_type=>'timestamp with time zone',
        :class=>Time, :convert=>['2011-10-12 01:02:03', Time.local(2011, 10, 12, 1, 2, 3), "'2011-10-12 01:02:03'"]},
    }.each do |type, h|
      meth = :"#{type}_array"
      db_type = h[:db_type]||type
      klass = h[:class]
      array_in, value, output = h[:convert]
      [[array_in]].each do |input|
        v = @db.typecast_value(meth, input)
        v.should == [value]
        v.first.should be_a_kind_of(klass)
        v.array_type.should_not be_nil
        @db.typecast_value(meth, Sequel.pg_array([value])).should == v
        @db.typecast_value(meth, v).should equal(v)
      end
      [[[array_in]]].each do |input|
        v = @db.typecast_value(meth, input)
        v.should == [[value]]
        v.first.first.should be_a_kind_of(klass)
        v.array_type.should_not be_nil
        @db.typecast_value(meth, Sequel.pg_array([[value]])).should == v
        @db.typecast_value(meth, v).should equal(v)
      end
      @db.literal(@db.typecast_value(meth, [array_in])).should == "ARRAY[#{output}]::#{db_type}[]"
      @db.literal(@db.typecast_value(meth, [])).should == "ARRAY[]::#{db_type}[]"
    end
    proc{@db.typecast_value(:integer_array, {})}.should raise_error(Sequel::InvalidValue)
  end

  it "should support SQL::AliasMethods" do
    @db.select(Sequel.pg_array([1], :integer).as(:col1)).sql.should == 'SELECT ARRAY[1]::integer[] AS col1'
  end

  it "should support registering custom array types" do
    Sequel::Postgres::PGArray.register('foo')
    @db.typecast_value(:foo_array, []).should be_a_kind_of(Sequel::Postgres::PGArray)
    @db.fetch = [{:name=>'id', :db_type=>'foo[]'}]
    @db.schema(:items).map{|e| e[1][:type]}.should == [:foo_array]
  end

  it "should support registering custom types with :type_symbol option" do
    Sequel::Postgres::PGArray.register('foo', :type_symbol=>:bar)
    @db.typecast_value(:bar_array, []).should be_a_kind_of(Sequel::Postgres::PGArray)
    @db.fetch = [{:name=>'id', :db_type=>'foo[]'}]
    @db.schema(:items).map{|e| e[1][:type]}.should == [:bar_array]
  end

  it "should support using a block as a custom conversion proc given as block" do
    Sequel::Postgres::PGArray.register('foo', :oid=>1234){|s| (s*2).to_i}
    @converter[1234].call('{1}').should == [11]
  end

  it "should support using a block as a custom conversion proc given as :converter option" do
    Sequel::Postgres::PGArray.register('foo', :oid=>1234, :converter=>proc{|s| (s*2).to_i})
    @converter[1234].call('{1}').should == [11]
  end

  it "should support using an existing scalar conversion proc via the :scalar_oid option" do
    Sequel::Postgres::PGArray.register('foo', :oid=>1234, :scalar_oid=>16)
    @converter[1234].call('{t}').should == [true]
  end

  it "should support using a given conversion procs hash via the :type_procs option" do
    h = {16=>proc{|s| "!#{s}"}}
    Sequel::Postgres::PGArray.register('foo', :oid=>1234, :scalar_oid=>16, :type_procs=>h)
    h[1234].call('{t}').should == ["!t"]
  end

  it "should support adding methods to the given module via the :typecast_methods_module option" do
    m = Module.new
    Sequel::Postgres::PGArray.register('foo15', :scalar_typecast=>:boolean, :typecast_methods_module=>m)
    @db.typecast_value(:foo15_array, ['t']).should == ['t']
    @db.extend(m)
    @db.typecast_value(:foo15_array, ['t']).should == [true]
  end

  it "should raise an error if using :scalar_oid option with nonexistent scalar conversion proc" do
    proc{Sequel::Postgres::PGArray.register('foo', :scalar_oid=>0)}.should raise_error(Sequel::Error)
  end

  it "should raise an error if using :converter option and a block argument" do
    proc{Sequel::Postgres::PGArray.register('foo', :converter=>proc{}){}}.should raise_error(Sequel::Error)
  end

  it "should raise an error if using :scalar_oid option and a block argument" do
    proc{Sequel::Postgres::PGArray.register('foo',
      :scalar_oid=>16){}}.should raise_error(Sequel::Error)
  end

  it "should support registering custom types with :oid option" do
    Sequel::Postgres::PGArray.register('foo', :oid=>1)
    Sequel::Postgres::PG_TYPES[1].call('{1}').should be_a_kind_of(Sequel::Postgres::PGArray)
  end

  it "should support registering custom types with :parser=>:json option" do
    Sequel::Postgres::PGArray.register('foo', :oid=>2, :parser=>:json)
    Sequel::Postgres::PG_TYPES[2].should be_a_kind_of(Sequel::Postgres::PGArray::JSONCreator)
  end

  it "should support registering converters with :parser=>:json option and blocks" do
    Sequel::Postgres::PGArray.register('foo', :oid=>4, :parser=>:json){|s| s * 2}
    Sequel::Postgres::PG_TYPES[4].call('{{1, 2}, {3, 4}}').should == [[2, 4], [6, 8]]
  end

  it "should support registering custom types with :array_type option" do
    Sequel::Postgres::PGArray.register('foo', :oid=>3, :array_type=>:blah)
    @db.literal(Sequel::Postgres::PG_TYPES[3].call('{}')).should == 'ARRAY[]::blah[]'
  end

  it "should use and not override existing database typecast method if :typecast_method option is given" do
    Sequel::Postgres::PGArray.register('foo', :typecast_method=>:float)
    @db.fetch = [{:name=>'id', :db_type=>'foo[]'}]
    @db.schema(:items).map{|e| e[1][:type]}.should == [:float_array]
  end

  it "should support registering custom array types on a per-Database basis" do
    @db.register_array_type('banana', :oid=>7865){|s| s}
    @db.typecast_value(:banana_array, []).should be_a_kind_of(Sequel::Postgres::PGArray)
    @db.fetch = [{:name=>'id', :db_type=>'banana[]'}]
    @db.schema(:items).map{|e| e[1][:type]}.should == [:banana_array]
    @db.conversion_procs.should have_key(7865)
    @db.respond_to?(:typecast_value_banana_array, true).should be_true
    db = Sequel.connect('mock://postgres', :quote_identifiers=>false)
    db.extend_datasets(Module.new{def supports_timestamp_timezones?; false; end; def supports_timestamp_usecs?; false; end})
    db.extension(:pg_array)
    db.fetch = [{:name=>'id', :db_type=>'banana[]'}]
    db.schema(:items).map{|e| e[1][:type]}.should == [nil]
    db.conversion_procs.should_not have_key(7865)
    db.respond_to?(:typecast_value_banana_array, true).should be_false
  end

  it "should automatically look up the array and scalar oids when registering per-Database types" do
    @db.fetch = [[{:oid=>21, :typarray=>7866}], [{:name=>'id', :db_type=>'banana[]'}]]
    @db.register_array_type('banana', :scalar_typecast=>:integer)
    @db.sqls.should == ["SELECT typarray, oid FROM pg_type WHERE (typname = 'banana') LIMIT 1"]
    @db.schema(:items).map{|e| e[1][:type]}.should == [:banana_array]
    @db.conversion_procs[7866].call("{1,2}").should == [1,2]
    @db.typecast_value(:banana_array, %w'1 2').should == [1,2]
  end

  it "should not automatically look up oids if given both scalar and array oids" do
    @db.register_array_type('banana', :oid=>7866, :scalar_oid=>21, :scalar_typecast=>:integer)
    @db.sqls.should == []
    @db.conversion_procs[7866].call("{1,2}").should == [1,2]
    @db.typecast_value(:banana_array, %w'1 2').should == [1,2]
  end

  it "should not automatically look up oids if given array oid and block" do
    @db.register_array_type('banana', :oid=>7866, :scalar_typecast=>:integer){|s| s.to_i}
    @db.sqls.should == []
    @db.conversion_procs[7866].call("{1,2}").should == [1,2]
    @db.typecast_value(:banana_array, %w'1 2').should == [1,2]
  end

  it "should set appropriate timestamp conversion procs when resetting conversion procs" do
    Sequel::Postgres::PG_NAMED_TYPES[:foo] = proc{|v| v*2}
    @db.fetch = [[{:oid=>2222, :typname=>'foo'}], [{:oid=>2222, :typarray=>2223, :typname=>'foo'}]]
    @db.reset_conversion_procs
    procs = @db.conversion_procs
    procs[1185].call('{"2011-10-20 11:12:13"}').should == [Time.local(2011, 10, 20, 11, 12, 13)]
    procs[1115].call('{"2011-10-20 11:12:13"}').should == [Time.local(2011, 10, 20, 11, 12, 13)]
    procs[2222].call('1').should == '11'
    procs[2223].call('{"2"}').should == ['22']
  end

  it "should return correct results for Database#schema_type_class" do
    @db.register_array_type('banana', :oid=>7866, :scalar_typecast=>:integer){|s| s.to_i}
    @db.schema_type_class(:banana_array).should == Sequel::Postgres::PGArray
    @db.schema_type_class(:integer).should == Integer
  end
end
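# A short sketch of pg_array usage matching the expectations above (the DB
# constant is a hypothetical connection):
DB.extension :pg_array
DB.literal(Sequel.pg_array([1, 2], :int4))  # => "ARRAY[1,2]::int4[]"
# Per-Database registration of a custom element type, as specced above
# ('banana' and the oid are illustrative values only):
DB.register_array_type('banana', :oid=>7865){|s| s}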
ruby-sequel-4.1.1/spec/extensions/pg_hstore_ops_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")

Sequel.extension :pg_array, :pg_array_ops, :pg_hstore, :pg_hstore_ops

describe "Sequel::Postgres::HStoreOp" do
  before do
    @ds = Sequel.connect('mock://postgres', :quote_identifiers=>false).dataset
    @h = Sequel.hstore_op(:h)
  end

  it "#- should use the - operator" do
    @ds.literal(@h - :a).should == "(h - a)"
  end

  it "#- should cast String argument to text when using - operator" do
    @ds.literal(@h - 'a').should == "(h - CAST('a' AS text))"
  end

  it "#- should not cast LiteralString argument to text when using - operator" do
    @ds.literal(@h - Sequel.lit('a')).should == "(h - a)"
  end

  it "#- should handle arrays" do
    @ds.literal(@h - %w'a').should == "(h - ARRAY['a'])"
  end

  it "#- should handle hashes" do
    @ds.literal(@h - {'a'=>'b'}).should == "(h - '\"a\"=>\"b\"'::hstore)"
  end

  it "#- should return an HStoreOp" do
    @ds.literal((@h - :a)['a']).should == "((h - a) -> 'a')"
  end

  it "#[] should use the -> operator" do
    @ds.literal(@h['a']).should == "(h -> 'a')"
  end

  it "#[] should handle arrays" do
    @ds.literal(@h[%w'a']).should == "(h -> ARRAY['a'])"
  end

  it "#[] should return a PGArrayOp if given an array" do
    @ds.literal(@h[%w'a'][0]).should == "(h -> ARRAY['a'])[0]"
  end

  it "#[] should not return a PGArrayOp if given an array but pg_array_op is not supported" do
    begin
      module Sequel::Postgres::HStoreOp::Sequel
        SQL = ::Sequel::SQL
      end
      @ds.literal(@h[%w'a']).should_not be_a_kind_of(Sequel::Postgres::ArrayOp)
    ensure
      Sequel::Postgres::HStoreOp.send(:remove_const, :Sequel)
    end
  end

  it "#[] should return a PGArrayOp if given a PGArray" do
    @ds.literal(@h[Sequel.pg_array(%w'a')][0]).should == "(h -> ARRAY['a'])[0]"
  end

  it "#[] should return a PGArrayOp if given a PGArrayOp" do
    @ds.literal(@h[Sequel.pg_array_op(:a)][0]).should == "(h -> a)[0]"
  end

  it "#[] should return a string expression" do
    @ds.literal(@h['a'] + 'b').should == "((h -> 'a') || 'b')"
  end

  it "#concat and #merge should use the || operator" do
    @ds.literal(@h.concat(:h1)).should == "(h || h1)"
    @ds.literal(@h.merge(:h1)).should == "(h || h1)"
  end

  it "#concat and #merge should handle hashes" do
    @ds.literal(@h.concat('a'=>'b')).should == "(h || '\"a\"=>\"b\"'::hstore)"
    @ds.literal(@h.merge('a'=>'b')).should == "(h || '\"a\"=>\"b\"'::hstore)"
  end

  it "#concat should return an HStoreOp" do
    @ds.literal(@h.concat(:h1)['a']).should == "((h || h1) -> 'a')"
  end

  it "#contain_all should use the ?& operator" do
    @ds.literal(@h.contain_all(:h1)).should == "(h ?& h1)"
  end

  it "#contain_all should handle arrays" do
    @ds.literal(@h.contain_all(%w'h1')).should == "(h ?& ARRAY['h1'])"
  end

  it "#contain_any should use the ?| operator" do
    @ds.literal(@h.contain_any(:h1)).should == "(h ?| h1)"
  end

  it "#contain_any should handle arrays" do
    @ds.literal(@h.contain_any(%w'h1')).should == "(h ?| ARRAY['h1'])"
  end

  it "#contains should use the @> operator" do
    @ds.literal(@h.contains(:h1)).should == "(h @> h1)"
  end

  it "#contains should handle hashes" do
    @ds.literal(@h.contains('a'=>'b')).should == "(h @> '\"a\"=>\"b\"'::hstore)"
  end

  it "#contained_by should use the <@ operator" do
    @ds.literal(@h.contained_by(:h1)).should == "(h <@ h1)"
  end

  it "#contained_by should handle hashes" do
    @ds.literal(@h.contained_by('a'=>'b')).should == "(h <@ '\"a\"=>\"b\"'::hstore)"
  end

  it "#defined should use the defined function" do
    @ds.literal(@h.defined('a')).should == "defined(h, 'a')"
  end

  it "#delete should use the delete function" do
    @ds.literal(@h.delete('a')).should == "delete(h, 'a')"
  end

  it "#delete should handle arrays" do
    @ds.literal(@h.delete(%w'a')).should == "delete(h, ARRAY['a'])"
  end

  it "#delete should handle hashes" do
    @ds.literal(@h.delete('a'=>'b')).should == "delete(h, '\"a\"=>\"b\"'::hstore)"
  end

  it "#delete should return an HStoreOp" do
    @ds.literal(@h.delete('a')['a']).should == "(delete(h, 'a') -> 'a')"
  end

  it "#each should use the each function" do
    @ds.literal(@h.each).should == "each(h)"
  end

  it "#has_key? and aliases should use the ? operator" do
    @ds.literal(@h.has_key?('a')).should == "(h ? 'a')"
    @ds.literal(@h.key?('a')).should == "(h ? 'a')"
    @ds.literal(@h.member?('a')).should == "(h ? 'a')"
    @ds.literal(@h.include?('a')).should == "(h ? 'a')"
    @ds.literal(@h.exist?('a')).should == "(h ? 'a')"
  end

  it "#hstore should return the receiver" do
    @h.hstore.should equal(@h)
  end

  it "#keys and #akeys should use the akeys function" do
    @ds.literal(@h.keys).should == "akeys(h)"
    @ds.literal(@h.akeys).should == "akeys(h)"
  end

  it "#keys and #akeys should return PGArrayOps" do
    @ds.literal(@h.keys[0]).should == "akeys(h)[0]"
    @ds.literal(@h.akeys[0]).should == "akeys(h)[0]"
  end

  it "#populate should use the populate_record function" do
    @ds.literal(@h.populate(:a)).should == "populate_record(a, h)"
  end

  it "#record_set should use the #= operator" do
    @ds.literal(@h.record_set(:a)).should == "(a #= h)"
  end

  it "#skeys should use the skeys function" do
    @ds.literal(@h.skeys).should == "skeys(h)"
  end

  it "#slice should use the slice function" do
    @ds.literal(@h.slice(:a)).should == "slice(h, a)"
  end

  it "#slice should handle arrays" do
    @ds.literal(@h.slice(%w'a')).should == "slice(h, ARRAY['a'])"
  end

  it "#slice should return an HStoreOp" do
    @ds.literal(@h.slice(:a)['a']).should == "(slice(h, a) -> 'a')"
  end

  it "#svals should use the svals function" do
    @ds.literal(@h.svals).should == "svals(h)"
  end

  it "#to_array should use the hstore_to_array function" do
    @ds.literal(@h.to_array).should == "hstore_to_array(h)"
  end

  it "#to_array should return a PGArrayOp" do
    @ds.literal(@h.to_array[0]).should == "hstore_to_array(h)[0]"
  end

  it "#to_matrix should use the hstore_to_matrix function" do
    @ds.literal(@h.to_matrix).should == "hstore_to_matrix(h)"
  end

  it "#to_matrix should return a PGArrayOp" do
    @ds.literal(@h.to_matrix[0]).should == "hstore_to_matrix(h)[0]"
  end

  it "#values and #avals should use the avals function" do
    @ds.literal(@h.values).should == "avals(h)"
    @ds.literal(@h.avals).should == "avals(h)"
  end

  it "#values and #avals should return PGArrayOps" do
    @ds.literal(@h.values[0]).should == "avals(h)[0]"
    @ds.literal(@h.avals[0]).should == "avals(h)[0]"
  end

  it "should have Sequel.hstore_op return HStoreOp instances as-is" do
    Sequel.hstore_op(@h).should equal(@h)
  end

  it "should have Sequel.hstore return HStoreOp instances" do
    Sequel.hstore(:h).should == @h
  end

  it "should be able to turn expressions into hstore ops using hstore" do
    @ds.literal(Sequel.qualify(:b, :a).hstore['a']).should == "(b.a -> 'a')"
    @ds.literal(Sequel.function(:a, :b).hstore['a']).should == "(a(b) -> 'a')"
  end

  it "should be able to turn literal strings into hstore ops using hstore" do
    @ds.literal(Sequel.lit('a').hstore['a']).should == "(a -> 'a')"
  end

  it "should allow transforming HStore instances into HStoreOp instances" do
    @ds.literal(Sequel.hstore('a'=>'b').op['a']).should == "('\"a\"=>\"b\"'::hstore -> 'a')"
  end
end
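# A minimal sketch of the hstore operator DSL specced above (the DB constant
# and the items table with an hstore attrs column are hypothetical):
Sequel.extension :pg_hstore, :pg_hstore_ops
attrs = Sequel.hstore_op(:attrs)
DB[:items].where(attrs.has_key?('color')).sql     # ... WHERE (attrs ? 'color')
DB[:items].select(attrs['color'].as(:color)).sql  # SELECT (attrs -> 'color') AS color ...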
ruby-sequel-4.1.1/spec/extensions/pg_hstore_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")

describe "pg_hstore extension" do
  before do
    Sequel.extension :pg_array, :pg_hstore
    @db = Sequel.connect('mock://postgres', :quote_identifiers=>false)
    @m = Sequel::Postgres
    @c = @m::HStore
    @db.extension :pg_hstore
  end

  it "should parse hstore strings correctly" do
    @c.parse('').to_hash.should == {}
    @c.parse('"a"=>"b"').to_hash.should == {'a'=>'b'}
    @c.parse('"a"=>"b", "c"=>NULL').to_hash.should == {'a'=>'b', 'c'=>nil}
    @c.parse('"a"=>"b", "c"=>"NULL"').to_hash.should == {'a'=>'b', 'c'=>'NULL'}
    @c.parse('"a"=>"b", "c"=>"\\\\ \\"\'=>"').to_hash.should == {'a'=>'b', 'c'=>'\ "\'=>'}
  end

  it "should cache parse results" do
    r = @c::Parser.new('')
    o = r.parse
    o.should == {}
    r.parse.should equal(o)
  end

  it "should literalize HStores to strings correctly" do
    @db.literal(Sequel.hstore({})).should == '\'\'::hstore'
    @db.literal(Sequel.hstore("a"=>"b")).should == '\'"a"=>"b"\'::hstore'
    @db.literal(Sequel.hstore("c"=>nil)).should == '\'"c"=>NULL\'::hstore'
    @db.literal(Sequel.hstore("c"=>'NULL')).should == '\'"c"=>"NULL"\'::hstore'
    @db.literal(Sequel.hstore('c'=>'\ "\'=>')).should == '\'"c"=>"\\\\ \\"\'\'=>"\'::hstore'
    ['\'"a"=>"b","c"=>"d"\'::hstore', '\'"c"=>"d","a"=>"b"\'::hstore'].should include(@db.literal(Sequel.hstore("a"=>"b","c"=>"d")))
  end

  it "should have Sequel.hstore method for creating HStore instances" do
    Sequel.hstore({}).should be_a_kind_of(@c)
  end

  it "should have Sequel.hstore return HStores as-is" do
    a = Sequel.hstore({})
    Sequel.hstore(a).should equal(a)
  end

  it "should have HStore#to_hash method for getting underlying hash" do
    Sequel.hstore({}).to_hash.should be_a_kind_of(Hash)
  end

  it "should convert keys and values to strings on creation" do
    Sequel.hstore(1=>2).to_hash.should == {"1"=>"2"}
  end

  it "should convert keys and values to strings on assignment" do
    v = Sequel.hstore({})
    v[1] = 2
    v.to_hash.should == {"1"=>"2"}
    v.store(:'1', 3)
    v.to_hash.should == {"1"=>"3"}
  end

  it "should not convert nil values to strings on creation" do
    Sequel.hstore(:foo=>nil).to_hash.should == {"foo"=>nil}
  end

  it "should not convert nil values to strings on assignment" do
    v = Sequel.hstore({})
    v[:foo] = nil
    v.to_hash.should == {"foo"=>nil}
  end

  it "should convert lookups by key to string" do
    Sequel.hstore('foo'=>'bar')[:foo].should == 'bar'
    Sequel.hstore('1'=>'bar')[1].should == 'bar'
    Sequel.hstore('foo'=>'bar').fetch(:foo).should == 'bar'
    Sequel.hstore('foo'=>'bar').fetch(:foo2, 2).should == 2
    k = nil
    Sequel.hstore('foo2'=>'bar').fetch(:foo){|key| k = key }.should == 'foo'
    k.should == 'foo'
    Sequel.hstore('foo'=>'bar').has_key?(:foo).should be_true
    Sequel.hstore('foo'=>'bar').has_key?(:bar).should be_false
    Sequel.hstore('foo'=>'bar').key?(:foo).should be_true
    Sequel.hstore('foo'=>'bar').key?(:bar).should be_false
    Sequel.hstore('foo'=>'bar').member?(:foo).should be_true
    Sequel.hstore('foo'=>'bar').member?(:bar).should be_false
    Sequel.hstore('foo'=>'bar').include?(:foo).should be_true
    Sequel.hstore('foo'=>'bar').include?(:bar).should be_false
    Sequel.hstore('foo'=>'bar', '1'=>'2').values_at(:foo3, :foo, :foo2, 1).should == [nil, 'bar', nil, '2']
    if RUBY_VERSION >= '1.9.0'
      Sequel.hstore('foo'=>'bar').assoc(:foo).should == ['foo', 'bar']
      Sequel.hstore('foo'=>'bar').assoc(:foo2).should == nil
    end
  end

  it "should convert has_value?/value? lookups to string" do
    Sequel.hstore('foo'=>'bar').has_value?(:bar).should be_true
    Sequel.hstore('foo'=>'bar').has_value?(:foo).should be_false
    Sequel.hstore('foo'=>'bar').value?(:bar).should be_true
    Sequel.hstore('foo'=>'bar').value?(:foo).should be_false
  end
  it "should handle nil values in has_value?/value? lookups" do
    Sequel.hstore('foo'=>'').has_value?('').should be_true
    Sequel.hstore('foo'=>'').has_value?(nil).should be_false
    Sequel.hstore('foo'=>nil).has_value?(nil).should be_true
  end

  it "should have underlying hash convert lookups by key to string" do
    Sequel.hstore('foo'=>'bar').to_hash[:foo].should == 'bar'
    Sequel.hstore('1'=>'bar').to_hash[1].should == 'bar'
  end

  if RUBY_VERSION >= '1.9.0'
    it "should convert key lookups to string" do
      Sequel.hstore('foo'=>'bar').key(:bar).should == 'foo'
      Sequel.hstore('foo'=>'bar').key(:bar2).should be_nil
    end

    it "should handle nil values in key lookups" do
      Sequel.hstore('foo'=>'').key('').should == 'foo'
      Sequel.hstore('foo'=>'').key(nil).should == nil
      Sequel.hstore('foo'=>nil).key(nil).should == 'foo'
    end

    it "should convert rassoc lookups to string" do
      Sequel.hstore('foo'=>'bar').rassoc(:bar).should == ['foo', 'bar']
      Sequel.hstore('foo'=>'bar').rassoc(:bar2).should be_nil
    end

    it "should handle nil values in rassoc lookups" do
      Sequel.hstore('foo'=>'').rassoc('').should == ['foo', '']
      Sequel.hstore('foo'=>'').rassoc(nil).should == nil
      Sequel.hstore('foo'=>nil).rassoc(nil).should == ['foo', nil]
    end
  end

  it "should have delete convert key to string" do
    v = Sequel.hstore('foo'=>'bar')
    v.delete(:foo).should == 'bar'
    v.to_hash.should == {}
  end

  it "should handle #replace with hashes that do not use strings" do
    v = Sequel.hstore('foo'=>'bar')
    v.replace(:bar=>1)
    v.should be_a_kind_of(@c)
    v.should == {'bar'=>'1'}
    v.to_hash[:bar].should == '1'
  end

  it "should handle #merge with hashes that do not use strings" do
    v = Sequel.hstore('foo'=>'bar').merge(:bar=>1)
    v.should be_a_kind_of(@c)
    v.should == {'foo'=>'bar', 'bar'=>'1'}
  end

  it "should handle #merge!/#update with hashes that do not use strings" do
    v = Sequel.hstore('foo'=>'bar')
    v.merge!(:bar=>1)
    v.should be_a_kind_of(@c)
    v.should == {'foo'=>'bar', 'bar'=>'1'}
    v = Sequel.hstore('foo'=>'bar')
    v.update(:bar=>1)
    v.should be_a_kind_of(@c)
    v.should == {'foo'=>'bar', 'bar'=>'1'}
  end

  it "should support using hstores as bound variables" do
    @db.bound_variable_arg(1, nil).should == 1
    @db.bound_variable_arg({'1'=>'2'}, nil).should == '"1"=>"2"'
    @db.bound_variable_arg(Sequel.hstore('1'=>'2'), nil).should == '"1"=>"2"'
    @db.bound_variable_arg(Sequel.hstore('1'=>nil), nil).should == '"1"=>NULL'
    @db.bound_variable_arg(Sequel.hstore('1'=>"NULL"), nil).should == '"1"=>"NULL"'
    @db.bound_variable_arg(Sequel.hstore('1'=>"'\\ \"=>"), nil).should == '"1"=>"\'\\\\ \\"=>"'
    ['"a"=>"b","c"=>"d"', '"c"=>"d","a"=>"b"'].should include(@db.bound_variable_arg(Sequel.hstore("a"=>"b","c"=>"d"), nil))
  end

  it "should parse hstore type from the schema correctly" do
    @db.fetch = [{:name=>'id', :db_type=>'integer'}, {:name=>'i', :db_type=>'hstore'}]
    @db.schema(:items).map{|e| e[1][:type]}.should == [:integer, :hstore]
  end

  it "should support typecasting for the hstore type" do
    h = Sequel.hstore(1=>2)
    @db.typecast_value(:hstore, h).should equal(h)
    @db.typecast_value(:hstore, {}).should be_a_kind_of(@c)
    @db.typecast_value(:hstore, {}).should == Sequel.hstore({})
    @db.typecast_value(:hstore, {'a'=>'b'}).should == Sequel.hstore("a"=>"b")
    proc{@db.typecast_value(:hstore, [])}.should raise_error(Sequel::InvalidValue)
  end

  it "should be serializable" do
    v = Sequel.hstore('foo'=>'bar')
    dump = Marshal.dump(v)
    Marshal.load(dump).should == v
  end

  it "should return correct results for Database#schema_type_class" do
    @db.schema_type_class(:hstore).should == Sequel::Postgres::HStore
    @db.schema_type_class(:integer).should == Integer
  end
end
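# A short sketch of HStore creation and literalization, matching the
# expectations above (the DB constant is a hypothetical connection):
DB.extension :pg_hstore
DB.literal(Sequel.hstore('a'=>'b'))  # => "'\"a\"=>\"b\"'::hstore"
Sequel.hstore('a'=>'b')[:a]          # => "b" (keys/values are stored as strings)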
ruby-sequel-4.1.1/spec/extensions/pg_inet_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")

describe "pg_inet extension" do
  ipv6_broken = (IPAddr.new('::1'); false) rescue true

  before do
    @db = Sequel.connect('mock://postgres', :quote_identifiers=>false)
    @db.extension(:pg_array, :pg_inet)
  end

  it "should literalize IPAddr v4 instances to strings correctly" do
    @db.literal(IPAddr.new('127.0.0.1')).should == "'127.0.0.1/32'"
    @db.literal(IPAddr.new('127.0.0.0/8')).should == "'127.0.0.0/8'"
  end

  it "should literalize IPAddr v6 instances to strings correctly" do
    @db.literal(IPAddr.new('2001:4f8:3:ba::/64')).should == "'2001:4f8:3:ba::/64'"
    @db.literal(IPAddr.new('2001:4f8:3:ba:2e0:81ff:fe22:d1f1')).should == "'2001:4f8:3:ba:2e0:81ff:fe22:d1f1/128'"
  end unless ipv6_broken

  it "should not affect literalization of custom objects" do
    o = Object.new
    def o.sql_literal(ds) 'v' end
    @db.literal(o).should == 'v'
  end

  it "should support using IPAddr as bound variables" do
    @db.bound_variable_arg(1, nil).should == 1
    @db.bound_variable_arg(IPAddr.new('127.0.0.1'), nil).should == '127.0.0.1/32'
  end

  it "should support using IPAddr instances in array types in bound variables" do
    @db.bound_variable_arg(Sequel.pg_array([IPAddr.new('127.0.0.1')]), nil).should == '{"127.0.0.1/32"}'
  end

  it "should parse inet/cidr type from the schema correctly" do
    @db.fetch = [{:name=>'id', :db_type=>'integer'}, {:name=>'i', :db_type=>'inet'}, {:name=>'c', :db_type=>'cidr'}]
    @db.schema(:items).map{|e| e[1][:type]}.should == [:integer, :ipaddr, :ipaddr]
  end

  it "should support typecasting for the ipaddr type" do
    ip = IPAddr.new('127.0.0.1')
    @db.typecast_value(:ipaddr, ip).should equal(ip)
    @db.typecast_value(:ipaddr, ip.to_s).should == ip
    proc{@db.typecast_value(:ipaddr, '')}.should raise_error(Sequel::InvalidValue)
    proc{@db.typecast_value(:ipaddr, 1)}.should raise_error(Sequel::InvalidValue)
  end

  it "should return correct results for Database#schema_type_class" do
    @db.schema_type_class(:ipaddr).should == IPAddr
    @db.schema_type_class(:integer).should == Integer
  end
end
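# A minimal sketch of the pg_inet behavior specced above (the DB constant is
# a hypothetical connection):
require 'ipaddr'
DB.extension :pg_inet
DB.literal(IPAddr.new('127.0.0.1'))       # => "'127.0.0.1/32'"
DB.typecast_value(:ipaddr, '10.0.0.0/8')  # => IPAddr instance for 10.0.0.0/8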
ruby-sequel-4.1.1/spec/extensions/pg_interval_spec.rb000066400000000000000000000111601220156535500227770ustar00rootroot00000000000000
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")
begin
  require 'active_support/duration'
rescue LoadError => exc
  skip_warn "pg_interval plugin: can't load active_support/duration (#{exc.class}: #{exc})"
else
describe "pg_interval extension" do
  before do
    @db = Sequel.connect('mock://postgres', :quote_identifiers=>false)
    @db.extension(:pg_array, :pg_interval)
  end
  it "should literalize ActiveSupport::Duration instances to strings correctly" do
    @db.literal(ActiveSupport::Duration.new(0, [])).should == "'0'::interval"
    @db.literal(ActiveSupport::Duration.new(0, [[:seconds, 0]])).should == "'0'::interval"
    @db.literal(ActiveSupport::Duration.new(0, [[:seconds, 10], [:minutes, 20], [:days, 3], [:months, 4], [:years, 6]])).should == "'6 years 4 months 3 days 20 minutes 10 seconds '::interval"
    @db.literal(ActiveSupport::Duration.new(0, [[:seconds, -10.000001], [:minutes, -20], [:days, -3], [:months, -4], [:years, -6]])).should == "'-6 years -4 months -3 days -20 minutes -10.000001 seconds '::interval"
  end
  it "should literalize ActiveSupport::Duration instances with repeated parts correctly" do
    @db.literal(ActiveSupport::Duration.new(0, [[:seconds, 2], [:seconds, 1]])).should == "'3 seconds '::interval"
    @db.literal(ActiveSupport::Duration.new(0, [[:seconds, 2], [:seconds, 1], [:days, 1], [:days, 4]])).should == "'5 days 3 seconds '::interval"
  end
  it "should not affect literalization of custom objects" do
    o = Object.new
    def o.sql_literal(ds) 'v' end
    @db.literal(o).should == 'v'
  end
  it "should support using ActiveSupport::Duration instances as bound variables" do
    @db.bound_variable_arg(1, nil).should == 1
    @db.bound_variable_arg(ActiveSupport::Duration.new(0, [[:seconds, 0]]), nil).should == '0'
    @db.bound_variable_arg(ActiveSupport::Duration.new(0, [[:seconds, -10.000001], [:minutes, -20], [:days, -3], [:months, -4], [:years, -6]]), nil).should == '-6 years -4 months -3 days -20 minutes -10.000001 seconds '
  end
  it "should support using ActiveSupport::Duration instances in array types in bound variables" do
    @db.bound_variable_arg(Sequel.pg_array([ActiveSupport::Duration.new(0, [[:seconds, 0]])]), nil).should == '{"0"}'
    @db.bound_variable_arg(Sequel.pg_array([ActiveSupport::Duration.new(0, [[:seconds, -10.000001], [:minutes, -20], [:days, -3], [:months, -4], [:years, -6]])]), nil).should == '{"-6 years -4 months -3 days -20 minutes -10.000001 seconds "}'
  end
  it "should parse interval type from the schema correctly" do
    @db.fetch = [{:name=>'id', :db_type=>'integer'}, {:name=>'i', :db_type=>'interval'}]
    @db.schema(:items).map{|e| e[1][:type]}.should == [:integer, :interval]
  end
  it "should support typecasting for the interval type" do
    d = ActiveSupport::Duration.new(31557600 + 2*86400*30 + 3*86400*7 + 4*86400 + 5*3600 + 6*60 + 7, [[:years, 1], [:months, 2], [:days, 25], [:seconds, 18367]])
    @db.typecast_value(:interval, d).object_id.should == d.object_id
    @db.typecast_value(:interval, "1 year 2 mons 25 days 05:06:07").is_a?(ActiveSupport::Duration).should be_true
    @db.typecast_value(:interval, "1 year 2 mons 25 days 05:06:07").should == d
    @db.typecast_value(:interval, "1 year 2 mons 25 days 05:06:07").parts.sort_by{|k,v| k.to_s}.should == d.parts.sort_by{|k,v| k.to_s}
    @db.typecast_value(:interval, "1 year 2 mons 25 days 05:06:07.0").parts.sort_by{|k,v| k.to_s}.should == d.parts.sort_by{|k,v| k.to_s}
    @db.typecast_value(:interval, "1 year 2 mons 25 days 5 hours 6 mins 7 secs").is_a?(ActiveSupport::Duration).should be_true
    @db.typecast_value(:interval, "1 year 2 mons 25 days 5 hours 6 mins 7 secs").should == d
    @db.typecast_value(:interval, "1 year 2 mons 25 days 5 hours 6 mins 7 secs").parts.sort_by{|k,v| k.to_s}.should == d.parts.sort_by{|k,v| k.to_s}
    @db.typecast_value(:interval, "1 year 2 mons 25 days 5 hours 6 mins 7.0 secs").parts.sort_by{|k,v| k.to_s}.should == d.parts.sort_by{|k,v| k.to_s}
    d2 = ActiveSupport::Duration.new(1, [[:seconds, 1]])
    @db.typecast_value(:interval, 1).is_a?(ActiveSupport::Duration).should be_true
    @db.typecast_value(:interval, 1).should == d2
    @db.typecast_value(:interval, 1).parts.sort_by{|k,v| k.to_s}.should == d2.parts.sort_by{|k,v| k.to_s}
    proc{@db.typecast_value(:interval, 'foo')}.should raise_error(Sequel::InvalidValue)
    proc{@db.typecast_value(:interval, Object.new)}.should raise_error(Sequel::InvalidValue)
  end
  it "should return correct results for Database#schema_type_class" do
    @db.schema_type_class(:interval).should == ActiveSupport::Duration
    @db.schema_type_class(:integer).should == Integer
  end
end
end
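# Sketch of the pg_interval round trip specified above; assumes activesupport
# is available, as the spec itself does, and that this two-argument
# ActiveSupport::Duration.new(value, parts) signature applies.
require 'sequel'
require 'active_support/duration'
db = Sequel.connect('mock://postgres')
db.extension :pg_interval
dur = ActiveSupport::Duration.new(90, [[:minutes, 1], [:seconds, 30]])
db.literal(dur) # => "'1 minutes 30 seconds '::interval"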
ruby-sequel-4.1.1/spec/extensions/pg_json_ops_spec.rb000066400000000000000000000107711220156535500230140ustar00rootroot00000000000000
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")
Sequel.extension :pg_array, :pg_array_ops, :pg_json, :pg_json_ops
describe "Sequel::Postgres::JSONOp" do
  before do
    @db = Sequel.connect('mock://postgres', :quote_identifiers=>false)
    @j = Sequel.pg_json_op(:j)
    @l = proc{|o| @db.literal(o)}
  end
  it "should have #[] get the element" do
    @l[@j[1]].should == "(j -> 1)"
    @l[@j['a']].should == "(j -> 'a')"
  end
  it "should have #[] accept an array" do
    @l[@j[%w'a b']].should == "(j #> ARRAY['a','b'])"
    @l[@j[Sequel.pg_array(%w'a b')]].should == "(j #> ARRAY['a','b'])"
    @l[@j[Sequel.pg_array(:a)]].should == "(j #> a)"
  end
  it "should have #[] return a JSONOp" do
    @l[@j[1][2]].should == "((j -> 1) -> 2)"
    @l[@j[%w'a b'][2]].should == "((j #> ARRAY['a','b']) -> 2)"
  end
  it "should have #get be an alias to #[]" do
    @l[@j.get(1)].should == "(j -> 1)"
    @l[@j.get(%w'a b')].should == "(j #> ARRAY['a','b'])"
  end
  it "should have #get_text get the element as text" do
    @l[@j.get_text(1)].should == "(j ->> 1)"
    @l[@j.get_text('a')].should == "(j ->> 'a')"
  end
  it "should have #get_text accept an array" do
    @l[@j.get_text(%w'a b')].should == "(j #>> ARRAY['a','b'])"
    @l[@j.get_text(Sequel.pg_array(%w'a b'))].should == "(j #>> ARRAY['a','b'])"
    @l[@j.get_text(Sequel.pg_array(:a))].should == "(j #>> a)"
  end
  it "should have #get_text return an SQL::StringExpression" do
    @l[@j.get_text(1) + 'a'].should == "((j ->> 1) || 'a')"
    @l[@j.get_text(%w'a b') + 'a'].should == "((j #>> ARRAY['a','b']) || 'a')"
  end
  it "should have #array_length use the json_array_length function" do
    @l[@j.array_length].should == "json_array_length(j)"
  end
  it "should have #array_length return a numeric expression" do
    @l[@j.array_length & 1].should == "(json_array_length(j) & 1)"
  end
  it "should have #each use the json_each function" do
    @l[@j.each].should == "json_each(j)"
  end
  it "should have #each_text use the json_each_text function" do
    @l[@j.each_text].should == "json_each_text(j)"
  end
  it "should have #extract use the json_extract_path function" do
    @l[@j.extract('a')].should == "json_extract_path(j, 'a')"
    @l[@j.extract('a', 'b')].should == "json_extract_path(j, 'a', 'b')"
  end
  it "should have #extract return a JSONOp" do
    @l[@j.extract('a')[1]].should == "(json_extract_path(j, 'a') -> 1)"
  end
  it "should have #extract_text use the json_extract_path_text function" do
    @l[@j.extract_text('a')].should == "json_extract_path_text(j, 'a')"
    @l[@j.extract_text('a', 'b')].should == "json_extract_path_text(j, 'a', 'b')"
  end
  it "should have #extract_text return an SQL::StringExpression" do
    @l[@j.extract_text('a') + 'a'].should == "(json_extract_path_text(j, 'a') || 'a')"
  end
  it "should have #keys use the json_object_keys function" do
    @l[@j.keys].should == "json_object_keys(j)"
  end
  it "should have #array_elements use the json_array_elements function" do
    @l[@j.array_elements].should == "json_array_elements(j)"
  end
  it "should have #populate use the json_populate_record function" do
    @l[@j.populate(:a)].should == "json_populate_record(a, j)"
  end
  it "should have #populate_set use the json_populate_recordset function" do
    @l[@j.populate_set(:a)].should == "json_populate_recordset(a, j)"
  end
  it "#pg_json should return self" do
    @j.pg_json.should equal(@j)
  end
  it "Sequel.pg_json_op should return arg for JSONOp" do
    Sequel.pg_json_op(@j).should equal(@j)
  end
  it "should be able to turn expressions into json ops using pg_json" do
    @db.literal(Sequel.qualify(:b, :a).pg_json[1]).should == "(b.a -> 1)"
    @db.literal(Sequel.function(:a, :b).pg_json[1]).should == "(a(b) -> 1)"
  end
  it "should be able to turn literal strings into json ops using pg_json" do
    @db.literal(Sequel.lit('a').pg_json[1]).should == "(a -> 1)"
  end
  it "should be able to turn symbols into json ops using Sequel.pg_json_op" do
    @db.literal(Sequel.pg_json_op(:a)[1]).should == "(a -> 1)"
  end
  it "should be able to turn symbols into json ops using Sequel.pg_json" do
    @db.literal(Sequel.pg_json(:a)[1]).should == "(a -> 1)"
  end
  it "should allow transforming JSONArray instances into JSONOp instances" do
    @db.literal(Sequel.pg_json([1,2]).op[1]).should == "('[1,2]'::json -> 1)"
  end
  it "should allow transforming JSONHash instances into JSONOp instances" do
    @db.literal(Sequel.pg_json('a'=>1).op['a']).should == "('{\"a\":1}'::json -> 'a')"
  end
end
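# Sketch of the operator-building style tested above: pg_json_ops only builds
# SQL fragments, so it works against the mock adapter; the :data column name
# is an assumption.
require 'sequel'
Sequel.extension :pg_json_ops
db = Sequel.connect('mock://postgres')
j = Sequel.pg_json_op(:data)
db.literal(j['a'])         # => "(data -> 'a')"
db.literal(j.array_length) # => "json_array_length(data)"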
ruby-sequel-4.1.1/spec/extensions/pg_json_spec.rb000066400000000000000000000113131220156535500221240ustar00rootroot00000000000000
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")
Sequel.extension :pg_array, :pg_json
describe "pg_json extension" do
  before(:all) do
    m = Sequel::Postgres
    @m = m::JSONDatabaseMethods
    @hc = m::JSONHash
    @ac = m::JSONArray
    # Create subclass in correct namespace for easily overriding methods
    j = m::JSON = JSON.dup
    j.instance_eval do
      Parser = JSON::Parser
      alias old_parse parse
      def parse(s)
        return 1 if s == '1'
        old_parse(s)
      end
    end
  end
  before do
    @db = Sequel.connect('mock://postgres', :quote_identifiers=>false)
    @db.extension(:pg_array, :pg_json)
  end
  it "should parse json strings correctly" do
    @m.parse_json('[]').should be_a_kind_of(@ac)
    @m.parse_json('[]').to_a.should == []
    @m.parse_json('[1]').to_a.should == [1]
    @m.parse_json('[1, 2]').to_a.should == [1, 2]
    @m.parse_json('[1, [2], {"a": "b"}]').to_a.should == [1, [2], {'a'=>'b'}]
    @m.parse_json('{}').should be_a_kind_of(@hc)
    @m.parse_json('{}').to_hash.should == {}
    @m.parse_json('{"a": "b"}').to_hash.should == {'a'=>'b'}
    @m.parse_json('{"a": "b", "c": [1, 2, 3]}').to_hash.should == {'a'=>'b', 'c'=>[1, 2, 3]}
    @m.parse_json('{"a": "b", "c": {"d": "e"}}').to_hash.should == {'a'=>'b', 'c'=>{'d'=>'e'}}
  end
  it "should parse json and non-json plain strings, integers, and floats correctly in db_parse_json" do
    @m.db_parse_json('{"a": "b", "c": {"d": "e"}}').to_hash.should == {'a'=>'b', 'c'=>{'d'=>'e'}}
    @m.db_parse_json('[1, [2], {"a": "b"}]').to_a.should == [1, [2], {'a'=>'b'}]
    @m.db_parse_json('1').should == 1
    @m.db_parse_json('"b"').should == 'b'
    @m.db_parse_json('1.1').should == 1.1
  end
  it "should raise an error when attempting to parse invalid json" do
    proc{@m.parse_json('')}.should raise_error(Sequel::InvalidValue)
    proc{@m.parse_json('1')}.should raise_error(Sequel::InvalidValue)
    begin
      Sequel.instance_eval do
        alias pj parse_json
        def parse_json(v)
          v
        end
      end
      proc{@m.parse_json('1')}.should raise_error(Sequel::InvalidValue)
    ensure
      Sequel.instance_eval do
        alias parse_json pj
      end
    end
  end
  it "should literalize JSONHash and JSONArray to strings correctly" do
    @db.literal(Sequel.pg_json([])).should == "'[]'::json"
    @db.literal(Sequel.pg_json([1, [2], {'a'=>'b'}])).should == "'[1,[2],{\"a\":\"b\"}]'::json"
    @db.literal(Sequel.pg_json({})).should == "'{}'::json"
    @db.literal(Sequel.pg_json('a'=>'b')).should == "'{\"a\":\"b\"}'::json"
  end
  it "should have Sequel.pg_json return JSONHash and JSONArray as is" do
    a = Sequel.pg_json({})
    Sequel.pg_json(a).should equal(a)
    a = Sequel.pg_json([])
    Sequel.pg_json(a).should equal(a)
  end
  it "should have JSONHash#to_hash method for getting underlying hash" do
    Sequel.pg_json({}).to_hash.should be_a_kind_of(Hash)
  end
  it "should have JSONArray#to_a method for getting underlying array" do
    Sequel.pg_json([]).to_a.should be_a_kind_of(Array)
  end
  it "should support using JSONHash and JSONArray as bound variables" do
    @db.bound_variable_arg(1, nil).should == 1
    @db.bound_variable_arg(Sequel.pg_json([1]), nil).should == '[1]'
    @db.bound_variable_arg(Sequel.pg_json('a'=>'b'), nil).should == '{"a":"b"}'
  end
  it "should support using json[] types in bound variables" do
    @db.bound_variable_arg(Sequel.pg_array([Sequel.pg_json([{"a"=>1}]), Sequel.pg_json("b"=>[1, 2])]), nil).should == '{"[{\\"a\\":1}]","{\\"b\\":[1,2]}"}'
  end
  it "should parse json type from the schema correctly" do
    @db.fetch = [{:name=>'id', :db_type=>'integer'}, {:name=>'i', :db_type=>'json'}]
    @db.schema(:items).map{|e| e[1][:type]}.should == [:integer, :json]
  end
  it "should support typecasting for the json type" do
    h = Sequel.pg_json(1=>2)
    a = Sequel.pg_json([1])
    @db.typecast_value(:json, h).should equal(h)
    @db.typecast_value(:json, h.to_hash).should == h
    @db.typecast_value(:json, h.to_hash).should be_a_kind_of(@hc)
    @db.typecast_value(:json, a).should equal(a)
    @db.typecast_value(:json, a.to_a).should == a
    @db.typecast_value(:json, a.to_a).should be_a_kind_of(@ac)
    @db.typecast_value(:json, '[]').should == Sequel.pg_json([])
    @db.typecast_value(:json, '{"a": "b"}').should == Sequel.pg_json("a"=>"b")
    proc{@db.typecast_value(:json, '')}.should raise_error(Sequel::InvalidValue)
    proc{@db.typecast_value(:json, 1)}.should raise_error(Sequel::InvalidValue)
  end
  it "should return correct results for Database#schema_type_class" do
    @db.schema_type_class(:json).should == [Sequel::Postgres::JSONHash, Sequel::Postgres::JSONArray]
    @db.schema_type_class(:integer).should == Integer
  end
end
ruby-sequel-4.1.1/spec/extensions/pg_range_ops_spec.rb000066400000000000000000000043261220156535500231360ustar00rootroot00000000000000
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")
Sequel.extension :pg_array, :pg_range, :pg_range_ops
describe "Sequel::Postgres::RangeOp" do
  before do
    @ds = Sequel.connect('mock://postgres', :quote_identifiers=>false).dataset
    @h = Sequel.pg_range_op(:h)
  end
  it "#pg_range should return self" do
    @h.pg_range.should equal(@h)
  end
  it "Sequel.pg_range_op should return argument if already a RangeOp" do
    Sequel.pg_range_op(@h).should equal(@h)
  end
  it "Sequel.pg_range should return a new RangeOp if not given a range" do
    @ds.literal(Sequel.pg_range(:h).lower).should == "lower(h)"
  end
  it "#pg_range should return a RangeOp for literal strings, and expressions" do
    @ds.literal(Sequel.function(:b, :h).pg_range.lower).should == "lower(b(h))"
    @ds.literal(Sequel.lit('h').pg_range.lower).should == "lower(h)"
  end
  it "PGRange#op should return a RangeOp" do
    @ds.literal(Sequel.pg_range(1..2, :numrange).op.lower).should == "lower('[1,2]'::numrange)"
  end
  it "should define methods for all of the PostgreSQL range operators" do
    @ds.literal(@h.contains(@h)).should == "(h @> h)"
    @ds.literal(@h.contained_by(@h)).should == "(h <@ h)"
    @ds.literal(@h.overlaps(@h)).should == "(h && h)"
    @ds.literal(@h.left_of(@h)).should == "(h << h)"
    @ds.literal(@h.right_of(@h)).should == "(h >> h)"
    @ds.literal(@h.ends_before(@h)).should == "(h &< h)"
    @ds.literal(@h.starts_after(@h)).should == "(h &> h)"
    @ds.literal(@h.adjacent_to(@h)).should == "(h -|- h)"
  end
  it "should define methods for all of the PostgreSQL range functions" do
    @ds.literal(@h.lower).should == "lower(h)"
    @ds.literal(@h.upper).should == "upper(h)"
    @ds.literal(@h.isempty).should == "isempty(h)"
    @ds.literal(@h.lower_inc).should == "lower_inc(h)"
    @ds.literal(@h.upper_inc).should == "upper_inc(h)"
    @ds.literal(@h.lower_inf).should == "lower_inf(h)"
    @ds.literal(@h.upper_inf).should == "upper_inf(h)"
  end
  it "+ - * operators should be defined and return a RangeOp" do
    @ds.literal((@h + @h).lower).should == "lower((h + h))"
    @ds.literal((@h * @h).lower).should == "lower((h * h))"
    @ds.literal((@h - @h).lower).should == "lower((h - h))"
  end
end
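# Sketch of the range-operator expressions tested above; the :period column
# name is an assumption.
require 'sequel'
Sequel.extension :pg_range_ops
db = Sequel.connect('mock://postgres')
r = Sequel.pg_range_op(:period)
db.literal(r.contains(:other)) # => "(period @> other)"
db.literal(r.lower)            # => "lower(period)"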
"'empty'" @db.literal(@R.new("", 2)).should == "'[\"\",2]'" end it "should not affect literalization of custom objects" do o = Object.new def o.sql_literal(ds) 'v' end @db.literal(o).should == 'v' end it "should support using Range instances as bound variables" do @db.bound_variable_arg(1..2, nil).should == "[1,2]" end it "should support using PGRange instances as bound variables" do @db.bound_variable_arg(@R.new(1, 2), nil).should == "[1,2]" end it "should support using arrays of Range instances as bound variables" do @db.bound_variable_arg([1..2,2...3], nil).should == '{"[1,2]","[2,3)"}' end it "should support using PGRange instances as bound variables" do @db.bound_variable_arg([@R.new(1, 2),@R.new(2, 3)], nil).should == '{"[1,2]","[2,3]"}' end it "should parse range types from the schema correctly" do @db.fetch = [{:name=>'id', :db_type=>'integer'}, {:name=>'i4', :db_type=>'int4range'}, {:name=>'i8', :db_type=>'int8range'}, {:name=>'n', :db_type=>'numrange'}, {:name=>'d', :db_type=>'daterange'}, {:name=>'ts', :db_type=>'tsrange'}, {:name=>'tz', :db_type=>'tstzrange'}] @db.schema(:items).map{|e| e[1][:type]}.should == [:integer, :int4range, :int8range, :numrange, :daterange, :tsrange, :tstzrange] end it "should parse arrays of range types from the schema correctly" do @db.fetch = [{:name=>'id', :db_type=>'integer'}, {:name=>'i4', :db_type=>'int4range[]'}, {:name=>'i8', :db_type=>'int8range[]'}, {:name=>'n', :db_type=>'numrange[]'}, {:name=>'d', :db_type=>'daterange[]'}, {:name=>'ts', :db_type=>'tsrange[]'}, {:name=>'tz', :db_type=>'tstzrange[]'}] @db.schema(:items).map{|e| e[1][:type]}.should == [:integer, :int4range_array, :int8range_array, :numrange_array, :daterange_array, :tsrange_array, :tstzrange_array] end describe "database typecasting" do before do @o = @R.new(1, 2, :db_type=>'int4range') @o2 = @R.new(1, 2, :db_type=>'int8range') @eo = @R.new(nil, nil, :empty=>true, :db_type=>'int4range') @eo2 = @R.new(nil, nil, :empty=>true, :db_type=>'int8range') end it "should handle multiple range types" do %w'int4 int8 num date ts tstz'.each do |i| @db.typecast_value(:"#{i}range", @R.new(1, 2, :db_type=>"#{i}range")).should == @R.new(1, 2, :db_type=>"#{i}range") end end it "should handle multiple array range types" do %w'int4 int8 num date ts tstz'.each do |i| @db.typecast_value(:"#{i}range_array", [@R.new(1, 2, :db_type=>"#{i}range")]).should be_a_kind_of(Sequel::Postgres::PGArray) @db.typecast_value(:"#{i}range_array", [@R.new(1, 2, :db_type=>"#{i}range")]).should == [@R.new(1, 2, :db_type=>"#{i}range")] end end it "should return PGRange value as is if they have the same subtype" do @db.typecast_value(:int4range, @o).should equal(@o) end it "should return new PGRange value as is if they have a different subtype" do @db.typecast_value(:int8range, @o).should_not equal(@o) @db.typecast_value(:int8range, @o).should == @o2 end it "should return new PGRange value as is if they have a different subtype and value is empty" do @db.typecast_value(:int8range, @eo).should == @eo2 end it "should return new PGRange value if given a Range" do @db.typecast_value(:int4range, 1..2).should == @o @db.typecast_value(:int4range, 1..2).should_not == @o2 @db.typecast_value(:int8range, 1..2).should == @o2 end it "should parse a string argument as the PostgreSQL output format" do @db.typecast_value(:int4range, '[1,2]').should == @o end it "should raise errors for unparsable formats" do proc{@db.typecast_value(:int8range, 'foo')}.should raise_error(Sequel::InvalidValue) end it "should raise errors for unhandled 
values" do proc{@db.typecast_value(:int4range, 1)}.should raise_error(Sequel::InvalidValue) end end it "should support registering custom range types" do @R.register('foorange') @db.typecast_value(:foorange, 1..2).should be_a_kind_of(@R) @db.fetch = [{:name=>'id', :db_type=>'foorange'}] @db.schema(:items).map{|e| e[1][:type]}.should == [:foorange] end it "should support using a block as a custom conversion proc given as block" do @R.register('foo2range'){|s| (s*2).to_i} @db.typecast_value(:foo2range, '[1,2]').should == (11..22) end it "should support using a block as a custom conversion proc given as :converter option" do @R.register('foo3range', :converter=>proc{|s| (s*2).to_i}) @db.typecast_value(:foo3range, '[1,2]').should == (11..22) end it "should support using an existing scaler conversion proc via the :subtype_oid option" do @R.register('foo4range', :subtype_oid=>16) @db.typecast_value(:foo4range, '[t,f]').should == @R.new(true, false, :db_type=>'foo4range') end it "should raise an error if using :subtype_oid option with unexisting scalar conversion proc" do proc{@R.register('fooirange', :subtype_oid=>0)}.should raise_error(Sequel::Error) end it "should raise an error if using :converter option and a block argument" do proc{@R.register('fooirange', :converter=>proc{}){}}.should raise_error(Sequel::Error) end it "should raise an error if using :subtype_oid option and a block argument" do proc{@R.register('fooirange', :subtype_oid=>16){}}.should raise_error(Sequel::Error) end it "should support registering custom types with :oid option" do @R.register('foo5range', :oid=>331) Sequel::Postgres::PG_TYPES[331].call('[1,3)').should be_a_kind_of(@R) end it "should return correct results for Database#schema_type_class" do @db.schema_type_class(:int4range).should == Sequel::Postgres::PGRange @db.schema_type_class(:integer).should == Integer end describe "parser" do before do @p = Sequel::Postgres::PG_TYPES[3904] @sp = @R::Parser.new(nil) end it "should have db_type method to return the database type string" do @p.db_type.should == 'int4range' end it "should have converter method which returns a callable used for conversion" do @p.converter.call('1').should == 1 end it "should have call parse input string argument into PGRange instance" do @p.call('[1,2]').should == @R.new(1, 2, :db_type=>'int4range') end it "should handle empty ranges" do @p.call('empty').should == @R.new(nil, nil, :empty=>true, :db_type=>'int4range') end it "should handle exclusive beginnings and endings" do @p.call('(1,3]').should == @R.new(1, 3, :exclude_begin=>true, :db_type=>'int4range') @p.call('[1,3)').should == @R.new(1, 3, :exclude_end=>true, :db_type=>'int4range') @p.call('(1,3)').should == @R.new(1, 3, :exclude_begin=>true, :exclude_end=>true, :db_type=>'int4range') end it "should handle unbounded beginnings and endings" do @p.call('[,2]').should == @R.new(nil, 2, :db_type=>'int4range') @p.call('[1,]').should == @R.new(1, nil, :db_type=>'int4range') @p.call('[,]').should == @R.new(nil, nil, :db_type=>'int4range') end it "should unescape quoted beginnings and endings" do @sp.call('["\\\\ \\"","\\" \\\\"]').should == @R.new("\\ \"", "\" \\") end it "should treat empty quoted string not as unbounded" do @sp.call('["","z"]').should == @R.new("", "z") @sp.call('["a",""]').should == @R.new("a", "") @sp.call('["",""]').should == @R.new("", "") end end it "should set appropriate timestamp range conversion procs when resetting conversion procs" do @db.reset_conversion_procs procs = @db.conversion_procs 
procs[3908].call('[2011-10-20 11:12:13,2011-10-20 11:12:14]').should == (Time.local(2011, 10, 20, 11, 12, 13)..(Time.local(2011, 10, 20, 11, 12, 14))) procs[3910].call('[2011-10-20 11:12:13,2011-10-20 11:12:14]').should == (Time.local(2011, 10, 20, 11, 12, 13)..(Time.local(2011, 10, 20, 11, 12, 14))) end it "should set appropriate timestamp range array conversion procs when resetting conversion procs" do @db.reset_conversion_procs procs = @db.conversion_procs procs[3909].call('{"[2011-10-20 11:12:13,2011-10-20 11:12:14]"}').should == [Time.local(2011, 10, 20, 11, 12, 13)..Time.local(2011, 10, 20, 11, 12, 14)] procs[3911].call('{"[2011-10-20 11:12:13,2011-10-20 11:12:14]"}').should == [Time.local(2011, 10, 20, 11, 12, 13)..Time.local(2011, 10, 20, 11, 12, 14)] end describe "a PGRange instance" do before do @r1 = @R.new(1, 2) @r2 = @R.new(3, nil, :exclude_begin=>true, :db_type=>'int4range') @r3 = @R.new(nil, 4, :exclude_end=>true, :db_type=>'int8range') end it "should have #begin return the beginning of the range" do @r1.begin.should == 1 @r2.begin.should == 3 @r3.begin.should == nil end it "should have #end return the end of the range" do @r1.end.should == 2 @r2.end.should == nil @r3.end.should == 4 end it "should have #db_type return the range's database type" do @r1.db_type.should == nil @r2.db_type.should == 'int4range' @r3.db_type.should == 'int8range' end it "should be able to be created by Sequel.pg_range" do Sequel.pg_range(1..2).should == @r1 end it "should have Sequel.pg_range be able to take a database type" do Sequel.pg_range(1..2, :int4range).should == @R.new(1, 2, :db_type=>:int4range) end it "should have Sequel.pg_range return a PGRange as is" do a = Sequel.pg_range(1..2) Sequel.pg_range(a).should equal(a) end it "should have Sequel.pg_range return a new PGRange if the database type differs" do a = Sequel.pg_range(1..2, :int4range) b = Sequel.pg_range(a, :int8range) a.to_range.should == b.to_range a.should_not equal(b) a.db_type.should == :int4range b.db_type.should == :int8range end it "should have #initialize raise if requesting an empty range with beginning or ending" do proc{@R.new(1, nil, :empty=>true)}.should raise_error(Sequel::Error) proc{@R.new(nil, 2, :empty=>true)}.should raise_error(Sequel::Error) proc{@R.new(nil, nil, :empty=>true, :exclude_begin=>true)}.should raise_error(Sequel::Error) proc{@R.new(nil, nil, :empty=>true, :exclude_end=>true)}.should raise_error(Sequel::Error) end it "should quack like a range" do if RUBY_VERSION >= '1.9' @r1.cover?(1.5).should be_true @r1.cover?(2.5).should be_false @r1.first(1).should == [1] @r1.last(1).should == [2] end @r1.to_a.should == [1, 2] @r1.first.should == 1 @r1.last.should == 2 a = [] @r1.step{|x| a << x} a.should == [1, 2] end it "should only consider PGRanges equal if they have the same db_type" do @R.new(1, 2, :db_type=>'int4range').should == @R.new(1, 2, :db_type=>'int4range') @R.new(1, 2, :db_type=>'int8range').should_not == @R.new(1, 2, :db_type=>'int4range') end it "should only consider empty PGRanges equal with other empty PGRanges" do @R.new(nil, nil, :empty=>true).should == @R.new(nil, nil, :empty=>true) @R.new(nil, nil, :empty=>true).should_not == @R.new(nil, nil) @R.new(nil, nil).should_not == @R.new(nil, nil, :empty=>true) end it "should only consider empty PGRanges equal if they have the same bounds" do @R.new(1, 2).should == @R.new(1, 2) @R.new(1, 2).should_not == @R.new(1, 3) end it "should only consider empty PGRanges equal if they have the same bound exclusions" do @R.new(1, 2, 
:exclude_begin=>true).should == @R.new(1, 2, :exclude_begin=>true) @R.new(1, 2, :exclude_end=>true).should == @R.new(1, 2, :exclude_end=>true) @R.new(1, 2, :exclude_begin=>true).should_not == @R.new(1, 2, :exclude_end=>true) @R.new(1, 2, :exclude_end=>true).should_not == @R.new(1, 2, :exclude_begin=>true) end it "should consider PGRanges equal with a Range they represent" do @R.new(1, 2).should == (1..2) @R.new(1, 2, :exclude_end=>true).should == (1...2) @R.new(1, 3).should_not == (1..2) @R.new(1, 2, :exclude_end=>true).should_not == (1..2) end it "should not consider a PGRange equal with a Range if it can't be expressed as a range" do @R.new(nil, nil).should_not == (1..2) end it "should not consider a PGRange equal to other objects" do @R.new(nil, nil).should_not == 1 end it "should have #=== be true if given an equal PGRange" do @R.new(1, 2).should === @R.new(1, 2) @R.new(1, 2).should_not === @R.new(1, 3) end it "should have #=== be true if it would be true for the Range represented by the PGRange" do @R.new(1, 2).should === 1.5 @R.new(1, 2).should_not === 2.5 end it "should have #=== be false if the PGRange cannot be represented by a Range" do @R.new(nil, nil).should_not === 1.5 end it "should have #empty? indicate whether the range is empty" do @R.empty.should be_empty @R.new(1, 2).should_not be_empty end it "should have #exclude_begin? and #exclude_end indicate whether the beginning or ending of the range is excluded" do @r1.exclude_begin?.should be_false @r1.exclude_end?.should be_false @r2.exclude_begin?.should be_true @r2.exclude_end?.should be_false @r3.exclude_begin?.should be_false @r3.exclude_end?.should be_true end it "should have #to_range raise an exception if the PGRange cannot be represented by a Range" do proc{@R.new(nil, 1).to_range}.should raise_error(Sequel::Error) proc{@R.new(1, nil).to_range}.should raise_error(Sequel::Error) proc{@R.new(0, 1, :exclude_begin=>true).to_range}.should raise_error(Sequel::Error) proc{@R.empty.to_range}.should raise_error(Sequel::Error) end it "should have #to_range return the represented range" do @r1.to_range.should == (1..2) end it "should have #to_range cache the returned value" do @r1.to_range.should equal(@r1.to_range) end it "should have #unbounded_begin? and #unbounded_end indicate whether the beginning or ending of the range is unbounded" do @r1.unbounded_begin?.should be_false @r1.unbounded_end?.should be_false @r2.unbounded_begin?.should be_false @r2.unbounded_end?.should be_true @r3.unbounded_begin?.should be_true @r3.unbounded_end?.should be_false end it "should have #valid_ruby_range? return true if the PGRange can be represented as a Range" do @r1.valid_ruby_range?.should be_true @R.new(1, 2, :exclude_end=>true).valid_ruby_range?.should be_true end it "should have #valid_ruby_range? 
    it "should have #valid_ruby_range? return false if the PGRange cannot be represented as a Range" do
      @R.new(nil, 1).valid_ruby_range?.should be_false
      @R.new(1, nil).valid_ruby_range?.should be_false
      @R.new(0, 1, :exclude_begin=>true).valid_ruby_range?.should be_false
      @R.empty.valid_ruby_range?.should be_false
    end
  end
end
ruby-sequel-4.1.1/spec/extensions/pg_row_ops_spec.rb000066400000000000000000000036131220156535500226470ustar00rootroot00000000000000
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")
Sequel.extension :pg_array, :pg_array_ops, :pg_row, :pg_row_ops
describe "Sequel::Postgres::PGRowOp" do
  before do
    @db = Sequel.connect('mock://postgres', :quote_identifiers=>false)
    @a = Sequel.pg_row_op(:a)
  end
  it "#[] should access members of the composite type" do
    @db.literal(@a[:b]).should == "(a).b"
  end
  it "#[] should be chainable" do
    @db.literal(@a[:b][:c]).should == "((a).b).c"
  end
  it "#[] should support array access if not given an identifier" do
    @db.literal(@a[:b][1]).should == "(a).b[1]"
  end
  it "#[] should be chainable with array access" do
    @db.literal(@a[1][:b]).should == "(a[1]).b"
  end
  it "#splat should return a splatted argument inside parentheses" do
    @db.literal(@a.splat).should == "(a.*)"
  end
  it "#splat(type) should return a splatted argument cast to given type" do
    @db.literal(@a.splat(:b)).should == "(a.*)::b"
  end
  it "#splat should not work on an already accessed composite type" do
    proc{@a[:a].splat(:b)}.should raise_error(Sequel::Error)
  end
  it "#* should reference all members of the composite type as separate columns if given no arguments" do
    @db.literal(@a.*).should == "(a).*"
    @db.literal(@a[:b].*).should == "((a).b).*"
  end
  it "#* should use a multiplication operation if any arguments are given" do
    @db.literal(@a.*(1)).should == "(a * 1)"
    @db.literal(@a[:b].*(1)).should == "((a).b * 1)"
  end
  it "#pg_row should be callable on literal strings" do
    @db.literal(Sequel.lit('a').pg_row[:b]).should == "(a).b"
  end
  it "#pg_row should be callable on Sequel expressions" do
    @db.literal(Sequel.function(:a).pg_row[:b]).should == "(a()).b"
  end
  it "Sequel.pg_row should work as well if the pg_row extension is loaded" do
    @db.literal(Sequel.pg_row(Sequel.function(:a))[:b]).should == "(a()).b"
  end
end
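# Sketch of composite-value member access as tested above; the column and
# member names are assumptions.
require 'sequel'
Sequel.extension :pg_row_ops
db = Sequel.connect('mock://postgres')
addr = Sequel.pg_row_op(:address)
db.literal(addr[:city])            # => "(address).city"
db.literal(addr.splat(:address_t)) # => "(address.*)::address_t"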
ruby-sequel-4.1.1/spec/extensions/pg_row_plugin_spec.rb000066400000000000000000000052031220156535500233410ustar00rootroot00000000000000
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")
describe "Sequel::Plugins::PgRow" do
  before(:all) do
    @db = Sequel.connect('mock://postgres', :quote_identifiers=>false)
    @db.extension(:pg_array)
    @c = Class.new(Sequel::Model(@db[:address]))
    @c.columns :street, :city
    @c.db_schema[:street][:type] = :string
    @c.db_schema[:city][:type] = :string
    @db.fetch = [[{:oid=>1098, :typrelid=>2, :typarray=>3}], [{:attname=>'street', :atttypid=>1324}, {:attname=>'city', :atttypid=>1324}]]
    @c.plugin :pg_row
    @c2 = Class.new(Sequel::Model(@db[:company]))
    @c2.columns :address
    @c2.db_schema[:address].merge!(:type=>:pg_row_address)
  end
  after do
    @c.dataset.opts[:from] = [:address]
  end
  it "should have schema_type_class include Sequel::Model" do
    @c2.new.send(:schema_type_class, :address).should == @c
    @db.conversion_procs[1098].call('(123 Foo St,Bar City)').should == @c.load(:street=>'123 Foo St', :city=>'Bar City')
  end
  it "should set up a parser for the type that creates a model class" do
    @db.conversion_procs[1098].call('(123 Foo St,Bar City)').should == @c.load(:street=>'123 Foo St', :city=>'Bar City')
  end
  it "should set up type casting for the type" do
    @c2.new(:address=>{'street'=>123, 'city'=>:Bar}).address.should == @c.load(:street=>'123', :city=>'Bar')
  end
  it "should return model instances as is when typecasting to rows" do
    o = @c.load(:street=>'123', :city=>'Bar')
    @c2.new(:address=>o).address.should equal(o)
  end
  it "should handle literalizing model instances" do
    @db.literal(@c.load(:street=>'123 Foo St', :city=>'Bar City')).should == "ROW('123 Foo St', 'Bar City')::address"
  end
  it "should handle literalizing model instances when model table is aliased" do
    @c.dataset.opts[:from] = [Sequel.as(:address, :a)]
    @db.literal(@c.load(:street=>'123 Foo St', :city=>'Bar City')).should == "ROW('123 Foo St', 'Bar City')::address"
  end
  it "should handle model instances in bound variables" do
    @db.bound_variable_arg(1, nil).should == 1
    @db.bound_variable_arg(@c.load(:street=>'123 Foo St', :city=>'Bar City'), nil).should == '("123 Foo St","Bar City")'
  end
  it "should handle model instances in arrays of bound variables" do
    @db.bound_variable_arg(1, nil).should == 1
    @db.bound_variable_arg(Sequel.pg_array([@c.load(:street=>'123 Foo St', :city=>'Bar City')]), nil).should == '{"(\\"123 Foo St\\",\\"Bar City\\")"}'
  end
  it "should allow inserting just this model value" do
    @c2.dataset.insert_sql(@c.load(:street=>'123', :city=>'Bar')).should == "INSERT INTO company VALUES (ROW('123', 'Bar')::address)"
  end
end
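# Sketch of the model-side setup exercised above. Against a real database the
# plugin lets a model class back a composite type; the names here are
# assumptions, and register_row_type needs a live PostgreSQL connection to
# introspect pg_type, so the example is shown commented out:
#
#   DB.extension :pg_row
#   class Address < Sequel::Model(:address); plugin :pg_row; end
#   DB.register_row_type(:address)
#   Company.first.address # => an Address instance loaded from the composite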
"should parse record objects as arrays" do a = Sequel::Postgres::PG_TYPES[2249].call("(a,b,c)") a.should be_a_kind_of(@m::ArrayRow) a.to_a.should be_a_kind_of(Array) a[0].should == 'a' a.should == %w'a b c' a.db_type.should == nil @db.literal(a).should == "ROW('a', 'b', 'c')" end it "should parse arrays of record objects as arrays of arrays" do as = Sequel::Postgres::PG_TYPES[2287].call('{"(a,b,c)","(d,e,f)"}') as.should == [%w'a b c', %w'd e f'] as.each do |a| a.should be_a_kind_of(@m::ArrayRow) a.to_a.should be_a_kind_of(Array) a.db_type.should == nil end @db.literal(as).should == "ARRAY[ROW('a', 'b', 'c'),ROW('d', 'e', 'f')]::record[]" end it "should be able to register custom parsing of row types as array-like objects" do klass = @m::ArrayRow.subclass(:foo) parser = @m::Parser.new(:converter=>klass) a = parser.call("(a,b,c)") a.should be_a_kind_of(klass) a.to_a.should be_a_kind_of(Array) a[0].should == 'a' a.should == %w'a b c' a.db_type.should == :foo @db.literal(a).should == "ROW('a', 'b', 'c')::foo" end it "should be able to register custom parsing of row types as hash-like objects" do klass = @m::HashRow.subclass(:foo, [:a, :b, :c]) parser = @m::Parser.new(:converter=>klass, :columns=>[:a, :b, :c]) a = parser.call("(a,b,c)") a.should be_a_kind_of(klass) a.to_hash.should be_a_kind_of(Hash) a[:a].should == 'a' a.should == {:a=>'a', :b=>'b', :c=>'c'} a.db_type.should == :foo a.columns.should == [:a, :b, :c] @db.literal(a).should == "ROW('a', 'b', 'c')::foo" end it "should raise an error if attempting to literalize a HashRow without column information" do h = @m::HashRow.call(:a=>'a', :b=>'b', :c=>'c') proc{@db.literal(h)}.should raise_error(Sequel::Error) end it "should be able to manually override db_type per ArrayRow instance" do a = @m::ArrayRow.call(%w'a b c') a.db_type = :foo @db.literal(a).should == "ROW('a', 'b', 'c')::foo" end it "should be able to manually override db_type and columns per HashRow instance" do h = @m::HashRow.call(:a=>'a', :c=>'c', :b=>'b') h.db_type = :foo h.columns = [:a, :b, :c] @db.literal(h).should == "ROW('a', 'b', 'c')::foo" end it "should correctly split an empty row" do @m::Splitter.new("()").parse.should == [nil] end it "should correctly split a row with a single value" do @m::Splitter.new("(1)").parse.should == %w'1' end it "should correctly split a row with multiple values" do @m::Splitter.new("(1,2)").parse.should == %w'1 2' end it "should correctly NULL values when splitting" do @m::Splitter.new("(1,)").parse.should == ['1', nil] end it "should correctly empty string values when splitting" do @m::Splitter.new('(1,"")').parse.should == ['1', ''] end it "should handle quoted values when splitting" do @m::Splitter.new('("1","2")').parse.should == %w'1 2' end it "should handle escaped backslashes in quoted values when splitting" do @m::Splitter.new('("\\\\1","2\\\\")').parse.should == ['\\1', '2\\'] end it "should handle doubled quotes in quoted values when splitting" do @m::Splitter.new('("""1","2""")').parse.should == ['"1', '2"'] end it "should correctly convert types when parsing into an array" do @m::Parser.new(:column_converters=>[proc{|s| s*2}, proc{|s| s*3}, proc{|s| s*4}]).call("(a,b,c)").should == %w'aa bbb cccc' end it "should correctly convert types into hashes if columns are known" do @m::Parser.new(:columns=>[:a, :b, :c]).call("(a,b,c)").should == {:a=>'a', :b=>'b', :c=>'c'} end it "should correctly handle type conversion when converting into hashes" do @m::Parser.new(:column_converters=>[proc{|s| s*2}, proc{|s| s*3}, proc{|s| s*4}], 
:columns=>[:a, :b, :c]).call("(a,b,c)").should == {:a=>'aa', :b=>'bbb', :c=>'cccc'} end it "should correctly wrap arrays when converting" do @m::Parser.new(:converter=>proc{|s| [:foo, s]}).call("(a,b,c)").should == [:foo, %w'a b c'] end it "should correctly wrap hashes when converting" do @m::Parser.new(:converter=>proc{|s| [:foo, s]}, :columns=>[:a, :b, :c]).call("(a,b,c)").should == [:foo, {:a=>'a', :b=>'b', :c=>'c'}] end it "should have parser store reflection information" do p = @m::Parser.new(:oid=>1, :column_oids=>[2], :columns=>[:a], :converter=>Array, :typecaster=>Hash, :column_converters=>[Array]) p.oid.should == 1 p.column_oids.should == [2] p.columns.should == [:a] p.converter.should == Array p.typecaster.should == Hash p.column_converters.should == [Array] end it "should reload registered row types when reseting conversion procs" do db = Sequel.mock(:host=>'postgres') db.extension(:pg_row) db.conversion_procs[4] = proc{|s| s.to_i} db.conversion_procs[5] = proc{|s| s * 2} db.sqls db.fetch = [[{:oid=>1, :typrelid=>2, :typarray=>3}], [{:attname=>'bar', :atttypid=>4}, {:attname=>'baz', :atttypid=>5}]] db.register_row_type(:foo) db.sqls.should == ["SELECT pg_type.oid, typrelid, typarray FROM pg_type WHERE ((typtype = 'c') AND (typname = 'foo')) LIMIT 1", "SELECT attname, (CASE pg_type.typbasetype WHEN 0 THEN atttypid ELSE pg_type.typbasetype END) AS atttypid FROM pg_attribute INNER JOIN pg_type ON (pg_type.oid = pg_attribute.atttypid) WHERE ((attrelid = 2) AND (attnum > 0) AND NOT attisdropped) ORDER BY attnum"] begin pgnt = Sequel::Postgres::PG_NAMED_TYPES.dup Sequel::Postgres::PG_NAMED_TYPES.clear db.fetch = [[{:oid=>1, :typrelid=>2, :typarray=>3}], [{:attname=>'bar', :atttypid=>4}, {:attname=>'baz', :atttypid=>5}]] db.reset_conversion_procs db.sqls.should == ["SELECT pg_type.oid, typrelid, typarray FROM pg_type WHERE ((typtype = 'c') AND (typname = 'foo')) LIMIT 1", "SELECT attname, (CASE pg_type.typbasetype WHEN 0 THEN atttypid ELSE pg_type.typbasetype END) AS atttypid FROM pg_attribute INNER JOIN pg_type ON (pg_type.oid = pg_attribute.atttypid) WHERE ((attrelid = 2) AND (attnum > 0) AND NOT attisdropped) ORDER BY attnum"] ensure Sequel::Postgres::PG_NAMED_TYPES.replace pgnt end end it "should handle ArrayRows and HashRows in bound variables" do @db.bound_variable_arg(1, nil).should == 1 @db.bound_variable_arg(@m::ArrayRow.call(["1", "abc\\'\","]), nil).should == '("1","abc\\\\\'\\",")' @db.bound_variable_arg(@m::HashRow.subclass(nil, [:a, :b]).call(:a=>"1", :b=>"abc\\'\","), nil).should == '("1","abc\\\\\'\\",")' end it "should handle ArrayRows and HashRows in arrays in bound variables" do @db.bound_variable_arg(1, nil).should == 1 @db.bound_variable_arg([@m::ArrayRow.call(["1", "abc\\'\","])], nil).should == '{"(\\"1\\",\\"abc\\\\\\\\\'\\\\\\",\\")"}' @db.bound_variable_arg([@m::HashRow.subclass(nil, [:a, :b]).call(:a=>"1", :b=>"abc\\'\",")], nil).should == '{"(\\"1\\",\\"abc\\\\\\\\\'\\\\\\",\\")"}' end it "should handle nils in bound variables" do @db.bound_variable_arg(@m::ArrayRow.call([nil, nil]), nil).should == '(,)' @db.bound_variable_arg(@m::HashRow.subclass(nil, [:a, :b]).call(:a=>nil, :b=>nil), nil).should == '(,)' @db.bound_variable_arg([@m::ArrayRow.call([nil, nil])], nil).should == '{"(,)"}' @db.bound_variable_arg([@m::HashRow.subclass(nil, [:a, :b]).call(:a=>nil, :b=>nil)], nil).should == '{"(,)"}' end it "should allow registering row type parsers by introspecting system tables" do @db.conversion_procs[4] = p4 = proc{|s| s.to_i} @db.conversion_procs[5] = p5 = 
proc{|s| s * 2} @db.fetch = [[{:oid=>1, :typrelid=>2, :typarray=>3}], [{:attname=>'bar', :atttypid=>4}, {:attname=>'baz', :atttypid=>5}]] @db.register_row_type(:foo) @db.sqls.should == ["SELECT pg_type.oid, typrelid, typarray FROM pg_type WHERE ((typtype = 'c') AND (typname = 'foo')) LIMIT 1", "SELECT attname, (CASE pg_type.typbasetype WHEN 0 THEN atttypid ELSE pg_type.typbasetype END) AS atttypid FROM pg_attribute INNER JOIN pg_type ON (pg_type.oid = pg_attribute.atttypid) WHERE ((attrelid = 2) AND (attnum > 0) AND NOT attisdropped) ORDER BY attnum"] p1 = @db.conversion_procs[1] p1.columns.should == [:bar, :baz] p1.column_oids.should == [4, 5] p1.column_converters.should == [p4, p5] p1.oid.should == 1 @db.send(:schema_column_type, 'foo').should == :pg_row_foo @db.send(:schema_column_type, 'integer').should == :integer c = p1.converter c.superclass.should == @m::HashRow c.columns.should == [:bar, :baz] c.db_type.should == :foo p1.typecaster.should == c p1.call('(1,b)').should == {:bar=>1, :baz=>'bb'} @db.typecast_value(:pg_row_foo, %w'1 b').should be_a_kind_of(@m::HashRow) @db.typecast_value(:pg_row_foo, %w'1 b').should == {:bar=>'1', :baz=>'b'} @db.typecast_value(:pg_row_foo, :bar=>'1', :baz=>'b').should == {:bar=>'1', :baz=>'b'} @db.literal(p1.call('(1,b)')).should == "ROW(1, 'bb')::foo" end it "should allow registering row type parsers for schema qualify types" do @db.conversion_procs[4] = p4 = proc{|s| s.to_i} @db.conversion_procs[5] = p5 = proc{|s| s * 2} @db.fetch = [[{:oid=>1, :typrelid=>2, :typarray=>3}], [{:attname=>'bar', :atttypid=>4}, {:attname=>'baz', :atttypid=>5}]] @db.register_row_type(:foo__bar) @db.sqls.should == ["SELECT pg_type.oid, typrelid, typarray FROM pg_type INNER JOIN pg_namespace ON ((pg_namespace.oid = pg_type.typnamespace) AND (pg_namespace.nspname = 'foo')) WHERE ((typtype = 'c') AND (typname = 'bar')) LIMIT 1", "SELECT attname, (CASE pg_type.typbasetype WHEN 0 THEN atttypid ELSE pg_type.typbasetype END) AS atttypid FROM pg_attribute INNER JOIN pg_type ON (pg_type.oid = pg_attribute.atttypid) WHERE ((attrelid = 2) AND (attnum > 0) AND NOT attisdropped) ORDER BY attnum"] p1 = @db.conversion_procs[1] p1.columns.should == [:bar, :baz] p1.column_oids.should == [4, 5] p1.column_converters.should == [p4, p5] p1.oid.should == 1 c = p1.converter c.superclass.should == @m::HashRow c.columns.should == [:bar, :baz] c.db_type.should == :foo__bar p1.typecaster.should == c p1.call('(1,b)').should == {:bar=>1, :baz=>'bb'} @db.typecast_value(:pg_row_foo__bar, %w'1 b').should == {:bar=>'1', :baz=>'b'} @db.typecast_value(:pg_row_foo__bar, :bar=>'1', :baz=>'b').should == {:bar=>'1', :baz=>'b'} @db.literal(p1.call('(1,b)')).should == "ROW(1, 'bb')::foo.bar" end it "should allow registering with a custom converter" do @db.conversion_procs[4] = proc{|s| s.to_i} @db.conversion_procs[5] = proc{|s| s * 2} @db.fetch = [[{:oid=>1, :typrelid=>2, :typarray=>3}], [{:attname=>'bar', :atttypid=>4}, {:attname=>'baz', :atttypid=>5}]] c = proc{|h| [h]} @db.register_row_type(:foo, :converter=>c) o = @db.conversion_procs[1].call('(1,b)') o.should == [{:bar=>1, :baz=>'bb'}] o.first.should be_a_kind_of(Hash) end it "should allow registering with a custom typecaster" do @db.conversion_procs[4] = proc{|s| s.to_i} @db.conversion_procs[5] = proc{|s| s * 2} @db.fetch = [[{:oid=>1, :typrelid=>2, :typarray=>3}], [{:attname=>'bar', :atttypid=>4}, {:attname=>'baz', :atttypid=>5}]] @db.register_row_type(:foo, :typecaster=>proc{|h| {:bar=>(h[:bar]||0).to_i, :baz=>(h[:baz] || 'a')*2}}) 
@db.typecast_value(:pg_row_foo, %w'1 b').should be_a_kind_of(Hash) @db.typecast_value(:pg_row_foo, %w'1 b').should == {:bar=>1, :baz=>'bb'} @db.typecast_value(:pg_row_foo, :bar=>'1', :baz=>'b').should == {:bar=>1, :baz=>'bb'} @db.typecast_value(:pg_row_foo, 'bar'=>'1', 'baz'=>'b').should == {:bar=>0, :baz=>'aa'} @db.fetch = [[{:oid=>1, :typrelid=>2, :typarray=>3}], [{:attname=>'bar', :atttypid=>4}, {:attname=>'baz', :atttypid=>5}]] @db.register_row_type(:foo, :typecaster=>proc{|h| {:bar=>(h[:bar] || h['bar'] || 0).to_i, :baz=>(h[:baz] || h['baz'] || 'a')*2}}) @db.typecast_value(:pg_row_foo, %w'1 b').should == {:bar=>1, :baz=>'bb'} @db.typecast_value(:pg_row_foo, :bar=>'1', :baz=>'b').should == {:bar=>1, :baz=>'bb'} @db.typecast_value(:pg_row_foo, 'bar'=>'1', 'baz'=>'b').should == {:bar=>1, :baz=>'bb'} end it "should handle conversion procs that aren't added until later" do @db.conversion_procs[5] = proc{|s| s * 2} @db.fetch = [[{:oid=>1, :typrelid=>2, :typarray=>3}], [{:attname=>'bar', :atttypid=>4}, {:attname=>'baz', :atttypid=>5}]] c = proc{|h| [h]} @db.register_row_type(:foo, :converter=>c) @db.conversion_procs[4] = proc{|s| s.to_i} @db.conversion_procs[1].call('(1,b)').should == [{:bar=>1, :baz=>'bb'}] end it "should handle nil values when converting columns" do @db.conversion_procs[5] = proc{|s| s * 2} @db.fetch = [[{:oid=>1, :typrelid=>2, :typarray=>3}], [{:attname=>'bar', :atttypid=>4}]] called = false @db.conversion_procs[4] = proc{|s| called = true; s} @db.register_row_type(:foo) @db.conversion_procs[1].call('()').should == {:bar=>nil} called.should be_false end it "should registering array type for row type if type has an array oid" do @db.conversion_procs[4] = proc{|s| s.to_i} @db.conversion_procs[5] = proc{|s| s * 2} @db.fetch = [[{:oid=>1, :typrelid=>2, :typarray=>3}], [{:attname=>'bar', :atttypid=>4}, {:attname=>'baz', :atttypid=>5}]] @db.register_row_type(:foo, :typecaster=>proc{|h| {:bar=>(h[:bar]||0).to_i, :baz=>(h[:baz] || 'a')*2}}) p3 = @db.conversion_procs[3] p3.call('{"(1,b)"}').should == [{:bar=>1, :baz=>'bb'}] @db.literal(p3.call('{"(1,b)"}')).should == "ARRAY[ROW(1, 'bb')::foo]::foo[]" @db.typecast_value(:foo_array, [{:bar=>'1', :baz=>'b'}]).should == [{:bar=>1, :baz=>'bb'}] end it "should allow creating unregisted row types via Database#row_type" do @db.literal(@db.row_type(:foo, [1, 2])).should == 'ROW(1, 2)::foo' end it "should allow typecasting of registered row types via Database#row_type" do @db.conversion_procs[4] = proc{|s| s.to_i} @db.conversion_procs[5] = proc{|s| s * 2} @db.fetch = [[{:oid=>1, :typrelid=>2, :typarray=>3}], [{:attname=>'bar', :atttypid=>4}, {:attname=>'baz', :atttypid=>5}]] @db.register_row_type(:foo, :typecaster=>proc{|h| @m::HashRow.subclass(:foo, [:bar, :baz]).new({:bar=>(h[:bar]||0).to_i, :baz=>(h[:baz] || 'a')*2})}) @db.literal(@db.row_type(:foo, ['1', 'b'])).should == "ROW(1, 'bb')::foo" @db.literal(@db.row_type(:foo, {:bar=>'1', :baz=>'b'})).should == "ROW(1, 'bb')::foo" end it "should allow parsing when typecasting registered row types via Database#row_type" do @db.conversion_procs[4] = proc{|s| s.to_i} @db.conversion_procs[5] = proc{|s| s * 2} @db.fetch = [[{:oid=>1, :typrelid=>2, :typarray=>3}], [{:attname=>'bar', :atttypid=>4}, {:attname=>'baz', :atttypid=>5}]] @db.register_row_type(:foo, :typecaster=>proc{|h| @m::HashRow.subclass(:foo, [:bar, :baz]).new(:bar=>(h[:bar]||0).to_i, :baz=>(h[:baz] || 'a')*2)}) @db.literal(@db.row_type(:foo, ['1', 'b'])).should == "ROW(1, 'bb')::foo" end it "should raise an error if attempt to use 
Database#row_type with an unregistered type and hash" do proc{@db.literal(@db.row_type(:foo, {:bar=>'1', :baz=>'b'}))}.should raise_error(Sequel::Error) end it "should raise an error if attempt to use Database#row_type with an unhandled type" do proc{@db.literal(@db.row_type(:foo, 1))}.should raise_error(Sequel::Error) end it "should return ArrayRow and HashRow values as-is" do h = @m::HashRow.call(:a=>1) a = @m::ArrayRow.call([1]) @db.row_type(:foo, h).should equal(h) @db.row_type(:foo, a).should equal(a) end it "should have Sequel.pg_row return a plain ArrayRow" do @db.literal(Sequel.pg_row([1, 2, 3])).should == 'ROW(1, 2, 3)' end it "should raise an error if attempting to typecast a hash for a parser without columns" do proc{@m::Parser.new.typecast(:a=>1)}.should raise_error(Sequel::Error) end it "should raise an error if attempting to typecast a unhandled value for a parser" do proc{@m::Parser.new.typecast(1)}.should raise_error(Sequel::Error) end it "should handle typecasting for a parser without a typecaster" do @m::Parser.new.typecast([1]).should == [1] end it "should raise an error if no columns are returned when registering a custom row type" do @db.fetch = [[{:oid=>1, :typrelid=>2, :typarray=>3}]] proc{@db.register_row_type(:foo)}.should raise_error(Sequel::Error) end it "should raise an error when registering a custom row type if the type is found found" do @db.fetch = [] proc{@db.register_row_type(:foo)}.should raise_error(Sequel::Error) end it "should return correct results for Database#schema_type_class" do @db.conversion_procs[4] = proc{|s| s.to_i} @db.conversion_procs[5] = proc{|s| s * 2} @db.fetch = [[{:oid=>1, :typrelid=>2, :typarray=>3}], [{:attname=>'bar', :atttypid=>4}, {:attname=>'baz', :atttypid=>5}]] @db.register_row_type(:foo, :typecaster=>proc{|h| {:bar=>(h[:bar]||0).to_i, :baz=>(h[:baz] || 'a')*2}}) @db.schema_type_class(:pg_row_foo).should == [Sequel::Postgres::PGRow::HashRow, Sequel::Postgres::PGRow::ArrayRow] @db.schema_type_class(:integer).should == Integer end end �����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/spec/extensions/pg_typecast_on_load_spec.rb���������������������������������������0000664�0000000�0000000�00000004616�12201565355�0024512�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe Sequel::Model, "PgTypecastOnLoad plugin" do before do @db = Sequel.mock(:host=>'postgres', :fetch=>{:id=>1, :b=>"t", :y=>"0"}, :columns=>[:id, :b, :y], :numrows=>1) def @db.schema(*args) [[:id, {}], [:b, {:type=>:boolean, :oid=>16}], [:y, {:type=>:integer, :oid=>20}]] end @c = Class.new(Sequel::Model(@db[:items])) @c.plugin :pg_typecast_on_load, :b, :y end specify "should call the database conversion proc for all given columns" do @c.first.values.should == {:id=>1, :b=>true, :y=>0} end specify "should call the database conversion proc with value when reloading the object, for all given columns" do 
@c.first.refresh.values.should == {:id=>1, :b=>true, :y=>0} end specify "should not fail if schema oid does not have a related conversion proc" do @c.db_schema[:b][:oid] = 0 @c.first.refresh.values.should == {:id=>1, :b=>"t", :y=>0} end specify "should call the database conversion proc with value when automatically reloading the object on creation via insert_select" do @c.dataset.meta_def(:insert_select){|h| insert(h); first} @c.create.values.should == {:id=>1, :b=>true, :y=>0} end specify "should allow setting columns separately via add_pg_typecast_on_load_columns" do @c = Class.new(Sequel::Model(@db[:items])) @c.plugin :pg_typecast_on_load @c.first.values.should == {:id=>1, :b=>"t", :y=>"0"} @c.add_pg_typecast_on_load_columns :b @c.first.values.should == {:id=>1, :b=>true, :y=>"0"} @c.add_pg_typecast_on_load_columns :y @c.first.values.should == {:id=>1, :b=>true, :y=>0} end specify "should work with subclasses" do @c = Class.new(Sequel::Model(@db[:items])) @c.plugin :pg_typecast_on_load @c.first.values.should == {:id=>1, :b=>"t", :y=>"0"} c1 = Class.new(@c) @c.add_pg_typecast_on_load_columns :b @c.first.values.should == {:id=>1, :b=>true, :y=>"0"} c1.first.values.should == {:id=>1, :b=>"t", :y=>"0"} c2 = Class.new(@c) @c.add_pg_typecast_on_load_columns :y @c.first.values.should == {:id=>1, :b=>true, :y=>0} c2.first.values.should == {:id=>1, :b=>true, :y=>"0"} c1.add_pg_typecast_on_load_columns :y c1.first.values.should == {:id=>1, :b=>"t", :y=>0} end specify "should not mark the object as modified" do @c.first.modified?.should == false end end
ruby-sequel-4.1.1/spec/extensions/prepared_statements_associations_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "Sequel::Plugins::PreparedStatementsAssociations" do before do @db = Sequel.mock @db.extend_datasets do def select_sql sql = super sql << ' -- prepared' if is_a?(Sequel::Dataset::PreparedStatementMethods) sql end end @Artist = Class.new(Sequel::Model(@db[:artists])) @Artist.columns :id, :id2 @Album = Class.new(Sequel::Model(@db[:albums])) @Album.columns :id, :artist_id, :id2, :artist_id2 @Tag = Class.new(Sequel::Model(@db[:tags])) @Tag.columns :id, :id2 @Artist.plugin :prepared_statements_associations @Album.plugin :prepared_statements_associations @Artist.one_to_many :albums, :class=>@Album, :key=>:artist_id @Artist.one_to_one :album, :class=>@Album, :key=>:artist_id @Album.many_to_one :artist, :class=>@Artist @Album.many_to_many :tags, :class=>@Tag, :join_table=>:albums_tags, :left_key=>:album_id @Artist.plugin :many_through_many @Artist.many_through_many :tags, [[:albums, :artist_id, :id], [:albums_tags, :album_id, :tag_id]], :class=>@Tag @db.sqls end specify "should run correct SQL for associations" do @Artist.load(:id=>1).albums @db.sqls.should == ["SELECT * FROM albums WHERE (albums.artist_id = 1) -- prepared"] @Artist.load(:id=>1).album @db.sqls.should == ["SELECT * FROM albums WHERE (albums.artist_id = 1) LIMIT 1 -- prepared"]
@Album.load(:id=>1, :artist_id=>2).artist @db.sqls.should == ["SELECT * FROM artists WHERE (artists.id = 2) LIMIT 1 -- prepared"] @Album.load(:id=>1, :artist_id=>2).tags @db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id = tags.id) AND (albums_tags.album_id = 1)) -- prepared"] @Artist.load(:id=>1).tags @db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON ((albums.id = albums_tags.album_id) AND (albums.artist_id = 1)) -- prepared"] end specify "should run correct SQL for composite key associations" do @Artist.one_to_many :albums, :class=>@Album, :key=>[:artist_id, :artist_id2], :primary_key=>[:id, :id2] @Artist.one_to_one :album, :class=>@Album, :key=>[:artist_id, :artist_id2], :primary_key=>[:id, :id2] @Album.many_to_one :artist, :class=>@Artist, :key=>[:artist_id, :artist_id2], :primary_key=>[:id, :id2] @Album.many_to_many :tags, :class=>@Tag, :join_table=>:albums_tags, :left_key=>[:album_id, :album_id2], :right_key=>[:tag_id, :tag_id2], :right_primary_key=>[:id, :id2], :left_primary_key=>[:id, :id2] @Artist.many_through_many :tags, [[:albums, [:artist_id, :artist_id2], [:id, :id2]], [:albums_tags, [:album_id, :album_id2], [:tag_id, :tag_id2]]], :class=>@Tag, :right_primary_key=>[:id, :id2], :left_primary_key=>[:id, :id2] @Artist.load(:id=>1, :id2=>2).albums @db.sqls.should == ["SELECT * FROM albums WHERE ((albums.artist_id = 1) AND (albums.artist_id2 = 2)) -- prepared"] @Artist.load(:id=>1, :id2=>2).album @db.sqls.should == ["SELECT * FROM albums WHERE ((albums.artist_id = 1) AND (albums.artist_id2 = 2)) LIMIT 1 -- prepared"] @Album.load(:id=>1, :artist_id=>2, :artist_id2=>3).artist @db.sqls.should == ["SELECT * FROM artists WHERE ((artists.id = 2) AND (artists.id2 = 3)) LIMIT 1 -- prepared"] @Album.load(:id=>1, :artist_id=>2, :id2=>3).tags @db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id = tags.id) AND (albums_tags.tag_id2 = tags.id2) AND (albums_tags.album_id = 1) AND (albums_tags.album_id2 = 3)) -- prepared"] @Artist.load(:id=>1, :id2=>2).tags @db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id = tags.id) AND (albums_tags.tag_id2 = tags.id2)) INNER JOIN albums ON ((albums.id = albums_tags.album_id) AND (albums.id2 = albums_tags.album_id2) AND (albums.artist_id = 1) AND (albums.artist_id2 = 2)) -- prepared"] end specify "should not run query if no objects can be associated" do @Artist.new.albums.should == [] @Album.new.artist.should == nil @db.sqls.should == [] end specify "should run a regular query if there is a callback" do @Artist.load(:id=>1).albums(proc{|ds| ds}) @db.sqls.should == ["SELECT * FROM albums WHERE (albums.artist_id = 1)"] end specify "should run a regular query if :prepared_statement=>false option is used for the association" do @Artist.one_to_many :albums, :class=>@Album, :key=>:artist_id, :prepared_statement=>false @Artist.load(:id=>1).albums @db.sqls.should == ["SELECT * FROM albums WHERE (albums.artist_id = 1)"] end specify "should run a regular query if unrecognized association is used" do a = @Artist.one_to_many :albums, :class=>@Album, :key=>:artist_id a[:type] = :foo @Artist.load(:id=>1).albums @db.sqls.should == ["SELECT * FROM albums WHERE (albums.artist_id = 1)"] end specify "should run a regular query if a block is used when defining the association" do @Artist.one_to_many :albums, :class=>@Album, :key=>:artist_id do |ds| ds end @Artist.load(:id=>1).albums 
@db.sqls.should == ["SELECT * FROM albums WHERE (albums.artist_id = 1)"] end specify "should use a prepared statement if the associated dataset has conditions" do @Album.dataset = @Album.dataset.where(:a=>2) @Artist.one_to_many :albums, :class=>@Album, :key=>:artist_id @Artist.load(:id=>1).albums @db.sqls.should == ["SELECT * FROM albums WHERE ((a = 2) AND (albums.artist_id = 1)) -- prepared"] end specify "should use a prepared statement if the :conditions association option is used" do @Artist.one_to_many :albums, :class=>@Album, :key=>:artist_id, :conditions=>{:a=>2} @Artist.load(:id=>1).albums @db.sqls.should == ["SELECT * FROM albums WHERE ((a = 2) AND (albums.artist_id = 1)) -- prepared"] end specify "should not use a prepared statement if :conditions association option uses an identifier" do @Artist.one_to_many :albums, :class=>@Album, :key=>:artist_id, :conditions=>{Sequel.identifier('a')=>2} @Artist.load(:id=>1).albums @db.sqls.should == ["SELECT * FROM albums WHERE ((a = 2) AND (albums.artist_id = 1))"] end specify "should run a regular query if :dataset option is used when defining the association" do album = @Album @Artist.one_to_many :albums, :class=>@Album, :dataset=>proc{album.filter(:artist_id=>id)} @Artist.load(:id=>1).albums @db.sqls.should == ["SELECT * FROM albums WHERE (artist_id = 1)"] end specify "should run a regular query if cloning an association that doesn't use prepared statements" do @Artist.one_to_many :albums, :class=>@Album, :key=>:artist_id do |ds| ds end @Artist.one_to_many :oalbums, :clone=>:albums @Artist.load(:id=>1).oalbums @db.sqls.should == ["SELECT * FROM albums WHERE (albums.artist_id = 1)"] end end
ruby-sequel-4.1.1/spec/extensions/prepared_statements_safe_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "prepared_statements_safe plugin" do before do @db = Sequel.mock(:fetch=>{:id=>1, :name=>'foo', :i=>2}, :autoid=>proc{|sql| 1}, :numrows=>1, :servers=>{:read_only=>{}}) @c = Class.new(Sequel::Model(@db[:people])) @c.columns :id, :name, :i @c.instance_variable_set(:@db_schema, {:i=>{}, :name=>{}, :id=>{:primary_key=>true}}) @c.plugin :prepared_statements_safe @p = @c.load(:id=>1, :name=>'foo', :i=>2) @db.sqls end specify "should load the prepared_statements plugin" do @c.plugins.should include(Sequel::Plugins::PreparedStatements) end specify "should set default values correctly" do @c.prepared_statements_column_defaults.should == {:name=>nil, :i=>nil} @c.instance_variable_set(:@db_schema, {:i=>{:default=>'f(x)'}, :name=>{:ruby_default=>'foo'}, :id=>{:primary_key=>true}}) Class.new(@c).prepared_statements_column_defaults.should == {:name=>'foo'} end specify "should set default values when creating" do @c.create @db.sqls.first.should =~ /INSERT INTO people \((i|name), (i|name)\) VALUES \(NULL, NULL\)/ @c.create(:name=>'foo') @db.sqls.first.should =~ /INSERT INTO people
\((i|name), (i|name)\) VALUES \((NULL|'foo'), (NULL|'foo')\)/ @c.create(:name=>'foo', :i=>2) @db.sqls.first.should =~ /INSERT INTO people \((i|name), (i|name)\) VALUES \((2|'foo'), (2|'foo')\)/ end specify "should use database default values" do @c.instance_variable_set(:@db_schema, {:i=>{:ruby_default=>2}, :name=>{:ruby_default=>'foo'}, :id=>{:primary_key=>true}}) c = Class.new(@c) c.create @db.sqls.first.should =~ /INSERT INTO people \((i|name), (i|name)\) VALUES \((2|'foo'), (2|'foo')\)/ end specify "should not set defaults for unparseable dataset default values" do @c.instance_variable_set(:@db_schema, {:i=>{:default=>'f(x)'}, :name=>{:ruby_default=>'foo'}, :id=>{:primary_key=>true}}) c = Class.new(@c) c.create @db.sqls.first.should == "INSERT INTO people (name) VALUES ('foo')" end specify "should save all fields when updating" do @p.update(:i=>3) @db.sqls.first.should =~ /UPDATE people SET (name = 'foo'|i = 3), (name = 'foo'|i = 3) WHERE \(id = 1\)/ end specify "should work with abstract classes" do c = Class.new(Sequel::Model) c.plugin :prepared_statements_safe c1 = Class.new(c) c1.meta_def(:get_db_schema){@db_schema = {:i=>{:default=>'f(x)'}, :name=>{:ruby_default=>'foo'}, :id=>{:primary_key=>true}}} c1.set_dataset(:people) c1.prepared_statements_column_defaults.should == {:name=>'foo'} Class.new(c1).prepared_statements_column_defaults.should == {:name=>'foo'} end end
ruby-sequel-4.1.1/spec/extensions/prepared_statements_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "prepared_statements plugin" do before do @db = Sequel.mock(:fetch=>{:id=>1, :name=>'foo', :i=>2}, :autoid=>proc{|sql| 1}, :numrows=>1, :servers=>{:read_only=>{}}) @c = Class.new(Sequel::Model(@db[:people])) @c.columns :id, :name, :i @c.plugin :prepared_statements @p = @c.load(:id=>1, :name=>'foo', :i=>2) @ds = @c.dataset @db.sqls end specify "should correctly lookup by primary key" do @c[1].should == @p @db.sqls.should == ["SELECT * FROM people WHERE (id = 1) LIMIT 1 -- read_only"] end shared_examples_for "prepared_statements plugin" do specify "should correctly delete instance" do @p.destroy.should == @p @db.sqls.should == ["DELETE FROM people WHERE (id = 1)"] end specify "should correctly update instance" do @p.update(:name=>'bar').should == @c.load(:id=>1, :name=>'bar', :i => 2) @db.sqls.should == ["UPDATE people SET name = 'bar' WHERE (id = 1)"] end specify "should correctly create instance" do @c.create(:name=>'foo').should == @c.load(:id=>1, :name=>'foo', :i => 2) @db.sqls.should == ["INSERT INTO people (name) VALUES ('foo')", "SELECT * FROM people WHERE (id = 1) LIMIT 1"] end specify "should correctly create instance if dataset supports insert_select" do @c.dataset_module do def supports_insert_select?
true end def insert_select(h) self._fetch = {:id=>1, :name=>'foo', :i => 2} returning.server(:default).with_sql(:insert_sql, h).first end def insert_sql(*) "#{super}#{' RETURNING *' if opts.has_key?(:returning)}" end end @c.create(:name=>'foo').should == @c.load(:id=>1, :name=>'foo', :i => 2) @db.sqls.should == ["INSERT INTO people (name) VALUES ('foo') RETURNING *"] end end it_should_behave_like "prepared_statements plugin" describe "when #use_prepared_statements_for? returns false" do before do @c.class_eval{def use_prepared_statements_for?(type) false end} end it_should_behave_like "prepared_statements plugin" end specify "should work correctly when subclassing" do c = Class.new(@c) c[1].should == c.load(:id=>1, :name=>'foo', :i=>2) @db.sqls.should == ["SELECT * FROM people WHERE (id = 1) LIMIT 1 -- read_only"] end describe "with placeholder type specifiers" do before do @ds.meta_def(:requires_placeholder_type_specifiers?){true} end specify "should correctly handle lookup without schema type" do @c[1].should == @p @db.sqls.should == ["SELECT * FROM people WHERE (id = 1) LIMIT 1 -- read_only"] end specify "should correctly handle lookup with schema type" do @c.db_schema[:id][:type] = :integer ds = @c.send(:prepared_lookup) def ds.literal_symbol_append(sql, v) if @opts[:bind_vars] and match = /\A\$(.*)\z/.match(v.to_s) s = match[1].split('__')[0].to_sym if prepared_arg?(s) literal_append(sql, prepared_arg(s)) else sql << v.to_s end else super end end @c[1].should == @p @db.sqls.should == ["SELECT * FROM people WHERE (id = 1) LIMIT 1 -- read_only"] end end end
ruby-sequel-4.1.1/spec/extensions/prepared_statements_with_pk_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "prepared_statements_with_pk plugin" do before do @db = Sequel.mock(:fetch=>{:id=>1, :name=>'foo', :i=>2}, :autoid=>proc{|sql| 1}, :numrows=>1, :servers=>{:read_only=>{}}) @c = Class.new(Sequel::Model(@db[:people])) @c.columns :id, :name, :i @c.plugin :prepared_statements_with_pk @p = @c.load(:id=>1, :name=>'foo', :i=>2) @db.sqls end specify "should load the prepared_statements plugin" do @c.plugins.should include(Sequel::Plugins::PreparedStatements) end specify "should correctly lookup by primary key from dataset" do @c.dataset.filter(:name=>'foo')[1].should == @p @c.db.sqls.should == ["SELECT * FROM people WHERE ((name = 'foo') AND (people.id = 1)) LIMIT 1 -- read_only"] end specify "should still work correctly if there are multiple conflicting variables" do @c.dataset.filter(:name=>'foo').or(:name=>'bar')[1].should == @p @c.db.sqls.should == ["SELECT * FROM people WHERE (((name = 'foo') OR (name = 'bar')) AND (people.id = 1)) LIMIT 1 -- read_only"] end specify "should still work correctly if the primary key is used elsewhere in the query" do @c.dataset.filter{id > 2}[1].should == @p @c.db.sqls.should == ["SELECT * FROM people WHERE
((id > 2) AND (people.id = 1)) LIMIT 1 -- read_only"] end end
ruby-sequel-4.1.1/spec/extensions/pretty_table_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper') require 'stringio' describe "Dataset#print" do before do @output = StringIO.new @orig_stdout = $stdout $stdout = @output @dataset = Sequel.mock(:fetch=>[{:a=>1, :b=>2}, {:a=>3, :b=>4}, {:a=>5, :b=>6}])[:items].extension(:pretty_table) end after do $stdout = @orig_stdout end specify "should print out a table with the values" do @dataset.print(:a, :b) @output.rewind @output.read.should == "+-+-+\n|a|b|\n+-+-+\n|1|2|\n|3|4|\n|5|6|\n+-+-+\n" end specify "should default to the dataset's columns" do @dataset.meta_def(:columns) {[:a, :b]} @dataset.print @output.rewind @output.read.should == "+-+-+\n|a|b|\n+-+-+\n|1|2|\n|3|4|\n|5|6|\n+-+-+\n" end end describe "PrettyTable" do before do @data1 = [ {:x => 3, :y => 4} ] @data2 = [ {:a => 23, :b => 45}, {:a => 45, :b => 2377} ] @data3 = [ {:aaa => 1}, {:bb => 2}, {:c => 3.1} ] @output = StringIO.new @orig_stdout = $stdout $stdout = @output end after do $stdout = @orig_stdout end specify "should infer the columns if not given" do Sequel::PrettyTable.print(@data1) @output.rewind @output.read.should =~ /\n(\|x\|y\|)|(\|y\|x\|)\n/ end specify "should have #string return the string without printing" do Sequel::PrettyTable.string(@data1).should =~ /\n(\|x\|y\|)|(\|y\|x\|)\n/ @output.rewind @output.read.should == '' end specify "should calculate the maximum width of each column correctly" do Sequel::PrettyTable.print(@data2, [:a, :b]) @output.rewind @output.read.should == "+--+----+\n|a |b |\n+--+----+\n|23| 45|\n|45|2377|\n+--+----+\n" end specify "should also take header width into account" do Sequel::PrettyTable.print(@data3, [:aaa, :bb, :c]) @output.rewind @output.read.should == "+---+--+---+\n|aaa|bb|c |\n+---+--+---+\n| 1| | |\n| | 2| |\n| | |3.1|\n+---+--+---+\n" end specify "should print only the specified columns" do Sequel::PrettyTable.print(@data2, [:a]) @output.rewind @output.read.should == "+--+\n|a |\n+--+\n|23|\n|45|\n+--+\n" end end
ruby-sequel-4.1.1/spec/extensions/query_literals_spec.rb
require
File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "query_literals extension" do before do @ds = Sequel.mock.dataset.from(:t).extension(:query_literals) end it "should not use special support if given a block" do @ds.select('a, b, c'){d}.sql.should == 'SELECT \'a, b, c\', d FROM t' end it "should have #select use literal string if given a single string" do @ds.select('a, b, c').sql.should == 'SELECT a, b, c FROM t' end it "should have #select use placeholder literal string if given a string and additional arguments" do @ds.select('a, b, ?', 1).sql.should == 'SELECT a, b, 1 FROM t' end it "should have #select work the standard way if initial string is a literal string already" do @ds.select(Sequel.lit('a, b, ?'), 1).sql.should == 'SELECT a, b, ?, 1 FROM t' end it "should have #select work regularly if not given a string as the first argument" do @ds.select(:a, 1).sql.should == 'SELECT a, 1 FROM t' end describe 'with existing selection' do before do @ds = @ds.select(:d) end it "should have #select_more use literal string if given a single string" do @ds.select_more('a, b, c').sql.should == 'SELECT d, a, b, c FROM t' end it "should have #select_more use placeholder literal string if given a string and additional arguments" do @ds.select_more('a, b, ?', 1).sql.should == 'SELECT d, a, b, 1 FROM t' end it "should have #select_more work the standard way if initial string is a literal string already" do @ds.select_more(Sequel.lit('a, b, ?'), 1).sql.should == 'SELECT d, a, b, ?, 1 FROM t' end it "should have #select_more work regularly if not given a string as the first argument" do @ds.select_more(:a, 1).sql.should == 'SELECT d, a, 1 FROM t' end end it "should have #select_append use literal string if given a single string" do @ds.select_append('a, b, c').sql.should == 'SELECT *, a, b, c FROM t' end it "should have #select_append use placeholder literal string if given a string and additional arguments" do @ds.select_append('a, b, ?', 1).sql.should == 'SELECT *, a, b, 1 FROM t' end it "should have #select_append work the standard way if initial string is a literal string already" do @ds.select_append(Sequel.lit('a, b, ?'), 1).sql.should == 'SELECT *, a, b, ?, 1 FROM t' end it "should have #select_append work regularly if not given a string as the first argument" do @ds.select_append(:a, 1).sql.should == 'SELECT *, a, 1 FROM t' end it "should have #select_group use literal string if given a single string" do @ds.select_group('a, b, c').sql.should == 'SELECT a, b, c FROM t GROUP BY a, b, c' end it "should have #select_group use placeholder literal string if given a string and additional arguments" do @ds.select_group('a, b, ?', 1).sql.should == 'SELECT a, b, 1 FROM t GROUP BY a, b, 1' end it "should have #select_group work the standard way if initial string is a literal string already" do @ds.select_group(Sequel.lit('a, b, ?'), 1).sql.should == 'SELECT a, b, ?, 1 FROM t GROUP BY a, b, ?, 1' end it "should have #select_group work regularly if not given a string as the first argument" do @ds.select_group(:a, 1).sql.should == 'SELECT a, 1 FROM t GROUP BY a, 1' end it "should have #group use literal string if given a single string" do @ds.group('a, b, c').sql.should == 'SELECT * FROM t GROUP BY a, b, c' end it "should have #group use placeholder literal string if given a string and additional arguments" do @ds.group('a, b, ?', 1).sql.should == 'SELECT * FROM t GROUP BY a, b, 1' end it "should have #group work the standard way if initial string is a literal string already" do 
@ds.group(Sequel.lit('a, b, ?'), 1).sql.should == 'SELECT * FROM t GROUP BY a, b, ?, 1' end it "should have #group work regularly if not given a string as the first argument" do @ds.group(:a, 1).sql.should == 'SELECT * FROM t GROUP BY a, 1' end it "should have #group_and_count use literal string if given a single string" do @ds.group_and_count('a, b, c').sql.should == 'SELECT a, b, c, count(*) AS count FROM t GROUP BY a, b, c' end it "should have #group_and_count use placeholder literal string if given a string and additional arguments" do @ds.group_and_count('a, b, ?', 1).sql.should == 'SELECT a, b, 1, count(*) AS count FROM t GROUP BY a, b, 1' end it "should have #group_and_count work the standard way if initial string is a literal string already" do @ds.group_and_count(Sequel.lit('a, b, ?'), 1).sql.should == 'SELECT a, b, ?, 1, count(*) AS count FROM t GROUP BY a, b, ?, 1' end it "should have #group_and_count work regularly if not given a string as the first argument" do @ds.group_and_count(:a, 1).sql.should == 'SELECT a, 1, count(*) AS count FROM t GROUP BY a, 1' end it "should have #order use literal string if given a single string" do @ds.order('a, b, c').sql.should == 'SELECT * FROM t ORDER BY a, b, c' end it "should have #order use placeholder literal string if given a string and additional arguments" do @ds.order('a, b, ?', 1).sql.should == 'SELECT * FROM t ORDER BY a, b, 1' end it "should have #order work the standard way if initial string is a literal string already" do @ds.order(Sequel.lit('a, b, ?'), 1).sql.should == 'SELECT * FROM t ORDER BY a, b, ?, 1' end it "should have #order work regularly if not given a string as the first argument" do @ds.order(:a, 1).sql.should == 'SELECT * FROM t ORDER BY a, 1' end describe 'with existing order' do before do @ds = @ds.order(:d) end it "should have #order_more use literal string if given a single string" do @ds.order_more('a, b, c').sql.should == 'SELECT * FROM t ORDER BY d, a, b, c' end it "should have #order_more use placeholder literal string if given a string and additional arguments" do @ds.order_more('a, b, ?', 1).sql.should == 'SELECT * FROM t ORDER BY d, a, b, 1' end it "should have #order_more work the standard way if initial string is a literal string already" do @ds.order_more(Sequel.lit('a, b, ?'), 1).sql.should == 'SELECT * FROM t ORDER BY d, a, b, ?, 1' end it "should have #order_more work regularly if not given a string as the first argument" do @ds.order_more(:a, 1).sql.should == 'SELECT * FROM t ORDER BY d, a, 1' end it "should have #order_prepend use literal string if given a single string" do @ds.order_prepend('a, b, c').sql.should == 'SELECT * FROM t ORDER BY a, b, c, d' end it "should have #order_prepend use placeholder literal string if given a string and additional arguments" do @ds.order_prepend('a, b, ?', 1).sql.should == 'SELECT * FROM t ORDER BY a, b, 1, d' end it "should have #order_prepend work the standard way if initial string is a literal string already" do @ds.order_prepend(Sequel.lit('a, b, ?'), 1).sql.should == 'SELECT * FROM t ORDER BY a, b, ?, 1, d' end it "should have #order_prepend work regularly if not given a string as the first argument" do @ds.order_prepend(:a, 1).sql.should == 'SELECT * FROM t ORDER BY a, 1, d' end end end
ruby-sequel-4.1.1/spec/extensions/query_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "Database#query" do before do @db = Sequel.mock.extension(:query) end specify "should delegate to Dataset#query if block is provided" do @d = @db.query {select :x; from :y} @d.should be_a_kind_of(Sequel::Dataset) @d.sql.should == "SELECT x FROM y" end end describe "Dataset#query" do before do @d = Sequel.mock.dataset.extension(:query) end specify "should allow cloning without arguments" do q = @d.query {clone} q.class.should == @d.class q.sql.should == "SELECT *" end specify "should support #from" do q = @d.query {from :xxx} q.class.should == @d.class q.sql.should == "SELECT * FROM xxx" end specify "should support #select" do q = @d.query do select :a, :b___mongo from :yyy end q.class.should == @d.class q.sql.should == "SELECT a, b AS mongo FROM yyy" end specify "should support #where" do q = @d.query do from :zzz where{x + 2 > Sequel.expr(:y) + 3} end q.class.should == @d.class q.sql.should == "SELECT * FROM zzz WHERE ((x + 2) > (y + 3))" q = @d.from(:zzz).query do where{(x > 1) & (Sequel.expr(:y) > 2)} end q.class.should == @d.class q.sql.should == "SELECT * FROM zzz WHERE ((x > 1) AND (y > 2))" q = @d.from(:zzz).query do where :x => 33 end q.class.should == @d.class q.sql.should == "SELECT * FROM zzz WHERE (x = 33)" end specify "should support #group_by and #having" do q = @d.query do from :abc group_by :id having{x >= 2} end q.class.should == @d.class q.sql.should == "SELECT * FROM abc GROUP BY id HAVING (x >= 2)" end specify "should support #order, #order_by" do q = @d.query do from :xyz order_by :stamp end q.class.should == @d.class q.sql.should == "SELECT * FROM xyz ORDER BY stamp" end specify "should support blocks that end in nil" do condition = false q = @d.query do from :xyz order_by :stamp if condition end q.sql.should == "SELECT * FROM xyz" end specify "should raise on non-chainable method calls" do proc {@d.query {row_proc}}.should raise_error(Sequel::Error) proc {@d.query {all}}.should raise_error(Sequel::Error) end end
ruby-sequel-4.1.1/spec/extensions/rcte_tree_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe Sequel::Model, "rcte_tree" do before do @c = Class.new(Sequel::Model(DB[:nodes])) @c.class_eval do def self.name; 'Node'; end columns :id, :name, :parent_id, :i, :pi end @ds = @c.dataset @o = @c.load(:id=>2, :parent_id=>1, :name=>'AA', :i=>3, :pi=>4) DB.reset end it "should define the correct associations" do @c.plugin :rcte_tree @c.associations.sort_by{|x| x.to_s}.should == [:ancestors, :children, :descendants, :parent] end it "should define the correct associations when giving options" do @c.plugin :rcte_tree, :ancestors=>{:name=>:as}, :children=>{:name=>:cs}, :descendants=>{:name=>:ds}, :parent=>{:name=>:p} @c.associations.sort_by{|x| x.to_s}.should == [:as, :cs, :ds, :p] end it "should use the correct SQL for lazy associations" do @c.plugin :rcte_tree @o.parent_dataset.sql.should == 'SELECT * FROM nodes WHERE (nodes.id = 1) LIMIT 1' @o.children_dataset.sql.should == 'SELECT * FROM nodes WHERE (nodes.parent_id = 2)' @o.ancestors_dataset.sql.should == 'WITH t AS (SELECT * FROM nodes WHERE (id = 1) UNION ALL SELECT nodes.* FROM nodes INNER JOIN t ON (t.parent_id = nodes.id)) SELECT * FROM t AS nodes' @o.descendants_dataset.sql.should == 'WITH t AS (SELECT * FROM nodes WHERE (parent_id = 2) UNION ALL SELECT nodes.* FROM nodes INNER JOIN t ON (t.id = nodes.parent_id)) SELECT * FROM t AS nodes' end it "should use the correct SQL for lazy associations when recursive CTEs require column aliases" do @c.dataset.meta_def(:recursive_cte_requires_column_aliases?){true} @c.plugin :rcte_tree @o.parent_dataset.sql.should == 'SELECT * FROM nodes WHERE (nodes.id = 1) LIMIT 1' @o.children_dataset.sql.should == 'SELECT * FROM nodes WHERE (nodes.parent_id = 2)' @o.ancestors_dataset.sql.should == 'WITH t(id, name, parent_id, i, pi) AS (SELECT id, name, parent_id, i, pi FROM nodes WHERE (id = 1) UNION ALL SELECT nodes.id, nodes.name, nodes.parent_id, nodes.i, nodes.pi FROM nodes INNER JOIN t ON (t.parent_id = nodes.id)) SELECT * FROM t AS nodes' @o.descendants_dataset.sql.should == 'WITH t(id, name, parent_id, i, pi) AS (SELECT id, name, parent_id, i, pi FROM nodes WHERE (parent_id = 2) UNION ALL SELECT nodes.id, nodes.name, nodes.parent_id, nodes.i, nodes.pi FROM nodes INNER JOIN t ON (t.id = nodes.parent_id)) SELECT * FROM t AS nodes' end it "should use the correct SQL for lazy associations when giving options" do @c.plugin :rcte_tree, :primary_key=>:i, :key=>:pi, :cte_name=>:cte, :order=>:name, :ancestors=>{:name=>:as}, :children=>{:name=>:cs}, :descendants=>{:name=>:ds}, :parent=>{:name=>:p} @o.p_dataset.sql.should == 'SELECT * FROM nodes WHERE (nodes.i = 4) ORDER BY name LIMIT 1' @o.cs_dataset.sql.should == 'SELECT * FROM nodes WHERE (nodes.pi = 3) ORDER BY name' @o.as_dataset.sql.should == 'WITH cte AS (SELECT * FROM nodes
WHERE (i = 4) UNION ALL SELECT nodes.* FROM nodes INNER JOIN cte ON (cte.pi = nodes.i)) SELECT * FROM cte AS nodes ORDER BY name' @o.ds_dataset.sql.should == 'WITH cte AS (SELECT * FROM nodes WHERE (pi = 3) UNION ALL SELECT nodes.* FROM nodes INNER JOIN cte ON (cte.i = nodes.pi)) SELECT * FROM cte AS nodes ORDER BY name' end it "should use the correct SQL for lazy associations with :conditions option" do @c.plugin :rcte_tree, :conditions => {:i => 1} @o.parent_dataset.sql.should == 'SELECT * FROM nodes WHERE ((i = 1) AND (nodes.id = 1)) LIMIT 1' @o.children_dataset.sql.should == 'SELECT * FROM nodes WHERE ((i = 1) AND (nodes.parent_id = 2))' @o.ancestors_dataset.sql.should == 'WITH t AS (SELECT * FROM nodes WHERE ((id = 1) AND (i = 1)) UNION ALL SELECT nodes.* FROM nodes INNER JOIN t ON (t.parent_id = nodes.id) WHERE (i = 1)) SELECT * FROM t AS nodes WHERE (i = 1)' @o.descendants_dataset.sql.should == 'WITH t AS (SELECT * FROM nodes WHERE ((parent_id = 2) AND (i = 1)) UNION ALL SELECT nodes.* FROM nodes INNER JOIN t ON (t.id = nodes.parent_id) WHERE (i = 1)) SELECT * FROM t AS nodes WHERE (i = 1)' end it "should add all parent associations when lazily loading ancestors" do @c.plugin :rcte_tree @ds._fetch = [[{:id=>1, :name=>'A', :parent_id=>3}, {:id=>4, :name=>'B', :parent_id=>nil}, {:id=>3, :name=>'?', :parent_id=>4}]] @o.ancestors.should == [@c.load(:id=>1, :name=>'A', :parent_id=>3), @c.load(:id=>4, :name=>'B', :parent_id=>nil), @c.load(:id=>3, :name=>'?', :parent_id=>4)] @o.associations[:parent].should == @c.load(:id=>1, :name=>'A', :parent_id=>3) @o.associations[:parent].associations[:parent].should == @c.load(:id=>3, :name=>'?', :parent_id=>4) @o.associations[:parent].associations[:parent].associations[:parent].should == @c.load(:id=>4, :name=>'B', :parent_id=>nil) @o.associations[:parent].associations[:parent].associations[:parent].associations.fetch(:parent, 1).should == nil end it "should add all parent associations when lazily loading ancestors and giving options" do @c.plugin :rcte_tree, :primary_key=>:i, :key=>:pi, :ancestors=>{:name=>:as}, :parent=>{:name=>:p} @ds._fetch = [[{:i=>4, :name=>'A', :pi=>5}, {:i=>6, :name=>'B', :pi=>nil}, {:i=>5, :name=>'?', :pi=>6}]] @o.as.should == [@c.load(:i=>4, :name=>'A', :pi=>5), @c.load(:i=>6, :name=>'B', :pi=>nil), @c.load(:i=>5, :name=>'?', :pi=>6)] @o.associations[:p].should == @c.load(:i=>4, :name=>'A', :pi=>5) @o.associations[:p].associations[:p].should == @c.load(:i=>5, :name=>'?', :pi=>6) @o.associations[:p].associations[:p].associations[:p].should == @c.load(:i=>6, :name=>'B', :pi=>nil) @o.associations[:p].associations[:p].associations[:p].associations.fetch(:p, 1).should == nil end it "should add all children associations when lazily loading descendants" do @c.plugin :rcte_tree @ds._fetch = [[{:id=>3, :name=>'??', :parent_id=>1}, {:id=>1, :name=>'A', :parent_id=>2}, {:id=>4, :name=>'B', :parent_id=>2}, {:id=>5, :name=>'?', :parent_id=>3}]] @o.descendants.should == [@c.load(:id=>3, :name=>'??', :parent_id=>1), @c.load(:id=>1, :name=>'A', :parent_id=>2), @c.load(:id=>4, :name=>'B', :parent_id=>2), @c.load(:id=>5, :name=>'?', :parent_id=>3)] @o.associations[:children].should == [@c.load(:id=>1, :name=>'A', :parent_id=>2), @c.load(:id=>4, :name=>'B', :parent_id=>2)] @o.associations[:children].map{|c1| c1.associations[:children]}.should == [[@c.load(:id=>3, :name=>'??', :parent_id=>1)], []] @o.associations[:children].map{|c1| c1.associations[:children].map{|c2| c2.associations[:children]}}.should == [[[@c.load(:id=>5, :name=>'?', 
:parent_id=>3)]], []] @o.associations[:children].map{|c1| c1.associations[:children].map{|c2| c2.associations[:children].map{|c3| c3.associations[:children]}}}.should == [[[[]]], []] end it "should add all children associations when lazily loading descendants and giving options" do @c.plugin :rcte_tree, :primary_key=>:i, :key=>:pi, :children=>{:name=>:cs}, :descendants=>{:name=>:ds} @ds._fetch = [[{:i=>7, :name=>'??', :pi=>5}, {:i=>5, :name=>'A', :pi=>3}, {:i=>6, :name=>'B', :pi=>3}, {:i=>8, :name=>'?', :pi=>7}]] @o.ds.should == [@c.load(:i=>7, :name=>'??', :pi=>5), @c.load(:i=>5, :name=>'A', :pi=>3), @c.load(:i=>6, :name=>'B', :pi=>3), @c.load(:i=>8, :name=>'?', :pi=>7)] @o.associations[:cs].should == [@c.load(:i=>5, :name=>'A', :pi=>3), @c.load(:i=>6, :name=>'B', :pi=>3)] @o.associations[:cs].map{|c1| c1.associations[:cs]}.should == [[@c.load(:i=>7, :name=>'??', :pi=>5)], []] @o.associations[:cs].map{|c1| c1.associations[:cs].map{|c2| c2.associations[:cs]}}.should == [[[@c.load(:i=>8, :name=>'?', :pi=>7)]], []] @o.associations[:cs].map{|c1| c1.associations[:cs].map{|c2| c2.associations[:cs].map{|c3| c3.associations[:cs]}}}.should == [[[[]]], []] end it "should eagerly load ancestors" do @c.plugin :rcte_tree @ds._fetch = [[{:id=>2, :parent_id=>1, :name=>'AA'}, {:id=>6, :parent_id=>2, :name=>'C'}, {:id=>7, :parent_id=>1, :name=>'D'}, {:id=>9, :parent_id=>nil, :name=>'E'}], [{:id=>2, :name=>'AA', :parent_id=>1, :x_root_x=>2}, {:id=>1, :name=>'00', :parent_id=>8, :x_root_x=>1}, {:id=>1, :name=>'00', :parent_id=>8, :x_root_x=>2}, {:id=>8, :name=>'?', :parent_id=>nil, :x_root_x=>2}, {:id=>8, :name=>'?', :parent_id=>nil, :x_root_x=>1}]] os = @ds.eager(:ancestors).all sqls = DB.sqls sqls.first.should == "SELECT * FROM nodes" sqls.last.should =~ /WITH t AS \(SELECT id AS x_root_x, nodes\.\* FROM nodes WHERE \(id IN \([12], [12]\)\) UNION ALL SELECT t\.x_root_x, nodes\.\* FROM nodes INNER JOIN t ON \(t\.parent_id = nodes\.id\)\) SELECT \* FROM t AS nodes/ os.should == [@c.load(:id=>2, :parent_id=>1, :name=>'AA'), @c.load(:id=>6, :parent_id=>2, :name=>'C'), @c.load(:id=>7, :parent_id=>1, :name=>'D'), @c.load(:id=>9, :parent_id=>nil, :name=>'E')] os.map{|o| o.ancestors}.should == [[@c.load(:id=>1, :name=>'00', :parent_id=>8), @c.load(:id=>8, :name=>'?', :parent_id=>nil)], [@c.load(:id=>2, :name=>'AA', :parent_id=>1), @c.load(:id=>1, :name=>'00', :parent_id=>8), @c.load(:id=>8, :name=>'?', :parent_id=>nil)], [@c.load(:id=>1, :name=>'00', :parent_id=>8), @c.load(:id=>8, :name=>'?', :parent_id=>nil)], []] os.map{|o| o.parent}.should == [@c.load(:id=>1, :name=>'00', :parent_id=>8), @c.load(:id=>2, :name=>'AA', :parent_id=>1), @c.load(:id=>1, :name=>'00', :parent_id=>8), nil] os.map{|o| o.parent.parent if o.parent}.should == [@c.load(:id=>8, :name=>'?', :parent_id=>nil), @c.load(:id=>1, :name=>'00', :parent_id=>8), @c.load(:id=>8, :name=>'?', :parent_id=>nil), nil] os.map{|o| o.parent.parent.parent if o.parent and o.parent.parent}.should == [nil, @c.load(:id=>8, :name=>'?', :parent_id=>nil), nil, nil] os.map{|o| o.parent.parent.parent.parent if o.parent and o.parent.parent and o.parent.parent.parent}.should == [nil, nil, nil, nil] DB.sqls.should == [] end it "should eagerly load ancestors when giving options" do @c.plugin :rcte_tree, :primary_key=>:i, :key=>:pi, :key_alias=>:kal, :cte_name=>:cte, :ancestors=>{:name=>:as}, :parent=>{:name=>:p} @ds._fetch = [[{:i=>2, :pi=>1, :name=>'AA'}, {:i=>6, :pi=>2, :name=>'C'}, {:i=>7, :pi=>1, :name=>'D'}, {:i=>9, :pi=>nil, :name=>'E'}], [{:i=>2, :name=>'AA', :pi=>1, 
:kal=>2}, {:i=>1, :name=>'00', :pi=>8, :kal=>1}, {:i=>1, :name=>'00', :pi=>8, :kal=>2}, {:i=>8, :name=>'?', :pi=>nil, :kal=>2}, {:i=>8, :name=>'?', :pi=>nil, :kal=>1}]] os = @ds.eager(:as).all sqls = DB.sqls sqls.first.should == "SELECT * FROM nodes" sqls.last.should =~ /WITH cte AS \(SELECT i AS kal, nodes\.\* FROM nodes WHERE \(i IN \([12], [12]\)\) UNION ALL SELECT cte\.kal, nodes\.\* FROM nodes INNER JOIN cte ON \(cte\.pi = nodes\.i\)\) SELECT \* FROM cte/ os.should == [@c.load(:i=>2, :pi=>1, :name=>'AA'), @c.load(:i=>6, :pi=>2, :name=>'C'), @c.load(:i=>7, :pi=>1, :name=>'D'), @c.load(:i=>9, :pi=>nil, :name=>'E')] os.map{|o| o.as}.should == [[@c.load(:i=>1, :name=>'00', :pi=>8), @c.load(:i=>8, :name=>'?', :pi=>nil)], [@c.load(:i=>2, :name=>'AA', :pi=>1), @c.load(:i=>1, :name=>'00', :pi=>8), @c.load(:i=>8, :name=>'?', :pi=>nil)], [@c.load(:i=>1, :name=>'00', :pi=>8), @c.load(:i=>8, :name=>'?', :pi=>nil)], []] os.map{|o| o.p}.should == [@c.load(:i=>1, :name=>'00', :pi=>8), @c.load(:i=>2, :name=>'AA', :pi=>1), @c.load(:i=>1, :name=>'00', :pi=>8), nil] os.map{|o| o.p.p if o.p}.should == [@c.load(:i=>8, :name=>'?', :pi=>nil), @c.load(:i=>1, :name=>'00', :pi=>8), @c.load(:i=>8, :name=>'?', :pi=>nil), nil] os.map{|o| o.p.p.p if o.p and o.p.p}.should == [nil, @c.load(:i=>8, :name=>'?', :pi=>nil), nil, nil] os.map{|o| o.p.p.p.p if o.p and o.p.p and o.p.p.p}.should == [nil, nil, nil, nil] end it "should eagerly load ancestors respecting association option :conditions" do @c.plugin :rcte_tree, :conditions => {:i => 1} @ds._fetch = [[{:id=>2, :parent_id=>1, :name=>'AA'}, {:id=>6, :parent_id=>2, :name=>'C'}, {:id=>7, :parent_id=>1, :name=>'D'}, {:id=>9, :parent_id=>nil, :name=>'E'}], [{:id=>2, :name=>'AA', :parent_id=>1, :x_root_x=>2}, {:id=>1, :name=>'00', :parent_id=>8, :x_root_x=>1}, {:id=>1, :name=>'00', :parent_id=>8, :x_root_x=>2}, {:id=>8, :name=>'?', :parent_id=>nil, :x_root_x=>2}, {:id=>8, :name=>'?', :parent_id=>nil, :x_root_x=>1}]] @ds.eager(:ancestors).all sqls = DB.sqls sqls.first.should == "SELECT * FROM nodes" sqls.last.should =~ /WITH t AS \(SELECT id AS x_root_x, nodes\.\* FROM nodes WHERE \(\(id IN \([12], [12]\)\) AND \(i = 1\)\) UNION ALL SELECT t\.x_root_x, nodes\.\* FROM nodes INNER JOIN t ON \(t\.parent_id = nodes\.id\) WHERE \(i = 1\)\) SELECT \* FROM t AS nodes WHERE \(i = 1\)/ end it "should eagerly load descendants" do @c.plugin :rcte_tree @ds._fetch = [[{:id=>2, :parent_id=>1, :name=>'AA'}, {:id=>6, :parent_id=>2, :name=>'C'}, {:id=>7, :parent_id=>1, :name=>'D'}], [{:id=>6, :parent_id=>2, :name=>'C', :x_root_x=>2}, {:id=>9, :parent_id=>2, :name=>'E', :x_root_x=>2}, {:id=>3, :name=>'00', :parent_id=>6, :x_root_x=>6}, {:id=>3, :name=>'00', :parent_id=>6, :x_root_x=>2}, {:id=>4, :name=>'?', :parent_id=>7, :x_root_x=>7}, {:id=>5, :name=>'?', :parent_id=>4, :x_root_x=>7}]] os = @ds.eager(:descendants).all sqls = DB.sqls sqls.first.should == "SELECT * FROM nodes" sqls.last.should =~ /WITH t AS \(SELECT parent_id AS x_root_x, nodes\.\* FROM nodes WHERE \(parent_id IN \([267], [267], [267]\)\) UNION ALL SELECT t\.x_root_x, nodes\.\* FROM nodes INNER JOIN t ON \(t\.id = nodes\.parent_id\)\) SELECT \* FROM t AS nodes/ os.should == [@c.load(:id=>2, :parent_id=>1, :name=>'AA'), @c.load(:id=>6, :parent_id=>2, :name=>'C'), @c.load(:id=>7, :parent_id=>1, :name=>'D')] os.map{|o| o.descendants}.should == [[@c.load(:id=>6, :parent_id=>2, :name=>'C'), @c.load(:id=>9, :parent_id=>2, :name=>'E'), @c.load(:id=>3, :name=>'00', :parent_id=>6)], [@c.load(:id=>3, :name=>'00', :parent_id=>6)], 
[@c.load(:id=>4, :name=>'?', :parent_id=>7), @c.load(:id=>5, :name=>'?', :parent_id=>4)]] os.map{|o| o.children}.should == [[@c.load(:id=>6, :parent_id=>2, :name=>'C'), @c.load(:id=>9, :parent_id=>2, :name=>'E')], [@c.load(:id=>3, :name=>'00', :parent_id=>6)], [@c.load(:id=>4, :name=>'?', :parent_id=>7)]] os.map{|o1| o1.children.map{|o2| o2.children}}.should == [[[@c.load(:id=>3, :name=>'00', :parent_id=>6)], []], [[]], [[@c.load(:id=>5, :name=>'?', :parent_id=>4)]]] os.map{|o1| o1.children.map{|o2| o2.children.map{|o3| o3.children}}}.should == [[[[]], []], [[]], [[[]]]] DB.sqls.should == [] end it "should eagerly load descendants when giving options" do @c.plugin :rcte_tree, :primary_key=>:i, :key=>:pi, :key_alias=>:kal, :cte_name=>:cte, :children=>{:name=>:cs}, :descendants=>{:name=>:ds} @ds._fetch = [[{:i=>2, :pi=>1, :name=>'AA'}, {:i=>6, :pi=>2, :name=>'C'}, {:i=>7, :pi=>1, :name=>'D'}], [{:i=>6, :pi=>2, :name=>'C', :kal=>2}, {:i=>9, :pi=>2, :name=>'E', :kal=>2}, {:i=>3, :name=>'00', :pi=>6, :kal=>6}, {:i=>3, :name=>'00', :pi=>6, :kal=>2}, {:i=>4, :name=>'?', :pi=>7, :kal=>7}, {:i=>5, :name=>'?', :pi=>4, :kal=>7}]] os = @ds.eager(:ds).all sqls = DB.sqls sqls.first.should == "SELECT * FROM nodes" sqls.last.should =~ /WITH cte AS \(SELECT pi AS kal, nodes\.\* FROM nodes WHERE \(pi IN \([267], [267], [267]\)\) UNION ALL SELECT cte\.kal, nodes\.\* FROM nodes INNER JOIN cte ON \(cte\.i = nodes\.pi\)\) SELECT \* FROM cte/ os.should == [@c.load(:i=>2, :pi=>1, :name=>'AA'), @c.load(:i=>6, :pi=>2, :name=>'C'), @c.load(:i=>7, :pi=>1, :name=>'D')] os.map{|o| o.ds}.should == [[@c.load(:i=>6, :pi=>2, :name=>'C'), @c.load(:i=>9, :pi=>2, :name=>'E'), @c.load(:i=>3, :name=>'00', :pi=>6)], [@c.load(:i=>3, :name=>'00', :pi=>6)], [@c.load(:i=>4, :name=>'?', :pi=>7), @c.load(:i=>5, :name=>'?', :pi=>4)]] os.map{|o| o.cs}.should == [[@c.load(:i=>6, :pi=>2, :name=>'C'), @c.load(:i=>9, :pi=>2, :name=>'E')], [@c.load(:i=>3, :name=>'00', :pi=>6)], [@c.load(:i=>4, :name=>'?', :pi=>7)]] os.map{|o1| o1.cs.map{|o2| o2.cs}}.should == [[[@c.load(:i=>3, :name=>'00', :pi=>6)], []], [[]], [[@c.load(:i=>5, :name=>'?', :pi=>4)]]] os.map{|o1| o1.cs.map{|o2| o2.cs.map{|o3| o3.cs}}}.should == [[[[]], []], [[]], [[[]]]] DB.sqls.should == [] end it "should eagerly load descendants to a given level" do @c.plugin :rcte_tree @ds._fetch = [[{:id=>2, :parent_id=>1, :name=>'AA'}, {:id=>6, :parent_id=>2, :name=>'C'}, {:id=>7, :parent_id=>1, :name=>'D'}], [{:id=>6, :parent_id=>2, :name=>'C', :x_root_x=>2, :x_level_x=>0}, {:id=>9, :parent_id=>2, :name=>'E', :x_root_x=>2, :x_level_x=>0}, {:id=>3, :name=>'00', :parent_id=>6, :x_root_x=>6, :x_level_x=>0}, {:id=>3, :name=>'00', :parent_id=>6, :x_root_x=>2, :x_level_x=>1}, {:id=>4, :name=>'?', :parent_id=>7, :x_root_x=>7, :x_level_x=>0}, {:id=>5, :name=>'?', :parent_id=>4, :x_root_x=>7, :x_level_x=>1}]] os = @ds.eager(:descendants=>2).all sqls = DB.sqls sqls.first.should == "SELECT * FROM nodes" sqls.last.should =~ /WITH t AS \(SELECT parent_id AS x_root_x, nodes\.\*, 0 AS x_level_x FROM nodes WHERE \(parent_id IN \([267], [267], [267]\)\) UNION ALL SELECT t\.x_root_x, nodes\.\*, \(t\.x_level_x \+ 1\) AS x_level_x FROM nodes INNER JOIN t ON \(t\.id = nodes\.parent_id\) WHERE \(t\.x_level_x < 1\)\) SELECT \* FROM t AS nodes/ os.should == [@c.load(:id=>2, :parent_id=>1, :name=>'AA'), @c.load(:id=>6, :parent_id=>2, :name=>'C'), @c.load(:id=>7, :parent_id=>1, :name=>'D')] os.map{|o| o.descendants}.should == [[@c.load(:id=>6, :parent_id=>2, :name=>'C'), @c.load(:id=>9, :parent_id=>2, 
:name=>'E'), @c.load(:id=>3, :name=>'00', :parent_id=>6)], [@c.load(:id=>3, :name=>'00', :parent_id=>6)], [@c.load(:id=>4, :name=>'?', :parent_id=>7), @c.load(:id=>5, :name=>'?', :parent_id=>4)]] os.map{|o| o.associations[:children]}.should == [[@c.load(:id=>6, :parent_id=>2, :name=>'C'), @c.load(:id=>9, :parent_id=>2, :name=>'E')], [@c.load(:id=>3, :name=>'00', :parent_id=>6)], [@c.load(:id=>4, :name=>'?', :parent_id=>7)]] os.map{|o1| o1.associations[:children].map{|o2| o2.associations[:children]}}.should == [[[@c.load(:id=>3, :name=>'00', :parent_id=>6)], []], [[]], [[@c.load(:id=>5, :name=>'?', :parent_id=>4)]]] os.map{|o1| o1.associations[:children].map{|o2| o2.associations[:children].map{|o3| o3.associations[:children]}}}.should == [[[[]], []], [[]], [[nil]]] DB.sqls.should == [] end it "should eagerly load descendants to a given level when giving options" do @c.plugin :rcte_tree, :primary_key=>:i, :key=>:pi, :key_alias=>:kal, :level_alias=>:lal, :cte_name=>:cte, :children=>{:name=>:cs}, :descendants=>{:name=>:ds} @ds._fetch = [[{:i=>2, :pi=>1, :name=>'AA'}, {:i=>6, :pi=>2, :name=>'C'}, {:i=>7, :pi=>1, :name=>'D'}], [{:i=>6, :pi=>2, :name=>'C', :kal=>2, :lal=>0}, {:i=>9, :pi=>2, :name=>'E', :kal=>2, :lal=>0}, {:i=>3, :name=>'00', :pi=>6, :kal=>6, :lal=>0}, {:i=>3, :name=>'00', :pi=>6, :kal=>2, :lal=>1}, {:i=>4, :name=>'?', :pi=>7, :kal=>7, :lal=>0}, {:i=>5, :name=>'?', :pi=>4, :kal=>7, :lal=>1}]] os = @ds.eager(:ds=>2).all sqls = DB.sqls sqls.first.should == "SELECT * FROM nodes" sqls.last.should =~ /WITH cte AS \(SELECT pi AS kal, nodes\.\*, 0 AS lal FROM nodes WHERE \(pi IN \([267], [267], [267]\)\) UNION ALL SELECT cte\.kal, nodes\.\*, \(cte\.lal \+ 1\) AS lal FROM nodes INNER JOIN cte ON \(cte\.i = nodes\.pi\) WHERE \(cte\.lal < 1\)\) SELECT \* FROM cte/ os.should == [@c.load(:i=>2, :pi=>1, :name=>'AA'), @c.load(:i=>6, :pi=>2, :name=>'C'), @c.load(:i=>7, :pi=>1, :name=>'D')] os.map{|o| o.ds}.should == [[@c.load(:i=>6, :pi=>2, :name=>'C'), @c.load(:i=>9, :pi=>2, :name=>'E'), @c.load(:i=>3, :name=>'00', :pi=>6)], [@c.load(:i=>3, :name=>'00', :pi=>6)], [@c.load(:i=>4, :name=>'?', :pi=>7), @c.load(:i=>5, :name=>'?', :pi=>4)]] os.map{|o| o.associations[:cs]}.should == [[@c.load(:i=>6, :pi=>2, :name=>'C'), @c.load(:i=>9, :pi=>2, :name=>'E')], [@c.load(:i=>3, :name=>'00', :pi=>6)], [@c.load(:i=>4, :name=>'?', :pi=>7)]] os.map{|o1| o1.associations[:cs].map{|o2| o2.associations[:cs]}}.should == [[[@c.load(:i=>3, :name=>'00', :pi=>6)], []], [[]], [[@c.load(:i=>5, :name=>'?', :pi=>4)]]] os.map{|o1| o1.associations[:cs].map{|o2| o2.associations[:cs].map{|o3| o3.associations[:cs]}}}.should == [[[[]], []], [[]], [[nil]]] DB.sqls.should == [] end it "should eagerly load descendants respecting association option :conditions" do @c.plugin :rcte_tree, :conditions => {:i => 1} @ds._fetch = [[{:id=>2, :parent_id=>1, :name=>'AA'}, {:id=>6, :parent_id=>2, :name=>'C'}, {:id=>7, :parent_id=>1, :name=>'D'}], [{:id=>6, :parent_id=>2, :name=>'C', :x_root_x=>2}, {:id=>9, :parent_id=>2, :name=>'E', :x_root_x=>2}, {:id=>3, :name=>'00', :parent_id=>6, :x_root_x=>6}, {:id=>3, :name=>'00', :parent_id=>6, :x_root_x=>2}, {:id=>4, :name=>'?', :parent_id=>7, :x_root_x=>7}, {:id=>5, :name=>'?', :parent_id=>4, :x_root_x=>7}]] @ds.eager(:descendants).all sqls = DB.sqls sqls.first.should == "SELECT * FROM nodes" sqls.last.should =~ /WITH t AS \(SELECT parent_id AS x_root_x, nodes\.\* FROM nodes WHERE \(\(parent_id IN \([267], [267], [267]\)\) AND \(i = 1\)\) UNION ALL SELECT t\.x_root_x, nodes\.\* FROM nodes INNER JOIN t 
ON \(t\.id = nodes\.parent_id\) WHERE \(i = 1\)\) SELECT \* FROM t AS nodes WHERE \(i = 1\)/ end end
ruby-sequel-4.1.1/spec/extensions/schema_caching_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "schema_caching extension" do before do @db = Sequel.mock.extension(:schema_caching) @schemas = {'"table"'=>[[:column, {:db_type=>"integer", :default=>"nextval('table_id_seq'::regclass)", :allow_null=>false, :primary_key=>true, :type=>:integer, :ruby_default=>nil}]]} @filename = "spec/files/test_schema_#$$.dump" @db.instance_variable_set(:@schemas, @schemas) end after do File.delete(@filename) if File.exist?(@filename) end it "Database#dump_schema_cache should dump cached schema to the given file" do File.exist?(@filename).should be_false @db.dump_schema_cache(@filename) File.exist?(@filename).should be_true File.size(@filename).should > 0 end it "Database#load_schema_cache should load cached schema from the given file dumped by #dump_schema_cache" do @db.dump_schema_cache(@filename) db = Sequel::Database.new.extension(:schema_caching) db.load_schema_cache(@filename) db.instance_variable_get(:@schemas).should == @schemas end it "Database#dump_schema_cache? should dump cached schema to the given file unless the file exists" do File.open(@filename, 'wb'){|f|} File.size(@filename).should == 0 @db.dump_schema_cache?(@filename) File.size(@filename).should == 0 end it "Database#load_schema_cache?
should load cached schema from the given file if it exists" do db = Sequel::Database.new.extension(:schema_caching) File.exist?(@filename).should be_false db.load_schema_cache?(@filename) db.instance_variable_get(:@schemas).should == {} end end
ruby-sequel-4.1.1/spec/extensions/schema_dumper_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper') describe "Sequel::Schema::Generator dump methods" do before do @d = Sequel::Database.new.extension(:schema_dumper) @g = Sequel::Schema::Generator end it "should allow the same table information to be converted to a string for evaling inside of another instance with the same result" do g = @g.new(@d) do Integer :a varchar :b column :dt, DateTime column :vc, :varchar primary_key :c foreign_key :d, :a foreign_key :e foreign_key [:d, :e], :name=>:cfk constraint :blah, "a=1" check :a=>1 unique [:e] index :a index [:c, :e] index [:b, :c], :type=>:hash index [:d], :unique=>true spatial_index :a full_text_index [:b, :c] end g2 = @g.new(@d) do instance_eval(g.dump_columns, __FILE__, __LINE__) instance_eval(g.dump_constraints, __FILE__, __LINE__) instance_eval(g.dump_indexes, __FILE__, __LINE__) end g.columns.should == g2.columns g.constraints.should == g2.constraints g.indexes.should == g2.indexes end it "should allow dumping indexes as separate add_index and drop_index methods" do g = @g.new(@d) do index :a index [:c, :e], :name=>:blah index [:b, :c], :unique=>true end g.dump_indexes(:add_index=>:t).should == (<<END_CODE).strip add_index :t, [:a] add_index :t, [:c, :e], :name=>:blah add_index :t, [:b, :c], :unique=>true END_CODE g.dump_indexes(:drop_index=>:t).should == (<<END_CODE).strip drop_index :t, [:b, :c], :unique=>true drop_index :t, [:c, :e], :name=>:blah drop_index :t, [:a] END_CODE end it "should raise an error if you try to dump a Generator that uses a constraint with a proc" do proc{@g.new(@d){check{a>1}}.dump_constraints}.should raise_error(Sequel::Error) end end describe "Sequel::Database dump methods" do before do @d = Sequel::Database.new.extension(:schema_dumper) @d.meta_def(:tables){|o| [:t1, :t2]} @d.meta_def(:schema) do |t, *o| case t when :t1, 't__t1', Sequel.identifier(:t__t1) [[:c1, {:db_type=>'integer', :primary_key=>true, :allow_null=>false}], [:c2, {:db_type=>'varchar(20)', :allow_null=>true}]] when :t2 [[:c1, {:db_type=>'integer', :primary_key=>true, :allow_null=>false}], [:c2, {:db_type=>'numeric', :primary_key=>true, :allow_null=>false}]] when :t5 [[:c1, {:db_type=>'blahblah', :allow_null=>true}]] end end end it "should support dumping table schemas as create_table method calls" do @d.dump_table_schema(:t1).should == "create_table(:t1) do\n primary_key :c1\n String
:c2, :size=>20\nend" end it "should support dumping table schemas when given a string" do @d.dump_table_schema('t__t1').should == "create_table(\"t__t1\") do\n primary_key :c1\n String :c2, :size=>20\nend" end it "should support dumping table schemas when given an identifier" do @d.dump_table_schema(Sequel.identifier(:t__t1)).should == "create_table(\"t__t1\") do\n primary_key :c1\n String :c2, :size=>20\nend" end it "should dump non-Integer primary key columns with explicit :type" do @d.meta_def(:schema){|*s| [[:c1, {:db_type=>'bigint', :primary_key=>true, :allow_null=>true}]]} @d.dump_table_schema(:t6).should == "create_table(:t6) do\n primary_key :c1, :type=>Bignum\nend" end it "should handle foreign keys" do @d.meta_def(:schema){|*s| [[:c1, {:db_type=>'integer', :allow_null=>true}]]} @d.meta_def(:supports_foreign_key_parsing?){true} @d.meta_def(:foreign_key_list){|*s| [{:columns=>[:c1], :table=>:t2, :key=>[:c2]}]} @d.dump_table_schema(:t6).should == "create_table(:t6) do\n foreign_key :c1, :t2, :key=>[:c2]\nend" end it "should handle primary keys that are also foreign keys" do @d.meta_def(:schema){|*s| [[:c1, {:db_type=>'integer', :primary_key=>true, :allow_null=>true}]]} @d.meta_def(:supports_foreign_key_parsing?){true} @d.meta_def(:foreign_key_list){|*s| [{:columns=>[:c1], :table=>:t2, :key=>[:c2]}]} s = @d.dump_table_schema(:t6) s.should =~ /create_table\(:t6\) do\n primary_key :c1, / s.should =~ /:table=>:t2/ s.should =~ /:key=>\[:c2\]/ end it "should handle foreign key options" do @d.meta_def(:schema){|*s| [[:c1, {:db_type=>'integer', :allow_null=>true}]]} @d.meta_def(:supports_foreign_key_parsing?){true} @d.meta_def(:foreign_key_list){|*s| [{:columns=>[:c1], :table=>:t2, :key=>[:c2], :on_delete=>:restrict, :on_update=>:set_null, :deferrable=>true}]} s = @d.dump_table_schema(:t6) s.should =~ /create_table\(:t6\) do\n foreign_key :c1, :t2, / s.should =~ /:key=>\[:c2\]/ s.should =~ /:on_delete=>:restrict/ s.should =~ /:on_update=>:set_null/ s.should =~ /:deferrable=>true/ end it "should handle foreign key options in the primary key" do @d.meta_def(:schema){|*s| [[:c1, {:db_type=>'integer', :primary_key=>true, :allow_null=>true}]]} @d.meta_def(:supports_foreign_key_parsing?){true} @d.meta_def(:foreign_key_list){|*s| [{:columns=>[:c1], :table=>:t2, :key=>[:c2], :on_delete=>:restrict, :on_update=>:set_null, :deferrable=>true}]} s = @d.dump_table_schema(:t6) s.should =~ /create_table\(:t6\) do\n primary_key :c1, / s.should =~ /:table=>:t2/ s.should =~ /:key=>\[:c2\]/ s.should =~ /:on_delete=>:restrict/ s.should =~ /:on_update=>:set_null/ s.should =~ /:deferrable=>true/ end it "should omit foreign key options that are the same as defaults" do @d.meta_def(:schema){|*s| [[:c1, {:db_type=>'integer', :allow_null=>true}]]} @d.meta_def(:supports_foreign_key_parsing?){true} @d.meta_def(:foreign_key_list){|*s| [{:columns=>[:c1], :table=>:t2, :key=>[:c2], :on_delete=>:no_action, :on_update=>:no_action, :deferrable=>false}]} s = @d.dump_table_schema(:t6) s.should =~ /create_table\(:t6\) do\n foreign_key :c1, :t2, / s.should =~ /:key=>\[:c2\]/ s.should_not =~ /:on_delete/ s.should_not =~ /:on_update/ s.should_not =~ /:deferrable/ end it "should omit foreign key options that are the same as defaults in the primary key" do @d.meta_def(:schema){|*s| [[:c1, {:db_type=>'integer', :primary_key=>true, :allow_null=>true}]]} @d.meta_def(:supports_foreign_key_parsing?){true} @d.meta_def(:foreign_key_list){|*s| [{:columns=>[:c1], :table=>:t2, :key=>[:c2], :on_delete=>:no_action, :on_update=>:no_action, 
:deferrable=>false}]} s = @d.dump_table_schema(:t6) s.should =~ /create_table\(:t6\) do\n primary_key :c1, / s.should =~ /:table=>:t2/ s.should =~ /:key=>\[:c2\]/ s.should_not =~ /:on_delete/ s.should_not =~ /:on_update/ s.should_not =~ /:deferrable/ end it "should dump primary key columns with explicit type equal to the database type when :same_db option is passed" do @d.meta_def(:schema){|*s| [[:c1, {:db_type=>'somedbspecifictype', :primary_key=>true, :allow_null=>false}]]} @d.dump_table_schema(:t7, :same_db => true).should == "create_table(:t7) do\n column :c1, \"somedbspecifictype\", :null=>false\n \n primary_key [:c1]\nend" end it "should use a composite primary_key call if there is a composite primary key" do @d.dump_table_schema(:t2).should == "create_table(:t2) do\n Integer :c1, :null=>false\n BigDecimal :c2, :null=>false\n \n primary_key [:c1, :c2]\nend" end it "should use a composite foreign_key call if there is a composite foreign key" do @d.meta_def(:schema){|*s| [[:c1, {:db_type=>'integer'}], [:c2, {:db_type=>'integer'}]]} @d.meta_def(:supports_foreign_key_parsing?){true} @d.meta_def(:foreign_key_list){|*s| [{:columns=>[:c1, :c2], :table=>:t2, :key=>[:c3, :c4]}]} @d.dump_table_schema(:t1).should == "create_table(:t1) do\n Integer :c1\n Integer :c2\n \n foreign_key [:c1, :c2], :t2, :key=>[:c3, :c4]\nend" end it "should include index information if available" do @d.meta_def(:supports_index_parsing?){true} @d.meta_def(:indexes) do |t| {:i1=>{:columns=>[:c1], :unique=>false}, :t1_c2_c1_index=>{:columns=>[:c2, :c1], :unique=>true}} end @d.dump_table_schema(:t1).should == "create_table(:t1, :ignore_index_errors=>true) do\n primary_key :c1\n String :c2, :size=>20\n \n index [:c1], :name=>:i1\n index [:c2, :c1], :unique=>true\nend" end it "should support dumping the whole database as a migration" do @d.dump_schema_migration.should == <<-END_MIG Sequel.migration do change do create_table(:t1) do primary_key :c1 String :c2, :size=>20 end create_table(:t2) do Integer :c1, :null=>false BigDecimal :c2, :null=>false primary_key [:c1, :c2] end end end END_MIG end it "should sort table names when dumping a migration" do @d.meta_def(:tables){|o| [:t2, :t1]} @d.dump_schema_migration.should == <<-END_MIG Sequel.migration do change do create_table(:t1) do primary_key :c1 String :c2, :size=>20 end create_table(:t2) do Integer :c1, :null=>false BigDecimal :c2, :null=>false primary_key [:c1, :c2] end end end END_MIG end it "should sort table names topologically when dumping a migration with foreign keys" do @d.meta_def(:tables){|o| [:t1, :t2]} @d.meta_def(:schema) do |t| t == :t1 ? [[:c2, {:db_type=>'integer'}]] : [[:c1, {:db_type=>'integer', :primary_key=>true}]] end @d.meta_def(:supports_foreign_key_parsing?){true} @d.meta_def(:foreign_key_list) do |t| t == :t1 ? [{:columns=>[:c2], :table=>:t2, :key=>[:c1]}] : [] end @d.dump_schema_migration.should == <<-END_MIG Sequel.migration do change do create_table(:t2) do primary_key :c1 end create_table(:t1) do foreign_key :c2, :t2, :key=>[:c1] end end end END_MIG end it "should handle circular dependencies when dumping a migration with foreign keys" do @d.meta_def(:tables){|o| [:t1, :t2]} @d.meta_def(:schema) do |t| t == :t1 ? [[:c2, {:db_type=>'integer'}]] : [[:c1, {:db_type=>'integer'}]] end @d.meta_def(:supports_foreign_key_parsing?){true} @d.meta_def(:foreign_key_list) do |t| t == :t1 ?
[{:columns=>[:c2], :table=>:t2, :key=>[:c1]}] : [{:columns=>[:c1], :table=>:t1, :key=>[:c2]}] end @d.dump_schema_migration.should == <<-END_MIG Sequel.migration do change do create_table(:t1) do Integer :c2 end create_table(:t2) do foreign_key :c1, :t1, :key=>[:c2] end alter_table(:t1) do add_foreign_key [:c2], :t2, :key=>[:c1] end end end END_MIG end it "should sort topologically even if the database raises an error when trying to parse foreign keys for a non-existent table" do @d.meta_def(:tables){|o| [:t1, :t2]} @d.meta_def(:schema) do |t| t == :t1 ? [[:c2, {:db_type=>'integer'}]] : [[:c1, {:db_type=>'integer', :primary_key=>true}]] end @d.meta_def(:supports_foreign_key_parsing?){true} @d.meta_def(:foreign_key_list) do |t| raise Sequel::DatabaseError unless [:t1, :t2].include?(t) t == :t1 ? [{:columns=>[:c2], :table=>:t2, :key=>[:c1]}] : [] end @d.dump_schema_migration.should == <<-END_MIG Sequel.migration do change do create_table(:t2) do primary_key :c1 end create_table(:t1) do foreign_key :c2, :t2, :key=>[:c1] end end end END_MIG end it "should honor the :same_db option to not convert types" do @d.dump_table_schema(:t1, :same_db=>true).should == "create_table(:t1) do\n primary_key :c1\n column :c2, \"varchar(20)\"\nend" @d.dump_schema_migration(:same_db=>true).should == <<-END_MIG Sequel.migration do change do create_table(:t1) do primary_key :c1 column :c2, "varchar(20)" end create_table(:t2) do column :c1, "integer", :null=>false column :c2, "numeric", :null=>false primary_key [:c1, :c2] end end end END_MIG end it "should honor the :index_names => false option to not include names of indexes" do @d.meta_def(:supports_index_parsing?){true} @d.meta_def(:indexes) do |t| {:i1=>{:columns=>[:c1], :unique=>false}, :t1_c2_c1_index=>{:columns=>[:c2, :c1], :unique=>true}} end @d.dump_table_schema(:t1, :index_names=>false).should == "create_table(:t1, :ignore_index_errors=>true) do\n primary_key :c1\n String :c2, :size=>20\n \n index [:c1]\n index [:c2, :c1], :unique=>true\nend" @d.dump_schema_migration(:index_names=>false).should == <<-END_MIG Sequel.migration do change do create_table(:t1, :ignore_index_errors=>true) do primary_key :c1 String :c2, :size=>20 index [:c1] index [:c2, :c1], :unique=>true end create_table(:t2, :ignore_index_errors=>true) do Integer :c1, :null=>false BigDecimal :c2, :null=>false primary_key [:c1, :c2] index [:c1] index [:c2, :c1], :unique=>true end end end END_MIG end it "should make :index_names => :namespace option a noop if there is a global index namespace" do @d.meta_def(:supports_index_parsing?){true} @d.meta_def(:indexes) do |t| {:i1=>{:columns=>[:c1], :unique=>false}, :t1_c2_c1_index=>{:columns=>[:c2, :c1], :unique=>false}} end @d.dump_table_schema(:t1, :index_names=>:namespace).should == "create_table(:t1, :ignore_index_errors=>true) do\n primary_key :c1\n String :c2, :size=>20\n \n index [:c1], :name=>:i1\n index [:c2, :c1]\nend" @d.dump_schema_migration(:index_names=>:namespace).should == <<-END_MIG Sequel.migration do change do create_table(:t1, :ignore_index_errors=>true) do primary_key :c1 String :c2, :size=>20 index [:c1], :name=>:i1 index [:c2, :c1] end create_table(:t2, :ignore_index_errors=>true) do Integer :c1, :null=>false BigDecimal :c2, :null=>false primary_key [:c1, :c2] index [:c1], :name=>:i1 index [:c2, :c1], :name=>:t1_c2_c1_index end end end END_MIG end it "should honor the :index_names => :namespace option to include names of indexes with prepended table name if there is no global index namespace" do 
@d.meta_def(:global_index_namespace?){false} @d.meta_def(:supports_index_parsing?){true} @d.meta_def(:indexes) do |t| {:i1=>{:columns=>[:c1], :unique=>false}, :t1_c2_c1_index=>{:columns=>[:c2, :c1], :unique=>false}} end @d.dump_table_schema(:t1, :index_names=>:namespace).should == "create_table(:t1, :ignore_index_errors=>true) do\n primary_key :c1\n String :c2, :size=>20\n \n index [:c1], :name=>:t1_i1\n index [:c2, :c1]\nend" @d.dump_schema_migration(:index_names=>:namespace).should == <<-END_MIG Sequel.migration do change do create_table(:t1, :ignore_index_errors=>true) do primary_key :c1 String :c2, :size=>20 index [:c1], :name=>:t1_i1 index [:c2, :c1] end create_table(:t2, :ignore_index_errors=>true) do Integer :c1, :null=>false BigDecimal :c2, :null=>false primary_key [:c1, :c2] index [:c1], :name=>:t2_i1 index [:c2, :c1], :name=>:t2_t1_c2_c1_index end end end END_MIG end it "should honor the :indexes => false option to not include indexes" do @d.meta_def(:supports_index_parsing?){true} @d.meta_def(:indexes) do |t| {:i1=>{:columns=>[:c1], :unique=>false}, :t1_c2_c1_index=>{:columns=>[:c2, :c1], :unique=>true}} end @d.dump_table_schema(:t1, :indexes=>false).should == "create_table(:t1) do\n primary_key :c1\n String :c2, :size=>20\nend" @d.dump_schema_migration(:indexes=>false).should == <<-END_MIG Sequel.migration do change do create_table(:t1) do primary_key :c1 String :c2, :size=>20 end create_table(:t2) do Integer :c1, :null=>false BigDecimal :c2, :null=>false primary_key [:c1, :c2] end end end END_MIG end it "should have :indexes => false option disable foreign keys as well when dumping a whole migration" do @d.meta_def(:foreign_key_list) do |t| t == :t1 ? [{:columns=>[:c2], :table=>:t2, :key=>[:c1]}] : [] end @d.dump_schema_migration(:indexes=>false).should_not =~ /foreign_key/ end it "should have :foreign_keys option override :indexes => false disabling of foreign keys" do @d.meta_def(:supports_foreign_key_parsing?){true} @d.meta_def(:foreign_key_list) do |t| t == :t1 ? 
[{:columns=>[:c2], :table=>:t2, :key=>[:c1]}] : [] end @d.dump_schema_migration(:indexes=>false, :foreign_keys=>true).should =~ /foreign_key/ end it "should support dumping just indexes as a migration" do @d.meta_def(:tables){|o| [:t1]} @d.meta_def(:supports_index_parsing?){true} @d.meta_def(:indexes) do |t| {:i1=>{:columns=>[:c1], :unique=>false}, :t1_c2_c1_index=>{:columns=>[:c2, :c1], :unique=>true}} end @d.dump_indexes_migration.should == <<-END_MIG Sequel.migration do change do add_index :t1, [:c1], :ignore_errors=>true, :name=>:i1 add_index :t1, [:c2, :c1], :ignore_errors=>true, :unique=>true end end END_MIG end it "should honor the :index_names => false option to not include names of indexes when dumping just indexes as a migration" do @d.meta_def(:tables){|o| [:t1]} @d.meta_def(:supports_index_parsing?){true} @d.meta_def(:indexes) do |t| {:i1=>{:columns=>[:c1], :unique=>false}, :t1_c2_c1_index=>{:columns=>[:c2, :c1], :unique=>true}} end @d.dump_indexes_migration(:index_names=>false).should == <<-END_MIG Sequel.migration do change do add_index :t1, [:c1], :ignore_errors=>true add_index :t1, [:c2, :c1], :ignore_errors=>true, :unique=>true end end END_MIG end it "should make the :index_names => :namespace option a noop when dumping just indexes as a migration if there is a global index namespace" do @d.meta_def(:tables){|o| [:t1, :t2]} @d.meta_def(:supports_index_parsing?){true} @d.meta_def(:indexes) do |t| {:i1=>{:columns=>[:c1], :unique=>false}, :t1_c2_c1_index=>{:columns=>[:c2, :c1], :unique=>false}} end @d.dump_indexes_migration(:index_names=>:namespace).should == <<-END_MIG Sequel.migration do change do add_index :t1, [:c1], :ignore_errors=>true, :name=>:i1 add_index :t1, [:c2, :c1], :ignore_errors=>true add_index :t2, [:c1], :ignore_errors=>true, :name=>:i1 add_index :t2, [:c2, :c1], :ignore_errors=>true, :name=>:t1_c2_c1_index end end END_MIG end it "should honor the :index_names => :namespace option to include names of indexes with prepended table name when dumping just indexes as a migration if there is no global index namespace" do @d.meta_def(:global_index_namespace?){false} @d.meta_def(:tables){|o| [:t1, :t2]} @d.meta_def(:supports_index_parsing?){true} @d.meta_def(:indexes) do |t| {:i1=>{:columns=>[:c1], :unique=>false}, :t1_c2_c1_index=>{:columns=>[:c2, :c1], :unique=>false}} end @d.dump_indexes_migration(:index_names=>:namespace).should == <<-END_MIG Sequel.migration do change do add_index :t1, [:c1], :ignore_errors=>true, :name=>:t1_i1 add_index :t1, [:c2, :c1], :ignore_errors=>true add_index :t2, [:c1], :ignore_errors=>true, :name=>:t2_i1 add_index :t2, [:c2, :c1], :ignore_errors=>true, :name=>:t2_t1_c2_c1_index end end END_MIG end it "should handle missing index parsing support when dumping index migration" do @d.meta_def(:tables){|o| [:t1]} @d.dump_indexes_migration.should == <<-END_MIG Sequel.migration do change do end end END_MIG end it "should handle missing foreign key parsing support when dumping foreign key migration" do @d.meta_def(:tables){|o| [:t1]} @d.dump_foreign_key_migration.should == <<-END_MIG Sequel.migration do change do end end END_MIG end it "should support dumping just foreign_keys as a migration" do @d.meta_def(:tables){|o| [:t1, :t2, :t3]} @d.meta_def(:schema) do |t| t == :t1 ?
[[:c2, {:db_type=>'integer'}]] : [[:c1, {:db_type=>'integer'}]] end @d.meta_def(:supports_foreign_key_parsing?){true} @d.meta_def(:foreign_key_list) do |t, *a| case t when :t1 [{:columns=>[:c2], :table=>:t2, :key=>[:c1]}] when :t2 [{:columns=>[:c1, :c3], :table=>:t1, :key=>[:c2, :c4]}] else [] end end @d.dump_foreign_key_migration.should == <<-END_MIG Sequel.migration do change do alter_table(:t1) do add_foreign_key [:c2], :t2, :key=>[:c1] end alter_table(:t2) do add_foreign_key [:c1, :c3], :t1, :key=>[:c2, :c4] end end end END_MIG end it "should handle not null values and defaults" do @d.meta_def(:schema){|*s| [[:c1, {:db_type=>'date', :default=>"'now()'", :allow_null=>true}], [:c2, {:db_type=>'datetime', :allow_null=>false}]]} @d.dump_table_schema(:t3).should == "create_table(:t3) do\n Date :c1\n DateTime :c2, :null=>false\nend" end it "should handle converting common defaults" do @d.meta_def(:schema) do |t, *os| s = [[:c1, {:db_type=>'boolean', :default=>"false", :type=>:boolean, :allow_null=>true}], [:c2, {:db_type=>'varchar', :default=>"'blah'", :type=>:string, :allow_null=>true}], [:c3, {:db_type=>'integer', :default=>"-1", :type=>:integer, :allow_null=>true}], [:c4, {:db_type=>'float', :default=>"1.0", :type=>:float, :allow_null=>true}], [:c5, {:db_type=>'decimal', :default=>"100.50", :type=>:decimal, :allow_null=>true}], [:c6, {:db_type=>'blob', :default=>"'blah'", :type=>:blob, :allow_null=>true}], [:c7, {:db_type=>'date', :default=>"'2008-10-29'", :type=>:date, :allow_null=>true}], [:c8, {:db_type=>'datetime', :default=>"'2008-10-29 10:20:30'", :type=>:datetime, :allow_null=>true}], [:c9, {:db_type=>'time', :default=>"'10:20:30'", :type=>:time, :allow_null=>true}], [:c10, {:db_type=>'foo', :default=>"'6 weeks'", :type=>nil, :allow_null=>true}], [:c11, {:db_type=>'date', :default=>"CURRENT_DATE", :type=>:date, :allow_null=>true}], [:c12, {:db_type=>'timestamp', :default=>"now()", :type=>:datetime, :allow_null=>true}]] s.each{|_, c| c[:ruby_default] = column_schema_to_ruby_default(c[:default], c[:type])} s end @d.dump_table_schema(:t4).gsub(/[+-]\d\d\d\d"\)/, '")').gsub(/\.0+/, '.0').should == "create_table(:t4) do\n TrueClass :c1, :default=>false\n String :c2, :default=>\"blah\"\n Integer :c3, :default=>-1\n Float :c4, :default=>1.0\n BigDecimal :c5, :default=>BigDecimal.new(\"0.1005E3\")\n File :c6, :default=>Sequel::SQL::Blob.new(\"blah\")\n Date :c7, :default=>Date.new(2008, 10, 29)\n DateTime :c8, :default=>DateTime.parse(\"2008-10-29T10:20:30.0\")\n Time :c9, :default=>Sequel::SQLTime.parse(\"10:20:30.0\"), :only_time=>true\n String :c10\n Date :c11, :default=>Sequel::CURRENT_DATE\n DateTime :c12, :default=>Sequel::CURRENT_TIMESTAMP\nend" @d.dump_table_schema(:t4, :same_db=>true).gsub(/[+-]\d\d\d\d"\)/, '")').gsub(/\.0+/, '.0').should == "create_table(:t4) do\n column :c1, \"boolean\", :default=>false\n column :c2, \"varchar\", :default=>\"blah\"\n column :c3, \"integer\", :default=>-1\n column :c4, \"float\", :default=>1.0\n column :c5, \"decimal\", :default=>BigDecimal.new(\"0.1005E3\")\n column :c6, \"blob\", :default=>Sequel::SQL::Blob.new(\"blah\")\n column :c7, \"date\", :default=>Date.new(2008, 10, 29)\n column :c8, \"datetime\", :default=>DateTime.parse(\"2008-10-29T10:20:30.0\")\n column :c9, \"time\", :default=>Sequel::SQLTime.parse(\"10:20:30.0\")\n column :c10, \"foo\", :default=>Sequel::LiteralString.new(\"'6 weeks'\")\n column :c11, \"date\", :default=>Sequel::CURRENT_DATE\n column :c12, \"timestamp\", :default=>Sequel::CURRENT_TIMESTAMP\nend" end it "should not 
use a literal string as a fallback if using MySQL with the :same_db option" do @d.meta_def(:database_type){:mysql} @d.meta_def(:supports_index_parsing?){false} @d.meta_def(:supports_foreign_key_parsing?){false} @d.meta_def(:schema) do |t, *os| s = [[:c10, {:db_type=>'foo', :default=>"'6 weeks'", :type=>nil, :allow_null=>true}]] s.each{|_, c| c[:ruby_default] = column_schema_to_ruby_default(c[:default], c[:type])} s end @d.dump_table_schema(:t5, :same_db=>true).should == "create_table(:t5) do\n column :c10, \"foo\"\nend" end it "should convert unknown database types to strings" do @d.dump_table_schema(:t5).should == "create_table(:t5) do\n String :c1\nend" end it "should convert many database types to ruby types" do types = %w"mediumint smallint int integer mediumint(6) smallint(7) int(8) integer(9) tinyint tinyint(2) bigint bigint(20) real float double boolean tinytext mediumtext longtext text clob date datetime timestamp time char character varchar varchar(255) varchar(30) bpchar string money decimal decimal(10,2) numeric numeric(15,3) number bytea tinyblob mediumblob longblob blob varbinary varbinary(10) binary binary(20) year" + ["double precision", "timestamp with time zone", "timestamp without time zone", "time with time zone", "time without time zone", "character varying(20)"] + %w"nvarchar ntext smalldatetime smallmoney binary varbinary nchar" + ["timestamp(6) without time zone", "timestamp(6) with time zone", 'mediumint(10) unsigned', 'int(9) unsigned', 'int(10) unsigned', "int(12) unsigned", 'bigint unsigned', 'tinyint(3) unsigned', 'identity', 'int identity'] + %w"integer(10)" @d.meta_def(:schema) do |t, *o| i = 0 types.map{|x| [:"c#{i+=1}", {:db_type=>x, :allow_null=>true}]} end @d.dump_table_schema(:x).should == (<<END_MIG).chomp create_table(:x) do Integer :c1 Integer :c2 Integer :c3 Integer :c4 Integer :c5 Integer :c6 Integer :c7 Integer :c8 Integer :c9 Integer :c10 Bignum :c11 Bignum :c12 Float :c13 Float :c14 Float :c15 TrueClass :c16 String :c17, :text=>true String :c18, :text=>true String :c19, :text=>true String :c20, :text=>true String :c21, :text=>true Date :c22 DateTime :c23 DateTime :c24 Time :c25, :only_time=>true String :c26, :fixed=>true String :c27, :fixed=>true String :c28 String :c29, :size=>255 String :c30, :size=>30 String :c31 String :c32 BigDecimal :c33, :size=>[19, 2] BigDecimal :c34 BigDecimal :c35, :size=>[10, 2] BigDecimal :c36 BigDecimal :c37, :size=>[15, 3] BigDecimal :c38 File :c39 File :c40 File :c41 File :c42 File :c43 File :c44 File :c45, :size=>10 File :c46 File :c47, :size=>20 Integer :c48 Float :c49 DateTime :c50 DateTime :c51 Time :c52, :only_time=>true Time :c53, :only_time=>true String :c54, :size=>20 String :c55 String :c56, :text=>true DateTime :c57 BigDecimal :c58, :size=>[19, 2] File :c59 File :c60 String :c61, :fixed=>true DateTime :c62, :size=>6 DateTime :c63, :size=>6 Integer :c64 Integer :c65 Bignum :c66 Bignum :c67 Bignum :c68 Integer :c69 Integer :c70 Integer :c71 Integer :c72 check Sequel::SQL::BooleanExpression.new(:>=, Sequel::SQL::Identifier.new(:c64), 0) check Sequel::SQL::BooleanExpression.new(:>=, Sequel::SQL::Identifier.new(:c65), 0) check Sequel::SQL::BooleanExpression.new(:>=, Sequel::SQL::Identifier.new(:c66), 0) check Sequel::SQL::BooleanExpression.new(:>=, Sequel::SQL::Identifier.new(:c67), 0) check Sequel::SQL::BooleanExpression.new(:>=, Sequel::SQL::Identifier.new(:c68), 0) check Sequel::SQL::BooleanExpression.new(:>=, Sequel::SQL::Identifier.new(:c69), 0) end END_MIG end it "should convert mysql types to ruby types" 
do types = ['double(15,2)', 'double(7,1) unsigned'] @d.meta_def(:schema) do |t, *o| i = 0 types.map{|x| [:"c#{i+=1}", {:db_type=>x, :allow_null=>true}]} end @d.dump_table_schema(:x).should == (<<END_MIG).chomp create_table(:x) do Float :c1 Float :c2 check Sequel::SQL::BooleanExpression.new(:>=, Sequel::SQL::Identifier.new(:c2), 0) end END_MIG end it "should use separate primary_key call with non autoincrementable types" do @d.meta_def(:schema){|*s| [[:c1, {:db_type=>'varchar(8)', :primary_key=>true}]]} @d.dump_table_schema(:t3).should == "create_table(:t3) do\n String :c1, :size=>8\n \n primary_key [:c1]\nend" @d.dump_table_schema(:t3, :same_db=>true).should == "create_table(:t3) do\n column :c1, \"varchar(8)\"\n \n primary_key [:c1]\nend" end it "should use explicit type for non integer foreign_key types" do @d.meta_def(:schema){|*s| [[:c1, {:db_type=>'date', :primary_key=>true}]]} @d.meta_def(:supports_foreign_key_parsing?){true} @d.meta_def(:foreign_key_list){|t, *a| [{:columns=>[:c1], :table=>:t3, :key=>[:c1]}] if t == :t4} ["create_table(:t4) do\n foreign_key :c1, :t3, :type=>Date, :key=>[:c1]\n \n primary_key [:c1]\nend", "create_table(:t4) do\n foreign_key :c1, :t3, :key=>[:c1], :type=>Date\n \n primary_key [:c1]\nend"].should include(@d.dump_table_schema(:t4)) ["create_table(:t4) do\n foreign_key :c1, :t3, :type=>\"date\", :key=>[:c1]\n \n primary_key [:c1]\nend", "create_table(:t4) do\n foreign_key :c1, :t3, :key=>[:c1], :type=>\"date\"\n \n primary_key [:c1]\nend"].should include(@d.dump_table_schema(:t4, :same_db=>true)) end it "should correctly handle autoincrementing primary keys that are also foreign keys" do @d.meta_def(:schema){|*s| [[:c1, {:db_type=>'integer', :primary_key=>true}]]} @d.meta_def(:supports_foreign_key_parsing?){true} @d.meta_def(:foreign_key_list){|t, *a| [{:columns=>[:c1], :table=>:t3, :key=>[:c1]}] if t == :t4} ["create_table(:t4) do\n primary_key :c1, :table=>:t3, :key=>[:c1]\nend", "create_table(:t4) do\n primary_key :c1, :key=>[:c1], :table=>:t3\nend"].should include(@d.dump_table_schema(:t4)) end end
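# A minimal usage sketch of the schema_dumper extension exercised by the specs
# above, run against a real Database object rather than the meta_def mocks the
# specs use. The SQLite connection and the table layout are illustrative
# assumptions (the sqlite3 gem must be installed for Sequel.sqlite to work).
require 'sequel'

db = Sequel.sqlite # in-memory SQLite database
db.extension :schema_dumper
db.create_table(:artists) do
  primary_key :id
  String :name, :null=>false
end

# Returns a "create_table(:artists) do ... end" string for one table:
puts db.dump_table_schema(:artists)
# Returns a whole-database Sequel.migration string; :same_db keeps the
# database-specific column types instead of converting to generic ruby types:
puts db.dump_schema_migration(:same_db=>true)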
ruby-sequel-4.1.1/spec/extensions/schema_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe Sequel::Model, "set_schema" do before do @model = Class.new(Sequel::Model(:items)) @model.plugin :schema end specify "sets schema with implicit table name" do @model.set_schema do primary_key :ssn, :type=>:string end @model.primary_key.should == :ssn @model.table_name.should == :items end specify "sets schema with explicit table name" do @model.set_schema :foo do primary_key :id end @model.primary_key.should == :id @model.table_name.should == :foo end end describe Sequel::Model, "create_table and schema" do before do @model = Class.new(Sequel::Model) @model.class_eval do plugin :schema set_schema(:items) do text :name float :price, :null => false end end DB.reset end it "should get the create table SQL list from the db and execute it line by line" do @model.create_table DB.sqls.should == ['CREATE TABLE items (name text, price float NOT NULL)'] end it "should allow setting schema and creating the table in one call" do @model.create_table { text :name } DB.sqls.should == ['CREATE TABLE items (name text)'] end it "should reload the schema from the database" do schem = {:name=>{:type=>:string}, :price=>{:type=>:float}} @model.db.should_receive(:schema).with(@model.dataset, :reload=>true).and_return(schem.to_a.sort_by{|x| x[0].to_s}) @model.create_table @model.db_schema.should == schem @model.instance_variable_get(:@columns).should == [:name, :price] end it "should return the schema generator via schema" do @model.schema.should be_a_kind_of(Sequel::Schema::Generator) end it "should use the superclass's schema if it exists" do @submodel = Class.new(@model) @submodel.schema.should be_a_kind_of(Sequel::Schema::Generator) end it "should return nil if no schema is present" do @model = Class.new(Sequel::Model) @model.plugin :schema @model.schema.should == nil @submodel = Class.new(@model) @submodel.schema.should == nil end end describe Sequel::Model, "schema methods" do before do @model = Class.new(Sequel::Model(:items)) @model.plugin :schema DB.reset end it "table_exists? should get the table name and ask the model's db if the table exists" do @model.db.should_receive(:table_exists?).and_return(false) @model.table_exists?.should == false end it "drop_table should drop the related table" do @model.drop_table DB.sqls.should == ['DROP TABLE items'] end it "drop_table? should drop the table if it exists" do @model.drop_table? DB.sqls.should == ["SELECT NULL AS nil FROM items LIMIT 1", 'DROP TABLE items'] end it "create_table! should drop table if it exists and then create the table" do @model.create_table! DB.sqls.should == ["SELECT NULL AS nil FROM items LIMIT 1", 'DROP TABLE items', 'CREATE TABLE items ()'] end it "create_table? should not create the table if it already exists" do @model.should_receive(:table_exists?).and_return(true) @model.create_table? DB.sqls.should == [] end it "create_table? should create the table if it doesn't exist" do @model.should_receive(:table_exists?).and_return(false) @model.create_table?
DB.sqls.should == ['CREATE TABLE items ()'] end end
ruby-sequel-4.1.1/spec/extensions/scissors_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "scissors plugin" do before do @m = Class.new(Sequel::Model(:items)) @m.use_transactions = true @m.plugin :scissors @m.db.sqls end it "Model.delete should delete from the dataset" do @m.delete @m.db.sqls.should == ['DELETE FROM items'] end it "Model.update should update the dataset" do @m.update(:a=>1) @m.db.sqls.should == ['UPDATE items SET a = 1'] end it "Model.destroy should destroy each instance in the dataset" do @m.dataset._fetch = {:id=>1} @m.destroy @m.db.sqls.should == ['BEGIN', 'SELECT * FROM items', 'DELETE FROM items WHERE id = 1', 'COMMIT'] end end
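# Sketch of the scissors plugin API the spec above exercises, assuming an
# existing items table and model. The added class methods operate on the whole
# dataset, so every row is affected.
class Item < Sequel::Model(:items)
  plugin :scissors
end

Item.update(:a=>1) # UPDATE items SET a = 1
Item.delete        # DELETE FROM items
# destroy instantiates and destroys each row, wrapped in a transaction when
# use_transactions is set (as in the spec above):
Item.destroy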
ruby-sequel-4.1.1/spec/extensions/select_remove_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "Dataset#select_remove" do before do @d = Sequel.mock.from(:test).extension(:select_remove) @d.columns :a, :b, :c end specify "should remove columns from the selected columns" do @d.sql.should == 'SELECT * FROM test' @d.select_remove(:a).sql.should == 'SELECT b, c FROM test' @d.select_remove(:b).sql.should == 'SELECT a, c FROM test' @d.select_remove(:c).sql.should == 'SELECT a, b FROM test' end specify "should work correctly if there are already columns selected" do d = @d.select(:a, :b, :c) d.columns :a, :b, :c d.select_remove(:c).sql.should == 'SELECT a, b FROM test' end specify "should have no effect if the columns given are not currently selected" do @d.select_remove(:d).sql.should == 'SELECT a, b, c FROM test' end specify "should handle expressions where Sequel can't determine the alias by itself" do d = @d.select(:a, Sequel.function(:b), Sequel.as(:c, :b)) d.columns :a, :"b()", :b d.select_remove(:"b()").sql.should == 'SELECT a, c AS b FROM test' end specify "should remove expressions if given exact expressions" do d = @d.select(:a, Sequel.function(:b), Sequel.as(:c, :b)) d.columns :a, :"b()", :b d.select_remove(Sequel.function(:b)).sql.should == 'SELECT a, c AS b FROM test' d.select_remove(Sequel.as(:c, :b)).sql.should == 'SELECT a, b() FROM test' end end
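# Sketch of the select_remove dataset extension covered above: it subtracts
# columns (or exact expressions) from the current selection. The DB constant
# and a test table with columns a, b and c are assumptions for illustration.
ds = DB[:test].extension(:select_remove)
ds.select_remove(:c).sql # => "SELECT a, b FROM test"
# Exact expressions can be removed too, matching the last spec above:
ds.select(:a, Sequel.as(:c, :b)).select_remove(Sequel.as(:c, :b)).sql # => "SELECT a FROM test"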
ruby-sequel-4.1.1/spec/extensions/sequel_3_dataset_methods_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "Dataset#to_csv" do before do @ds = Sequel.mock(:fetch=>[{:a=>1, :b=>2, :c=>3}, {:a=>4, :b=>5, :c=>6}, {:a=>7, :b=>8, :c=>9}])[:items].columns(:a, :b, :c).extension(:sequel_3_dataset_methods) end specify "should format a CSV representation of the records" do @ds.to_csv.should == "a, b, c\r\n1, 2, 3\r\n4, 5, 6\r\n7, 8, 9\r\n" end specify "should exclude column titles if so specified" do @ds.to_csv(false).should == "1, 2, 3\r\n4, 5, 6\r\n7, 8, 9\r\n" end end describe "Dataset#[]=" do specify "should perform an update on the specified filter" do db = Sequel.mock ds = db[:items].extension(:sequel_3_dataset_methods) ds[:a => 1] = {:x => 3} db.sqls.should == ['UPDATE items SET x = 3 WHERE (a = 1)'] end end describe "Dataset#insert_multiple" do before do @db = Sequel.mock(:autoid=>2) @ds = @db[:items].extension(:sequel_3_dataset_methods) end specify "should insert all items in the supplied array" do @ds.insert_multiple(['aa', 5, 3, {:a => 2}]) @db.sqls.should == ["INSERT INTO items VALUES ('aa')", "INSERT INTO items VALUES (5)", "INSERT INTO items VALUES (3)", "INSERT INTO items (a) VALUES (2)"] end specify "should pass array items through the supplied block if given" do @ds.insert_multiple(["inevitable", "hello", "the ticking clock"]){|i| i.gsub('l', 'r')} @db.sqls.should == ["INSERT INTO items VALUES ('inevitabre')", "INSERT INTO items VALUES ('herro')", "INSERT INTO items VALUES ('the ticking crock')"] end specify "should return array of inserted ids" do @ds.insert_multiple(['aa', 5, 3, {:a => 2}]).should == [2, 3, 4, 5] end specify "should work exactly as mentioned in the example" do @ds.insert_multiple([{:x=>1}, {:x=>2}]){|row| row[:y] = row[:x] * 2 ; row } sqls = @db.sqls ["INSERT INTO items (x, y) VALUES (1, 2)", "INSERT INTO items (y, x) VALUES (2, 1)"].should include(sqls[0]) ["INSERT INTO items (x, y) VALUES (2, 4)", "INSERT INTO items (y, x) VALUES (4, 2)"].should include(sqls[1]) end end describe "Dataset#db=" do specify "should change the dataset's database" do db = Sequel.mock ds = db[:items].extension(:sequel_3_dataset_methods) db2 = Sequel.mock ds.db = db2 ds.db.should == db2 ds.db.should_not == db end end describe "Dataset#opts=" do specify "should change the dataset's opts" do db = Sequel.mock ds = db[:items].extension(:sequel_3_dataset_methods) db2 = Sequel.mock ds.sql.should == 'SELECT * FROM items' ds.opts = {} ds.sql.should == 'SELECT *' ds.opts.should == {} end end describe "Dataset#set" do specify "should act as an alias to #update" do db = Sequel.mock ds = db[:items].extension(:sequel_3_dataset_methods) ds.set({:x => 3}) db.sqls.should == ['UPDATE items SET x = 3'] end end describe "Sequel::Dataset#qualify_to_first_source" do specify "should qualify to the first source" do Sequel.mock.dataset.extension(:sequel_3_dataset_methods).from(:t).filter{a<b}.qualify_to_first_source.sql.should == 'SELECT t.* FROM t WHERE (t.a < t.b)' end end describe "Sequel::Dataset#qualify_to" do specify "should qualify to the given table" do Sequel.mock.dataset.extension(:sequel_3_dataset_methods).from(:t).filter{a<b}.qualify_to(:e).sql.should == 'SELECT e.* FROM t WHERE (e.a < e.b)' end end
ruby-sequel-4.1.1/spec/extensions/serialization_modification_detection_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") require 'yaml' describe "serialization_modification_detection plugin" do before do @c = Class.new(Sequel::Model(:items)) @c.class_eval do columns :id, :h plugin :serialization, :yaml, :h plugin :serialization_modification_detection end @o1 = @c.new(:h=>{}) @o2 = @c.load(:id=>1, :h=>"--- {}\n\n") @o3 = @c.new @o4 = @c.load(:id=>1, :h=>nil) DB.reset end it "should not detect columns that haven't been changed" do @o1.changed_columns.should == [] @o1.h.should == {} @o1.h[1] = 2 @o1.h.clear @o1.changed_columns.should == [] @o2.changed_columns.should == [] @o2.h.should == {} @o2.h[1] = 2 @o2.h.clear @o2.changed_columns.should == [] end it "should detect columns that have been changed" do @o1.changed_columns.should == [] @o1.h.should == {} @o1.h[1] = 2 @o1.changed_columns.should == [:h] @o2.changed_columns.should == [] @o2.h.should == {} @o2.h[1] = 2 @o2.changed_columns.should == [:h] @o3.changed_columns.should == [] @o3.h.should == nil @o3.h = {} @o3.changed_columns.should == [:h] @o4.changed_columns.should == [] @o4.h.should == nil @o4.h = {} @o4.changed_columns.should == [:h] end it "should report correct changed_columns after saving" do @o1.h[1] = 2 @o1.save @o1.changed_columns.should == [] @o2.h[1] = 2 @o2.save_changes @o2.changed_columns.should == [] @o3.h = {1=>2} @o3.save @o3.changed_columns.should == [] @o4.h = {1=>2} @o4.save @o4.changed_columns.should == [] end it "should work with frozen objects" do @o1.changed_columns.should == [] @o1.h.should == {} @o1.freeze @o1.h[1] = 2 @o1.changed_columns.should == [:h] end end
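# Sketch of the behavior specced above: without the plugin, mutating a
# deserialized value in place does not mark the column as changed, so
# save_changes would skip it. An items table with an id primary key and an
# h text column is an assumption for illustration.
class Doc < Sequel::Model(:items)
  plugin :serialization, :yaml, :h
  plugin :serialization_modification_detection
end

doc = Doc.create(:h=>{})
doc.h[1] = 2         # in-place mutation of the deserialized hash
doc.changed_columns  # => [:h], detected by comparing reserialized values
doc.save_changes     # persists the mutated hash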
ruby-sequel-4.1.1/spec/extensions/serialization_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") require 'yaml' require 'json' describe "Serialization plugin" do before do @c = Class.new(Sequel::Model(:items)) do no_primary_key columns :id, :abc, :def, :ghi end DB.reset end it "should allow setting additional serializable attributes via plugin :serialization call" do @c.plugin :serialization, :yaml, :abc @c.create(:abc => 1, :def=> 2) DB.sqls.last.should =~ /INSERT INTO items \((abc, def|def, abc)\) VALUES \(('--- 1\n(\.\.\.\n)?', 2|2, '--- 1\n(\.\.\.\n)?')\)/ @c.plugin :serialization, :marshal, :def @c.create(:abc => 1, :def=> 1) DB.sqls.last.should =~ /INSERT INTO items \((abc, def|def, abc)\) VALUES \(('--- 1\n(\.\.\.\n)?', 'BAhpBg==\n'|'BAhpBg==\n', '--- 1\n(\.\.\.\n)?')\)/ @c.plugin :serialization, :json, :ghi @c.create(:ghi => [123]) DB.sqls.last.should =~ /INSERT INTO items \((ghi)\) VALUES \('\[123\]'\)/ end it "should allow serializing attributes to yaml" do @c.plugin :serialization, :yaml, :abc @c.create(:abc => 1) @c.create(:abc => "hello") DB.sqls.map{|s| s.sub("...\n", '')}.should == ["INSERT INTO items (abc) VALUES ('--- 1\n')", "INSERT INTO items (abc) VALUES ('--- hello\n')"] end it "serialized_columns should be the columns serialized" do @c.plugin :serialization, :yaml, :abc @c.serialized_columns.should == [:abc] end it "should allow serializing attributes to marshal" do @c.plugin :serialization, :marshal, :abc @c.create(:abc => 1) @c.create(:abc => "hello") x = [Marshal.dump("hello")].pack('m') DB.sqls.should == [ \ "INSERT INTO items (abc) VALUES ('BAhpBg==\n')", \ "INSERT INTO items (abc) VALUES ('#{x}')", \ ] end it "should allow serializing attributes to json" do @c.plugin :serialization, :json, :ghi @c.create(:ghi => [1]) @c.create(:ghi => ["hello"]) x = ["hello"].to_json DB.sqls.should == [ \ "INSERT INTO items (ghi) VALUES ('[1]')", \ "INSERT INTO items (ghi) VALUES ('#{x}')", \ ] end it "should allow serializing attributes using arbitrary callable" do @c.plugin :serialization, [proc{|s| s.reverse}, proc{}], :abc @c.create(:abc => "hello") DB.sqls.should == ["INSERT INTO items (abc) VALUES ('olleh')"] end it "should raise an error if specifying serializer as an unregistered symbol" do proc{@c.plugin :serialization, :foo, :abc}.should raise_error(Sequel::Error) end it "should translate values to and from yaml serialization format using accessor methods" do @c.set_primary_key :id @c.plugin :serialization, :yaml, :abc, :def @c.dataset._fetch = {:id => 1, :abc => "--- 1\n", :def => "--- hello\n"} o = @c.first o.id.should == 1 o.abc.should == 1 o.abc.should == 1 o.def.should == "hello" o.def.should == "hello" o.update(:abc => 23) @c.create(:abc => [1, 2, 3]) DB.sqls.should == ["SELECT * FROM items LIMIT 1", "UPDATE items SET abc = '#{23.to_yaml}' WHERE (id = 1)", "INSERT INTO items (abc) VALUES ('#{[1, 2, 3].to_yaml}')", "SELECT * FROM items WHERE (id = 10) LIMIT 1"] end it "should translate values to and from marshal serialization format using accessor methods" do @c.set_primary_key :id @c.plugin :serialization, :marshal, :abc, :def @c.dataset._fetch = [:id => 1, :abc =>[Marshal.dump(1)].pack('m'), :def =>[Marshal.dump('hello')].pack('m')] o = @c.first o.id.should == 1 o.abc.should == 1 o.abc.should == 1 o.def.should == "hello" o.def.should == "hello" o.update(:abc => 23) @c.create(:abc => [1, 2, 3]) DB.sqls.should == ["SELECT * FROM items LIMIT 1", "UPDATE items SET abc = '#{[Marshal.dump(23)].pack('m')}' WHERE (id = 1)", "INSERT INTO items (abc) VALUES ('#{[Marshal.dump([1, 2, 3])].pack('m')}')", "SELECT * FROM items WHERE (id = 10) LIMIT 1"] end it "should handle old non-base-64 encoded marshal serialization format" do @c.set_primary_key :id @c.plugin :serialization, :marshal, :abc, :def @c.dataset._fetch = [:id => 1, :abc =>Marshal.dump(1), :def =>Marshal.dump('hello')] o = @c.first o.abc.should == 1 o.def.should == "hello" end it "should raise
exception for bad marshal data" do @c.set_primary_key :id @c.plugin :serialization, :marshal, :abc, :def @c.dataset._fetch = [:id => 1, :abc =>'foo', :def =>'bar'] o = @c.first proc{o.abc}.should raise_error proc{o.def}.should raise_error end it "should translate values to and from json serialization format using accessor methods" do @c.set_primary_key :id @c.plugin :serialization, :json, :abc, :def @c.dataset._fetch = {:id => 1, :abc => [1].to_json, :def => ["hello"].to_json} o = @c.first o.id.should == 1 o.abc.should == [1] o.abc.should == [1] o.def.should == ["hello"] o.def.should == ["hello"] o.update(:abc => [23]) @c.create(:abc => [1,2,3]) DB.sqls.should == ["SELECT * FROM items LIMIT 1", "UPDATE items SET abc = '#{[23].to_json}' WHERE (id = 1)", "INSERT INTO items (abc) VALUES ('#{[1,2,3].to_json}')", "SELECT * FROM items WHERE (id = 10) LIMIT 1"] end it "should translate values to and from arbitrary callables using accessor methods" do @c.set_primary_key :id @c.plugin :serialization, [proc{|s| s.reverse}, proc{|s| s.reverse}], :abc, :def @c.dataset._fetch = {:id => 1, :abc => 'cba', :def => 'olleh'} o = @c.first o.id.should == 1 o.abc.should == 'abc' o.abc.should == 'abc' o.def.should == "hello" o.def.should == "hello" o.update(:abc => 'foo') @c.create(:abc => 'bar') DB.sqls.should == ["SELECT * FROM items LIMIT 1", "UPDATE items SET abc = 'oof' WHERE (id = 1)", "INSERT INTO items (abc) VALUES ('rab')", "SELECT * FROM items WHERE (id = 10) LIMIT 1"] end it "should handle registration of custom serializer/deserializer pairs" do @c.set_primary_key :id Sequel::Plugins::Serialization.register_format(:reverse, proc{|s| s.reverse}, proc{|s| s.reverse}) @c.plugin :serialization, :reverse, :abc, :def @c.dataset._fetch = {:id => 1, :abc => 'cba', :def => 'olleh'} o = @c.first o.id.should == 1 o.abc.should == 'abc' o.abc.should == 'abc' o.def.should == "hello" o.def.should == "hello" o.update(:abc => 'foo') @c.create(:abc => 'bar') DB.sqls.should == ["SELECT * FROM items LIMIT 1", "UPDATE items SET abc = 'oof' WHERE (id = 1)", "INSERT INTO items (abc) VALUES ('rab')", "SELECT * FROM items WHERE (id = 10) LIMIT 1"] end it "should copy serialization formats and columns to subclasses" do @c.set_primary_key :id @c.plugin :serialization, :yaml, :abc, :def @c.dataset._fetch = {:id => 1, :abc => "--- 1\n", :def => "--- hello\n"} o = Class.new(@c).first o.id.should == 1 o.abc.should == 1 o.abc.should == 1 o.def.should == "hello" o.def.should == "hello" o.update(:abc => 23) Class.new(@c).create(:abc => [1, 2, 3]) DB.sqls.should == ["SELECT * FROM items LIMIT 1", "UPDATE items SET abc = '#{23.to_yaml}' WHERE (id = 1)", "INSERT INTO items (abc) VALUES ('#{[1, 2, 3].to_yaml}')", "SELECT * FROM items WHERE (id = 10) LIMIT 1"] end it "should clear the deserialized columns when refreshing" do @c.set_primary_key :id @c.plugin :serialization, :yaml, :abc, :def o = @c.load(:id => 1, :abc => "--- 1\n", :def => "--- hello\n") o.abc = 23 o.deserialized_values.length.should == 1 o.abc.should == 23 o.refresh o.deserialized_values.length.should == 0 end it "should not clear the deserialized columns when refreshing after saving a new object" do @c.set_primary_key :id @c.plugin :serialization, :yaml, :abc, :def o = @c.new(:abc => "--- 1\n", :def => "--- hello\n") o.deserialized_values.length.should == 2 o.save o.deserialized_values.length.should == 2 end it "should not clear the deserialized columns when refreshing after saving a new object with insert_select" do @c.set_primary_key :id @c.plugin :serialization, 
:yaml, :abc, :def def (@c.instance_dataset).supports_insert_select?() true end def (@c.instance_dataset).insert_select(*) {:id=>1} end o = @c.new(:abc => "--- 1\n", :def => "--- hello\n") o.deserialized_values.length.should == 2 o.save o.deserialized_values.length.should == 2 end it "should raise an error if calling internal serialization methods with bad columns" do @c.set_primary_key :id @c.plugin :serialization o = @c.load(:id => 1, :abc => "--- 1\n", :def => "--- hello\n") lambda{o.send(:serialize_value, :abc, 1)}.should raise_error(Sequel::Error) lambda{o.send(:deserialize_value, :abc, "--- hello\n")}.should raise_error(Sequel::Error) end it "should add the accessors to a module included in the class, so they can be easily overridden" do @c.class_eval do def abc "#{super}-blah" end end @c.plugin :serialization, :yaml, :abc o = @c.load(:abc => "--- 1\n") o.abc.should == "1-blah" end it "should call super to get the deserialized value from a previous accessor" do m = Module.new do def abc "--- #{@values[:abc]*3}\n" end end @c.send(:include, m) @c.plugin :serialization, :yaml, :abc o = @c.load(:abc => 3) o.abc.should == 9 end it "should work correctly with frozen instances" do @c.set_primary_key :id @c.plugin :serialization, :yaml, :abc, :def @c.dataset._fetch = {:id => 1, :abc => "--- 1\n", :def => "--- hello\n"} o = @c.first o.freeze o.abc.should == 1 o.abc.should == 1 o.def.should == "hello" o.def.should == "hello" proc{o.abc = 2}.should raise_error proc{o.def = 'h'}.should raise_error end end
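# Sketch of the serialization plugin API from the specs above, including
# registering a custom format as a serializer/deserializer pair. An items
# table with abc and ghi columns is an assumption for illustration.
Sequel::Plugins::Serialization.register_format(:reverse, proc{|s| s.reverse}, proc{|s| s.reverse})

class Entry < Sequel::Model(:items)
  plugin :serialization, :json, :ghi     # repeated plugin calls add columns
  plugin :serialization, :reverse, :abc
end

entry = Entry.create(:ghi=>[1, 2, 3], :abc=>'hello') # stores '[1,2,3]' and 'olleh'
entry.ghi # => [1, 2, 3], deserialized lazily and cached in deserialized_values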
ruby-sequel-4.1.1/spec/extensions/server_block_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") shared_examples_for "Database#with_server" do specify "should set the default server to use in the block" do @db.with_server(:a){@db[:t].all} @db.sqls.should == ["SELECT * FROM t -- a"] @db.with_server(:b){@db[:t].all} @db.sqls.should == ["SELECT * FROM t -- b"] end specify "should have no effect after the block" do @db.with_server(:a){@db[:t].all} @db.sqls.should == ["SELECT * FROM t -- a"] @db[:t].all @db.sqls.should == ["SELECT * FROM t"] end specify "should not override specific server inside the block" do @db.with_server(:a){@db[:t].server(:b).all} @db.sqls.should == ["SELECT * FROM t -- b"] end specify "should work correctly when blocks are nested" do @db[:t].all @db.with_server(:a) do @db[:t].all @db.with_server(:b){@db[:t].all} @db[:t].all end @db[:t].all @db.sqls.should == ["SELECT * FROM t", "SELECT * FROM t -- a", "SELECT * FROM t -- b", "SELECT * FROM t -- a", "SELECT * FROM t"] end specify "should work correctly for inserts/updates/deletes" do @db.with_server(:a) do @db[:t].insert @db[:t].update(:a=>1) @db[:t].delete end @db.sqls.should == ["INSERT INTO t DEFAULT VALUES -- a", "UPDATE t SET a = 1 -- a", "DELETE FROM t -- a"] end end describe "Database#with_server single threaded" do before do @db = Sequel.mock(:single_threaded=>true, :servers=>{:a=>{}, :b=>{}}) @db.extension :server_block end it_should_behave_like "Database#with_server" end describe "Database#with_server multi threaded" do before do @db = Sequel.mock(:servers=>{:a=>{}, :b=>{}, :c=>{}, :d=>{}}) @db.extension :server_block end it_should_behave_like "Database#with_server" specify "should respect multithreaded access" do q, q1 = Queue.new, Queue.new t = nil @db[:t].all @db.with_server(:a) do @db[:t].all t = Thread.new do @db[:t].all @db.with_server(:c) do @db[:t].all @db.with_server(:d){@db[:t].all} q.push nil q1.pop @db[:t].all end @db[:t].all end q.pop @db.with_server(:b){@db[:t].all} @db[:t].all end @db[:t].all q1.push nil t.join @db.sqls.should == ["SELECT * FROM t", "SELECT * FROM t -- a", "SELECT * FROM t", "SELECT * FROM t -- c", "SELECT * FROM t -- d", "SELECT * FROM t -- b", "SELECT * FROM t -- a", "SELECT * FROM t", "SELECT * FROM t -- c", "SELECT * FROM t"] end end
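# Sketch of the server_block extension specced above. Assumes a sharded
# Database (DB) with a :read_only server configured via the :servers option.
DB.extension :server_block

DB.with_server(:read_only) do
  DB[:t].all                  # routed to the :read_only shard
  DB[:t].server(:default).all # an explicit server still wins inside the block
end
DB[:t].all                    # back to the default shard after the block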
ruby-sequel-4.1.1/spec/extensions/set_overrides_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "Sequel::Dataset #set_defaults" do before do @ds = Sequel.mock.dataset.from(:items).extension(:set_overrides).set_defaults(:x=>1) end specify "should set the default values for inserts" do @ds.insert_sql.should == "INSERT INTO items (x) VALUES (1)" @ds.insert_sql(:x=>2).should == "INSERT INTO items (x) VALUES (2)" @ds.insert_sql(:y=>2).should =~ /INSERT INTO items \([xy], [xy]\) VALUES \([21], [21]\)/ @ds.set_defaults(:y=>2).insert_sql.should =~ /INSERT INTO items \([xy], [xy]\) VALUES \([21], [21]\)/ @ds.set_defaults(:x=>2).insert_sql.should == "INSERT INTO items (x) VALUES (2)" end specify "should set the default values for updates" do @ds.update_sql.should == "UPDATE items SET x = 1" @ds.update_sql(:x=>2).should == "UPDATE items SET x = 2" @ds.update_sql(:y=>2).should =~ /UPDATE items SET (x = 1|y = 2), (x = 1|y = 2)/ @ds.set_defaults(:y=>2).update_sql.should =~ /UPDATE items SET (x = 1|y = 2), (x = 1|y = 2)/ @ds.set_defaults(:x=>2).update_sql.should == "UPDATE items SET x = 2" end specify "should not affect String update arguments" do @ds.update_sql('y = 2').should == "UPDATE items SET y = 2" end end describe "Sequel::Dataset #set_overrides" do before do @ds = Sequel.mock.dataset.from(:items).extension(:set_overrides).set_overrides(:x=>1) end specify "should override the given values for inserts" do @ds.insert_sql.should == "INSERT INTO items (x) VALUES (1)" @ds.insert_sql(:x=>2).should == "INSERT INTO items (x) VALUES (1)" @ds.insert_sql(:y=>2).should =~ /INSERT INTO items \([xy], [xy]\) VALUES \([21], [21]\)/ @ds.set_overrides(:y=>2).insert_sql.should =~ /INSERT INTO items \([xy], [xy]\) VALUES \([21], [21]\)/ @ds.set_overrides(:x=>2).insert_sql.should == "INSERT INTO items (x) VALUES (1)" end specify "should override the given values for updates" do @ds.update_sql.should == "UPDATE items SET x = 1" @ds.update_sql(:x=>2).should == "UPDATE items SET x = 1" @ds.update_sql(:y=>2).should =~ /UPDATE items SET (x = 1|y = 2), (x = 1|y = 2)/ @ds.set_overrides(:y=>2).update_sql.should =~ /UPDATE items SET (x = 1|y = 2), (x = 1|y = 2)/ @ds.set_overrides(:x=>2).update_sql.should == "UPDATE items SET x = 1" end end
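# Sketch of the set_overrides extension specced above: set_defaults values
# yield to values passed to insert/update, while set_overrides values always
# win. The DB constant and the items table are assumptions for illustration.
ds = DB[:items].extension(:set_overrides)
ds.set_defaults(:x=>1).insert_sql(:x=>2)  # => "INSERT INTO items (x) VALUES (2)"
ds.set_overrides(:x=>1).insert_sql(:x=>2) # => "INSERT INTO items (x) VALUES (1)"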
@db.sqls.should == ["SELECT * FROM albums LIMIT 1 -- s1", "DELETE FROM albums WHERE (id = 1) -- s1"] end specify "should have objects retrieved from a specific shard reload from that shard" do @Album.server(:s1).first.reload @db.sqls.should == ["SELECT * FROM albums LIMIT 1 -- s1", "SELECT * FROM albums WHERE (id = 1) LIMIT 1 -- s1"] end specify "should use current dataset's shard when eager loading if eagerly loaded dataset doesn't have its own shard" do albums = @Album.server(:s1).eager(:artist).all @db.sqls.should == ["SELECT * FROM albums -- s1", "SELECT * FROM artists WHERE (artists.id IN (2)) -- s1"] albums.length.should == 1 albums.first.artist.save @db.sqls.should == ["UPDATE artists SET name = 'YJM' WHERE (id = 2) -- s1"] end specify "should not use current dataset's shard when eager loading if eagerly loaded dataset has its own shard" do @Artist.instance_dataset.opts[:server] = @Artist.dataset.opts[:server] = :s2 albums = @Album.server(:s1).eager(:artist).all @db.sqls.should == ["SELECT * FROM albums -- s1", "SELECT * FROM artists WHERE (artists.id IN (2)) -- s2"] albums.length.should == 1 albums.first.artist.save @db.sqls.should == ["UPDATE artists SET name = 'YJM' WHERE (id = 2) -- s2"] end specify "should use current dataset's shard when eager graphing if eagerly graphed dataset doesn't have its own shard" do ds = @Album.server(:s1).eager_graph(:artist) ds._fetch = {:id=>1, :artist_id=>2, :name=>'RF', :artist_id_0=>2, :artist_name=>'YJM'} albums = ds.all @db.sqls.should == ["SELECT albums.id, albums.artist_id, albums.name, artist.id AS artist_id_0, artist.name AS artist_name FROM albums LEFT OUTER JOIN artists AS artist ON (artist.id = albums.artist_id) -- s1"] albums.length.should == 1 albums.first.artist.save @db.sqls.should == ["UPDATE artists SET name = 'YJM' WHERE (id = 2) -- s1"] end specify "should not use current dataset's shard when eager graphing if eagerly graphed dataset has its own shard" do @Artist.instance_dataset.opts[:server] = @Artist.dataset.opts[:server] = :s2 ds = @Album.server(:s1).eager_graph(:artist) ds._fetch = {:id=>1, :artist_id=>2, :name=>'RF', :artist_id_0=>2, :artist_name=>'YJM'} albums = ds.all @db.sqls.should == ["SELECT albums.id, albums.artist_id, albums.name, artist.id AS artist_id_0, artist.name AS artist_name FROM albums LEFT OUTER JOIN artists AS artist ON (artist.id = albums.artist_id) -- s1"] albums.length.should == 1 albums.first.artist.save @db.sqls.should == ["UPDATE artists SET name = 'YJM' WHERE (id = 2) -- s2"] end specify "should use eagerly graphed dataset shard for eagerly graphed objects even if current dataset does not have a shard" do @Artist.instance_dataset.opts[:server] = @Artist.dataset.opts[:server] = :s2 ds = @Album.eager_graph(:artist) ds._fetch = {:id=>1, :artist_id=>2, :name=>'RF', :artist_id_0=>2, :artist_name=>'YJM'} albums = ds.all @db.sqls.should == ["SELECT albums.id, albums.artist_id, albums.name, artist.id AS artist_id_0, artist.name AS artist_name FROM albums LEFT OUTER JOIN artists AS artist ON (artist.id = albums.artist_id)"] albums.length.should == 1 albums.first.artist.save @db.sqls.should == ["UPDATE artists SET name = 'YJM' WHERE (id = 2) -- s2"] end specify "should have objects retrieved from a specific shard use associated objects from that shard, with modifications to the associated objects using that shard" do album = @Album.server(:s1).first @db.sqls.should == ["SELECT * FROM albums LIMIT 1 -- s1"] album.artist.update(:name=>'AS') @db.sqls.should == ["SELECT * FROM artists WHERE (artists.id = 2) 
LIMIT 1 -- s1", "UPDATE artists SET name = 'AS' WHERE (id = 2) -- s1"] album.tags.map{|a| a.update(:name=>'SR')} @db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id = tags.id) AND (albums_tags.album_id = 1)) -- s1", "UPDATE tags SET name = 'SR' WHERE (id = 3) -- s1"] @Artist.server(:s2).first.albums.map{|a| a.update(:name=>'MO')} @db.sqls.should == ["SELECT * FROM artists LIMIT 1 -- s2", "SELECT * FROM albums WHERE (albums.artist_id = 2) -- s2", "UPDATE albums SET name = 'MO' WHERE (id = 1) -- s2"] end specify "should have objects retrieved from a specific shard add associated objects to that shard" do album = @Album.server(:s1).first artist = @Artist.server(:s2).first @db.sqls.should == ["SELECT * FROM albums LIMIT 1 -- s1", "SELECT * FROM artists LIMIT 1 -- s2"] artist.add_album(:name=>'MO') sqls = @db.sqls ["INSERT INTO albums (artist_id, name) VALUES (2, 'MO') -- s2", "INSERT INTO albums (name, artist_id) VALUES ('MO', 2) -- s2"].should include(sqls.shift) sqls.should == ["SELECT * FROM albums WHERE (id = 1) LIMIT 1 -- s2"] album.add_tag(:name=>'SR') sqls = @db.sqls ["INSERT INTO albums_tags (album_id, tag_id) VALUES (1, 3) -- s1", "INSERT INTO albums_tags (tag_id, album_id) VALUES (3, 1) -- s1"].should include(sqls.pop) sqls.should == ["INSERT INTO tags (name) VALUES ('SR') -- s1", "SELECT * FROM tags WHERE (id = 1) LIMIT 1 -- s1", ] end specify "should have objects retrieved from a specific shard remove associated objects from that shard" do album = @Album.server(:s1).first artist = @Artist.server(:s2).first @db.sqls.should == ["SELECT * FROM albums LIMIT 1 -- s1", "SELECT * FROM artists LIMIT 1 -- s2"] artist.remove_album(1) sqls = @db.sqls ["UPDATE albums SET artist_id = NULL, name = 'RF' WHERE (id = 1) -- s2", "UPDATE albums SET name = 'RF', artist_id = NULL WHERE (id = 1) -- s2"].should include(sqls.pop) sqls.should == ["SELECT * FROM albums WHERE ((albums.artist_id = 2) AND (albums.id = 1)) LIMIT 1 -- s2"] album.remove_tag(3) @db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id = tags.id) AND (albums_tags.album_id = 1)) WHERE (tags.id = 3) LIMIT 1 -- s1", "DELETE FROM albums_tags WHERE ((album_id = 1) AND (tag_id = 3)) -- s1"] end specify "should have objects retrieved from a specific shard remove all associated objects from that shard" do album = @Album.server(:s1).first artist = @Artist.server(:s2).first @db.sqls.should == ["SELECT * FROM albums LIMIT 1 -- s1", "SELECT * FROM artists LIMIT 1 -- s2"] artist.remove_all_albums @db.sqls.should == ["UPDATE albums SET artist_id = NULL WHERE (artist_id = 2) -- s2"] album.remove_all_tags @db.sqls.should == ["DELETE FROM albums_tags WHERE (album_id = 1) -- s1"] end specify "should not override a server already set on an associated object" do @Album.server(:s1).first artist = @Artist.server(:s2).first @db.sqls.should == ["SELECT * FROM albums LIMIT 1 -- s1", "SELECT * FROM artists LIMIT 1 -- s2"] artist.add_album(@Album.load(:id=>4, :name=>'MO').set_server(:s3)) ["UPDATE albums SET artist_id = 2, name = 'MO' WHERE (id = 4) -- s3", "UPDATE albums SET name = 'MO', artist_id = 2 WHERE (id = 4) -- s3"].should include(@db.sqls.pop) artist.remove_album(@Album.load(:id=>5, :name=>'T', :artist_id=>2).set_server(:s4)) # Should select from current object's shard to check existing association, but update associated object's shard sqls = @db.sqls ["UPDATE albums SET artist_id = NULL, name = 'T' WHERE (id = 5) -- s4", "UPDATE albums SET name = 'T', artist_id = NULL WHERE 
(id = 5) -- s4"].should include(sqls.pop) sqls.should == ["SELECT 1 AS one FROM albums WHERE ((albums.artist_id = 2) AND (id = 5)) LIMIT 1 -- s2"] end specify "should be able to set a shard to use for any object using set_server" do @Album.server(:s1).first.set_server(:s2).reload @db.sqls.should == ["SELECT * FROM albums LIMIT 1 -- s1", "SELECT * FROM albums WHERE (id = 1) LIMIT 1 -- s2"] end specify "should use transactions on the correct shard" do @Album.use_transactions = true @Album.server(:s2).first.save sqls = @db.sqls ["UPDATE albums SET artist_id = 2, name = 'RF' WHERE (id = 1) -- s2", "UPDATE albums SET name = 'RF', artist_id = 2 WHERE (id = 1) -- s2"].should include(sqls.slice!(2)) sqls.should == ["SELECT * FROM albums LIMIT 1 -- s2", "BEGIN -- s2", "COMMIT -- s2"] end specify "should use override current shard when saving with given :server option" do @Album.use_transactions = true @Album.server(:s2).first.save(:server=>:s1) sqls = @db.sqls ["UPDATE albums SET artist_id = 2, name = 'RF' WHERE (id = 1) -- s1", "UPDATE albums SET name = 'RF', artist_id = 2 WHERE (id = 1) -- s1"].should include(sqls.slice!(2)) sqls.should == ["SELECT * FROM albums LIMIT 1 -- s2", "BEGIN -- s1", "COMMIT -- s1"] end end ����������������������������������������������������ruby-sequel-4.1.1/spec/extensions/shared_caching_spec.rb��������������������������������������������0000664�0000000�0000000�00000014405�12201565355�0023414�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "Shared caching behavior" do before do @db = Sequel.mock class ::LookupModel < ::Sequel::Model(@db) columns :id, :caching_model_id, :caching_model_id2 many_to_one :caching_model many_to_one :caching_model2, :key=>[:caching_model_id, :caching_model_id2], :class=>:CachingModel end @c = LookupModel class ::CachingModel < Sequel::Model(@db) columns :id, :id2 end @cc = CachingModel end after do Object.send(:remove_const, :CachingModel) Object.send(:remove_const, :LookupModel) end shared_examples_for "many_to_one_pk_lookup with composite keys" do it "should use a simple primary key lookup when retrieving many_to_one associated records with a composite key" do @db.sqls.should == [] @c.load(:id=>3, :caching_model_id=>1, :caching_model_id2=>2).caching_model2.should equal(@cm12) @c.load(:id=>3, :caching_model_id=>2, :caching_model_id2=>1).caching_model2.should equal(@cm21) @db.sqls.should == [] @cc.dataset._fetch = [] @c.load(:id=>4, :caching_model_id=>2, :caching_model_id2=>2).caching_model2.should == nil end end shared_examples_for "many_to_one_pk_lookup" do it "should use a simple primary key lookup when retrieving many_to_one associated records" do @cc.set_primary_key([:id, :id2]) @db.sqls.should == [] @c.load(:id=>3, :caching_model_id=>1).caching_model.should equal(@cm1) @c.load(:id=>4, :caching_model_id=>2).caching_model.should equal(@cm2) @db.sqls.should == [] @cc.dataset._fetch = [] @c.load(:id=>4, :caching_model_id=>3).caching_model.should == nil end it "should not use a simple primary key lookup if the assocation has a nil :key option" do @c.many_to_one :caching_model, :key=>nil, 
:dataset=>proc{CachingModel.filter(:caching_model_id=>caching_model_id)} @c.load(:id=>3, :caching_model_id=>1).caching_model @db.sqls.should_not == [] end it "should not use a simple primary key lookup if the assocation has a nil :key option" do @c.many_to_one :caching_model, :many_to_one_pk_lookup=>false @c.load(:id=>3, :caching_model_id=>1).caching_model @db.sqls.should_not == [] end it "should not use a simple primary key lookup if the assocation's :primary_key option doesn't match the primary key of the associated class" do @c.many_to_one :caching_model, :primary_key=>:id2 @c.load(:id=>3, :caching_model_id=>1).caching_model @db.sqls.should_not == [] end it "should not use a simple primary key lookup if the assocation has :conditions" do @c.many_to_one :caching_model, :conditions=>{:a=>1} @c.load(:id=>3, :caching_model_id=>1).caching_model @db.sqls.should_not == [] end it "should not use a simple primary key lookup if the assocation has :select" do @c.many_to_one :caching_model, :select=>[:a, :b] @c.load(:id=>3, :caching_model_id=>1).caching_model @db.sqls.should_not == [] end it "should not use a simple primary key lookup if the assocation has a block" do @c.many_to_one(:caching_model){|ds| ds.where{a > 1}} @c.load(:id=>3, :caching_model_id=>1).caching_model @db.sqls.should_not == [] end it "should not use a simple primary key lookup if the assocation has a non-default :dataset option" do cc = @cc @c.many_to_one :caching_model, :dataset=>proc{cc.where(:id=>caching_model_id)} @c.load(:id=>3, :caching_model_id=>1).caching_model @db.sqls.should_not == [] end it "should use a simple primary key lookup if explicitly set" do @c.many_to_one :caching_model, :select=>[:a, :b], :many_to_one_pk_lookup=>true @c.load(:id=>3, :caching_model_id=>1).caching_model @db.sqls.should == [] end it "should not use a simple primary key lookup if the prepared_statements_associations method is being used" do c2 = Class.new(Sequel::Model(@db[:not_caching_model])) c2.dataset._fetch = {:id=>1} c = Class.new(Sequel::Model(@db[:lookup_model])) c.class_eval do plugin :prepared_statements_associations columns :id, :caching_model_id many_to_one :caching_model, :class=>c2 end c.load(:id=>3, :caching_model_id=>1).caching_model.should == c2.load(:id=>1) @db.sqls.should_not == [] end it "should use a simple primary key lookup if the prepared_statements_associations method is being used but associated model also uses caching" do c = Class.new(Sequel::Model(:lookup_model)) c.class_eval do plugin :prepared_statements_associations columns :id, :caching_model_id many_to_one :caching_model end c.load(:id=>3, :caching_model_id=>1).caching_model.should equal(@cm1) @db.sqls.should == [] end end describe "With caching plugin" do before do @cache_class = Class.new(Hash) do attr_accessor :ttl def set(k, v, ttl); self[k] = v; @ttl = ttl; end def get(k); self[k]; end end cache = @cache_class.new @cache = cache @cc.plugin :caching, @cache @cc.dataset._fetch = {:id=>1} @cm1 = @cc[1] @cm2 = @cc[2] @cm12 = @cc[1, 2] @cm21 = @cc[2, 1] @db.sqls end it_should_behave_like "many_to_one_pk_lookup" it_should_behave_like "many_to_one_pk_lookup with composite keys" end describe "With static_cache plugin with single key" do before do @cc.dataset._fetch = [{:id=>1}, {:id=>2}] @cc.plugin :static_cache @cm1 = @cc[1] @cm2 = @cc[2] @db.sqls end it_should_behave_like "many_to_one_pk_lookup" it "should not issue regular query if primary key lookup returns no rows" do def @cc.primary_key_lookup(pk); end @c.many_to_one :caching_model @c.load(:id=>3, 
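# Specs for the single_table_inheritance plugin: the sti key column determines
# the class of returned rows, and subclass datasets are filtered on that key.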
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")

describe Sequel::Model, "single table inheritance plugin" do
  before do
    class ::StiTest < Sequel::Model
      columns :id, :kind, :blah
      plugin :single_table_inheritance, :kind
    end
    class ::StiTestSub1 < StiTest
    end
    class ::StiTestSub2 < StiTest
    end
    @ds = StiTest.dataset
    DB.reset
  end

  after do
    Object.send(:remove_const, :StiTestSub1)
    Object.send(:remove_const, :StiTestSub2)
    Object.send(:remove_const, :StiTest)
  end

  specify "should have simple_table = nil" do
    StiTest.simple_table.should == "sti_tests"
    StiTestSub1.simple_table.should == nil
  end

  it "should allow changing the inheritance column via a plugin :single_table_inheritance call" do
    StiTest.plugin :single_table_inheritance, :blah
    Object.send(:remove_const, :StiTestSub1)
    Object.send(:remove_const, :StiTestSub2)
    class ::StiTestSub1 < StiTest; end
    class ::StiTestSub2 < StiTest; end
    @ds._fetch = [{:blah=>'StiTest'}, {:blah=>'StiTestSub1'}, {:blah=>'StiTestSub2'}]
    StiTest.all.collect{|x| x.class}.should == [StiTest, StiTestSub1, StiTestSub2]
    StiTest.dataset.sql.should == "SELECT * FROM sti_tests"
    StiTestSub1.dataset.sql.should == "SELECT * FROM sti_tests WHERE (sti_tests.blah IN ('StiTestSub1'))"
    StiTestSub2.dataset.sql.should == "SELECT * FROM sti_tests WHERE (sti_tests.blah IN ('StiTestSub2'))"
  end

  it "should return rows with the correct class based on the polymorphic_key value" do
    @ds._fetch = [{:kind=>'StiTest'}, {:kind=>'StiTestSub1'}, {:kind=>'StiTestSub2'}]
    StiTest.all.collect{|x| x.class}.should == [StiTest, StiTestSub1, StiTestSub2]
  end

  it "should return rows with the correct class based on the polymorphic_key value when retrieving by primary key" do
    @ds._fetch = [{:kind=>'StiTestSub1'}]
    StiTest[1].class.should == StiTestSub1
  end

  it "should return rows with the correct class for subclasses based on the polymorphic_key value" do
    class ::StiTestSub1Sub < StiTestSub1; end
    StiTestSub1.dataset._fetch = [{:kind=>'StiTestSub1'}, {:kind=>'StiTestSub1Sub'}]
    StiTestSub1.all.collect{|x| x.class}.should == [StiTestSub1, StiTestSub1Sub]
  end

  it "should fallback to the main class if the given class does not exist" do
    @ds._fetch = {:kind=>'StiTestSub3'}
    StiTest.all.collect{|x| x.class}.should == [StiTest]
  end

  it "should fallback to the main class if the sti_key field is empty or nil without calling constantize" do
    called = false
    StiTest.meta_def(:constantize) do |s|
      called = true
      Object
    end
    StiTest.plugin :single_table_inheritance, :kind
    @ds._fetch = [{:kind=>''}, {:kind=>nil}]
    StiTest.all.collect{|x| x.class}.should == [StiTest, StiTest]
    called.should == false
  end

  it "should add a before_create hook that sets the model class name for the key" do
    StiTest.new.save
    StiTestSub1.new.save
    StiTestSub2.new.save
    DB.sqls.should == ["INSERT INTO sti_tests (kind) VALUES ('StiTest')", "SELECT * FROM sti_tests WHERE (id = 10) LIMIT 1", "INSERT INTO sti_tests (kind) VALUES ('StiTestSub1')", "SELECT * FROM sti_tests WHERE ((sti_tests.kind IN ('StiTestSub1')) AND (id = 10)) LIMIT 1", "INSERT INTO sti_tests (kind) VALUES ('StiTestSub2')", "SELECT * FROM sti_tests WHERE ((sti_tests.kind IN ('StiTestSub2')) AND (id = 10)) LIMIT 1"]
  end

  it "should have the before_create hook not override an existing value" do
    StiTest.create(:kind=>'StiTestSub1')
    DB.sqls.should == ["INSERT INTO sti_tests (kind) VALUES ('StiTestSub1')", "SELECT * FROM sti_tests WHERE (id = 10) LIMIT 1"]
  end

  it "should have the before_create hook handle columns with the same name as existing method names" do
    StiTest.plugin :single_table_inheritance, :type
    StiTest.columns :id, :type
    StiTest.create
    DB.sqls.should == ["INSERT INTO sti_tests (type) VALUES ('StiTest')", "SELECT * FROM sti_tests WHERE (id = 10) LIMIT 1"]
  end

  it "should add a filter to model datasets inside subclasses hook to only retrieve objects with the matching key" do
    StiTest.dataset.sql.should == "SELECT * FROM sti_tests"
    StiTestSub1.dataset.sql.should == "SELECT * FROM sti_tests WHERE (sti_tests.kind IN ('StiTestSub1'))"
    StiTestSub2.dataset.sql.should == "SELECT * FROM sti_tests WHERE (sti_tests.kind IN ('StiTestSub2'))"
  end

  it "should add a correct filter for multiple levels of subclasses" do
    class ::StiTestSub1A < StiTestSub1; end
    StiTestSub1.dataset.sql.should == "SELECT * FROM sti_tests WHERE (sti_tests.kind IN ('StiTestSub1', 'StiTestSub1A'))"
    StiTestSub1A.dataset.sql.should == "SELECT * FROM sti_tests WHERE (sti_tests.kind IN ('StiTestSub1A'))"
    class ::StiTestSub2A < StiTestSub2; end
    StiTestSub2.dataset.sql.should == "SELECT * FROM sti_tests WHERE (sti_tests.kind IN ('StiTestSub2', 'StiTestSub2A'))"
    StiTestSub2A.dataset.sql.should == "SELECT * FROM sti_tests WHERE (sti_tests.kind IN ('StiTestSub2A'))"
    class ::StiTestSub1B < StiTestSub1A; end
    StiTestSub1.dataset.sql.should == "SELECT * FROM sti_tests WHERE (sti_tests.kind IN ('StiTestSub1', 'StiTestSub1A', 'StiTestSub1B'))"
    StiTestSub1A.dataset.sql.should == "SELECT * FROM sti_tests WHERE (sti_tests.kind IN ('StiTestSub1A', 'StiTestSub1B'))"
    StiTestSub1B.dataset.sql.should == "SELECT * FROM sti_tests WHERE (sti_tests.kind IN ('StiTestSub1B'))"
  end
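  # The specs below exercise the :model_map, :key_map, and :key_chooser options
  # for customizing the mapping between sti key values and model classes.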
  describe "with custom options" do
    before do
      class ::StiTest2 < Sequel::Model
        columns :id, :kind
        def _save_refresh; end
      end
    end

    after do
      Object.send(:remove_const, :StiTest2)
      Object.send(:remove_const, :StiTest3)
      Object.send(:remove_const, :StiTest4)
    end

    specify "should have working row_proc if using set_dataset in subclass to remove columns" do
      StiTest2.plugin :single_table_inheritance, :kind
      class ::StiTest3 < ::StiTest2
        set_dataset(dataset.select(*(columns - [:blah])))
      end
      class ::StiTest4 < ::StiTest3; end
      StiTest3.dataset._fetch = {:id=>1, :kind=>'StiTest4'}
      StiTest3[1].should == StiTest4.load(:id=>1, :kind=>'StiTest4')
    end

    it "should work with custom procs with strings" do
      StiTest2.plugin :single_table_inheritance, :kind, :model_map=>proc{|v| v == 1 ? 'StiTest3' : 'StiTest4'}, :key_map=>proc{|klass| klass.name == 'StiTest3' ? 1 : 2}
      class ::StiTest3 < ::StiTest2; end
      class ::StiTest4 < ::StiTest2; end
      StiTest2.dataset.row_proc.call(:kind=>0).should be_a_instance_of(StiTest4)
      StiTest2.dataset.row_proc.call(:kind=>1).should be_a_instance_of(StiTest3)
      StiTest2.dataset.row_proc.call(:kind=>2).should be_a_instance_of(StiTest4)
      StiTest2.create.kind.should == 2
      StiTest3.create.kind.should == 1
      StiTest4.create.kind.should == 2
    end

    it "should work with custom procs with symbols" do
      StiTest2.plugin :single_table_inheritance, :kind, :model_map=>proc{|v| v == 1 ? :StiTest3 : :StiTest4}, :key_map=>proc{|klass| klass.name == 'StiTest3' ? 1 : 2}
      class ::StiTest3 < ::StiTest2; end
      class ::StiTest4 < ::StiTest2; end
      StiTest2.dataset.row_proc.call(:kind=>0).should be_a_instance_of(StiTest4)
      StiTest2.dataset.row_proc.call(:kind=>1).should be_a_instance_of(StiTest3)
      StiTest2.dataset.row_proc.call(:kind=>2).should be_a_instance_of(StiTest4)
      StiTest2.create.kind.should == 2
      StiTest3.create.kind.should == 1
      StiTest4.create.kind.should == 2
    end

    it "should work with custom hashes" do
      StiTest2.plugin :single_table_inheritance, :kind, :model_map=>{0=>StiTest2, 1=>:StiTest3, 2=>'StiTest4'}, :key_map=>{StiTest2=>4, 'StiTest3'=>5, 'StiTest4'=>6}
      class ::StiTest3 < ::StiTest2; end
      class ::StiTest4 < ::StiTest2; end
      StiTest2.dataset.row_proc.call(:kind=>0).should be_a_instance_of(StiTest2)
      StiTest2.dataset.row_proc.call(:kind=>1).should be_a_instance_of(StiTest3)
      StiTest2.dataset.row_proc.call(:kind=>2).should be_a_instance_of(StiTest4)
      StiTest2.create.kind.should == 4
      StiTest3.create.kind.should == 5
      StiTest4.create.kind.should == 6
      class ::StiTest5 < ::StiTest4; end
      StiTest5.create.kind.should == nil
    end

    it "should infer key_map from model_map if provided as a hash" do
      StiTest2.plugin :single_table_inheritance, :kind, :model_map=>{0=>StiTest2, 1=>'StiTest3', 2=>:StiTest4}
      class ::StiTest3 < ::StiTest2; end
      class ::StiTest4 < ::StiTest2; end
      StiTest2.dataset.row_proc.call(:kind=>0).should be_a_instance_of(StiTest2)
      StiTest2.dataset.row_proc.call(:kind=>1).should be_a_instance_of(StiTest3)
      StiTest2.dataset.row_proc.call(:kind=>2).should be_a_instance_of(StiTest4)
      StiTest2.create.kind.should == 0
      StiTest3.create.kind.should == 1
      StiTest4.create.kind.should == 2
    end

    it "should raise exceptions if a bad model value is used" do
      StiTest2.plugin :single_table_inheritance, :kind, :model_map=>{0=>1, 1=>1.5, 2=>Date.today}
      class ::StiTest3 < ::StiTest2; end
      class ::StiTest4 < ::StiTest2; end
      proc{StiTest2.dataset.row_proc.call(:kind=>0)}.should raise_error(Sequel::Error)
      proc{StiTest2.dataset.row_proc.call(:kind=>1)}.should raise_error(Sequel::Error)
      proc{StiTest2.dataset.row_proc.call(:kind=>2)}.should raise_error(Sequel::Error)
    end

    it "should work with non-bijective mappings" do
      StiTest2.plugin :single_table_inheritance, :kind, :model_map=>{0=>'StiTest3', 1=>'StiTest3', 2=>'StiTest4'}
      class ::StiTest3 < ::StiTest2; end
      class ::StiTest4 < ::StiTest2; end
      StiTest2.dataset.row_proc.call(:kind=>0).should be_a_instance_of(StiTest3)
      StiTest2.dataset.row_proc.call(:kind=>1).should be_a_instance_of(StiTest3)
      StiTest2.dataset.row_proc.call(:kind=>2).should be_a_instance_of(StiTest4)
      [0,1].should include(StiTest3.create.kind)
      StiTest4.create.kind.should == 2
    end

    it "should work with non-bijective mappings and key map procs" do
      StiTest2.plugin :single_table_inheritance, :kind, :key_map=>proc{|model| model.to_s == 'StiTest4' ? 2 : [0,1]}
      class ::StiTest3 < ::StiTest2; end
      class ::StiTest4 < ::StiTest2; end
      StiTest2.dataset.sql.should == "SELECT * FROM sti_test2s"
      StiTest3.dataset.sql.should == "SELECT * FROM sti_test2s WHERE (sti_test2s.kind IN (0, 1))"
      StiTest4.dataset.sql.should == "SELECT * FROM sti_test2s WHERE (sti_test2s.kind IN (2))"
    end

    it "should create correct sql with non-bijective mappings" do
      StiTest2.plugin :single_table_inheritance, :kind, :model_map=>{0=>'StiTest3', 1=>'StiTest3', 2=>'StiTest4'}
      class ::StiTest3 < ::StiTest2; end
      class ::StiTest4 < ::StiTest2; end
      StiTest2.dataset.sql.should == "SELECT * FROM sti_test2s"
      ["SELECT * FROM sti_test2s WHERE (sti_test2s.kind IN (0, 1))", "SELECT * FROM sti_test2s WHERE (sti_test2s.kind IN (1, 0))"].should include(StiTest3.dataset.sql)
    end

    it "should honor a :key_chooser" do
      StiTest2.plugin :single_table_inheritance, :kind, :key_chooser=>proc{|inst| inst.model.to_s.downcase}
      class ::StiTest3 < ::StiTest2; end
      class ::StiTest4 < ::StiTest2; end
      StiTest3.create.kind.should == 'stitest3'
      StiTest4.create.kind.should == 'stitest4'
    end
  end
end

ruby-sequel-4.1.1/spec/extensions/skip_create_refresh_spec.rb

require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")

describe "Sequel::Plugins::SkipCreateRefresh" do
  it "should skip the refresh after saving a new object" do
    c = Class.new(Sequel::Model(:a))
    c.columns :id, :x
    c.db.reset
    c.instance_dataset.meta_def(:insert){|*a| super(*a); 2}
    c.create(:x=>1)
    c.db.sqls.should == ['INSERT INTO a (x) VALUES (1)', 'SELECT * FROM a WHERE (id = 2) LIMIT 1']
    c.plugin :skip_create_refresh
    c.db.reset
    c.create(:x=>3).values.should == {:id=>2, :x=>3}
    c.db.sqls.should == ['INSERT INTO a (x) VALUES (3)']
  end
end

ruby-sequel-4.1.1/spec/extensions/spec_helper.rb
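# Shared helper loaded by the extension specs: sets up RSpec compatibility,
# deprecation handling, identifier settings, and the mock database used as DB.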
require 'rubygems'
if defined?(RSpec)
  begin
    require 'rspec/expectations'
  rescue LoadError
    nil
  end
end

if ENV['COVERAGE']
  require File.join(File.dirname(File.expand_path(__FILE__)), "../sequel_coverage")
  SimpleCov.sequel_coverage(:filter=>%r{lib/sequel/(extensions|plugins)/\w+\.rb\z})
end

unless Object.const_defined?('Sequel') && Sequel.const_defined?('Model')
  $:.unshift(File.join(File.dirname(File.expand_path(__FILE__)), "../../lib/"))
  require 'sequel'
end

Sequel::Deprecation.backtrace_filter = lambda{|line, lineno| lineno < 4 || line =~ /_spec\.rb/}

SEQUEL_EXTENSIONS_NO_DEPRECATION_WARNING = true

begin
  # Attempt to load ActiveSupport blank extension and inflector first, so Sequel
  # can override them.
  require 'active_support/core_ext/object/blank'
  require 'active_support/inflector'
  require 'active_support/core_ext/string/inflections'
rescue LoadError
  nil
end

Sequel.extension :meta_def
Sequel.extension :core_refinements if RUBY_VERSION >= '2.0.0'

def skip_warn(s)
  warn "Skipping test of #{s}" if ENV["SKIPPED_TEST_WARN"]
end

(defined?(RSpec) ? RSpec::Core::ExampleGroup : Spec::Example::ExampleGroup).class_eval do
  if ENV['SEQUEL_DEPRECATION_WARNINGS']
    class << self
      alias qspecify specify
    end
  else
    def self.qspecify(*a, &block)
      specify(*a) do
        begin
          output = Sequel::Deprecation.output
          Sequel::Deprecation.output = false
          instance_exec(&block)
        ensure
          Sequel::Deprecation.output = output
        end
      end
    end
  end
end

Sequel.quote_identifiers = false
Sequel.identifier_input_method = nil
Sequel.identifier_output_method = nil

class << Sequel::Model
  attr_writer :db_schema
  alias orig_columns columns
  def columns(*cols)
    return super if cols.empty?
    define_method(:columns){cols}
    @dataset.instance_variable_set(:@columns, cols) if @dataset
    def_column_accessor(*cols)
    @columns = cols
    @db_schema = {}
    cols.each{|c| @db_schema[c] = {}}
  end
end

Sequel::Model.use_transactions = false
Sequel.cache_anonymous_models = false

db = Sequel.mock(:fetch=>{:id => 1, :x => 1}, :numrows=>1, :autoid=>proc{|sql| 10})
def db.schema(*) [[:id, {:primary_key=>true}]] end
def db.reset() sqls end
def db.supports_schema_parsing?() true end
Sequel::Model.db = DB = db

if ENV['SEQUEL_COLUMNS_INTROSPECTION']
  Sequel.extension :columns_introspection
  Sequel::Database.extension :columns_introspection
  Sequel::Mock::Dataset.send(:include, Sequel::ColumnsIntrospection)
end

ruby-sequel-4.1.1/spec/extensions/split_array_nil_spec.rb

require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper')

describe "split_array_nil extension" do
  before do
    @ds = Sequel.mock[:table].extension(:split_array_nil)
  end

  specify "should split IN with nil in array into separate OR IS NULL clause" do
    @ds.filter(:a=>[1, nil]).sql.should == "SELECT * FROM table WHERE ((a IN (1)) OR (a IS NULL))"
  end

  specify "should split NOT IN with nil in array into separate AND IS NOT NULL clause" do
    @ds.exclude(:a=>[1, nil]).sql.should == "SELECT * FROM table WHERE ((a NOT IN (1)) AND (a IS NOT NULL))"
  end

  specify "should not affect other IN/NOT IN clauses" do
    @ds.filter(:a=>[1, 2]).sql.should == "SELECT * FROM table WHERE (a IN (1, 2))"
    @ds.exclude(:a=>[1, 2]).sql.should == "SELECT * FROM table WHERE (a NOT IN (1, 2))"
  end

  specify "should not affect other types of filter clauses" do
    @ds.filter(:a=>1).sql.should == "SELECT * FROM table WHERE (a = 1)"
  end
end

ruby-sequel-4.1.1/spec/extensions/sql_expr_spec.rb

require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper')

Sequel.extension :sql_expr

describe "Sequel sql_expr extension" do
  before do
    @ds = Sequel.mock.dataset
  end

  specify "Object#sql_expr should wrap the object in a GenericComplexExpression" do
    o = Object.new
    def o.sql_literal(ds) 'foo' end
    s = o.sql_expr
    @ds.literal(s).should == "foo"
    @ds.literal(s+1).should == "(foo + 1)"
    @ds.literal(s & true).should == "(foo AND 't')"
    @ds.literal(s < 1).should == "(foo < 1)"
    @ds.literal(s.sql_subscript(1)).should == "foo[1]"
    @ds.literal(s.like('a')).should == "(foo LIKE 'a' ESCAPE '\\')"
    @ds.literal(s.as(:a)).should == "foo AS a"
    @ds.literal(s.cast(Integer)).should == "CAST(foo AS integer)"
    @ds.literal(s.desc).should == "foo DESC"
    @ds.literal(s.sql_string + '1').should == "(foo || '1')"
  end

  specify "Numeric#sql_expr should wrap the object in a NumericExpression" do
    [1, 2.0, 2^70, BigDecimal.new('1.0')].each do |o|
      @ds.literal(o.sql_expr).should == @ds.literal(o)
      @ds.literal(o.sql_expr + 1).should == "(#{@ds.literal(o)} + 1)"
    end
  end

  specify "String#sql_expr should wrap the object in a StringExpression" do
    @ds.literal("".sql_expr).should == "''"
    @ds.literal("".sql_expr + :a).should == "('' || a)"
  end

  specify "NilClass, TrueClass, and FalseClass#sql_expr should wrap the object in a BooleanExpression" do
    [nil, true, false].each do |o|
      @ds.literal(o.sql_expr).should == @ds.literal(o)
      @ds.literal(o.sql_expr & :a).should == "(#{@ds.literal(o)} AND a)"
    end
  end

  specify "Proc#sql_expr should treat the object as a virtual row block" do
    @ds.literal(proc{a}.sql_expr).should == "a"
    @ds.literal(proc{a__b}.sql_expr).should == "a.b"
    @ds.literal(proc{a(b)}.sql_expr).should == "a(b)"
  end

  specify "Proc#sql_expr should wrap the object in a GenericComplexExpression if the object is not already an expression" do
    @ds.literal(proc{1}.sql_expr).should == "1"
    @ds.literal(proc{1}.sql_expr + 2).should == "(1 + 2)"
  end

  specify "Proc#sql_expr should convert a hash or array of two element arrays to a BooleanExpression" do
    @ds.literal(proc{{a=>b}}.sql_expr).should == "(a = b)"
    @ds.literal(proc{[[a, b]]}.sql_expr & :a).should == "((a = b) AND a)"
  end
end

ruby-sequel-4.1.1/spec/extensions/static_cache_spec.rb
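# Specs for the static_cache plugin: all rows are loaded into an in-memory hash
# when the plugin is loaded, so lookups and enumeration avoid database queries.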
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")

describe "Sequel::Plugins::StaticCache" do
  before do
    @db = Sequel.mock
    @db.fetch = [{:id=>1}, {:id=>2}]
    @c = Class.new(Sequel::Model(@db[:t]))
    @c.columns :id
    @c.plugin :static_cache
    @c1 = @c.cache[1]
    @c2 = @c.cache[2]
    @db.sqls
  end

  it "should use a ruby hash as a cache of all model instances" do
    @c.cache.should == {1=>@c.load(:id=>1), 2=>@c.load(:id=>2)}
  end

  it "should work correctly with composite keys" do
    @db.fetch = [{:id=>1, :id2=>1}, {:id=>2, :id2=>1}]
    @c = Class.new(Sequel::Model(@db[:t]))
    @c.columns :id, :id2
    @c.set_primary_key([:id, :id2])
    @c.plugin :static_cache
    @db.sqls
    @c1 = @c.cache[[1, 2]]
    @c2 = @c.cache[[2, 1]]
    @c[[1, 2]].should equal(@c1)
    @c[[2, 1]].should equal(@c2)
    @db.sqls.should == []
  end

  it "should make .[] method with primary key use the cache" do
    @c[1].should equal(@c1)
    @c[2].should equal(@c2)
    @c[3].should be_nil
    @c[[1, 2]].should be_nil
    @c[nil].should be_nil
    @c[].should be_nil
    @db.sqls.should == []
  end

  it "should have .[] with a hash not use the cache" do
    @db.fetch = {:id=>2}
    @c[:id=>2].should == @c2
    @db.sqls.should == ['SELECT * FROM t WHERE (id = 2) LIMIT 1']
  end

  it "should support cache_get_pk" do
    @c.cache_get_pk(1).should equal(@c1)
    @c.cache_get_pk(2).should equal(@c2)
    @c.cache_get_pk(3).should be_nil
    @db.sqls.should == []
  end

  it "should have each just iterate over the hash's values without sending a query" do
    a = []
    @c.each{|o| a << o}
    a = a.sort_by{|o| o.id}
    a.first.should equal(@c1)
    a.last.should equal(@c2)
    @db.sqls.should == []
  end

  it "should have map just iterate over the hash's values without sending a query if no argument is given" do
    @c.map{|v| v.id}.sort.should == [1, 2]
    @db.sqls.should == []
  end

  it "should have count with no argument or block not issue a query" do
    @c.count.should == 2
    @db.sqls.should == []
  end

  it "should have count with argument or block issue a query" do
    @db.fetch = [[{:count=>1}], [{:count=>2}]]
    @c.count(:a).should == 1
    @c.count{b}.should == 2
    @db.sqls.should == ["SELECT count(a) AS count FROM t LIMIT 1", "SELECT count(b) AS count FROM t LIMIT 1"]
  end

  it "should have map not send a query if given an argument" do
    @c.map(:id).sort.should == [1, 2]
    @db.sqls.should == []
    @c.map([:id,:id]).sort.should == [[1,1], [2,2]]
    @db.sqls.should == []
  end

  it "should have map without a block or argument not raise an exception or issue a query" do
    @c.map.to_a.should == @c.all
    @db.sqls.should == []
  end

  it "should have map without a block not return a frozen object" do
    @c.map.frozen?.should be_false
  end

  it "should have map with a block and argument raise" do
    proc{@c.map(:id){}}.should raise_error(Sequel::Error)
  end

  it "should have other enumerable methods work without sending a query" do
    a = @c.sort_by{|o| o.id}
    a.first.should equal(@c1)
    a.last.should equal(@c2)
    @db.sqls.should == []
  end

  it "should have all just return the cached values" do
    a = @c.all.sort_by{|o| o.id}
    a.first.should equal(@c1)
    a.last.should equal(@c2)
    @db.sqls.should == []
  end

  it "should have all not return a frozen object" do
    @c.all.frozen?.should be_false
  end

  it "should have all return things in dataset order" do
    @c.all.should == [@c1, @c2]
  end

  it "should have to_hash without arguments return the cached objects without a query" do
    a = @c.to_hash
    a.should == {1=>@c1, 2=>@c2}
    a[1].should equal(@c1)
    a[2].should equal(@c2)
    @db.sqls.should == []
  end

  it "should have to_hash with arguments return the cached objects without a query" do
    a = @c.to_hash(:id)
    a.should == {1=>@c1, 2=>@c2}
    a[1].should equal(@c1)
    a[2].should equal(@c2)
    a = @c.to_hash([:id])
    a.should == {[1]=>@c1, [2]=>@c2}
    a[[1]].should equal(@c1)
    a[[2]].should equal(@c2)
    @c.to_hash(:id, :id).should == {1=>1, 2=>2}
    @c.to_hash([:id], :id).should == {[1]=>1, [2]=>2}
    @c.to_hash(:id, [:id]).should == {1=>[1], 2=>[2]}
    @c.to_hash([:id], [:id]).should == {[1]=>[1], [2]=>[2]}
    @db.sqls.should == []
  end

  it "should have to_hash not return a frozen object" do
    @c.to_hash.frozen?.should be_false
  end

  it "should have to_hash_groups without arguments return the cached objects without a query" do
    a = @c.to_hash_groups(:id)
    a.should == {1=>[@c1], 2=>[@c2]}
    a[1].first.should equal(@c1)
    a[2].first.should equal(@c2)
    a = @c.to_hash_groups([:id])
    a.should == {[1]=>[@c1], [2]=>[@c2]}
    a[[1]].first.should equal(@c1)
    a[[2]].first.should equal(@c2)
    @c.to_hash_groups(:id, :id).should == {1=>[1], 2=>[2]}
    @c.to_hash_groups([:id], :id).should == {[1]=>[1], [2]=>[2]}
    @c.to_hash_groups(:id, [:id]).should == {1=>[[1]], 2=>[[2]]}
    @c.to_hash_groups([:id], [:id]).should == {[1]=>[[1]], [2]=>[[2]]}
    @db.sqls.should == []
  end

  it "all of the static cache values (model instances) should be frozen" do
    @c.all.all?{|o| o.frozen?}.should be_true
  end

  it "subclasses should work correctly" do
    c = Class.new(@c)
    c.all.should == [c.load(:id=>1), c.load(:id=>2)]
    c.to_hash.should == {1=>c.load(:id=>1), 2=>c.load(:id=>2)}
    @db.sqls.should == ['SELECT * FROM t']
  end

  it "set_dataset should work correctly" do
    ds = @c.dataset.from(:t2)
    ds.instance_variable_set(:@columns, [:id])
    ds._fetch = {:id=>3}
    @c.dataset = ds
    @c.all.should == [@c.load(:id=>3)]
    @c.to_hash.should == {3=>@c.load(:id=>3)}
    @c.to_hash[3].should equal(@c.all.first)
    @db.sqls.should == ['SELECT * FROM t2']
  end
end

ruby-sequel-4.1.1/spec/extensions/string_date_time_spec.rb
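# Specs for the string_date_time extension, which adds to_time, to_date,
# to_datetime, and to_sequel_time conversion methods to String.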
"should raise InvalidValue for an invalid time" do proc {'0000-00-00'.to_time}.should raise_error(Sequel::InvalidValue) end end describe "String#to_date" do after do Sequel.convert_two_digit_years = true end specify "should convert the string into a Date object" do "2007-07-11".to_date.should == Date.parse("2007-07-11") end specify "should convert 2 digit years by default" do "July 11, 07".to_date.should == Date.parse("2007-07-11") end specify "should not convert 2 digit years if set not to" do Sequel.convert_two_digit_years = false "July 11, 07".to_date.should == Date.parse("0007-07-11") end specify "should raise InvalidValue for an invalid date" do proc {'0000-00-00'.to_date}.should raise_error(Sequel::InvalidValue) end end describe "String#to_datetime" do after do Sequel.convert_two_digit_years = true end specify "should convert the string into a DateTime object" do "2007-07-11 10:11:12a".to_datetime.should == DateTime.parse("2007-07-11 10:11:12a") end specify "should convert 2 digit years by default" do "July 11, 07 10:11:12a".to_datetime.should == DateTime.parse("2007-07-11 10:11:12a") end specify "should not convert 2 digit years if set not to" do Sequel.convert_two_digit_years = false "July 11, 07 10:11:12a".to_datetime.should == DateTime.parse("0007-07-11 10:11:12a") end specify "should raise InvalidValue for an invalid date" do proc {'0000-00-00'.to_datetime}.should raise_error(Sequel::InvalidValue) end end describe "String#to_sequel_time" do after do Sequel.datetime_class = Time Sequel.convert_two_digit_years = true end specify "should convert the string into a Time object by default" do "2007-07-11 10:11:12a".to_sequel_time.class.should == Time "2007-07-11 10:11:12a".to_sequel_time.should == Time.parse("2007-07-11 10:11:12a") end specify "should convert the string into a DateTime object if that is set" do Sequel.datetime_class = DateTime "2007-07-11 10:11:12a".to_sequel_time.class.should == DateTime "2007-07-11 10:11:12a".to_sequel_time.should == DateTime.parse("2007-07-11 10:11:12a") end specify "should convert 2 digit years by default if using DateTime class" do Sequel.datetime_class = DateTime "July 11, 07 10:11:12a".to_sequel_time.should == DateTime.parse("2007-07-11 10:11:12a") end specify "should not convert 2 digit years if set not to when using DateTime class" do Sequel.datetime_class = DateTime Sequel.convert_two_digit_years = false "July 11, 07 10:11:12a".to_sequel_time.should == DateTime.parse("0007-07-11 10:11:12a") end specify "should raise InvalidValue for an invalid time" do proc {'0000-00-00'.to_sequel_time}.should raise_error(Sequel::InvalidValue) Sequel.datetime_class = DateTime proc {'0000-00-00'.to_sequel_time}.should raise_error(Sequel::InvalidValue) end end 
�������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/spec/extensions/string_stripper_spec.rb�������������������������������������������0000664�0000000�0000000�00000003451�12201565355�0023727�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "Sequel::Plugins::StringStripper" do before do @db = Sequel.mock @c = Class.new(Sequel::Model(@db[:test])) @c.columns :name, :b @c.db_schema[:b][:type] = :blob @c.plugin :string_stripper @o = @c.new end it "should strip all input strings" do @o.name = ' name ' @o.name.should == 'name' end it "should not affect other types" do @o.name = 1 @o.name.should == 1 @o.name = Date.today @o.name.should == Date.today end it "should not strip strings for blob arguments" do v = Sequel.blob(' name ') @o.name = v @o.name.should equal(v) end it "should not strip strings for blob columns" do @o.b = ' name ' @o.b.should be_a_kind_of(Sequel::SQL::Blob) @o.b.should == Sequel.blob(' name ') end it "should allow skipping of columns using Model.skip_string_stripping" do @c.skip_string_stripping?(:name).should == false @c.skip_string_stripping :name @c.skip_string_stripping?(:name).should == true v = ' name ' @o.name = v @o.name.should equal(v) end it "should work correctly in subclasses" do o = Class.new(@c).new o.name = ' name ' o.name.should == 'name' o.b = ' name ' o.b.should be_a_kind_of(Sequel::SQL::Blob) o.b.should == Sequel.blob(' name ') end it "should work correctly for dataset changes" do c = Class.new(Sequel::Model(@db[:test])) c.plugin :string_stripper def @db.supports_schema_parsing?() true end def @db.schema(*) [[:name, {}], [:b, {:type=>:blob}]] end c.set_dataset(@db[:test2]) o = c.new o.name = ' name ' o.name.should == 'name' o.b = ' name ' o.b.should be_a_kind_of(Sequel::SQL::Blob) o.b.should == Sequel.blob(' name ') end end �����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/spec/extensions/subclasses_spec.rb������������������������������������������������0000664�0000000�0000000�00000003423�12201565355�0022637�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe Sequel::Model, "Subclasses plugin" do before do @c = Class.new(Sequel::Model) @c.plugin :subclasses end specify "#subclasses should record direct subclasses of the given model" do @c.subclasses.should == [] sc1 = Class.new(@c) @c.subclasses.should == [sc1] sc1.subclasses.should == [] sc2 
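# Specs for the subclasses plugin, which tracks direct subclasses and all
# descendents of a model class.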
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")

describe Sequel::Model, "Subclasses plugin" do
  before do
    @c = Class.new(Sequel::Model)
    @c.plugin :subclasses
  end

  specify "#subclasses should record direct subclasses of the given model" do
    @c.subclasses.should == []

    sc1 = Class.new(@c)
    @c.subclasses.should == [sc1]
    sc1.subclasses.should == []

    sc2 = Class.new(@c)
    @c.subclasses.should == [sc1, sc2]
    sc1.subclasses.should == []
    sc2.subclasses.should == []

    ssc1 = Class.new(sc1)
    @c.subclasses.should == [sc1, sc2]
    sc1.subclasses.should == [ssc1]
    sc2.subclasses.should == []
  end

  specify "#descendents should record all descendent subclasses of the given model" do
    @c.descendents.should == []

    sc1 = Class.new(@c)
    @c.descendents.should == [sc1]
    sc1.descendents.should == []

    sc2 = Class.new(@c)
    @c.descendents.should == [sc1, sc2]
    sc1.descendents.should == []
    sc2.descendents.should == []

    ssc1 = Class.new(sc1)
    @c.descendents.should == [sc1, ssc1, sc2]
    sc1.descendents.should == [ssc1]
    sc2.descendents.should == []
    ssc1.descendents.should == []

    sssc1 = Class.new(ssc1)
    @c.descendents.should == [sc1, ssc1, sssc1, sc2]
    sc1.descendents.should == [ssc1, sssc1]
    sc2.descendents.should == []
    ssc1.descendents.should == [sssc1]
    sssc1.descendents.should == []
  end

  specify "plugin block should be called with each subclass created" do
    c = Class.new(Sequel::Model)
    a = []
    c.plugin(:subclasses){|sc| a << sc}
    sc1 = Class.new(c)
    a.should == [sc1]
    sc2 = Class.new(c)
    a.should == [sc1, sc2]
    sc3 = Class.new(sc1)
    a.should == [sc1, sc2, sc3]
    sc4 = Class.new(sc3)
    a.should == [sc1, sc2, sc3, sc4]
  end
end

ruby-sequel-4.1.1/spec/extensions/tactical_eager_loading_spec.rb

require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")

describe "Sequel::Plugins::TacticalEagerLoading" do
  before do
    class ::TacticalEagerLoadingModel < Sequel::Model
      plugin :tactical_eager_loading
      columns :id, :parent_id
      many_to_one :parent, :class=>self
      one_to_many :children, :class=>self, :key=>:parent_id
      dataset._fetch = proc do |sql|
        if sql !~ /WHERE/
          [{:id=>1, :parent_id=>101}, {:id=>2, :parent_id=>102}, {:id=>101, :parent_id=>nil}, {:id=>102, :parent_id=>nil}]
        elsif sql =~ /WHERE.*\bid = (\d+)/
          [{:id=>$1.to_i, :parent_id=>nil}]
        elsif sql =~ /WHERE.*\bid IN \(([\d, ]*)\)/
          $1.split(', ').map{|x| {:id=>x.to_i, :parent_id=>nil}}
        elsif sql =~ /WHERE.*\bparent_id IN \(([\d, ]*)\)/
          $1.split(', ').map{|x| {:id=>x.to_i - 100, :parent_id=>x.to_i} if x.to_i > 100}.compact
        end
      end
    end
    @c = ::TacticalEagerLoadingModel
    @ds = TacticalEagerLoadingModel.dataset
    DB.reset
  end

  after do
    Object.send(:remove_const, :TacticalEagerLoadingModel)
  end

  it "Dataset#all should set the retrieved_by and retrieved_with attributes" do
    ts = @c.all
    ts.map{|x| [x.retrieved_by, x.retrieved_with]}.should == [[@ds,ts], [@ds,ts], [@ds,ts], [@ds,ts]]
  end

  it "Dataset#all shouldn't raise an error if a Sequel::Model instance is not returned" do
    proc{@c.naked.all}.should_not raise_error
  end

  it "association getter methods should eagerly load the association if the association isn't cached" do
    DB.sqls.length.should == 0
    ts = @c.all
    DB.sqls.length.should == 1
    ts.map{|x| x.parent}.should == [ts[2], ts[3], nil, nil]
    DB.sqls.length.should == 1
    ts.map{|x| x.children}.should == [[], [], [ts[0]], [ts[1]]]
    DB.sqls.length.should == 1
  end

  it "association getter methods should not eagerly load the association if the association is cached" do
    DB.sqls.length.should == 0
    ts = @c.all
    DB.sqls.length.should == 1
    ts.map{|x| x.parent}.should == [ts[2], ts[3], nil, nil]
    @ds.should_not_receive(:eager_load)
    ts.map{|x| x.parent}.should == [ts[2], ts[3], nil, nil]
  end

  it "should handle case where an association is valid on an instance, but not on all instances" do
    c = Class.new(@c)
    c.many_to_one :parent2, :class=>@c, :key=>:parent_id
    @c.dataset.row_proc = proc{|r| (r[:parent_id] == 101 ? c : @c).call(r)}
    @c.all{|x| x.parent2 if x.is_a?(c)}
  end

  it "association getter methods should not eagerly load the association if an instance is frozen" do
    ts = @c.all
    ts.first.freeze
    DB.sqls.length.should == 1
    ts.map{|x| x.parent}.should == [ts[2], ts[3], nil, nil]
    DB.sqls.length.should == 2
    ts.map{|x| x.children}.should == [[], [], [ts[0]], [ts[1]]]
    DB.sqls.length.should == 2
    ts.map{|x| x.parent}.should == [ts[2], ts[3], nil, nil]
    DB.sqls.length.should == 1
    ts.map{|x| x.children}.should == [[], [], [ts[0]], [ts[1]]]
    DB.sqls.length.should == 1
  end
end

ruby-sequel-4.1.1/spec/extensions/thread_local_timezones_spec.rb
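# Specs for the thread_local_timezones extension, which allows per-thread
# overrides of the application, database, and typecast timezones.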
q2.pop q.push nil q1.push nil t1.join t2.join tz1.should == :utc tz2.should == :local end end ��������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/spec/extensions/timestamps_spec.rb������������������������������������������������0000664�0000000�0000000�00000011727�12201565355�0022664�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "Sequel::Plugins::Timestamps" do before do dc = Object.new dc.instance_eval do def now '2009-08-01' end end Sequel.datetime_class = dc @c = Class.new(Sequel::Model(:t)) @c.class_eval do columns :id, :created_at, :updated_at plugin :timestamps def _save_refresh(*) end db.reset end @c.dataset.autoid = nil end after do Sequel.datetime_class = Time end it "should set the create timestamp field on creation" do o = @c.create @c.db.sqls.should == ["INSERT INTO t (created_at) VALUES ('2009-08-01')"] o.created_at.should == '2009-08-01' end it "should set the update timestamp field on update" do o = @c.load(:id=>1).save @c.db.sqls.should == ["UPDATE t SET updated_at = '2009-08-01' WHERE (id = 1)"] o.updated_at.should == '2009-08-01' end it "should not update the update timestamp on creation" do @c.create.updated_at.should == nil end it "should use the same value for the creation and update timestamps when creating if the :update_on_create option is given" do @c.plugin :timestamps, :update_on_create=>true o = @c.create sqls = @c.db.sqls sqls.shift.should =~ /INSERT INTO t \((creat|updat)ed_at, (creat|updat)ed_at\) VALUES \('2009-08-01', '2009-08-01'\)/ sqls.should == [] o.created_at.should === o.updated_at end it "should allow specifying the create timestamp field via the :create option" do c = Class.new(Sequel::Model(:t)) c.class_eval do columns :id, :c plugin :timestamps, :create=>:c def _save_refresh(*) end end o = c.create c.db.sqls.should == ["INSERT INTO t (c) VALUES ('2009-08-01')"] o.c.should == '2009-08-01' end it "should allow specifying the update timestamp field via the :update option" do c = Class.new(Sequel::Model(:t)) c.class_eval do columns :id, :u plugin :timestamps, :update=>:u db.reset end o = c.load(:id=>1).save c.db.sqls.should == ["UPDATE t SET u = '2009-08-01' WHERE (id = 1)"] o.u.should == '2009-08-01' end it "should not raise an error if the model doesn't have the timestamp columns" do c = Class.new(Sequel::Model(:t)) c.class_eval do columns :id, :x plugin :timestamps db.reset def _save_refresh; self end end c.create(:x=>2) c.load(:id=>1, :x=>2).save c.db.sqls.should == ["INSERT INTO t (x) VALUES (2)", "UPDATE t SET x = 2 WHERE (id = 1)"] end it "should not overwrite an existing create timestamp" do o = @c.create(:created_at=>'2009-08-03') @c.db.sqls.should == ["INSERT INTO t (created_at) VALUES ('2009-08-03')"] o.created_at.should == '2009-08-03' end it "should overwrite an existing create timestamp if 
the :force option is used" do @c.plugin :timestamps, :force=>true o = @c.create(:created_at=>'2009-08-03') @c.db.sqls.should == ["INSERT INTO t (created_at) VALUES ('2009-08-01')"] o.created_at.should == '2009-08-01' end it "should have create_timestamp_field give the create timestamp field" do @c.create_timestamp_field.should == :created_at @c.plugin :timestamps, :create=>:c @c.create_timestamp_field.should == :c end it "should have update_timestamp_field give the update timestamp field" do @c.update_timestamp_field.should == :updated_at @c.plugin :timestamps, :update=>:u @c.update_timestamp_field.should == :u end it "should have create_timestamp_overwrite? give whether to overwrite an existing create timestamp" do @c.create_timestamp_overwrite?.should == false @c.plugin :timestamps, :force=>true @c.create_timestamp_overwrite?.should == true end it "should have set_update_timestamp_on_create? give whether to set the update timestamp on create" do @c.set_update_timestamp_on_create?.should == false @c.plugin :timestamps, :update_on_create=>true @c.set_update_timestamp_on_create?.should == true end it "should work with subclasses" do c = Class.new(@c) o = c.create o.created_at.should == '2009-08-01' o.updated_at.should == nil o = c.load(:id=>1).save o.updated_at.should == '2009-08-01' c.db.sqls.should == ["INSERT INTO t (created_at) VALUES ('2009-08-01')", "UPDATE t SET updated_at = '2009-08-01' WHERE (id = 1)"] c.create(:created_at=>'2009-08-03').created_at.should == '2009-08-03' c.class_eval do columns :id, :c, :u plugin :timestamps, :create=>:c, :update=>:u, :force=>true, :update_on_create=>true end c2 = Class.new(c) c2.db.reset o = c2.create o.c.should == '2009-08-01' o.u.should === o.c c2.db.sqls.first.should =~ /INSERT INTO t \([cu], [cu]\) VALUES \('2009-08-01', '2009-08-01'\)/ c2.db.reset o = c2.load(:id=>1).save o.u.should == '2009-08-01' c2.db.sqls.should == ["UPDATE t SET u = '2009-08-01' WHERE (id = 1)"] c2.create(:c=>'2009-08-03').c.should == '2009-08-01' end end
ruby-sequel-4.1.1/spec/extensions/to_dot_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe Sequel::Model, "to_dot extension" do def dot(ds) Sequel::ToDot.new(ds).instance_variable_get(:@dot)[4...-1] end before do @db = DB @ds = @db.dataset end it "should output a string suitable for input to the graphviz dot program" do @ds.extension(:to_dot).to_dot.should == (<<END).strip digraph G { 0 [label="self"]; 0 -> 1 [label=""]; 1 [label="Dataset"]; } END end it "should handle an empty dataset" do dot(@ds).should == [] end it "should handle WITH" do a = dot(@ds.with(:a, @ds)) a[0..3].should == ["1 -> 2 [label=\"with\"];", "2 [label=\"Array\"];", "2 -> 3 [label=\"0\"];", "3 [label=\"Hash\"];"] [["3 -> 4 [label=\"dataset\"];", "4 [label=\"Dataset\"];", "3 -> 5 [label=\"name\"];", "5 [label=\":a\"];"], ["3 -> 4 [label=\"name\"];", "4 [label=\":a\"];", "3 -> 5 [label=\"dataset\"];", "5 [label=\"Dataset\"];"]].should include(a[4..-1]) end it "should handle DISTINCT" do dot(@ds.distinct).should
== ["1 -> 2 [label=\"distinct\"];", "2 [label=\"Array\"];"] end it "should handle FROM" do dot(@ds.from(:a)).should == ["1 -> 2 [label=\"from\"];", "2 [label=\"Array\"];", "2 -> 3 [label=\"0\"];", "3 [label=\":a\"];"] end it "should handle JOIN" do dot(@ds.join(:a)).should == ["1 -> 2 [label=\"join\"];", "2 [label=\"Array\"];", "2 -> 3 [label=\"0\"];", "3 [label=\"INNER JOIN\"];", "3 -> 4 [label=\"table\"];", "4 [label=\":a\"];"] end it "should handle WHERE" do dot(@ds.filter(true)).should == ["1 -> 2 [label=\"where\"];", "2 [label=\"ComplexExpression: NOOP\"];", "2 -> 3 [label=\"0\"];", "3 [label=\"true\"];"] end it "should handle GROUP" do dot(@ds.group(:a)).should == ["1 -> 2 [label=\"group\"];", "2 [label=\"Array\"];", "2 -> 3 [label=\"0\"];", "3 [label=\":a\"];"] end it "should handle HAVING" do dot(@ds.having(:a)).should == ["1 -> 2 [label=\"having\"];", "2 [label=\":a\"];"] end it "should handle UNION" do dot(@ds.union(@ds)).should == ["1 -> 2 [label=\"from\"];", "2 [label=\"Array\"];", "2 -> 3 [label=\"0\"];", "3 [label=\"AliasedExpression\"];", "3 -> 4 [label=\"expression\"];", "4 [label=\"Dataset\"];", "4 -> 5 [label=\"compounds\"];", "5 [label=\"Array\"];", "5 -> 6 [label=\"0\"];", "6 [label=\"Array\"];", "6 -> 7 [label=\"0\"];", "7 [label=\":union\"];", "6 -> 8 [label=\"1\"];", "8 [label=\"Dataset\"];", "6 -> 9 [label=\"2\"];", "9 [label=\"nil\"];", "3 -> 10 [label=\"alias\"];", "10 [label=\":t1\"];"] end it "should handle INTERSECT" do dot(@ds.intersect(@ds)).should == ["1 -> 2 [label=\"from\"];", "2 [label=\"Array\"];", "2 -> 3 [label=\"0\"];", "3 [label=\"AliasedExpression\"];", "3 -> 4 [label=\"expression\"];", "4 [label=\"Dataset\"];", "4 -> 5 [label=\"compounds\"];", "5 [label=\"Array\"];", "5 -> 6 [label=\"0\"];", "6 [label=\"Array\"];", "6 -> 7 [label=\"0\"];", "7 [label=\":intersect\"];", "6 -> 8 [label=\"1\"];", "8 [label=\"Dataset\"];", "6 -> 9 [label=\"2\"];", "9 [label=\"nil\"];", "3 -> 10 [label=\"alias\"];", "10 [label=\":t1\"];"] end it "should handle EXCEPT" do dot(@ds.except(@ds)).should == ["1 -> 2 [label=\"from\"];", "2 [label=\"Array\"];", "2 -> 3 [label=\"0\"];", "3 [label=\"AliasedExpression\"];", "3 -> 4 [label=\"expression\"];", "4 [label=\"Dataset\"];", "4 -> 5 [label=\"compounds\"];", "5 [label=\"Array\"];", "5 -> 6 [label=\"0\"];", "6 [label=\"Array\"];", "6 -> 7 [label=\"0\"];", "7 [label=\":except\"];", "6 -> 8 [label=\"1\"];", "8 [label=\"Dataset\"];", "6 -> 9 [label=\"2\"];", "9 [label=\"nil\"];", "3 -> 10 [label=\"alias\"];", "10 [label=\":t1\"];"] end it "should handle ORDER" do dot(@ds.order(:a)).should == ["1 -> 2 [label=\"order\"];", "2 [label=\"Array\"];", "2 -> 3 [label=\"0\"];", "3 [label=\":a\"];"] end it "should handle LIMIT and OFFSET" do dot(@ds.limit(1, 2)).should == ["1 -> 2 [label=\"limit\"];", "2 [label=\"1\"];", "1 -> 3 [label=\"offset\"];", "3 [label=\"2\"];"] end it "should handle FOR UPDATE" do dot(@ds.for_update).should == ["1 -> 2 [label=\"lock\"];", "2 [label=\":update\"];"] end it "should handle LiteralStrings" do dot(@ds.filter('a')).should == ["1 -> 2 [label=\"where\"];", "2 [label=\"\\\"(a)\\\".lit\"];"] end it "should handle true, false, nil" do dot(@ds.select(true, false, nil)).should == ["1 -> 2 [label=\"select\"];", "2 [label=\"Array\"];", "2 -> 3 [label=\"0\"];", "3 [label=\"true\"];", "2 -> 4 [label=\"1\"];", "4 [label=\"false\"];", "2 -> 5 [label=\"2\"];", "5 [label=\"nil\"];"] end it "should handle SQL::ComplexExpressions" do dot(@ds.filter(:a=>:b)).should == ["1 -> 2 [label=\"where\"];", "2 
[label=\"ComplexExpression: =\"];", "2 -> 3 [label=\"0\"];", "3 [label=\":a\"];", "2 -> 4 [label=\"1\"];", "4 [label=\":b\"];"] end it "should handle SQL::Identifiers" do dot(@ds.select{a}).should == ["1 -> 2 [label=\"select\"];", "2 [label=\"Array\"];", "2 -> 3 [label=\"0\"];", "3 [label=\"Identifier\"];", "3 -> 4 [label=\"value\"];", "4 [label=\":a\"];"] end it "should handle SQL::QualifiedIdentifiers" do dot(@ds.select{a__b}).should == ["1 -> 2 [label=\"select\"];", "2 [label=\"Array\"];", "2 -> 3 [label=\"0\"];", "3 [label=\"QualifiedIdentifier\"];", "3 -> 4 [label=\"table\"];", "4 [label=\"\\\"a\\\"\"];", "3 -> 5 [label=\"column\"];", "5 [label=\"\\\"b\\\"\"];"] end it "should handle SQL::OrderedExpressions" do dot(@ds.order(Sequel.desc(:a, :nulls=>:last))).should == ["1 -> 2 [label=\"order\"];", "2 [label=\"Array\"];", "2 -> 3 [label=\"0\"];", "3 [label=\"OrderedExpression: DESC NULLS LAST\"];", "3 -> 4 [label=\"expression\"];", "4 [label=\":a\"];"] end it "should handle SQL::AliasedExpressions" do dot(@ds.from(Sequel.as(:a, :b))).should == ["1 -> 2 [label=\"from\"];", "2 [label=\"Array\"];", "2 -> 3 [label=\"0\"];", "3 [label=\"AliasedExpression\"];", "3 -> 4 [label=\"expression\"];", "4 [label=\":a\"];", "3 -> 5 [label=\"alias\"];", "5 [label=\":b\"];"] end it "should handle SQL::CaseExpressions" do dot(@ds.select(Sequel.case({:a=>:b}, :c, :d))).should == ["1 -> 2 [label=\"select\"];", "2 [label=\"Array\"];", "2 -> 3 [label=\"0\"];", "3 [label=\"CaseExpression\"];", "3 -> 4 [label=\"expression\"];", "4 [label=\":d\"];", "3 -> 5 [label=\"conditions\"];", "5 [label=\"Array\"];", "5 -> 6 [label=\"0\"];", "6 [label=\"Array\"];", "6 -> 7 [label=\"0\"];", "7 [label=\":a\"];", "6 -> 8 [label=\"1\"];", "8 [label=\":b\"];", "3 -> 9 [label=\"default\"];", "9 [label=\":c\"];"] end it "should handle SQL::Cast" do dot(@ds.select(Sequel.cast(:a, Integer))).should == ["1 -> 2 [label=\"select\"];", "2 [label=\"Array\"];", "2 -> 3 [label=\"0\"];", "3 [label=\"Cast\"];", "3 -> 4 [label=\"expr\"];", "4 [label=\":a\"];", "3 -> 5 [label=\"type\"];", "5 [label=\"Integer\"];"] end it "should handle SQL::Function" do dot(@ds.select{a(b)}).should == ["1 -> 2 [label=\"select\"];", "2 [label=\"Array\"];", "2 -> 3 [label=\"0\"];", "3 [label=\"Function: a\"];", "3 -> 4 [label=\"0\"];", "4 [label=\"Identifier\"];", "4 -> 5 [label=\"value\"];", "5 [label=\":b\"];"] end it "should handle SQL::Subscript" do dot(@ds.select(Sequel.subscript(:a, 1))).should == ["1 -> 2 [label=\"select\"];", "2 [label=\"Array\"];", "2 -> 3 [label=\"0\"];", "3 [label=\"Subscript\"];", "3 -> 4 [label=\"f\"];", "4 [label=\":a\"];", "3 -> 5 [label=\"sub\"];", "5 [label=\"Array\"];", "5 -> 6 [label=\"0\"];", "6 [label=\"1\"];"] end it "should handle SQL::WindowFunction" do dot(@ds.select{sum(:over, :partition=>(:a)){}}).should == ["1 -> 2 [label=\"select\"];", "2 [label=\"Array\"];", "2 -> 3 [label=\"0\"];", "3 [label=\"WindowFunction\"];", "3 -> 4 [label=\"function\"];", "4 [label=\"Function: sum\"];", "3 -> 5 [label=\"window\"];", "5 [label=\"Window\"];", "5 -> 6 [label=\"opts\"];", "6 [label=\"Hash\"];", "6 -> 7 [label=\"partition\"];", "7 [label=\":a\"];"] end it "should handle SQL::PlaceholderLiteralString" do dot(@ds.where("?", true)).should == ["1 -> 2 [label=\"where\"];", "2 [label=\"PlaceholderLiteralString: \\\"(?)\\\"\"];", "2 -> 3 [label=\"args\"];", "3 [label=\"Array\"];", "3 -> 4 [label=\"0\"];", "4 [label=\"true\"];"] end it "should handle JOIN ON" do dot(@ds.from(:a).join(:d, :b=>:c)).should == ["1 -> 2 [label=\"from\"];", 
"2 [label=\"Array\"];", "2 -> 3 [label=\"0\"];", "3 [label=\":a\"];", "1 -> 4 [label=\"join\"];", "4 [label=\"Array\"];", "4 -> 5 [label=\"0\"];", "5 [label=\"INNER JOIN ON\"];", "5 -> 6 [label=\"table\"];", "6 [label=\":d\"];", "5 -> 7 [label=\"on\"];", "7 [label=\"ComplexExpression: =\"];", "7 -> 8 [label=\"0\"];", "8 [label=\"QualifiedIdentifier\"];", "8 -> 9 [label=\"table\"];", "9 [label=\"\\\"d\\\"\"];", "8 -> 10 [label=\"column\"];", "10 [label=\"\\\"b\\\"\"];", "7 -> 11 [label=\"1\"];", "11 [label=\"QualifiedIdentifier\"];", "11 -> 12 [label=\"table\"];", "12 [label=\"\\\"a\\\"\"];", "11 -> 13 [label=\"column\"];", "13 [label=\"\\\"c\\\"\"];"] end it "should handle JOIN USING" do dot(@ds.from(:a).join(:d, [:c], :table_alias=>:c)).should == ["1 -> 2 [label=\"from\"];", "2 [label=\"Array\"];", "2 -> 3 [label=\"0\"];", "3 [label=\":a\"];", "1 -> 4 [label=\"join\"];", "4 [label=\"Array\"];", "4 -> 5 [label=\"0\"];", "5 [label=\"INNER JOIN USING\"];", "5 -> 6 [label=\"table\"];", "6 [label=\":d\"];", "5 -> 7 [label=\"alias\"];", "7 [label=\":c\"];", "5 -> 8 [label=\"using\"];", "8 [label=\"Array\"];", "8 -> 9 [label=\"0\"];", "9 [label=\":c\"];"] end it "should handle other types" do o = Object.new def o.inspect "blah" end dot(@ds.select(o)).should == ["1 -> 2 [label=\"select\"];", "2 [label=\"Array\"];", "2 -> 3 [label=\"0\"];", "3 [label=\"Unhandled: blah\"];"] end end �������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/spec/extensions/touch_spec.rb�����������������������������������������������������0000664�0000000�0000000�00000021305�12201565355�0021611�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "Touch plugin" do before do @c = Class.new(Sequel::Model) p = proc{def touch_instance_value; touch_association_value; end} @Artist = Class.new(@c, &p).set_dataset(:artists) @Album = Class.new(@c, &p).set_dataset(:albums) @Artist.columns :id, :updated_at, :modified_on @Artist.one_to_many :albums, :class=>@Album, :key=>:artist_id @Album.columns :id, :updated_at, :modified_on, :artist_id, :original_album_id @Album.one_to_many :followup_albums, :class=>@Album, :key=>:original_album_id @Album.many_to_one :artist, :class=>@Artist @a = @Artist.load(:id=>1) DB.reset end specify "should default to using Time.now when setting the column values for model instances" do c = Class.new(Sequel::Model).set_dataset(:a) c.plugin :touch c.columns :id, :updated_at c.load(:id=>1).touch DB.sqls.first.should =~ /UPDATE a SET updated_at = '[-0-9 :.]+' WHERE \(id = 1\)/ end specify "should allow #touch instance method for updating the updated_at column" do @Artist.plugin :touch @a.touch DB.sqls.should == ["UPDATE artists SET updated_at = CURRENT_TIMESTAMP WHERE (id = 1)"] end specify "should have #touch take an argument for the column 
to touch" do @Artist.plugin :touch @a.touch(:modified_on) DB.sqls.should == ["UPDATE artists SET modified_on = CURRENT_TIMESTAMP WHERE (id = 1)"] end specify "should be able to specify the default column to touch in the plugin call using the :column option" do @Artist.plugin :touch, :column=>:modified_on @a.touch DB.sqls.should == ["UPDATE artists SET modified_on = CURRENT_TIMESTAMP WHERE (id = 1)"] end specify "should be able to specify the default column to touch using the touch_column model accessor" do @Artist.plugin :touch @Artist.touch_column = :modified_on @a.touch DB.sqls.should == ["UPDATE artists SET modified_on = CURRENT_TIMESTAMP WHERE (id = 1)"] end specify "should be able to specify the associations to touch in the plugin call using the :associations option" do @Artist.plugin :touch, :associations=>:albums @a.touch DB.sqls.should == ["UPDATE artists SET updated_at = CURRENT_TIMESTAMP WHERE (id = 1)", "UPDATE albums SET updated_at = CURRENT_TIMESTAMP WHERE (albums.artist_id = 1)"] end specify "should be able to give an array to the :associations option specifying multiple associations" do @Album.plugin :touch, :associations=>[:artist, :followup_albums] @Album.load(:id=>4, :artist_id=>1).touch sqls = DB.sqls sqls.shift.should == "UPDATE albums SET updated_at = CURRENT_TIMESTAMP WHERE (id = 4)" sqls.sort.should == ["UPDATE albums SET updated_at = CURRENT_TIMESTAMP WHERE (albums.original_album_id = 4)", "UPDATE artists SET updated_at = CURRENT_TIMESTAMP WHERE (artists.id = 1)"] end specify "should be able to give a hash to the :associations option specifying the column to use for each association" do @Artist.plugin :touch, :associations=>{:albums=>:modified_on} @a.touch DB.sqls.should == ["UPDATE artists SET updated_at = CURRENT_TIMESTAMP WHERE (id = 1)", "UPDATE albums SET modified_on = CURRENT_TIMESTAMP WHERE (albums.artist_id = 1)"] end specify "should default to using the touch_column as the default touch column for associations" do @Artist.plugin :touch, :column=>:modified_on, :associations=>:albums @a.touch DB.sqls.should == ["UPDATE artists SET modified_on = CURRENT_TIMESTAMP WHERE (id = 1)", "UPDATE albums SET modified_on = CURRENT_TIMESTAMP WHERE (albums.artist_id = 1)"] end specify "should allow the mixed use of symbols and hashes inside an array for the :associations option" do @Album.plugin :touch, :associations=>[:artist, {:followup_albums=>:modified_on}] @Album.load(:id=>4, :artist_id=>1).touch sqls = DB.sqls sqls.shift.should == "UPDATE albums SET updated_at = CURRENT_TIMESTAMP WHERE (id = 4)" sqls.sort.should == ["UPDATE albums SET modified_on = CURRENT_TIMESTAMP WHERE (albums.original_album_id = 4)", "UPDATE artists SET updated_at = CURRENT_TIMESTAMP WHERE (artists.id = 1)"] end specify "should be able to specify the associations to touch via a touch_associations_method" do @Album.plugin :touch @Album.touch_associations(:artist, {:followup_albums=>:modified_on}) @Album.load(:id=>4, :artist_id=>1).touch sqls = DB.sqls sqls.shift.should == "UPDATE albums SET updated_at = CURRENT_TIMESTAMP WHERE (id = 4)" sqls.sort.should == ["UPDATE albums SET modified_on = CURRENT_TIMESTAMP WHERE (albums.original_album_id = 4)", "UPDATE artists SET updated_at = CURRENT_TIMESTAMP WHERE (artists.id = 1)"] end specify "should touch associated objects when destroying an object" do @Album.plugin :touch @Album.touch_associations(:artist, {:followup_albums=>:modified_on}) @Album.load(:id=>4, :artist_id=>1).destroy sqls = DB.sqls sqls.shift.should == "DELETE FROM albums WHERE id = 4" 
sqls.sort.should == ["UPDATE albums SET modified_on = CURRENT_TIMESTAMP WHERE (albums.original_album_id = 4)", "UPDATE artists SET updated_at = CURRENT_TIMESTAMP WHERE (artists.id = 1)"] end specify "should be able to touch many_to_one associations" do @Album.plugin :touch, :associations=>:artist @Album.load(:id=>3, :artist_id=>4).touch DB.sqls.should == ["UPDATE albums SET updated_at = CURRENT_TIMESTAMP WHERE (id = 3)", "UPDATE artists SET updated_at = CURRENT_TIMESTAMP WHERE (artists.id = 4)"] end specify "should be able to touch one_to_one associations" do @Artist.one_to_one :album, :class=>@Album, :key=>:artist_id @Artist.plugin :touch, :associations=>:album @a.touch DB.sqls.should == ["UPDATE artists SET updated_at = CURRENT_TIMESTAMP WHERE (id = 1)", "UPDATE albums SET updated_at = CURRENT_TIMESTAMP WHERE (albums.artist_id = 1)"] end specify "should be able to touch many_to_many associations" do @Artist.many_to_many :albums, :class=>@Album, :left_key=>:artist_id, :join_table=>:aa @Artist.plugin :touch, :associations=>:albums @a.touch DB.sqls.should == ["UPDATE artists SET updated_at = CURRENT_TIMESTAMP WHERE (id = 1)", "SELECT albums.* FROM albums INNER JOIN aa ON ((aa.album_id = albums.id) AND (aa.artist_id = 1))", "UPDATE albums SET updated_at = CURRENT_TIMESTAMP WHERE (id = 1)"] end specify "should be able to touch many_through_many associations" do @Artist.plugin :many_through_many @Artist.many_through_many :albums, [[:aa, :artist_id, :album_id]], :class=>@Album @Artist.plugin :touch, :associations=>:albums @a.touch DB.sqls.should == ["UPDATE artists SET updated_at = CURRENT_TIMESTAMP WHERE (id = 1)", "SELECT albums.* FROM albums INNER JOIN aa ON ((aa.album_id = albums.id) AND (aa.artist_id = 1))", "UPDATE albums SET updated_at = CURRENT_TIMESTAMP WHERE (id = 1)"] end specify "should handle touching many_to_one associations with no associated object" do @Album.plugin :touch, :associations=>:artist @Album.load(:id=>3, :artist_id=>nil).touch DB.sqls.should == ["UPDATE albums SET updated_at = CURRENT_TIMESTAMP WHERE (id = 3)"] end specify "should not update a column that doesn't exist" do @Album.plugin :touch, :column=>:x a = @Album.load(:id=>1) a.touch DB.sqls.should == [] a.artist_id = 1 a.touch DB.sqls.should == ['UPDATE albums SET artist_id = 1 WHERE (id = 1)'] end specify "should raise an error if given a column argument in touch that doesn't exist" do @Artist.plugin :touch proc{@a.touch(:x)}.should raise_error(Sequel::Error) end specify "should raise an Error when a nonexistent association is given" do @Artist.plugin :touch proc{@Artist.plugin :touch, :associations=>:blah}.should raise_error(Sequel::Error) end specify "should work correctly in subclasses" do @Artist.plugin :touch c1 = Class.new(@Artist) c1.load(:id=>4).touch DB.sqls.should == ["UPDATE artists SET updated_at = CURRENT_TIMESTAMP WHERE (id = 4)"] c1.touch_column = :modified_on c1.touch_associations :albums c1.load(:id=>1).touch DB.sqls.should == ["UPDATE artists SET modified_on = CURRENT_TIMESTAMP WHERE (id = 1)", "UPDATE albums SET modified_on = CURRENT_TIMESTAMP WHERE (albums.artist_id = 1)"] @a.touch DB.sqls.should == ["UPDATE artists SET updated_at = CURRENT_TIMESTAMP WHERE (id = 1)"] @Artist.plugin :touch, :column=>:modified_on, :associations=>:albums c2 = Class.new(@Artist) c2.load(:id=>4).touch DB.sqls.should == ["UPDATE artists SET modified_on = CURRENT_TIMESTAMP WHERE (id = 4)", "UPDATE albums SET modified_on = CURRENT_TIMESTAMP WHERE (albums.artist_id = 4)"] end end
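The touch plugin specs above cover the :column and :associations options end to end. For orientation, here is a minimal usage sketch; the Artist model, its albums association, and the modified_on column are illustrative assumptions, not part of the specs:

class Artist < Sequel::Model
  # Touch updated_at by default, and cascade to the albums association
  plugin :touch, :column=>:updated_at, :associations=>:albums
end

artist = Artist[1]
artist.touch                # sets updated_at to CURRENT_TIMESTAMP on the artist and its albums
artist.touch(:modified_on)  # touches an explicitly named column instead of the default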
ruby-sequel-4.1.1/spec/extensions/tree_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe Sequel::Model, "tree plugin" do def klass(opts={}) @db = DB c = Class.new(Sequel::Model(@db[:nodes])) c.class_eval do def self.name; 'Node'; end columns :id, :name, :parent_id, :i, :pi plugin :tree, opts end c end before do @c = klass @ds = @c.dataset @o = @c.load(:id=>2, :parent_id=>1, :name=>'AA', :i=>3, :pi=>4) @db.reset end it "should define the correct associations" do @c.associations.sort_by{|x| x.to_s}.should == [:children, :parent] end it "should define the correct associations when giving options" do klass(:children=>{:name=>:cs}, :parent=>{:name=>:p}).associations.sort_by{|x| x.to_s}.should == [:cs, :p] end it "should use the correct SQL for lazy associations" do @o.parent_dataset.sql.should == 'SELECT * FROM nodes WHERE (nodes.id = 1) LIMIT 1' @o.children_dataset.sql.should == 'SELECT * FROM nodes WHERE (nodes.parent_id = 2)' end it "should use the correct SQL for lazy associations when giving options" do o = klass(:primary_key=>:i, :key=>:pi, :order=>:name, :children=>{:name=>:cs}, :parent=>{:name=>:p}).load(:id=>2, :parent_id=>1, :name=>'AA', :i=>3, :pi=>4) o.p_dataset.sql.should == 'SELECT * FROM nodes WHERE (nodes.i = 4) ORDER BY name LIMIT 1' o.cs_dataset.sql.should == 'SELECT * FROM nodes WHERE (nodes.pi = 3) ORDER BY name' end it "should have parent_column give the symbol of the parent column" do @c.parent_column.should == :parent_id klass(:key=>:p_id).parent_column.should == :p_id end it "should have tree_order give the order of the association" do @c.tree_order.should == nil klass(:order=>:name).tree_order.should == :name klass(:order=>[:parent_id, :name]).tree_order.should == [:parent_id, :name] end it "should work correctly in subclasses" do o = Class.new(klass(:primary_key=>:i, :key=>:pi, :order=>:name, :children=>{:name=>:cs}, :parent=>{:name=>:p})).load(:id=>2, :parent_id=>1, :name=>'AA', :i=>3, :pi=>4) o.p_dataset.sql.should == 'SELECT * FROM nodes WHERE (nodes.i = 4) ORDER BY name LIMIT 1' o.cs_dataset.sql.should == 'SELECT * FROM nodes WHERE (nodes.pi = 3) ORDER BY name' end it "should have roots return an array of the tree's roots" do @ds._fetch = [{:id=>1, :parent_id=>nil, :name=>'r'}] @c.roots.should == [@c.load(:id=>1, :parent_id=>nil, :name=>'r')] @db.sqls.should == ["SELECT * FROM nodes WHERE (parent_id IS NULL)"] end it "should have roots_dataset be a dataset representing the tree's roots" do @c.roots_dataset.sql.should == "SELECT * FROM nodes WHERE (parent_id IS NULL)" end it "should have ancestors return the ancestors of the current node" do @ds._fetch = [[{:id=>1, :parent_id=>5, :name=>'r'}], [{:id=>5, :parent_id=>nil, :name=>'r2'}]] @o.ancestors.should ==
[@c.load(:id=>1, :parent_id=>5, :name=>'r'), @c.load(:id=>5, :parent_id=>nil, :name=>'r2')] @db.sqls.should == ["SELECT * FROM nodes WHERE id = 1", "SELECT * FROM nodes WHERE id = 5"] end it "should have descendants return the descendants of the current node" do @ds._fetch = [[{:id=>3, :parent_id=>2, :name=>'r'}, {:id=>4, :parent_id=>2, :name=>'r2'}], [{:id=>5, :parent_id=>4, :name=>'r3'}], []] @o.descendants.should == [@c.load(:id=>3, :parent_id=>2, :name=>'r'), @c.load(:id=>4, :parent_id=>2, :name=>'r2'), @c.load(:id=>5, :parent_id=>4, :name=>'r3')] @db.sqls.should == ["SELECT * FROM nodes WHERE (nodes.parent_id = 2)", "SELECT * FROM nodes WHERE (nodes.parent_id = 3)", "SELECT * FROM nodes WHERE (nodes.parent_id = 5)", "SELECT * FROM nodes WHERE (nodes.parent_id = 4)"] end it "should have root return the root of the current node" do @ds._fetch = [[{:id=>1, :parent_id=>5, :name=>'r'}], [{:id=>5, :parent_id=>nil, :name=>'r2'}]] @o.root.should == @c.load(:id=>5, :parent_id=>nil, :name=>'r2') @db.sqls.should == ["SELECT * FROM nodes WHERE id = 1", "SELECT * FROM nodes WHERE id = 5"] end it "should have root? return true for a root node and false for a child node" do @c.load(:parent_id => nil).root?.should be_true @c.load(:parent_id => 1).root?.should be_false end it "should have root? return false for a new node" do @c.new.root?.should be_false end it "should have self_and_siblings return the children of the current node's parent" do @ds._fetch = [[{:id=>1, :parent_id=>3, :name=>'r'}], [{:id=>7, :parent_id=>1, :name=>'r2'}, @o.values.dup]] @o.self_and_siblings.should == [@c.load(:id=>7, :parent_id=>1, :name=>'r2'), @o] @db.sqls.should == ["SELECT * FROM nodes WHERE id = 1", "SELECT * FROM nodes WHERE (nodes.parent_id = 1)"] end it "should have siblings return the children of the current node's parent, except for the current node" do @ds._fetch = [[{:id=>1, :parent_id=>3, :name=>'r'}], [{:id=>7, :parent_id=>1, :name=>'r2'}, @o.values.dup]] @o.siblings.should == [@c.load(:id=>7, :parent_id=>1, :name=>'r2')] @db.sqls.should == ["SELECT * FROM nodes WHERE id = 1", "SELECT * FROM nodes WHERE (nodes.parent_id = 1)"] end describe ":single_root option" do before do @c = klass(:single_root => true) end it "should have root class method return the root" do @c.dataset._fetch = [{:id=>1, :parent_id=>nil, :name=>'r'}] @c.root.should == @c.load(:id=>1, :parent_id=>nil, :name=>'r') end it "prevents creating a second root" do @c.dataset._fetch = [{:id=>1, :parent_id=>nil, :name=>'r'}] lambda { @c.create }.should raise_error(Sequel::Plugins::Tree::TreeMultipleRootError) end it "errors when promoting an existing record to a second root" do @c.dataset._fetch = [{:id=>1, :parent_id=>nil, :name=>'r'}] n = @c.load(:id => 2, :parent_id => 1) lambda { n.update(:parent_id => nil) }.should raise_error(Sequel::Plugins::Tree::TreeMultipleRootError) end it "allows updating existing root" do @c.dataset._fetch = [{:id=>1, :parent_id=>nil, :name=>'r'}] lambda { @c.root.update(:name => 'fdsa') }.should_not raise_error end end end
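The tree plugin specs above exercise the parent/children associations and the traversal helpers. A minimal usage sketch, assuming a hypothetical Node model backed by a nodes table with id, parent_id, and name columns:

class Node < Sequel::Model
  plugin :tree, :order=>:name
end

Node.roots          # nodes WHERE (parent_id IS NULL)
node = Node[2]
node.children       # nodes WHERE (parent_id = 2)
node.ancestors      # follows parent_id links up to a root, one query per level
node.descendants    # recursively loads children, one query per node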
ruby-sequel-4.1.1/spec/extensions/typecast_on_load_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe Sequel::Model, "TypecastOnLoad plugin" do before do @db = Sequel.mock(:fetch=>{:id=>1, :b=>"1", :y=>"0"}, :columns=>[:id, :b, :y], :numrows=>1) def @db.supports_schema_parsing?() true end def @db.schema(*args) [[:id, {}], [:y, {:type=>:boolean, :db_type=>'tinyint(1)'}], [:b, {:type=>:integer, :db_type=>'integer'}]] end @c = Class.new(Sequel::Model(@db[:items])) do attr_accessor :bset def b=(x) self.bset = true super end end end specify "should call setter method with value when loading the object, for all given columns" do @c.plugin :typecast_on_load, :b o = @c.load(:id=>1, :b=>"1", :y=>"0") o.values.should == {:id=>1, :b=>1, :y=>"0"} o.bset.should == true end specify "should call setter method with value when reloading the object, for all given columns" do @c.plugin :typecast_on_load, :b o = @c.load(:id=>1, :b=>"1", :y=>"0") o.refresh o.values.should == {:id=>1, :b=>1, :y=>"0"} o.bset.should == true end specify "should call setter method with value when automatically reloading the object on creation" do @c.plugin :typecast_on_load, :b o = @c.new(:b=>"1", :y=>"0") o.save.values.should == {:id=>1, :b=>1, :y=>"0"} o.bset.should == true end specify "should call setter method with value when automatically reloading the object on creation via insert_select" do @c.plugin :typecast_on_load, :b @c.dataset.meta_def(:insert_select){|h| insert(h); first} o = @c.new(:b=>"1", :y=>"0") o.save.values.should == {:id=>1, :b=>1, :y=>"0"} o.bset.should == true end specify "should allow setting columns separately via add_typecast_on_load_columns" do @c.plugin :typecast_on_load @c.load(:id=>1, :b=>"1", :y=>"0").values.should == {:id=>1, :b=>"1", :y=>"0"} @c.add_typecast_on_load_columns :b @c.load(:id=>1, :b=>"1", :y=>"0").values.should == {:id=>1, :b=>1, :y=>"0"} @c.add_typecast_on_load_columns :y @c.load(:id=>1, :b=>"1", :y=>"0").values.should == {:id=>1, :b=>1, :y=>false} end specify "should work with subclasses" do @c.plugin :typecast_on_load @c.load(:id=>1, :b=>"1", :y=>"0").values.should == {:id=>1, :b=>"1", :y=>"0"} c1 = Class.new(@c) @c.add_typecast_on_load_columns :b @c.load(:id=>1, :b=>"1", :y=>"0").values.should == {:id=>1, :b=>1, :y=>"0"} c1.load(:id=>1, :b=>"1", :y=>"0").values.should == {:id=>1, :b=>"1", :y=>"0"} c2 = Class.new(@c) @c.add_typecast_on_load_columns :y @c.load(:id=>1, :b=>"1", :y=>"0").values.should == {:id=>1, :b=>1, :y=>false} c2.load(:id=>1, :b=>"1", :y=>"0").values.should == {:id=>1, :b=>1, :y=>"0"} c1.add_typecast_on_load_columns :y c1.load(:id=>1, :b=>"1",
:y=>"0").values.should == {:id=>1, :b=>"1", :y=>false} end specify "should not mark the object as modified" do @c.plugin :typecast_on_load, :b @c.load(:id=>1, :b=>"1", :y=>"0").modified?.should == false end end �����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/spec/extensions/unlimited_update_spec.rb������������������������������������������0000664�0000000�0000000�00000001102�12201565355�0024014�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "Sequel::Plugins::UnlimitedUpdate" do before do @db = Sequel.mock(:host=>'mysql', :numrows=>1) @c = Class.new(Sequel::Model(@db[:test])) @c.columns :id, :name @o = @c.load(:id=>1, :name=>'a') @db.sqls end it "should remove limit from update dataset" do @o.save @db.sqls.should == ["UPDATE test SET name = 'a' WHERE (id = 1) LIMIT 1"] @c.plugin :unlimited_update @o.save @db.sqls.should == ["UPDATE test SET name = 'a' WHERE (id = 1)"] end end ��������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/spec/extensions/update_primary_key_spec.rb����������������������������������������0000664�0000000�0000000�00000010250�12201565355�0024361�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "Sequel::Plugins::UpdatePrimaryKey" do before do @c = Class.new(Sequel::Model(:a)) @c.plugin :update_primary_key @c.columns :a, :b @c.set_primary_key :a @c.unrestrict_primary_key @ds = @c.dataset DB.reset end specify "should handle regular updates" do @ds._fetch = [[{:a=>1, :b=>3}], [{:a=>1, :b=>4}], [{:a=>1, :b=>4}], [{:a=>1, :b=>5}], [{:a=>1, :b=>5}], [{:a=>1, :b=>6}], [{:a=>1, :b=>6}]] @c.first.update(:b=>4) @c.all.should == [@c.load(:a=>1, :b=>4)] DB.sqls.should == ["SELECT * FROM a LIMIT 1", "UPDATE a SET b = 4 WHERE (a = 1)", "SELECT * FROM a"] @c.first.set(:b=>5).save @c.all.should == [@c.load(:a=>1, :b=>5)] DB.sqls.should == ["SELECT * FROM a LIMIT 1", "UPDATE a SET b = 5 WHERE (a = 1)", "SELECT * FROM a"] 
@c.first.set(:b=>6).save(:columns=>:b) @c.all.should == [@c.load(:a=>1, :b=>6)] DB.sqls.should == ["SELECT * FROM a LIMIT 1", "UPDATE a SET b = 6 WHERE (a = 1)", "SELECT * FROM a"] end specify "should handle updating the primary key field with another field" do @ds._fetch = [[{:a=>1, :b=>3}], [{:a=>2, :b=>4}]] @c.first.update(:a=>2, :b=>4) @c.all.should == [@c.load(:a=>2, :b=>4)] sqls = DB.sqls ["UPDATE a SET a = 2, b = 4 WHERE (a = 1)", "UPDATE a SET b = 4, a = 2 WHERE (a = 1)"].should include(sqls.slice!(1)) sqls.should == ["SELECT * FROM a LIMIT 1", "SELECT * FROM a"] end specify "should handle updating just the primary key field when saving changes" do @ds._fetch = [[{:a=>1, :b=>3}], [{:a=>2, :b=>3}], [{:a=>2, :b=>3}], [{:a=>3, :b=>3}]] @c.first.update(:a=>2) @c.all.should == [@c.load(:a=>2, :b=>3)] DB.sqls.should == ["SELECT * FROM a LIMIT 1", "UPDATE a SET a = 2 WHERE (a = 1)", "SELECT * FROM a"] @c.first.set(:a=>3).save(:columns=>:a) @c.all.should == [@c.load(:a=>3, :b=>3)] DB.sqls.should == ["SELECT * FROM a LIMIT 1", "UPDATE a SET a = 3 WHERE (a = 2)", "SELECT * FROM a"] end specify "should handle saving after modifying the primary key field with another field" do @ds._fetch = [[{:a=>1, :b=>3}], [{:a=>2, :b=>4}]] @c.first.set(:a=>2, :b=>4).save @c.all.should == [@c.load(:a=>2, :b=>4)] sqls = DB.sqls ["UPDATE a SET a = 2, b = 4 WHERE (a = 1)", "UPDATE a SET b = 4, a = 2 WHERE (a = 1)"].should include(sqls.slice!(1)) sqls.should == ["SELECT * FROM a LIMIT 1", "SELECT * FROM a"] end specify "should handle saving after modifying just the primary key field" do @ds._fetch = [[{:a=>1, :b=>3}], [{:a=>2, :b=>3}]] @c.first.set(:a=>2).save @c.all.should == [@c.load(:a=>2, :b=>3)] sqls = DB.sqls ["UPDATE a SET a = 2, b = 3 WHERE (a = 1)", "UPDATE a SET b = 3, a = 2 WHERE (a = 1)"].should include(sqls.slice!(1)) sqls.should == ["SELECT * FROM a LIMIT 1", "SELECT * FROM a"] end specify "should handle saving after updating the primary key" do @ds._fetch = [[{:a=>1, :b=>3}], [{:a=>2, :b=>5}]] @c.first.update(:a=>2).update(:b=>4).set(:b=>5).save @c.all.should == [@c.load(:a=>2, :b=>5)] DB.sqls.should == ["SELECT * FROM a LIMIT 1", "UPDATE a SET a = 2 WHERE (a = 1)", "UPDATE a SET b = 4 WHERE (a = 2)", "UPDATE a SET b = 5 WHERE (a = 2)", "SELECT * FROM a"] end specify "should work correctly when using the prepared_statements plugin" do @c.plugin :prepared_statements @ds._fetch = [[{:a=>1, :b=>3}], [{:a=>2, :b=>4}]] o = @c.first o.update(:a=>2, :b=>4) @c.all.should == [@c.load(:a=>2, :b=>4)] sqls = DB.sqls ["UPDATE a SET a = 2, b = 4 WHERE (a = 1)", "UPDATE a SET b = 4, a = 2 WHERE (a = 1)"].should include(sqls.slice!(1)) sqls.should == ["SELECT * FROM a LIMIT 1", "SELECT * FROM a"] o.delete end specify "should clear the associations cache of non-many_to_one associations when changing the primary key" do @c.one_to_many :cs, :class=>@c @c.many_to_one :c, :class=>@c o = @c.new(:a=>1) o.associations[:cs] = @c.new o.associations[:c] = o2 = @c.new o.a = 2 o.associations.should == {:c=>o2} end specify "should handle frozen instances" do o = @c.new o.a = 1 o.freeze o.pk_hash.should == {:a=>1} end end 
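The update_primary_key plugin specs above show that an instance keeps targeting the key value it was loaded with until the change is saved. A minimal usage sketch, assuming a hypothetical Item model; unrestrict_primary_key is needed so the key can be set through mass assignment:

class Item < Sequel::Model
  plugin :update_primary_key
  unrestrict_primary_key
end

item = Item[1]
item.update(:id=>2)      # UPDATE items SET id = 2 WHERE (id = 1)
item.update(:name=>'x')  # later saves target the new key: WHERE (id = 2)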
ruby-sequel-4.1.1/spec/extensions/validation_class_methods_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") model_class = proc do |klass, &block| c = Class.new(klass) c.plugin :validation_class_methods c.class_eval(&block) if block c end describe Sequel::Model do before do @c = model_class.call Sequel::Model do def self.validates_coolness_of(attr) validates_each(attr) {|o, a, v| o.errors.add(a, 'is not cool') if v != :cool} end end end specify "should respond to validations, has_validations?, and validation_reflections" do @c.should respond_to(:validations) @c.should respond_to(:has_validations?) @c.should respond_to(:validation_reflections) end specify "should be able to reflect on validations" do @c.validation_reflections.should == {} @c.validates_acceptance_of(:a) @c.validation_reflections.should == {:a=>[[:acceptance, {:tag=>:acceptance, :message=>"is not accepted", :allow_nil=>true, :accept=>"1"}]]} @c.validates_presence_of(:a) @c.validation_reflections[:a].length.should == 2 @c.validation_reflections[:a].last.should == [:presence, {:tag=>:presence, :message=>"is not present"}] end specify "should handle validation reflections correctly when subclassing" do @c.validates_acceptance_of(:a) c = Class.new(@c) c.validation_reflections.map{|k,v| k}.should == [:a] c.validates_presence_of(:a) @c.validation_reflections.should == {:a=>[[:acceptance, {:tag=>:acceptance, :message=>"is not accepted", :allow_nil=>true, :accept=>"1"}]]} c.validation_reflections[:a].last.should == [:presence, {:tag=>:presence, :message=>"is not present"}] end specify "should accept validation definitions using validates_each" do @c.validates_each(:xx, :yy) {|o, a, v| o.errors.add(a, 'too low') if v < 50} o = @c.new o.should_receive(:xx).once.and_return(40) o.should_receive(:yy).once.and_return(60) o.valid?.should == false o.errors.full_messages.should == ['xx too low'] end specify "should return true/false for has_validations?" do @c.has_validations?.should == false @c.validates_each(:xx) {1} @c.has_validations?.should == true end specify "should validate multiple attributes at once" do o = @c.new def o.xx 1 end def o.yy 2 end vals = nil atts = nil @c.validates_each([:xx, :yy]){|obj,a,v| atts=a; vals=v} o.valid? vals.should == [1,2] atts.should == [:xx, :yy] end specify "should respect allow_missing option when using multiple attributes" do o = @c.new def o.xx self[:xx] end def o.yy self[:yy] end vals = nil atts = nil @c.validates_each([:xx, :yy], :allow_missing=>true){|obj,a,v| atts=a; vals=v} o.values[:xx] = 1 o.valid? vals.should == [1,nil] atts.should == [:xx, :yy] vals = nil atts = nil o.values.clear o.values[:yy] = 2 o.valid?
vals.should == [nil, 2] atts.should == [:xx, :yy] vals = nil atts = nil o.values.clear o.valid?.should == true vals.should == nil atts.should == nil end specify "should overwrite existing validation with the same tag and attribute" do @c.validates_each(:xx, :xx, :tag=>:low) {|o, a, v| o.xxx; o.errors.add(a, 'too low') if v < 50} @c.validates_each(:yy, :yy) {|o, a, v| o.yyy; o.errors.add(a, 'too low') if v < 50} @c.validates_presence_of(:zz, :zz) @c.validates_length_of(:aa, :aa, :tag=>:blah) o = @c.new def o.zz @a ||= 0 @a += 1 end def o.aa @b ||= 0 @b += 1 end o.should_receive(:xx).once.and_return(40) o.should_receive(:yy).once.and_return(60) o.should_receive(:xxx).once o.should_receive(:yyy).twice o.valid?.should == false o.zz.should == 2 o.aa.should == 2 o.errors.full_messages.should == ['xx too low'] end specify "should provide a validates method that takes block with validation definitions" do @c.validates do coolness_of :blah end @c.validations[:blah].should_not be_empty o = @c.new o.should_receive(:blah).once.and_return(nil) o.valid?.should == false o.errors.full_messages.should == ['blah is not cool'] end specify "should have the validates block have appropriate respond_to?" do c = nil @c.validates{c = respond_to?(:foo)} c.should be_false @c.validates{c = respond_to?(:length_of)} c.should be_true end if RUBY_VERSION >= '1.9' end describe Sequel::Model do before do @c = model_class.call Sequel::Model do columns :score validates_each :score do |o, a, v| o.errors.add(a, 'too low') if v < 87 end end @o = @c.new end specify "should supply a #valid? method that returns true if validations pass" do @o.score = 50 @o.should_not be_valid @o.score = 100 @o.should be_valid end specify "should provide an errors object" do @o.score = 100 @o.should be_valid @o.errors.should be_empty @o.score = 86 @o.should_not be_valid @o.errors[:score].should == ['too low'] @o.errors.on(:blah).should be_nil end end describe "Sequel::Plugins::ValidationClassMethods::ClassMethods::Generator" do before do @testit = testit = [] @c = model_class.call Sequel::Model do (class << self; self end).send(:define_method, :validates_blah) do testit << 1324 end end end specify "should instance_eval the block, sending everything to its receiver" do @c.validates do blah end @testit.should == [1324] end end describe Sequel::Model do before do @c = model_class.call Sequel::Model do columns :value def self.filter(*args) o = Object.new def o.count; 2; end o end def skip; false; end def dont_skip; true; end end @m = @c.new end specify "should validate acceptance_of" do @c.validates_acceptance_of :value @m.should be_valid @m.value = '1' @m.should be_valid end specify "should validate acceptance_of with accept" do @c.validates_acceptance_of :value, :accept => 'true' @m.value = '1' @m.should_not be_valid @m.value = 'true' @m.should be_valid end specify "should validate acceptance_of with allow_nil => false" do @c.validates_acceptance_of :value, :allow_nil => false @m.should_not be_valid end specify "should validate acceptance_of with allow_missing => true" do @c.validates_acceptance_of :value, :allow_missing => true @m.should be_valid end specify "should validate acceptance_of with allow_missing => true and allow_nil => false" do @c.validates_acceptance_of :value, :allow_missing => true, :allow_nil => false @m.should be_valid @m.value = nil @m.should_not be_valid end specify "should validate acceptance_of with if => true" do @c.validates_acceptance_of :value, :if => :dont_skip @m.value = '0' @m.should_not be_valid end specify "should 
validate acceptance_of with if => false" do @c.validates_acceptance_of :value, :if => :skip @m.value = '0' @m.should be_valid end specify "should validate acceptance_of with if proc that evaluates to true" do @c.validates_acceptance_of :value, :if => proc{true} @m.value = '0' @m.should_not be_valid end specify "should validate acceptance_of with if proc that evaluates to false" do @c.validates_acceptance_of :value, :if => proc{false} @m.value = '0' @m.should be_valid end specify "should raise an error if :if option is not a Symbol, Proc, or nil" do @c.validates_acceptance_of :value, :if => 1 @m.value = '0' proc{@m.valid?}.should raise_error(Sequel::Error) end specify "should validate confirmation_of" do @c.send(:attr_accessor, :value_confirmation) @c.validates_confirmation_of :value @m.value = 'blah' @m.should_not be_valid @m.value_confirmation = 'blah' @m.should be_valid end specify "should validate confirmation_of with if => true" do @c.send(:attr_accessor, :value_confirmation) @c.validates_confirmation_of :value, :if => :dont_skip @m.value = 'blah' @m.should_not be_valid end specify "should validate confirmation_of with if => false" do @c.send(:attr_accessor, :value_confirmation) @c.validates_confirmation_of :value, :if => :skip @m.value = 'blah' @m.should be_valid end specify "should validate confirmation_of with allow_missing => true" do @c.send(:attr_accessor, :value_confirmation) @c.validates_confirmation_of :value, :allow_missing => true @m.should be_valid @m.value_confirmation = 'blah' @m.should be_valid @m.value = nil @m.should_not be_valid end specify "should validate format_of" do @c.validates_format_of :value, :with => /.+_.+/ @m.value = 'abc_' @m.should_not be_valid @m.value = 'abc_def' @m.should be_valid end specify "should raise for validate_format_of without regexp" do proc {@c.validates_format_of :value}.should raise_error(ArgumentError) proc {@c.validates_format_of :value, :with => :blah}.should raise_error(ArgumentError) end specify "should validate format_of with if => true" do @c.validates_format_of :value, :with => /_/, :if => :dont_skip @m.value = 'a' @m.should_not be_valid end specify "should validate format_of with if => false" do @c.validates_format_of :value, :with => /_/, :if => :skip @m.value = 'a' @m.should be_valid end specify "should validate format_of with allow_missing => true" do @c.validates_format_of :value, :allow_missing => true, :with=>/./ @m.should be_valid @m.value = nil @m.should_not be_valid end specify "should validate length_of with maximum" do @c.validates_length_of :value, :maximum => 5 @m.should_not be_valid @m.value = '12345' @m.should be_valid @m.value = '123456' @m.should_not be_valid @m.errors[:value].should == ['is too long'] @m.value = nil @m.should_not be_valid @m.errors[:value].should == ['is not present'] end specify "should validate length_of with maximum using customized error messages" do @c.validates_length_of :value, :maximum => 5, :too_long=>'tl', :nil_message=>'np' @m.value = '123456' @m.should_not be_valid @m.errors[:value].should == ['tl'] @m.value = nil @m.should_not be_valid @m.errors[:value].should == ['np'] end specify "should validate length_of with minimum" do @c.validates_length_of :value, :minimum => 5 @m.should_not be_valid @m.value = '12345' @m.should be_valid @m.value = '1234' @m.should_not be_valid end specify "should validate length_of with within" do @c.validates_length_of :value, :within => 2..5 @m.should_not be_valid @m.value = '12345' @m.should be_valid @m.value = '1' @m.should_not be_valid @m.value =
'123456' @m.should_not be_valid end specify "should validate length_of with is" do @c.validates_length_of :value, :is => 3 @m.should_not be_valid @m.value = '123' @m.should be_valid @m.value = '12' @m.should_not be_valid @m.value = '1234' @m.should_not be_valid end specify "should validate length_of with allow_nil" do @c.validates_length_of :value, :is => 3, :allow_nil => true @m.should be_valid end specify "should validate length_of with if => true" do @c.validates_length_of :value, :is => 3, :if => :dont_skip @m.value = 'a' @m.should_not be_valid end specify "should validate length_of with if => false" do @c.validates_length_of :value, :is => 3, :if => :skip @m.value = 'a' @m.should be_valid end specify "should validate length_of with allow_missing => true" do @c.validates_length_of :value, :allow_missing => true, :minimum => 5 @m.should be_valid @m.value = nil @m.should_not be_valid end specify "should allow multiple calls to validates_length_of with different options without overwriting" do @c.validates_length_of :value, :maximum => 5 @c.validates_length_of :value, :minimum => 5 @m.should_not be_valid @m.value = '12345' @m.should be_valid @m.value = '123456' @m.should_not be_valid @m.value = '12345' @m.should be_valid @m.value = '1234' @m.should_not be_valid end specify "should validate numericality_of" do @c.validates_numericality_of :value @m.value = 'blah' @m.should_not be_valid @m.value = '123' @m.should be_valid @m.value = '123.1231' @m.should be_valid @m.value = '+1' @m.should be_valid @m.value = '-1' @m.should be_valid @m.value = '+1.123' @m.should be_valid @m.value = '-0.123' @m.should be_valid @m.value = '-0.123E10' @m.should be_valid @m.value = '32.123e10' @m.should be_valid @m.value = '+32.123E10' @m.should be_valid end specify "should validate numericality_of with only_integer" do @c.validates_numericality_of :value, :only_integer => true @m.value = 'blah' @m.should_not be_valid @m.value = '123' @m.should be_valid @m.value = '123.1231' @m.should_not be_valid end specify "should validate numericality_of with if => true" do @c.validates_numericality_of :value, :if => :dont_skip @m.value = 'a' @m.should_not be_valid end specify "should validate numericality_of with if => false" do @c.validates_numericality_of :value, :if => :skip @m.value = 'a' @m.should be_valid end specify "should validate numericality_of with allow_missing => true" do @c.validates_numericality_of :value, :allow_missing => true @m.should be_valid @m.value = nil @m.should_not be_valid end specify "should validate presence_of" do @c.validates_presence_of :value @m.should_not be_valid @m.value = '' @m.should_not be_valid @m.value = 1234 @m.should be_valid @m.value = nil @m.should_not be_valid @m.value = true @m.should be_valid @m.value = false @m.should be_valid end specify "should validate inclusion_of with an array" do @c.validates_inclusion_of :value, :in => [1,2] @m.should_not be_valid @m.value = 1 @m.should be_valid @m.value = 1.5 @m.should_not be_valid @m.value = 2 @m.should be_valid @m.value = 3 @m.should_not be_valid end specify "should validate inclusion_of with a range" do @c.validates_inclusion_of :value, :in => 1..4 @m.should_not be_valid @m.value = 1 @m.should be_valid @m.value = 1.5 @m.should be_valid @m.value = 0 @m.should_not be_valid @m.value = 5 @m.should_not be_valid end specify "should raise an error if inclusion_of doesn't receive a valid :in option" do lambda{@c.validates_inclusion_of :value}.should raise_error(ArgumentError)
lambda{@c.validates_inclusion_of :value, :in => 1}.should raise_error(ArgumentError) end specify "should have inclusion_of handle the :allow_nil option" do @c.validates_inclusion_of :value, :in => 1..4, :allow_nil => true @m.value = nil @m.should be_valid @m.value = 0 @m.should_not be_valid end specify "should validate presence_of with if => true" do @c.validates_presence_of :value, :if => :dont_skip @m.should_not be_valid end specify "should validate presence_of with if => false" do @c.validates_presence_of :value, :if => :skip @m.should be_valid end specify "should validate presence_of with allow_missing => true" do @c.validates_presence_of :value, :allow_missing => true @m.should be_valid @m.value = nil @m.should_not be_valid end specify "should validate uniqueness_of with if => true" do @c.validates_uniqueness_of :value, :if => :dont_skip @m.value = 'a' @m.should_not be_valid end specify "should validate uniqueness_of with if => false" do @c.validates_uniqueness_of :value, :if => :skip @m.value = 'a' @m.should be_valid end specify "should validate uniqueness_of with allow_missing => true" do @c.validates_uniqueness_of :value, :allow_missing => true @m.should be_valid @m.value = 1 @m.should_not be_valid end end describe "Superclass validations" do before do @c1 = model_class.call Sequel::Model do columns :value validates_length_of :value, :minimum => 5 end @c2 = Class.new(@c1) @c2.class_eval do columns :value validates_format_of :value, :with => /^[a-z]+$/ end end specify "should be checked when validating" do o = @c2.new o.value = 'ab' o.valid?.should == false o.errors.full_messages.should == ['value is too short'] o.value = '12' o.valid?.should == false o.errors.full_messages.should == ['value is too short', 'value is invalid'] o.value = 'abcde' o.valid?.should be_true end specify "should have skip_superclass_validations?
return whether superclass validations were skipped" do @c2.skip_superclass_validations?.should == nil @c2.skip_superclass_validations @c2.skip_superclass_validations?.should == true end specify "should be skipped if skip_superclass_validations is called" do @c2.skip_superclass_validations o = @c2.new o.value = 'ab' o.valid?.should be_true o.value = '12' o.valid?.should == false o.errors.full_messages.should == ['value is invalid'] o.value = 'abcde' o.valid?.should be_true end end describe ".validates with block" do specify "should support calling .each" do @c = model_class.call Sequel::Model do columns :vvv validates do each :vvv do |o, a, v| o.errors.add(a, "is less than zero") if v.to_i < 0 end end end o = @c.new o.vvv = 1 o.should be_valid o.vvv = -1 o.should_not be_valid end end describe Sequel::Model, "Validations" do before(:all) do class ::Person < Sequel::Model plugin :validation_class_methods columns :id,:name,:first_name,:last_name,:middle_name,:initials,:age, :terms end class ::Smurf < Person end class ::Can < Sequel::Model plugin :validation_class_methods columns :id, :name end class ::Cow < Sequel::Model plugin :validation_class_methods columns :id, :name, :got_milk end class ::User < Sequel::Model plugin :validation_class_methods columns :id, :username, :password end class ::Address < Sequel::Model plugin :validation_class_methods columns :id, :zip_code end end after(:all) do [:Person, :Smurf, :Cow, :User, :Address].each{|c| Object.send(:remove_const, c)} end it "should validate the acceptance of a column" do class ::Cow < Sequel::Model validations.clear validates_acceptance_of :got_milk, :accept => 'blah', :allow_nil => false end @cow = Cow.new @cow.should_not be_valid @cow.errors.full_messages.should == ["got_milk is not accepted"] @cow.got_milk = "blah" @cow.should be_valid end it "should validate the confirmation of a column" do class ::User < Sequel::Model def password_confirmation "test" end validations.clear validates_confirmation_of :password end @user = User.new @user.should_not be_valid @user.errors.full_messages.should == ["password is not confirmed"] @user.password = "test" @user.should be_valid end it "should validate format of column" do class ::Person < Sequel::Model validates_format_of :first_name, :with => /^[a-zA-Z]+$/ end @person = Person.new :first_name => "Lancelot99" @person.valid?.should be_false @person = Person.new :first_name => "Anita" @person.valid?.should be_true end it "should validate length of column" do class ::Person < Sequel::Model validations.clear validates_length_of :first_name, :maximum => 30 validates_length_of :last_name, :minimum => 30 validates_length_of :middle_name, :within => 1..5 validates_length_of :initials, :is => 2 end @person = Person.new( :first_name => "Anamethatiswaytofreakinglongandwayoverthirtycharacters", :last_name => "Alastnameunderthirtychars", :initials => "LGC", :middle_name => "danger" ) @person.should_not be_valid @person.errors.full_messages.size.should == 4 @person.errors.full_messages.should include( 'first_name is too long', 'last_name is too short', 'middle_name is the wrong length', 'initials is the wrong length' ) @person.first_name = "Lancelot" @person.last_name = "1234567890123456789012345678901" @person.initials = "LC" @person.middle_name = "Will" @person.should be_valid end it "should validate that a column has the correct type for the schema column" do p = model_class.call Sequel::Model do columns :age, :d self.raise_on_typecast_failure = false validates_schema_type :age validates_schema_type :d, 
:message=>'is a bad choice' @db_schema = {:age=>{:type=>:integer}, :d=>{:type=>:date}} end @person = p.new @person.should be_valid @person.age = 'a' @person.should_not be_valid @person.errors.full_messages.should == ['age is not a valid integer'] @person.age = 1 @person.should be_valid @person.d = 'a' @person.should_not be_valid @person.errors.full_messages.should == ['d is a bad choice'] @person.d = Date.today @person.should be_valid end it "should validate numericality of column" do class ::Person < Sequel::Model validations.clear validates_numericality_of :age end @person = Person.new :age => "Twenty" @person.should_not be_valid @person.errors.full_messages.should == ['age is not a number'] @person.age = 20 @person.should be_valid end it "should validate the presence of a column" do class ::Cow < Sequel::Model validations.clear validates_presence_of :name end @cow = Cow.new @cow.should_not be_valid @cow.errors.full_messages.should == ['name is not present'] @cow.name = "Betsy" @cow.should be_valid end it "should validate the uniqueness of a column" do class ::User < Sequel::Model validations.clear validates do uniqueness_of :username end end User.dataset._fetch = proc do |sql| case sql when /count.*username = '0records'/ {:v => 0} when /count.*username = '2records'/ {:v => 2} when /count.*username = '1record'/ {:v => 1} when /username = '1record'/ {:id => 3, :username => "1record", :password => "test"} end end @user = User.new(:username => "2records", :password => "anothertest") @user.should_not be_valid @user.errors.full_messages.should == ['username is already taken'] @user = User.new(:username => "1record", :password => "anothertest") @user.should_not be_valid @user.errors.full_messages.should == ['username is already taken'] @user = User.load(:id=>4, :username => "1record", :password => "anothertest") @user.should_not be_valid @user.errors.full_messages.should == ['username is already taken'] @user = User.load(:id=>3, :username => "1record", :password => "anothertest") @user.should be_valid @user.errors.full_messages.should == [] @user = User.new(:username => "0records", :password => "anothertest") @user.should be_valid @user.errors.full_messages.should == [] User.db.sqls @user = User.new(:password => "anothertest") @user.should be_valid @user.errors.full_messages.should == [] User.db.sqls.should == [] end it "should validate the uniqueness of multiple columns" do class ::User < Sequel::Model validations.clear validates do uniqueness_of [:username, :password] end end User.dataset._fetch = proc do |sql| case sql when /count.*username = '0records'/ {:v => 0} when /count.*username = '2records'/ {:v => 2} when /count.*username = '1record'/ {:v => 1} when /username = '1record'/ if sql =~ /password = 'anothertest'/ {:id => 3, :username => "1record", :password => "anothertest"} else {:id => 4, :username => "1record", :password => "test"} end end end @user = User.new(:username => "2records", :password => "anothertest") @user.should_not be_valid @user.errors.full_messages.should == ['username and password is already taken'] @user = User.new(:username => "1record", :password => "anothertest") @user.should_not be_valid @user.errors.full_messages.should == ['username and password is already taken'] @user = User.load(:id=>4, :username => "1record", :password => "anothertest") @user.should_not be_valid @user.errors.full_messages.should == ['username and password is already taken'] @user = User.load(:id=>3, :username => "1record", :password => "test") @user.should_not be_valid 
@user.errors.full_messages.should == ['username and password is already taken'] @user = User.load(:id=>3, :username => "1record", :password => "anothertest") @user.should be_valid @user.errors.full_messages.should == [] @user = User.new(:username => "0records", :password => "anothertest") @user.should be_valid @user.errors.full_messages.should == [] User.db.sqls @user = User.new(:password => "anothertest") @user.should be_valid @user.errors.full_messages.should == [] @user = User.new(:username => "0records") @user.should be_valid @user.errors.full_messages.should == [] @user = User.new @user.should be_valid @user.errors.full_messages.should == [] User.db.sqls.should == [] end it "should have a validates block that contains multiple validations" do class ::Person < Sequel::Model validations.clear validates do format_of :first_name, :with => /^[a-zA-Z]+$/ length_of :first_name, :maximum => 30 end end Person.validations[:first_name].size.should == 2 @person = Person.new :first_name => "Lancelot99" @person.valid?.should be_false @person2 = Person.new :first_name => "Wayne" @person2.valid?.should be_true end it "should allow 'longhand' validations direcly within the model." do lambda { class ::Person < Sequel::Model validations.clear validates_length_of :first_name, :maximum => 30 end }.should_not raise_error Person.validations.length.should eql(1) end it "should define a has_validations? method which returns true if the model has validations, false otherwise" do class ::Person < Sequel::Model validations.clear validates do format_of :first_name, :with => /\w+/ length_of :first_name, :maximum => 30 end end class ::Smurf < Person validations.clear end Person.should have_validations Smurf.should_not have_validations end it "should validate correctly instances initialized with string keys" do class ::Can < Sequel::Model validates_length_of :name, :minimum => 4 end Can.new('name' => 'ab').should_not be_valid Can.new('name' => 'abcd').should be_valid end end describe "Model#save" do before do @c = model_class.call Sequel::Model(:people) do columns :id, :x validates_each :x do |o, a, v| o.errors.add(a, 'blah') unless v == 7 end end @m = @c.load(:id => 4, :x=>6) DB.reset end specify "should save only if validations pass" do @m.raise_on_save_failure = false @m.should_not be_valid @m.save DB.sqls.should be_empty @m.x = 7 @m.should be_valid @m.save.should_not be_false DB.sqls.should == ['UPDATE people SET x = 7 WHERE (id = 4)'] end specify "should skip validations if the :validate=>false option is used" do @m.raise_on_save_failure = false @m.should_not be_valid @m.save(:validate=>false) DB.sqls.should == ['UPDATE people SET x = 6 WHERE (id = 4)'] end specify "should raise error if validations fail and raise_on_save_faiure is true" do proc{@m.save}.should raise_error(Sequel::ValidationFailed) end specify "should return nil if validations fail and raise_on_save_faiure is false" do @m.raise_on_save_failure = false @m.save.should == nil end end 
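# The specs above exercise the validation_class_methods plugin API. As a
# minimal usage sketch (Item and its columns are hypothetical, not part of
# these specs):
#
#   class Item < Sequel::Model
#     plugin :validation_class_methods
#     validates_presence_of :name
#     validates_length_of :name, :maximum => 30
#     validates_uniqueness_of :name
#   end
#
#   i = Item.new
#   i.valid?                # => false
#   i.errors.full_messages  # would include "name is not present"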
# ---- ruby-sequel-4.1.1/spec/extensions/validation_helpers_spec.rb ----

require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")

describe "Sequel::Plugins::ValidationHelpers" do
  before do
    @c = Class.new(Sequel::Model) do
      def self.set_validations(&block)
        define_method(:validate, &block)
      end
      columns :value
    end
    @c.plugin :validation_helpers
    @m = @c.new
  end

  specify "should take an :allow_blank option" do
    @c.set_validations{validates_format(/.+_.+/, :value, :allow_blank=>true)}
    @m.value = 'abc_'
    @m.should_not be_valid
    @m.value = '1_1'
    @m.should be_valid
    o = Object.new
    @m.value = o
    @m.should_not be_valid
    def o.blank?
      true
    end
    @m.should be_valid
  end

  specify "should take an :allow_missing option" do
    @c.set_validations{validates_format(/.+_.+/, :value, :allow_missing=>true)}
    @m.values.clear
    @m.should be_valid
    @m.value = nil
    @m.should_not be_valid
    @m.value = '1_1'
    @m.should be_valid
  end

  specify "should take an :allow_nil option" do
    @c.set_validations{validates_format(/.+_.+/, :value, :allow_nil=>true)}
    @m.value = 'abc_'
    @m.should_not be_valid
    @m.value = '1_1'
    @m.should be_valid
    @m.value = nil
    @m.should be_valid
  end

  specify "should take a :message option" do
    @c.set_validations{validates_format(/.+_.+/, :value, :message=>"is so blah")}
    @m.value = 'abc_'
    @m.should_not be_valid
    @m.errors.full_messages.should == ['value is so blah']
    @m.value = '1_1'
    @m.should be_valid
  end

  specify "should allow a proc for the :message option" do
    @c.set_validations{validates_format(/.+_.+/, :value, :message=>proc{|f| "doesn't match #{f.inspect}"})}
    @m.value = 'abc_'
    @m.should_not be_valid
    @m.errors.should == {:value=>["doesn't match /.+_.+/"]}
  end

  specify "should take multiple attributes in the same call" do
    @c.columns :value, :value2
    @c.set_validations{validates_presence([:value, :value2])}
    @m.should_not be_valid
    @m.value = 1
    @m.should_not be_valid
    @m.value2 = 1
    @m.should be_valid
  end

  specify "should support modifying default options for all models" do
    @c.set_validations{validates_presence(:value)}
    @m.should_not be_valid
    @m.errors.should == {:value=>['is not present']}
    o = Sequel::Plugins::ValidationHelpers::DEFAULT_OPTIONS[:presence].dup
    Sequel::Plugins::ValidationHelpers::DEFAULT_OPTIONS[:presence][:message] = lambda{"was not entered"}
    @m.should_not be_valid
    @m.errors.should == {:value=>["was not entered"]}
    @m.value = 1
    @m.should be_valid

    @m.values.clear
    Sequel::Plugins::ValidationHelpers::DEFAULT_OPTIONS[:presence][:allow_missing] = true
    @m.should be_valid
    @m.value = nil
    @m.should_not be_valid
    @m.errors.should == {:value=>["was not entered"]}

    c = Class.new(Sequel::Model)
    c.class_eval do
      plugin :validation_helpers
      set_columns([:value])
      def validate
        validates_presence(:value)
      end
    end
    m = c.new(:value=>nil)
    m.should_not be_valid
    m.errors.should == {:value=>["was not entered"]}
    Sequel::Plugins::ValidationHelpers::DEFAULT_OPTIONS[:presence] = o
  end

  specify "should support modifying default validation options for a particular model" do
    @c.set_validations{validates_presence(:value)}
    @m.should_not be_valid
    @m.errors.should == {:value=>['is not present']}
    @c.class_eval do
      def default_validation_helpers_options(type)
        {:allow_missing=>true, :message=>proc{'was not entered'}}
      end
    end
    @m.value = nil
    @m.should_not be_valid
    @m.errors.should == {:value=>["was not entered"]}
    @m.value = 1
    @m.should be_valid

    @m.values.clear
    @m.should be_valid

    c = Class.new(Sequel::Model)
    c.class_eval do
      plugin :validation_helpers
      attr_accessor :value
      def validate
        validates_presence(:value)
      end
    end
    m = c.new
    m.should_not be_valid
    m.errors.should == {:value=>['is not present']}
  end

  specify "should support validates_exact_length" do
    @c.set_validations{validates_exact_length(3, :value)}
    @m.should_not be_valid
    @m.value = '123'
    @m.should be_valid
    @m.value = '12'
    @m.should_not be_valid
    @m.value = '1234'
    @m.should_not be_valid
  end

  specify "should support validates_format" do
    @c.set_validations{validates_format(/.+_.+/, :value)}
    @m.value = 'abc_'
    @m.should_not be_valid
    @m.value = 'abc_def'
    @m.should be_valid
  end

  specify "should support validates_includes with an array" do
    @c.set_validations{validates_includes([1,2], :value)}
    @m.should_not be_valid
    @m.value = 1
    @m.should be_valid
    @m.value = 1.5
    @m.should_not be_valid
    @m.value = 2
    @m.should be_valid
    @m.value = 3
    @m.should_not be_valid
  end

  specify "should support validates_includes with a range" do
    @c.set_validations{validates_includes(1..4, :value)}
    @m.should_not be_valid
    @m.value = 1
    @m.should be_valid
    @m.value = 1.5
    @m.should be_valid
    @m.value = 0
    @m.should_not be_valid
    @m.value = 5
    @m.should_not be_valid
  end

  specify "should support validates_integer" do
    @c.set_validations{validates_integer(:value)}
    @m.value = 'blah'
    @m.should_not be_valid
    @m.value = '123'
    @m.should be_valid
    @m.value = '123.1231'
    @m.should_not be_valid
  end

  specify "should support validates_length_range" do
    @c.set_validations{validates_length_range(2..5, :value)}
    @m.should_not be_valid
    @m.value = '12345'
    @m.should be_valid
    @m.value = '1'
    @m.should_not be_valid
    @m.value = '123456'
    @m.should_not be_valid
  end

  specify "should support validates_max_length" do
    @c.set_validations{validates_max_length(5, :value)}
    @m.should_not be_valid
    @m.value = '12345'
    @m.should be_valid
    @m.value = '123456'
    @m.should_not be_valid
    @m.errors[:value].should == ['is longer than 5 characters']
    @m.value = nil
    @m.should_not be_valid
    @m.errors[:value].should == ['is not present']
  end

  specify "should support validates_max_length with nil value" do
    @c.set_validations{validates_max_length(5, :value, :message=>'tl', :nil_message=>'np')}
    @m.value = '123456'
    @m.should_not be_valid
    @m.errors[:value].should == ['tl']
    @m.value = nil
    @m.should_not be_valid
    @m.errors[:value].should == ['np']
  end

  specify "should support validates_min_length" do
    @c.set_validations{validates_min_length(5, :value)}
    @m.should_not be_valid
    @m.value = '12345'
    @m.should be_valid
    @m.value = '1234'
    @m.should_not be_valid
  end

  specify "should support validates_schema_types" do
    @c.set_validations{validates_schema_types}
    @m.value = 123
    @m.should be_valid
    @m.value = '123'
    @m.should be_valid
    @m.meta_def(:db_schema){{:value=>{:type=>:integer}}}
    @m.should_not be_valid
    @m.errors.full_messages.should == ['value is not a valid integer']

    @c.set_validations{validates_schema_types(:value)}
    @m.meta_def(:db_schema){{:value=>{:type=>:integer}}}
    @m.should_not be_valid
    @m.errors.full_messages.should == ['value is not a valid integer']

    @c.set_validations{validates_schema_types(:value, :message=>'is bad')}
    @m.meta_def(:db_schema){{:value=>{:type=>:integer}}}
    @m.should_not be_valid
    @m.errors.full_messages.should == ['value is bad']
  end

  specify "should support validates_numeric" do
    @c.set_validations{validates_numeric(:value)}
    @m.value = 'blah'
    @m.should_not be_valid
    @m.value = '123'
    @m.should be_valid
    @m.value = '123.1231'
    @m.should be_valid
    @m.value = '+1'
    @m.should be_valid
    @m.value = '-1'
    @m.should be_valid
    @m.value = '+1.123'
    @m.should be_valid
    @m.value = '-0.123'
    @m.should be_valid
    @m.value = '-0.123E10'
    @m.should be_valid
    @m.value = '32.123e10'
    @m.should be_valid
    @m.value = '+32.123E10'
    @m.should be_valid
    @m.value = '.0123'
    @m.should be_valid
  end

  specify "should support validates_type" do
    @c.set_validations{validates_type(Integer, :value)}
    @m.value = 123
    @m.should be_valid
    @m.value = '123'
    @m.should_not be_valid
    @m.errors.full_messages.should == ['value is not a valid integer']

    @c.set_validations{validates_type(:String, :value)}
    @m.value = '123'
    @m.should be_valid
    @m.value = 123
    @m.should_not be_valid
    @m.errors.full_messages.should == ['value is not a valid string']

    @c.set_validations{validates_type('Integer', :value)}
    @m.value = 123
    @m.should be_valid
    @m.value = 123.05
    @m.should_not be_valid
    @m.errors.full_messages.should == ['value is not a valid integer']

    @c.set_validations{validates_type(Integer, :value)}
    @m.value = 1
    @m.should be_valid
    @m.value = false
    @m.should_not be_valid

    @c.set_validations{validates_type([Integer, Float], :value)}
    @m.value = 1
    @m.should be_valid
    @m.value = 1.0
    @m.should be_valid
    @m.value = BigDecimal.new('1.0')
    @m.should_not be_valid
    @m.errors.full_messages.should == ['value is not a valid integer or float']
  end

  specify "should support validates_not_null" do
    @c.set_validations{validates_not_null(:value)}
    @m.should_not be_valid
    @m.value = ''
    @m.should be_valid
    @m.value = 1234
    @m.should be_valid
    @m.value = nil
    @m.should_not be_valid
    @m.value = true
    @m.should be_valid
    @m.value = false
    @m.should be_valid
    @m.value = Time.now
    @m.should be_valid
  end

  specify "should support validates_presence" do
    @c.set_validations{validates_presence(:value)}
    @m.should_not be_valid
    @m.value = ''
    @m.should_not be_valid
    @m.value = 1234
    @m.should be_valid
    @m.value = nil
    @m.should_not be_valid
    @m.value = true
    @m.should be_valid
    @m.value = false
    @m.should be_valid
    @m.value = Time.now
    @m.should be_valid
  end

  it "should support validates_unique with a single attribute" do
    @c.columns(:id, :username, :password)
    @c.set_dataset DB[:items]
    @c.set_validations{validates_unique(:username)}
    @c.dataset._fetch = proc do |sql|
      case sql
      when /count.*username = '0records'/
        {:v => 0}
      when /count.*username = '1record'/
        {:v => 1}
      end
    end

    @user = @c.new(:username => "0records", :password => "anothertest")
    @user.should be_valid
    @user = @c.load(:id=>3, :username => "0records", :password => "anothertest")
    @user.should be_valid

    DB.sqls
    @user = @c.new(:password => "anothertest")
    @user.should be_valid
    DB.sqls.should == []

    @user = @c.new(:username => "1record", :password => "anothertest")
    @user.should_not be_valid
    @user.errors.full_messages.should == ['username is already taken']
    @user = @c.load(:id=>4, :username => "1record", :password => "anothertest")
    @user.should_not be_valid
    @user.errors.full_messages.should == ['username is already taken']

    ds1 = @c.dataset.filter([[:username, '0records']])
    ds2 = ds1.exclude(:id=>1)
    @c.dataset.should_receive(:where).with([[:username, '0records']]).twice.and_return(ds1)
    ds1.should_receive(:exclude).with(:id=>1).once.and_return(ds2)

    @user = @c.load(:id=>1, :username => "0records", :password => "anothertest")
    @user.should be_valid
    DB.sqls.last.should == "SELECT count(*) AS count FROM items WHERE ((username = '0records') AND (id != 1)) LIMIT 1"
    @user = @c.new(:username => "0records", :password => "anothertest")
    @user.should be_valid
    DB.sqls.last.should == "SELECT count(*) AS count FROM items WHERE (username = '0records') LIMIT 1"
  end

  it "should support validates_unique with multiple attributes" do
    @c.columns(:id, :username, :password)
    @c.set_dataset DB[:items]
    @c.set_validations{validates_unique([:username, :password])}
    @c.dataset._fetch = proc do |sql|
      case sql
      when /count.*username = '0records'/
        {:v => 0}
      when /count.*username = '1record'/
        {:v => 1}
      end
    end

    @user = @c.new(:username => "0records", :password => "anothertest")
    @user.should be_valid
    @user = @c.load(:id=>3, :username => "0records", :password => "anothertest")
    @user.should be_valid

    DB.sqls
    @user = @c.new(:password => "anothertest")
    @user.should be_valid
    @user.errors.full_messages.should == []
    @user = @c.new(:username => "0records")
    @user.should be_valid
    @user.errors.full_messages.should == []
    @user = @c.new
    @user.should be_valid
    @user.errors.full_messages.should == []
    DB.sqls.should == []

    @user = @c.new(:username => "1record", :password => "anothertest")
    @user.should_not be_valid
    @user.errors.full_messages.should == ['username and password is already taken']
    @user = @c.load(:id=>4, :username => "1record", :password => "anothertest")
    @user.should_not be_valid
    @user.errors.full_messages.should == ['username and password is already taken']

    ds1 = @c.dataset.filter([[:username, '0records'], [:password, 'anothertest']])
    ds2 = ds1.exclude(:id=>1)
    @c.dataset.should_receive(:where).with([[:username, '0records'], [:password, 'anothertest']]).twice.and_return(ds1)
    ds1.should_receive(:exclude).with(:id=>1).once.and_return(ds2)

    @user = @c.load(:id=>1, :username => "0records", :password => "anothertest")
    @user.should be_valid
    DB.sqls.last.should == "SELECT count(*) AS count FROM items WHERE ((username = '0records') AND (password = 'anothertest') AND (id != 1)) LIMIT 1"
    @user = @c.new(:username => "0records", :password => "anothertest")
    @user.should be_valid
    DB.sqls.last.should == "SELECT count(*) AS count FROM items WHERE ((username = '0records') AND (password = 'anothertest')) LIMIT 1"
  end

  it "should support validates_unique with a block" do
    @c.columns(:id, :username, :password)
    @c.set_dataset DB[:items]
    @c.set_validations{validates_unique(:username){|ds| ds.filter(:active)}}
    @c.dataset._fetch = {:v=>0}
    DB.reset
    @c.new(:username => "0records", :password => "anothertest").should be_valid
    @c.load(:id=>3, :username => "0records", :password => "anothertest").should be_valid
    DB.sqls.should == ["SELECT count(*) AS count FROM items WHERE ((username = '0records') AND active) LIMIT 1",
      "SELECT count(*) AS count FROM items WHERE ((username = '0records') AND active AND (id != 3)) LIMIT 1"]
  end

  it "should support validates_unique with a custom filter" do
    @c.columns(:id, :username, :password)
    @c.set_dataset DB[:items]
    @c.set_validations{validates_unique(:username, :where=>proc{|ds, obj, cols| ds.where(cols.map{|c| [Sequel.function(:lower, c), obj.send(c).downcase]})})}
    @c.dataset._fetch = {:v=>0}
    DB.reset
    @c.new(:username => "0RECORDS", :password => "anothertest").should be_valid
    @c.load(:id=>3, :username => "0RECORDS", :password => "anothertest").should be_valid
    DB.sqls.should == ["SELECT count(*) AS count FROM items WHERE (lower(username) = '0records') LIMIT 1",
      "SELECT count(*) AS count FROM items WHERE ((lower(username) = '0records') AND (id != 3)) LIMIT 1"]
  end

  it "should support :only_if_modified option for validates_unique, and not check uniqueness for existing records if values haven't changed" do
    @c.columns(:id, :username, :password)
    @c.set_dataset DB[:items]
    @c.set_validations{validates_unique([:username, :password], :only_if_modified=>true)}
    @c.dataset._fetch = {:v=>0}
    DB.reset
    @c.new(:username => "0records", :password => "anothertest").should be_valid
    DB.sqls.should == ["SELECT count(*) AS count FROM items WHERE ((username = '0records') AND (password = 'anothertest')) LIMIT 1"]

    DB.reset
    m = @c.load(:id=>3, :username => "0records", :password => "anothertest")
    m.should be_valid
    DB.sqls.should == []

    m.username = '1'
    m.should be_valid
    DB.sqls.should == ["SELECT count(*) AS count FROM items WHERE ((username = '1') AND (password = 'anothertest') AND (id != 3)) LIMIT 1"]

    m = @c.load(:id=>3, :username => "0records", :password => "anothertest")
    DB.reset
    m.password = '1'
    m.should be_valid
    DB.sqls.should == ["SELECT count(*) AS count FROM items WHERE ((username = '0records') AND (password = '1') AND (id != 3)) LIMIT 1"]

    DB.reset
    m.username = '2'
    m.should be_valid
    DB.sqls.should == ["SELECT count(*) AS count FROM items WHERE ((username = '2') AND (password = '1') AND (id != 3)) LIMIT 1"]
  end

  it "should not attempt a database query if the underlying columns have validation errors" do
    @c.columns(:id, :username, :password)
    @c.set_dataset DB[:items]
    @c.set_validations{errors.add(:username, 'foo'); validates_unique([:username, :password])}
    @c.dataset._fetch = {:v=>0}
    DB.reset
    m = @c.new(:username => "1", :password => "anothertest")
    m.should_not be_valid
    DB.sqls.should == []
  end
end

# ---- ruby-sequel-4.1.1/spec/extensions/xml_serializer_spec.rb ----

require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")

begin
  require 'nokogiri'
rescue LoadError => e
  skip_warn "xml_serializer plugin: can't load nokogiri (#{e.class}: #{e})"
else
describe "Sequel::Plugins::XmlSerializer" do
  before do
    class ::Artist < Sequel::Model
      unrestrict_primary_key
      plugin :xml_serializer
      columns :id, :name
      @db_schema = {:id=>{:type=>:integer}, :name=>{:type=>:string}}
      one_to_many :albums
    end
    class ::Album < Sequel::Model
      unrestrict_primary_key
      attr_accessor :blah
      plugin :xml_serializer
      columns :id, :name, :artist_id
      @db_schema = {:id=>{:type=>:integer}, :name=>{:type=>:string}, :artist_id=>{:type=>:integer}}
      many_to_one :artist
    end
    @artist = Artist.load(:id=>2, :name=>'YJM')
    @artist.associations[:albums] = []
    @album = Album.load(:id=>1, :name=>'RF')
    @album.artist = @artist
    @album.blah = 'Blah'
  end
  after do
    Object.send(:remove_const, :Artist)
    Object.send(:remove_const, :Album)
  end

  it "should round trip successfully" do
    Artist.from_xml(@artist.to_xml).should == @artist
    Album.from_xml(@album.to_xml).should == @album
  end

  it "should round trip successfully for namespaced models" do
    module XmlSerializerTest
      class Artist < Sequel::Model
        unrestrict_primary_key
        plugin :xml_serializer
        columns :id, :name
        @db_schema = {:id=>{:type=>:integer}, :name=>{:type=>:string}}
      end
    end
    artist = XmlSerializerTest::Artist.load(:id=>2, :name=>'YJM')
    XmlSerializerTest::Artist.from_xml(artist.to_xml).should == artist
  end

  it "should round trip successfully with empty strings" do
    artist = Artist.load(:id=>2, :name=>'')
    Artist.from_xml(artist.to_xml).should == artist
  end

  it "should round trip successfully with nil values" do
    artist = Artist.load(:id=>2, :name=>nil)
    Artist.from_xml(artist.to_xml).should == artist
  end

  it "should handle the :only option" do
    Artist.from_xml(@artist.to_xml(:only=>:name)).should == Artist.load(:name=>@artist.name)
    Album.from_xml(@album.to_xml(:only=>[:id, :name])).should == Album.load(:id=>@album.id, :name=>@album.name)
  end

  it "should handle the :except option" do
    Artist.from_xml(@artist.to_xml(:except=>:id)).should == Artist.load(:name=>@artist.name)
    Album.from_xml(@album.to_xml(:except=>[:id, :artist_id])).should == Album.load(:name=>@album.name)
  end

  it "should handle the :include option for associations" do
    Artist.from_xml(@artist.to_xml(:include=>:albums), :associations=>:albums).albums.should == [@album]
    Album.from_xml(@album.to_xml(:include=>:artist), :associations=>:artist).artist.should == @artist
  end

  it "should handle the :include option for arbitrary attributes" do
    Album.from_xml(@album.to_xml(:include=>:blah)).blah.should == @album.blah
  end

  it "should handle multiple inclusions using an array for the :include option" do
    a = Album.from_xml(@album.to_xml(:include=>[:blah, :artist]), :associations=>:artist)
    a.blah.should == @album.blah
    a.artist.should == @artist
  end

  it "should handle cascading using a hash for the :include option" do
    Artist.from_xml(@artist.to_xml(:include=>{:albums=>{:include=>:artist}}), :associations=>{:albums=>{:associations=>:artist}}).albums.map{|a| a.artist}.should == [@artist]
    Album.from_xml(@album.to_xml(:include=>{:artist=>{:include=>:albums}}), :associations=>{:artist=>{:associations=>:albums}}).artist.albums.should == [@album]

    Artist.from_xml(@artist.to_xml(:include=>{:albums=>{:only=>:name}}), :associations=>{:albums=>{:fields=>%w'name'}}).albums.should == [Album.load(:name=>@album.name)]
    Album.from_xml(@album.to_xml(:include=>{:artist=>{:except=>:name}}), :associations=>:artist).artist.should == Artist.load(:id=>@artist.id)

    Artist.from_xml(@artist.to_xml(:include=>{:albums=>{:include=>{:artist=>{:include=>:albums}}}}), :associations=>{:albums=>{:associations=>{:artist=>{:associations=>:albums}}}}).albums.map{|a| a.artist.albums}.should == [[@album]]
    Album.from_xml(@album.to_xml(:include=>{:artist=>{:include=>{:albums=>{:only=>:name}}}}), :associations=>{:artist=>{:associations=>{:albums=>{:fields=>%w'name'}}}}).artist.albums.should == [Album.load(:name=>@album.name)]
  end

  it "should handle the :include option cascading with an empty hash" do
    Album.from_xml(@album.to_xml(:include=>{:artist=>{}}), :associations=>:artist).artist.should == @artist
    Album.from_xml(@album.to_xml(:include=>{:blah=>{}})).blah.should == @album.blah
  end

  it "should support #from_xml to set column values" do
    @artist.from_xml('<album><name>AS</name></album>')
    @artist.name.should == 'AS'
    @artist.id.should == 2
  end

  it "should support a :name_proc option when serializing and deserializing" do
    Album.from_xml(@album.to_xml(:name_proc=>proc{|s| s.reverse}), :name_proc=>proc{|s| s.reverse}).should == @album
  end

  it "should support a :camelize option when serializing and :underscore option when deserializing" do
    Album.from_xml(@album.to_xml(:camelize=>true), :underscore=>true).should == @album
  end

  it "should support a :dasherize option when serializing and :underscore option when deserializing" do
    Album.from_xml(@album.to_xml(:dasherize=>true), :underscore=>true).should == @album
  end

  it "should support an :encoding option when serializing" do
    ["<?xml version=\"1.0\" encoding=\"UTF-8\"?><artist><id>2</id><name>YJM</name></artist>",
     "<?xml version=\"1.0\" encoding=\"UTF-8\"?><artist><name>YJM</name><id>2</id></artist>"].should include(@artist.to_xml(:encoding=>'UTF-8').gsub(/\n */m, ''))
  end

  it "should support a :builder_opts option when serializing" do
    ["<?xml version=\"1.0\" encoding=\"UTF-8\"?><artist><id>2</id><name>YJM</name></artist>",
     "<?xml version=\"1.0\" encoding=\"UTF-8\"?><artist><name>YJM</name><id>2</id></artist>"].should include(@artist.to_xml(:builder_opts=>{:encoding=>'UTF-8'}).gsub(/\n */m, ''))
  end

  it "should support a :types option when serializing" do
    ["<?xml version=\"1.0\"?><artist><id type=\"integer\">2</id><name type=\"string\">YJM</name></artist>",
     "<?xml version=\"1.0\"?><artist><name type=\"string\">YJM</name><id type=\"integer\">2</id></artist>"].should include(@artist.to_xml(:types=>true).gsub(/\n */m, ''))
  end

  it "should support a :root_name option when serializing" do
    ["<?xml version=\"1.0\"?><ar><id>2</id><name>YJM</name></ar>",
     "<?xml version=\"1.0\"?><ar><name>YJM</name><id>2</id></ar>"].should include(@artist.to_xml(:root_name=>'ar').gsub(/\n */m, ''))
  end

  it "should support an :array_root_name option when serializing arrays" do
    artist = @artist
    Artist.dataset.meta_def(:all){[artist]}
    ["<?xml version=\"1.0\"?><ars><ar><id>2</id><name>YJM</name></ar></ars>",
     "<?xml version=\"1.0\"?><ars><ar><name>YJM</name><id>2</id></ar></ars>"].should include(Artist.to_xml(:array_root_name=>'ars', :root_name=>'ar').gsub(/\n */m, ''))
  end

  it "should raise an exception for xml tags that aren't associations, columns, or setter methods" do
    Album.send(:undef_method, :blah=)
    proc{Album.from_xml(@album.to_xml(:include=>:blah))}.should raise_error(Sequel::Error)
  end

  it "should support a to_xml class and dataset method" do
    album = @album
    Album.dataset.meta_def(:all){[album]}
    Album.array_from_xml(Album.to_xml).should == [@album]
    Album.array_from_xml(Album.to_xml(:include=>:artist), :associations=>:artist).map{|x| x.artist}.should == [@artist]
    Album.array_from_xml(Album.dataset.to_xml(:only=>:name)).should == [Album.load(:name=>@album.name)]
  end

  it "should have to_xml dataset method respect an :array option" do
    a = Album.load(:id=>1, :name=>'RF', :artist_id=>3)
    Album.array_from_xml(Album.to_xml(:array=>[a])).should == [a]

    a.associations[:artist] = artist = Artist.load(:id=>3, :name=>'YJM')
    Album.array_from_xml(Album.to_xml(:array=>[a], :include=>:artist), :associations=>:artist).first.artist.should == artist

    artist.associations[:albums] = [a]
    x = Artist.array_from_xml(Artist.to_xml(:array=>[artist], :include=>:albums), :associations=>[:albums])
    x.should == [artist]
    x.first.albums.should == [a]
  end

  it "should raise an error if the dataset does not have a row_proc" do
    proc{Album.dataset.naked.to_xml}.should raise_error(Sequel::Error)
  end

  it "should raise an error if parsing empty xml" do
    proc{Artist.from_xml("<?xml version=\"1.0\"?>\n")}.should raise_error(Sequel::Error)
    proc{Artist.array_from_xml("<?xml version=\"1.0\"?>\n")}.should raise_error(Sequel::Error)
  end

  it "should raise an error if attempting to set a restricted column and :all_columns is not used" do
    Artist.restrict_primary_key
    proc{Artist.from_xml(@artist.to_xml)}.should raise_error(Sequel::Error)
  end

  it "should raise an error if an unsupported association is passed in the :associations option" do
    Artist.association_reflections.delete(:albums)
    proc{Artist.from_xml(@artist.to_xml(:include=>:albums), :associations=>:albums)}.should raise_error(Sequel::Error)
  end

  it "should raise an error if using from_xml and XML represents an array" do
    proc{Artist.from_xml(Artist.to_xml(:array=>[@artist]))}.should raise_error(Sequel::Error)
  end

  it "should raise an error if using array_from_xml and XML does not represent an array" do
    proc{Artist.array_from_xml(@artist.to_xml)}.should raise_error(Sequel::Error)
  end

  it "should raise an error if using an unsupported :associations option" do
    proc{Artist.from_xml(@artist.to_xml, :associations=>'')}.should raise_error(Sequel::Error)
  end
end
end

# ---- ruby-sequel-4.1.1/spec/files/bad_down_migration/001_create_alt_basic.rb ----

Sequel.migration do
  up{create_table(:sm11111){Integer :smc1}}
  down{get(:asdfsadfsa)}
end
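# The fixture files that follow feed the migrator specs. In the file above,
# down{get(:asdfsadfsa)} selects from a source that does not exist, so the
# down block is intentionally broken; presumably the specs use it to assert
# that migrating back down fails. The bad_up_migration fixtures below break
# the up direction in the same way.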
# ---- ruby-sequel-4.1.1/spec/files/bad_down_migration/002_create_alt_advanced.rb ----

Sequel.migration do
  up{create_table(:sm22222){Integer :smc2}}
  down{drop_table(:sm22222)}
end

# ---- ruby-sequel-4.1.1/spec/files/bad_timestamped_migrations/1273253849_create_sessions.rb ----

class CreateSessions < Sequel::Migration
  def up
    create_table(:sm1111){Integer :smc1}
  end

  def down
    get(:asdfsadfas)
  end
end

# ---- ruby-sequel-4.1.1/spec/files/bad_timestamped_migrations/1273253851_create_nodes.rb ----

Class.new(Sequel::Migration) do
  def up
    create_table(:sm2222){Integer :smc2}
  end

  def down
    drop_table(:sm2222)
  end
end

# ---- ruby-sequel-4.1.1/spec/files/bad_timestamped_migrations/1273253853_3_create_users.rb ----

Sequel.migration do
  up{get(:asdfsadfas)}
end

# ---- ruby-sequel-4.1.1/spec/files/bad_up_migration/001_create_alt_basic.rb ----

Sequel.migration do
  up{create_table(:sm11111){Integer :smc1}}
  down{drop_table(:sm11111)}
end

# ---- ruby-sequel-4.1.1/spec/files/bad_up_migration/002_create_alt_advanced.rb ----

Sequel.migration do
  up{get(:asdfassfd)}
end
����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/spec/files/convert_to_timestamp_migrations/���������������������������������������0000775�0000000�0000000�00000000000�12201565355�0024533�5����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/spec/files/convert_to_timestamp_migrations/001_create_sessions.rb�����������������0000664�0000000�0000000�00000000221�12201565355�0030624�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������class CreateSessions < Sequel::Migration def up create_table(:sm1111){Integer :smc1} end def down drop_table(:sm1111) end end �������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/spec/files/convert_to_timestamp_migrations/002_create_nodes.rb��������������������0000664�0000000�0000000�00000000212�12201565355�0030067�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������Class.new(Sequel::Migration) do def up create_table(:sm2222){Integer :smc2} end def down drop_table(:sm2222) end end ��������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/spec/files/convert_to_timestamp_migrations/003_3_create_users.rb������������������0000664�0000000�0000000�00000000137�12201565355�0030351�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������Sequel.migration do up{create_table(:sm3333){Integer 
:smc3}} down{drop_table(:sm3333)} end ���������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/spec/files/convert_to_timestamp_migrations/1273253850_create_artists.rb�����������0000664�0000000�0000000�00000000221�12201565355�0031232�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������class CreateArtists < Sequel::Migration def up create_table(:sm1122){Integer :smc12} end def down drop_table(:sm1122) end end �������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/spec/files/convert_to_timestamp_migrations/1273253852_create_albums.rb������������0000664�0000000�0000000�00000000220�12201565355�0031025�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������class CreateAlbums < Sequel::Migration def up create_table(:sm2233){Integer :smc23} end def down drop_table(:sm2233) end end ��������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/spec/files/duplicate_integer_migrations/������������������������������������������0000775�0000000�0000000�00000000000�12201565355�0023755�5����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/spec/files/duplicate_integer_migrations/001_create_alt_advanced.rb����������������0000664�0000000�0000000�00000000141�12201565355�0030606�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������Sequel.migration do up{create_table(:sm33333){Integer :smc3}} 
down{drop_table(:sm33333)} end �������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/spec/files/duplicate_integer_migrations/001_create_alt_basic.rb�������������������0000664�0000000�0000000�00000000141�12201565355�0030122�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������Sequel.migration do up{create_table(:sm11111){Integer :smc1}} down{drop_table(:sm11111)} end �������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/spec/files/duplicate_timestamped_migrations/��������������������������������������0000775�0000000�0000000�00000000000�12201565355�0024634�5����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/spec/files/duplicate_timestamped_migrations/1273253849_create_sessions.rb���������0000664�0000000�0000000�00000000221�12201565355�0031520�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������class CreateSessions < Sequel::Migration def up create_table(:sm1111){Integer :smc1} end def down drop_table(:sm1111) end end �������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/spec/files/duplicate_timestamped_migrations/1273253853_create_nodes.rb������������0000664�0000000�0000000�00000000212�12201565355�0030755�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������Class.new(Sequel::Migration) do def up 
    create_table(:sm2222){Integer :smc2}
  end

  def down
    drop_table(:sm2222)
  end
end

ruby-sequel-4.1.1/spec/files/duplicate_timestamped_migrations/1273253853_create_users.rb
Sequel.migration do
  up{create_table(:sm3333){Integer :smc3}}
  down{drop_table(:sm3333)}
end

ruby-sequel-4.1.1/spec/files/integer_migrations/001_create_sessions.rb
class CreateSessions < Sequel::Migration
  def up
    create_table(:sm1111){Integer :smc1}
  end

  def down
    drop_table(:sm1111)
  end
end

ruby-sequel-4.1.1/spec/files/integer_migrations/002_create_nodes.rb
Class.new(Sequel::Migration) do
  def up
    create_table(:sm2222){Integer :smc2}
  end

  def down
    drop_table(:sm2222)
  end
end

ruby-sequel-4.1.1/spec/files/integer_migrations/003_3_create_users.rb
Sequel.migration do
  up{create_table(:sm3333){Integer :smc3}}
  down{drop_table(:sm3333)}
end

ruby-sequel-4.1.1/spec/files/interleaved_timestamped_migrations/1273253849_create_sessions.rb
class CreateSessions < Sequel::Migration
  def up
    create_table(:sm1111){Integer :smc1}
  end

  def down
    drop_table(:sm1111)
  end
end

ruby-sequel-4.1.1/spec/files/interleaved_timestamped_migrations/1273253850_create_artists.rb
class CreateArtists < Sequel::Migration
  def up
    create_table(:sm1122){Integer :smc12}
  end

  def down
    drop_table(:sm1122)
  end
end

ruby-sequel-4.1.1/spec/files/interleaved_timestamped_migrations/1273253851_create_nodes.rb
Class.new(Sequel::Migration) do
  def up
    create_table(:sm2222){Integer :smc2}
  end

  def down
    drop_table(:sm2222)
  end
end

ruby-sequel-4.1.1/spec/files/interleaved_timestamped_migrations/1273253852_create_albums.rb
class CreateAlbums < Sequel::Migration
  def up
    create_table(:sm2233){Integer :smc23}
  end

  def down
    drop_table(:sm2233)
  end
end

ruby-sequel-4.1.1/spec/files/interleaved_timestamped_migrations/1273253853_3_create_users.rb
Sequel.migration do
  up{create_table(:sm3333){Integer :smc3}}
  down{drop_table(:sm3333)}
end
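
# A minimal sketch, assuming a Sequel::Database in `db`, of how fixture
# directories like the ones above are exercised. Sequel::Migrator.run picks
# the IntegerMigrator or TimestampMigrator based on the filenames it finds:
#
#   Sequel.extension :migration
#   Sequel::Migrator.run(db, 'spec/files/interleaved_timestamped_migrations')
#   db.tables # would then include :sm1111, :sm1122, :sm2222, :sm2233, :sm3333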

ruby-sequel-4.1.1/spec/files/missing_integer_migrations/001_create_alt_basic.rb
Sequel.migration do
  up{create_table(:sm11111){Integer :smc1}}
  down{drop_table(:sm11111)}
end

ruby-sequel-4.1.1/spec/files/missing_integer_migrations/003_create_alt_advanced.rb
Sequel.migration do
  up{create_table(:sm33333){Integer :smc3}}
  down{drop_table(:sm33333)}
end
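
# These fixtures deliberately skip version 002, giving the specs a directory
# with a gap in the integer migration sequence. A hedged sketch of the
# expected behavior, assuming a Sequel::Database in `db`:
#
#   Sequel::Migrator.run(db, 'spec/files/missing_integer_migrations')
#   # the IntegerMigrator is expected to raise Sequel::Migrator::Error here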

ruby-sequel-4.1.1/spec/files/missing_timestamped_migrations/1273253849_create_sessions.rb
class CreateSessions < Sequel::Migration
  def up
    create_table(:sm1111){Integer :smc1}
  end

  def down
    drop_table(:sm1111)
  end
end

ruby-sequel-4.1.1/spec/files/missing_timestamped_migrations/1273253853_3_create_users.rb
Sequel.migration do
  up{create_table(:sm3333){Integer :smc3}}
  down{drop_table(:sm3333)}
end

ruby-sequel-4.1.1/spec/files/reversible_migrations/001_reversible.rb
Sequel.migration do
  change do
    create_table(:a){Integer :a}
  end
end

ruby-sequel-4.1.1/spec/files/reversible_migrations/002_reversible.rb
Sequel.migration do
  change do
    add_column :a, :b, String
  end
end

ruby-sequel-4.1.1/spec/files/reversible_migrations/003_reversible.rb
Sequel.migration do
  change do
    rename_column :a, :b, :c
  end
end
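
# A `change` block defines only the up direction; Sequel derives the down
# migration by reversing each reversible call. For 003 above, the implied
# down would be equivalent to this sketch:
#
#   down do
#     rename_column :a, :c, :b
#   end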

ruby-sequel-4.1.1/spec/files/reversible_migrations/004_reversible.rb
Sequel.migration do
  change do
    rename_table :a, :b
  end
end

ruby-sequel-4.1.1/spec/files/reversible_migrations/005_reversible.rb
Sequel.migration do
  change do
    alter_table(:b) do
      add_column :d, String
    end
    alter_table(:b) do
      rename_column :d, :e
    end
  end
end

ruby-sequel-4.1.1/spec/files/timestamped_migrations/1273253849_create_sessions.rb
class CreateSessions < Sequel::Migration
  def up
    create_table(:sm1111){Integer :smc1}
  end

  def down
    drop_table(:sm1111)
  end
end

ruby-sequel-4.1.1/spec/files/timestamped_migrations/1273253851_create_nodes.rb
Class.new(Sequel::Migration) do
  def up
    create_table(:sm2222){Integer :smc2}
  end

  def down
    drop_table(:sm2222)
  end
end

ruby-sequel-4.1.1/spec/files/timestamped_migrations/1273253853_3_create_users.rb
Sequel.migration do
  up{create_table(:sm3333){Integer :smc3}}
  down{drop_table(:sm3333)}
end

ruby-sequel-4.1.1/spec/files/transaction_specified_migrations/001_create_alt_basic.rb
Sequel.migration do
  transaction
  change{create_table(:sm11111){Integer :smc1}}
end
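
# `transaction` in the migration above forces the migration to always run
# inside a transaction, while `no_transaction` (used in 002 below) forces it
# to run outside of one, overriding the migrator's default for the adapter.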

ruby-sequel-4.1.1/spec/files/transaction_specified_migrations/002_create_basic.rb
Sequel.migration do
  no_transaction
  change{create_table(:sm){Integer :smc1}}
end

ruby-sequel-4.1.1/spec/files/transaction_unspecified_migrations/001_create_alt_basic.rb
Sequel.migration do
  change{create_table(:sm11111){Integer :smc1}}
end

ruby-sequel-4.1.1/spec/files/transaction_unspecified_migrations/002_create_basic.rb
Sequel.migration do
  change{create_table(:sm){Integer :smc1}}
end

ruby-sequel-4.1.1/spec/files/uppercase_timestamped_migrations/1273253849_CREATE_SESSIONS.RB
class CreateSessions < Sequel::Migration
  def up
    create_table(:sm1111){Integer :smc1}
  end

  def down
    drop_table(:sm1111)
  end
end

ruby-sequel-4.1.1/spec/files/uppercase_timestamped_migrations/1273253851_CREATE_NODES.RB
Class.new(Sequel::Migration) do
  def up
    create_table(:sm2222){Integer :smc2}
  end

  def down
    drop_table(:sm2222)
  end
end

ruby-sequel-4.1.1/spec/files/uppercase_timestamped_migrations/1273253853_3_CREATE_USERS.RB
Sequel.migration do
  up{create_table(:sm3333){Integer :smc3}}
  down{drop_table(:sm3333)}
end

ruby-sequel-4.1.1/spec/integration/associations_test.rb
require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper.rb')

shared_examples_for "one_to_one eager limit strategies" do
  specify "eager loading one_to_one associations should work correctly" do
    Artist.one_to_one :first_album, {:clone=>:first_album}.merge(@els) if @els
    Artist.one_to_one :last_album, {:clone=>:last_album}.merge(@els) if @els
    Artist.one_to_one :second_album, {:clone=>:second_album}.merge(@els) if @els
    @album.update(:artist => @artist)
    diff_album = @diff_album.call
    ar = @pr.call[1]

    a = Artist.eager(:first_album, :last_album, :second_album).order(:name).all
    a.should == [@artist, ar]
    a.first.first_album.should == @album
    a.first.last_album.should == diff_album
    a.first.second_album.should == diff_album
    a.last.first_album.should == nil
    a.last.last_album.should == nil
    a.last.second_album.should == nil

    # Check that no extra columns got added by the eager loading
    a.first.first_album.values.should == @album.values
    a.first.last_album.values.should == diff_album.values
    a.first.second_album.values.should == diff_album.values

    same_album = @same_album.call
    a = Artist.eager(:first_album).order(:name).all
    a.should == [@artist, ar]
    [@album, same_album].should include(a.first.first_album)
    a.last.first_album.should == nil
  end
end

shared_examples_for "one_to_many eager limit strategies" do
  specify "should correctly handle limits and offsets when eager loading one_to_many associations" do
    Artist.one_to_many :first_two_albums, {:clone=>:first_two_albums}.merge(@els) if @els
    Artist.one_to_many :second_two_albums, {:clone=>:second_two_albums}.merge(@els) if @els
    Artist.one_to_many :not_first_albums, {:clone=>:not_first_albums}.merge(@els) if @els
    Artist.one_to_many :last_two_albums, {:clone=>:last_two_albums}.merge(@els) if @els
    @album.update(:artist => @artist)
    middle_album = @middle_album.call
    diff_album = @diff_album.call
    ar = @pr.call[1]

    ars = Artist.eager(:first_two_albums, :second_two_albums, :not_first_albums, :last_two_albums).order(:name).all
    ars.should == [@artist, ar]
    ars.first.first_two_albums.should == [@album, middle_album]
    ars.first.second_two_albums.should == [middle_album, diff_album]
    ars.first.not_first_albums.should == [middle_album, diff_album]
    ars.first.last_two_albums.should == [diff_album, middle_album]
    ars.last.first_two_albums.should == []
    ars.last.second_two_albums.should == []
    ars.last.not_first_albums.should == []
    ars.last.last_two_albums.should == []

    # Check that no extra columns got added by the eager loading
    ars.first.first_two_albums.map{|x| x.values}.should == [@album, middle_album].map{|x| x.values}
    ars.first.second_two_albums.map{|x| x.values}.should == [middle_album, diff_album].map{|x| x.values}
    ars.first.not_first_albums.map{|x| x.values}.should == [middle_album, diff_album].map{|x| x.values}
    ars.first.last_two_albums.map{|x| x.values}.should == [diff_album, middle_album].map{|x| x.values}
  end
end

shared_examples_for "many_to_many eager limit strategies" do
  specify "should correctly handle limits and offsets when eager loading many_to_many associations" do
    Album.send @many_to_many_method||:many_to_many, :first_two_tags, {:clone=>:first_two_tags}.merge(@els) if @els
    Album.send @many_to_many_method||:many_to_many, :second_two_tags, {:clone=>:second_two_tags}.merge(@els) if @els
    Album.send @many_to_many_method||:many_to_many, :not_first_tags, {:clone=>:not_first_tags}.merge(@els) if @els
    Album.send @many_to_many_method||:many_to_many, :last_two_tags, {:clone=>:last_two_tags}.merge(@els) if @els
    tu, tv = @other_tags.call
    al = @pr.call.first

    als = Album.eager(:first_two_tags, :second_two_tags, :not_first_tags, :last_two_tags).order(:name).all
    als.should == [@album, al]
    als.first.first_two_tags.should == [@tag, tu]
    als.first.second_two_tags.should == [tu, tv]
    als.first.not_first_tags.should == [tu, tv]
    als.first.last_two_tags.should == [tv, tu]
    als.last.first_two_tags.should == []
    als.last.second_two_tags.should == []
    als.last.last_two_tags.should == []

    # Check that no extra columns got added by the eager loading
    als.first.first_two_tags.map{|x| x.values}.should == [@tag, tu].map{|x| x.values}
    als.first.second_two_tags.map{|x| x.values}.should == [tu, tv].map{|x| x.values}
    als.first.not_first_tags.map{|x| x.values}.should == [tu, tv].map{|x| x.values}
    als.first.last_two_tags.map{|x| x.values}.should == [tv, tu].map{|x| x.values}
  end
end

shared_examples_for "many_through_many eager limit strategies" do
  specify "should correctly handle limits and offsets when eager loading many_through_many associations" do
    Artist.many_through_many :first_two_tags, {:clone=>:first_two_tags}.merge(@els) if @els
    Artist.many_through_many :second_two_tags, {:clone=>:second_two_tags}.merge(@els) if @els
    Artist.many_through_many :not_first_tags, {:clone=>:not_first_tags}.merge(@els) if @els
    Artist.many_through_many :last_two_tags, {:clone=>:last_two_tags}.merge(@els) if @els
    @album.update(:artist => @artist)
    tu, tv = @other_tags.call
    ar = @pr.call[1]

    ars = Artist.eager(:first_two_tags, :second_two_tags, :not_first_tags, :last_two_tags).order(:name).all
    ars.should == [@artist, ar]
    ars.first.first_two_tags.should == [@tag, tu]
    ars.first.second_two_tags.should == [tu, tv]
    ars.first.not_first_tags.should == [tu, tv]
    ars.first.last_two_tags.should == [tv, tu]
    ars.last.first_two_tags.should == []
    ars.last.second_two_tags.should == []
    ars.last.last_two_tags.should == []

    # Check that no extra columns got added by the eager loading
    ars.first.first_two_tags.map{|x| x.values}.should == [@tag, tu].map{|x| x.values}
    ars.first.second_two_tags.map{|x| x.values}.should == [tu, tv].map{|x| x.values}
    ars.first.not_first_tags.map{|x| x.values}.should == [tu, tv].map{|x| x.values}
    ars.first.last_two_tags.map{|x| x.values}.should == [tv, tu].map{|x| x.values}
  end
end

shared_examples_for "eager limit strategies" do
  it_should_behave_like "one_to_one eager limit strategies"
  it_should_behave_like "one_to_many eager limit strategies"
  it_should_behave_like "many_to_many eager limit strategies"
  it_should_behave_like "many_through_many eager limit strategies"
end

shared_examples_for "filtering/excluding by associations" do
  specify "should work correctly when filtering by associations" do
    @album.update(:artist => @artist)
    @album.add_tag(@tag)

    @Artist.filter(:albums=>@album).all.should == [@artist]
    @Artist.filter(:first_album=>@album).all.should == [@artist]
    @Album.filter(:artist=>@artist).all.should == [@album]
    @Album.filter(:tags=>@tag).all.should == [@album]
    @Album.filter(:alias_tags=>@tag).all.should == [@album]
    @Tag.filter(:albums=>@album).all.should == [@tag]
    @Album.filter(:artist=>@artist, :tags=>@tag).all.should == [@album]
    @artist.albums_dataset.filter(:tags=>@tag).all.should == [@album]
  end

  specify "should work correctly when excluding by associations" do
    @album.update(:artist => @artist)
    @album.add_tag(@tag)
    album, artist, tag = @pr.call

    @Artist.exclude(:albums=>@album).all.should == [artist]
    @Artist.exclude(:first_album=>@album).all.should == [artist]
    @Album.exclude(:artist=>@artist).all.should == [album]
    @Album.exclude(:tags=>@tag).all.should == [album]
    @Album.exclude(:alias_tags=>@tag).all.should == [album]
    @Tag.exclude(:albums=>@album).all.should == [tag]
    @Album.exclude(:artist=>@artist, :tags=>@tag).all.should == [album]
  end

  specify "should work correctly when filtering by multiple associations" do
    album, artist, tag = @pr.call
    @album.update(:artist => @artist)
    @album.add_tag(@tag)

    @Artist.filter(:albums=>[@album, album]).all.should == [@artist]
    @Artist.filter(:first_album=>[@album, album]).all.should == [@artist]
    @Album.filter(:artist=>[@artist, artist]).all.should == [@album]
    @Album.filter(:tags=>[@tag, tag]).all.should == [@album]
    @Album.filter(:alias_tags=>[@tag, tag]).all.should == [@album]
    @Tag.filter(:albums=>[@album, album]).all.should == [@tag]
    @Album.filter(:artist=>[@artist, artist], :tags=>[@tag, tag]).all.should == [@album]
    @artist.albums_dataset.filter(:tags=>[@tag, tag]).all.should == [@album]

    album.add_tag(tag)

    @Artist.filter(:albums=>[@album, album]).all.should == [@artist]
    @Artist.filter(:first_album=>[@album, album]).all.should == [@artist]
    @Album.filter(:artist=>[@artist, artist]).all.should == [@album]
    @Album.filter(:tags=>[@tag, tag]).all.sort_by{|x| x.pk}.should == [@album, album]
    @Album.filter(:alias_tags=>[@tag, tag]).all.sort_by{|x| x.pk}.should == [@album, album]
    @Tag.filter(:albums=>[@album, album]).all.sort_by{|x| x.pk}.should == [@tag, tag]
    @Album.filter(:artist=>[@artist, artist], :tags=>[@tag, tag]).all.should == [@album]

    album.update(:artist => artist)

    @Artist.filter(:albums=>[@album, album]).all.sort_by{|x| x.pk}.should == [@artist, artist]
    @Artist.filter(:first_album=>[@album, album]).all.sort_by{|x| x.pk}.should == [@artist, artist]
    @Album.filter(:artist=>[@artist, artist]).all.sort_by{|x| x.pk}.should == [@album, album]
    @Album.filter(:tags=>[@tag, tag]).all.sort_by{|x| x.pk}.should == [@album, album]
    @Album.filter(:alias_tags=>[@tag, tag]).all.sort_by{|x| x.pk}.should == [@album, album]
    @Tag.filter(:albums=>[@album, album]).all.sort_by{|x| x.pk}.should == [@tag, tag]
    @Album.filter(:artist=>[@artist, artist], :tags=>[@tag, tag]).all.sort_by{|x| x.pk}.should == [@album, album]
  end
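
  # Note: filtering/excluding by an association, as above, does not load the
  # associated objects; Sequel translates the model object (or array) on the
  # right-hand side into a condition on the association's keys, using a
  # subquery against the join table for many_to_many associations.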
specify "should work correctly when excluding by multiple associations" do album, artist, tag = @pr.call @Artist.exclude(:albums=>[@album, album]).all.sort_by{|x| x.pk}.should == [@artist, artist] @Artist.exclude(:first_album=>[@album, album]).all.sort_by{|x| x.pk}.should == [@artist, artist] @Album.exclude(:artist=>[@artist, artist]).all.sort_by{|x| x.pk}.should == [@album, album] @Album.exclude(:tags=>[@tag, tag]).all.sort_by{|x| x.pk}.should == [@album, album] @Album.exclude(:alias_tags=>[@tag, tag]).all.sort_by{|x| x.pk}.should == [@album, album] @Tag.exclude(:albums=>[@album, album]).all.sort_by{|x| x.pk}.should == [@tag, tag] @Album.exclude(:artist=>[@artist, artist], :tags=>[@tag, tag]).all.sort_by{|x| x.pk}.should == [@album, album] @album.update(:artist => @artist) @album.add_tag(@tag) @Artist.exclude(:albums=>[@album, album]).all.sort_by{|x| x.pk}.should == [artist] @Artist.exclude(:first_album=>[@album, album]).all.sort_by{|x| x.pk}.should == [artist] @Album.exclude(:artist=>[@artist, artist]).all.sort_by{|x| x.pk}.should == [album] @Album.exclude(:tags=>[@tag, tag]).all.sort_by{|x| x.pk}.should == [album] @Album.exclude(:alias_tags=>[@tag, tag]).all.sort_by{|x| x.pk}.should == [album] @Tag.exclude(:albums=>[@album, album]).all.sort_by{|x| x.pk}.should == [tag] @Album.exclude(:artist=>[@artist, artist], :tags=>[@tag, tag]).all.sort_by{|x| x.pk}.should == [album] album.add_tag(tag) @Artist.exclude(:albums=>[@album, album]).all.should == [artist] @Artist.exclude(:first_album=>[@album, album]).all.should == [artist] @Album.exclude(:artist=>[@artist, artist]).all.should == [album] @Album.exclude(:tags=>[@tag, tag]).all.should == [] @Album.exclude(:alias_tags=>[@tag, tag]).all.should == [] @Tag.exclude(:albums=>[@album, album]).all.should == [] @Album.exclude(:artist=>[@artist, artist], :tags=>[@tag, tag]).all.should == [album] album.update(:artist => artist) @Artist.exclude(:albums=>[@album, album]).all.should == [] @Artist.exclude(:first_album=>[@album, album]).all.should == [] @Album.exclude(:artist=>[@artist, artist]).all.should == [] @Album.exclude(:tags=>[@tag, tag]).all.should == [] @Album.exclude(:alias_tags=>[@tag, tag]).all.should == [] @Tag.exclude(:albums=>[@album, album]).all.should == [] @Album.exclude(:artist=>[@artist, artist], :tags=>[@tag, tag]).all.should == [] end specify "should work correctly when excluding by associations in regards to NULL values" do @Artist.exclude(:albums=>@album).all.should == [@artist] @Artist.exclude(:first_album=>@album).all.should == [@artist] @Album.exclude(:artist=>@artist).all.should == [@album] @Album.exclude(:tags=>@tag).all.should == [@album] @Album.exclude(:alias_tags=>@tag).all.should == [@album] @Tag.exclude(:albums=>@album).all.should == [@tag] @Album.exclude(:artist=>@artist, :tags=>@tag).all.should == [@album] @album.update(:artist => @artist) @artist.albums_dataset.exclude(:tags=>@tag).all.should == [@album] end specify "should handle NULL values in join table correctly when filtering/excluding many_to_many associations" do @ins.call @Album.exclude(:tags=>@tag).all.should == [@album] @Album.exclude(:alias_tags=>@tag).all.should == [@album] @album.add_tag(@tag) @Album.filter(:tags=>@tag).all.should == [@album] @Album.filter(:alias_tags=>@tag).all.should == [@album] album, tag = @pr.call.values_at(0, 2) @Album.exclude(:tags=>@tag).all.should == [album] @Album.exclude(:alias_tags=>@tag).all.should == [album] @Album.exclude(:tags=>tag).all.sort_by{|x| x.pk}.should == [@album, album] 
@Album.exclude(:alias_tags=>tag).all.sort_by{|x| x.pk}.should == [@album, album] end specify "should work correctly when filtering by association datasets" do album, artist, tag = @pr.call @album.update(:artist => @artist) @album.add_tag(@tag) album.add_tag(tag) album.update(:artist => artist) @Artist.filter(:albums=>@Album).all.sort_by{|x| x.pk}.should == [@artist, artist] @Artist.filter(:albums=>@Album.filter(Array(Album.primary_key).map{|k| Sequel.qualify(Album.table_name, k)}.zip(Array(album.pk)))).all.sort_by{|x| x.pk}.should == [artist] @Artist.filter(:albums=>@Album.filter(1=>0)).all.sort_by{|x| x.pk}.should == [] @Artist.filter(:first_album=>@Album).all.sort_by{|x| x.pk}.should == [@artist, artist] @Artist.filter(:first_album=>@Album.filter(Array(Album.primary_key).map{|k| Sequel.qualify(Album.table_name, k)}.zip(Array(album.pk)))).all.sort_by{|x| x.pk}.should == [artist] @Artist.filter(:first_album=>@Album.filter(1=>0)).all.sort_by{|x| x.pk}.should == [] @Album.filter(:artist=>@Artist).all.sort_by{|x| x.pk}.should == [@album, album] @Album.filter(:artist=>@Artist.filter(Array(Artist.primary_key).map{|k| Sequel.qualify(Artist.table_name, k)}.zip(Array(artist.pk)))).all.sort_by{|x| x.pk}.should == [album] @Album.filter(:artist=>@Artist.filter(1=>0)).all.sort_by{|x| x.pk}.should == [] @Album.filter(:tags=>@Tag).all.sort_by{|x| x.pk}.should == [@album, album] @Album.filter(:tags=>@Tag.filter(Array(Tag.primary_key).map{|k| Sequel.qualify(Tag.table_name, k)}.zip(Array(tag.pk)))).all.sort_by{|x| x.pk}.should == [album] @Album.filter(:tags=>@Tag.filter(1=>0)).all.sort_by{|x| x.pk}.should == [] @Album.filter(:alias_tags=>@Tag).all.sort_by{|x| x.pk}.should == [@album, album] @Album.filter(:alias_tags=>@Tag.filter(Array(Tag.primary_key).map{|k| Sequel.qualify(Tag.table_name, k)}.zip(Array(tag.pk)))).all.sort_by{|x| x.pk}.should == [album] @Album.filter(:alias_tags=>@Tag.filter(1=>0)).all.sort_by{|x| x.pk}.should == [] @Tag.filter(:albums=>@Album).all.sort_by{|x| x.pk}.should == [@tag, tag] @Tag.filter(:albums=>@Album.filter(Array(Album.primary_key).map{|k| Sequel.qualify(Album.table_name, k)}.zip(Array(album.pk)))).all.sort_by{|x| x.pk}.should == [tag] @Tag.filter(:albums=>@Album.filter(1=>0)).all.sort_by{|x| x.pk}.should == [] end specify "should work correctly when excluding by association datasets" do album, artist, tag = @pr.call @album.update(:artist => @artist) @album.add_tag(@tag) album.add_tag(tag) album.update(:artist => artist) @Artist.exclude(:albums=>@Album).all.sort_by{|x| x.pk}.should == [] @Artist.exclude(:albums=>@Album.filter(Array(Album.primary_key).map{|k| Sequel.qualify(Album.table_name, k)}.zip(Array(album.pk)))).all.sort_by{|x| x.pk}.should == [@artist] @Artist.exclude(:albums=>@Album.filter(1=>0)).all.sort_by{|x| x.pk}.should == [@artist, artist] @Album.exclude(:artist=>@Artist).all.sort_by{|x| x.pk}.should == [] @Album.exclude(:artist=>@Artist.filter(Array(Artist.primary_key).map{|k| Sequel.qualify(Artist.table_name, k)}.zip(Array(artist.pk)))).all.sort_by{|x| x.pk}.should == [@album] @Album.exclude(:artist=>@Artist.filter(1=>0)).all.sort_by{|x| x.pk}.should == [@album, album] @Album.exclude(:tags=>@Tag).all.sort_by{|x| x.pk}.should == [] @Album.exclude(:tags=>@Tag.filter(Array(Tag.primary_key).map{|k| Sequel.qualify(Tag.table_name, k)}.zip(Array(tag.pk)))).all.sort_by{|x| x.pk}.should == [@album] @Album.exclude(:tags=>@Tag.filter(1=>0)).all.sort_by{|x| x.pk}.should == [@album, album] @Album.exclude(:alias_tags=>@Tag).all.sort_by{|x| x.pk}.should == [] 
@Album.exclude(:alias_tags=>@Tag.filter(Array(Tag.primary_key).map{|k| Sequel.qualify(Tag.table_name, k)}.zip(Array(tag.pk)))).all.sort_by{|x| x.pk}.should == [@album] @Album.exclude(:alias_tags=>@Tag.filter(1=>0)).all.sort_by{|x| x.pk}.should == [@album, album] @Tag.exclude(:albums=>@Album).all.sort_by{|x| x.pk}.should == [] @Tag.exclude(:albums=>@Album.filter(Array(Album.primary_key).map{|k| Sequel.qualify(Album.table_name, k)}.zip(Array(album.pk)))).all.sort_by{|x| x.pk}.should == [@tag] @Tag.exclude(:albums=>@Album.filter(1=>0)).all.sort_by{|x| x.pk}.should == [@tag, tag] end end shared_examples_for "basic regular and composite key associations" do specify "should return no objects if none are associated" do @album.artist.should == nil @artist.first_album.should == nil @artist.albums.should == [] @album.tags.should == [] @album.alias_tags.should == [] @tag.albums.should == [] end specify "should have add and set methods work any associated objects" do @album.update(:artist => @artist) @album.add_tag(@tag) @album.reload @artist.reload @tag.reload @album.artist.should == @artist @artist.first_album.should == @album @artist.albums.should == [@album] @album.tags.should == [@tag] @album.alias_tags.should == [@tag] @tag.albums.should == [@album] end specify "should work correctly with prepared_statements_association plugin" do @album.update(:artist => @artist) @album.add_tag(@tag) @album.reload @artist.reload @tag.reload [Tag, Album, Artist].each{|x| x.plugin :prepared_statements_associations} @album.artist.should == @artist @artist.first_album.should == @album @artist.albums.should == [@album] @album.tags.should == [@tag] @album.alias_tags.should == [@tag] @tag.albums.should == [@album] end specify "should have working dataset associations" do album, artist, tag = @pr.call Tag.albums.all.should == [] Album.artists.all.should == [] Album.tags.all.should == [] Album.alias_tags.all.should == [] Artist.albums.all.should == [] Artist.tags.all.should == [] unless @no_many_through_many Artist.albums.tags.all.should == [] @album.update(:artist => @artist) @album.add_tag(@tag) Tag.albums.all.should == [@album] Album.artists.all.should == [@artist] Album.tags.all.should == [@tag] Album.alias_tags.all.should == [@tag] Artist.albums.all.should == [@album] Artist.tags.all.should == [@tag] unless @no_many_through_many Artist.albums.tags.all.should == [@tag] album.add_tag(tag) album.update(:artist => artist) Tag.albums.order(:name).all.should == [@album, album] Album.artists.order(:name).all.should == [@artist, artist] Album.tags.order(:name).all.should == [@tag, tag] Album.alias_tags.order(:name).all.should == [@tag, tag] Artist.albums.order(:name).all.should == [@album, album] Artist.tags.order(:name).all.should == [@tag, tag] unless @no_many_through_many Artist.albums.tags.order(:name).all.should == [@tag, tag] Tag.filter(Tag.qualified_primary_key_hash(tag.pk)).albums.all.should == [album] Album.filter(Album.qualified_primary_key_hash(album.pk)).artists.all.should == [artist] Album.filter(Album.qualified_primary_key_hash(album.pk)).tags.all.should == [tag] Album.filter(Album.qualified_primary_key_hash(album.pk)).alias_tags.all.should == [tag] Artist.filter(Artist.qualified_primary_key_hash(artist.pk)).albums.all.should == [album] Artist.filter(Artist.qualified_primary_key_hash(artist.pk)).tags.all.should == [tag] unless @no_many_through_many Artist.filter(Artist.qualified_primary_key_hash(artist.pk)).albums.tags.all.should == [tag] 
Artist.filter(Artist.qualified_primary_key_hash(artist.pk)).albums.filter(Album.qualified_primary_key_hash(album.pk)).tags.all.should == [tag] Artist.filter(Artist.qualified_primary_key_hash(@artist.pk)).albums.filter(Album.qualified_primary_key_hash(@album.pk)).tags.all.should == [@tag] Artist.filter(Artist.qualified_primary_key_hash(@artist.pk)).albums.filter(Album.qualified_primary_key_hash(album.pk)).tags.all.should == [] Artist.filter(Artist.qualified_primary_key_hash(artist.pk)).albums.filter(Album.qualified_primary_key_hash(@album.pk)).tags.all.should == [] end specify "should have remove methods work" do @album.update(:artist => @artist) @album.add_tag(@tag) @album.update(:artist => nil) @album.remove_tag(@tag) @album.add_alias_tag(@tag) @album.remove_alias_tag(@tag) @album.reload @artist.reload @tag.reload @album.artist.should == nil @artist.albums.should == [] @album.tags.should == [] @tag.albums.should == [] @album.add_alias_tag(@tag) @album.remove_alias_tag(@tag) @album.reload @album.alias_tags.should == [] end specify "should have remove_all methods work" do @artist.add_album(@album) @album.add_tag(@tag) @album.remove_all_tags @artist.remove_all_albums @album.reload @artist.reload @tag.reload @album.artist.should == nil @artist.albums.should == [] @album.tags.should == [] @tag.albums.should == [] @album.add_alias_tag(@tag) @album.remove_all_alias_tags @album.reload @album.alias_tags.should == [] end specify "should eager load via eager correctly" do @album.update(:artist => @artist) @album.add_tag(@tag) a = Artist.eager(:albums=>[:tags, :alias_tags]).eager(:first_album).all a.should == [@artist] a.first.albums.should == [@album] a.first.first_album.should == @album a.first.albums.first.tags.should == [@tag] a.first.albums.first.alias_tags.should == [@tag] a = Tag.eager(:albums=>:artist).all a.should == [@tag] a.first.albums.should == [@album] a.first.albums.first.artist.should == @artist end specify "should eager load via eager_graph correctly" do @album.update(:artist => @artist) @album.add_tag(@tag) a = Artist.eager_graph(:albums=>[:tags, :alias_tags]).eager_graph(:first_album).all a.should == [@artist] a.first.albums.should == [@album] a.first.first_album.should == @album a.first.albums.first.tags.should == [@tag] a.first.albums.first.alias_tags.should == [@tag] a = Tag.eager_graph(:albums=>:artist).all a.should == [@tag] a.first.albums.should == [@album] a.first.albums.first.artist.should == @artist end describe "when filtering/excluding by associations" do before do @Artist = Artist.dataset @Album = Album.dataset @Tag = Tag.dataset end it_should_behave_like "filtering/excluding by associations" end end shared_examples_for "regular and composite key associations" do it_should_behave_like "basic regular and composite key associations" describe "when filtering/excluding by associations when joining" do def self_join(c) c.join(Sequel.as(c.table_name, :b), Array(c.primary_key).zip(Array(c.primary_key))).select_all(c.table_name) end before do @Artist = self_join(Artist) @Album = self_join(Album) @Tag = self_join(Tag) end it_should_behave_like "filtering/excluding by associations" end describe "with :eager_limit_strategy=>:ruby" do before do @els = {:eager_limit_strategy=>:ruby} end it_should_behave_like "eager limit strategies" end describe "with :eager_limit_strategy=>true" do before do @els = {:eager_limit_strategy=>true} end it_should_behave_like "one_to_one eager limit strategies" end if DB.dataset.supports_ordered_distinct_on? 
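
  # The strategies exercised here differ in where the limit is applied:
  # :ruby slices the full result set in memory after loading, while
  # :window_function (below) pushes the limit into SQL via a window function,
  # and so is only used when the database supports window functions.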
describe "with :eager_limit_strategy=>:window_function" do before do @els = {:eager_limit_strategy=>:window_function} end it_should_behave_like "eager limit strategies" end if DB.dataset.supports_window_functions? specify "should work with a many_through_many association" do @album.update(:artist => @artist) @album.add_tag(@tag) @album.reload @artist.reload @tag.reload @album.tags.should == [@tag] a = Artist.eager(:tags).all a.should == [@artist] a.first.tags.should == [@tag] a = Artist.eager_graph(:tags).all a.should == [@artist] a.first.tags.should == [@tag] a = Album.eager(:artist=>:tags).all a.should == [@album] a.first.artist.should == @artist a.first.artist.tags.should == [@tag] a = Album.eager_graph(:artist=>:tags).all a.should == [@album] a.first.artist.should == @artist a.first.artist.tags.should == [@tag] end end describe "Sequel::Model Simple Associations" do before(:all) do @db = DB @db.drop_table?(:albums_tags, :tags, :albums, :artists) @db.create_table(:artists) do primary_key :id String :name end @db.create_table(:albums) do primary_key :id String :name foreign_key :artist_id, :artists end @db.create_table(:tags) do primary_key :id String :name end @db.create_table(:albums_tags) do foreign_key :album_id, :albums foreign_key :tag_id, :tags end end before do [:albums_tags, :tags, :albums, :artists].each{|t| @db[t].delete} class ::Artist < Sequel::Model(@db) plugin :dataset_associations one_to_many :albums, :order=>:name one_to_one :first_album, :class=>:Album, :order=>:name one_to_one :second_album, :class=>:Album, :order=>:name, :limit=>[nil, 1] one_to_one :last_album, :class=>:Album, :order=>Sequel.desc(:name) one_to_many :first_two_albums, :class=>:Album, :order=>:name, :limit=>2 one_to_many :second_two_albums, :class=>:Album, :order=>:name, :limit=>[2, 1] one_to_many :not_first_albums, :class=>:Album, :order=>:name, :limit=>[nil, 1] one_to_many :last_two_albums, :class=>:Album, :order=>Sequel.desc(:name), :limit=>2 plugin :many_through_many many_through_many :tags, [[:albums, :artist_id, :id], [:albums_tags, :album_id, :tag_id]] many_through_many :first_two_tags, :clone=>:tags, :order=>:tags__name, :limit=>2 many_through_many :second_two_tags, :clone=>:tags, :order=>:tags__name, :limit=>[2, 1] many_through_many :not_first_tags, :clone=>:tags, :order=>:tags__name, :limit=>[nil, 1] many_through_many :last_two_tags, :clone=>:tags, :order=>Sequel.desc(:tags__name), :limit=>2 end class ::Album < Sequel::Model(@db) plugin :dataset_associations many_to_one :artist, :reciprocal=>nil many_to_many :tags, :right_key=>:tag_id many_to_many :alias_tags, :clone=>:tags, :join_table=>:albums_tags___at many_to_many :first_two_tags, :clone=>:tags, :order=>:name, :limit=>2 many_to_many :second_two_tags, :clone=>:tags, :order=>:name, :limit=>[2, 1] many_to_many :not_first_tags, :clone=>:tags, :order=>:name, :limit=>[nil, 1] many_to_many :last_two_tags, :clone=>:tags, :order=>Sequel.desc(:name), :limit=>2 end class ::Tag < Sequel::Model(@db) plugin :dataset_associations many_to_many :albums end @album = Album.create(:name=>'Al') @artist = Artist.create(:name=>'Ar') @tag = Tag.create(:name=>'T') @same_album = lambda{Album.create(:name=>'Al', :artist_id=>@artist.id)} @diff_album = lambda{Album.create(:name=>'lA', :artist_id=>@artist.id)} @middle_album = lambda{Album.create(:name=>'Bl', :artist_id=>@artist.id)} @other_tags = lambda{t = [Tag.create(:name=>'U'), Tag.create(:name=>'V')]; @db[:albums_tags].insert([:album_id, :tag_id], Tag.select(@album.id, :id)); t} @pr = 
lambda{[Album.create(:name=>'Al2'),Artist.create(:name=>'Ar2'),Tag.create(:name=>'T2')]} @ins = lambda{@db[:albums_tags].insert(:tag_id=>@tag.id)} end after do [:Tag, :Album, :Artist].each{|x| Object.send(:remove_const, x)} end after(:all) do @db.drop_table?(:albums_tags, :tags, :albums, :artists) end it_should_behave_like "regular and composite key associations" specify "should handle many_to_one associations with same name as :key" do Album.def_column_alias(:artist_id_id, :artist_id) Album.many_to_one :artist_id, :key_column =>:artist_id, :class=>Artist @album.update(:artist_id_id => @artist.id) @album.artist_id.should == @artist as = Album.eager(:artist_id).all as.should == [@album] as.map{|a| a.artist_id}.should == [@artist] as = Album.eager_graph(:artist_id).all as.should == [@album] as.map{|a| a.artist_id}.should == [@artist] end specify "should handle aliased tables when eager_graphing" do @album.update(:artist => @artist) @album.add_tag(@tag) Artist.set_dataset(:artists___ar) Album.set_dataset(:albums___a) Tag.set_dataset(:tags___t) Artist.one_to_many :balbums, :class=>Album, :key=>:artist_id, :reciprocal=>nil Album.many_to_many :btags, :class=>Tag, :join_table=>:albums_tags, :right_key=>:tag_id Album.many_to_one :bartist, :class=>Artist, :key=>:artist_id Tag.many_to_many :balbums, :class=>Album, :join_table=>:albums_tags, :right_key=>:album_id a = Artist.eager_graph(:balbums=>:btags).all a.should == [@artist] a.first.balbums.should == [@album] a.first.balbums.first.btags.should == [@tag] a = Tag.eager_graph(:balbums=>:bartist).all a.should == [@tag] a.first.balbums.should == [@album] a.first.balbums.first.bartist.should == @artist end specify "should have add method accept hashes and create new records" do @artist.remove_all_albums Album.dataset.delete @album = @artist.add_album(:name=>'Al2') Album.first[:name].should == 'Al2' @artist.albums_dataset.first[:name].should == 'Al2' @album.remove_all_tags Tag.dataset.delete @album.add_tag(:name=>'T2') Tag.first[:name].should == 'T2' @album.tags_dataset.first[:name].should == 'T2' end specify "should have add method accept primary key and add related records" do @artist.remove_all_albums @artist.add_album(@album.id) @artist.albums_dataset.first[:id].should == @album.id @album.remove_all_tags @album.add_tag(@tag.id) @album.tags_dataset.first[:id].should == @tag.id end specify "should have remove method accept primary key and remove related album" do @artist.add_album(@album) @artist.reload.remove_album(@album.id) @artist.reload.albums.should == [] @album.add_tag(@tag) @album.reload.remove_tag(@tag.id) @tag.reload.albums.should == [] end specify "should handle dynamic callbacks for regular loading" do @artist.add_album(@album) @artist.albums.should == [@album] @artist.albums(proc{|ds| ds.exclude(:id=>@album.id)}).should == [] @artist.albums(proc{|ds| ds.filter(:id=>@album.id)}).should == [@album] @album.artist.should == @artist @album.artist(proc{|ds| ds.exclude(:id=>@artist.id)}).should == nil @album.artist(proc{|ds| ds.filter(:id=>@artist.id)}).should == @artist @artist.albums{|ds| ds.exclude(:id=>@album.id)}.should == [] @artist.albums{|ds| ds.filter(:id=>@album.id)}.should == [@album] @album.artist{|ds| ds.exclude(:id=>@artist.id)}.should == nil @album.artist{|ds| ds.filter(:id=>@artist.id)}.should == @artist end specify "should handle dynamic callbacks for eager loading via eager and eager_graph" do @artist.add_album(@album) @album.add_tag(@tag) album2 = @artist.add_album(:name=>'Foo') tag2 = album2.add_tag(:name=>'T2') artist = 
Artist.eager(:albums=>:tags).all.first artist.albums.should == [@album, album2] artist.albums.map{|x| x.tags}.should == [[@tag], [tag2]] artist = Artist.eager_graph(:albums=>:tags).all.first artist.albums.should == [@album, album2] artist.albums.map{|x| x.tags}.should == [[@tag], [tag2]] artist = Artist.eager(:albums=>{proc{|ds| ds.where(:id=>album2.id)}=>:tags}).all.first artist.albums.should == [album2] artist.albums.first.tags.should == [tag2] artist = Artist.eager_graph(:albums=>{proc{|ds| ds.where(:id=>album2.id)}=>:tags}).all.first artist.albums.should == [album2] artist.albums.first.tags.should == [tag2] end specify "should have remove method raise an error for one_to_many records if the object isn't already associated" do proc{@artist.remove_album(@album.id)}.should raise_error(Sequel::Error) proc{@artist.remove_album(@album)}.should raise_error(Sequel::Error) end end describe "Sequel::Model Composite Key Associations" do before(:all) do @db = DB @db.drop_table?(:albums_tags, :tags, :albums, :artists) @db.create_table(:artists) do Integer :id1 Integer :id2 String :name primary_key [:id1, :id2] end @db.create_table(:albums) do Integer :id1 Integer :id2 String :name Integer :artist_id1 Integer :artist_id2 foreign_key [:artist_id1, :artist_id2], :artists primary_key [:id1, :id2] end @db.create_table(:tags) do Integer :id1 Integer :id2 String :name primary_key [:id1, :id2] end @db.create_table(:albums_tags) do Integer :album_id1 Integer :album_id2 Integer :tag_id1 Integer :tag_id2 foreign_key [:album_id1, :album_id2], :albums foreign_key [:tag_id1, :tag_id2], :tags end end before do [:albums_tags, :tags, :albums, :artists].each{|t| @db[t].delete} class ::Artist < Sequel::Model(@db) plugin :dataset_associations set_primary_key [:id1, :id2] unrestrict_primary_key one_to_many :albums, :key=>[:artist_id1, :artist_id2], :order=>:name one_to_one :first_album, :clone=>:albums, :order=>:name one_to_one :last_album, :clone=>:albums, :order=>Sequel.desc(:name) one_to_one :second_album, :clone=>:albums, :limit=>[nil, 1] one_to_many :first_two_albums, :clone=>:albums, :order=>:name, :limit=>2 one_to_many :second_two_albums, :clone=>:albums, :order=>:name, :limit=>[2, 1] one_to_many :not_first_albums, :clone=>:albums, :order=>:name, :limit=>[nil, 1] one_to_many :last_two_albums, :clone=>:albums, :order=>Sequel.desc(:name), :limit=>2 plugin :many_through_many many_through_many :tags, [[:albums, [:artist_id1, :artist_id2], [:id1, :id2]], [:albums_tags, [:album_id1, :album_id2], [:tag_id1, :tag_id2]]] many_through_many :first_two_tags, :clone=>:tags, :order=>:tags__name, :limit=>2 many_through_many :second_two_tags, :clone=>:tags, :order=>:tags__name, :limit=>[2, 1] many_through_many :not_first_tags, :clone=>:tags, :order=>:tags__name, :limit=>[nil, 1] many_through_many :last_two_tags, :clone=>:tags, :order=>Sequel.desc(:tags__name), :limit=>2 end class ::Album < Sequel::Model(@db) plugin :dataset_associations set_primary_key [:id1, :id2] unrestrict_primary_key many_to_one :artist, :key=>[:artist_id1, :artist_id2], :reciprocal=>nil many_to_many :tags, :left_key=>[:album_id1, :album_id2], :right_key=>[:tag_id1, :tag_id2] many_to_many :alias_tags, :clone=>:tags, :join_table=>:albums_tags___at many_to_many :first_two_tags, :clone=>:tags, :order=>:name, :limit=>2 many_to_many :second_two_tags, :clone=>:tags, :order=>:name, :limit=>[2, 1] many_to_many :not_first_tags, :clone=>:tags, :order=>:name, :limit=>[nil, 1] many_to_many :last_two_tags, :clone=>:tags, :order=>Sequel.desc(:name), :limit=>2 end class 
::Tag < Sequel::Model(@db) plugin :dataset_associations set_primary_key [:id1, :id2] unrestrict_primary_key many_to_many :albums, :right_key=>[:album_id1, :album_id2], :left_key=>[:tag_id1, :tag_id2] end @album = Album.create(:name=>'Al', :id1=>1, :id2=>2) @artist = Artist.create(:name=>'Ar', :id1=>3, :id2=>4) @tag = Tag.create(:name=>'T', :id1=>5, :id2=>6) @same_album = lambda{Album.create(:name=>'Al', :id1=>7, :id2=>8, :artist_id1=>3, :artist_id2=>4)} @diff_album = lambda{Album.create(:name=>'lA', :id1=>9, :id2=>10, :artist_id1=>3, :artist_id2=>4)} @middle_album = lambda{Album.create(:name=>'Bl', :id1=>13, :id2=>14, :artist_id1=>3, :artist_id2=>4)} @other_tags = lambda{t = [Tag.create(:name=>'U', :id1=>17, :id2=>18), Tag.create(:name=>'V', :id1=>19, :id2=>20)]; @db[:albums_tags].insert([:album_id1, :album_id2, :tag_id1, :tag_id2], Tag.select(1, 2, :id1, :id2)); t} @pr = lambda{[Album.create(:name=>'Al2', :id1=>11, :id2=>12),Artist.create(:name=>'Ar2', :id1=>13, :id2=>14),Tag.create(:name=>'T2', :id1=>15, :id2=>16)]} @ins = lambda{@db[:albums_tags].insert(:tag_id1=>@tag.id1, :tag_id2=>@tag.id2)} end after do [:Tag, :Album, :Artist].each{|x| Object.send(:remove_const, x)} end after(:all) do @db.drop_table?(:albums_tags, :tags, :albums, :artists) end it_should_behave_like "regular and composite key associations" specify "should have add method accept hashes and create new records" do @artist.remove_all_albums Album.dataset.delete @artist.add_album(:id1=>1, :id2=>2, :name=>'Al2') Album.first[:name].should == 'Al2' @artist.albums_dataset.first[:name].should == 'Al2' @album.remove_all_tags Tag.dataset.delete @album.add_tag(:id1=>1, :id2=>2, :name=>'T2') Tag.first[:name].should == 'T2' @album.tags_dataset.first[:name].should == 'T2' end specify "should have add method accept primary key and add related records" do @artist.remove_all_albums @artist.add_album([@album.id1, @album.id2]) @artist.albums_dataset.first.pk.should == [@album.id1, @album.id2] @album.remove_all_tags @album.add_tag([@tag.id1, @tag.id2]) @album.tags_dataset.first.pk.should == [@tag.id1, @tag.id2] end specify "should have remove method accept primary key and remove related album" do @artist.add_album(@album) @artist.reload.remove_album([@album.id1, @album.id2]) @artist.reload.albums.should == [] @album.add_tag(@tag) @album.reload.remove_tag([@tag.id1, @tag.id2]) @tag.reload.albums.should == [] end specify "should have remove method raise an error for one_to_many records if the object isn't already associated" do proc{@artist.remove_album([@album.id1, @album.id2])}.should raise_error(Sequel::Error) proc{@artist.remove_album(@album)}.should raise_error(Sequel::Error) end end describe "Sequel::Model pg_array_to_many" do before(:all) do @db = DB @db.extension :pg_array Sequel.extension :pg_array_ops @db.drop_table?(:tags, :albums, :artists) @db.create_table(:artists) do primary_key :id String :name end @db.create_table(:albums) do primary_key :id String :name foreign_key :artist_id, :artists column :tag_ids, 'int4[]' end @db.create_table(:tags) do primary_key :id String :name end end before do [:tags, :albums, :artists].each{|t| @db[t].delete} class ::Artist < Sequel::Model(@db) plugin :dataset_associations one_to_many :albums, :order=>:name one_to_one :first_album, :class=>:Album, :order=>:name end class ::Album < Sequel::Model(@db) plugin :dataset_associations plugin :pg_array_associations many_to_one :artist, :reciprocal=>nil pg_array_to_many :tags, :key=>:tag_ids, :save_after_modify=>true pg_array_to_many :alias_tags, 
:clone=>:tags pg_array_to_many :first_two_tags, :clone=>:tags, :order=>:name, :limit=>2 pg_array_to_many :second_two_tags, :clone=>:tags, :order=>:name, :limit=>[2, 1] pg_array_to_many :not_first_tags, :clone=>:tags, :order=>:name, :limit=>[nil, 1] pg_array_to_many :last_two_tags, :clone=>:tags, :order=>Sequel.desc(:name), :limit=>2 end class ::Tag < Sequel::Model(@db) plugin :dataset_associations plugin :pg_array_associations many_to_pg_array :albums end @album = Album.create(:name=>'Al') @artist = Artist.create(:name=>'Ar') @tag = Tag.create(:name=>'T') @many_to_many_method = :pg_array_to_many @no_many_through_many = true @same_album = lambda{Album.create(:name=>'Al', :artist_id=>@artist.id)} @diff_album = lambda{Album.create(:name=>'lA', :artist_id=>@artist.id)} @middle_album = lambda{Album.create(:name=>'Bl', :artist_id=>@artist.id)} @other_tags = lambda{t = [Tag.create(:name=>'U'), Tag.create(:name=>'V')]; Tag.all{|x| @album.add_tag(x)}; t} @pr = lambda{[Album.create(:name=>'Al2'),Artist.create(:name=>'Ar2'),Tag.create(:name=>'T2')]} @ins = lambda{} end after do [:Tag, :Album, :Artist].each{|x| Object.send(:remove_const, x)} end after(:all) do @db.drop_table?(:tags, :albums, :artists) end it_should_behave_like "basic regular and composite key associations" it_should_behave_like "many_to_many eager limit strategies" it "should handle adding and removing entries in array" do a = Album.create a.typecast_on_assignment = false a.add_tag(@tag) a.remove_tag(@tag) a.save end end if DB.database_type == :postgres && DB.adapter_scheme == :postgres && DB.server_version >= 90300 describe "Sequel::Model many_to_pg_array" do before(:all) do @db = DB @db.extension :pg_array Sequel.extension :pg_array_ops @db.drop_table?(:tags, :albums, :artists) @db.create_table(:artists) do primary_key :id String :name end @db.create_table(:albums) do primary_key :id String :name foreign_key :artist_id, :artists end @db.create_table(:tags) do primary_key :id String :name column :album_ids, 'int4[]' end end before do [:tags, :albums, :artists].each{|t| @db[t].delete} class ::Artist < Sequel::Model(@db) plugin :dataset_associations one_to_many :albums, :order=>:name one_to_one :first_album, :class=>:Album, :order=>:name end class ::Album < Sequel::Model(@db) plugin :dataset_associations plugin :pg_array_associations many_to_one :artist, :reciprocal=>nil many_to_pg_array :tags many_to_pg_array :alias_tags, :clone=>:tags many_to_pg_array :first_two_tags, :clone=>:tags, :order=>:name, :limit=>2 many_to_pg_array :second_two_tags, :clone=>:tags, :order=>:name, :limit=>[2, 1] many_to_pg_array :not_first_tags, :clone=>:tags, :order=>:name, :limit=>[nil, 1] many_to_pg_array :last_two_tags, :clone=>:tags, :order=>Sequel.desc(:name), :limit=>2 end class ::Tag < Sequel::Model(@db) plugin :dataset_associations plugin :pg_array_associations pg_array_to_many :albums end @album = Album.create(:name=>'Al') @artist = Artist.create(:name=>'Ar') @tag = Tag.create(:name=>'T') @many_to_many_method = :pg_array_to_many @no_many_through_many = true @same_album = lambda{Album.create(:name=>'Al', :artist_id=>@artist.id)} @diff_album = lambda{Album.create(:name=>'lA', :artist_id=>@artist.id)} @middle_album = lambda{Album.create(:name=>'Bl', :artist_id=>@artist.id)} @other_tags = lambda{t = [Tag.create(:name=>'U'), Tag.create(:name=>'V')]; Tag.all{|x| @album.add_tag(x)}; @tag.refresh; t.each{|x| x.refresh}; t} @pr = lambda{[Album.create(:name=>'Al2'),Artist.create(:name=>'Ar2'),Tag.create(:name=>'T2')]} @ins = lambda{} end after do [:Tag, 
:Album, :Artist].each{|x| Object.send(:remove_const, x)} end after(:all) do @db.drop_table?(:tags, :albums, :artists) end it_should_behave_like "basic regular and composite key associations" it_should_behave_like "many_to_many eager limit strategies" it "should handle adding and removing entries in array" do a = Album.create @tag.typecast_on_assignment = false a.add_tag(@tag) a.remove_tag(@tag) end end if DB.database_type == :postgres && DB.adapter_scheme == :postgres && DB.server_version >= 90300 describe "Sequel::Model Associations with clashing column names" do before(:all) do @db = DB @db.drop_table?(:bars_foos, :bars, :foos) @db.create_table(:foos) do primary_key :id Integer :object_id end @db.create_table(:bars) do primary_key :id Integer :object_id end @db.create_table(:bars_foos) do Integer :foo_id Integer :object_id primary_key [:foo_id, :object_id] end end before do [:bars_foos, :bars, :foos].each{|t| @db[t].delete} @Foo = Class.new(Sequel::Model(:foos)) @Bar = Class.new(Sequel::Model(:bars)) @Foo.def_column_alias(:obj_id, :object_id) @Bar.def_column_alias(:obj_id, :object_id) @Foo.one_to_many :bars, :primary_key=>:obj_id, :primary_key_column=>:object_id, :key=>:object_id, :key_method=>:obj_id, :class=>@Bar @Foo.one_to_one :bar, :primary_key=>:obj_id, :primary_key_column=>:object_id, :key=>:object_id, :key_method=>:obj_id, :class=>@Bar @Bar.many_to_one :foo, :key=>:obj_id, :key_column=>:object_id, :primary_key=>:object_id, :primary_key_method=>:obj_id, :class=>@Foo @Foo.many_to_many :mtmbars, :join_table=>:bars_foos, :left_primary_key=>:obj_id, :left_primary_key_column=>:object_id, :right_primary_key=>:object_id, :right_primary_key_method=>:obj_id, :left_key=>:foo_id, :right_key=>:object_id, :class=>@Bar @Bar.many_to_many :mtmfoos, :join_table=>:bars_foos, :left_primary_key=>:obj_id, :left_primary_key_column=>:object_id, :right_primary_key=>:object_id, :right_primary_key_method=>:obj_id, :left_key=>:object_id, :right_key=>:foo_id, :class=>@Foo @foo = @Foo.create(:obj_id=>2) @bar = @Bar.create(:obj_id=>2) @Foo.db[:bars_foos].insert(2, 2) end after(:all) do @db.drop_table?(:bars_foos, :bars, :foos) end it "should have working regular association methods" do @Bar.first.foo.should == @foo @Foo.first.bars.should == [@bar] @Foo.first.bar.should == @bar @Foo.first.mtmbars.should == [@bar] @Bar.first.mtmfoos.should == [@foo] end it "should have working eager loading methods" do @Bar.eager(:foo).all.map{|o| [o, o.foo]}.should == [[@bar, @foo]] @Foo.eager(:bars).all.map{|o| [o, o.bars]}.should == [[@foo, [@bar]]] @Foo.eager(:bar).all.map{|o| [o, o.bar]}.should == [[@foo, @bar]] @Foo.eager(:mtmbars).all.map{|o| [o, o.mtmbars]}.should == [[@foo, [@bar]]] @Bar.eager(:mtmfoos).all.map{|o| [o, o.mtmfoos]}.should == [[@bar, [@foo]]] end it "should have working eager graphing methods" do @Bar.eager_graph(:foo).all.map{|o| [o, o.foo]}.should == [[@bar, @foo]] @Foo.eager_graph(:bars).all.map{|o| [o, o.bars]}.should == [[@foo, [@bar]]] @Foo.eager_graph(:bar).all.map{|o| [o, o.bar]}.should == [[@foo, @bar]] @Foo.eager_graph(:mtmbars).all.map{|o| [o, o.mtmbars]}.should == [[@foo, [@bar]]] @Bar.eager_graph(:mtmfoos).all.map{|o| [o, o.mtmfoos]}.should == [[@bar, [@foo]]] end it "should have working modification methods" do b = @Bar.create(:obj_id=>3) f = @Foo.create(:obj_id=>3) @bar.foo = f @bar.obj_id.should == 3 @foo.bar = @bar @bar.obj_id.should == 2 @foo.add_bar(b) @foo.bars.sort_by{|x| x.obj_id}.should == [@bar, b] @foo.remove_bar(b) @foo.bars.should == [@bar] @foo.remove_all_bars @foo.bars.should 
== [] @bar.refresh.update(:obj_id=>2) b.refresh.update(:obj_id=>3) @foo.mtmbars.should == [@bar] @foo.remove_all_mtmbars @foo.mtmbars.should == [] @foo.add_mtmbar(b) @foo.mtmbars.should == [b] @foo.remove_mtmbar(b) @foo.mtmbars.should == [] @bar.add_mtmfoo(f) @bar.mtmfoos.should == [f] @bar.remove_all_mtmfoos @bar.mtmfoos.should == [] @bar.add_mtmfoo(f) @bar.mtmfoos.should == [f] @bar.remove_mtmfoo(f) @bar.mtmfoos.should == [] end end
ruby-sequel-4.1.1/spec/integration/database_test.rb
require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper.rb') describe Sequel::Database do before do @db = DB end specify "should provide disconnect functionality" do @db.disconnect @db.pool.size.should == 0 @db.test_connection @db.pool.size.should == 1 end specify "should provide disconnect functionality after preparing a statement" do @db.create_table!(:items){Integer :i} @db[:items].prepare(:first, :a).call @db.disconnect @db.pool.size.should == 0 @db.drop_table?(:items) end specify "should raise Sequel::DatabaseError on invalid SQL" do proc{@db << "SELECT"}.should raise_error(Sequel::DatabaseError) end describe "constraint violations" do before do @db.drop_table?(:test2, :test) end after do @db.drop_table?(:test2, :test) end cspecify "should raise Sequel::UniqueConstraintViolation when a unique constraint is violated", [:jdbc, :sqlite], [:db2] do @db.create_table!(:test){String :a, :unique=>true, :null=>false} @db[:test].insert('1') proc{@db[:test].insert('1')}.should raise_error(Sequel::UniqueConstraintViolation) @db[:test].insert('2') proc{@db[:test].update(:a=>'1')}.should raise_error(Sequel::UniqueConstraintViolation) end cspecify "should raise Sequel::CheckConstraintViolation when a check constraint is violated", :mysql, :sqlite, [:db2] do @db.create_table!(:test){String :a; check Sequel.~(:a=>'1')} proc{@db[:test].insert('1')}.should raise_error(Sequel::CheckConstraintViolation) @db[:test].insert('2') proc{@db[:test].insert('1')}.should raise_error(Sequel::CheckConstraintViolation) end cspecify "should raise Sequel::ForeignKeyConstraintViolation when a foreign key constraint is violated", [:jdbc, :sqlite], [:db2] do @db.create_table!(:test, :engine=>:InnoDB){primary_key :id} @db.create_table!(:test2, :engine=>:InnoDB){foreign_key :tid, :test} proc{@db[:test2].insert(:tid=>1)}.should raise_error(Sequel::ForeignKeyConstraintViolation) @db[:test].insert @db[:test2].insert(:tid=>1) proc{@db[:test2].where(:tid=>1).update(:tid=>3)}.should raise_error(Sequel::ForeignKeyConstraintViolation) proc{@db[:test].where(:id=>1).delete}.should raise_error(Sequel::ForeignKeyConstraintViolation) end
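# Editorial sketch (not part of the original suite): each violation class
# exercised in this group is a subclass of Sequel::ConstraintViolation,
# which is itself a Sequel::DatabaseError, so callers can rescue at either
# granularity. A minimal sketch, assuming a table with a unique column:
#
#   begin
#     DB[:test].insert('1')
#   rescue Sequel::UniqueConstraintViolation
#     # duplicate key; pick a new value and retry
#   rescue Sequel::ConstraintViolation
#     # any other check/not-null/foreign key failure
#     raise
#   end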
cspecify "should raise Sequel::NotNullConstraintViolation when a not null constraint is violated", [:jdbc, :sqlite], [:db2] do @db.create_table!(:test){Integer :a, :null=>false} proc{@db[:test].insert(:a=>nil)}.should raise_error(Sequel::NotNullConstraintViolation) unless @db.database_type == :mysql
# Broken mysql silently changes NULL here to 0, and doesn't raise an exception.
@db[:test].insert(2) proc{@db[:test].update(:a=>nil)}.should raise_error(Sequel::NotNullConstraintViolation) end end specify "should store underlying wrapped exception in Sequel::DatabaseError" do begin @db << "SELECT" rescue Sequel::DatabaseError=>e if defined?(Java::JavaLang::Exception) (e.wrapped_exception.is_a?(Exception) || e.wrapped_exception.is_a?(Java::JavaLang::Exception)).should be_true else e.wrapped_exception.should be_a_kind_of(Exception) end end end specify "should not have the connection pool swallow non-StandardError based exceptions" do proc{@db.pool.hold{raise Interrupt, "test"}}.should raise_error(Interrupt) end specify "should be able to disconnect connections more than once without exceptions" do conn = @db.synchronize{|c| c} @db.disconnect @db.disconnect_connection(conn) @db.disconnect_connection(conn) end cspecify "should provide ability to check connections for validity", [:do, :postgres] do conn = @db.synchronize{|c| c} @db.valid_connection?(conn).should be_true @db.disconnect @db.valid_connection?(conn).should be_false end end
ruby-sequel-4.1.1/spec/integration/dataset_test.rb
require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper.rb') describe "Simple Dataset operations" do before do @db = DB @db.create_table!(:items) do primary_key :id Integer :number end @ds = @db[:items] @ds.insert(:number=>10) end after do @db.drop_table?(:items) end specify "should support sequential primary keys" do @ds << {:number=>20} @ds << {:number=>30} @ds.order(:number).all.should == [ {:id => 1, :number=>10}, {:id => 2, :number=>20}, {:id => 3, :number=>30} ] end specify "should support sequential primary keys with a Bignum" do @db.create_table!(:items) do primary_key :id, :type=>Bignum Integer :number end @ds << {:number=>20} @ds << {:number=>30} @ds.order(:number).all.should == [{:id => 1, :number=>20}, {:id => 2, :number=>30}] end cspecify "should insert with a primary key specified", :db2, :mssql do @ds.insert(:id=>100, :number=>20) @ds.count.should == 2 @ds.order(:id).all.should == [{:id=>1, :number=>10}, {:id=>100, :number=>20}] end specify "should have insert return primary key value" do @ds.insert(:number=>20).should == 2 @ds.filter(:id=>2).first[:number].should == 20 end specify "should have insert work correctly with static SQL" do @db["INSERT INTO #{@ds.literal(:items)} (#{@ds.literal(:number)}) VALUES (20)"].insert @ds.filter(:id=>2).first[:number].should == 20 end specify "should have insert_multiple return primary key values" do
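# Annotation (added): insert_multiple is not a core dataset method in
# Sequel 4; it was moved into the sequel_3_dataset_methods extension,
# which is why the dataset is extended before the call below.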
@ds.extension(:sequel_3_dataset_methods).insert_multiple([{:number=>20}, {:number=>30}]).should == [2, 3] @ds.filter(:id=>2).get(:number).should == 20 @ds.filter(:id=>3).get(:number).should == 30 end specify "should join correctly" do @ds.join(:items___b, :id=>:id).select_all(:items).all.should == [{:id=>1, :number=>10}] end specify "should correctly deal with qualified columns and subselects" do @ds.from_self(:alias=>:a).select(:a__id, Sequel.qualify(:a, :number)).all.should == [{:id=>1, :number=>10}] @ds.join(@ds.as(:a), :id=>:id).select(:a__id, Sequel.qualify(:a, :number)).all.should == [{:id=>1, :number=>10}] end specify "should graph correctly" do @ds.graph(:items, {:id=>:id}, :table_alias=>:b).extension(:graph_each).all.should == [{:items=>{:id=>1, :number=>10}, :b=>{:id=>1, :number=>10}}] end specify "should graph correctly with a subselect" do @ds.from_self(:alias=>:items).graph(@ds.from_self, {:id=>:id}, :table_alias=>:b).extension(:graph_each).all.should == [{:items=>{:id=>1, :number=>10}, :b=>{:id=>1, :number=>10}}] end cspecify "should have insert work correctly when inserting a row with all NULL values", :hsqldb do @db.create_table!(:items) do String :name Integer :number end proc{@ds.insert}.should_not raise_error @ds.all.should == [{:name=>nil, :number=>nil}] end specify "should delete correctly" do @ds.filter(1=>1).delete.should == 1 @ds.count.should == 0 end specify "should update correctly" do @ds.update(:number=>Sequel.expr(:number)+1).should == 1 @ds.all.should == [{:id=>1, :number=>11}] end cspecify "should have update return the number of matched rows", [:do, :mysql], [:ado] do @ds.update(:number=>:number).should == 1 @ds.filter(:id=>1).update(:number=>:number).should == 1 @ds.filter(:id=>2).update(:number=>:number).should == 0 @ds.all.should == [{:id=>1, :number=>10}] end specify "should iterate over records as they come in" do called = false @ds.each{|row| called = true; row.should == {:id=>1, :number=>10}} called.should == true end specify "should support iterating over large numbers of records with paged_each" do (2..100).each{|i| @ds.insert(:number=>i*10)} rows = [] @ds.order(:number).paged_each(:rows_per_fetch=>5){|row| rows << row} rows.should == (1..100).map{|i| {:id=>i, :number=>i*10}} rows = [] @ds.order(:number).paged_each(:rows_per_fetch=>3){|row| rows << row} rows.should == (1..100).map{|i| {:id=>i, :number=>i*10}} rows = [] @ds.order(:number).limit(50, 25).paged_each(:rows_per_fetch=>3){|row| rows << row} rows.should == (26..75).map{|i| {:id=>i, :number=>i*10}} end specify "should fetch all results correctly" do @ds.all.should == [{:id=>1, :number=>10}] end specify "should fetch a single row correctly" do @ds.first.should == {:id=>1, :number=>10} end specify "should have distinct work with limit" do @ds.limit(1).distinct.all.should == [{:id=>1, :number=>10}] end specify "should fetch correctly with a limit" do @ds.order(:id).limit(2).all.should == [{:id=>1, :number=>10}] @ds.insert(:number=>20) @ds.order(:id).limit(1).all.should == [{:id=>1, :number=>10}] @ds.order(:id).limit(2).all.should == [{:id=>1, :number=>10}, {:id=>2, :number=>20}] end specify "should fetch correctly with a limit and offset" do @ds.order(:id).limit(2, 0).all.should == [{:id=>1, :number=>10}] @ds.order(:id).limit(2, 1).all.should == [] @ds.insert(:number=>20) @ds.order(:id).limit(1, 1).all.should == [{:id=>2, :number=>20}] @ds.order(:id).limit(2, 0).all.should == [{:id=>1, :number=>10}, {:id=>2, :number=>20}] @ds.order(:id).limit(2, 1).all.should == [{:id=>2, :number=>20}] end 
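# Usage sketch (editorial, not part of the original suite): Dataset#limit
# takes an optional second argument that becomes the OFFSET, so the
# assertions above correspond to SQL along these lines:
#
#   @ds.order(:id).limit(2, 1)
#   # SELECT * FROM items ORDER BY id LIMIT 2 OFFSET 1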
specify "should provide correct columns when using a limit and offset" do ds = @ds.order(:id).limit(1, 1) ds.all ds.columns.should == [:id, :number] @ds.order(:id).limit(1, 1).columns.should == [:id, :number] end specify "should fetch correctly with a limit and offset for different combinations of from and join tables" do @db.create_table!(:items2){primary_key :id2; Integer :number2} @db[:items2].insert(:number2=>10) @ds.from(:items, :items2).order(:id).limit(2, 0).all.should == [{:id=>1, :number=>10, :id2=>1, :number2=>10}] @ds.from(:items___i, :items2___i2).order(:id).limit(2, 0).all.should == [{:id=>1, :number=>10, :id2=>1, :number2=>10}] @ds.cross_join(:items2).order(:id).limit(2, 0).all.should ==[{:id=>1, :number=>10, :id2=>1, :number2=>10}] @ds.from(:items___i).cross_join(:items2___i2).order(:id).limit(2, 0).all.should == [{:id=>1, :number=>10, :id2=>1, :number2=>10}] @ds.cross_join(:items2___i).cross_join(@db[:items2].select(:id2___id3, :number2___number3)).order(:id).limit(2, 0).all.should == [{:id=>1, :number=>10, :id2=>1, :number2=>10, :id3=>1, :number3=>10}] @ds.from(:items, :items2).order(:id).limit(2, 1).all.should == [] @ds.from(:items___i, :items2___i2).order(:id).limit(2, 1).all.should == [] @ds.cross_join(:items2).order(:id).limit(2, 1).all.should == [] @ds.from(:items___i).cross_join(:items2___i2).order(:id).limit(2, 1).all.should == [] @ds.cross_join(:items2___i).cross_join(@db[:items2].select(:id2___id3, :number2___number3)).order(:id).limit(2, 1).all.should == [] @db.drop_table(:items2) end specify "should fetch correctly with a limit and offset without an order" do @ds.limit(2, 1).all.should == [] end specify "should fetch correctly with a limit in an IN subselect" do @ds.where(:id=>@ds.select(:id).order(:id).limit(2)).all.should == [{:id=>1, :number=>10}] @ds.insert(:number=>20) @ds.where(:id=>@ds.select(:id).order(:id).limit(1)).all.should == [{:id=>1, :number=>10}] @ds.where(:id=>@ds.select(:id).order(:id).limit(2)).order(:id).all.should == [{:id=>1, :number=>10}, {:id=>2, :number=>20}] end specify "should fetch correctly with a limit and offset in an IN subselect" do @ds.where(:id=>@ds.select(:id).order(:id).limit(2, 0)).all.should == [{:id=>1, :number=>10}] @ds.where(:id=>@ds.select(:id).order(:id).limit(2, 1)).all.should == [] @ds.insert(:number=>20) @ds.where(:id=>@ds.select(:id).order(:id).limit(1, 1)).all.should == [{:id=>2, :number=>20}] @ds.where(:id=>@ds.select(:id).order(:id).limit(2, 0)).order(:id).all.should == [{:id=>1, :number=>10}, {:id=>2, :number=>20}] @ds.where(:id=>@ds.select(:id).order(:id).limit(2, 1)).all.should == [{:id=>2, :number=>20}] end specify "should fetch correctly when using limit and offset in a from_self" do @ds.insert(:number=>20) ds = @ds.order(:id).limit(1, 1).from_self ds.all.should == [{:number=>20, :id=>2}] ds.columns.should == [:id, :number] @ds.order(:id).limit(1, 1).columns.should == [:id, :number] end specify "should fetch correctly when using nested limit and offset in a from_self" do @ds.insert(:number=>20) @ds.insert(:number=>30) ds = @ds.order(:id).limit(2, 1).from_self.reverse_order(:number).limit(1, 1) ds.all.should == [{:number=>20, :id=>2}] ds.columns.should == [:id, :number] @ds.order(:id).limit(2, 1).from_self.reverse_order(:number).limit(1, 1).columns.should == [:id, :number] ds = @ds.order(:id).limit(3, 1).from_self.limit(2, 1).from_self.limit(1, 1) ds.all.should == [] ds.columns.should == [:id, :number] @ds.insert(:number=>40) ds = @ds.order(:id).limit(3, 1).from_self.reverse_order(:number).limit(2, 
1).from_self.reverse_order(:id).limit(1, 1) ds.all.should == [{:number=>20, :id=>2}] ds.columns.should == [:id, :number] end specify "should alias columns correctly" do @ds.select(:id___x, :number___n).first.should == {:x=>1, :n=>10} end specify "should handle true/false properly" do @ds.filter(Sequel::TRUE).select_map(:number).should == [10] @ds.filter(Sequel::FALSE).select_map(:number).should == [] @ds.filter(true).select_map(:number).should == [10] @ds.filter(false).select_map(:number).should == [] end end describe "Simple dataset operations with nasty table names" do before do @db = DB @table = :"i`t' [e]\"m\\s" @qi = @db.quote_identifiers? @db.quote_identifiers = true end after do @db.quote_identifiers = @qi end cspecify "should work correctly", :mssql, :oracle do @db.create_table!(@table) do primary_key :id Integer :number end @ds = @db[@table] @ds.insert(:number=>10).should == 1 @ds.all.should == [{:id=>1, :number=>10}] @ds.update(:number=>20).should == 1 @ds.all.should == [{:id=>1, :number=>20}] @ds.delete.should == 1 @ds.count.should == 0 proc{@db.drop_table?(@table)}.should_not raise_error end end describe Sequel::Dataset do before do DB.create_table!(:test) do String :name Integer :value end @d = DB[:test] end after do DB.drop_table?(:test) end specify "should return the correct record count" do @d.count.should == 0 @d << {:name => 'abc', :value => 123} @d << {:name => 'abc', :value => 456} @d << {:name => 'def', :value => 789} @d.count.should == 3 end specify "should handle aggregate methods on limited datasets correctly" do @d << {:name => 'abc', :value => 6} @d << {:name => 'bcd', :value => 12} @d << {:name => 'def', :value => 18} @d = @d.order(:name).limit(2) @d.count.should == 2 @d.avg(:value).to_i.should == 9 @d.min(:value).to_i.should == 6 @d.reverse.min(:value).to_i.should == 12 @d.max(:value).to_i.should == 12 @d.sum(:value).to_i.should == 18 @d.interval(:value).to_i.should == 6 end specify "should return the correct records" do @d.to_a.should == [] @d << {:name => 'abc', :value => 123} @d << {:name => 'abc', :value => 456} @d << {:name => 'def', :value => 789} @d.order(:value).to_a.should == [ {:name => 'abc', :value => 123}, {:name => 'abc', :value => 456}, {:name => 'def', :value => 789} ] end specify "should update records correctly" do @d << {:name => 'abc', :value => 123} @d << {:name => 'abc', :value => 456} @d << {:name => 'def', :value => 789} @d.filter(:name => 'abc').update(:value => 530) @d[:name => 'def'][:value].should == 789 @d.filter(:value => 530).count.should == 2 end specify "should delete records correctly" do @d << {:name => 'abc', :value => 123} @d << {:name => 'abc', :value => 456} @d << {:name => 'def', :value => 789} @d.filter(:name => 'abc').delete @d.count.should == 1 @d.first[:name].should == 'def' end specify "should be able to truncate the table" do @d << {:name => 'abc', :value => 123} @d << {:name => 'abc', :value => 456} @d << {:name => 'def', :value => 789} @d.count.should == 3 @d.truncate.should == nil @d.count.should == 0 end specify "should be able to literalize booleans" do proc {@d.literal(true)}.should_not raise_error proc {@d.literal(false)}.should_not raise_error end end describe Sequel::Database do specify "should correctly escape strings" do ["\\\n", "\\\\\n", "\\\r\n", "\\\\\r\n", "\\\\\n\n", "\\\\\r\n\r\n", "\\dingo", "\\'dingo", "\\\\''dingo", ].each do |str| DB.get(Sequel.cast(str, String)).should == str str = "1#{str}1" DB.get(Sequel.cast(str, String)).should == str str = "#{str}#{str}" DB.get(Sequel.cast(str, 
String)).should == str end end cspecify "should properly escape binary data", [:odbc], [:jdbc, :hsqldb], :oracle do DB.get(Sequel.cast(Sequel.blob("\1\2\3"), File).as(:a)).should == "\1\2\3" end cspecify "should properly escape identifiers", :db2, :oracle do DB.create_table(:"\\'\"[]"){Integer :id} DB.drop_table(:"\\'\"[]") end specify "should have a working table_exists?" do t = :basdfdsafsaddsaf DB.drop_table?(t) DB.table_exists?(t).should == false DB.create_table(t){Integer :a} begin DB.table_exists?(t).should == true ensure DB.drop_table(t) end end end describe Sequel::Dataset do before do DB.create_table! :items do primary_key :id Integer :value end @d = DB[:items] @d << {:value => 123} @d << {:value => 456} @d << {:value => 789} end after do DB.drop_table?(:items) end specify "should correctly return avg" do @d.avg(:value).to_i.should == 456 end specify "should correctly return sum" do @d.sum(:value).to_i.should == 1368 end specify "should correctly return max" do @d.max(:value).to_i.should == 789 end specify "should correctly return min" do @d.min(:value).to_i.should == 123 end end describe "Simple Dataset operations" do before do DB.create_table!(:items) do Integer :number TrueClass :flag end @ds = DB[:items] end after do DB.drop_table?(:items) end specify "should deal with boolean conditions correctly" do @ds.insert(:number=>1, :flag=>true) @ds.insert(:number=>2, :flag=>false) @ds.insert(:number=>3, :flag=>nil) @ds.order!(:number) @ds.filter(:flag=>true).map(:number).should == [1] @ds.filter(:flag=>false).map(:number).should == [2] @ds.filter(:flag=>nil).map(:number).should == [3] @ds.exclude(:flag=>true).map(:number).should == [2, 3] @ds.exclude(:flag=>false).map(:number).should == [1, 3] @ds.exclude(:flag=>nil).map(:number).should == [1, 2] end end describe "Simple Dataset operations in transactions" do before do DB.create_table!(:items) do primary_key :id integer :number end @ds = DB[:items] end after do DB.drop_table?(:items) end cspecify "should insert correctly with a primary key specified inside a transaction", :db2, :mssql do DB.transaction do @ds.insert(:id=>100, :number=>20) @ds.count.should == 1 @ds.order(:id).all.should == [{:id=>100, :number=>20}] end end specify "should have insert return primary key value inside a transaction" do DB.transaction do @ds.insert(:number=>20).should == 1 @ds.count.should == 1 @ds.order(:id).all.should == [{:id=>1, :number=>20}] end end specify "should support for_update" do DB.transaction{@ds.for_update.all.should == []} end end describe "Dataset UNION, EXCEPT, and INTERSECT" do before do DB.create_table!(:i1){integer :number} DB.create_table!(:i2){integer :number} @ds1 = DB[:i1] @ds1.insert(:number=>10) @ds1.insert(:number=>20) @ds2 = DB[:i2] @ds2.insert(:number=>10) @ds2.insert(:number=>30) end after do DB.drop_table?(:i1, :i2, :i3) end specify "should give the correct results for simple UNION, EXCEPT, and INTERSECT" do @ds1.union(@ds2).order(:number).map{|x| x[:number].to_s}.should == %w'10 20 30' if @ds1.supports_intersect_except? 
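# EXCEPT and INTERSECT are not supported by every database, hence this
# capability guard. Sequel wraps a compound dataset in a subquery when it
# is reordered, so the next two assertions run SQL shaped roughly like:
#
#   SELECT * FROM (SELECT * FROM i1 EXCEPT SELECT * FROM i2) AS t1 ORDER BY number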
@ds1.except(@ds2).order(:number).map{|x| x[:number].to_s}.should == %w'20' @ds1.intersect(@ds2).order(:number).map{|x| x[:number].to_s}.should == %w'10' end end cspecify "should give the correct results for UNION, EXCEPT, and INTERSECT when used with ordering and limits", :mssql do @ds1.insert(:number=>8) @ds2.insert(:number=>9) @ds1.insert(:number=>38) @ds2.insert(:number=>39) @ds1.reverse_order(:number).union(@ds2).order(:number).map{|x| x[:number].to_s}.should == %w'8 9 10 20 30 38 39' @ds1.union(@ds2.reverse_order(:number)).order(:number).map{|x| x[:number].to_s}.should == %w'8 9 10 20 30 38 39' @ds1.reverse_order(:number).limit(1).union(@ds2).order(:number).map{|x| x[:number].to_s}.should == %w'9 10 30 38 39' @ds2.reverse_order(:number).limit(1).union(@ds1).order(:number).map{|x| x[:number].to_s}.should == %w'8 10 20 38 39' @ds1.union(@ds2.order(:number).limit(1)).order(:number).map{|x| x[:number].to_s}.should == %w'8 9 10 20 38' @ds2.union(@ds1.order(:number).limit(1)).order(:number).map{|x| x[:number].to_s}.should == %w'8 9 10 30 39' @ds1.union(@ds2).limit(2).order(:number).map{|x| x[:number].to_s}.should == %w'8 9' @ds2.union(@ds1).reverse_order(:number).limit(2).map{|x| x[:number].to_s}.should == %w'39 38' @ds1.reverse_order(:number).limit(2).union(@ds2.reverse_order(:number).limit(2)).order(:number).limit(3).map{|x| x[:number].to_s}.should == %w'20 30 38' @ds2.order(:number).limit(2).union(@ds1.order(:number).limit(2)).reverse_order(:number).limit(3).map{|x| x[:number].to_s}.should == %w'10 9 8' end specify "should give the correct results for compound UNION, EXCEPT, and INTERSECT" do DB.create_table!(:i3){integer :number} @ds3 = DB[:i3] @ds3.insert(:number=>10) @ds3.insert(:number=>40) @ds1.union(@ds2).union(@ds3).order(:number).map{|x| x[:number].to_s}.should == %w'10 20 30 40' @ds1.union(@ds2.union(@ds3)).order(:number).map{|x| x[:number].to_s}.should == %w'10 20 30 40' if @ds1.supports_intersect_except? @ds1.union(@ds2).except(@ds3).order(:number).map{|x| x[:number].to_s}.should == %w'20 30' @ds1.union(@ds2.except(@ds3)).order(:number).map{|x| x[:number].to_s}.should == %w'10 20 30' @ds1.union(@ds2).intersect(@ds3).order(:number).map{|x| x[:number].to_s}.should == %w'10 ' @ds1.union(@ds2.intersect(@ds3)).order(:number).map{|x| x[:number].to_s}.should == %w'10 20' @ds1.except(@ds2).union(@ds3).order(:number).map{|x| x[:number].to_s}.should == %w'10 20 40' @ds1.except(@ds2.union(@ds3)).order(:number).map{|x| x[:number].to_s}.should == %w'20' @ds1.except(@ds2).except(@ds3).order(:number).map{|x| x[:number].to_s}.should == %w'20' @ds1.except(@ds2.except(@ds3)).order(:number).map{|x| x[:number].to_s}.should == %w'10 20' @ds1.except(@ds2).intersect(@ds3).order(:number).map{|x| x[:number].to_s}.should == %w'' @ds1.except(@ds2.intersect(@ds3)).order(:number).map{|x| x[:number].to_s}.should == %w'20' @ds1.intersect(@ds2).union(@ds3).order(:number).map{|x| x[:number].to_s}.should == %w'10 40' @ds1.intersect(@ds2.union(@ds3)).order(:number).map{|x| x[:number].to_s}.should == %w'10' @ds1.intersect(@ds2).except(@ds3).order(:number).map{|x| x[:number].to_s}.should == %w'' @ds1.intersect(@ds2.except(@ds3)).order(:number).map{|x| x[:number].to_s}.should == %w'' @ds1.intersect(@ds2).intersect(@ds3).order(:number).map{|x| x[:number].to_s}.should == %w'10' @ds1.intersect(@ds2.intersect(@ds3)).order(:number).map{|x| x[:number].to_s}.should == %w'10' end end end if DB.dataset.supports_cte? 
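# Illustrative sketch (editorial, not from the original file): Dataset#with
# prepends a WITH clause, so the first spec below is equivalent to SQL like:
#
#   DB[:t].with(:t, DB[:i1].where(:parent_id=>nil).select(:id))
#   # WITH t AS (SELECT id FROM i1 WHERE (parent_id IS NULL)) SELECT * FROM t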
describe "Common Table Expressions" do before(:all) do @db = DB @db.create_table!(:i1){Integer :id; Integer :parent_id} @ds = @db[:i1] @ds.insert(:id=>1) @ds.insert(:id=>2) @ds.insert(:id=>3, :parent_id=>1) @ds.insert(:id=>4, :parent_id=>1) @ds.insert(:id=>5, :parent_id=>3) @ds.insert(:id=>6, :parent_id=>5) end after(:all) do @db.drop_table?(:i1) end specify "should give correct results for WITH" do @db[:t].with(:t, @ds.filter(:parent_id=>nil).select(:id)).order(:id).map(:id).should == [1, 2] end cspecify "should give correct results for recursive WITH", :db2 do ds = @db[:t].select(:i___id, :pi___parent_id).with_recursive(:t, @ds.filter(:parent_id=>nil), @ds.join(:t, :i=>:parent_id).select(:i1__id, :i1__parent_id), :args=>[:i, :pi]) ds.all.should == [{:parent_id=>nil, :id=>1}, {:parent_id=>nil, :id=>2}, {:parent_id=>1, :id=>3}, {:parent_id=>1, :id=>4}, {:parent_id=>3, :id=>5}, {:parent_id=>5, :id=>6}] ps = @db[:t].select(:i___id, :pi___parent_id).with_recursive(:t, @ds.filter(:parent_id=>:$n), @ds.join(:t, :i=>:parent_id).filter(:t__i=>:parent_id).select(:i1__id, :i1__parent_id), :args=>[:i, :pi]).prepare(:select, :cte_sel) ps.call(:n=>1).should == [{:id=>3, :parent_id=>1}, {:id=>4, :parent_id=>1}, {:id=>5, :parent_id=>3}, {:id=>6, :parent_id=>5}] ps.call(:n=>3).should == [{:id=>5, :parent_id=>3}, {:id=>6, :parent_id=>5}] ps.call(:n=>5).should == [{:id=>6, :parent_id=>5}] end specify "should support joining a dataset with a CTE" do @ds.inner_join(@db[:t].with(:t, @ds.filter(:parent_id=>nil)), :id => :id).select(:i1__id).order(:i1__id).map(:id).should == [1,2] @db[:t].with(:t, @ds).inner_join(@db[:s].with(:s, @ds.filter(:parent_id=>nil)), :id => :id).select(:t__id).order(:t__id).map(:id).should == [1,2] end specify "should support a subselect in the FROM clause with a CTE" do @ds.from(@db[:t].with(:t, @ds)).select_order_map(:id).should == [1,2,3,4,5,6] @db[:t].with(:t, @ds).from_self.select_order_map(:id).should == [1,2,3,4,5,6] end specify "should support using a CTE inside a CTE" do @db[:s].with(:s, @db[:t].with(:t, @ds)).select_order_map(:id).should == [1,2,3,4,5,6] @db[:s].with_recursive(:s, @db[:t].with(:t, @ds), @db[:t2].with(:t2, @ds)).select_order_map(:id).should == [1,1,2,2,3,3,4,4,5,5,6,6] end specify "should support using a CTE inside UNION/EXCEPT/INTERSECT" do @ds.union(@db[:t].with(:t, @ds)).select_order_map(:id).should == [1,2,3,4,5,6] if @ds.supports_intersect_except? 
@ds.intersect(@db[:t].with(:t, @ds)).select_order_map(:id).should == [1,2,3,4,5,6] @ds.except(@db[:t].with(:t, @ds)).select_order_map(:id).should == [] end end end end if DB.dataset.supports_cte?(:update) # Assume INSERT and DELETE support as well
describe "Common Table Expressions in INSERT/UPDATE/DELETE" do before do @db = DB @db.create_table!(:i1){Integer :id} @ds = @db[:i1] @ds2 = @ds.with(:t, @ds) @ds.insert(:id=>1) @ds.insert(:id=>2) end after do @db.drop_table?(:i1) end specify "should give correct results for WITH" do @ds2.insert(@db[:t]) @ds.select_order_map(:id).should == [1, 1, 2, 2] @ds2.filter(:id=>@db[:t].select{max(id)}).update(:id=>Sequel.+(:id, 1)) @ds.select_order_map(:id).should == [1, 1, 3, 3] @ds2.filter(:id=>@db[:t].select{max(id)}).delete @ds.select_order_map(:id).should == [1, 1] end end end if DB.dataset.supports_returning?(:insert) describe "RETURNING clauses in INSERT" do before do @db = DB @db.create_table!(:i1){Integer :id; Integer :foo} @ds = @db[:i1] end after do @db.drop_table?(:i1) end specify "should give correct results" do h = {} @ds.returning(:foo).insert(1, 2){|r| h = r} h.should == {:foo=>2} @ds.returning(:id).insert(3, 4){|r| h = r} h.should == {:id=>3} @ds.returning.insert(5, 6){|r| h = r} h.should == {:id=>5, :foo=>6} @ds.returning(:id___foo, :foo___id).insert(7, 8){|r| h = r} h.should == {:id=>8, :foo=>7} end end end if DB.dataset.supports_returning?(:update) # Assume DELETE support as well
describe "RETURNING clauses in UPDATE/DELETE" do before do @db = DB @db.create_table!(:i1){Integer :id; Integer :foo} @ds = @db[:i1] @ds.insert(1, 2) end after do @db.drop_table?(:i1) end specify "should give correct results" do h = [] @ds.returning(:foo).update(:id=>Sequel.+(:id, 1), :foo=>Sequel.*(:foo, 2)){|r| h << r} h.should == [{:foo=>4}] h.clear @ds.returning(:id).update(:id=>Sequel.+(:id, 1), :foo=>Sequel.*(:foo, 2)){|r| h << r} h.should == [{:id=>3}] h.clear @ds.returning.update(:id=>Sequel.+(:id, 1), :foo=>Sequel.*(:foo, 2)){|r| h << r} h.should == [{:id=>4, :foo=>16}] h.clear @ds.returning(:id___foo, :foo___id).update(:id=>Sequel.+(:id, 1), :foo=>Sequel.*(:foo, 2)){|r| h << r} h.should == [{:id=>32, :foo=>5}] h.clear @ds.returning.delete{|r| h << r} h.should == [{:id=>5, :foo=>32}] h.clear @ds.returning.delete{|r| h << r} h.should == [] end end end if DB.dataset.supports_window_functions?
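# Editorial sketch (not part of the original suite): the specs below build
# window function calls with the virtual row emulation syntax, where
#
#   @ds.select(:id){sum(:over, :args=>amount, :partition=>group_id){}.as(:sum)}
#
# renders SQL along the lines of:
#
#   SELECT id, sum(amount) OVER (PARTITION BY group_id) AS sum FROM i1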
describe "Window Functions" do before(:all) do @db = DB @db.create_table!(:i1){Integer :id; Integer :group_id; Integer :amount} @ds = @db[:i1].order(:id) @ds.insert(:id=>1, :group_id=>1, :amount=>1) @ds.insert(:id=>2, :group_id=>1, :amount=>10) @ds.insert(:id=>3, :group_id=>1, :amount=>100) @ds.insert(:id=>4, :group_id=>2, :amount=>1000) @ds.insert(:id=>5, :group_id=>2, :amount=>10000) @ds.insert(:id=>6, :group_id=>2, :amount=>100000) end after(:all) do @db.drop_table?(:i1) end specify "should give correct results for aggregate window functions" do @ds.select(:id){sum(:over, :args=>amount, :partition=>group_id){}.as(:sum)}.all.should == [{:sum=>111, :id=>1}, {:sum=>111, :id=>2}, {:sum=>111, :id=>3}, {:sum=>111000, :id=>4}, {:sum=>111000, :id=>5}, {:sum=>111000, :id=>6}] @ds.select(:id){sum(:over, :args=>amount){}.as(:sum)}.all.should == [{:sum=>111111, :id=>1}, {:sum=>111111, :id=>2}, {:sum=>111111, :id=>3}, {:sum=>111111, :id=>4}, {:sum=>111111, :id=>5}, {:sum=>111111, :id=>6}] end specify "should give correct results for ranking window functions with orders" do @ds.select(:id){rank(:over, :partition=>group_id, :order=>id){}.as(:rank)}.all.should == [{:rank=>1, :id=>1}, {:rank=>2, :id=>2}, {:rank=>3, :id=>3}, {:rank=>1, :id=>4}, {:rank=>2, :id=>5}, {:rank=>3, :id=>6}] @ds.select(:id){rank(:over, :order=>id){}.as(:rank)}.all.should == [{:rank=>1, :id=>1}, {:rank=>2, :id=>2}, {:rank=>3, :id=>3}, {:rank=>4, :id=>4}, {:rank=>5, :id=>5}, {:rank=>6, :id=>6}] end cspecify "should give correct results for aggregate window functions with orders", :mssql do @ds.select(:id){sum(:over, :args=>amount, :partition=>group_id, :order=>id){}.as(:sum)}.all.should == [{:sum=>1, :id=>1}, {:sum=>11, :id=>2}, {:sum=>111, :id=>3}, {:sum=>1000, :id=>4}, {:sum=>11000, :id=>5}, {:sum=>111000, :id=>6}] @ds.select(:id){sum(:over, :args=>amount, :order=>id){}.as(:sum)}.all.should == [{:sum=>1, :id=>1}, {:sum=>11, :id=>2}, {:sum=>111, :id=>3}, {:sum=>1111, :id=>4}, {:sum=>11111, :id=>5}, {:sum=>111111, :id=>6}] end cspecify "should give correct results for aggregate window functions with frames", :mssql do @ds.select(:id){sum(:over, :args=>amount, :partition=>group_id, :order=>id, :frame=>:all){}.as(:sum)}.all.should == [{:sum=>111, :id=>1}, {:sum=>111, :id=>2}, {:sum=>111, :id=>3}, {:sum=>111000, :id=>4}, {:sum=>111000, :id=>5}, {:sum=>111000, :id=>6}] @ds.select(:id){sum(:over, :args=>amount, :order=>id, :frame=>:all){}.as(:sum)}.all.should == [{:sum=>111111, :id=>1}, {:sum=>111111, :id=>2}, {:sum=>111111, :id=>3}, {:sum=>111111, :id=>4}, {:sum=>111111, :id=>5}, {:sum=>111111, :id=>6}] @ds.select(:id){sum(:over, :args=>amount, :partition=>group_id, :order=>id, :frame=>:rows){}.as(:sum)}.all.should == [{:sum=>1, :id=>1}, {:sum=>11, :id=>2}, {:sum=>111, :id=>3}, {:sum=>1000, :id=>4}, {:sum=>11000, :id=>5}, {:sum=>111000, :id=>6}] @ds.select(:id){sum(:over, :args=>amount, :order=>id, :frame=>:rows){}.as(:sum)}.all.should == [{:sum=>1, :id=>1}, {:sum=>11, :id=>2}, {:sum=>111, :id=>3}, {:sum=>1111, :id=>4}, {:sum=>11111, :id=>5}, {:sum=>111111, :id=>6}] end end end describe Sequel::SQL::Constants do before do @db = DB @ds = @db[:constants] @c = proc do |v| case v when Time v when DateTime, String Time.parse(v.to_s) else v end end @c2 = proc{|v| v.is_a?(Date) ? 
v : Date.parse(v) } end after do @db.drop_table?(:constants) end cspecify "should have working CURRENT_DATE", [:odbc, :mssql], [:jdbc, :sqlite], :oracle do @db.create_table!(:constants){Date :d} @ds.insert(:d=>Sequel::CURRENT_DATE) d = @c2[@ds.get(:d)] d.should be_a_kind_of(Date) d.to_s.should == Date.today.to_s end cspecify "should have working CURRENT_TIME", [:do, :mysql], [:jdbc, :sqlite], [:mysql2] do @db.create_table!(:constants){Time :t, :only_time=>true} @ds.insert(:t=>Sequel::CURRENT_TIME) (Time.now - @c[@ds.get(:t)]).should be_within(60).of(0) end cspecify "should have working CURRENT_TIMESTAMP", [:jdbc, :sqlite], [:swift] do @db.create_table!(:constants){DateTime :ts} @ds.insert(:ts=>Sequel::CURRENT_TIMESTAMP) (Time.now - @c[@ds.get(:ts)]).should be_within(60).of(0) end cspecify "should have working CURRENT_TIMESTAMP when used as a column default", [:jdbc, :sqlite], [:swift] do @db.create_table!(:constants){DateTime :ts, :default=>Sequel::CURRENT_TIMESTAMP} @ds.insert (Time.now - @c[@ds.get(:ts)]).should be_within(60).of(0) end end describe "Sequel::Dataset#import and #multi_insert" do before(:all) do @db = DB @db.create_table!(:imp){Integer :i} @ids = @db[:imp].order(:i) end before do @ids.delete end after(:all) do @db.drop_table?(:imp) end it "should import with multi_insert and an array of hashes" do @ids.multi_insert([{:i=>10}, {:i=>20}]) @ids.all.should == [{:i=>10}, {:i=>20}] end it "should import with an array of arrays of values" do @ids.import([:i], [[10], [20]]) @ids.all.should == [{:i=>10}, {:i=>20}] end it "should import with a dataset" do @db.create_table!(:exp2){Integer :i} @db[:exp2].import([:i], [[10], [20]]) @ids.import([:i], @db[:exp2]) @ids.all.should == [{:i=>10}, {:i=>20}] @db.drop_table(:exp2) end it "should have import work with the :slice_size option" do @ids.import([:i], [[10], [20], [30]], :slice_size=>1) @ids.all.should == [{:i=>10}, {:i=>20}, {:i=>30}] @ids.delete @ids.import([:i], [[10], [20], [30]], :slice_size=>2) @ids.all.should == [{:i=>10}, {:i=>20}, {:i=>30}] @ids.delete @ids.import([:i], [[10], [20], [30]], :slice_size=>3) @ids.all.should == [{:i=>10}, {:i=>20}, {:i=>30}] end end describe "Sequel::Dataset#import and #multi_insert :return=>:primary_key " do before do @db = DB @db.create_table!(:imp){primary_key :id; Integer :i} @ds = @db[:imp] end after do @db.drop_table?(:imp) end specify "should return primary key values" do @ds.multi_insert([{:i=>10}, {:i=>20}, {:i=>30}], :return=>:primary_key).should == [1, 2, 3] @ds.import([:i], [[40], [50], [60]], :return=>:primary_key).should == [4, 5, 6] @ds.order(:id).map([:id, :i]).should == [[1, 10], [2, 20], [3, 30], [4, 40], [5, 50], [6, 60]] end specify "should return primary key values when :slice is used" do @ds.multi_insert([{:i=>10}, {:i=>20}, {:i=>30}], :return=>:primary_key, :slice=>2).should == [1, 2, 3] @ds.import([:i], [[40], [50], [60]], :return=>:primary_key, :slice=>2).should == [4, 5, 6] @ds.order(:id).map([:id, :i]).should == [[1, 10], [2, 20], [3, 30], [4, 40], [5, 50], [6, 60]] end end describe "Sequel::Dataset convenience methods" do before(:all) do @db = DB @db.create_table!(:a){Integer :a; Integer :b; Integer :c} @ds = @db[:a] @ds.insert(1, 3, 5) @ds.insert(1, 3, 6) @ds.insert(1, 4, 5) @ds.insert(2, 3, 5) @ds.insert(2, 4, 6) end after(:all) do @db.drop_table?(:a) end it "#group_rollup should include hierarchy of groupings" do @ds.group_by(:a).group_rollup.select_map([:a, Sequel.function(:sum, :b).cast(Integer).as(:b), Sequel.function(:sum, :c).cast(Integer).as(:c)]).sort_by{|x| 
x.inspect}.should == [[1, 10, 16], [2, 7, 11], [nil, 17, 27]] @ds.group_by(:a, :b).group_rollup.select_map([:a, :b, Sequel.function(:sum, :c).cast(Integer).as(:c)]).sort_by{|x| x.inspect}.should == [[1, 3, 11], [1, 4, 5], [1, nil, 16], [2, 3, 5], [2, 4, 6], [2, nil, 11], [nil, nil, 27]] end if DB.dataset.supports_group_rollup? it "#group_cube should include all combinations of groupings" do @ds.group_by(:a).group_cube.select_map([:a, Sequel.function(:sum, :b).cast(Integer).as(:b), Sequel.function(:sum, :c).cast(Integer).as(:c)]).sort_by{|x| x.inspect}.should == [[1, 10, 16], [2, 7, 11], [nil, 17, 27]] @ds.group_by(:a, :b).group_cube.select_map([:a, :b, Sequel.function(:sum, :c).cast(Integer).as(:c)]).sort_by{|x| x.inspect}.should == [[1, 3, 11], [1, 4, 5], [1, nil, 16], [2, 3, 5], [2, 4, 6], [2, nil, 11], [nil, 3, 16], [nil, 4, 11], [nil, nil, 27]] end if DB.dataset.supports_group_cube? end describe "Sequel::Dataset convenience methods" do before(:all) do @db = DB @db.create_table!(:a){Integer :a; Integer :b} @ds = @db[:a].order(:a) end before do @ds.delete end after(:all) do @db.drop_table?(:a) end it "#[]= should update matching rows" do @ds.insert(20, 10) @ds.extension(:sequel_3_dataset_methods)[:a=>20] = {:b=>30} @ds.all.should == [{:a=>20, :b=>30}] end it "#empty? should return whether the dataset returns no rows" do @ds.empty?.should == true @ds.insert(20, 10) @ds.empty?.should == false end it "#empty? should work correctly for datasets with limits" do ds = @ds.limit(1) ds.empty?.should == true ds.insert(20, 10) ds.empty?.should == false end it "#empty? should work correctly for datasets with limits and offsets" do ds = @ds.limit(1, 1) ds.empty?.should == true ds.insert(20, 10) ds.empty?.should == true ds.insert(20, 10) ds.empty?.should == false end it "#group_and_count should return a grouping by count" do @ds.group_and_count(:a).order{count(:a)}.all.should == [] @ds.insert(20, 10) @ds.group_and_count(:a).order{count(:a)}.all.each{|h| h[:count] = h[:count].to_i}.should == [{:a=>20, :count=>1}] @ds.insert(20, 30) @ds.group_and_count(:a).order{count(:a)}.all.each{|h| h[:count] = h[:count].to_i}.should == [{:a=>20, :count=>2}] @ds.insert(30, 30) @ds.group_and_count(:a).order{count(:a)}.all.each{|h| h[:count] = h[:count].to_i}.should == [{:a=>30, :count=>1}, {:a=>20, :count=>2}] end it "#group_and_count should support column aliases" do @ds.group_and_count(:a___c).order{count(:a)}.all.should == [] @ds.insert(20, 10) @ds.group_and_count(:a___c).order{count(:a)}.all.each{|h| h[:count] = h[:count].to_i}.should == [{:c=>20, :count=>1}] @ds.insert(20, 30) @ds.group_and_count(:a___c).order{count(:a)}.all.each{|h| h[:count] = h[:count].to_i}.should == [{:c=>20, :count=>2}] @ds.insert(30, 30) @ds.group_and_count(:a___c).order{count(:a)}.all.each{|h| h[:count] = h[:count].to_i}.should == [{:c=>30, :count=>1}, {:c=>20, :count=>2}] end specify "#range should return the range between the maximum and minimum values" do @ds = @ds.unordered @ds.insert(20, 10) @ds.insert(30, 10) @ds.range(:a).should == (20..30) @ds.range(:b).should == (10..10) end it "#interval should return the difference between the maximum and minimum values" do @ds = @ds.unordered @ds.insert(20, 10) @ds.insert(30, 10) @ds.interval(:a).to_i.should == 10 @ds.interval(:b).to_i.should == 0 end end describe "Sequel::Dataset main SQL methods" do before(:all) do @db = DB @db.create_table!(:d){Integer :a; Integer :b} @ds = @db[:d].order(:a) end before do @ds.delete end after(:all) do @db.drop_table?(:d) end it "#exists should return a
usable exists clause" do @ds.filter(@db[:d___c].filter(:c__a=>:d__b).exists).all.should == [] @ds.insert(20, 30) @ds.insert(10, 20) @ds.filter(@db[:d___c].filter(:c__a=>:d__b).exists).all.should == [{:a=>10, :b=>20}] end it "#filter and #exclude should work with placeholder strings" do @ds.insert(20, 30) @ds.filter("a > ?", 15).all.should == [{:a=>20, :b=>30}] @ds.exclude("b < ?", 15).all.should == [{:a=>20, :b=>30}] @ds.filter("b < ?", 15).invert.all.should == [{:a=>20, :b=>30}] end it "#and and #or should work correctly" do @ds.insert(20, 30) @ds.filter(:a=>20).and(:b=>30).all.should == [{:a=>20, :b=>30}] @ds.filter(:a=>20).and(:b=>15).all.should == [] @ds.filter(:a=>20).or(:b=>15).all.should == [{:a=>20, :b=>30}] @ds.filter(:a=>10).or(:b=>15).all.should == [] end it "#select_group should work correctly" do @ds.unordered! @ds.select_group(:a).all.should == [] @ds.insert(20, 30) @ds.select_group(:a).all.should == [{:a=>20}] @ds.select_group(:b).all.should == [{:b=>30}] @ds.insert(20, 40) @ds.select_group(:a).all.should == [{:a=>20}] @ds.order(:b).select_group(:b).all.should == [{:b=>30}, {:b=>40}] end it "#select_group should work correctly when aliasing" do @ds.unordered! @ds.insert(20, 30) @ds.select_group(:b___c).all.should == [{:c=>30}] end it "#having should work correctly" do @ds.unordered! @ds.select{[b, max(a).as(c)]}.group(:b).having{max(a) > 30}.all.should == [] @ds.insert(20, 30) @ds.select{[b, max(a).as(c)]}.group(:b).having{max(a) > 30}.all.should == [] @ds.insert(40, 20) @ds.select{[b, max(a).as(c)]}.group(:b).having{max(a) > 30}.all.each{|h| h[:c] = h[:c].to_i}.should == [{:b=>20, :c=>40}] end cspecify "#having should work without a previous group", :sqlite do @ds.unordered! @ds.select{max(a).as(c)}.having{max(a) > 30}.all.should == [] @ds.insert(20, 30) @ds.select{max(a).as(c)}.having{max(a) > 30}.all.should == [] @ds.insert(40, 20) @ds.select{max(a).as(c)}.having{max(a) > 30}.all.each{|h| h[:c] = h[:c].to_i}.should == [{:c=>40}] end end describe "Sequel::Dataset convenience methods" do before(:all) do @db = DB @db.create_table!(:a){Integer :a; Integer :b; Integer :c; Integer :d} @ds = @db[:a].order(:a) end before do @ds.delete @ds.insert(1, 2, 3, 4) @ds.insert(5, 6, 7, 8) end after(:all) do @db.drop_table?(:a) end specify "should have working #map" do @ds.map(:a).should == [1, 5] @ds.map(:b).should == [2, 6] @ds.map([:a, :b]).should == [[1, 2], [5, 6]] end specify "should have working #to_hash" do @ds.to_hash(:a).should == {1=>{:a=>1, :b=>2, :c=>3, :d=>4}, 5=>{:a=>5, :b=>6, :c=>7, :d=>8}} @ds.to_hash(:b).should == {2=>{:a=>1, :b=>2, :c=>3, :d=>4}, 6=>{:a=>5, :b=>6, :c=>7, :d=>8}} @ds.to_hash([:a, :b]).should == {[1, 2]=>{:a=>1, :b=>2, :c=>3, :d=>4}, [5, 6]=>{:a=>5, :b=>6, :c=>7, :d=>8}} @ds.to_hash(:a, :b).should == {1=>2, 5=>6} @ds.to_hash([:a, :c], :b).should == {[1, 3]=>2, [5, 7]=>6} @ds.to_hash(:a, [:b, :c]).should == {1=>[2, 3], 5=>[6, 7]} @ds.to_hash([:a, :c], [:b, :d]).should == {[1, 3]=>[2, 4], [5, 7]=>[6, 8]} end specify "should have working #to_hash_groups" do ds = @ds.order(*@ds.columns) ds.insert(1, 2, 3, 9) ds.to_hash_groups(:a).should == {1=>[{:a=>1, :b=>2, :c=>3, :d=>4}, {:a=>1, :b=>2, :c=>3, :d=>9}], 5=>[{:a=>5, :b=>6, :c=>7, :d=>8}]} ds.to_hash_groups(:b).should == {2=>[{:a=>1, :b=>2, :c=>3, :d=>4}, {:a=>1, :b=>2, :c=>3, :d=>9}], 6=>[{:a=>5, :b=>6, :c=>7, :d=>8}]} ds.to_hash_groups([:a, :b]).should == {[1, 2]=>[{:a=>1, :b=>2, :c=>3, :d=>4}, {:a=>1, :b=>2, :c=>3, :d=>9}], [5, 6]=>[{:a=>5, :b=>6, :c=>7, :d=>8}]} ds.to_hash_groups(:a, :d).should == {1=>[4, 
9], 5=>[8]} ds.to_hash_groups([:a, :c], :d).should == {[1, 3]=>[4, 9], [5, 7]=>[8]} ds.to_hash_groups(:a, [:b, :d]).should == {1=>[[2, 4], [2, 9]], 5=>[[6, 8]]} ds.to_hash_groups([:a, :c], [:b, :d]).should == {[1, 3]=>[[2, 4], [2, 9]], [5, 7]=>[[6, 8]]} end specify "should have working #select_map" do @ds.select_map(:a).should == [1, 5] @ds.select_map(:b).should == [2, 6] @ds.select_map([:a]).should == [[1], [5]] @ds.select_map([:a, :b]).should == [[1, 2], [5, 6]] @ds.select_map(:a___e).should == [1, 5] @ds.select_map(:b___e).should == [2, 6] @ds.select_map([:a___e, :b___f]).should == [[1, 2], [5, 6]] @ds.select_map([:a__a___e, :a__b___f]).should == [[1, 2], [5, 6]] @ds.select_map([Sequel.expr(:a__a).as(:e), Sequel.expr(:a__b).as(:f)]).should == [[1, 2], [5, 6]] @ds.select_map([Sequel.qualify(:a, :a).as(:e), Sequel.qualify(:a, :b).as(:f)]).should == [[1, 2], [5, 6]] @ds.select_map([Sequel.identifier(:a).qualify(:a).as(:e), Sequel.qualify(:a, :b).as(:f)]).should == [[1, 2], [5, 6]] end specify "should have working #select_order_map" do @ds.select_order_map(:a).should == [1, 5] @ds.select_order_map(Sequel.desc(:a__b)).should == [6, 2] @ds.select_order_map(Sequel.desc(:a__b___e)).should == [6, 2] @ds.select_order_map(Sequel.qualify(:a, :b).as(:e)).should == [2, 6] @ds.select_order_map([:a]).should == [[1], [5]] @ds.select_order_map([Sequel.desc(:a), :b]).should == [[5, 6], [1, 2]] @ds.select_order_map(:a___e).should == [1, 5] @ds.select_order_map(:b___e).should == [2, 6] @ds.select_order_map([Sequel.desc(:a___e), :b___f]).should == [[5, 6], [1, 2]] @ds.select_order_map([Sequel.desc(:a__a___e), :a__b___f]).should == [[5, 6], [1, 2]] @ds.select_order_map([Sequel.desc(:a__a), Sequel.expr(:a__b).as(:f)]).should == [[5, 6], [1, 2]] @ds.select_order_map([Sequel.qualify(:a, :a).desc, Sequel.qualify(:a, :b).as(:f)]).should == [[5, 6], [1, 2]] @ds.select_order_map([Sequel.identifier(:a).qualify(:a).desc, Sequel.qualify(:a, :b).as(:f)]).should == [[5, 6], [1, 2]] end specify "should have working #select_hash" do @ds.select_hash(:a, :b).should == {1=>2, 5=>6} @ds.select_hash(:a__a___e, :b).should == {1=>2, 5=>6} @ds.select_hash(Sequel.expr(:a__a).as(:e), :b).should == {1=>2, 5=>6} @ds.select_hash(Sequel.qualify(:a, :a).as(:e), :b).should == {1=>2, 5=>6} @ds.select_hash(Sequel.identifier(:a).qualify(:a).as(:e), :b).should == {1=>2, 5=>6} @ds.select_hash([:a, :c], :b).should == {[1, 3]=>2, [5, 7]=>6} @ds.select_hash(:a, [:b, :c]).should == {1=>[2, 3], 5=>[6, 7]} @ds.select_hash([:a, :c], [:b, :d]).should == {[1, 3]=>[2, 4], [5, 7]=>[6, 8]} end specify "should have working #select_hash_groups" do ds = @ds.order(*@ds.columns) ds.insert(1, 2, 3, 9) ds.select_hash_groups(:a, :d).should == {1=>[4, 9], 5=>[8]} ds.select_hash_groups(:a__a___e, :d).should == {1=>[4, 9], 5=>[8]} ds.select_hash_groups(Sequel.expr(:a__a).as(:e), :d).should == {1=>[4, 9], 5=>[8]} ds.select_hash_groups(Sequel.qualify(:a, :a).as(:e), :d).should == {1=>[4, 9], 5=>[8]} ds.select_hash_groups(Sequel.identifier(:a).qualify(:a).as(:e), :d).should == {1=>[4, 9], 5=>[8]} ds.select_hash_groups([:a, :c], :d).should == {[1, 3]=>[4, 9], [5, 7]=>[8]} ds.select_hash_groups(:a, [:b, :d]).should == {1=>[[2, 4], [2, 9]], 5=>[[6, 8]]} ds.select_hash_groups([:a, :c], [:b, :d]).should == {[1, 3]=>[[2, 4], [2, 9]], [5, 7]=>[[6, 8]]} end end describe "Sequel::Dataset DSL support" do before(:all) do @db = DB @db.create_table!(:a){Integer :a; Integer :b} @ds = @db[:a].order(:a) end before do @ds.delete end after(:all) do @db.drop_table?(:a) end it "should 
work with standard mathematical operators" do @ds.insert(20, 10) @ds.get{a + b}.to_i.should == 30 @ds.get{a - b}.to_i.should == 10 @ds.get{a * b}.to_i.should == 200 @ds.get{a / b}.to_i.should == 2 end cspecify "should work with bitwise shift operators", :derby do @ds.insert(3, 2) @ds.get{a.sql_number << b}.to_i.should == 12 @ds.get{a.sql_number >> b}.to_i.should == 0 @ds.get{a.sql_number << b << 1}.to_i.should == 24 @ds.delete @ds.insert(3, 1) @ds.get{a.sql_number << b}.to_i.should == 6 @ds.get{a.sql_number >> b}.to_i.should == 1 @ds.get{a.sql_number >> b >> 1}.to_i.should == 0 end cspecify "should work with bitwise AND and OR operators", :derby do @ds.insert(3, 5) @ds.get{a.sql_number | b}.to_i.should == 7 @ds.get{a.sql_number & b}.to_i.should == 1 @ds.get{a.sql_number | b | 8}.to_i.should == 15 @ds.get{a.sql_number & b & 8}.to_i.should == 0 end specify "should work with the bitwise complement operator" do @ds.insert(-3, 3) @ds.get{~a.sql_number}.to_i.should == 2 @ds.get{~b.sql_number}.to_i.should == -4 end cspecify "should work with the bitwise xor operator", :derby do @ds.insert(3, 5) @ds.get{a.sql_number ^ b}.to_i.should == 6 @ds.get{a.sql_number ^ b ^ 1}.to_i.should == 7 end specify "should work with the modulus operator" do @ds.insert(3, 5) @ds.get{a.sql_number % 4}.to_i.should == 3 @ds.get{b.sql_number % 4}.to_i.should == 1 @ds.get{a.sql_number % 4 % 2}.to_i.should == 1 end specify "should work with inequality operators" do @ds.insert(10, 11) @ds.insert(11, 11) @ds.insert(20, 19) @ds.insert(20, 20) @ds.filter{a > b}.select_order_map(:a).should == [20] @ds.filter{a >= b}.select_order_map(:a).should == [11, 20, 20] @ds.filter{a < b}.select_order_map(:a).should == [10] @ds.filter{a <= b}.select_order_map(:a).should == [10, 11, 20] end specify "should work with casting and string concatenation" do @ds.insert(20, 20) @ds.get{Sequel.cast(a, String).sql_string + Sequel.cast(b, String)}.should == '2020' end it "should work with ordering" do @ds.insert(10, 20) @ds.insert(20, 10) @ds.order(:a, :b).all.should == [{:a=>10, :b=>20}, {:a=>20, :b=>10}] @ds.order(Sequel.asc(:a), Sequel.asc(:b)).all.should == [{:a=>10, :b=>20}, {:a=>20, :b=>10}] @ds.order(Sequel.desc(:a), Sequel.desc(:b)).all.should == [{:a=>20, :b=>10}, {:a=>10, :b=>20}] end it "should work with qualifying" do @ds.insert(10, 20) @ds.get(:a__b).should == 20 @ds.get{a__b}.should == 20 @ds.get(Sequel.qualify(:a, :b)).should == 20 end it "should work with aliasing" do @ds.insert(10, 20) @ds.get(:a__b___c).should == 20 @ds.get{a__b.as(c)}.should == 20 @ds.get(Sequel.qualify(:a, :b).as(:c)).should == 20 @ds.get(Sequel.as(:b, :c)).should == 20 end it "should work with selecting all columns of a table" do @ds.insert(20, 10) @ds.select_all(:a).all.should == [{:a=>20, :b=>10}] end it "should work with ranges as hash values" do @ds.insert(20, 10) @ds.filter(:a=>(10..30)).all.should == [{:a=>20, :b=>10}] @ds.filter(:a=>(25..30)).all.should == [] @ds.filter(:a=>(10..15)).all.should == [] @ds.exclude(:a=>(10..30)).all.should == [] @ds.exclude(:a=>(25..30)).all.should == [{:a=>20, :b=>10}] @ds.exclude(:a=>(10..15)).all.should == [{:a=>20, :b=>10}] end it "should work with nil as hash value" do @ds.insert(20, nil) @ds.filter(:a=>nil).all.should == [] @ds.filter(:b=>nil).all.should == [{:a=>20, :b=>nil}] @ds.exclude(:b=>nil).all.should == [] @ds.exclude(:a=>nil).all.should == [{:a=>20, :b=>nil}] end it "should work with arrays as hash values" do @ds.insert(20, 10) @ds.filter(:a=>[10]).all.should == [] @ds.filter(:a=>[20, 10]).all.should ==
[{:a=>20, :b=>10}] @ds.exclude(:a=>[10]).all.should == [{:a=>20, :b=>10}] @ds.exclude(:a=>[20, 10]).all.should == [] end it "should work with CASE statements" do @ds.insert(20, 10) @ds.filter(Sequel.case({{:a=>20}=>20}, 0) > 0).all.should == [{:a=>20, :b=>10}] @ds.filter(Sequel.case({{:a=>15}=>20}, 0) > 0).all.should == [] @ds.filter(Sequel.case({20=>20}, 0, :a) > 0).all.should == [{:a=>20, :b=>10}] @ds.filter(Sequel.case({15=>20}, 0, :a) > 0).all.should == [] end specify "should work with multiple value arrays" do @ds.insert(20, 10) @ds.quote_identifiers = false @ds.filter([:a, :b]=>[[20, 10]]).all.should == [{:a=>20, :b=>10}] @ds.filter([:a, :b]=>[[10, 20]]).all.should == [] @ds.filter([:a, :b]=>[[20, 10], [1, 2]]).all.should == [{:a=>20, :b=>10}] @ds.filter([:a, :b]=>[[10, 10], [20, 20]]).all.should == [] @ds.exclude([:a, :b]=>[[20, 10]]).all.should == [] @ds.exclude([:a, :b]=>[[10, 20]]).all.should == [{:a=>20, :b=>10}] @ds.exclude([:a, :b]=>[[20, 10], [1, 2]]).all.should == [] @ds.exclude([:a, :b]=>[[10, 10], [20, 20]]).all.should == [{:a=>20, :b=>10}] end it "should work with IN/NOT IN with datasets" do @ds.insert(20, 10) ds = @ds.unordered @ds.quote_identifiers = false @ds.filter(:a=>ds.select(:a)).all.should == [{:a=>20, :b=>10}] @ds.filter(:a=>ds.select(:a).where(:a=>15)).all.should == [] @ds.exclude(:a=>ds.select(:a)).all.should == [] @ds.exclude(:a=>ds.select(:a).where(:a=>15)).all.should == [{:a=>20, :b=>10}] @ds.filter([:a, :b]=>ds.select(:a, :b)).all.should == [{:a=>20, :b=>10}] @ds.filter([:a, :b]=>ds.select(:b, :a)).all.should == [] @ds.exclude([:a, :b]=>ds.select(:a, :b)).all.should == [] @ds.exclude([:a, :b]=>ds.select(:b, :a)).all.should == [{:a=>20, :b=>10}] @ds.filter([:a, :b]=>ds.select(:a, :b).where(:a=>15)).all.should == [] @ds.exclude([:a, :b]=>ds.select(:a, :b).where(:a=>15)).all.should == [{:a=>20, :b=>10}] end specify "should work with empty arrays" do @ds.insert(20, 10) @ds.filter(:a=>[]).all.should == [] @ds.exclude(:a=>[]).all.should == [{:a=>20, :b=>10}] @ds.filter([:a, :b]=>[]).all.should == [] @ds.exclude([:a, :b]=>[]).all.should == [{:a=>20, :b=>10}] end specify "should work with empty arrays with nulls" do @ds.insert(nil, nil) @ds.filter(:a=>[]).all.should == [] @ds.exclude(:a=>[]).all.should == [] @ds.filter([:a, :b]=>[]).all.should == [] @ds.exclude([:a, :b]=>[]).all.should == [] unless Sequel.guarded?(:mssql, :oracle, :db2) # Some databases don't like boolean results in the select list pr = proc{|r| r.is_a?(Integer) ?
(r != 0) : r} pr[@ds.get(Sequel.expr(:a=>[]))].should == nil pr[@ds.get(~Sequel.expr(:a=>[]))].should == nil pr[@ds.get(Sequel.expr([:a, :b]=>[]))].should == nil pr[@ds.get(~Sequel.expr([:a, :b]=>[]))].should == nil end end specify "should work with empty arrays with nulls and the empty_array_ignore_nulls extension" do ds = @ds.extension(:empty_array_ignore_nulls) ds.insert(nil, nil) ds.filter(:a=>[]).all.should == [] ds.exclude(:a=>[]).all.should == [{:a=>nil, :b=>nil}] ds.filter([:a, :b]=>[]).all.should == [] ds.exclude([:a, :b]=>[]).all.should == [{:a=>nil, :b=>nil}] unless Sequel.guarded?(:mssql, :oracle, :db2) # Some databases don't like boolean results in the select list pr = proc{|r| r.is_a?(Integer) ? (r != 0) : r} pr[ds.get(Sequel.expr(:a=>[]))].should == false pr[ds.get(~Sequel.expr(:a=>[]))].should == true pr[ds.get(Sequel.expr([:a, :b]=>[]))].should == false pr[ds.get(~Sequel.expr([:a, :b]=>[]))].should == true end end it "should work with multiple conditions" do @ds.insert(20, 10) @ds.filter(:a=>20, :b=>10).all.should == [{:a=>20, :b=>10}] @ds.filter([[:a, 20], [:b, 10]]).all.should == [{:a=>20, :b=>10}] @ds.filter({:a=>20}, {:b=>10}).all.should == [{:a=>20, :b=>10}] @ds.filter(Sequel.|({:a=>20}, {:b=>5})).all.should == [{:a=>20, :b=>10}] @ds.filter(Sequel.~(:a=>10)).all.should == [{:a=>20, :b=>10}] end end describe "SQL Extract Function" do before do @db = DB @db.create_table!(:a){DateTime :a} @ds = @db[:a].order(:a) end after do @db.drop_table?(:a) end specify "should return the part of the datetime asked for" do t = Time.now def @ds.supports_timestamp_timezones?() false end @ds.insert(t) @ds.get{a.extract(:year)}.should == t.year @ds.get{a.extract(:month)}.should == t.month @ds.get{a.extract(:day)}.should == t.day @ds.get{a.extract(:hour)}.should == t.hour @ds.get{a.extract(:minute)}.should == t.min @ds.get{a.extract(:second)}.to_i.should == t.sec end end describe "Dataset string methods" do before(:all) do @db = DB csc = {} cic = {} csc[:collate] = @db.dataset_class::CASE_SENSITIVE_COLLATION if defined? @db.dataset_class::CASE_SENSITIVE_COLLATION cic[:collate] = @db.dataset_class::CASE_INSENSITIVE_COLLATION if defined?
@db.dataset_class::CASE_INSENSITIVE_COLLATION @db.create_table!(:a) do String :a, csc String :b, cic end @ds = @db[:a].order(:a) end before do @ds.delete end after(:all) do @db.drop_table?(:a) end it "#grep should return matching rows" do @ds.insert('foo', 'bar') @ds.grep(:a, 'foo').all.should == [{:a=>'foo', :b=>'bar'}] @ds.grep(:b, 'foo').all.should == [] @ds.grep(:b, 'bar').all.should == [{:a=>'foo', :b=>'bar'}] @ds.grep(:a, 'bar').all.should == [] @ds.grep([:a, :b], %w'foo bar').all.should == [{:a=>'foo', :b=>'bar'}] @ds.grep([:a, :b], %w'boo far').all.should == [] end it "#grep should work with :all_patterns and :all_columns options" do @ds.insert('foo bar', ' ') @ds.insert('foo d', 'bar') @ds.insert('foo e', ' ') @ds.insert(' ', 'bar') @ds.insert('foo f', 'baz') @ds.insert('foo baz', 'bar baz') @ds.insert('foo boo', 'boo foo') @ds.grep([:a, :b], %w'%foo% %bar%', :all_patterns=>true).all.should == [{:a=>'foo bar', :b=>' '}, {:a=>'foo baz', :b=>'bar baz'}, {:a=>'foo d', :b=>'bar'}] @ds.grep([:a, :b], %w'%foo% %bar% %blob%', :all_patterns=>true).all.should == [] @ds.grep([:a, :b], %w'%bar% %foo%', :all_columns=>true).all.should == [{:a=>"foo baz", :b=>"bar baz"}, {:a=>"foo boo", :b=>"boo foo"}, {:a=>"foo d", :b=>"bar"}] @ds.grep([:a, :b], %w'%baz%', :all_columns=>true).all.should == [{:a=>'foo baz', :b=>'bar baz'}] @ds.grep([:a, :b], %w'%baz% %foo%', :all_columns=>true, :all_patterns=>true).all.should == [] @ds.grep([:a, :b], %w'%boo% %foo%', :all_columns=>true, :all_patterns=>true).all.should == [{:a=>'foo boo', :b=>'boo foo'}] end it "#like should return matching rows" do @ds.insert('foo', 'bar') @ds.filter(Sequel.expr(:a).like('foo')).all.should == [{:a=>'foo', :b=>'bar'}] @ds.filter(Sequel.expr(:a).like('bar')).all.should == [] @ds.filter(Sequel.expr(:a).like('foo', 'bar')).all.should == [{:a=>'foo', :b=>'bar'}] @ds.exclude(Sequel.expr(:a).like('foo')).all.should == [] @ds.exclude(Sequel.expr(:a).like('bar')).all.should == [{:a=>'foo', :b=>'bar'}] @ds.exclude(Sequel.expr(:a).like('foo', 'bar')).all.should == [] end it "#like should be case sensitive" do @ds.insert('foo', 'bar') @ds.filter(Sequel.expr(:a).like('Foo')).all.should == [] @ds.filter(Sequel.expr(:b).like('baR')).all.should == [] @ds.filter(Sequel.expr(:a).like('FOO', 'BAR')).all.should == [] @ds.exclude(Sequel.expr(:a).like('Foo')).all.should == [{:a=>'foo', :b=>'bar'}] @ds.exclude(Sequel.expr(:b).like('baR')).all.should == [{:a=>'foo', :b=>'bar'}] @ds.exclude(Sequel.expr(:a).like('FOO', 'BAR')).all.should == [{:a=>'foo', :b=>'bar'}] end it "#ilike should return matching rows, in a case insensitive manner" do @ds.insert('foo', 'bar') @ds.filter(Sequel.expr(:a).ilike('Foo')).all.should == [{:a=>'foo', :b=>'bar'}] @ds.filter(Sequel.expr(:a).ilike('baR')).all.should == [] @ds.filter(Sequel.expr(:a).ilike('FOO', 'BAR')).all.should == [{:a=>'foo', :b=>'bar'}] @ds.exclude(Sequel.expr(:a).ilike('Foo')).all.should == [] @ds.exclude(Sequel.expr(:a).ilike('baR')).all.should == [{:a=>'foo', :b=>'bar'}] @ds.exclude(Sequel.expr(:a).ilike('FOO', 'BAR')).all.should == [] end it "#escape_like should escape any metacharacters" do @ds.insert('foo', 'bar') @ds.insert('foo.', 'bar..') @ds.insert('foo\\..', 'bar\\..') @ds.insert('foo\\_', 'bar\\%') @ds.insert('foo_', 'bar%') @ds.insert('foo_.', 'bar%.') @ds.insert('foo_..', 'bar%..') @ds.insert('[f#*?oo_]', '[bar%]') @ds.filter(Sequel.expr(:a).like(@ds.escape_like('foo_'))).select_order_map(:a).should == ['foo_'] 
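# Dataset#escape_like escapes the LIKE metacharacters (%, _, and the escape
# character itself) so they match literally; escape_like('bar%') therefore
# matches only the literal string 'bar%', while appending an unescaped '%'
# or '_' afterwards restores wildcard matching, as the assertions below show.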
@ds.filter(Sequel.expr(:b).like(@ds.escape_like('bar%'))).select_order_map(:b).should == ['bar%'] @ds.filter(Sequel.expr(:a).like(@ds.escape_like('foo\\_'))).select_order_map(:a).should == ['foo\\_'] @ds.filter(Sequel.expr(:b).like(@ds.escape_like('bar\\%'))).select_order_map(:b).should == ['bar\\%'] @ds.filter(Sequel.expr(:a).like(@ds.escape_like('[f#*?oo_]'))).select_order_map(:a).should == ['[f#*?oo_]'] @ds.filter(Sequel.expr(:b).like(@ds.escape_like('[bar%]'))).select_order_map(:b).should == ['[bar%]'] @ds.filter(Sequel.expr(:b).like("#{@ds.escape_like('bar%')}_")).select_order_map(:b).should == ['bar%.'] @ds.filter(Sequel.expr(:b).like("#{@ds.escape_like('bar%')}%")).select_order_map(:b).should == ['bar%', 'bar%.', 'bar%..'] @ds.filter(Sequel.expr(:a).ilike(@ds.escape_like('Foo_'))).select_order_map(:a).should == ['foo_'] @ds.filter(Sequel.expr(:b).ilike(@ds.escape_like('Bar%'))).select_order_map(:b).should == ['bar%'] @ds.filter(Sequel.expr(:a).ilike(@ds.escape_like('Foo\\_'))).select_order_map(:a).should == ['foo\\_'] @ds.filter(Sequel.expr(:b).ilike(@ds.escape_like('Bar\\%'))).select_order_map(:b).should == ['bar\\%'] @ds.filter(Sequel.expr(:a).ilike(@ds.escape_like('[F#*?oo_]'))).select_order_map(:a).should == ['[f#*?oo_]'] @ds.filter(Sequel.expr(:b).ilike(@ds.escape_like('[Bar%]'))).select_order_map(:b).should == ['[bar%]'] @ds.filter(Sequel.expr(:b).ilike("#{@ds.escape_like('Bar%')}_")).select_order_map(:b).should == ['bar%.'] @ds.filter(Sequel.expr(:b).ilike("#{@ds.escape_like('Bar%')}%")).select_order_map(:b).should == ['bar%', 'bar%.', 'bar%..'] end if DB.dataset.supports_regexp? it "#like with regexp should return matching rows" do @ds.insert('foo', 'bar') @ds.filter(Sequel.expr(:a).like(/fo/)).all.should == [{:a=>'foo', :b=>'bar'}] @ds.filter(Sequel.expr(:a).like(/fo$/)).all.should == [] @ds.filter(Sequel.expr(:a).like(/fo/, /ar/)).all.should == [{:a=>'foo', :b=>'bar'}] @ds.exclude(Sequel.expr(:a).like(/fo/)).all.should == [] @ds.exclude(Sequel.expr(:a).like(/fo$/)).all.should == [{:a=>'foo', :b=>'bar'}] @ds.exclude(Sequel.expr(:a).like(/fo/, /ar/)).all.should == [] end it "#like with regexp should be case sensitive if regexp is case sensitive" do @ds.insert('foo', 'bar') @ds.filter(Sequel.expr(:a).like(/Fo/)).all.should == [] @ds.filter(Sequel.expr(:b).like(/baR/)).all.should == [] @ds.filter(Sequel.expr(:a).like(/FOO/, /BAR/)).all.should == [] @ds.exclude(Sequel.expr(:a).like(/Fo/)).all.should == [{:a=>'foo', :b=>'bar'}] @ds.exclude(Sequel.expr(:b).like(/baR/)).all.should == [{:a=>'foo', :b=>'bar'}] @ds.exclude(Sequel.expr(:a).like(/FOO/, /BAR/)).all.should == [{:a=>'foo', :b=>'bar'}] @ds.filter(Sequel.expr(:a).like(/Fo/i)).all.should == [{:a=>'foo', :b=>'bar'}] @ds.filter(Sequel.expr(:b).like(/baR/i)).all.should == [{:a=>'foo', :b=>'bar'}] @ds.filter(Sequel.expr(:a).like(/FOO/i, /BAR/i)).all.should == [{:a=>'foo', :b=>'bar'}] @ds.exclude(Sequel.expr(:a).like(/Fo/i)).all.should == [] @ds.exclude(Sequel.expr(:b).like(/baR/i)).all.should == [] @ds.exclude(Sequel.expr(:a).like(/FOO/i, /BAR/i)).all.should == [] end it "#ilike with regexp should return matching rows, in a case insensitive manner" do @ds.insert('foo', 'bar') @ds.filter(Sequel.expr(:a).ilike(/Fo/)).all.should == [{:a=>'foo', :b=>'bar'}] @ds.filter(Sequel.expr(:b).ilike(/baR/)).all.should == [{:a=>'foo', :b=>'bar'}] @ds.filter(Sequel.expr(:a).ilike(/FOO/, /BAR/)).all.should == [{:a=>'foo', :b=>'bar'}] @ds.exclude(Sequel.expr(:a).ilike(/Fo/)).all.should == [] @ds.exclude(Sequel.expr(:b).ilike(/baR/)).all.should == []
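# On databases where supports_regexp? is true (e.g. PostgreSQL and MySQL),
# passing a Regexp to #like/#ilike makes Sequel emit the database's regular
# expression operator instead of LIKE, so the matching semantics here come
# from the database's regexp engine, not Ruby's.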
@ds.exclude(Sequel.expr(:a).ilike(/FOO/, /BAR/)).all.should == [] end end it "should work with strings created with Sequel.join" do @ds.insert('foo', 'bar') @ds.get(Sequel.join([:a, "bar"])).should == 'foobar' @ds.get(Sequel.join(["foo", :b], ' ')).should == 'foo bar' end end describe "Dataset identifier methods" do before(:all) do class ::String def uprev upcase.reverse end end @db = DB @db.create_table!(:a){Integer :ab} @db[:a].insert(1) end before do @ds = @db[:a].order(:ab) end after(:all) do @db.drop_table?(:a) end specify "#identifier_output_method should change how identifiers are output" do @ds.identifier_output_method = :upcase @ds.first.should == {:AB=>1} @ds.identifier_output_method = :uprev @ds.first.should == {:BA=>1} end it "should work with a nil identifier_output_method" do @ds.identifier_output_method = nil [{:ab=>1}, {:AB=>1}].should include(@ds.first) end it "should work when not quoting identifiers" do @ds.quote_identifiers = false @ds.first.should == {:ab=>1} end end describe "Dataset defaults and overrides" do before(:all) do @db = DB @db.create_table!(:a){Integer :a} @ds = @db[:a].order(:a).extension(:set_overrides) end before do @ds.delete end after(:all) do @db.drop_table?(:a) end it "#set_defaults should set defaults that can be overridden" do @ds = @ds.set_defaults(:a=>10) @ds.insert @ds.insert(:a=>20) @ds.all.should == [{:a=>10}, {:a=>20}] end it "#set_overrides should set defaults that cannot be overridden" do @ds = @ds.set_overrides(:a=>10) @ds.insert @ds.insert(:a=>20) @ds.all.should == [{:a=>10}, {:a=>10}] end end if DB.dataset.supports_modifying_joins? describe "Modifying joined datasets" do before do @db = DB @db.create_table!(:a){Integer :a; Integer :d} @db.create_table!(:b){Integer :b; Integer :e} @db.create_table!(:c){Integer :c; Integer :f} @ds = @db.from(:a, :b).join(:c, {:c=>Sequel.identifier(:e)}, :qualify=>:symbol).where(:d=>:b, :f=>6) @db[:a].insert(1, 2) @db[:a].insert(3, 4) @db[:b].insert(2, 5) @db[:c].insert(5, 6) @db[:b].insert(4, 7) @db[:c].insert(7, 8) end after do @db.drop_table?(:a, :b, :c) end it "#update should allow updating joined datasets" do @ds.update(:a=>10) @ds.all.should == [{:c=>5, :b=>2, :a=>10, :d=>2, :e=>5, :f=>6}] @db[:a].order(:a).all.should == [{:a=>3, :d=>4}, {:a=>10, :d=>2}] @db[:b].order(:b).all.should == [{:b=>2, :e=>5}, {:b=>4, :e=>7}] @db[:c].order(:c).all.should == [{:c=>5, :f=>6}, {:c=>7, :f=>8}] end it "#delete should allow deleting from joined datasets" do @ds.delete @ds.all.should == [] @db[:a].order(:a).all.should == [{:a=>3, :d=>4}] @db[:b].order(:b).all.should == [{:b=>2, :e=>5}, {:b=>4, :e=>7}] @db[:c].order(:c).all.should == [{:c=>5, :f=>6}, {:c=>7, :f=>8}] end end end describe "Emulated functions" do before(:all) do @db = DB @db.create_table!(:a){String :a} @ds = @db[:a] end after(:all) do @db.drop_table?(:a) end after do @ds.delete end specify "Sequel.char_length should return the length of characters in the string" do @ds.get(Sequel.char_length(:a)).should == nil @ds.insert(:a=>'foo') @ds.get(Sequel.char_length(:a)).should == 3 # Check behavior with leading/trailing blanks @ds.update(:a=>' foo22 ') @ds.get(Sequel.char_length(:a)).should == 7 end specify "Sequel.trim should return the string with spaces trimmed from both sides" do @ds.get(Sequel.trim(:a)).should == nil @ds.insert(:a=>'foo') @ds.get(Sequel.trim(:a)).should == 'foo' # Check behavior with leading/trailing blanks @ds.update(:a=>' foo22 ') @ds.get(Sequel.trim(:a)).should == 'foo22' end end describe "Dataset replace" do before do 
DB.create_table(:items){Integer :id, :unique=>true; Integer :value} sqls = [] DB.loggers << Class.new{%w'info error'.each{|m| define_method(m){|sql| sqls << sql}}}.new @d = DB[:items] sqls.clear end after do DB.drop_table?(:items) end specify "should support arrays, datasets, and multiple values" do @d.replace([1, 2]) @d.all.should == [{:id=>1, :value=>2}] @d.replace(1, 2) @d.all.should == [{:id=>1, :value=>2}] @d.replace(@d) @d.all.should == [{:id=>1, :value=>2}] end specify "should create a record if the condition is not met" do @d.replace(:id => 111, :value => 333) @d.all.should == [{:id => 111, :value => 333}] end specify "should update a record if the condition is met" do @d << {:id => 111} @d.all.should == [{:id => 111, :value => nil}] @d.replace(:id => 111, :value => 333) @d.all.should == [{:id => 111, :value => 333}] end end if DB.dataset.supports_replace?
ruby-sequel-4.1.1/spec/integration/eager_loader_test.rb
require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper.rb') describe "Eagerly loading a tree structure" do before(:all) do DB.instance_variable_set(:@schemas, {}) DB.create_table!(:nodes) do primary_key :id foreign_key :parent_id, :nodes end class ::Node < Sequel::Model many_to_one :parent one_to_many :children, :key=>:parent_id # Only useful when eager loading many_to_one :ancestors, :eager_loader_key=>nil, :eager_loader=>(proc do |eo| # Handle cases where the root node has the same parent_id as primary_key # and also when it is NULL non_root_nodes = eo[:rows].reject do |n| if [nil, n.pk].include?(n.parent_id) # Make sure root nodes have their parent association set to nil n.associations[:parent] = nil true else false end end unless non_root_nodes.empty? id_map = {} # Create a map of parent_ids to nodes that have that parent id non_root_nodes.each{|n| (id_map[n.parent_id] ||= []) << n} # Doesn't cause an infinite loop, because when only the root node # is left, this is not called. Node.filter(Node.primary_key=>id_map.keys.sort).eager(:ancestors).all do |node| # Populate the parent association for each node id_map[node.pk].each{|n| n.associations[:parent] = node} end end end) many_to_one :descendants, :eager_loader_key=>nil, :eager_loader=>(proc do |eo| id_map = {} eo[:rows].each do |n| # Initialize an empty array of child associations for each parent node n.associations[:children] = [] # Populate identity map of nodes id_map[n.pk] = n end # Doesn't cause an infinite loop, because the :eager_loader is not called # if no records are returned. Exclude id = parent_id to avoid infinite loop # if the root node is one of the returned records and it has parent_id = id # instead of parent_id = NULL.
Node.filter(:parent_id=>id_map.keys.sort).exclude(:id=>:parent_id).eager(:descendants).all do |node| # Get the parent from the identity map parent = id_map[node.parent_id] # Set the child's parent association to the parent node.associations[:parent] = parent # Add the child association to the array of children in the parent parent.associations[:children] << node end end) end Node.insert(:parent_id=>1) Node.insert(:parent_id=>1) Node.insert(:parent_id=>1) Node.insert(:parent_id=>2) Node.insert(:parent_id=>4) Node.insert(:parent_id=>5) Node.insert(:parent_id=>6) end after(:all) do DB.drop_table :nodes Object.send(:remove_const, :Node) end it "#descendants should get all descendants in one call" do nodes = Node.filter(:id=>1).eager(:descendants).all nodes.length.should == 1 node = nodes.first node.pk.should == 1 node.children.length.should == 2 node.children.collect{|x| x.pk}.sort.should == [2, 3] node.children.collect{|x| x.parent}.should == [node, node] node = nodes.first.children.find{|x| x.pk == 2} node.children.length.should == 1 node.children.first.pk.should == 4 node.children.first.parent.should == node node = node.children.first node.children.length.should == 1 node.children.first.pk.should == 5 node.children.first.parent.should == node node = node.children.first node.children.length.should == 1 node.children.first.pk.should == 6 node.children.first.parent.should == node node = node.children.first node.children.length.should == 1 node.children.first.pk.should == 7 node.children.first.parent.should == node end it "#ancestors should get all ancestors in one call" do nodes = Node.filter(:id=>[7,3]).order(:id).eager(:ancestors).all nodes.length.should == 2 nodes.collect{|x| x.pk}.should == [3, 7] nodes.first.parent.pk.should == 1 nodes.first.parent.parent.should == nil node = nodes.last node.parent.pk.should == 6 node = node.parent node.parent.pk.should == 5 node = node.parent node.parent.pk.should == 4 node = node.parent node.parent.pk.should == 2 node = node.parent node.parent.pk.should == 1 node.parent.parent.should == nil end end describe "Association Extensions" do before do module ::FindOrCreate def find_or_create(vals) first(vals) || model.create(vals.merge(:author_id=>model_object.pk)) end def find_or_create_by_name(name) first(:name=>name) || model.create(:name=>name, :author_id=>model_object.pk) end end DB.instance_variable_set(:@schemas, {}) DB.create_table!(:authors) do primary_key :id end class ::Author < Sequel::Model one_to_many :authorships, :extend=>FindOrCreate end DB.create_table!(:authorships) do primary_key :id foreign_key :author_id, :authors String :name end class ::Authorship < Sequel::Model many_to_one :author end @author = Author.create end after do DB.drop_table :authorships, :authors Object.send(:remove_const, :Author) Object.send(:remove_const, :Authorship) end it "should allow methods to be called on the dataset method" do Authorship.count.should == 0 authorship = @author.authorships_dataset.find_or_create_by_name('Bob') Authorship.count.should == 1 Authorship.first.should == authorship authorship.name.should == 'Bob' authorship.author_id.should == @author.id @author.authorships_dataset.find_or_create_by_name('Bob').should == authorship Authorship.count.should == 1 authorship2 = @author.authorships_dataset.find_or_create(:name=>'Jim') Authorship.count.should == 2 Authorship.order(:name).map(:name).should == ['Bob', 'Jim'] authorship2.name.should == 'Jim' authorship2.author_id.should == @author.id @author.authorships_dataset.find_or_create(:name=>'Jim').should 
== authorship2 end end describe "has_many :through has_many and has_one :through belongs_to" do before(:all) do DB.instance_variable_set(:@schemas, {}) DB.create_table!(:firms) do primary_key :id end class ::Firm < Sequel::Model one_to_many :clients one_to_many :invoices, :read_only=>true, \ :dataset=>proc{Invoice.eager_graph(:client).filter(:client__firm_id=>pk)}, \ :after_load=>(proc do |firm, invs| invs.each do |inv| inv.client.associations[:firm] = inv.associations[:firm] = firm end end), \ :eager_loader=>(proc do |eo| id_map = eo[:id_map] eo[:rows].each{|firm| firm.associations[:invoices] = []} Invoice.eager_graph(:client).filter(:client__firm_id=>id_map.keys).all do |inv| id_map[inv.client.firm_id].each do |firm| firm.associations[:invoices] << inv end end end) end DB.create_table!(:clients) do primary_key :id foreign_key :firm_id, :firms end class ::Client < Sequel::Model many_to_one :firm one_to_many :invoices end DB.create_table!(:invoices) do primary_key :id foreign_key :client_id, :clients end class ::Invoice < Sequel::Model many_to_one :client many_to_one :firm, :key=>nil, :read_only=>true, \ :dataset=>proc{Firm.eager_graph(:clients).filter(:clients__id=>client_id)}, \ :after_load=>(proc do |inv, firm| # Delete the cached associations from firm, because it only has the # client with this invoice, instead of all clients of the firm if c = firm.associations.delete(:clients) firm.associations[:invoice_client] = c.first end inv.associations[:client] ||= firm.associations[:invoice_client] end), \ :eager_loader=>(proc do |eo| id_map = {} eo[:rows].each do |inv| inv.associations[:firm] = nil (id_map[inv.client_id] ||= []) << inv end Firm.eager_graph(:clients).filter(:clients__id=>id_map.keys).all do |firm| # Delete the cached associations from firm, because it only has the # clients related the invoices being eagerly loaded, instead of all # clients of the firm. 
firm.associations[:clients].each do |client| id_map[client.pk].each do |inv| inv.associations[:firm] = firm inv.associations[:client] = client end end end end) end @firm1 = Firm.create @firm2 = Firm.create @client1 = Client.create(:firm => @firm1) @client2 = Client.create(:firm => @firm1) @client3 = Client.create(:firm => @firm2) @invoice1 = Invoice.create(:client => @client1) @invoice2 = Invoice.create(:client => @client1) @invoice3 = Invoice.create(:client => @client2) @invoice4 = Invoice.create(:client => @client3) @invoice5 = Invoice.create(:client => @client3) end after(:all) do DB.drop_table :invoices, :clients, :firms Object.send(:remove_const, :Firm) Object.send(:remove_const, :Client) Object.send(:remove_const, :Invoice) end it "should return has_many :through has_many records for a single object" do invs = @firm1.invoices.sort_by{|x| x.pk} invs.should == [@invoice1, @invoice2, @invoice3] invs[0].client.should == @client1 invs[1].client.should == @client1 invs[2].client.should == @client2 invs.collect{|i| i.firm}.should == [@firm1, @firm1, @firm1] invs.collect{|i| i.client.firm}.should == [@firm1, @firm1, @firm1] end it "should eagerly load has_many :through has_many records for multiple objects" do firms = Firm.order(:id).eager(:invoices).all firms.should == [@firm1, @firm2] firm1, firm2 = firms invs1 = firm1.invoices.sort_by{|x| x.pk} invs2 = firm2.invoices.sort_by{|x| x.pk} invs1.should == [@invoice1, @invoice2, @invoice3] invs2.should == [@invoice4, @invoice5] invs1[0].client.should == @client1 invs1[1].client.should == @client1 invs1[2].client.should == @client2 invs2[0].client.should == @client3 invs2[1].client.should == @client3 invs1.collect{|i| i.firm}.should == [@firm1, @firm1, @firm1] invs2.collect{|i| i.firm}.should == [@firm2, @firm2] invs1.collect{|i| i.client.firm}.should == [@firm1, @firm1, @firm1] invs2.collect{|i| i.client.firm}.should == [@firm2, @firm2] end it "should return has_one :through belongs_to records for a single object" do firm = @invoice1.firm firm.should == @firm1 @invoice1.client.should == @client1 @invoice1.client.firm.should == @firm1 firm.associations[:clients].should == nil end it "should eagerly load has_one :through belongs_to records for multiple objects" do invs = Invoice.order(:id).eager(:firm).all invs.should == [@invoice1, @invoice2, @invoice3, @invoice4, @invoice5] invs[0].firm.should == @firm1 invs[0].client.should == @client1 invs[0].client.firm.should == @firm1 invs[0].firm.associations[:clients].should == nil invs[1].firm.should == @firm1 invs[1].client.should == @client1 invs[1].client.firm.should == @firm1 invs[1].firm.associations[:clients].should == nil invs[2].firm.should == @firm1 invs[2].client.should == @client2 invs[2].client.firm.should == @firm1 invs[2].firm.associations[:clients].should == nil invs[3].firm.should == @firm2 invs[3].client.should == @client3 invs[3].client.firm.should == @firm2 invs[3].firm.associations[:clients].should == nil invs[4].firm.should == @firm2 invs[4].client.should == @client3 invs[4].client.firm.should == @firm2 invs[4].firm.associations[:clients].should == nil end end describe "Polymorphic Associations" do before(:all) do DB.instance_variable_set(:@schemas, {}) DB.create_table!(:assets) do primary_key :id Integer :attachable_id String :attachable_type end class ::Asset < Sequel::Model m = method(:constantize) many_to_one :attachable, :reciprocal=>:assets, :setter=>(proc do |attachable| self[:attachable_id] = (attachable.pk if attachable) self[:attachable_type] = (attachable.class.name if 
attachable) end), :dataset=>(proc do klass = m.call(attachable_type) klass.where(klass.primary_key=>attachable_id) end), :eager_loader=>(proc do |eo| id_map = {} eo[:rows].each do |asset| asset.associations[:attachable] = nil ((id_map[asset.attachable_type] ||= {})[asset.attachable_id] ||= []) << asset end id_map.each do |klass_name, idmap| klass = m.call(klass_name) klass.where(klass.primary_key=>idmap.keys).all do |attach| idmap[attach.pk].each do |asset| asset.associations[:attachable] = attach end end end end) end DB.create_table!(:posts) do primary_key :id end class ::Post < Sequel::Model one_to_many :assets, :key=>:attachable_id, :reciprocal=>:attachable, :conditions=>{:attachable_type=>'Post'}, :adder=>proc{|asset| asset.update(:attachable_id=>pk, :attachable_type=>'Post')}, :remover=>proc{|asset| asset.update(:attachable_id=>nil, :attachable_type=>nil)}, :clearer=>proc{assets_dataset.update(:attachable_id=>nil, :attachable_type=>nil)} end DB.create_table!(:notes) do primary_key :id end class ::Note < Sequel::Model one_to_many :assets, :key=>:attachable_id, :reciprocal=>:attachable, :conditions=>{:attachable_type=>'Note'}, :adder=>proc{|asset| asset.update(:attachable_id=>pk, :attachable_type=>'Note')}, :remover=>proc{|asset| asset.update(:attachable_id=>nil, :attachable_type=>nil)}, :clearer=>proc{assets_dataset.update(:attachable_id=>nil, :attachable_type=>nil)} end end before do [:assets, :posts, :notes].each{|t| DB[t].delete} @post = Post.create Note.create @note = Note.create @asset1 = Asset.create(:attachable=>@post) @asset2 = Asset.create(:attachable=>@note) @asset1.associations.clear @asset2.associations.clear end after(:all) do DB.drop_table :assets, :posts, :notes Object.send(:remove_const, :Asset) Object.send(:remove_const, :Post) Object.send(:remove_const, :Note) end it "should load the correct associated object for a single object" do @asset1.attachable.should == @post @asset2.attachable.should == @note end it "should eagerly load the correct associated object for a group of objects" do assets = Asset.order(:id).eager(:attachable).all assets.should == [@asset1, @asset2] assets[0].attachable.should == @post assets[1].attachable.should == @note end it "should set items correctly" do @asset1.attachable = @note @asset2.attachable = @post @asset1.attachable.should == @note @asset1.attachable_id.should == @note.pk @asset1.attachable_type.should == 'Note' @asset2.attachable.should == @post @asset2.attachable_id.should == @post.pk @asset2.attachable_type.should == 'Post' @asset1.attachable = nil @asset1.attachable.should == nil @asset1.attachable_id.should == nil @asset1.attachable_type.should == nil end it "should add items correctly" do @post.assets.should == [@asset1] @post.add_asset(@asset2) @post.assets.should == [@asset1, @asset2] @asset2.attachable.should == @post @asset2.attachable_id.should == @post.pk @asset2.attachable_type.should == 'Post' end it "should remove items correctly" do @note.assets.should == [@asset2] @note.remove_asset(@asset2) @note.assets.should == [] @asset2.attachable.should == nil @asset2.attachable_id.should == nil @asset2.attachable_type.should == nil end it "should remove all items correctly" do @post.remove_all_assets @note.remove_all_assets @asset1.reload.attachable.should == nil @asset2.reload.attachable.should == nil end end describe "many_to_one/one_to_many not referencing primary key" do before(:all) do DB.instance_variable_set(:@schemas, {}) DB.create_table!(:clients) do primary_key :id String :name end class ::Client < Sequel::Model 
one_to_many :invoices, :reciprocal=>:client, :adder=>(proc do |invoice| invoice.client_name = name invoice.save end), :remover=>(proc do |invoice| invoice.client_name = nil invoice.save end), :clearer=>proc{invoices_dataset.update(:client_name=>nil)}, :dataset=>proc{Invoice.filter(:client_name=>name)}, :eager_loader=>(proc do |eo| id_map = {} eo[:rows].each do |client| id_map[client.name] = client client.associations[:invoices] = [] end Invoice.filter(:client_name=>id_map.keys.sort).all do |inv| inv.associations[:client] = client = id_map[inv.client_name] client.associations[:invoices] << inv end end) end DB.create_table!(:invoices) do primary_key :id String :client_name end class ::Invoice < Sequel::Model many_to_one :client, :key=>:client_name, :setter=>proc{|client| self.client_name = (client.name if client)}, :dataset=>proc{Client.filter(:name=>client_name)}, :eager_loader=>(proc do |eo| id_map = eo[:id_map] eo[:rows].each{|inv| inv.associations[:client] = nil} Client.filter(:name=>id_map.keys).all do |client| id_map[client.name].each{|inv| inv.associations[:client] = client} end end) end end before do Client.dataset.delete Invoice.dataset.delete @client1 = Client.create(:name=>'X') @client2 = Client.create(:name=>'Y') @invoice1 = Invoice.create(:client_name=>'X') @invoice2 = Invoice.create(:client_name=>'X') end after(:all) do DB.drop_table :invoices, :clients Object.send(:remove_const, :Client) Object.send(:remove_const, :Invoice) end it "should load all associated one_to_many objects for a single object" do invs = @client1.invoices invs.should == [@invoice1, @invoice2] invs[0].client.should == @client1 invs[1].client.should == @client1 end it "should load the associated many_to_one object for a single object" do client = @invoice1.client client.should == @client1 end it "should eagerly load all associated one_to_many objects for a group of objects" do clients = Client.order(:id).eager(:invoices).all clients.should == [@client1, @client2] clients[1].invoices.should == [] invs = clients[0].invoices.sort_by{|x| x.pk} invs.should == [@invoice1, @invoice2] invs[0].client.should == @client1 invs[1].client.should == @client1 end it "should eagerly load the associated many_to_one object for a group of objects" do invoices = Invoice.order(:id).eager(:client).all invoices.should == [@invoice1, @invoice2] invoices[0].client.should == @client1 invoices[1].client.should == @client1 end it "should set the associated object correctly" do @invoice1.client = @client2 @invoice1.client.should == @client2 @invoice1.client_name.should == 'Y' @invoice1.client = nil @invoice1.client_name.should == nil end it "should add the associated object correctly" do @client2.invoices.should == [] @client2.add_invoice(@invoice1) @client2.invoices.should == [@invoice1] @invoice1.client_name.should == 'Y' @invoice1.client = nil @invoice1.client_name.should == nil end it "should remove the associated object correctly" do invs = @client1.invoices.sort_by{|x| x.pk} invs.should == [@invoice1, @invoice2] @client1.remove_invoice(@invoice1) @client1.invoices.should == [@invoice2] @invoice1.client_name.should == nil @invoice1.client.should == nil end it "should remove all associated objects correctly" do @client1.remove_all_invoices @invoice1.refresh.client.should == nil @invoice1.client_name.should == nil @invoice2.refresh.client.should == nil @invoice2.client_name.should == nil end end describe "statistics associations" do before(:all) do DB.create_table!(:projects) do primary_key :id String :name end class ::Project < 
Sequel::Model many_to_one :ticket_hours, :read_only=>true, :key=>:id, :class=>:Ticket, :dataset=>proc{Ticket.filter(:project_id=>id).select{sum(hours).as(hours)}}, :eager_loader=>(proc do |eo| eo[:rows].each{|p| p.associations[:ticket_hours] = nil} Ticket.filter(:project_id=>eo[:id_map].keys). select_group(:project_id). select_append{sum(hours).as(hours)}. all do |t| p = eo[:id_map][t.values.delete(:project_id)].first p.associations[:ticket_hours] = t end end) def ticket_hours if s = super s[:hours] end end end DB.create_table!(:tickets) do primary_key :id foreign_key :project_id, :projects Integer :hours end class ::Ticket < Sequel::Model many_to_one :project end @project1 = Project.create(:name=>'X') @project2 = Project.create(:name=>'Y') @ticket1 = Ticket.create(:project=>@project1, :hours=>1) @ticket2 = Ticket.create(:project=>@project1, :hours=>10) @ticket3 = Ticket.create(:project=>@project2, :hours=>2) @ticket4 = Ticket.create(:project=>@project2, :hours=>20) end after(:all) do DB.drop_table :tickets, :projects Object.send(:remove_const, :Project) Object.send(:remove_const, :Ticket) end it "should give the correct sum of ticket hours for each project" do @project1.ticket_hours.to_i.should == 11 @project2.ticket_hours.to_i.should == 22 end it "should give the correct sum of ticket hours for each project when eager loading" do p1, p2 = Project.order(:name).eager(:ticket_hours).all p1.ticket_hours.to_i.should == 11 p2.ticket_hours.to_i.should == 22 end end describe "one to one associations" do before(:all) do DB.create_table!(:books) do primary_key :id end class ::Book < Sequel::Model one_to_one :first_page, :class=>:Page, :conditions=>{:page_number=>1}, :reciprocal=>nil one_to_one :second_page, :class=>:Page, :conditions=>{:page_number=>2}, :reciprocal=>nil end DB.create_table!(:pages) do primary_key :id foreign_key :book_id, :books Integer :page_number end class ::Page < Sequel::Model many_to_one :book, :reciprocal=>nil end @book1 = Book.create @book2 = Book.create @page1 = Page.create(:book=>@book1, :page_number=>1) @page2 = Page.create(:book=>@book1, :page_number=>2) @page3 = Page.create(:book=>@book2, :page_number=>1) @page4 = Page.create(:book=>@book2, :page_number=>2) end after(:all) do DB.drop_table :pages, :books Object.send(:remove_const, :Book) Object.send(:remove_const, :Page) end it "should be eager loadable" do bk1, bk2 = Book.filter(:books__id=>[1,2]).eager(:first_page).all bk1.first_page.should == @page1 bk2.first_page.should == @page3 end it "should be eager graphable" do bk1, bk2 = Book.filter(:books__id=>[1,2]).eager_graph(:first_page).all bk1.first_page.should == @page1 bk2.first_page.should == @page3 end it "should be eager graphable two at once" do bk1, bk2 = Book.filter(:books__id=>[1,2]).eager_graph(:first_page, :second_page).all bk1.first_page.should == @page1 bk1.second_page.should == @page2 bk2.first_page.should == @page3 bk2.second_page.should == @page4 end end 
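# A minimal sketch of the custom :eager_loader contract exercised throughout
# this file (hypothetical Thing model): the loader receives a hash with :rows
# (the parent instances) and, unless :eager_loader_key=>nil is used, :id_map
# (key values mapped to arrays of parents), and it must populate
# associations[...] on every row, even when nothing is found:
#
#   many_to_one :thing, :eager_loader=>(proc do |eo|
#     eo[:rows].each{|o| o.associations[:thing] = nil}
#     Thing.where(:id=>eo[:id_map].keys).all do |t|
#       eo[:id_map][t.id].each{|o| o.associations[:thing] = t}
#     end
#   end)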
ruby-sequel-4.1.1/spec/integration/migrator_test.rb
require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper.rb') Sequel.extension :migration describe Sequel::Migrator do before do @db = DB @m = Sequel::Migrator end after do @db.drop_table?(:schema_info, :schema_migrations, :sm1111, :sm1122, :sm2222, :sm2233, :sm3333, :sm11111, :sm22222) end specify "should be able to migrate up and down all the way successfully" do @dir = 'spec/files/integer_migrations' @m.apply(@db, @dir) [:schema_info, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_true} @db[:schema_info].get(:version).should == 3 @m.apply(@db, @dir, 0) [:sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_false} @db[:schema_info].get(:version).should == 0 end specify "should be able to migrate up and down to specific versions successfully" do @dir = 'spec/files/integer_migrations' @m.apply(@db, @dir, 2) [:schema_info, :sm1111, :sm2222].each{|n| @db.table_exists?(n).should be_true} @db.table_exists?(:sm3333).should be_false @db[:schema_info].get(:version).should == 2 @m.apply(@db, @dir, 1) [:sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_false} @db.table_exists?(:sm1111).should be_true @db[:schema_info].get(:version).should == 1 end specify "should correctly set migration version to the last successful migration if the migration raises an error when migrating up" do @dir = 'spec/files/bad_up_migration' proc{@m.apply(@db, @dir)}.should raise_error [:schema_info, :sm11111].each{|n| @db.table_exists?(n).should be_true} @db.table_exists?(:sm22222).should be_false @db[:schema_info].get(:version).should == 1 @m.apply(@db, @dir, 0) [:sm11111, :sm22222].each{|n| @db.table_exists?(n).should be_false} @db[:schema_info].get(:version).should == 0 end specify "should correctly set migration version to the last successful migration if the migration raises an error when migrating down" do @dir = 'spec/files/bad_down_migration' @m.apply(@db, @dir) [:schema_info, :sm11111, :sm22222].each{|n| @db.table_exists?(n).should be_true} @db[:schema_info].get(:version).should == 2 proc{@m.apply(@db, @dir, 0)}.should raise_error [:sm22222].each{|n| @db.table_exists?(n).should be_false} @db.table_exists?(:sm11111).should be_true @db[:schema_info].get(:version).should == 1 end specify "should handle migrating up or down all the way with timestamped migrations" do @dir = 'spec/files/timestamped_migrations' @m.apply(@db, @dir) [:schema_migrations, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_true} @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253851_create_nodes.rb 1273253853_3_create_users.rb' @m.apply(@db, @dir, 0) [:sm1111, :sm2222, :sm3333].each{|n|
@db.table_exists?(n).should be_false} @db[:schema_migrations].select_order_map(:filename).should == [] end specify "should handle migrating up or down to specific timestamps with timestamped migrations" do @dir = 'spec/files/timestamped_migrations' @m.apply(@db, @dir, 1273253851) [:schema_migrations, :sm1111, :sm2222].each{|n| @db.table_exists?(n).should be_true} @db.table_exists?(:sm3333).should be_false @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253851_create_nodes.rb' @m.apply(@db, @dir, 1273253849) [:sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_false} @db.table_exists?(:sm1111).should be_true @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb' end specify "should apply all missing files when migrating up with timestamped migrations" do @dir = 'spec/files/timestamped_migrations' @m.apply(@db, @dir) @dir = 'spec/files/interleaved_timestamped_migrations' @m.apply(@db, @dir) [:schema_migrations, :sm1111, :sm1122, :sm2222, :sm2233, :sm3333].each{|n| @db.table_exists?(n).should be_true} @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253850_create_artists.rb 1273253851_create_nodes.rb 1273253852_create_albums.rb 1273253853_3_create_users.rb' end specify "should not apply down action to migrations where up action hasn't been applied" do @dir = 'spec/files/timestamped_migrations' @m.apply(@db, @dir) @dir = 'spec/files/interleaved_timestamped_migrations' @m.apply(@db, @dir, 0) [:sm1111, :sm1122, :sm2222, :sm2233, :sm3333].each{|n| @db.table_exists?(n).should be_false} @db[:schema_migrations].select_order_map(:filename).should == [] end specify "should handle updating to a specific timestamp when interleaving migrations with timestamps" do @dir = 'spec/files/timestamped_migrations' @m.apply(@db, @dir) @dir = 'spec/files/interleaved_timestamped_migrations' @m.apply(@db, @dir, 1273253851) [:schema_migrations, :sm1111, :sm1122, :sm2222].each{|n| @db.table_exists?(n).should be_true} [:sm2233, :sm3333].each{|n| @db.table_exists?(n).should be_false} @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253850_create_artists.rb 1273253851_create_nodes.rb' end specify "should correctly update schema_migrations table when an error occurs when migrating up or down using timestamped migrations" do @dir = 'spec/files/bad_timestamped_migrations' proc{@m.apply(@db, @dir)}.should raise_error [:schema_migrations, :sm1111, :sm2222].each{|n| @db.table_exists?(n).should be_true} @db.table_exists?(:sm3333).should be_false @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253851_create_nodes.rb' proc{@m.apply(@db, @dir, 0)}.should raise_error [:sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_false} @db.table_exists?(:sm1111).should be_true @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb' end specify "should handle multiple migrations with the same timestamp correctly" do @dir = 'spec/files/duplicate_timestamped_migrations' @m.apply(@db, @dir) [:schema_migrations, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_true} @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253853_create_nodes.rb 1273253853_create_users.rb' @m.apply(@db, @dir, 1273253853) [:sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_true} 
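# The integer and timestamped migrators track state differently: the integer
# migrator stores a single version number in schema_info, while the
# timestamped migrator records each applied filename in schema_migrations,
# which is what allows it to apply interleaved, out-of-order migrations as
# asserted here.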
@db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253853_create_nodes.rb 1273253853_create_users.rb' @m.apply(@db, @dir, 1273253849) [:sm1111].each{|n| @db.table_exists?(n).should be_true} [:sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_false} @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb' @m.apply(@db, @dir, 1273253848) [:sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_false} @db[:schema_migrations].select_order_map(:filename).should == [] end specify "should convert schema_info table to schema_migrations table" do @dir = 'spec/files/integer_migrations' @m.apply(@db, @dir) [:schema_info, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_true} [:schema_migrations, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should be_false} @dir = 'spec/files/convert_to_timestamp_migrations' @m.apply(@db, @dir) [:schema_info, :sm1111, :sm2222, :sm3333, :schema_migrations, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should be_true} @db[:schema_migrations].select_order_map(:filename).should == %w'001_create_sessions.rb 002_create_nodes.rb 003_3_create_users.rb 1273253850_create_artists.rb 1273253852_create_albums.rb' @m.apply(@db, @dir, 4) [:schema_info, :schema_migrations, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_true} [:sm1122, :sm2233].each{|n| @db.table_exists?(n).should be_false} @db[:schema_migrations].select_order_map(:filename).should == %w'001_create_sessions.rb 002_create_nodes.rb 003_3_create_users.rb' @m.apply(@db, @dir, 0) [:schema_info, :schema_migrations].each{|n| @db.table_exists?(n).should be_true} [:sm1111, :sm2222, :sm3333, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should be_false} @db[:schema_migrations].select_order_map(:filename).should == [] end specify "should handle unapplied migrations when migrating schema_info table to schema_migrations table" do @dir = 'spec/files/integer_migrations' @m.apply(@db, @dir, 2) [:schema_info, :sm1111, :sm2222].each{|n| @db.table_exists?(n).should be_true} [:schema_migrations, :sm3333, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should be_false} @dir = 'spec/files/convert_to_timestamp_migrations' @m.apply(@db, @dir, 1273253850) [:schema_info, :sm1111, :sm2222, :sm3333, :schema_migrations, :sm1122].each{|n| @db.table_exists?(n).should be_true} [:sm2233].each{|n| @db.table_exists?(n).should be_false} @db[:schema_migrations].select_order_map(:filename).should == %w'001_create_sessions.rb 002_create_nodes.rb 003_3_create_users.rb 1273253850_create_artists.rb' end specify "should handle unapplied migrations when migrating schema_info table to schema_migrations table and target is less than last integer migration version" do @dir = 'spec/files/integer_migrations' @m.apply(@db, @dir, 1) [:schema_info, :sm1111].each{|n| @db.table_exists?(n).should be_true} [:schema_migrations, :sm2222, :sm3333, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should be_false} @dir = 'spec/files/convert_to_timestamp_migrations' @m.apply(@db, @dir, 2) [:schema_info, :sm1111, :sm2222, :schema_migrations].each{|n| @db.table_exists?(n).should be_true} [:sm3333, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should be_false} @db[:schema_migrations].select_order_map(:filename).should == %w'001_create_sessions.rb 002_create_nodes.rb' @m.apply(@db, @dir) [:schema_info, :sm1111, :sm2222, :schema_migrations, :sm3333, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should be_true} 
@db[:schema_migrations].select_order_map(:filename).should == %w'001_create_sessions.rb 002_create_nodes.rb 003_3_create_users.rb 1273253850_create_artists.rb 1273253852_create_albums.rb' end specify "should handle reversible migrations" do @dir = 'spec/files/reversible_migrations' @db.drop_table?(:a, :b) @m.apply(@db, @dir, 1) [:schema_info, :a].each{|n| @db.table_exists?(n).should be_true} [:schema_migrations, :b].each{|n| @db.table_exists?(n).should be_false} @db[:a].columns.should == [:a] @m.apply(@db, @dir, 2) [:schema_info, :a].each{|n| @db.table_exists?(n).should be_true} [:schema_migrations, :b].each{|n| @db.table_exists?(n).should be_false} @db[:a].columns.should == [:a, :b] @m.apply(@db, @dir, 3) [:schema_info, :a].each{|n| @db.table_exists?(n).should be_true} [:schema_migrations, :b].each{|n| @db.table_exists?(n).should be_false} @db[:a].columns.should == [:a, :c] @m.apply(@db, @dir, 4) [:schema_info, :b].each{|n| @db.table_exists?(n).should be_true} [:schema_migrations, :a].each{|n| @db.table_exists?(n).should be_false} @db[:b].columns.should == [:a, :c] @m.apply(@db, @dir, 5) [:schema_info, :b].each{|n| @db.table_exists?(n).should be_true} [:schema_migrations, :a].each{|n| @db.table_exists?(n).should be_false} @db[:b].columns.should == [:a, :c, :e] @m.apply(@db, @dir, 4) [:schema_info, :b].each{|n| @db.table_exists?(n).should be_true} [:schema_migrations, :a].each{|n| @db.table_exists?(n).should be_false} @db[:b].columns.should == [:a, :c] @m.apply(@db, @dir, 3) [:schema_info, :a].each{|n| @db.table_exists?(n).should be_true} [:schema_migrations, :b].each{|n| @db.table_exists?(n).should be_false} @db[:a].columns.should == [:a, :c] @m.apply(@db, @dir, 2) [:schema_info, :a].each{|n| @db.table_exists?(n).should be_true} [:schema_migrations, :b].each{|n| @db.table_exists?(n).should be_false} @db[:a].columns.should == [:a, :b] @m.apply(@db, @dir, 1) [:schema_info, :a].each{|n| @db.table_exists?(n).should be_true} [:schema_migrations, :b].each{|n| @db.table_exists?(n).should be_false} @db[:a].columns.should == [:a] @m.apply(@db, @dir, 0) [:schema_info].each{|n| @db.table_exists?(n).should be_true} [:schema_migrations, :a, :b].each{|n| @db.table_exists?(n).should be_false} end end
ruby-sequel-4.1.1/spec/integration/model_test.rb
require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper.rb') describe "Sequel::Model basic support" do before do @db = DB @db.create_table!(:items, :engine=>:InnoDB) do primary_key :id String :name end class ::Item < Sequel::Model(@db) end end after do @db.drop_table?(:items) Object.send(:remove_const, :Item) end specify ".find should return first matching item" do Item.all.should == [] Item.find(:name=>'J').should == nil Item.create(:name=>'J') Item.find(:name=>'J').should == Item.load(:id=>1, :name=>'J') end specify ".find_or_create should return first matching item, or create it if it doesn't exist" do Item.all.should == [] Item.find_or_create(:name=>'J').should == Item.load(:id=>1, :name=>'J') Item.all.should == [Item.load(:id=>1, :name=>'J')]
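# Model.load instantiates an instance from an existing values hash without
# querying the database, so these assertions compare find_or_create's result
# against the expected row without issuing an extra query.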
  specify "should not raise an error if the implied database table doesn't exist" do
    class ::Item::Thing < Sequel::Model(@db)
      set_dataset :items
    end
    Item.create(:name=>'J')
    Item::Thing.first.should == Item::Thing.load(:id=>1, :name=>'J')
  end

  specify "should create accessors for all table columns even if all dataset columns aren't selected" do
    c = Class.new(Sequel::Model(@db[:items].select(:id)))
    o = c.new
    o.name = 'A'
    o.save.should == c.load(:id=>1)
    c.select_map(:name).should == ['A']
  end

  specify "should work correctly when a dataset restricts the columns it selects" do
    class ::Item::Thing < Sequel::Model(@db[:items].select(:name))
    end
    Item.create(:name=>'J')
    Item::Thing.first.should == Item::Thing.load(:name=>'J')
  end

  specify "#delete should delete items correctly" do
    i = Item.create(:name=>'J')
    Item.count.should == 1
    i.delete
    Item.count.should == 0
  end

  specify "#save should return nil if raise_on_save_failure is false and save isn't successful" do
    i = Item.new(:name=>'J')
    i.use_transactions = true
    def i.after_save
      raise Sequel::Rollback
    end
    i.save.should be_nil
  end

  specify "should respect after_commit, after_rollback, after_destroy_commit, and after_destroy_rollback hooks" do
    i = Item.create(:name=>'J')
    i.use_transactions = true
    def i.hooks
      @hooks
    end
    def i.rb=(x)
      @hooks = []
      @rb = x
    end
    def i.after_save
      @hooks << :as
      raise Sequel::Rollback if @rb
    end
    def i.after_destroy
      @hooks << :ad
      raise Sequel::Rollback if @rb
    end
    def i.after_commit
      @hooks << :ac
    end
    def i.after_rollback
      @hooks << :ar
    end
    def i.after_destroy_commit
      @hooks << :adc
    end
    def i.after_destroy_rollback
      @hooks << :adr
    end

    i.name = 'K'
    i.rb = true
    i.save.should be_nil
    i.reload.name.should == 'J'
    i.hooks.should == [:as, :ar]

    i.rb = true
    i.destroy.should be_nil
    i.exists?.should be_true
    i.hooks.should == [:ad, :adr]

    i.name = 'K'
    i.rb = false
    i.save.should_not be_nil
    i.reload.name.should == 'K'
    i.hooks.should == [:as, :ac]

    i.rb = false
    i.destroy.should_not be_nil
    i.exists?.should be_false
    i.hooks.should == [:ad, :adc]
  end
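  # Note that the spec above forces a rollback by raising Sequel::Rollback
  # inside after_save/after_destroy, which is what makes the *_rollback
  # hooks fire in place of the *_commit ones.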
  specify "#exists? should return whether the item is still in the database" do
    i = Item.create(:name=>'J')
    i.exists?.should == true
    Item.dataset.delete
    i.exists?.should == false
  end

  specify "#save should only update specified columns when saving" do
    @db.create_table!(:items) do
      primary_key :id
      String :name
      Integer :num
    end
    Item.dataset = Item.dataset
    i = Item.create(:name=>'J', :num=>1)
    Item.all.should == [Item.load(:id=>1, :name=>'J', :num=>1)]
    i.set(:name=>'K', :num=>2)
    i.save(:columns=>:name)
    Item.all.should == [Item.load(:id=>1, :name=>'K', :num=>1)]
    i.set(:name=>'L')
    i.save(:columns=>:num)
    Item.all.should == [Item.load(:id=>1, :name=>'K', :num=>2)]
  end

  specify "#save should check that only a single row is modified, unless require_modification is false" do
    i = Item.create(:name=>'a')
    i.require_modification = true
    i.delete
    proc{i.save}.should raise_error(Sequel::NoExistingObject)
    proc{i.delete}.should raise_error(Sequel::NoExistingObject)

    i.require_modification = false
    i.save
    i.delete
  end

  specify ".to_hash should return a hash keyed on primary key if no argument provided" do
    Item.create(:name=>'J')
    Item.to_hash.should == {1=>Item.load(:id=>1, :name=>'J')}
  end

  specify ".to_hash should return a hash keyed on argument if one argument provided" do
    Item.create(:name=>'J')
    Item.to_hash(:name).should == {'J'=>Item.load(:id=>1, :name=>'J')}
  end

  specify "should be marshallable before and after saving if marshallable! is called" do
    i = Item.new(:name=>'J')
    s = nil
    i2 = nil
    i.marshallable!
    proc{s = Marshal.dump(i)}.should_not raise_error
    proc{i2 = Marshal.load(s)}.should_not raise_error
    i2.should == i

    i.save
    i.marshallable!
    proc{s = Marshal.dump(i)}.should_not raise_error
    proc{i2 = Marshal.load(s)}.should_not raise_error
    i2.should == i

    i.save
    i.marshallable!
    proc{s = Marshal.dump(i)}.should_not raise_error
    proc{i2 = Marshal.load(s)}.should_not raise_error
    i2.should == i
  end

  specify "#lock! should lock records" do
    Item.db.transaction do
      i = Item.create(:name=>'J')
      i.lock!
      i.update(:name=>'K')
    end
  end
end

describe "Sequel::Model with no existing table" do
  specify "should not raise an error when setting the dataset" do
    db = DB
    db.drop_table?(:items)
    proc{class ::Item < Sequel::Model(db); end; Object.send(:remove_const, :Item)}.should_not raise_error
    proc{c = Class.new(Sequel::Model); c.set_dataset(db[:items])}.should_not raise_error
  end
end

ruby-sequel-4.1.1/spec/integration/plugin_test.rb

require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper.rb')

# DB2 does not seem to support USING joins in every version; it seems to be a
# valid expression in DB2 iSeries UDB though.
unless !DB.dataset.supports_join_using?
|| Sequel.guarded?(:db2) describe "Class Table Inheritance Plugin" do before(:all) do @db = DB @db.instance_variable_set(:@schemas, {}) @db.drop_table?(:staff, :executives, :managers, :employees) @db.create_table(:employees) do primary_key :id String :name String :kind end @db.create_table(:managers) do foreign_key :id, :employees, :primary_key=>true Integer :num_staff end @db.create_table(:executives) do foreign_key :id, :managers, :primary_key=>true Integer :num_managers end @db.create_table(:staff) do foreign_key :id, :employees, :primary_key=>true foreign_key :manager_id, :managers end end before do [:staff, :executives, :managers, :employees].each{|t| @db[t].delete} class ::Employee < Sequel::Model(@db) plugin :class_table_inheritance, :key=>:kind, :table_map=>{:Staff=>:staff} end class ::Manager < Employee one_to_many :staff_members, :class=>:Staff end class ::Executive < Manager end class ::Staff < Employee many_to_one :manager, :qualify=>false end @i1 =@db[:employees].insert(:name=>'E', :kind=>'Employee') @i2 = @db[:employees].insert(:name=>'S', :kind=>'Staff') @i3 = @db[:employees].insert(:name=>'M', :kind=>'Manager') @i4 = @db[:employees].insert(:name=>'Ex', :kind=>'Executive') @db[:managers].insert(:id=>@i3, :num_staff=>7) @db[:managers].insert(:id=>@i4, :num_staff=>5) @db[:executives].insert(:id=>@i4, :num_managers=>6) @db[:staff].insert(:id=>@i2, :manager_id=>@i4) end after do [:Executive, :Manager, :Staff, :Employee].each{|s| Object.send(:remove_const, s)} end after(:all) do @db.drop_table? :staff, :executives, :managers, :employees end specify "should return rows as subclass instances" do Employee.order(:id).all.should == [ Employee.load(:id=>@i1, :name=>'E', :kind=>'Employee'), Staff.load(:id=>@i2, :name=>'S', :kind=>'Staff'), Manager.load(:id=>@i3, :name=>'M', :kind=>'Manager'), Executive.load(:id=>@i4, :name=>'Ex', :kind=>'Executive') ] end specify "should lazily load columns in subclass tables" do a = Employee.order(:id).all a[1][:manager_id].should == nil a[1].manager_id.should == @i4 end specify "should include schema for columns for tables for ancestor classes" do Employee.db_schema.keys.sort_by{|x| x.to_s}.should == [:id, :kind, :name] Staff.db_schema.keys.sort_by{|x| x.to_s}.should == [:id, :kind, :manager_id, :name] Manager.db_schema.keys.sort_by{|x| x.to_s}.should == [:id, :kind, :name, :num_staff] Executive.db_schema.keys.sort_by{|x| x.to_s}.should == [:id, :kind, :name, :num_managers, :num_staff] end specify "should include columns for tables for ancestor classes" do Employee.columns.should == [:id, :name, :kind] Staff.columns.should == [:id, :name, :kind, :manager_id] Manager.columns.should == [:id, :name, :kind, :num_staff] Executive.columns.should == [:id, :name, :kind, :num_staff, :num_managers] end specify "should delete rows from all tables" do e = Executive.first i = e.id e.staff_members_dataset.destroy e.destroy @db[:executives][:id=>i].should == nil @db[:managers][:id=>i].should == nil @db[:employees][:id=>i].should == nil end specify "should handle associations only defined in subclasses" do Employee.filter(:id=>@i2).all.first.manager.id.should == @i4 end cspecify "should insert rows into all tables", [proc{|db| db.sqlite_version < 30709}, :sqlite] do e = Executive.create(:name=>'Ex2', :num_managers=>8, :num_staff=>9) i = e.id @db[:employees][:id=>i].should == {:id=>i, :name=>'Ex2', :kind=>'Executive'} @db[:managers][:id=>i].should == {:id=>i, :num_staff=>9} @db[:executives][:id=>i].should == {:id=>i, :num_managers=>8} end specify "should update rows 
in all tables" do Executive.first.update(:name=>'Ex2', :num_managers=>8, :num_staff=>9) @db[:employees][:id=>@i4].should == {:id=>@i4, :name=>'Ex2', :kind=>'Executive'} @db[:managers][:id=>@i4].should == {:id=>@i4, :num_staff=>9} @db[:executives][:id=>@i4].should == {:id=>@i4, :num_managers=>8} end specify "should handle many_to_one relationships" do m = Staff.first.manager m.should == Manager[@i4] m.should be_a_kind_of(Executive) end specify "should handle eagerly loading many_to_one relationships" do Staff.limit(1).eager(:manager).all.map{|x| x.manager}.should == [Manager[@i4]] end specify "should handle eagerly graphing many_to_one relationships" do ss = Staff.eager_graph(:manager).all ss.should == [Staff[@i2]] ss.map{|x| x.manager}.should == [Manager[@i4]] end specify "should handle one_to_many relationships" do Executive.first.staff_members.should == [Staff[@i2]] end specify "should handle eagerly loading one_to_many relationships" do Executive.limit(1).eager(:staff_members).first.staff_members.should == [Staff[@i2]] end cspecify "should handle eagerly graphing one_to_many relationships", [proc{|db| db.sqlite_version < 30709}, :sqlite] do es = Executive.limit(1).eager_graph(:staff_members).all es.should == [Executive[@i4]] es.map{|x| x.staff_members}.should == [[Staff[@i2]]] end end end describe "Many Through Many Plugin" do before(:all) do @db = DB @db.instance_variable_set(:@schemas, {}) @db.drop_table?(:albums_artists, :albums, :artists) @db.create_table(:albums) do primary_key :id String :name end @db.create_table(:artists) do primary_key :id String :name end @db.create_table(:albums_artists) do foreign_key :album_id, :albums foreign_key :artist_id, :artists end end before do [:albums_artists, :albums, :artists].each{|t| @db[t].delete} class ::Album < Sequel::Model(@db) many_to_many :artists end class ::Artist < Sequel::Model(@db) plugin :many_through_many end @artist1 = Artist.create(:name=>'1') @artist2 = Artist.create(:name=>'2') @artist3 = Artist.create(:name=>'3') @artist4 = Artist.create(:name=>'4') @album1 = Album.create(:name=>'A') @album1.add_artist(@artist1) @album1.add_artist(@artist2) @album2 = Album.create(:name=>'B') @album2.add_artist(@artist3) @album2.add_artist(@artist4) @album3 = Album.create(:name=>'C') @album3.add_artist(@artist2) @album3.add_artist(@artist3) @album4 = Album.create(:name=>'D') @album4.add_artist(@artist1) @album4.add_artist(@artist4) end after do [:Album, :Artist].each{|s| Object.send(:remove_const, s)} end after(:all) do @db.drop_table? 
:albums_artists, :albums, :artists end def self_join(c) c.join(Sequel.as(c.table_name, :b), Array(c.primary_key).zip(Array(c.primary_key))).select_all(c.table_name) end specify "should handle super simple case with 1 join table" do Artist.many_through_many :albums, [[:albums_artists, :artist_id, :album_id]] Artist[@artist1.id].albums.map{|x| x.name}.sort.should == %w'A D' Artist[@artist2.id].albums.map{|x| x.name}.sort.should == %w'A C' Artist[@artist3.id].albums.map{|x| x.name}.sort.should == %w'B C' Artist[@artist4.id].albums.map{|x| x.name}.sort.should == %w'B D' Artist.plugin :prepared_statements_associations Artist[@artist1.id].albums.map{|x| x.name}.sort.should == %w'A D' Artist[@artist2.id].albums.map{|x| x.name}.sort.should == %w'A C' Artist[@artist3.id].albums.map{|x| x.name}.sort.should == %w'B C' Artist[@artist4.id].albums.map{|x| x.name}.sort.should == %w'B D' Artist.filter(:id=>1).eager(:albums).all.map{|x| x.albums.map{|a| a.name}}.flatten.sort.should == %w'A D' Artist.filter(:id=>2).eager(:albums).all.map{|x| x.albums.map{|a| a.name}}.flatten.sort.should == %w'A C' Artist.filter(:id=>3).eager(:albums).all.map{|x| x.albums.map{|a| a.name}}.flatten.sort.should == %w'B C' Artist.filter(:id=>4).eager(:albums).all.map{|x| x.albums.map{|a| a.name}}.flatten.sort.should == %w'B D' Artist.filter(:artists__id=>1).eager_graph(:albums).all.map{|x| x.albums.map{|a| a.name}}.flatten.sort.should == %w'A D' Artist.filter(:artists__id=>2).eager_graph(:albums).all.map{|x| x.albums.map{|a| a.name}}.flatten.sort.should == %w'A C' Artist.filter(:artists__id=>3).eager_graph(:albums).all.map{|x| x.albums.map{|a| a.name}}.flatten.sort.should == %w'B C' Artist.filter(:artists__id=>4).eager_graph(:albums).all.map{|x| x.albums.map{|a| a.name}}.flatten.sort.should == %w'B D' Artist.filter(:albums=>@album1).all.map{|a| a.name}.sort.should == %w'1 2' Artist.filter(:albums=>@album2).all.map{|a| a.name}.sort.should == %w'3 4' Artist.filter(:albums=>@album3).all.map{|a| a.name}.sort.should == %w'2 3' Artist.filter(:albums=>@album4).all.map{|a| a.name}.sort.should == %w'1 4' Artist.exclude(:albums=>@album1).all.map{|a| a.name}.sort.should == %w'3 4' Artist.exclude(:albums=>@album2).all.map{|a| a.name}.sort.should == %w'1 2' Artist.exclude(:albums=>@album3).all.map{|a| a.name}.sort.should == %w'1 4' Artist.exclude(:albums=>@album4).all.map{|a| a.name}.sort.should == %w'2 3' Artist.filter(:albums=>[@album1, @album3]).all.map{|a| a.name}.sort.should == %w'1 2 3' Artist.filter(:albums=>[@album2, @album4]).all.map{|a| a.name}.sort.should == %w'1 3 4' Artist.exclude(:albums=>[@album1, @album3]).all.map{|a| a.name}.sort.should == %w'4' Artist.exclude(:albums=>[@album2, @album4]).all.map{|a| a.name}.sort.should == %w'2' Artist.filter(:albums=>Album.filter(:id=>[@album1.id, @album3.id])).all.map{|a| a.name}.sort.should == %w'1 2 3' Artist.exclude(:albums=>Album.filter(:id=>[@album1.id, @album3.id])).all.map{|a| a.name}.sort.should == %w'4' c = self_join(Artist) c.filter(:albums=>@album1).all.map{|a| a.name}.sort.should == %w'1 2' c.filter(:albums=>@album2).all.map{|a| a.name}.sort.should == %w'3 4' c.filter(:albums=>@album3).all.map{|a| a.name}.sort.should == %w'2 3' c.filter(:albums=>@album4).all.map{|a| a.name}.sort.should == %w'1 4' c.exclude(:albums=>@album1).all.map{|a| a.name}.sort.should == %w'3 4' c.exclude(:albums=>@album2).all.map{|a| a.name}.sort.should == %w'1 2' c.exclude(:albums=>@album3).all.map{|a| a.name}.sort.should == %w'1 4' c.exclude(:albums=>@album4).all.map{|a| a.name}.sort.should == %w'2 3' 
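    # The same association filters should behave identically on the
    # self-joined dataset, including with arrays of model objects and with
    # subselects: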
c.filter(:albums=>[@album1, @album3]).all.map{|a| a.name}.sort.should == %w'1 2 3' c.filter(:albums=>[@album2, @album4]).all.map{|a| a.name}.sort.should == %w'1 3 4' c.exclude(:albums=>[@album1, @album3]).all.map{|a| a.name}.sort.should == %w'4' c.exclude(:albums=>[@album2, @album4]).all.map{|a| a.name}.sort.should == %w'2' c.filter(:albums=>self_join(Album).filter(:albums__id=>[@album1.id, @album3.id])).all.map{|a| a.name}.sort.should == %w'1 2 3' c.exclude(:albums=>self_join(Album).filter(:albums__id=>[@album1.id, @album3.id])).all.map{|a| a.name}.sort.should == %w'4' end specify "should handle typical case with 3 join tables" do Artist.many_through_many :related_artists, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_artists, :album_id, :artist_id]], :class=>Artist, :distinct=>true Artist[@artist1.id].related_artists.map{|x| x.name}.sort.should == %w'1 2 4' Artist[@artist2.id].related_artists.map{|x| x.name}.sort.should == %w'1 2 3' Artist[@artist3.id].related_artists.map{|x| x.name}.sort.should == %w'2 3 4' Artist[@artist4.id].related_artists.map{|x| x.name}.sort.should == %w'1 3 4' Artist.plugin :prepared_statements_associations Artist[@artist1.id].related_artists.map{|x| x.name}.sort.should == %w'1 2 4' Artist[@artist2.id].related_artists.map{|x| x.name}.sort.should == %w'1 2 3' Artist[@artist3.id].related_artists.map{|x| x.name}.sort.should == %w'2 3 4' Artist[@artist4.id].related_artists.map{|x| x.name}.sort.should == %w'1 3 4' Artist.filter(:id=>@artist1.id).eager(:related_artists).all.map{|x| x.related_artists.map{|a| a.name}}.flatten.sort.should == %w'1 2 4' Artist.filter(:id=>@artist2.id).eager(:related_artists).all.map{|x| x.related_artists.map{|a| a.name}}.flatten.sort.should == %w'1 2 3' Artist.filter(:id=>@artist3.id).eager(:related_artists).all.map{|x| x.related_artists.map{|a| a.name}}.flatten.sort.should == %w'2 3 4' Artist.filter(:id=>@artist4.id).eager(:related_artists).all.map{|x| x.related_artists.map{|a| a.name}}.flatten.sort.should == %w'1 3 4' Artist.filter(:artists__id=>@artist1.id).eager_graph(:related_artists).all.map{|x| x.related_artists.map{|a| a.name}}.flatten.sort.should == %w'1 2 4' Artist.filter(:artists__id=>@artist2.id).eager_graph(:related_artists).all.map{|x| x.related_artists.map{|a| a.name}}.flatten.sort.should == %w'1 2 3' Artist.filter(:artists__id=>@artist3.id).eager_graph(:related_artists).all.map{|x| x.related_artists.map{|a| a.name}}.flatten.sort.should == %w'2 3 4' Artist.filter(:artists__id=>@artist4.id).eager_graph(:related_artists).all.map{|x| x.related_artists.map{|a| a.name}}.flatten.sort.should == %w'1 3 4' Artist.filter(:related_artists=>@artist1).all.map{|a| a.name}.sort.should == %w'1 2 4' Artist.filter(:related_artists=>@artist2).all.map{|a| a.name}.sort.should == %w'1 2 3' Artist.filter(:related_artists=>@artist3).all.map{|a| a.name}.sort.should == %w'2 3 4' Artist.filter(:related_artists=>@artist4).all.map{|a| a.name}.sort.should == %w'1 3 4' Artist.exclude(:related_artists=>@artist1).all.map{|a| a.name}.sort.should == %w'3' Artist.exclude(:related_artists=>@artist2).all.map{|a| a.name}.sort.should == %w'4' Artist.exclude(:related_artists=>@artist3).all.map{|a| a.name}.sort.should == %w'1' Artist.exclude(:related_artists=>@artist4).all.map{|a| a.name}.sort.should == %w'2' Artist.filter(:related_artists=>[@artist1, @artist4]).all.map{|a| a.name}.sort.should == %w'1 2 3 4' Artist.exclude(:related_artists=>[@artist1, @artist4]).all.map{|a| a.name}.sort.should == %w'' 
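    # A model dataset can also be given as the filter value, which Sequel
    # expresses as a subquery rather than an enumerated list of keys: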
Artist.filter(:related_artists=>Artist.filter(:id=>@artist1.id)).all.map{|a| a.name}.sort.should == %w'1 2 4' Artist.exclude(:related_artists=>Artist.filter(:id=>@artist1.id)).all.map{|a| a.name}.sort.should == %w'3' c = self_join(Artist) c.filter(:related_artists=>@artist1).all.map{|a| a.name}.sort.should == %w'1 2 4' c.filter(:related_artists=>@artist2).all.map{|a| a.name}.sort.should == %w'1 2 3' c.filter(:related_artists=>@artist3).all.map{|a| a.name}.sort.should == %w'2 3 4' c.filter(:related_artists=>@artist4).all.map{|a| a.name}.sort.should == %w'1 3 4' c.exclude(:related_artists=>@artist1).all.map{|a| a.name}.sort.should == %w'3' c.exclude(:related_artists=>@artist2).all.map{|a| a.name}.sort.should == %w'4' c.exclude(:related_artists=>@artist3).all.map{|a| a.name}.sort.should == %w'1' c.exclude(:related_artists=>@artist4).all.map{|a| a.name}.sort.should == %w'2' c.filter(:related_artists=>[@artist1, @artist4]).all.map{|a| a.name}.sort.should == %w'1 2 3 4' c.exclude(:related_artists=>[@artist1, @artist4]).all.map{|a| a.name}.sort.should == %w'' c.filter(:related_artists=>c.filter(:artists__id=>@artist1.id)).all.map{|a| a.name}.sort.should == %w'1 2 4' c.exclude(:related_artists=>c.filter(:artists__id=>@artist1.id)).all.map{|a| a.name}.sort.should == %w'3' end specify "should handle extreme case with 5 join tables" do Artist.many_through_many :related_albums, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_artists, :album_id, :artist_id], [:artists, :id, :id], [:albums_artists, :artist_id, :album_id]], :class=>Album, :distinct=>true @db[:albums_artists].delete @album1.add_artist(@artist1) @album1.add_artist(@artist2) @album2.add_artist(@artist2) @album2.add_artist(@artist3) @album3.add_artist(@artist1) @album4.add_artist(@artist3) @album4.add_artist(@artist4) Artist[@artist1.id].related_albums.map{|x| x.name}.sort.should == %w'A B C' Artist[@artist2.id].related_albums.map{|x| x.name}.sort.should == %w'A B C D' Artist[@artist3.id].related_albums.map{|x| x.name}.sort.should == %w'A B D' Artist[@artist4.id].related_albums.map{|x| x.name}.sort.should == %w'B D' Artist.plugin :prepared_statements_associations Artist[@artist1.id].related_albums.map{|x| x.name}.sort.should == %w'A B C' Artist[@artist2.id].related_albums.map{|x| x.name}.sort.should == %w'A B C D' Artist[@artist3.id].related_albums.map{|x| x.name}.sort.should == %w'A B D' Artist[@artist4.id].related_albums.map{|x| x.name}.sort.should == %w'B D' Artist.filter(:id=>@artist1.id).eager(:related_albums).all.map{|x| x.related_albums.map{|a| a.name}}.flatten.sort.should == %w'A B C' Artist.filter(:id=>@artist2.id).eager(:related_albums).all.map{|x| x.related_albums.map{|a| a.name}}.flatten.sort.should == %w'A B C D' Artist.filter(:id=>@artist3.id).eager(:related_albums).all.map{|x| x.related_albums.map{|a| a.name}}.flatten.sort.should == %w'A B D' Artist.filter(:id=>@artist4.id).eager(:related_albums).all.map{|x| x.related_albums.map{|a| a.name}}.flatten.sort.should == %w'B D' Artist.filter(:artists__id=>@artist1.id).eager_graph(:related_albums).all.map{|x| x.related_albums.map{|a| a.name}}.flatten.sort.should == %w'A B C' Artist.filter(:artists__id=>@artist2.id).eager_graph(:related_albums).all.map{|x| x.related_albums.map{|a| a.name}}.flatten.sort.should == %w'A B C D' Artist.filter(:artists__id=>@artist3.id).eager_graph(:related_albums).all.map{|x| x.related_albums.map{|a| a.name}}.flatten.sort.should == %w'A B D' Artist.filter(:artists__id=>@artist4.id).eager_graph(:related_albums).all.map{|x| 
x.related_albums.map{|a| a.name}}.flatten.sort.should == %w'B D' Artist.filter(:related_albums=>@album1).all.map{|a| a.name}.sort.should == %w'1 2 3' Artist.filter(:related_albums=>@album2).all.map{|a| a.name}.sort.should == %w'1 2 3 4' Artist.filter(:related_albums=>@album3).all.map{|a| a.name}.sort.should == %w'1 2' Artist.filter(:related_albums=>@album4).all.map{|a| a.name}.sort.should == %w'2 3 4' Artist.exclude(:related_albums=>@album1).all.map{|a| a.name}.sort.should == %w'4' Artist.exclude(:related_albums=>@album2).all.map{|a| a.name}.sort.should == %w'' Artist.exclude(:related_albums=>@album3).all.map{|a| a.name}.sort.should == %w'3 4' Artist.exclude(:related_albums=>@album4).all.map{|a| a.name}.sort.should == %w'1' Artist.filter(:related_albums=>[@album1, @album3]).all.map{|a| a.name}.sort.should == %w'1 2 3' Artist.filter(:related_albums=>[@album3, @album4]).all.map{|a| a.name}.sort.should == %w'1 2 3 4' Artist.exclude(:related_albums=>[@album1, @album3]).all.map{|a| a.name}.sort.should == %w'4' Artist.exclude(:related_albums=>[@album2, @album4]).all.map{|a| a.name}.sort.should == %w'' Artist.filter(:related_albums=>Album.filter(:id=>[@album1.id, @album3.id])).all.map{|a| a.name}.sort.should == %w'1 2 3' Artist.exclude(:related_albums=>Album.filter(:id=>[@album1.id, @album3.id])).all.map{|a| a.name}.sort.should == %w'4' c = self_join(Artist) c.filter(:related_albums=>@album1).all.map{|a| a.name}.sort.should == %w'1 2 3' c.filter(:related_albums=>@album2).all.map{|a| a.name}.sort.should == %w'1 2 3 4' c.filter(:related_albums=>@album3).all.map{|a| a.name}.sort.should == %w'1 2' c.filter(:related_albums=>@album4).all.map{|a| a.name}.sort.should == %w'2 3 4' c.exclude(:related_albums=>@album1).all.map{|a| a.name}.sort.should == %w'4' c.exclude(:related_albums=>@album2).all.map{|a| a.name}.sort.should == %w'' c.exclude(:related_albums=>@album3).all.map{|a| a.name}.sort.should == %w'3 4' c.exclude(:related_albums=>@album4).all.map{|a| a.name}.sort.should == %w'1' c.filter(:related_albums=>[@album1, @album3]).all.map{|a| a.name}.sort.should == %w'1 2 3' c.filter(:related_albums=>[@album3, @album4]).all.map{|a| a.name}.sort.should == %w'1 2 3 4' c.exclude(:related_albums=>[@album1, @album3]).all.map{|a| a.name}.sort.should == %w'4' c.exclude(:related_albums=>[@album2, @album4]).all.map{|a| a.name}.sort.should == %w'' c.filter(:related_albums=>self_join(Album).filter(:albums__id=>[@album1.id, @album3.id])).all.map{|a| a.name}.sort.should == %w'1 2 3' c.exclude(:related_albums=>self_join(Album).filter(:albums__id=>[@album1.id, @album3.id])).all.map{|a| a.name}.sort.should == %w'4' end end describe "Lazy Attributes plugin" do before(:all) do @db = DB @db.create_table!(:items) do primary_key :id String :name Integer :num end @db[:items].delete class ::Item < Sequel::Model(@db) plugin :lazy_attributes, :num end Item.create(:name=>'J', :num=>1) end after(:all) do @db.drop_table?(:items) Object.send(:remove_const, :Item) end specify "should not include lazy attribute columns by default" do Item.first.should == Item.load(:id=>1, :name=>'J') end specify "should load lazy attribute on access" do Item.first.num.should == 1 end specify "should typecast lazy attribute in setter" do i = Item.new i.num = '1' i.num.should == 1 end specify "should load lazy attribute for all items returned when accessing any item if using identity map " do Item.create(:name=>'K', :num=>2) a = Item.order(:name).all a.should == [Item.load(:id=>1, :name=>'J'), Item.load(:id=>2, :name=>'K')] a.map{|x| x[:num]}.should == 
[nil, nil] a.first.num.should == 1 a.map{|x| x[:num]}.should == [1, 2] a.last.num.should == 2 end end describe "Tactical Eager Loading Plugin" do before(:all) do @db = DB @db.instance_variable_set(:@schemas, {}) @db.create_table!(:artists) do primary_key :id String :name end @db.create_table!(:albums) do primary_key :id String :name foreign_key :artist_id, :artists end end before do @db[:albums].delete @db[:artists].delete class ::Album < Sequel::Model(@db) plugin :tactical_eager_loading many_to_one :artist end class ::Artist < Sequel::Model(@db) plugin :tactical_eager_loading one_to_many :albums, :order=>:name end @artist1 = Artist.create(:name=>'1') @artist2 = Artist.create(:name=>'2') @artist3 = Artist.create(:name=>'3') @artist4 = Artist.create(:name=>'4') @album1 = Album.create(:name=>'A', :artist=>@artist1) @album2 = Album.create(:name=>'B', :artist=>@artist1) @album3 = Album.create(:name=>'C', :artist=>@artist2) @album4 = Album.create(:name=>'D', :artist=>@artist3) end after do [:Album, :Artist].each{|s| Object.send(:remove_const, s)} end after(:all) do @db.drop_table? :albums, :artists end specify "should eagerly load associations for all items when accessing any item" do a = Artist.order(:name).all a.map{|x| x.associations}.should == [{}, {}, {}, {}] a.first.albums.should == [@album1, @album2] a.map{|x| x.associations}.should == [{:albums=>[@album1, @album2]}, {:albums=>[@album3]}, {:albums=>[@album4]}, {:albums=>[]}] a = Album.order(:name).all a.map{|x| x.associations}.should == [{}, {}, {}, {}] a.first.artist.should == @artist1 a.map{|x| x.associations}.should == [{:artist=>@artist1}, {:artist=>@artist1}, {:artist=>@artist2}, {:artist=>@artist3}] end end describe "Touch plugin" do before(:all) do @db = DB @db.drop_table? :albums_artists, :albums, :artists @db.create_table(:artists) do primary_key :id String :name DateTime :updated_at end @db.create_table(:albums) do primary_key :id String :name foreign_key :artist_id, :artists DateTime :updated_at end @db.create_join_table({:album_id=>:albums, :artist_id=>:artists}, :no_index=>true) end before do @db[:albums].delete @db[:artists].delete class ::Album < Sequel::Model(@db) end class ::Artist < Sequel::Model(@db) end @artist = Artist.create(:name=>'1') @album = Album.create(:name=>'A', :artist_id=>@artist.id) end after do [:Album, :Artist].each{|s| Object.send(:remove_const, s)} end after(:all) do @db.drop_table? 
:albums_artists, :albums, :artists end specify "should update the timestamp column when touching the record" do Album.plugin :touch @album.updated_at.should == nil @album.touch @album.updated_at.to_i.should be_within(2).of(Time.now.to_i) end cspecify "should update the timestamp column for many_to_one associated records when the record is updated or destroyed", [:do, :sqlite], [:jdbc, :sqlite], [:swift] do Album.many_to_one :artist Album.plugin :touch, :associations=>:artist @artist.updated_at.should == nil @album.update(:name=>'B') ua = @artist.reload.updated_at if ua.is_a?(Time) ua.to_i.should be_within(60).of(Time.now.to_i) else (DateTime.now - ua).should be_within(60.0/86400).of(0) end @artist.update(:updated_at=>nil) @album.destroy if ua.is_a?(Time) ua.to_i.should be_within(60).of(Time.now.to_i) else (DateTime.now - ua).should be_within(60.0/86400).of(0) end end cspecify "should update the timestamp column for one_to_many associated records when the record is updated", [:do, :sqlite], [:jdbc, :sqlite], [:swift] do Artist.one_to_many :albums Artist.plugin :touch, :associations=>:albums @album.updated_at.should == nil @artist.update(:name=>'B') ua = @album.reload.updated_at if ua.is_a?(Time) ua.to_i.should be_within(60).of(Time.now.to_i) else (DateTime.now - ua).should be_within(60.0/86400).of(0) end end cspecify "should update the timestamp column for many_to_many associated records when the record is updated", [:do, :sqlite], [:jdbc, :sqlite], [:swift] do Artist.many_to_many :albums Artist.plugin :touch, :associations=>:albums @artist.add_album(@album) @album.updated_at.should == nil @artist.update(:name=>'B') ua = @album.reload.updated_at if ua.is_a?(Time) ua.to_i.should be_within(60).of(Time.now.to_i) else (DateTime.now - ua).should be_within(60.0/86400).of(0) end end end describe "Serialization plugin" do before do @db = DB @db.create_table!(:items) do primary_key :id String :stuff end class ::Item < Sequel::Model(@db) plugin :serialization, :marshal, :stuff end end after do @db.drop_table?(:items) Object.send(:remove_const, :Item) end specify "should serialize and deserialize items as needed" do i = Item.create(:stuff=>{:a=>1}) i.stuff.should == {:a=>1} i.stuff = [1, 2, 3] i.save Item.first.stuff.should == [1, 2, 3] i.update(:stuff=>Item.new) Item.first.stuff.should == Item.new end end describe "OptimisticLocking plugin" do before(:all) do @db = DB @db.create_table!(:people) do primary_key :id String :name Integer :lock_version, :default=>0, :null=>false end class ::Person < Sequel::Model(@db) plugin :optimistic_locking end end before do @db[:people].delete @p = Person.create(:name=>'John') end after(:all) do @db.drop_table?(:people) Object.send(:remove_const, :Person) end specify "should raise an error when updating a stale record" do p1 = Person[@p.id] p2 = Person[@p.id] p1.update(:name=>'Jim') proc{p2.update(:name=>'Bob')}.should raise_error(Sequel::Plugins::OptimisticLocking::Error) end specify "should raise an error when destroying a stale record" do p1 = Person[@p.id] p2 = Person[@p.id] p1.update(:name=>'Jim') proc{p2.destroy}.should raise_error(Sequel::Plugins::OptimisticLocking::Error) end specify "should not raise an error when updating the same record twice" do p1 = Person[@p.id] p1.update(:name=>'Jim') proc{p1.update(:name=>'Bob')}.should_not raise_error end end describe "Composition plugin" do before do @db = DB @db.create_table!(:events) do primary_key :id Integer :year Integer :month Integer :day end class ::Event < Sequel::Model(@db) plugin :composition composition 
:date, :composer=>proc{Date.new(year, month, day) if year && month && day}, :decomposer=>(proc do if date self.year = date.year self.month = date.month self.day = date.day else self.year, self.month, self.day = nil end end) composition :date, :mapping=>[:year, :month, :day] end @e1 = Event.create(:year=>2010, :month=>2, :day=>15) @e2 = Event.create(:year=>nil) end after do @db.drop_table?(:events) Object.send(:remove_const, :Event) end specify "should return a composed object if the underlying columns have a value" do @e1.date.should == Date.civil(2010, 2, 15) @e2.date.should == nil end specify "should decompose the object when saving the record" do @e1.date = Date.civil(2009, 1, 2) @e1.save @e1.year.should == 2009 @e1.month.should == 1 @e1.day.should == 2 end specify "should save all columns when saving changes" do @e2.date = Date.civil(2009, 10, 2) @e2.save_changes @e2.reload @e2.year.should == 2009 @e2.month.should == 10 @e2.day.should == 2 end end # DB2's implemention of CTE is too limited to use this plugin if DB.dataset.supports_cte? and !Sequel.guarded?(:db2) describe "RcteTree Plugin" do before(:all) do @db = DB @db.create_table!(:nodes) do primary_key :id Integer :parent_id String :name end class ::Node < Sequel::Model(@db) plugin :rcte_tree, :order=>:name end @nodes = [] @nodes << @a = Node.create(:name=>'a') @nodes << @b = Node.create(:name=>'b') @nodes << @aa = Node.create(:name=>'aa', :parent=>@a) @nodes << @ab = Node.create(:name=>'ab', :parent=>@a) @nodes << @ba = Node.create(:name=>'ba', :parent=>@b) @nodes << @bb = Node.create(:name=>'bb', :parent=>@b) @nodes << @aaa = Node.create(:name=>'aaa', :parent=>@aa) @nodes << @aab = Node.create(:name=>'aab', :parent=>@aa) @nodes << @aba = Node.create(:name=>'aba', :parent=>@ab) @nodes << @abb = Node.create(:name=>'abb', :parent=>@ab) @nodes << @aaaa = Node.create(:name=>'aaaa', :parent=>@aaa) @nodes << @aaab = Node.create(:name=>'aaab', :parent=>@aaa) @nodes << @aaaaa = Node.create(:name=>'aaaaa', :parent=>@aaaa) end before do @nodes.each{|n| n.associations.clear} end after(:all) do @db.drop_table? 
:nodes Object.send(:remove_const, :Node) end specify "should load all standard (not-CTE) methods correctly" do @a.children.should == [@aa, @ab] @b.children.should == [@ba, @bb] @aa.children.should == [@aaa, @aab] @ab.children.should == [@aba, @abb] @ba.children.should == [] @bb.children.should == [] @aaa.children.should == [@aaaa, @aaab] @aab.children.should == [] @aba.children.should == [] @abb.children.should == [] @aaaa.children.should == [@aaaaa] @aaab.children.should == [] @aaaaa.children.should == [] @a.parent.should == nil @b.parent.should == nil @aa.parent.should == @a @ab.parent.should == @a @ba.parent.should == @b @bb.parent.should == @b @aaa.parent.should == @aa @aab.parent.should == @aa @aba.parent.should == @ab @abb.parent.should == @ab @aaaa.parent.should == @aaa @aaab.parent.should == @aaa @aaaaa.parent.should == @aaaa end specify "should load all ancestors and descendants lazily for a given instance" do @a.descendants.should == [@aa, @aaa, @aaaa, @aaaaa, @aaab, @aab, @ab, @aba, @abb] @b.descendants.should == [@ba, @bb] @aa.descendants.should == [@aaa, @aaaa, @aaaaa, @aaab, @aab] @ab.descendants.should == [@aba, @abb] @ba.descendants.should == [] @bb.descendants.should == [] @aaa.descendants.should == [@aaaa, @aaaaa, @aaab] @aab.descendants.should == [] @aba.descendants.should == [] @abb.descendants.should == [] @aaaa.descendants.should == [@aaaaa] @aaab.descendants.should == [] @aaaaa.descendants.should == [] @a.ancestors.should == [] @b.ancestors.should == [] @aa.ancestors.should == [@a] @ab.ancestors.should == [@a] @ba.ancestors.should == [@b] @bb.ancestors.should == [@b] @aaa.ancestors.should == [@a, @aa] @aab.ancestors.should == [@a, @aa] @aba.ancestors.should == [@a, @ab] @abb.ancestors.should == [@a, @ab] @aaaa.ancestors.should == [@a, @aa, @aaa] @aaab.ancestors.should == [@a, @aa, @aaa] @aaaaa.ancestors.should == [@a, @aa, @aaa, @aaaa] end specify "should eagerly load all ancestors and descendants for a dataset" do nodes = Node.filter(:id=>[@a.id, @b.id, @aaa.id]).order(:name).eager(:ancestors, :descendants).all nodes.should == [@a, @aaa, @b] nodes[0].descendants.should == [@aa, @aaa, @aaaa, @aaaaa, @aaab, @aab, @ab, @aba, @abb] nodes[1].descendants.should == [@aaaa, @aaaaa, @aaab] nodes[2].descendants.should == [@ba, @bb] nodes[0].ancestors.should == [] nodes[1].ancestors.should == [@a, @aa] nodes[2].ancestors.should == [] end specify "should work correctly if not all columns are selected" do c = Class.new(Sequel::Model(@db[:nodes])) c.plugin :rcte_tree, :order=>:name c.plugin :lazy_attributes, :name c[:name=>'aaaa'].descendants.should == [c.load(:parent_id=>11, :id=>13)] c[:name=>'aa'].ancestors.should == [c.load(:parent_id=>nil, :id=>1)] nodes = c.filter(:id=>[@a.id, @b.id, @aaa.id]).order(:name).eager(:ancestors, :descendants).all nodes.should == [{:parent_id=>nil, :id=>1}, {:parent_id=>3, :id=>7}, {:parent_id=>nil, :id=>2}].map{|x| c.load(x)} nodes[2].descendants.should == [{:parent_id=>2, :id=>5}, {:parent_id=>2, :id=>6}].map{|x| c.load(x)} nodes[1].ancestors.should == [{:parent_id=>nil, :id=>1}, {:parent_id=>1, :id=>3}].map{|x| c.load(x)} end specify "should eagerly load descendants to a given level" do nodes = Node.filter(:id=>[@a.id, @b.id, @aaa.id]).order(:name).eager(:descendants=>1).all nodes.should == [@a, @aaa, @b] nodes[0].descendants.should == [@aa, @ab] nodes[1].descendants.should == [@aaaa, @aaab] nodes[2].descendants.should == [@ba, @bb] nodes = Node.filter(:id=>[@a.id, @b.id, @aaa.id]).order(:name).eager(:descendants=>2).all nodes.should == [@a, 
@aaa, @b] nodes[0].descendants.should == [@aa, @aaa, @aab, @ab, @aba, @abb] nodes[1].descendants.should == [@aaaa, @aaaaa, @aaab] nodes[2].descendants.should == [@ba, @bb] end specify "should populate all :children associations when eagerly loading descendants for a dataset" do nodes = Node.filter(:id=>[@a.id, @b.id, @aaa.id]).order(:name).eager(:descendants).all nodes[0].associations[:children].should == [@aa, @ab] nodes[1].associations[:children].should == [@aaaa, @aaab] nodes[2].associations[:children].should == [@ba, @bb] nodes[0].associations[:children].map{|c1| c1.associations[:children]}.should == [[@aaa, @aab], [@aba, @abb]] nodes[1].associations[:children].map{|c1| c1.associations[:children]}.should == [[@aaaaa], []] nodes[2].associations[:children].map{|c1| c1.associations[:children]}.should == [[], []] nodes[0].associations[:children].map{|c1| c1.associations[:children].map{|c2| c2.associations[:children]}}.should == [[[@aaaa, @aaab], []], [[], []]] nodes[1].associations[:children].map{|c1| c1.associations[:children].map{|c2| c2.associations[:children]}}.should == [[[]], []] nodes[0].associations[:children].map{|c1| c1.associations[:children].map{|c2| c2.associations[:children].map{|c3| c3.associations[:children]}}}.should == [[[[@aaaaa], []], []], [[], []]] nodes[0].associations[:children].map{|c1| c1.associations[:children].map{|c2| c2.associations[:children].map{|c3| c3.associations[:children].map{|c4| c4.associations[:children]}}}}.should == [[[[[]], []], []], [[], []]] end specify "should not populate :children associations for final level when loading descendants to a given level" do nodes = Node.filter(:id=>[@a.id, @b.id, @aaa.id]).order(:name).eager(:descendants=>1).all nodes[0].associations[:children].should == [@aa, @ab] nodes[0].associations[:children].map{|c1| c1.associations[:children]}.should == [nil, nil] nodes[1].associations[:children].should == [@aaaa, @aaab] nodes[1].associations[:children].map{|c1| c1.associations[:children]}.should == [nil, nil] nodes[2].associations[:children].should == [@ba, @bb] nodes[2].associations[:children].map{|c1| c1.associations[:children]}.should == [nil, nil] nodes[0].associations[:children].map{|c1| c1.children}.should == [[@aaa, @aab], [@aba, @abb]] nodes[1].associations[:children].map{|c1| c1.children}.should == [[@aaaaa], []] nodes[2].associations[:children].map{|c1| c1.children}.should == [[], []] nodes = Node.filter(:id=>[@a.id, @b.id, @aaa.id]).order(:name).eager(:descendants=>2).all nodes[0].associations[:children].should == [@aa, @ab] nodes[0].associations[:children].map{|c1| c1.associations[:children]}.should == [[@aaa, @aab], [@aba, @abb]] nodes[0].associations[:children].map{|c1| c1.associations[:children].map{|c2| c2.associations[:children]}}.should == [[[@aaaa, @aaab], nil], [nil, nil]] nodes[0].associations[:children].map{|c1| c1.associations[:children].map{|c2| (cc2 = c2.associations[:children]) ? cc2.map{|c3| c3.associations[:children]} : nil}}.should == [[[[@aaaaa], []], nil], [nil, nil]] nodes[0].associations[:children].map{|c1| c1.associations[:children].map{|c2| (cc2 = c2.associations[:children]) ? cc2.map{|c3| (cc3 = c3.associations[:children]) ? 
cc3.map{|c4| c4.associations[:children]} : nil} : nil}}.should == [[[[nil], []], nil], [nil, nil]] nodes[1].associations[:children].should == [@aaaa, @aaab] nodes[1].associations[:children].map{|c1| c1.associations[:children]}.should == [[@aaaaa], []] nodes[1].associations[:children].map{|c1| c1.associations[:children].map{|c2| c2.associations[:children]}}.should == [[nil], []] nodes[2].associations[:children].should == [@ba, @bb] nodes[2].associations[:children].map{|c1| c1.associations[:children]}.should == [[], []] nodes[0].associations[:children].map{|c1| c1.associations[:children].map{|c2| c2.children}}.should == [[[@aaaa, @aaab], []], [[], []]] nodes[0].associations[:children].map{|c1| c1.associations[:children].map{|c2| c2.children.map{|c3| c3.children}}}.should == [[[[@aaaaa], []], []], [[], []]] nodes[0].associations[:children].map{|c1| c1.associations[:children].map{|c2| c2.children.map{|c3| c3.children.map{|c4| c4.children}}}}.should == [[[[[]], []], []], [[], []]] nodes[1].associations[:children].map{|c1| c1.associations[:children].map{|c2| c2.children}}.should == [[[]], []] end specify "should populate all :children associations when lazily loading descendants" do @a.descendants @a.associations[:children].should == [@aa, @ab] @a.associations[:children].map{|c1| c1.associations[:children]}.should == [[@aaa, @aab], [@aba, @abb]] @a.associations[:children].map{|c1| c1.associations[:children].map{|c2| c2.associations[:children]}}.should == [[[@aaaa, @aaab], []], [[], []]] @a.associations[:children].map{|c1| c1.associations[:children].map{|c2| c2.associations[:children].map{|c3| c3.associations[:children]}}}.should == [[[[@aaaaa], []], []], [[], []]] @a.associations[:children].map{|c1| c1.associations[:children].map{|c2| c2.associations[:children].map{|c3| c3.associations[:children].map{|c4| c4.associations[:children]}}}}.should == [[[[[]], []], []], [[], []]] @b.descendants @b.associations[:children].should == [@ba, @bb] @b.associations[:children].map{|c1| c1.associations[:children]}.should == [[], []] @aaa.descendants @aaa.associations[:children].map{|c1| c1.associations[:children]}.should == [[@aaaaa], []] @aaa.associations[:children].map{|c1| c1.associations[:children].map{|c2| c2.associations[:children]}}.should == [[[]], []] end specify "should populate all :parent associations when eagerly loading ancestors for a dataset" do nodes = Node.filter(:id=>[@a.id, @ba.id, @aaa.id, @aaaaa.id]).order(:name).eager(:ancestors).all nodes[0].associations.fetch(:parent, 1).should == nil nodes[1].associations[:parent].should == @aa nodes[1].associations[:parent].associations[:parent].should == @a nodes[1].associations[:parent].associations[:parent].associations.fetch(:parent, 1).should == nil nodes[2].associations[:parent].should == @aaaa nodes[2].associations[:parent].associations[:parent].should == @aaa nodes[2].associations[:parent].associations[:parent].associations[:parent].should == @aa nodes[2].associations[:parent].associations[:parent].associations[:parent].associations[:parent].should == @a nodes[2].associations[:parent].associations[:parent].associations[:parent].associations[:parent].associations.fetch(:parent, 1).should == nil nodes[3].associations[:parent].should == @b nodes[3].associations[:parent].associations.fetch(:parent, 1).should == nil end specify "should populate all :parent associations when lazily loading ancestors" do @a.reload @a.ancestors @a.associations[:parent].should == nil @ba.reload @ba.ancestors @ba.associations[:parent].should == @b 
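    # associations.fetch with a default distinguishes a parent that was
    # cached as nil from one that was never loaded at all: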
    @ba.associations[:parent].associations.fetch(:parent, 1).should == nil

    @aaaaa.reload
    @aaaaa.ancestors
    @aaaaa.associations[:parent].should == @aaaa
    @aaaaa.associations[:parent].associations[:parent].should == @aaa
    @aaaaa.associations[:parent].associations[:parent].associations[:parent].should == @aa
    @aaaaa.associations[:parent].associations[:parent].associations[:parent].associations[:parent].should == @a
    @aaaaa.associations[:parent].associations[:parent].associations[:parent].associations[:parent].associations.fetch(:parent, 1).should == nil
  end
end
end

describe "Instance Filters plugin" do
  before(:all) do
    @db = DB
    @db.create_table!(:items) do
      primary_key :id
      String :name
      Integer :cost
      Integer :number
    end
    class ::Item < Sequel::Model(@db)
      plugin :instance_filters
    end
  end
  before do
    @db[:items].delete
    @i = Item.create(:name=>'J', :number=>1, :cost=>2)
    @i.instance_filter(:number=>1)
    @i.set(:name=>'K')
  end
  after(:all) do
    @db.drop_table?(:items)
    Object.send(:remove_const, :Item)
  end

  specify "should not raise an error if saving only updates one row" do
    @i.save
    @i.refresh.name.should == 'K'
  end

  specify "should raise error if saving doesn't update a row" do
    @i.this.update(:number=>2)
    proc{@i.save}.should raise_error(Sequel::Error)
  end

  specify "should apply all instance filters" do
    @i.instance_filter{cost <= 2}
    @i.this.update(:number=>2)
    proc{@i.save}.should raise_error(Sequel::Error)
    @i.this.update(:number=>1, :cost=>3)
    proc{@i.save}.should raise_error(Sequel::Error)
    @i.this.update(:cost=>2)
    @i.save
    @i.refresh.name.should == 'K'
  end

  specify "should clear instance filters after successful save" do
    @i.save
    @i.this.update(:number=>2)
    @i.update(:name=>'L')
    @i.refresh.name.should == 'L'
  end

  specify "should not raise an error if deleting only deletes one row" do
    @i.destroy
    proc{@i.refresh}.should raise_error(Sequel::Error, 'Record not found')
  end

  specify "should raise error if destroying doesn't delete a row" do
    @i.this.update(:number=>2)
    proc{@i.destroy}.should raise_error(Sequel::Error)
  end
end

describe "UpdatePrimaryKey plugin" do
  before(:all) do
    @db = DB
    @db.create_table!(:t) do
      Integer :a, :primary_key=>true
      Integer :b
    end
    @ds = @db[:t]
    @c = Class.new(Sequel::Model(@ds))
    @c.set_primary_key(:a)
    @c.unrestrict_primary_key
    @c.plugin :update_primary_key
  end
  before do
    @ds.delete
    @ds.insert(:a=>1, :b=>3)
  end
  after(:all) do
    @db.drop_table?(:t)
  end

  specify "should handle regular updates" do
    @c.first.update(:b=>4)
    @db[:t].all.should == [{:a=>1, :b=>4}]
    @c.first.set(:b=>5).save
    @db[:t].all.should == [{:a=>1, :b=>5}]
    @c.first.set(:b=>6).save(:columns=>:b)
    @db[:t].all.should == [{:a=>1, :b=>6}]
  end

  specify "should handle updating the primary key field with another field" do
    @c.first.update(:a=>2, :b=>4)
    @db[:t].all.should == [{:a=>2, :b=>4}]
  end

  specify "should handle updating just the primary key field when saving changes" do
    @c.first.update(:a=>2)
    @db[:t].all.should == [{:a=>2, :b=>3}]
    @c.first.set(:a=>3).save(:columns=>:a)
    @db[:t].all.should == [{:a=>3, :b=>3}]
  end

  specify "should handle saving after modifying the primary key field with another field" do
    @c.first.set(:a=>2, :b=>4).save
    @db[:t].all.should == [{:a=>2, :b=>4}]
  end

  specify "should handle saving after modifying just the primary key field" do
    @c.first.set(:a=>2).save
    @db[:t].all.should == [{:a=>2, :b=>3}]
  end

  specify "should handle saving after updating the primary key" do
    @c.first.update(:a=>2).update(:b=>4).set(:b=>5).save
    @db[:t].all.should == [{:a=>2, :b=>5}]
  end
end

describe "AssociationPks plugin" do
  before(:all) do
    @db = DB
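    # The tables below cover every key shape the association_pks plugin
    # supports: scalar keys, composite keys on the right side (vocalists,
    # hits), and composite keys on the left side. As a minimal sketch of the
    # reader/writer pair the plugin adds (using the Artist/Album classes
    # defined later in this block; not part of the spec run):
    #
    #   Artist[artist_id].album_pks           # => [1, 2, 3]
    #   Artist[artist_id].album_pks = [1, 3]  # associates albums 1 and 3,
    #                                         # disassociating album 2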
@db.drop_table?(:albums_tags, :albums_vocalists, :vocalists_instruments, :vocalists_hits, :hits, :instruments, :vocalists, :tags, :albums, :artists) @db.create_table(:artists) do primary_key :id String :name end @db.create_table(:albums) do primary_key :id String :name foreign_key :artist_id, :artists end @db.create_table(:tags) do primary_key :id String :name end @db.create_table(:albums_tags) do foreign_key :album_id, :albums foreign_key :tag_id, :tags end @db.create_table(:vocalists) do String :first String :last primary_key [:first, :last] foreign_key :album_id, :albums end @db.create_table(:albums_vocalists) do foreign_key :album_id, :albums String :first String :last foreign_key [:first, :last], :vocalists end @db.create_table(:instruments) do primary_key :id String :first String :last foreign_key [:first, :last], :vocalists end @db.create_table(:vocalists_instruments) do String :first String :last foreign_key [:first, :last], :vocalists foreign_key :instrument_id, :instruments end @db.create_table(:hits) do Integer :year Integer :week primary_key [:year, :week] String :first String :last foreign_key [:first, :last], :vocalists end @db.create_table(:vocalists_hits) do String :first String :last foreign_key [:first, :last], :vocalists Integer :year Integer :week foreign_key [:year, :week], :hits end class ::Artist < Sequel::Model plugin :association_pks one_to_many :albums, :order=>:id end class ::Album < Sequel::Model plugin :association_pks many_to_many :tags, :order=>:id end class ::Tag < Sequel::Model end class ::Vocalist < Sequel::Model set_primary_key [:first, :last] plugin :association_pks end class ::Instrument < Sequel::Model plugin :association_pks end class ::Hit < Sequel::Model set_primary_key [:year, :week] end end before do [:albums_tags, :albums_vocalists, :vocalists_instruments, :vocalists_hits, :hits, :instruments, :vocalists, :tags, :albums, :artists].each{|t| @db[t].delete} @ar1 =@db[:artists].insert(:name=>'YJM') @ar2 =@db[:artists].insert(:name=>'AS') @al1 =@db[:albums].insert(:name=>'RF', :artist_id=>@ar1) @al2 =@db[:albums].insert(:name=>'MO', :artist_id=>@ar1) @al3 =@db[:albums].insert(:name=>'T', :artist_id=>@ar1) @t1 = @db[:tags].insert(:name=>'A') @t2 = @db[:tags].insert(:name=>'B') @t3 = @db[:tags].insert(:name=>'C') {@al1=>[@t1, @t2, @t3], @al2=>[@t2]}.each do |aid, tids| tids.each{|tid| @db[:albums_tags].insert([aid, tid])} end @v1 = ['F1', 'L1'] @v2 = ['F2', 'L2'] @v3 = ['F3', 'L3'] @db[:vocalists].insert(@v1 + [@al1]) @db[:vocalists].insert(@v2 + [@al1]) @db[:vocalists].insert(@v3 + [@al1]) @i1 = @db[:instruments].insert([:first, :last], @v1) @i2 = @db[:instruments].insert([:first, :last], @v1) @i3 = @db[:instruments].insert([:first, :last], @v1) @h1 = [1997, 1] @h2 = [1997, 2] @h3 = [1997, 3] @db[:hits].insert(@h1 + @v1) @db[:hits].insert(@h2 + @v1) @db[:hits].insert(@h3 + @v1) {@al1=>[@v1, @v2, @v3], @al2=>[@v2]}.each do |aid, vids| vids.each{|vid| @db[:albums_vocalists].insert([aid] + vid)} end {@v1=>[@i1, @i2, @i3], @v2=>[@i2]}.each do |vid, iids| iids.each{|iid| @db[:vocalists_instruments].insert(vid + [iid])} end {@v1=>[@h1, @h2, @h3], @v2=>[@h2]}.each do |vid, hids| hids.each{|hid| @db[:vocalists_hits].insert(vid + hid)} end end after(:all) do @db.drop_table? 
      :albums_tags, :albums_vocalists, :vocalists_instruments, :vocalists_hits, :hits, :instruments, :vocalists, :tags, :albums, :artists
    [:Artist, :Album, :Tag, :Vocalist, :Instrument, :Hit].each{|s| Object.send(:remove_const, s)}
  end

  specify "should return correct associated pks for one_to_many associations" do
    Artist.order(:id).all.map{|a| a.album_pks}.should == [[@al1, @al2, @al3], []]
  end

  specify "should return correct associated pks for many_to_many associations" do
    Album.order(:id).all.map{|a| a.tag_pks.sort}.should == [[@t1, @t2, @t3], [@t2], []]
  end

  specify "should return correct associated right-side cpks for one_to_many associations" do
    Album.one_to_many :vocalists, :order=>:first
    Album.order(:id).all.map{|a| a.vocalist_pks.sort}.should == [[@v1, @v2, @v3], [], []]
  end

  specify "should return correct associated right-side cpks for many_to_many associations" do
    Album.many_to_many :vocalists, :join_table=>:albums_vocalists, :right_key=>[:first, :last], :order=>:first
    Album.order(:id).all.map{|a| a.vocalist_pks.sort}.should == [[@v1, @v2, @v3], [@v2], []]
  end

  specify "should return correct associated pks for left-side cpks for one_to_many associations" do
    Vocalist.one_to_many :instruments, :key=>[:first, :last], :order=>:id
    Vocalist.order(:first, :last).all.map{|a| a.instrument_pks.sort}.should == [[@i1, @i2, @i3], [], []]
  end

  specify "should return correct associated pks for left-side cpks for many_to_many associations" do
    Vocalist.many_to_many :instruments, :join_table=>:vocalists_instruments, :left_key=>[:first, :last], :order=>:id
    Vocalist.order(:first, :last).all.map{|a| a.instrument_pks.sort}.should == [[@i1, @i2, @i3], [@i2], []]
  end

  specify "should return correct associated right-side cpks for left-side cpks for one_to_many associations" do
    Vocalist.one_to_many :hits, :key=>[:first, :last], :order=>:week
    Vocalist.order(:first, :last).all.map{|a| a.hit_pks.sort}.should == [[@h1, @h2, @h3], [], []]
  end

  specify "should return correct associated right-side cpks for left-side cpks for many_to_many associations" do
    Vocalist.many_to_many :hits, :join_table=>:vocalists_hits, :left_key=>[:first, :last], :right_key=>[:year, :week], :order=>:week
    Vocalist.order(:first, :last).all.map{|a| a.hit_pks.sort}.should == [[@h1, @h2, @h3], [@h2], []]
  end

  specify "should set associated pks correctly for a one_to_many association" do
    Artist.use_transactions = true
    Album.order(:id).select_map(:artist_id).should == [@ar1, @ar1, @ar1]

    Artist[@ar2].album_pks = [@al1, @al3]
    Artist[@ar1].album_pks.should == [@al2]
    Album.order(:id).select_map(:artist_id).should == [@ar2, @ar1, @ar2]

    Artist[@ar1].album_pks = [@al1]
    Artist[@ar2].album_pks.should == [@al3]
    Album.order(:id).select_map(:artist_id).should == [@ar1, nil, @ar2]

    Artist[@ar1].album_pks = [@al1, @al2]
    Artist[@ar2].album_pks.should == [@al3]
    Album.order(:id).select_map(:artist_id).should == [@ar1, @ar1, @ar2]
  end

  specify "should set associated pks correctly for a many_to_many association" do
    Artist.use_transactions = true
    @db[:albums_tags].filter(:album_id=>@al1).select_order_map(:tag_id).should == [@t1, @t2, @t3]
    Album[@al1].tag_pks = [@t1, @t3]
    @db[:albums_tags].filter(:album_id=>@al1).select_order_map(:tag_id).should == [@t1, @t3]
    Album[@al1].tag_pks = []
    @db[:albums_tags].filter(:album_id=>@al1).select_order_map(:tag_id).should == []

    @db[:albums_tags].filter(:album_id=>@al2).select_order_map(:tag_id).should == [@t2]
    Album[@al2].tag_pks = [@t1, @t2]
    @db[:albums_tags].filter(:album_id=>@al2).select_order_map(:tag_id).should == [@t1, @t2]
    Album[@al2].tag_pks =
[] @db[:albums_tags].filter(:album_id=>@al1).select_order_map(:tag_id).should == [] @db[:albums_tags].filter(:album_id=>@al3).select_order_map(:tag_id).should == [] Album[@al3].tag_pks = [@t1, @t3] @db[:albums_tags].filter(:album_id=>@al3).select_order_map(:tag_id).should == [@t1, @t3] Album[@al3].tag_pks = [] @db[:albums_tags].filter(:album_id=>@al1).select_order_map(:tag_id).should == [] end specify "should set associated right-side cpks correctly for a one_to_many association" do Album.use_transactions = true Album.one_to_many :vocalists, :order=>:first Album.order(:id).all.map{|a| a.vocalist_pks.sort}.should == [[@v1, @v2, @v3], [], []] Album[@al2].vocalist_pks = [@v1, @v3] Album[@al1].vocalist_pks.should == [@v2] Vocalist.order(:first, :last).select_map(:album_id).should == [@al2, @al1, @al2] Album[@al1].vocalist_pks = [@v1] Album[@al2].vocalist_pks.should == [@v3] Vocalist.order(:first, :last).select_map(:album_id).should == [@al1, nil, @al2] Album[@al1].vocalist_pks = [@v1, @v2] Album[@al2].vocalist_pks.should == [@v3] Vocalist.order(:first, :last).select_map(:album_id).should == [@al1, @al1, @al2] end specify "should set associated right-side cpks correctly for a many_to_many association" do Album.use_transactions = true Album.many_to_many :vocalists, :join_table=>:albums_vocalists, :right_key=>[:first, :last], :order=>:first @db[:albums_vocalists].filter(:album_id=>@al1).select_order_map([:first, :last]).should == [@v1, @v2, @v3] Album[@al1].vocalist_pks = [@v1, @v3] @db[:albums_vocalists].filter(:album_id=>@al1).select_order_map([:first, :last]).should == [@v1, @v3] Album[@al1].vocalist_pks = [] @db[:albums_vocalists].filter(:album_id=>@al1).select_order_map([:first, :last]).should == [] @db[:albums_vocalists].filter(:album_id=>@al2).select_order_map([:first, :last]).should == [@v2] Album[@al2].vocalist_pks = [@v1, @v2] @db[:albums_vocalists].filter(:album_id=>@al2).select_order_map([:first, :last]).should == [@v1, @v2] Album[@al2].vocalist_pks = [] @db[:albums_vocalists].filter(:album_id=>@al1).select_order_map([:first, :last]).should == [] @db[:albums_vocalists].filter(:album_id=>@al3).select_order_map([:first, :last]).should == [] Album[@al3].vocalist_pks = [@v1, @v3] @db[:albums_vocalists].filter(:album_id=>@al3).select_order_map([:first, :last]).should == [@v1, @v3] Album[@al3].vocalist_pks = [] @db[:albums_vocalists].filter(:album_id=>@al1).select_order_map([:first, :last]).should == [] end specify "should set associated pks correctly with left-side cpks for a one_to_many association" do Vocalist.use_transactions = true Vocalist.one_to_many :instruments, :key=>[:first, :last], :order=>:id Vocalist.order(:first, :last).all.map{|a| a.instrument_pks.sort}.should == [[@i1, @i2, @i3], [], []] Vocalist[@v2].instrument_pks = [@i1, @i3] Vocalist[@v1].instrument_pks.should == [@i2] Instrument.order(:id).select_map([:first, :last]).should == [@v2, @v1, @v2] Vocalist[@v1].instrument_pks = [@i1] Vocalist[@v2].instrument_pks.should == [@i3] Instrument.order(:id).select_map([:first, :last]).should == [@v1, [nil, nil], @v2] Vocalist[@v1].instrument_pks = [@i1, @i2] Vocalist[@v2].instrument_pks.should == [@i3] Instrument.order(:id).select_map([:first, :last]).should == [@v1, @v1, @v2] end specify "should set associated pks correctly with left-side cpks for a many_to_many association" do Vocalist.use_transactions = true Vocalist.many_to_many :instruments, :join_table=>:vocalists_instruments, :left_key=>[:first, :last], :order=>:id @db[:vocalists_instruments].filter([:first, 
:last]=>[@v1]).select_order_map(:instrument_id).should == [@i1, @i2, @i3] Vocalist[@v1].instrument_pks = [@i1, @i3] @db[:vocalists_instruments].filter([:first, :last]=>[@v1]).select_order_map(:instrument_id).should == [@i1, @i3] Vocalist[@v1].instrument_pks = [] @db[:vocalists_instruments].filter([:first, :last]=>[@v1]).select_order_map(:instrument_id).should == [] @db[:vocalists_instruments].filter([:first, :last]=>[@v2]).select_order_map(:instrument_id).should == [@i2] Vocalist[@v2].instrument_pks = [@i1, @i2] @db[:vocalists_instruments].filter([:first, :last]=>[@v2]).select_order_map(:instrument_id).should == [@i1, @i2] Vocalist[@v2].instrument_pks = [] @db[:vocalists_instruments].filter([:first, :last]=>[@v1]).select_order_map(:instrument_id).should == [] @db[:vocalists_instruments].filter([:first, :last]=>[@v3]).select_order_map(:instrument_id).should == [] Vocalist[@v3].instrument_pks = [@i1, @i3] @db[:vocalists_instruments].filter([:first, :last]=>[@v3]).select_order_map(:instrument_id).should == [@i1, @i3] Vocalist[@v3].instrument_pks = [] @db[:vocalists_instruments].filter([:first, :last]=>[@v1]).select_order_map(:instrument_id).should == [] end specify "should set associated right-side cpks correctly with left-side cpks for a one_to_many association" do Vocalist.use_transactions = true Vocalist.one_to_many :hits, :key=>[:first, :last], :order=>:week Vocalist.order(:first, :last).all.map{|a| a.hit_pks.sort}.should == [[@h1, @h2, @h3], [], []] Vocalist[@v2].hit_pks = [@h1, @h3] Vocalist[@v1].hit_pks.should == [@h2] Hit.order(:year, :week).select_map([:first, :last]).should == [@v2, @v1, @v2] Vocalist[@v1].hit_pks = [@h1] Vocalist[@v2].hit_pks.should == [@h3] Hit.order(:year, :week).select_map([:first, :last]).should == [@v1, [nil, nil], @v2] Vocalist[@v1].hit_pks = [@h1, @h2] Vocalist[@v2].hit_pks.should == [@h3] Hit.order(:year, :week).select_map([:first, :last]).should == [@v1, @v1, @v2] end specify "should set associated right-side cpks correctly with left-side cpks for a many_to_many association" do Vocalist.use_transactions = true Vocalist.many_to_many :hits, :join_table=>:vocalists_hits, :left_key=>[:first, :last], :right_key=>[:year, :week], :order=>:week @db[:vocalists_hits].filter([:first, :last]=>[@v1]).select_order_map([:year, :week]).should == [@h1, @h2, @h3] Vocalist[@v1].hit_pks = [@h1, @h3] @db[:vocalists_hits].filter([:first, :last]=>[@v1]).select_order_map([:year, :week]).should == [@h1, @h3] Vocalist[@v1].hit_pks = [] @db[:vocalists_hits].filter([:first, :last]=>[@v1]).select_order_map([:year, :week]).should == [] @db[:vocalists_hits].filter([:first, :last]=>[@v2]).select_order_map([:year, :week]).should == [@h2] Vocalist[@v2].hit_pks = [@h1, @h2] @db[:vocalists_hits].filter([:first, :last]=>[@v2]).select_order_map([:year, :week]).should == [@h1, @h2] Vocalist[@v2].hit_pks = [] @db[:vocalists_hits].filter([:first, :last]=>[@v1]).select_order_map([:year, :week]).should == [] @db[:vocalists_hits].filter([:first, :last]=>[@v3]).select_order_map([:year, :week]).should == [] Vocalist[@v3].hit_pks = [@h1, @h3] @db[:vocalists_hits].filter([:first, :last]=>[@v3]).select_order_map([:year, :week]).should == [@h1, @h3] Vocalist[@v3].hit_pks = [] @db[:vocalists_hits].filter([:first, :last]=>[@v1]).select_order_map([:year, :week]).should == [] end end describe "List plugin without a scope" do before(:all) do @db = DB @db.create_table!(:sites) do primary_key :id String :name Integer :position end @c = Class.new(Sequel::Model(@db[:sites])) @c.plugin :list end before do 
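    # Fixture note: the three rows created below receive positions 1..3 in
    # insertion order (the list plugin assigns the next position on create);
    # the examples that follow rely on that ordering. The plugin API they
    # exercise is: move_to(n), move_up, move_down, move_to_top,
    # move_to_bottom, prev, and next, all driven by the :position column
    # by default.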
@c.dataset.delete @c.create :name => "abc" @c.create :name => "def" @c.create :name => "hig" end after(:all) do @db.drop_table?(:sites) end it "should return rows in order of position" do @c.map(:position).should == [1,2,3] @c.map(:name).should == %w[ abc def hig ] end it "should define prev and next" do i = @c[:name => "abc"] i.prev.should == nil i = @c[:name => "def"] i.prev.should == @c[:name => "abc"] i.next.should == @c[:name => "hig"] i = @c[:name => "hig"] i.next.should == nil end it "should define move_to" do @c[:name => "def"].move_to(1) @c.map(:name).should == %w[ def abc hig ] @c[:name => "abc"].move_to(3) @c.map(:name).should == %w[ def hig abc ] proc { @c[:name => "abc"].move_to(-1) }.should raise_error(Sequel::Error) proc { @c[:name => "abc"].move_to(10) }.should raise_error(Sequel::Error) end it "should define move_to_top and move_to_bottom" do @c[:name => "def"].move_to_top @c.map(:name).should == %w[ def abc hig ] @c[:name => "def"].move_to_bottom @c.map(:name).should == %w[ abc hig def ] end it "should define move_up and move_down" do @c[:name => "def"].move_up @c.map(:name).should == %w[ def abc hig ] @c[:name => "abc"].move_down @c.map(:name).should == %w[ def hig abc ] @c[:name => "abc"].move_up(2) @c.map(:name).should == %w[ abc def hig ] @c[:name => "abc"].move_down(2) @c.map(:name).should == %w[ def hig abc ] proc { @c[:name => "def"].move_up(10) }.should raise_error(Sequel::Error) proc { @c[:name => "def"].move_down(10) }.should raise_error(Sequel::Error) end end describe "List plugin with a scope" do before(:all) do @db = DB @db.create_table!(:pages) do primary_key :id String :name Integer :pos Integer :parent_id end @c = Class.new(Sequel::Model(@db[:pages])) @c.plugin :list, :field => :pos, :scope => :parent_id end before do @c.dataset.delete p1 = @c.create :name => "Hm", :parent_id => 0 p2 = @c.create :name => "Ps", :parent_id => p1.id @c.create :name => "P1", :parent_id => p2.id @c.create :name => "P2", :parent_id => p2.id @c.create :name => "P3", :parent_id => p2.id @c.create :name => "Au", :parent_id => p1.id end after(:all) do @db.drop_table?(:pages) end specify "should return rows in order of position" do @c.map(:name).should == %w[ Hm Ps Au P1 P2 P3 ] end it "should define prev and next" do @c[:name => "Ps"].next.name.should == 'Au' @c[:name => "Au"].prev.name.should == 'Ps' @c[:name => "P1"].next.name.should == 'P2' @c[:name => "P2"].prev.name.should == 'P1' @c[:name => "P1"].next(2).name.should == 'P3' @c[:name => "P2"].next(-1).name.should == 'P1' @c[:name => "P3"].prev(2).name.should == 'P1' @c[:name => "P2"].prev(-1).name.should == 'P3' @c[:name => "Ps"].prev.should == nil @c[:name => "Au"].next.should == nil @c[:name => "P1"].prev.should == nil @c[:name => "P3"].next.should == nil end specify "should define move_to" do @c[:name => "P2"].move_to(1) @c.map(:name).should == %w[ Hm Ps Au P2 P1 P3 ] @c[:name => "P2"].move_to(3) @c.map(:name).should == %w[ Hm Ps Au P1 P3 P2 ] proc { @c[:name => "P2"].move_to(-1) }.should raise_error(Sequel::Error) proc { @c[:name => "P2"].move_to(10) }.should raise_error(Sequel::Error) end specify "should define move_to_top and move_to_bottom" do @c[:name => "Au"].move_to_top @c.map(:name).should == %w[ Hm Au Ps P1 P2 P3 ] @c[:name => "Au"].move_to_bottom @c.map(:name).should == %w[ Hm Ps Au P1 P2 P3 ] end specify "should define move_up and move_down" do @c[:name => "P2"].move_up @c.map(:name).should == %w[ Hm Ps Au P2 P1 P3 ] @c[:name => "P1"].move_down @c.map(:name).should == %w[ Hm Ps Au P2 P3 P1 ] proc { @c[:name => 
"P1"].move_up(10) }.should raise_error(Sequel::Error) proc { @c[:name => "P1"].move_down(10) }.should raise_error(Sequel::Error) end end describe "Sequel::Plugins::Tree" do before(:all) do @db = DB end describe "with natural database order" do before(:all) do @db.create_table!(:nodes) do Integer :id, :primary_key=>true String :name Integer :parent_id Integer :position end @nodes = [{:id => 1, :name => 'one', :parent_id => nil, :position => 1}, {:id => 2, :name => 'two', :parent_id => nil, :position => 2}, {:id => 3, :name => 'three', :parent_id => nil, :position => 3}, {:id => 4, :name => "two.one", :parent_id => 2, :position => 1}, {:id => 5, :name => "two.two", :parent_id => 2, :position => 2}, {:id => 6, :name => "two.two.one", :parent_id => 5, :position => 1}, {:id => 7, :name => "one.two", :parent_id => 1, :position => 2}, {:id => 8, :name => "one.one", :parent_id => 1, :position => 1}, {:id => 9, :name => "five", :parent_id => nil, :position => 5}, {:id => 10, :name => "four", :parent_id => nil, :position => 4}, {:id => 11, :name => "five.one", :parent_id => 9, :position => 1}, {:id => 12, :name => "two.three", :parent_id => 2, :position => 3}] @nodes.each{|node| @db[:nodes].insert(node)} class ::Node < Sequel::Model plugin :tree end end after(:all) do @db.drop_table?(:nodes) Object.send(:remove_const, :Node) end it "should instantiate" do Node.all.size.should == 12 end it "should find top level nodes" do Node.roots_dataset.count.should == 5 end it "should find all descendants of a node" do two = Node.find(:id => 2) two.name.should == "two" two.descendants.map{|m| m[:id]}.should == [4, 5, 12, 6] end it "should find all ancestors of a node" do twotwoone = Node.find(:id => 6) twotwoone.name.should == "two.two.one" twotwoone.ancestors.map{|m| m[:id]}.should == [5, 2] end it "should find all siblings of a node, excepting self" do twoone = Node.find(:id => 4) twoone.name.should == "two.one" twoone.siblings.map{|m| m[:id]}.should == [5, 12] end it "should find all siblings of a node, including self" do twoone = Node.find(:id => 4) twoone.name.should == "two.one" twoone.self_and_siblings.map{|m| m[:id]}.should == [4, 5, 12] end it "should find siblings for root nodes" do three = Node.find(:id => 3) three.name.should == "three" three.self_and_siblings.map{|m| m[:id]}.should == [1, 2, 3, 9, 10] end it "should find correct root for a node" do twotwoone = Node.find(:id => 6) twotwoone.name.should == "two.two.one" twotwoone.root[:id].should == 2 three = Node.find(:id => 3) three.name.should == "three" three.root[:id].should == 3 fiveone = Node.find(:id => 11) fiveone.name.should == "five.one" fiveone.root[:id].should == 9 end it "iterate top-level nodes in natural database order" do Node.roots_dataset.count.should == 5 Node.roots.inject([]){|ids, p| ids << p.position}.should == [1, 2, 3, 5, 4] end it "should have children" do one = Node.find(:id => 1) one.name.should == "one" one.children.size.should == 2 end it "children should be natural database order" do one = Node.find(:id => 1) one.name.should == "one" one.children.map{|m| m[:position]}.should == [2, 1] end describe "Nodes in specified order" do before(:all) do class ::OrderedNode < Sequel::Model(:nodes) plugin :tree, :order => :position end end after(:all) do Object.send(:remove_const, :OrderedNode) end it "iterate top-level nodes in order by position" do OrderedNode.roots_dataset.count.should == 5 OrderedNode.roots.inject([]){|ids, p| ids << p.position}.should == [1, 2, 3, 4, 5] end it "children should be in specified order" do one = 
OrderedNode.find(:id => 1) one.name.should == "one" one.children.map{|m| m[:position]}.should == [1, 2] end end end describe "Lorems in specified order" do before(:all) do @db.create_table!(:lorems) do Integer :id, :primary_key=>true String :name Integer :ipsum_id Integer :neque end @lorems = [{:id => 1, :name => 'Lorem', :ipsum_id => nil, :neque => 4}, {:id => 2, :name => 'Ipsum', :ipsum_id => nil, :neque => 3}, {:id => 4, :name => "Neque", :ipsum_id => 2, :neque => 2}, {:id => 5, :name => "Porro", :ipsum_id => 2, :neque => 1}] @lorems.each{|lorem| @db[:lorems].insert(lorem)} class ::Lorem < Sequel::Model plugin :tree, :key => :ipsum_id, :order => :neque end end after(:all) do @db.drop_table?(:lorems) Object.send(:remove_const, :Lorem) end it "iterate top-level nodes in order by position" do Lorem.roots_dataset.count.should == 2 Lorem.roots.inject([]){|ids, p| ids << p.neque}.should == [3, 4] end it "children should be specified order" do one = Lorem.find(:id => 2) one.children.map{|m| m[:neque]}.should == [1, 2] end end end describe "Sequel::Plugins::PreparedStatements" do before(:all) do @db = DB @db.create_table!(:ps_test) do primary_key :id String :name Integer :i end @c = Class.new(Sequel::Model(@db[:ps_test])) @c.plugin :prepared_statements_with_pk end before do @c.dataset.delete @foo = @c.create(:name=>'foo', :i=>10) @bar = @c.create(:name=>'bar', :i=>20) end after(:all) do @db.drop_table?(:ps_test) end it "should work with looking up using Model.[]" do @c[@foo.id].should == @foo @c[@bar.id].should == @bar @c[0].should == nil @c[nil].should == nil end it "should work with looking up using Dataset#with_pk" do @c.dataset.with_pk(@foo.id).should == @foo @c.dataset.with_pk(@bar.id).should == @bar @c.dataset.with_pk(0).should == nil @c.dataset.with_pk(nil).should == nil @c.dataset.filter(:i=>0).with_pk(@foo.id).should == nil @c.dataset.filter(:i=>10).with_pk(@foo.id).should == @foo @c.dataset.filter(:i=>20).with_pk(@bar.id).should == @bar @c.dataset.filter(:i=>10).with_pk(nil).should == nil @c.dataset.filter(:name=>'foo').with_pk(@foo.id).should == @foo @c.dataset.filter(:name=>'bar').with_pk(@bar.id).should == @bar @c.dataset.filter(:name=>'baz').with_pk(@bar.id).should == nil @c.dataset.filter(:name=>'bar').with_pk(nil).should == nil end it "should work with Model#destroy" do @foo.destroy @bar.destroy @c[@foo.id].should == nil @c[@bar.id].should == nil end it "should work with Model#update" do @foo.update(:name=>'foo2', :i=>30) @c[@foo.id].should == @c.load(:id=>@foo.id, :name=>'foo2', :i=>30) @foo.update(:name=>'foo3') @c[@foo.id].should == @c.load(:id=>@foo.id, :name=>'foo3', :i=>30) @foo.update(:i=>40) @c[@foo.id].should == @c.load(:id=>@foo.id, :name=>'foo3', :i=>40) @foo.update(:i=>nil) @c[@foo.id].should == @c.load(:id=>@foo.id, :name=>'foo3', :i=>nil) end it "should work with Model#create" do o = @c.create(:name=>'foo2', :i=>30) @c[o.id].should == @c.load(:id=>o.id, :name=>'foo2', :i=>30) o = @c.create(:name=>'foo2') @c[o.id].should == @c.load(:id=>o.id, :name=>'foo2', :i=>nil) o = @c.create(:i=>30) @c[o.id].should == @c.load(:id=>o.id, :name=>nil, :i=>30) o = @c.create(:name=>nil, :i=>40) @c[o.id].should == @c.load(:id=>o.id, :name=>nil, :i=>40) end end describe "Caching plugins" do before(:all) do @db = DB @db.drop_table?(:albums, :artists) @db.create_table(:artists) do primary_key :id end @db.create_table(:albums) do primary_key :id foreign_key :artist_id, :artists end @db[:artists].insert @db[:albums].insert(:artist_id=>1) end before do @Album = 
Class.new(Sequel::Model(@db[:albums])) @Album.plugin :many_to_one_pk_lookup end after(:all) do @db.drop_table?(:albums, :artists) end shared_examples_for "a caching plugin" do it "should work with looking up using Model.[]" do @Artist[1].should equal(@Artist[1]) @Artist[:id=>1].should == @Artist[1] @Artist[0].should == nil @Artist[nil].should == nil end it "should work with lookup up many_to_one associated objects" do a = @Artist[1] @Album.first.artist.should equal(a) end end describe "caching plugin" do before do @cache_class = Class.new(Hash) do def set(k, v, ttl) self[k] = v end alias get [] end @cache = @cache_class.new @Artist = Class.new(Sequel::Model(@db[:artists])) @Artist.plugin :caching, @cache @Album.many_to_one :artist, :class=>@Artist end it_should_behave_like "a caching plugin" end describe "static_cache plugin" do before do @Artist = Class.new(Sequel::Model(@db[:artists])) @Artist.plugin :static_cache @Album.many_to_one :artist, :class=>@Artist end it_should_behave_like "a caching plugin" end end describe "Sequel::Plugins::ConstraintValidations" do before(:all) do @db = DB @db.extension(:constraint_validations) @db.create_constraint_validations_table @ds = @db[:cv_test] @regexp = regexp = @db.dataset.supports_regexp? @validation_opts = {} opts_proc = proc{@validation_opts} @validate_block = proc do |opts| opts = opts_proc.call presence :pre, opts.merge(:name=>:p) exact_length 5, :exactlen, opts.merge(:name=>:el) min_length 5, :minlen, opts.merge(:name=>:minl) max_length 5, :maxlen, opts.merge(:name=>:maxl) length_range 3..5, :lenrange, opts.merge(:name=>:lr) if regexp format(/^foo\d+/, :form, opts.merge(:name=>:f)) end like 'foo%', :lik, opts.merge(:name=>:l) ilike 'foo%', :ilik, opts.merge(:name=>:il) includes %w'abc def', :inc, opts.merge(:name=>:i) unique :uniq, opts.merge(:name=>:u) max_length 6, :minlen, opts.merge(:name=>:maxl2) end @valid_row = {:pre=>'a', :exactlen=>'12345', :minlen=>'12345', :maxlen=>'12345', :lenrange=>'1234', :lik=>'fooabc', :ilik=>'FooABC', :inc=>'abc', :uniq=>'u'} @violations = [ [:pre, [nil, '', ' ']], [:exactlen, [nil, '', '1234', '123456']], [:minlen, [nil, '', '1234']], [:maxlen, [nil, '123456']], [:lenrange, [nil, '', '12', '123456']], [:lik, [nil, '', 'fo', 'fotabc', 'FOOABC']], [:ilik, [nil, '', 'fo', 'fotabc']], [:inc, [nil, '', 'ab', 'abcd']], ] if @regexp @valid_row[:form] = 'foo1' @violations << [:form, [nil, '', 'foo', 'fooa']] end end after(:all) do @db.drop_constraint_validations_table end shared_examples_for "constraint validations" do cspecify "should set up constraints that work even outside the model", :mysql do proc{@ds.insert(@valid_row)}.should_not raise_error # Test for unique constraint proc{@ds.insert(@valid_row)}.should raise_error(Sequel::DatabaseError) @ds.delete @violations.each do |col, vals| try = @valid_row.dup vals += ['1234567'] if col == :minlen vals.each do |val| next if val.nil? 
&& @validation_opts[:allow_nil] try[col] = val proc{@ds.insert(try)}.should raise_error(Sequel::DatabaseError) end end # Test for dropping of constraint @db.alter_table(:cv_test){validate{drop :maxl2}} proc{@ds.insert(@valid_row.merge(:minlen=>'1234567'))}.should_not raise_error end it "should set up automatic validations inside the model" do c = Class.new(Sequel::Model(@ds)) c.plugin :constraint_validations c.dataset.delete proc{c.create(@valid_row)}.should_not raise_error # Test for unique validation c.new(@valid_row).should_not be_valid c.dataset.delete @violations.each do |col, vals| try = @valid_row.dup vals.each do |val| next if val.nil? && @validation_opts[:allow_nil] try[col] = val c.new(try).should_not be_valid end end c.db.constraint_validations = nil end end describe "via create_table" do before(:all) do @table_block = proc do regexp = @regexp validate_block = @validate_block @db.create_table!(:cv_test) do primary_key :id String :pre String :exactlen String :minlen String :maxlen String :lenrange if regexp String :form end String :lik String :ilik String :inc String :uniq, :null=>false validate(&validate_block) end end end after(:all) do @db.drop_table?(:cv_test) @db.drop_constraint_validations_for(:table=>:cv_test) end describe "with :allow_nil=>true" do before(:all) do @validation_opts = {:allow_nil=>true} @table_block.call end it_should_behave_like "constraint validations" end describe "with :allow_nil=>false" do before(:all) do @table_block.call end it_should_behave_like "constraint validations" end end describe "via alter_table" do before(:all) do @table_block = proc do regexp = @regexp validate_block = @validate_block @db.create_table!(:cv_test) do primary_key :id String :lik String :ilik String :inc String :uniq, :null=>false end @db.alter_table(:cv_test) do add_column :pre, String add_column :exactlen, String add_column :minlen, String add_column :maxlen, String add_column :lenrange, String if regexp add_column :form, String end validate(&validate_block) end end end after(:all) do @db.drop_table?(:cv_test) @db.drop_constraint_validations_for(:table=>:cv_test) end describe "with :allow_nil=>true" do before(:all) do @validation_opts = {:allow_nil=>true} @table_block.call end it_should_behave_like "constraint validations" end describe "with :allow_nil=>false" do before(:all) do @table_block.call end it_should_behave_like "constraint validations" end end end describe "date_arithmetic extension" do asd = begin require 'active_support/duration' require 'active_support/inflector' require 'active_support/core_ext/string/inflections' true rescue LoadError false end before(:all) do @db = DB @db.extension(:date_arithmetic) if @db.database_type == :sqlite @db.use_timestamp_timezones = false end @date = Date.civil(2010, 7, 12) @dt = Time.local(2010, 7, 12) if asd @d0 = ActiveSupport::Duration.new(0, [[:days, 0]]) @d1 = ActiveSupport::Duration.new(1, [[:days, 1]]) @d2 = ActiveSupport::Duration.new(1, [[:years, 1], [:months, 1], [:days, 1], [:minutes, 61], [:seconds, 1]]) end @h0 = {:days=>0} @h1 = {:days=>1, :years=>nil, :hours=>0} @h2 = {:years=>1, :months=>1, :days=>1, :hours=>1, :minutes=>1, :seconds=>1} @a1 = Time.local(2010, 7, 13) @a2 = Time.local(2011, 8, 13, 1, 1, 1) @s1 = Time.local(2010, 7, 11) @s2 = Time.local(2009, 6, 10, 22, 58, 59) @check = lambda do |meth, in_date, in_interval, should| output = @db.get(Sequel.send(meth, in_date, in_interval)) output = Time.parse(output.to_s) unless output.is_a?(Time) || output.is_a?(DateTime) output.year.should == should.year 
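      # The remaining components are compared the same way below; values are
      # checked field-by-field because adapters differ in the class returned
      # (Time, DateTime, or a string parsed above) and in sub-second precision,
      # so comparing whole objects directly would be unreliable.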
      output.month.should == should.month
      output.day.should == should.day
      output.hour.should == should.hour
      output.min.should == should.min
      output.sec.should == should.sec
    end
  end

  after(:all) do
    if @db.database_type == :sqlite
      @db.use_timestamp_timezones = true
    end
  end

  if asd
    specify "should be able to use Sequel.date_add to add ActiveSupport::Duration objects to dates and datetimes" do
      @check.call(:date_add, @date, @d0, @dt)
      @check.call(:date_add, @date, @d1, @a1)
      @check.call(:date_add, @date, @d2, @a2)
      @check.call(:date_add, @dt, @d0, @dt)
      @check.call(:date_add, @dt, @d1, @a1)
      @check.call(:date_add, @dt, @d2, @a2)
    end

    specify "should be able to use Sequel.date_sub to subtract ActiveSupport::Duration objects from dates and datetimes" do
      @check.call(:date_sub, @date, @d0, @dt)
      @check.call(:date_sub, @date, @d1, @s1)
      @check.call(:date_sub, @date, @d2, @s2)
      @check.call(:date_sub, @dt, @d0, @dt)
      @check.call(:date_sub, @dt, @d1, @s1)
      @check.call(:date_sub, @dt, @d2, @s2)
    end
  end

  specify "should be able to use Sequel.date_add to add interval hashes to dates and datetimes" do
    @check.call(:date_add, @date, @h0, @dt)
    @check.call(:date_add, @date, @h1, @a1)
    @check.call(:date_add, @date, @h2, @a2)
    @check.call(:date_add, @dt, @h0, @dt)
    @check.call(:date_add, @dt, @h1, @a1)
    @check.call(:date_add, @dt, @h2, @a2)
  end

  specify "should be able to use Sequel.date_sub to subtract interval hashes from dates and datetimes" do
    @check.call(:date_sub, @date, @h0, @dt)
    @check.call(:date_sub, @date, @h1, @s1)
    @check.call(:date_sub, @date, @h2, @s2)
    @check.call(:date_sub, @dt, @h0, @dt)
    @check.call(:date_sub, @dt, @h1, @s1)
    @check.call(:date_sub, @dt, @h2, @s2)
  end
end

ruby-sequel-4.1.1/spec/integration/prepared_statement_test.rb

require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper.rb')

describe "Prepared Statements and Bound Arguments" do
  before do
    @db = DB
    @db.create_table!(:items) do
      primary_key :id
      integer :numb
    end
    @c = Class.new(Sequel::Model(:items))
    @ds = @db[:items]
    @ds.insert(:numb=>10)
    @pr = @ds.requires_placeholder_type_specifiers? ?
proc{|i| :"#{i}__integer"} : proc{|i| i} end after do @db.drop_table?(:items) end specify "should support bound variables when selecting" do @ds.filter(:numb=>:$n).call(:each, :n=>10){|h| h.should == {:id=>1, :numb=>10}} @ds.filter(:numb=>:$n).call(:select, :n=>10).should == [{:id=>1, :numb=>10}] @ds.filter(:numb=>:$n).call(:all, :n=>10).should == [{:id=>1, :numb=>10}] @ds.filter(:numb=>:$n).call(:first, :n=>10).should == {:id=>1, :numb=>10} @ds.filter(:numb=>:$n).call([:map, :numb], :n=>10).should == [10] @ds.filter(:numb=>:$n).call([:to_hash, :id, :numb], :n=>10).should == {1=>10} @ds.filter(:numb=>:$n).call([:to_hash_groups, :id, :numb], :n=>10).should == {1=>[10]} end specify "should support blocks for each, select, all, and map when using bound variables" do a = [] @ds.filter(:numb=>:$n).call(:each, :n=>10){|r| r[:numb] *= 2; a << r}; a.should == [{:id=>1, :numb=>20}] @ds.filter(:numb=>:$n).call(:select, :n=>10){|r| r[:numb] *= 2}.should == [{:id=>1, :numb=>20}] @ds.filter(:numb=>:$n).call(:all, :n=>10){|r| r[:numb] *= 2}.should == [{:id=>1, :numb=>20}] @ds.filter(:numb=>:$n).call([:map], :n=>10){|r| r[:numb] * 2}.should == [20] end specify "should support binding variables before the call with #bind" do @ds.filter(:numb=>:$n).bind(:n=>10).call(:select).should == [{:id=>1, :numb=>10}] @ds.filter(:numb=>:$n).bind(:n=>10).call(:all).should == [{:id=>1, :numb=>10}] @ds.filter(:numb=>:$n).bind(:n=>10).call(:first).should == {:id=>1, :numb=>10} @ds.bind(:n=>10).filter(:numb=>:$n).call(:select).should == [{:id=>1, :numb=>10}] @ds.bind(:n=>10).filter(:numb=>:$n).call(:all).should == [{:id=>1, :numb=>10}] @ds.bind(:n=>10).filter(:numb=>:$n).call(:first).should == {:id=>1, :numb=>10} end specify "should allow overriding variables specified with #bind" do @ds.filter(:numb=>:$n).bind(:n=>1).call(:select, :n=>10).should == [{:id=>1, :numb=>10}] @ds.filter(:numb=>:$n).bind(:n=>1).call(:all, :n=>10).should == [{:id=>1, :numb=>10}] @ds.filter(:numb=>:$n).bind(:n=>1).call(:first, :n=>10).should == {:id=>1, :numb=>10} @ds.filter(:numb=>:$n).bind(:n=>1).bind(:n=>10).call(:select).should == [{:id=>1, :numb=>10}] @ds.filter(:numb=>:$n).bind(:n=>1).bind(:n=>10).call(:all).should == [{:id=>1, :numb=>10}] @ds.filter(:numb=>:$n).bind(:n=>1).bind(:n=>10).call(:first).should == {:id=>1, :numb=>10} end specify "should support placeholder literal strings with call" do @ds.filter("numb = ?", :$n).call(:select, :n=>10).should == [{:id=>1, :numb=>10}] end specify "should support named placeholder literal strings and handle multiple named placeholders correctly with call" do @ds.filter("numb = :n", :n=>:$n).call(:select, :n=>10).should == [{:id=>1, :numb=>10}] @ds.insert(:numb=>20) @ds.insert(:numb=>30) @ds.filter("numb > :n1 AND numb < :n2 AND numb = :n3", :n3=>:$n3, :n2=>:$n2, :n1=>:$n1).call(:select, :n3=>20, :n2=>30, :n1=>10).should == [{:id=>2, :numb=>20}] end specify "should support datasets with static sql and placeholders with call" do @db["SELECT * FROM items WHERE numb = ?", :$n].call(:select, :n=>10).should == [{:id=>1, :numb=>10}] end specify "should support subselects with call" do @ds.filter(:id=>:$i).filter(:numb=>@ds.select(:numb).filter(:numb=>:$n)).filter(:id=>:$j).call(:select, :n=>10, :i=>1, :j=>1).should == [{:id=>1, :numb=>10}] end specify "should support subselects with exists with call" do @ds.filter(:id=>:$i).filter(@ds.select(:numb).filter(:numb=>:$n).exists).filter(:id=>:$j).call(:select, :n=>10, :i=>1, :j=>1).should == [{:id=>1, :numb=>10}] end specify "should support subselects with 
literal strings with call" do @ds.filter(:id=>:$i, :numb=>@ds.select(:numb).filter("numb = ?", :$n)).call(:select, :n=>10, :i=>1).should == [{:id=>1, :numb=>10}] end specify "should support subselects with static sql and placeholders with call" do @ds.filter(:id=>:$i, :numb=>@db["SELECT numb FROM items WHERE numb = ?", :$n]).call(:select, :n=>10, :i=>1).should == [{:id=>1, :numb=>10}] end specify "should support subselects of subselects with call" do @ds.filter(:id=>:$i).filter(:numb=>@ds.select(:numb).filter(:numb=>@ds.select(:numb).filter(:numb=>:$n))).filter(:id=>:$j).call(:select, :n=>10, :i=>1, :j=>1).should == [{:id=>1, :numb=>10}] end cspecify "should support using a bound variable for a limit and offset", [:jdbc, :db2] do @ds.insert(:numb=>20) ds = @ds.limit(:$n, :$n2).order(:id) ds.call(:select, :n=>1, :n2=>0).should == [{:id=>1, :numb=>10}] ds.call(:select, :n=>1, :n2=>1).should == [{:id=>2, :numb=>20}] ds.call(:select, :n=>1, :n2=>2).should == [] ds.call(:select, :n=>2, :n2=>0).should == [{:id=>1, :numb=>10}, {:id=>2, :numb=>20}] ds.call(:select, :n=>2, :n2=>1).should == [{:id=>2, :numb=>20}] end specify "should support bound variables with insert" do @ds.call(:insert, {:n=>20}, :numb=>:$n) @ds.count.should == 2 @ds.order(:id).map(:numb).should == [10, 20] end specify "should support bound variables with NULL values" do @ds.delete @ds.call(:insert, {:n=>nil}, :numb=>@pr[:$n]) @ds.count.should == 1 @ds.map(:numb).should == [nil] end specify "should have insert return primary key value when using bound arguments" do @ds.call(:insert, {:n=>20}, :numb=>:$n).should == 2 @ds.filter(:id=>2).first[:numb].should == 20 end specify "should support bound variables with delete" do @ds.filter(:numb=>:$n).call(:delete, :n=>10).should == 1 @ds.count.should == 0 end specify "should support bound variables with update" do @ds.filter(:numb=>:$n).call(:update, {:n=>10, :nn=>20}, :numb=>Sequel.+(:numb, :$nn)).should == 1 @ds.all.should == [{:id=>1, :numb=>30}] end specify "should support prepared statements when selecting" do @ds.filter(:numb=>:$n).prepare(:each, :select_n) @db.call(:select_n, :n=>10){|h| h.should == {:id=>1, :numb=>10}} @ds.filter(:numb=>:$n).prepare(:select, :select_n) @db.call(:select_n, :n=>10).should == [{:id=>1, :numb=>10}] @ds.filter(:numb=>:$n).prepare(:all, :select_n) @db.call(:select_n, :n=>10).should == [{:id=>1, :numb=>10}] @ds.filter(:numb=>:$n).prepare(:first, :select_n) @db.call(:select_n, :n=>10).should == {:id=>1, :numb=>10} @ds.filter(:numb=>:$n).prepare([:map, :numb], :select_n) @db.call(:select_n, :n=>10).should == [10] @ds.filter(:numb=>:$n).prepare([:to_hash, :id, :numb], :select_n) @db.call(:select_n, :n=>10).should == {1=>10} end specify "should support blocks for each, select, all, and map when using prepared statements" do a = [] @ds.filter(:numb=>:$n).prepare(:each, :select_n).call(:n=>10){|r| r[:numb] *= 2; a << r}; a.should == [{:id=>1, :numb=>20}] a = [] @db.call(:select_n, :n=>10){|r| r[:numb] *= 2; a << r}; a.should == [{:id=>1, :numb=>20}] @ds.filter(:numb=>:$n).prepare(:select, :select_n).call(:n=>10){|r| r[:numb] *= 2}.should == [{:id=>1, :numb=>20}] @db.call(:select_n, :n=>10){|r| r[:numb] *= 2}.should == [{:id=>1, :numb=>20}] @ds.filter(:numb=>:$n).prepare(:all, :select_n).call(:n=>10){|r| r[:numb] *= 2}.should == [{:id=>1, :numb=>20}] @db.call(:select_n, :n=>10){|r| r[:numb] *= 2}.should == [{:id=>1, :numb=>20}] @ds.filter(:numb=>:$n).prepare([:map], :select_n).call(:n=>10){|r| r[:numb] *= 2}.should == [20] @db.call(:select_n, :n=>10){|r| 
r[:numb] *= 2}.should == [20] end specify "should support prepared statements being called multiple times with different arguments" do @ds.filter(:numb=>:$n).prepare(:select, :select_n) @db.call(:select_n, :n=>10).should == [{:id=>1, :numb=>10}] @db.call(:select_n, :n=>0).should == [] @db.call(:select_n, :n=>10).should == [{:id=>1, :numb=>10}] end specify "should support placeholder literal strings with prepare" do @ds.filter("numb = ?", :$n).prepare(:select, :seq_select).call(:n=>10).should == [{:id=>1, :numb=>10}] end specify "should support named placeholder literal strings and handle multiple named placeholders correctly with prepare" do @ds.filter("numb = :n", :n=>:$n).prepare(:select, :seq_select).call(:n=>10).should == [{:id=>1, :numb=>10}] @ds.insert(:numb=>20) @ds.insert(:numb=>30) @ds.filter("numb > :n1 AND numb < :n2 AND numb = :n3", :n3=>:$n3, :n2=>:$n2, :n1=>:$n1).call(:select, :n3=>20, :n2=>30, :n1=>10).should == [{:id=>2, :numb=>20}] end specify "should support datasets with static sql and placeholders with prepare" do @db["SELECT * FROM items WHERE numb = ?", :$n].prepare(:select, :seq_select).call(:n=>10).should == [{:id=>1, :numb=>10}] end specify "should support subselects with prepare" do @ds.filter(:id=>:$i).filter(:numb=>@ds.select(:numb).filter(:numb=>:$n)).filter(:id=>:$j).prepare(:select, :seq_select).call(:n=>10, :i=>1, :j=>1).should == [{:id=>1, :numb=>10}] end specify "should support subselects with exists with prepare" do @ds.filter(:id=>:$i).filter(@ds.select(:numb).filter(:numb=>:$n).exists).filter(:id=>:$j).prepare(:select, :seq_select).call(:n=>10, :i=>1, :j=>1).should == [{:id=>1, :numb=>10}] end specify "should support subselects with literal strings with prepare" do @ds.filter(:id=>:$i, :numb=>@ds.select(:numb).filter("numb = ?", :$n)).prepare(:select, :seq_select).call(:n=>10, :i=>1).should == [{:id=>1, :numb=>10}] end specify "should support subselects with static sql and placeholders with prepare" do @ds.filter(:id=>:$i, :numb=>@db["SELECT numb FROM items WHERE numb = ?", :$n]).prepare(:select, :seq_select).call(:n=>10, :i=>1).should == [{:id=>1, :numb=>10}] end specify "should support subselects of subselects with prepare" do @ds.filter(:id=>:$i).filter(:numb=>@ds.select(:numb).filter(:numb=>@ds.select(:numb).filter(:numb=>:$n))).filter(:id=>:$j).prepare(:select, :seq_select).call(:n=>10, :i=>1, :j=>1).should == [{:id=>1, :numb=>10}] end cspecify "should support using a prepared_statement for a limit and offset", :db2 do @ds.insert(:numb=>20) ps = @ds.limit(:$n, :$n2).order(:id).prepare(:select, :seq_select) ps.call(:n=>1, :n2=>0).should == [{:id=>1, :numb=>10}] ps.call(:n=>1, :n2=>1).should == [{:id=>2, :numb=>20}] ps.call(:n=>1, :n2=>2).should == [] ps.call(:n=>2, :n2=>0).should == [{:id=>1, :numb=>10}, {:id=>2, :numb=>20}] ps.call(:n=>2, :n2=>1).should == [{:id=>2, :numb=>20}] end specify "should support prepared statements with insert" do @ds.prepare(:insert, :insert_n, :numb=>:$n) @db.call(:insert_n, :n=>20) @ds.count.should == 2 @ds.order(:id).map(:numb).should == [10, 20] end specify "should support prepared statements with NULL values" do @ds.delete @ds.prepare(:insert, :insert_n, :numb=>@pr[:$n]) @db.call(:insert_n, :n=>nil) @ds.count.should == 1 @ds.map(:numb).should == [nil] end specify "should have insert return primary key value when using prepared statements" do @ds.prepare(:insert, :insert_n, :numb=>:$n) @db.call(:insert_n, :n=>20).should == 2 @ds.filter(:id=>2).first[:numb].should == 20 end specify "should support prepared statements 
with delete" do @ds.filter(:numb=>:$n).prepare(:delete, :delete_n) @db.call(:delete_n, :n=>10).should == 1 @ds.count.should == 0 end specify "should support prepared statements with update" do @ds.filter(:numb=>:$n).prepare(:update, :update_n, :numb=>Sequel.+(:numb, :$nn)) @db.call(:update_n, :n=>10, :nn=>20).should == 1 @ds.all.should == [{:id=>1, :numb=>30}] end specify "model datasets should return model instances when using select, all, and first with bound variables" do @c.filter(:numb=>:$n).call(:select, :n=>10).should == [@c.load(:id=>1, :numb=>10)] @c.filter(:numb=>:$n).call(:all, :n=>10).should == [@c.load(:id=>1, :numb=>10)] @c.filter(:numb=>:$n).call(:first, :n=>10).should == @c.load(:id=>1, :numb=>10) end specify "model datasets should return model instances when using select, all, and first with prepared statements" do @c.filter(:numb=>:$n).prepare(:select, :select_n1) @db.call(:select_n1, :n=>10).should == [@c.load(:id=>1, :numb=>10)] @c.filter(:numb=>:$n).prepare(:all, :select_n1) @db.call(:select_n1, :n=>10).should == [@c.load(:id=>1, :numb=>10)] @c.filter(:numb=>:$n).prepare(:first, :select_n1) @db.call(:select_n1, :n=>10).should == @c.load(:id=>1, :numb=>10) end end describe "Bound Argument Types" do before(:all) do @db = DB @db.create_table!(:items) do primary_key :id Date :d DateTime :dt File :file String :s Time :t Float :f TrueClass :b end @ds = @db[:items] @vs = {:d=>Date.civil(2010, 10, 11), :dt=>DateTime.civil(2010, 10, 12, 13, 14, 15), :f=>1.0, :s=>'str', :t=>Time.at(20101010), :file=>Sequel::SQL::Blob.new('blob'), :b=>true} end before do @ds.delete @ds.insert(@vs) end after do Sequel.datetime_class = Time end after(:all) do @db.drop_table?(:items) end cspecify "should handle date type", [:do, :sqlite], :mssql, [:jdbc, :sqlite], :oracle do @ds.filter(:d=>:$x).prepare(:first, :ps_date).call(:x=>@vs[:d])[:d].should == @vs[:d] end cspecify "should handle datetime type", [:do], [:mysql2], [:jdbc, :sqlite], [:tinytds], [:oracle] do Sequel.datetime_class = DateTime @ds.filter(:dt=>:$x).prepare(:first, :ps_datetime).call(:x=>@vs[:dt])[:dt].should == @vs[:dt] end cspecify "should handle datetime type with fractional seconds", [:do], [:mysql2], [:jdbc, :sqlite], [:tinytds], [:oracle] do Sequel.datetime_class = DateTime fract_time = DateTime.parse('2010-10-12 13:14:15.500000') @ds.prepare(:update, :ps_datetime_up, :dt=>:$x).call(:x=>fract_time) @ds.literal(@ds.filter(:dt=>:$x).prepare(:first, :ps_datetime).call(:x=>fract_time)[:dt]).should == @ds.literal(fract_time) end cspecify "should handle time type", [:do], [:jdbc, :sqlite], [:swift], [:oracle] do @ds.filter(:t=>:$x).prepare(:first, :ps_time).call(:x=>@vs[:t])[:t].should == @vs[:t] end cspecify "should handle time type with fractional seconds", [:do], [:jdbc, :sqlite], [:oracle], [:swift, :postgres] do fract_time = @vs[:t] + 0.5 @ds.prepare(:update, :ps_time_up, :t=>:$x).call(:x=>fract_time) @ds.literal(@ds.filter(:t=>:$x).prepare(:first, :ps_time).call(:x=>fract_time)[:t]).should == @ds.literal(fract_time) end cspecify "should handle blob type", [:odbc], [:oracle] do @ds.delete @ds.prepare(:insert, :ps_blob, {:file=>:$x}).call(:x=>@vs[:file]) @ds.get(:file).should == @vs[:file] end cspecify "should handle blob type with nil values", [:oracle], [:tinytds], [:jdbc, proc{|db| defined?(Sequel::JDBC::SQLServer::DatabaseMethods) && db.is_a?(Sequel::JDBC::SQLServer::DatabaseMethods)}] do @ds.delete @ds.prepare(:insert, :ps_blob, {:file=>:$x}).call(:x=>nil) @ds.get(:file).should == nil end cspecify "should handle blob type 
with embedded zeros", [:odbc], [:oracle] do zero_blob = Sequel::SQL::Blob.new("a\0"*100) @ds.delete @ds.prepare(:insert, :ps_blob, {:file=>:$x}).call(:x=>zero_blob) @ds.get(:file).should == zero_blob end cspecify "should handle float type", [:swift, :sqlite] do @ds.filter(:f=>:$x).prepare(:first, :ps_float).call(:x=>@vs[:f])[:f].should == @vs[:f] end specify "should handle string type" do @ds.filter(:s=>:$x).prepare(:first, :ps_string).call(:x=>@vs[:s])[:s].should == @vs[:s] end cspecify "should handle boolean type", [:do, :sqlite], [:odbc, :mssql], [:jdbc, :sqlite], [:jdbc, :db2], :oracle do @ds.filter(:b=>:$x).prepare(:first, :ps_string).call(:x=>@vs[:b])[:b].should == @vs[:b] end end describe "Dataset#unbind" do before do @ds = ds = DB[:items] @ct = proc do |t, v| DB.create_table!(:items) do column :c, t end ds.insert(:c=>v) end @u = proc{|ds1| ds2, bv = ds1.unbind; ds2.call(:first, bv)} end after do DB.drop_table?(:items) end specify "should unbind values assigned to equality and inequality statements" do @ct[Integer, 10] @u[@ds.filter(:c=>10)].should == {:c=>10} @u[@ds.exclude(:c=>10)].should == nil @u[@ds.filter{c < 10}].should == nil @u[@ds.filter{c <= 10}].should == {:c=>10} @u[@ds.filter{c > 10}].should == nil @u[@ds.filter{c >= 10}].should == {:c=>10} end cspecify "should handle numerics and strings", [:odbc], [:swift, :sqlite] do @ct[Integer, 10] @u[@ds.filter(:c=>10)].should == {:c=>10} @ct[Float, 0.0] @u[@ds.filter{c < 1}].should == {:c=>0.0} @ct[String, 'foo'] @u[@ds.filter(:c=>'foo')].should == {:c=>'foo'} DB.create_table!(:items) do BigDecimal :c, :size=>[15,2] end @ds.insert(:c=>BigDecimal.new('1.1')) @u[@ds.filter{c > 0}].should == {:c=>BigDecimal.new('1.1')} end cspecify "should handle dates and times", [:do], [:jdbc, :mssql], [:jdbc, :sqlite], [:swift], [:tinytds], :oracle do @ct[Date, Date.today] @u[@ds.filter(:c=>Date.today)].should == {:c=>Date.today} t = Time.now @ct[Time, t] @u[@ds.filter{c < t + 1}][:c].to_i.should == t.to_i end specify "should handle QualifiedIdentifiers" do @ct[Integer, 10] @u[@ds.filter{items__c > 1}].should == {:c=>10} end specify "should handle deep nesting" do DB.create_table!(:items) do Integer :a Integer :b Integer :c Integer :d end @ds.insert(:a=>2, :b=>0, :c=>3, :d=>5) @u[@ds.filter{a > 1}.and{b < 2}.or(:c=>3).and(Sequel.case({~Sequel.expr(:d=>4)=>1}, 0) => 1)].should == {:a=>2, :b=>0, :c=>3, :d=>5} @u[@ds.filter{a > 1}.and{b < 2}.or(:c=>3).and(Sequel.case({~Sequel.expr(:d=>5)=>1}, 0) => 1)].should == nil end specify "should handle case where the same variable has the same value in multiple places " do @ct[Integer, 1] @u[@ds.filter{c > 1}.or{c < 1}.invert].should == {:c=>1} @u[@ds.filter{c > 1}.or{c < 1}].should == nil end end 
ruby-sequel-4.1.1/spec/integration/schema_test.rb

require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper.rb')

describe "Database schema parser" do
  before do
    @iom = DB.identifier_output_method
    @iim = DB.identifier_input_method
    @qi = DB.quote_identifiers?
  end
  after do
    DB.identifier_output_method = @iom
    DB.identifier_input_method = @iim
    DB.quote_identifiers = @qi
    DB.drop_table?(:items)
  end

  specify "should handle a database with identifier methods" do
    DB.identifier_output_method = :reverse
    DB.identifier_input_method = :reverse
    DB.quote_identifiers = true
    DB.create_table!(:items){Integer :number}
    begin
      DB.schema(:items, :reload=>true).should be_a_kind_of(Array)
      DB.schema(:items, :reload=>true).first.first.should == :number
    ensure
      DB.drop_table(:items)
    end
  end

  specify "should handle a dataset with identifier methods different from the database's" do
    DB.identifier_output_method = :reverse
    DB.identifier_input_method = :reverse
    DB.quote_identifiers = true
    DB.create_table!(:items){Integer :number}
    DB.identifier_output_method = @iom
    DB.identifier_input_method = @iim
    ds = DB[:items]
    ds.identifier_output_method = :reverse
    ds.identifier_input_method = :reverse
    begin
      DB.schema(ds, :reload=>true).should be_a_kind_of(Array)
      DB.schema(ds, :reload=>true).first.first.should == :number
    ensure
      DB.identifier_output_method = :reverse
      DB.identifier_input_method = :reverse
      DB.drop_table(:items)
    end
  end

  specify "should not issue an SQL query if the schema has been loaded unless :reload is true" do
    DB.create_table!(:items){Integer :number}
    DB.schema(:items, :reload=>true)
    DB.schema(:items)
    DB.schema(:items, :reload=>true)
  end

  specify "Model schema should include columns in the table, even if they aren't selected" do
    DB.create_table!(:items){String :a; Integer :number}
    m = Sequel::Model(DB[:items].select(:a))
    m.columns.should == [:a]
    m.db_schema[:number][:type].should == :integer
  end

  specify "should raise an error when the table doesn't exist" do
    proc{DB.schema(:no_table)}.should raise_error(Sequel::Error)
  end

  specify "should return the schema correctly" do
    DB.create_table!(:items){Integer :number}
    schema = DB.schema(:items, :reload=>true)
    schema.should be_a_kind_of(Array)
    schema.length.should == 1
    col = schema.first
    col.should be_a_kind_of(Array)
    col.length.should == 2
    col.first.should == :number
    col_info = col.last
    col_info.should be_a_kind_of(Hash)
    col_info[:type].should == :integer
    DB.schema(:items)
  end

  specify "should parse primary keys from the schema properly" do
    DB.create_table!(:items){Integer :number}
    DB.schema(:items).collect{|k,v| k if v[:primary_key]}.compact.should == []
    DB.create_table!(:items){primary_key :number}
    DB.schema(:items).collect{|k,v| k
if v[:primary_key]}.compact.should == [:number] DB.create_table!(:items){Integer :number1; Integer :number2; primary_key [:number1, :number2]} DB.schema(:items).collect{|k,v| k if v[:primary_key]}.compact.should == [:number1, :number2] end specify "should parse NULL/NOT NULL from the schema properly" do DB.create_table!(:items){Integer :number, :null=>true} DB.schema(:items).first.last[:allow_null].should == true DB.create_table!(:items){Integer :number, :null=>false} DB.schema(:items).first.last[:allow_null].should == false end specify "should parse defaults from the schema properly" do DB.create_table!(:items){Integer :number} DB.schema(:items).first.last[:ruby_default].should == nil DB.create_table!(:items){Integer :number, :default=>0} DB.schema(:items).first.last[:ruby_default].should == 0 DB.create_table!(:items){String :a, :default=>"blah"} DB.schema(:items).first.last[:ruby_default].should == 'blah' end specify "should parse current timestamp defaults from the schema properly" do DB.create_table!(:items){Time :a, :default=>Sequel::CURRENT_TIMESTAMP} DB.schema(:items).first.last[:ruby_default].should == Sequel::CURRENT_TIMESTAMP end cspecify "should parse current date defaults from the schema properly", :mysql, :oracle do DB.create_table!(:items){Date :a, :default=>Sequel::CURRENT_DATE} DB.schema(:items).first.last[:ruby_default].should == Sequel::CURRENT_DATE end cspecify "should parse types from the schema properly", [:jdbc, :db2], :oracle do DB.create_table!(:items){Integer :number} DB.schema(:items).first.last[:type].should == :integer DB.create_table!(:items){Fixnum :number} DB.schema(:items).first.last[:type].should == :integer DB.create_table!(:items){Bignum :number} DB.schema(:items).first.last[:type].should == :integer DB.create_table!(:items){Float :number} DB.schema(:items).first.last[:type].should == :float DB.create_table!(:items){BigDecimal :number, :size=>[11, 2]} DB.schema(:items).first.last[:type].should == :decimal DB.create_table!(:items){Numeric :number, :size=>[12, 0]} DB.schema(:items).first.last[:type].should == :integer DB.create_table!(:items){String :number} DB.schema(:items).first.last[:type].should == :string DB.create_table!(:items){Date :number} DB.schema(:items).first.last[:type].should == :date DB.create_table!(:items){Time :number} DB.schema(:items).first.last[:type].should == :datetime DB.create_table!(:items){DateTime :number} DB.schema(:items).first.last[:type].should == :datetime DB.create_table!(:items){File :number} DB.schema(:items).first.last[:type].should == :blob DB.create_table!(:items){TrueClass :number} DB.schema(:items).first.last[:type].should == :boolean DB.create_table!(:items){FalseClass :number} DB.schema(:items).first.last[:type].should == :boolean end end if DB.supports_schema_parsing? 
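
# For reference, a sketch of the Database#schema return shape the specs above
# assert against (exact keys vary by adapter; the elided keys are assumptions
# for illustration):
#
#   DB.schema(:items)
#   # => [[:number, {:type=>:integer, :primary_key=>false, :allow_null=>true,
#   #                :ruby_default=>nil, ...}]]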
describe "Database index parsing" do after do DB.drop_table?(:items) end specify "should parse indexes into a hash" do # Delete :deferrable entry, since not all adapters implement it f = lambda{h = DB.indexes(:items); h.values.each{|h2| h2.delete(:deferrable)}; h} DB.create_table!(:items){Integer :n; Integer :a} f.call.should == {} DB.add_index(:items, :n) f.call.should == {:items_n_index=>{:columns=>[:n], :unique=>false}} DB.drop_index(:items, :n) f.call.should == {} DB.add_index(:items, :n, :unique=>true, :name=>:blah_blah_index) f.call.should == {:blah_blah_index=>{:columns=>[:n], :unique=>true}} DB.add_index(:items, [:n, :a]) f.call.should == {:blah_blah_index=>{:columns=>[:n], :unique=>true}, :items_n_a_index=>{:columns=>[:n, :a], :unique=>false}} DB.drop_index(:items, :n, :name=>:blah_blah_index) f.call.should == {:items_n_a_index=>{:columns=>[:n, :a], :unique=>false}} DB.drop_index(:items, [:n, :a]) f.call.should == {} end specify "should not include a primary key index" do DB.create_table!(:items){primary_key :n} DB.indexes(:items).should == {} DB.create_table!(:items){Integer :n; Integer :a; primary_key [:n, :a]} DB.indexes(:items).should == {} end end if DB.supports_index_parsing? describe "Database foreign key parsing" do before do @db = DB @pr = lambda do |table, *expected| actual = @db.foreign_key_list(table).sort_by{|c| c[:columns].map{|s| s.to_s}.join << (c[:key]||[]).map{|s| s.to_s}.join}.map{|v| v.values_at(:columns, :table, :key)} actual.zip(expected).each do |a, e| if e.last.first == :pk if a.last == nil a.pop e.pop else e.last.shift end end a.should == e end actual.length.should == expected.length end end after do @db.drop_table?(:b, :a) end specify "should parse foreign key information into an array of hashes" do @db.create_table!(:a, :engine=>:InnoDB){primary_key :c; Integer :d; index :d, :unique=>true} @db.create_table!(:b, :engine=>:InnoDB){foreign_key :e, :a} @pr[:a] @pr[:b, [[:e], :a, [:pk, :c]]] @db.alter_table(:b){add_foreign_key :f, :a, :key=>[:d]} @pr[:b, [[:e], :a, [:pk, :c]], [[:f], :a, [:d]]] @db.alter_table(:b){add_foreign_key [:f], :a, :key=>[:c]} @pr[:b, [[:e], :a, [:pk, :c]], [[:f], :a, [:c]], [[:f], :a, [:d]]] @db.alter_table(:a){add_index [:d, :c], :unique=>true} @db.alter_table(:b){add_foreign_key [:f, :e], :a, :key=>[:d, :c]} @pr[:b, [[:e], :a, [:pk, :c]], [[:f], :a, [:c]], [[:f], :a, [:d]], [[:f, :e], :a, [:d, :c]]] @db.alter_table(:b){drop_foreign_key [:f, :e]} @pr[:b, [[:e], :a, [:pk, :c]], [[:f], :a, [:c]], [[:f], :a, [:d]]] @db.alter_table(:b){drop_foreign_key :e} @pr[:b, [[:f], :a, [:c]], [[:f], :a, [:d]]] proc{@db.alter_table(:b){drop_foreign_key :f}}.should raise_error(Sequel::Error) @pr[:b, [[:f], :a, [:c]], [[:f], :a, [:d]]] end specify "should handle composite foreign and primary keys" do @db.create_table!(:a, :engine=>:InnoDB){Integer :b; Integer :c; primary_key [:b, :c]; index [:c, :b], :unique=>true} @db.create_table!(:b, :engine=>:InnoDB){Integer :e; Integer :f; foreign_key [:e, :f], :a; foreign_key [:f, :e], :a, :key=>[:c, :b]} @pr[:b, [[:e, :f], :a, [:pk, :b, :c]], [[:f, :e], :a, [:c, :b]]] end end if DB.supports_foreign_key_parsing? describe "Database schema modifiers" do before do @db = DB @ds = @db[:items] end after do # Use instead of drop_table? 
to work around issues on jdbc/db2 @db.drop_table(:items) rescue nil @db.drop_table(:items2) rescue nil end specify "should create tables correctly" do @db.create_table!(:items){Integer :number} @db.table_exists?(:items).should == true @db.schema(:items, :reload=>true).map{|x| x.first}.should == [:number] @ds.insert([10]) @ds.columns!.should == [:number] end specify "should create tables from select statements correctly" do @db.create_table!(:items){Integer :number} @ds.insert([10]) @db.create_table(:items2, :as=>@db[:items]) @db.schema(:items2, :reload=>true).map{|x| x.first}.should == [:number] @db[:items2].columns.should == [:number] @db[:items2].all.should == [{:number=>10}] end describe "views" do before do @db.drop_view(:items_view) rescue nil @db.create_table(:items){Integer :number} @ds.insert(:number=>1) @ds.insert(:number=>2) end after do @db.drop_view(:items_view) rescue nil end specify "should create views correctly" do @db.create_view(:items_view, @ds.where(:number=>1)) @db[:items_view].map(:number).should == [1] end cspecify "should create views with explicit columns correctly", :sqlite do @db.create_view(:items_view, @ds.where(:number=>1), :columns=>[:n]) @db[:items_view].map(:n).should == [1] end specify "should create or replace views correctly" do @db.create_or_replace_view(:items_view, @ds.where(:number=>1)) @db[:items_view].map(:number).should == [1] @db.create_or_replace_view(:items_view, @ds.where(:number=>2)) @db[:items_view].map(:number).should == [2] end end specify "should handle create table in a rolled back transaction" do @db.drop_table?(:items) @db.transaction(:rollback=>:always){@db.create_table(:items){Integer :number}} @db.table_exists?(:items).should be_false end if DB.supports_transactional_ddl? describe "join tables" do after do @db.drop_join_table(:cat_id=>:cats, :dog_id=>:dogs) if @db.table_exists?(:cats_dogs) @db.drop_table(:cats, :dogs) @db.table_exists?(:cats_dogs).should == false end specify "should create join tables correctly" do @db.create_table!(:cats){primary_key :id} @db.create_table!(:dogs){primary_key :id} @db.create_join_table(:cat_id=>:cats, :dog_id=>:dogs) @db.table_exists?(:cats_dogs).should == true end end specify "should create temporary tables without raising an exception" do @db.create_table!(:items_temp, :temp=>true){Integer :number} end specify "should have create_table? only create the table if it doesn't already exist" do @db.create_table!(:items){String :a} @db.create_table?(:items){String :b} @db[:items].columns.should == [:a] @db.drop_table?(:items) @db.create_table?(:items){String :b} @db[:items].columns.should == [:b] end specify "should have create_table? 
work correctly with indexes" do @db.create_table!(:items){String :a, :index=>true} @db.create_table?(:items){String :b, :index=>true} @db[:items].columns.should == [:a] @db.drop_table?(:items) @db.create_table?(:items){String :b, :index=>true} @db[:items].columns.should == [:b] end specify "should rename tables correctly" do @db.drop_table?(:items) @db.create_table!(:items2){Integer :number} @db.rename_table(:items2, :items) @db.table_exists?(:items).should == true @db.table_exists?(:items2).should == false @db.schema(:items, :reload=>true).map{|x| x.first}.should == [:number] @ds.insert([10]) @ds.columns!.should == [:number] end specify "should allow creating indexes with tables" do @db.create_table!(:items){Integer :number; index :number} @db.table_exists?(:items).should == true @db.schema(:items, :reload=>true).map{|x| x.first}.should == [:number] @ds.insert([10]) @ds.columns!.should == [:number] end specify "should handle combination of default, unique, and not null" do @db.create_table!(:items){Integer :number, :default=>0, :null=>false, :unique=>true} @db.table_exists?(:items).should == true @db.schema(:items, :reload=>true).map{|x| x.last}.first.values_at(:ruby_default, :allow_null).should == [0, false] @ds.insert([10]) end specify "should be able to specify constraint names for column constraints" do @db.create_table!(:items2){primary_key :id, :primary_key_constraint_name=>:foo_pk} @db.create_table!(:items){foreign_key :id, :items2, :unique=>true, :foreign_key_constraint_name => :foo_fk, :unique_constraint_name => :foo_uk, :null=>false} @db.alter_table(:items){drop_constraint :foo_fk, :type=>:foreign_key; drop_constraint :foo_uk, :type=>:unique} @db.alter_table(:items2){drop_constraint :foo_pk, :type=>:primary_key} end specify "should handle foreign keys correctly when creating tables" do @db.create_table!(:items) do primary_key :id foreign_key :item_id, :items unique [:item_id, :id] foreign_key [:id, :item_id], :items, :key=>[:item_id, :id] end @db.table_exists?(:items).should == true @db.schema(:items, :reload=>true).map{|x| x.first}.should == [:id, :item_id] @ds.columns!.should == [:id, :item_id] end specify "should add columns to tables correctly" do @db.create_table!(:items){Integer :number} @ds.insert(:number=>10) @db.alter_table(:items){add_column :name, String} @db.schema(:items, :reload=>true).map{|x| x.first}.should == [:number, :name] @ds.columns!.should == [:number, :name] @ds.all.should == [{:number=>10, :name=>nil}] end cspecify "should add primary key columns to tables correctly", :h2, :derby do @db.create_table!(:items){Integer :number} @ds.insert(:number=>10) @db.alter_table(:items){add_primary_key :id} @db.schema(:items, :reload=>true).map{|x| x.first}.should == [:number, :id] @ds.columns!.should == [:number, :id] @ds.map(:number).should == [10] proc{@ds.insert(:id=>@ds.map(:id).first)}.should raise_error end specify "should drop primary key constraints from tables correctly" do @db.create_table!(:items){Integer :number; primary_key [:number], :name=>:items_pk} @ds.insert(:number=>10) @db.alter_table(:items){drop_constraint :items_pk, :type=>:primary_key} @ds.map(:number).should == [10] proc{@ds.insert(10)}.should_not raise_error end cspecify "should add foreign key columns to tables correctly", :hsqldb do @db.create_table!(:items){primary_key :id} @ds.insert i = @ds.get(:id) @db.alter_table(:items){add_foreign_key :item_id, :items} @db.schema(:items, :reload=>true).map{|x| x.first}.should == [:id, :item_id] @ds.columns!.should == [:id, :item_id] @ds.all.should == 
[{:id=>i, :item_id=>nil}] end specify "should not allow NULLs in a primary key" do @db.create_table!(:items){String :id, :primary_key=>true} proc{@ds.insert(:id=>nil)}.should raise_error(Sequel::DatabaseError) end specify "should rename columns correctly" do @db.create_table!(:items){Integer :id} @ds.insert(:id=>10) @db.alter_table(:items){rename_column :id, :id2} @db.schema(:items, :reload=>true).map{|x| x.first}.should == [:id2] @ds.columns!.should == [:id2] @ds.all.should == [{:id2=>10}] end specify "should rename columns with defaults correctly" do @db.create_table!(:items){String :n, :default=>'blah'} @ds.insert @db.alter_table(:items){rename_column :n, :n2} @db.schema(:items, :reload=>true).map{|x| x.first}.should == [:n2] @ds.columns!.should == [:n2] @ds.insert @ds.all.should == [{:n2=>'blah'}, {:n2=>'blah'}] end specify "should rename columns with not null constraints" do @db.create_table!(:items, :engine=>:InnoDB){String :n, :null=>false} @ds.insert(:n=>'blah') @db.alter_table(:items){rename_column :n, :n2} @db.schema(:items, :reload=>true).map{|x| x.first}.should == [:n2] @ds.columns!.should == [:n2] @ds.insert(:n2=>'blah') @ds.all.should == [{:n2=>'blah'}, {:n2=>'blah'}] proc{@ds.insert(:n=>nil)}.should raise_error(Sequel::DatabaseError) end specify "should rename columns when the table is referenced by a foreign key" do @db.create_table!(:items2){primary_key :id; Integer :a} @db.create_table!(:items){Integer :id, :primary_key=>true; foreign_key :items_id, :items2} @db[:items2].insert(:a=>10) @ds.insert(:id=>1) @db.alter_table(:items2){rename_column :a, :b} @db[:items2].insert(:b=>20) @ds.insert(:id=>2) @db[:items2].select_order_map([:id, :b]).should == [[1, 10], [2, 20]] end cspecify "should rename primary_key columns correctly", :db2 do @db.create_table!(:items){Integer :id, :primary_key=>true} @ds.insert(:id=>10) @db.alter_table(:items){rename_column :id, :id2} @db.schema(:items, :reload=>true).map{|x| x.first}.should == [:id2] @ds.columns!.should == [:id2] @ds.all.should == [{:id2=>10}] end cspecify "should set column NULL/NOT NULL correctly", [:jdbc, :db2], [:db2] do @db.create_table!(:items, :engine=>:InnoDB){Integer :id} @ds.insert(:id=>10) @db.alter_table(:items){set_column_allow_null :id, false} @db.schema(:items, :reload=>true).map{|x| x.first}.should == [:id] @ds.columns!.should == [:id] proc{@ds.insert(:id=>nil)}.should raise_error(Sequel::DatabaseError) @db.alter_table(:items){set_column_allow_null :id, true} @ds.insert(:id=>nil) @ds.all.should == [{:id=>10}, {:id=>nil}] end specify "should set column defaults correctly" do @db.create_table!(:items){Integer :id} @ds.insert(:id=>10) @db.alter_table(:items){set_column_default :id, 20} @db.schema(:items, :reload=>true).map{|x| x.first}.should == [:id] @ds.columns!.should == [:id] @ds.insert @ds.all.should == [{:id=>10}, {:id=>20}] end cspecify "should set column types correctly", [:jdbc, :db2], [:db2], :oracle do @db.create_table!(:items){Integer :id} @ds.insert(:id=>10) @db.alter_table(:items){set_column_type :id, String} @db.schema(:items, :reload=>true).map{|x| x.first}.should == [:id] @ds.columns!.should == [:id] @ds.insert(:id=>'20') @ds.all.should == [{:id=>"10"}, {:id=>"20"}] end cspecify "should set column types without modifying NULL/NOT NULL", [:jdbc, :db2], [:db2], :oracle, :derby do @db.create_table!(:items){Integer :id, :null=>false, :default=>2} proc{@ds.insert(:id=>nil)}.should raise_error(Sequel::DatabaseError) @db.alter_table(:items){set_column_type :id, String} proc{@ds.insert(:id=>nil)}.should 
raise_error(Sequel::DatabaseError) @db.create_table!(:items){Integer :id} @ds.insert(:id=>nil) @db.alter_table(:items){set_column_type :id, String} @ds.insert(:id=>nil) @ds.map(:id).should == [nil, nil] end cspecify "should set column types without modifying defaults", [:jdbc, :db2], [:db2], :oracle, :derby do @db.create_table!(:items){Integer :id, :default=>0} @ds.insert @ds.map(:id).should == [0] @db.alter_table(:items){set_column_type :id, String} @ds.insert @ds.map(:id).should == ['0', '0'] @db.create_table!(:items){String :id, :default=>'a'} @ds.insert @ds.map(:id).should == %w'a' @db.alter_table(:items){set_column_type :id, String, :size=>1} @ds.insert @ds.map(:id).should == %w'a a' end specify "should add unnamed unique constraints and foreign key table constraints correctly" do @db.create_table!(:items, :engine=>:InnoDB){Integer :id; Integer :item_id} @db.alter_table(:items) do add_unique_constraint [:item_id, :id] add_foreign_key [:id, :item_id], :items, :key=>[:item_id, :id] end @db.schema(:items, :reload=>true).map{|x| x.first}.should == [:id, :item_id] @ds.columns!.should == [:id, :item_id] proc{@ds.insert(1, 1)}.should_not raise_error proc{@ds.insert(1, 1)}.should raise_error proc{@ds.insert(1, 2)}.should raise_error end specify "should add named unique constraints and foreign key table constraints correctly" do @db.create_table!(:items, :engine=>:InnoDB){Integer :id, :null=>false; Integer :item_id, :null=>false} @db.alter_table(:items) do add_unique_constraint [:item_id, :id], :name=>:unique_iii add_foreign_key [:id, :item_id], :items, :key=>[:item_id, :id], :name=>:fk_iii end @db.schema(:items, :reload=>true).map{|x| x.first}.should == [:id, :item_id] @ds.columns!.should == [:id, :item_id] proc{@ds.insert(1, 1)}.should_not raise_error proc{@ds.insert(1, 1)}.should raise_error proc{@ds.insert(1, 2)}.should raise_error end specify "should drop unique constraints and foreign key table constraints correctly" do @db.create_table!(:items) do Integer :id Integer :item_id unique [:item_id, :id], :name=>:items_uk foreign_key [:id, :item_id], :items, :key=>[:item_id, :id], :name=>:items_fk end @db.alter_table(:items) do drop_constraint(:items_fk, :type=>:foreign_key) drop_constraint(:items_uk, :type=>:unique) end @db.schema(:items, :reload=>true).map{|x| x.first}.should == [:id, :item_id] @ds.columns!.should == [:id, :item_id] proc{@ds.insert(1, 2)}.should_not raise_error proc{@ds.insert(1, 2)}.should_not raise_error end specify "should remove columns from tables correctly" do @db.create_table!(:items) do primary_key :id Integer :i end @ds.insert(:i=>10) @db.drop_column(:items, :i) @db.schema(:items, :reload=>true).map{|x| x.first}.should == [:id] end specify "should remove columns with defaults from tables correctly" do @db.create_table!(:items) do primary_key :id Integer :i, :default=>20 end @ds.insert(:i=>10) @db.drop_column(:items, :i) @db.schema(:items, :reload=>true).map{|x| x.first}.should == [:id] end cspecify "should remove foreign key columns from tables correctly", :h2, :mssql, :hsqldb do # MySQL with InnoDB cannot drop foreign key columns unless you know the # name of the constraint, see Bug #14347 @db.create_table!(:items, :engine=>:MyISAM) do primary_key :id Integer :i foreign_key :item_id, :items end @ds.insert(:i=>10) @db.drop_column(:items, :item_id) @db.schema(:items, :reload=>true).map{|x| x.first}.should == [:id, :i] end specify "should remove multiple columns in a single alter_table block" do @db.create_table!(:items) do primary_key :id String :name Integer 
:number end @ds.insert(:number=>10) @db.schema(:items, :reload=>true).map{|x| x.first}.should == [:id, :name, :number] @db.alter_table(:items) do drop_column :name drop_column :number end @db.schema(:items, :reload=>true).map{|x| x.first}.should == [:id] end cspecify "should work correctly with many operations in a single alter_table call", [:jdbc, :db2], [:db2] do @db.create_table!(:items) do primary_key :id String :name2 String :number2 constraint :bar, Sequel.~(:id=>nil) end @ds.insert(:name2=>'A12') @db.alter_table(:items) do add_column :number, Integer drop_column :number2 rename_column :name2, :name drop_constraint :bar set_column_not_null :name set_column_default :name, 'A13' add_constraint :foo, Sequel.like(:name, 'A%') end @db[:items].first.should == {:id=>1, :name=>'A12', :number=>nil} @db[:items].delete proc{@db[:items].insert(:name=>nil)}.should raise_error(Sequel::DatabaseError) @db[:items].insert(:number=>1) @db[:items].get(:name).should == 'A13' end specify "should support deferrable foreign key constraints" do @db.create_table!(:items2){Integer :id, :primary_key=>true} @db.create_table!(:items){foreign_key :id, :items2, :deferrable=>true} proc{@db[:items].insert(1)}.should raise_error(Sequel::DatabaseError) proc{@db.transaction{proc{@db[:items].insert(1)}.should_not raise_error}}.should raise_error(Sequel::DatabaseError) end if DB.supports_deferrable_foreign_key_constraints? specify "should support deferrable unique constraints when creating or altering tables" do @db.create_table!(:items){Integer :t; unique [:t], :name=>:atest_def, :deferrable=>true, :using=>:btree} @db[:items].insert(1) @db[:items].insert(2) proc{@db[:items].insert(2)}.should raise_error(Sequel::DatabaseError) proc{@db.transaction{proc{@db[:items].insert(2)}.should_not raise_error}}.should raise_error(Sequel::DatabaseError) @db.create_table!(:items){Integer :t} @db.alter_table(:items){add_unique_constraint [:t], :name=>:atest_def, :deferrable=>true, :using=>:btree} @db[:items].insert(1) @db[:items].insert(2) proc{@db[:items].insert(2)}.should raise_error(Sequel::DatabaseError) proc{@db.transaction{proc{@db[:items].insert(2)}.should_not raise_error}}.should raise_error(Sequel::DatabaseError) end if DB.supports_deferrable_constraints? end describe "Database#tables" do before do class ::String @@xxxxx = 0 def xxxxx "xxxxx#{@@xxxxx += 1}" end end @db = DB @db.create_table(:sequel_test_table){Integer :a} @db.create_view :sequel_test_view, @db[:sequel_test_table] @iom = @db.identifier_output_method @iim = @db.identifier_input_method end after do @db.identifier_output_method = @iom @db.identifier_input_method = @iim @db.drop_view :sequel_test_view @db.drop_table :sequel_test_table end specify "should return an array of symbols" do ts = @db.tables ts.should be_a_kind_of(Array) ts.each{|t| t.should be_a_kind_of(Symbol)} ts.should include(:sequel_test_table) ts.should_not include(:sequel_test_view) end specify "should respect the database's identifier_output_method" do @db.identifier_output_method = :xxxxx @db.identifier_input_method = :xxxxx @db.tables.each{|t| t.to_s.should =~ /\Ax{5}\d+\z/} end end if DB.supports_table_listing? 
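# An illustrative sketch (not part of the suite) of the listing API these
# specs exercise, assuming a connected DB and hypothetical table/view names:
#
#   DB.tables  # => [:artists, :albums]  (tables only, views excluded)
#   DB.views   # => [:recent_albums]     (views only, tables excluded)
#
# Both return symbols and respect identifier_output_method, which
# post-processes every identifier the adapter returns.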
describe "Database#views" do before do class ::String @@xxxxx = 0 def xxxxx "xxxxx#{@@xxxxx += 1}" end end @db = DB @db.create_table(:sequel_test_table){Integer :a} @db.create_view :sequel_test_view, @db[:sequel_test_table] @iom = @db.identifier_output_method @iim = @db.identifier_input_method end after do @db.identifier_output_method = @iom @db.identifier_input_method = @iim @db.drop_view :sequel_test_view @db.drop_table :sequel_test_table end specify "should return an array of symbols" do ts = @db.views ts.should be_a_kind_of(Array) ts.each{|t| t.should be_a_kind_of(Symbol)} ts.should_not include(:sequel_test_table) ts.should include(:sequel_test_view) end specify "should respect the database's identifier_output_method" do @db.identifier_output_method = :xxxxx @db.identifier_input_method = :xxxxx @db.views.each{|t| t.to_s.should =~ /\Ax{5}\d+\z/} end end if DB.supports_view_listing? �����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/spec/integration/spec_helper.rb���������������������������������������������������0000664�0000000�0000000�00000005214�12201565355�0022073�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������require 'rubygems' require 'logger' if ENV['COVERAGE'] require File.join(File.dirname(File.expand_path(__FILE__)), "../sequel_coverage") SimpleCov.sequel_coverage(:group=>%r{lib/sequel/adapters}) end unless Object.const_defined?('Sequel') $:.unshift(File.join(File.dirname(File.expand_path(__FILE__)), "../../lib/")) require 'sequel' end begin require File.join(File.dirname(File.dirname(__FILE__)), 'spec_config.rb') unless defined?(DB) rescue LoadError end Sequel::Deprecation.backtrace_filter = lambda{|line, lineno| lineno < 4 || line =~ /_(spec|test)\.rb/} Sequel::Database.extension :columns_introspection if ENV['SEQUEL_COLUMNS_INTROSPECTION'] Sequel::Model.use_transactions = false Sequel.cache_anonymous_models = false unless defined?(RSpec) module Spec::Matchers class BeWithin include Spec::Matchers def initialize(delta); @delta = delta; end def of(expected); be_close(expected, @delta); end end def be_within(delta) BeWithin.new(delta) end end end def Sequel.guarded?(*checked) unless ENV['SEQUEL_NO_PENDING'] checked.each do |c| case c when DB.database_type return c when Array case c.length when 1 return c if c.first == DB.adapter_scheme when 2 if c.first.is_a?(Proc) return c if c.last == DB.database_type && c.first.call(DB) elsif c.last.is_a?(Proc) return c if c.first == DB.adapter_scheme && c.last.call(DB) else return c if c.first == DB.adapter_scheme && c.last == DB.database_type end when 3 return c if c[0] == DB.adapter_scheme && c[1] == DB.database_type && c[2].call(DB) end end end end false end (defined?(RSpec) ? 
RSpec::Core::ExampleGroup : Spec::Example::ExampleGroup).class_eval do def log begin DB.loggers << Logger.new(STDOUT) yield ensure DB.loggers.pop end end def self.cspecify(message, *checked, &block) if pending = Sequel.guarded?(*checked) specify(message){pending("Not yet working on #{Array(pending).map{|x| x.is_a?(Proc) ? :proc : x}.join(', ')}", &block)} else specify(message, &block) end end end unless defined?(DB) DB = Sequel.connect(ENV['SEQUEL_INTEGRATION_URL']) end if DB.adapter_scheme == :ibmdb || (DB.adapter_scheme == :ado && DB.database_type == :access) def DB.drop_table(*tables) super rescue Sequel::DatabaseError disconnect super end end if ENV['SEQUEL_CONNECTION_VALIDATOR'] ENV['SEQUEL_NO_CHECK_SQLS'] = '1' DB.extension(:connection_validator) DB.pool.connection_validation_timeout = -1 end
ruby-sequel-4.1.1/spec/integration/timezone_test.rb
require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper.rb') describe "Sequel timezone support" do def test_timezone(timezone=Sequel.application_timezone) Sequel.datetime_class = Time # Tests should cover both DST and non-DST times.
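# Quick reference for the settings exercised below: Sequel.database_timezone
# is the zone timestamps are converted to before storage,
# Sequel.application_timezone is the zone they are converted to on retrieval,
# and Sequel.default_timezone sets both at once. Database#timezone overrides
# database_timezone for a single Database instance.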
[Time.now, Time.local(2010,1,1,12), Time.local(2010,6,1,12)].each do |t| @db[:t].insert(t) t2 = @db[:t].single_value t2 = @db.to_application_timestamp(t2.to_s) unless t2.is_a?(Time) (t2 - t).should be_within(2).of(0) t2.utc_offset.should == 0 if timezone == :utc t2.utc_offset.should == t.getlocal.utc_offset if timezone == :local @db[:t].delete end Sequel.datetime_class = DateTime local_dst_offset = Time.local(2010, 6).utc_offset/86400.0 local_std_offset = Time.local(2010, 1).utc_offset/86400.0 [DateTime.now, DateTime.civil(2010,1,1,12,0,0,local_std_offset), DateTime.civil(2010,6,1,12,0,0,local_dst_offset)].each do |dt| @db[:t].insert(dt) dt2 = @db[:t].single_value dt2 = @db.to_application_timestamp(dt2.to_s) unless dt2.is_a?(DateTime) (dt2 - dt).should be_within(0.00002).of(0) dt2.offset.should == 0 if timezone == :utc dt2.offset.should == dt.offset if timezone == :local @db[:t].delete end end before do @db = DB @db.create_table!(:t){DateTime :t} end after do @db.timezone = nil Sequel.default_timezone = nil Sequel.datetime_class = Time @db.drop_table(:t) end cspecify "should support using UTC for database storage and local time for the application", [:swift], [:tinytds], [:do, :mysql], [:do, :postgres], [:oracle] do Sequel.database_timezone = :utc Sequel.application_timezone = :local test_timezone Sequel.database_timezone = nil @db.timezone = :utc test_timezone end cspecify "should support using local time for database storage and UTC for the application", [:swift], [:tinytds], [:do, :mysql], [:do, :postgres], [:oracle] do Sequel.database_timezone = :local Sequel.application_timezone = :utc test_timezone Sequel.database_timezone = nil @db.timezone = :local test_timezone end cspecify "should support using UTC for both database storage and for application", [:do, :mysql], [:do, :postgres], [:oracle] do Sequel.default_timezone = :utc test_timezone Sequel.database_timezone = :local @db.timezone = :utc test_timezone end cspecify "should support using local time for both database storage and for application", [:do, :mysql], [:do, :postgres], [:oracle], [:swift, :postgres], [:swift, :mysql] do Sequel.default_timezone = :local test_timezone Sequel.database_timezone = :utc @db.timezone = :local test_timezone end specify "should allow overriding the database_timezone on a per-database basis" do Sequel.database_timezone = :utc @db.timezone = :local t = Time.now @db[:t].insert(t) s = @db[:t].get(Sequel.cast(:t, String)) if o = Date._parse(s)[:offset] o.should == t.utc_offset end end end
ruby-sequel-4.1.1/spec/integration/transaction_test.rb
require
File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper.rb') describe "Database transactions" do before(:all) do @db = DB @db.create_table!(:items, :engine=>'InnoDB'){String :name; Integer :value} @d = @db[:items] end before do @d.delete end after(:all) do @db.drop_table?(:items) end specify "should support transactions" do @db.transaction{@d << {:name => 'abc', :value => 1}} @d.count.should == 1 end specify "should have #transaction yield the connection" do @db.transaction{|conn| conn.should_not == nil} end specify "should have #in_transaction? work correctly" do @db.in_transaction?.should be_false c = nil @db.transaction{c = @db.in_transaction?} c.should be_true end specify "should correctly rollback transactions" do proc do @db.transaction do @d << {:name => 'abc', :value => 1} raise Interrupt, 'asdf' end end.should raise_error(Interrupt) @db.transaction do @d << {:name => 'abc', :value => 1} raise Sequel::Rollback end.should be_nil proc do @db.transaction(:rollback=>:reraise) do @d << {:name => 'abc', :value => 1} raise Sequel::Rollback end end.should raise_error(Sequel::Rollback) @db.transaction(:rollback=>:always) do @d << {:name => 'abc', :value => 1} 2 end.should be_nil @d.count.should == 0 end specify "should support nested transactions" do @db.transaction do @db.transaction do @d << {:name => 'abc', :value => 1} end end @d.count.should == 1 @d.delete proc {@db.transaction do @d << {:name => 'abc', :value => 1} @db.transaction do raise Sequel::Rollback end end}.should_not raise_error @d.count.should == 0 proc {@db.transaction do @d << {:name => 'abc', :value => 1} @db.transaction do raise Interrupt, 'asdf' end end}.should raise_error(Interrupt) @d.count.should == 0 end if DB.supports_savepoints? cspecify "should support nested transactions through savepoints using the savepoint option", [:jdbc, :sqlite] do @db.transaction do @d << {:name => '1'} @db.transaction(:savepoint=>true) do @d << {:name => '2'} @db.transaction do @d << {:name => '3'} raise Sequel::Rollback end end @d << {:name => '4'} @db.transaction do @d << {:name => '6'} @db.transaction(:savepoint=>true) do @d << {:name => '7'} raise Sequel::Rollback end end @d << {:name => '5'} end @d.order(:name).map(:name).should == %w{1 4 5 6} end end specify "should handle returning inside of the block by committing" do def @db.ret_commit transaction do self[:items] << {:name => 'abc'} return end end @d.count.should == 0 @db.ret_commit @d.count.should == 1 @db.ret_commit @d.count.should == 2 proc do @db.transaction do raise Interrupt, 'asdf' end end.should raise_error(Interrupt) @d.count.should == 2 end if DB.supports_prepared_transactions? specify "should allow saving and destroying of model objects" do c = Class.new(Sequel::Model(@d)) c.set_primary_key :name c.unrestrict_primary_key c.use_after_commit_rollback = false @db.transaction(:prepare=>'XYZ'){c.create(:name => '1'); c.create(:name => '2').destroy} @db.commit_prepared_transaction('XYZ') @d.select_order_map(:name).should == ['1'] end specify "should commit prepared transactions using commit_prepared_transaction" do @db.transaction(:prepare=>'XYZ'){@d << {:name => '1'}} @db.commit_prepared_transaction('XYZ') @d.select_order_map(:name).should == ['1'] end specify "should rollback prepared transactions using rollback_prepared_transaction" do @db.transaction(:prepare=>'XYZ'){@d << {:name => '1'}} @db.rollback_prepared_transaction('XYZ') @d.select_order_map(:name).should == [] end if DB.supports_savepoints_in_prepared_transactions? 
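# Illustrative sketch of the two-phase commit API the following spec
# exercises ('XYZ' is an arbitrary caller-chosen transaction id):
#
#   DB.transaction(:prepare=>'XYZ'){DB[:items].insert(:name => '1')}
#   DB.commit_prepared_transaction('XYZ') # or rollback_prepared_transaction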
specify "should support savepoints when using prepared transactions" do @db.transaction(:prepare=>'XYZ'){@db.transaction(:savepoint=>true){@d << {:name => '1'}}} @db.commit_prepared_transaction('XYZ') @d.select_order_map(:name).should == ['1'] end end end specify "should support all transaction isolation levels" do [:uncommitted, :committed, :repeatable, :serializable].each_with_index do |l, i| @db.transaction(:isolation=>l){@d << {:name => 'abc', :value => 1}} @d.count.should == i + 1 end end specify "should support after_commit outside transactions" do c = nil @db.after_commit{c = 1} c.should == 1 end specify "should support after_rollback outside transactions" do c = nil @db.after_rollback{c = 1} c.should be_nil end specify "should support after_commit inside transactions" do c = nil @db.transaction{@db.after_commit{c = 1}; c.should be_nil} c.should == 1 end specify "should support after_rollback inside transactions" do c = nil @db.transaction{@db.after_rollback{c = 1}; c.should be_nil} c.should be_nil end specify "should not call after_commit if the transaction rolls back" do c = nil @db.transaction{@db.after_commit{c = 1}; c.should be_nil; raise Sequel::Rollback} c.should be_nil end specify "should call after_rollback if the transaction rolls back" do c = nil @db.transaction{@db.after_rollback{c = 1}; c.should be_nil; raise Sequel::Rollback} c.should == 1 end specify "should support multiple after_commit blocks inside transactions" do c = [] @db.transaction{@db.after_commit{c << 1}; @db.after_commit{c << 2}; c.should == []} c.should == [1, 2] end specify "should support multiple after_rollback blocks inside transactions" do c = [] @db.transaction{@db.after_rollback{c << 1}; @db.after_rollback{c << 2}; c.should == []; raise Sequel::Rollback} c.should == [1, 2] end specify "should support after_commit inside nested transactions" do c = nil @db.transaction{@db.transaction{@db.after_commit{c = 1}}; c.should be_nil} c.should == 1 end specify "should support after_rollback inside nested transactions" do c = nil @db.transaction{@db.transaction{@db.after_rollback{c = 1}}; c.should be_nil; raise Sequel::Rollback} c.should == 1 end if DB.supports_savepoints? specify "should support after_commit inside savepoints" do c = nil @db.transaction{@db.transaction(:savepoint=>true){@db.after_commit{c = 1}}; c.should be_nil} c.should == 1 end specify "should support after_rollback inside savepoints" do c = nil @db.transaction{@db.transaction(:savepoint=>true){@db.after_rollback{c = 1}}; c.should be_nil; raise Sequel::Rollback} c.should == 1 end end if DB.supports_prepared_transactions? specify "should raise an error if you attempt to use after_commit or after_rollback inside a prepared transaction" do proc{@db.transaction(:prepare=>'XYZ'){@db.after_commit{}}}.should raise_error(Sequel::Error) proc{@db.transaction(:prepare=>'XYZ'){@db.after_rollback{}}}.should raise_error(Sequel::Error) end if DB.supports_savepoints_in_prepared_transactions? specify "should raise an error if you attempt to use after_commit or after rollback inside a savepoint in a prepared transaction" do proc{@db.transaction(:prepare=>'XYZ'){@db.transaction(:savepoint=>true){@db.after_commit{}}}}.should raise_error(Sequel::Error) proc{@db.transaction(:prepare=>'XYZ'){@db.transaction(:savepoint=>true){@db.after_rollback{}}}}.should raise_error(Sequel::Error) end end end end if (! 
defined?(RUBY_ENGINE) or RUBY_ENGINE == 'ruby' or (RUBY_ENGINE == 'rbx' && !Sequel.guarded?([:do, :sqlite], [:tinytds, :mssql]))) and RUBY_VERSION < '1.9' describe "Database transactions and Thread#kill" do before do @db = DB @db.create_table!(:items, :engine=>'InnoDB'){String :name; Integer :value} @d = @db[:items] end after do @db.drop_table?(:items) end specify "should handle transactions inside threads" do q = Queue.new q1 = Queue.new t = Thread.new do @db.transaction do @d << {:name => 'abc', :value => 1} q1.push nil q.pop @d << {:name => 'def', :value => 2} end end q1.pop t.kill @d.count.should == 0 end if DB.supports_savepoints? specify "should handle transactions with savepoints inside threads" do q = Queue.new q1 = Queue.new t = Thread.new do @db.transaction do @d << {:name => 'abc', :value => 1} @db.transaction(:savepoint=>true) do @d << {:name => 'def', :value => 2} q1.push nil q.pop @d << {:name => 'ghi', :value => 3} end @d << {:name => 'jkl', :value => 4} end end q1.pop t.kill @d.count.should == 0 end end end end describe "Database transaction retrying" do before(:all) do @db = DB @db.create_table!(:items, :engine=>'InnoDB'){String :a, :unique=>true, :null=>false} @d = @db[:items] end before do @d.delete end after(:all) do @db.drop_table?(:items) end cspecify "should be supported using the :retry_on option", [:db2] do @d.insert('b') @d.insert('c') s = 'a' @db.transaction(:retry_on=>Sequel::ConstraintViolation) do s = s.succ @d.insert(s) end @d.select_order_map(:a).should == %w'b c d' end cspecify "should limit number of retries via the :num_retries option", [:db2] do @d.insert('b') @d.insert('c') s = 'a' lambda do @db.transaction(:num_retries=>1, :retry_on=>Sequel::ConstraintViolation) do s = s.succ @d.insert(s) end end.should raise_error(Sequel::ConstraintViolation) @d.select_order_map(:a).should == %w'b c' end end
ruby-sequel-4.1.1/spec/integration/type_test.rb
require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper.rb') describe "Supported types" do def create_items_table_with_column(name, type, opts={}) DB.create_table!(:items){column name, type, opts} DB[:items] end specify "should support casting correctly" do ds = create_items_table_with_column(:number, Integer) ds.insert(:number => 1) ds.select(Sequel.cast(:number, String).as(:n)).map(:n).should == %w'1' ds = create_items_table_with_column(:name, String) ds.insert(:name=> '1') ds.select(Sequel.cast(:name, Integer).as(:n)).map(:n).should == [1] end specify "should support NULL correctly" do ds = create_items_table_with_column(:number, Integer) ds.insert(:number => nil) ds.all.should == [{:number=>nil}] end specify "should support generic integer type" do
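# Generic ruby classes (Integer, Bignum, Float, String, ...) are mapped by
# Sequel's schema generator to an adapter-appropriate column type, so the
# same create_table block is portable across the databases this suite runs
# against.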
ds = create_items_table_with_column(:number, Integer) ds.insert(:number => 2) ds.all.should == [{:number=>2}] end specify "should support generic fixnum type" do ds = create_items_table_with_column(:number, Fixnum) ds.insert(:number => 2) ds.all.should == [{:number=>2}] end specify "should support generic bignum type" do ds = create_items_table_with_column(:number, Bignum) ds.insert(:number => 2**34) ds.all.should == [{:number=>2**34}] end cspecify "should support generic float type", [:swift, :sqlite] do ds = create_items_table_with_column(:number, Float) ds.insert(:number => 2.1) ds.all.should == [{:number=>2.1}] end cspecify "should support generic numeric type", [:odbc, :mssql], [:swift, :sqlite] do ds = create_items_table_with_column(:number, Numeric, :size=>[15, 10]) ds.insert(:number => BigDecimal.new('2.123456789')) ds.all.should == [{:number=>BigDecimal.new('2.123456789')}] ds = create_items_table_with_column(:number, BigDecimal, :size=>[15, 10]) ds.insert(:number => BigDecimal.new('2.123456789')) ds.all.should == [{:number=>BigDecimal.new('2.123456789')}] end specify "should support generic string type" do ds = create_items_table_with_column(:name, String) ds.insert(:name => 'Test User') ds.all.should == [{:name=>'Test User'}] end specify "should support generic text type" do ds = create_items_table_with_column(:name, String, :text=>true) ds.insert(:name => 'Test User'*100) ds.all.should == [{:name=>'Test User'*100}] ds.update(:name=>ds.get(:name)) ds.all.should == [{:name=>'Test User'*100}] end cspecify "should support generic date type", [:do, :sqlite], [:jdbc, :sqlite], :mssql, :oracle do ds = create_items_table_with_column(:dat, Date) d = Date.today ds.insert(:dat => d) ds.first[:dat].should be_a_kind_of(Date) ds.first[:dat].to_s.should == d.to_s end cspecify "should support generic time type", [:do], [:swift], [:odbc], [:jdbc, :mssql], [:jdbc, :sqlite], [:mysql2], [:tinytds], :oracle do ds = create_items_table_with_column(:tim, Time, :only_time=>true) t = Sequel::SQLTime.now ds.insert(:tim => t) v = ds.first[:tim] ds.literal(v).should == ds.literal(t) v.should be_a_kind_of(Sequel::SQLTime) ds.delete ds.insert(:tim => v) v2 = ds.first[:tim] ds.literal(v2).should == ds.literal(t) v2.should be_a_kind_of(Sequel::SQLTime) end cspecify "should support generic datetime type", [:do, :sqlite], [:jdbc, :sqlite] do ds = create_items_table_with_column(:tim, DateTime) t = DateTime.now ds.insert(:tim => t) ds.first[:tim].strftime('%Y%m%d%H%M%S').should == t.strftime('%Y%m%d%H%M%S') ds = create_items_table_with_column(:tim, Time) t = Time.now ds.insert(:tim => t) ds.first[:tim].strftime('%Y%m%d%H%M%S').should == t.strftime('%Y%m%d%H%M%S') end cspecify "should support generic file type", [:do], [:odbc, :mssql], [:mysql2], [:tinytds] do ds = create_items_table_with_column(:name, File) ds.insert(:name =>Sequel.blob("a\0"*300)) ds.all.should == [{:name=>Sequel.blob("a\0"*300)}] ds.first[:name].should be_a_kind_of(::Sequel::SQL::Blob) end cspecify "should support generic boolean type", [:do, :sqlite], [:jdbc, :sqlite], [:jdbc, :db2], [:odbc, :mssql], :oracle do ds = create_items_table_with_column(:number, TrueClass) ds.insert(:number => true) ds.all.should == [{:number=>true}] ds = create_items_table_with_column(:number, FalseClass) ds.insert(:number => true) ds.all.should == [{:number=>true}] end cspecify "should support generic boolean type with defaults", [:do, :sqlite], [:jdbc, :sqlite], [:jdbc, :db2], [:odbc, :mssql], :oracle do ds = create_items_table_with_column(:number, TrueClass, 
:default=>true) ds.insert ds.all.should == [{:number=>true}] ds = create_items_table_with_column(:number, FalseClass, :default=>false) ds.insert ds.all.should == [{:number=>false}] end end
ruby-sequel-4.1.1/spec/model/association_reflection_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe Sequel::Model::Associations::AssociationReflection, "#associated_class" do before do @c = Class.new(Sequel::Model(:foo)) class ::ParParent < Sequel::Model; end end after do Object.send(:remove_const, :ParParent) end it "should use the :class value if present" do @c.many_to_one :c, :class=>ParParent @c.association_reflection(:c).keys.should include(:class) @c.association_reflection(:c).associated_class.should == ParParent end it "should figure out the class if the :class value is not present" do @c.many_to_one :c, :class=>'ParParent' @c.association_reflection(:c).keys.should_not include(:class) @c.association_reflection(:c).associated_class.should == ParParent end end describe Sequel::Model::Associations::AssociationReflection, "#primary_key" do before do @c = Class.new(Sequel::Model(:foo)) class ::ParParent < Sequel::Model; end end after do Object.send(:remove_const, :ParParent) end it "should use the :primary_key value if present" do @c.many_to_one :c, :class=>ParParent, :primary_key=>:blah__blah @c.association_reflection(:c).keys.should include(:primary_key) @c.association_reflection(:c).primary_key.should == :blah__blah end it "should use the associated table's primary key if :primary_key is not present" do @c.many_to_one :c, :class=>'ParParent' @c.association_reflection(:c).keys.should_not include(:primary_key) @c.association_reflection(:c).primary_key.should == :id end end describe Sequel::Model::Associations::AssociationReflection, "#reciprocal" do before do class ::ParParent < Sequel::Model; end class ::ParParentTwo < Sequel::Model; end class ::ParParentThree < Sequel::Model; end end after do Object.send(:remove_const, :ParParent) Object.send(:remove_const, :ParParentTwo) Object.send(:remove_const, :ParParentThree) end it "should use the :reciprocal value if present" do @c = Class.new(Sequel::Model(:foo)) @d = Class.new(Sequel::Model(:foo)) @c.many_to_one :c, :class=>@d, :reciprocal=>:xx @c.association_reflection(:c).keys.should include(:reciprocal) @c.association_reflection(:c).reciprocal.should == :xx end it "should
require the associated class is the current class to be a reciprocal" do ParParent.many_to_one :par_parent_two, :key=>:blah ParParent.many_to_one :par_parent_three, :key=>:blah ParParentTwo.one_to_many :par_parents, :key=>:blah ParParentThree.one_to_many :par_parents, :key=>:blah ParParentTwo.association_reflection(:par_parents).reciprocal.should == :par_parent_two ParParentThree.association_reflection(:par_parents).reciprocal.should == :par_parent_three ParParent.many_to_many :par_parent_twos, :left_key=>:l, :right_key=>:r, :join_table=>:jt ParParent.many_to_many :par_parent_threes, :left_key=>:l, :right_key=>:r, :join_table=>:jt ParParentTwo.many_to_many :par_parents, :right_key=>:l, :left_key=>:r, :join_table=>:jt ParParentThree.many_to_many :par_parents, :right_key=>:l, :left_key=>:r, :join_table=>:jt ParParentTwo.association_reflection(:par_parents).reciprocal.should == :par_parent_twos ParParentThree.association_reflection(:par_parents).reciprocal.should == :par_parent_threes end it "should handle composite keys" do ParParent.many_to_one :par_parent_two, :key=>[:a, :b], :primary_key=>[:c, :b] ParParent.many_to_one :par_parent_three, :key=>[:d, :e], :primary_key=>[:c, :b] ParParentTwo.one_to_many :par_parents, :primary_key=>[:c, :b], :key=>[:a, :b] ParParentThree.one_to_many :par_parents, :primary_key=>[:c, :b], :key=>[:d, :e] ParParentTwo.association_reflection(:par_parents).reciprocal.should == :par_parent_two ParParentThree.association_reflection(:par_parents).reciprocal.should == :par_parent_three ParParent.many_to_many :par_parent_twos, :left_key=>[:l1, :l2], :right_key=>[:r1, :r2], :left_primary_key=>[:pl1, :pl2], :right_primary_key=>[:pr1, :pr2], :join_table=>:jt ParParent.many_to_many :par_parent_threes, :right_key=>[:l1, :l2], :left_key=>[:r1, :r2], :left_primary_key=>[:pl1, :pl2], :right_primary_key=>[:pr1, :pr2], :join_table=>:jt ParParentTwo.many_to_many :par_parents, :right_key=>[:l1, :l2], :left_key=>[:r1, :r2], :right_primary_key=>[:pl1, :pl2], :left_primary_key=>[:pr1, :pr2], :join_table=>:jt ParParentThree.many_to_many :par_parents, :left_key=>[:l1, :l2], :right_key=>[:r1, :r2], :right_primary_key=>[:pl1, :pl2], :left_primary_key=>[:pr1, :pr2], :join_table=>:jt ParParentTwo.association_reflection(:par_parents).reciprocal.should == :par_parent_twos ParParentThree.association_reflection(:par_parents).reciprocal.should == :par_parent_threes end it "should figure out the reciprocal if the :reciprocal value is not present" do ParParent.many_to_one :par_parent_two ParParentTwo.one_to_many :par_parents ParParent.many_to_many :par_parent_threes ParParentThree.many_to_many :par_parents ParParent.association_reflection(:par_parent_two).keys.should_not include(:reciprocal) ParParent.association_reflection(:par_parent_two).reciprocal.should == :par_parents ParParentTwo.association_reflection(:par_parents).keys.should_not include(:reciprocal) ParParentTwo.association_reflection(:par_parents).reciprocal.should == :par_parent_two ParParent.association_reflection(:par_parent_threes).keys.should_not include(:reciprocal) ParParent.association_reflection(:par_parent_threes).reciprocal.should == :par_parents ParParentThree.association_reflection(:par_parents).keys.should_not include(:reciprocal) ParParentThree.association_reflection(:par_parents).reciprocal.should == :par_parent_threes end it "should handle ambiguous reciprocals where only one doesn't have conditions/blocks" do ParParent.many_to_one :par_parent_two, :class=>ParParentTwo, :key=>:par_parent_two_id ParParent.many_to_one 
:par_parent_two2, :clone=>:par_parent_two, :conditions=>{:id=>:id} ParParentTwo.one_to_many :par_parents ParParent.many_to_many :par_parent_threes, :class=>ParParentThree, :right_key=>:par_parent_three_id ParParent.many_to_many :par_parent_threes2, :clone=>:par_parent_threes do |ds| ds end ParParentThree.many_to_many :par_parents ParParentTwo.association_reflection(:par_parents).reciprocal.should == :par_parent_two ParParentThree.association_reflection(:par_parents).reciprocal.should == :par_parent_threes end it "should handle ambiguous reciprocals where only one has matching primary keys" do ParParent.many_to_one :par_parent_two, :class=>ParParentTwo, :key=>:par_parent_two_id ParParent.many_to_one :par_parent_two2, :clone=>:par_parent_two, :primary_key=>:foo ParParentTwo.one_to_many :par_parents, :class=>ParParent, :key=>:par_parent_two_id ParParentTwo.one_to_many :par_parents2, :clone=>:par_parents, :primary_key=>:foo ParParent.many_to_many :par_parent_threes, :class=>ParParentThree, :right_key=>:par_parent_three_id ParParent.many_to_many :par_parent_threes2, :clone=>:par_parent_threes, :right_primary_key=>:foo ParParent.many_to_many :par_parent_threes3, :clone=>:par_parent_threes, :left_primary_key=>:foo ParParentThree.many_to_many :par_parents ParParent.association_reflection(:par_parent_two).reciprocal.should == :par_parents ParParent.association_reflection(:par_parent_two2).reciprocal.should == :par_parents2 ParParentTwo.association_reflection(:par_parents).reciprocal.should == :par_parent_two ParParentTwo.association_reflection(:par_parents2).reciprocal.should == :par_parent_two2 ParParentThree.association_reflection(:par_parents).reciprocal.should == :par_parent_threes end specify "should handle reciprocals where current association has conditions/block" do ParParent.many_to_one :par_parent_two, :conditions=>{:id=>:id} ParParentTwo.one_to_many :par_parents ParParent.many_to_many :par_parent_threes do |ds| ds end ParParentThree.many_to_many :par_parents ParParent.association_reflection(:par_parent_two).reciprocal.should == :par_parents ParParent.association_reflection(:par_parent_threes).reciprocal.should == :par_parents end end describe Sequel::Model::Associations::AssociationReflection, "#select" do before do @c = Class.new(Sequel::Model(:foo)) class ::ParParent < Sequel::Model; end end after do Object.send(:remove_const, :ParParent) end it "should use the :select value if present" do @c.many_to_one :c, :class=>ParParent, :select=>[:par_parents__id] @c.association_reflection(:c).keys.should include(:select) @c.association_reflection(:c).select.should == [:par_parents__id] end it "should be the associated_table.* if :select is not present for a many_to_many association" do @c.many_to_many :cs, :class=>'ParParent' @c.association_reflection(:cs).keys.should_not include(:select) @c.association_reflection(:cs).select.should == Sequel::SQL::ColumnAll.new(:par_parents) end it "should be blank if :select is not present for a many_to_one and one_to_many association" do @c.one_to_many :cs, :class=>'ParParent' @c.association_reflection(:cs).keys.should_not include(:select) @c.association_reflection(:cs).select.should == nil @c.many_to_one :c, :class=>'ParParent' @c.association_reflection(:c).keys.should_not include(:select) @c.association_reflection(:c).select.should == nil end end describe Sequel::Model::Associations::AssociationReflection, "#can_have_associated_objects?"
do it "should be true for any given object (for backward compatibility)" do Sequel::Model::Associations::AssociationReflection.new.can_have_associated_objects?(Object.new).should == true end end describe Sequel::Model::Associations::AssociationReflection, "#associated_object_keys" do before do @c = Class.new(Sequel::Model(:foo)) class ::ParParent < Sequel::Model; end end after do Object.send(:remove_const, :ParParent) end it "should use the primary keys for a many_to_one association" do @c.many_to_one :c, :class=>ParParent @c.association_reflection(:c).associated_object_keys.should == [:id] @c.many_to_one :c, :class=>ParParent, :primary_key=>:d_id @c.association_reflection(:c).associated_object_keys.should == [:d_id] @c.many_to_one :c, :class=>ParParent, :key=>[:c_id1, :c_id2], :primary_key=>[:id1, :id2] @c.association_reflection(:c).associated_object_keys.should == [:id1, :id2] end it "should use the keys for a one_to_many association" do ParParent.one_to_many :cs, :class=>ParParent ParParent.association_reflection(:cs).associated_object_keys.should == [:par_parent_id] @c.one_to_many :cs, :class=>ParParent, :key=>:d_id @c.association_reflection(:cs).associated_object_keys.should == [:d_id] @c.one_to_many :cs, :class=>ParParent, :key=>[:c_id1, :c_id2], :primary_key=>[:id1, :id2] @c.association_reflection(:cs).associated_object_keys.should == [:c_id1, :c_id2] end it "should use the right primary keys for a many_to_many association" do @c.many_to_many :cs, :class=>ParParent @c.association_reflection(:cs).associated_object_keys.should == [:id] @c.many_to_many :cs, :class=>ParParent, :right_primary_key=>:d_id @c.association_reflection(:cs).associated_object_keys.should == [:d_id] @c.many_to_many :cs, :class=>ParParent, :right_key=>[:c_id1, :c_id2], :right_primary_key=>[:id1, :id2] @c.association_reflection(:cs).associated_object_keys.should == [:id1, :id2] end end describe Sequel::Model::Associations::AssociationReflection do before do @c = Class.new(Sequel::Model(:foo)) def @c.name() "C" end end it "#eager_loading_predicate_key should be an alias of predicate_key for backwards compatibility" do @c.one_to_many :cs, :class=>@c @c.dataset.literal(@c.association_reflection(:cs).eager_loading_predicate_key).should == 'foo.c_id' end it "one_to_many #qualified_primary_key should be a qualified version of the primary key" do @c.one_to_many :cs, :class=>@c @c.dataset.literal(@c.association_reflection(:cs).qualified_primary_key).should == 'foo.id' end it "many_to_many #associated_key_column should be the left key" do @c.many_to_many :cs, :class=>@c @c.association_reflection(:cs).associated_key_column.should == :c_id end it "many_to_many #qualified_right_key should be a qualified version of the primary key" do @c.many_to_many :cs, :class=>@c, :right_key=>:c2_id @c.dataset.literal(@c.association_reflection(:cs).qualified_right_key).should == 'cs_cs.c2_id' end it "many_to_many #qualified_right_primary_key should be a qualified version of the primary key" do @c.many_to_many :cs, :class=>@c @c.dataset.literal(@c.association_reflection(:cs).qualified_right_primary_key).should == 'foo.id' end end describe Sequel::Model::Associations::AssociationReflection, "#remove_before_destroy?" 
do before do @c = Class.new(Sequel::Model(:foo)) end it "should be true for many_to_one and many_to_many associations" do @c.many_to_one :c, :class=>@c @c.association_reflection(:c).remove_before_destroy?.should be_true @c.many_to_many :cs, :class=>@c @c.association_reflection(:cs).remove_before_destroy?.should be_true end it "should be false for one_to_one and one_to_many associations" do @c.one_to_one :c, :class=>@c @c.association_reflection(:c).remove_before_destroy?.should be_false @c.one_to_many :cs, :class=>@c @c.association_reflection(:cs).remove_before_destroy?.should be_false end end describe Sequel::Model::Associations::AssociationReflection, "#eager_limit_strategy" do before do @c = Class.new(Sequel::Model(:a)) end after do Sequel::Model.default_eager_limit_strategy = true end it "should be nil by default for *_one associations" do @c.many_to_one :c, :class=>@c @c.association_reflection(:c).eager_limit_strategy.should be_nil @c.one_to_one :c, :class=>@c @c.association_reflection(:c).eager_limit_strategy.should be_nil end it "should be :ruby by default for *_many associations" do @c.one_to_many :cs, :class=>@c, :limit=>1 @c.association_reflection(:cs).eager_limit_strategy.should == :ruby @c.many_to_many :cs, :class=>@c, :limit=>1 @c.association_reflection(:cs).eager_limit_strategy.should == :ruby end it "should be nil for many_to_one associations" do @c.many_to_one :c, :class=>@c, :eager_limit_strategy=>true @c.association_reflection(:c).eager_limit_strategy.should be_nil @c.many_to_one :c, :class=>@c, :eager_limit_strategy=>:distinct_on @c.association_reflection(:c).eager_limit_strategy.should be_nil end it "should be a symbol for other associations if given a symbol" do @c.one_to_one :c, :class=>@c, :eager_limit_strategy=>:distinct_on @c.association_reflection(:c).eager_limit_strategy.should == :distinct_on @c.one_to_many :cs, :class=>@c, :eager_limit_strategy=>:window_function, :limit=>1 @c.association_reflection(:cs).eager_limit_strategy.should == :window_function end it "should use :distinct_on for one_to_one associations if picking and the association dataset supports ordered distinct on" do def (@c.dataset).supports_ordered_distinct_on?() true end @c.one_to_one :c, :class=>@c, :eager_limit_strategy=>true @c.association_reflection(:c).eager_limit_strategy.should == :distinct_on end it "should use :window_function for associations if picking and the association dataset supports window functions" do def (@c.dataset).supports_window_functions?() true end @c.one_to_one :c, :class=>@c, :eager_limit_strategy=>true @c.association_reflection(:c).eager_limit_strategy.should == :window_function @c.one_to_many :cs, :class=>@c, :eager_limit_strategy=>true, :limit=>1 @c.association_reflection(:cs).eager_limit_strategy.should == :window_function @c.many_to_many :cs, :class=>@c, :eager_limit_strategy=>true, :limit=>1 @c.association_reflection(:cs).eager_limit_strategy.should == :window_function end it "should use :ruby for *_many associations if picking and the association dataset doesn't support window functions" do @c.one_to_many :cs, :class=>@c, :eager_limit_strategy=>true, :limit=>1 @c.association_reflection(:cs).eager_limit_strategy.should == :ruby @c.many_to_many :cs, :class=>@c, :eager_limit_strategy=>true, :limit=>1 @c.association_reflection(:cs).eager_limit_strategy.should == :ruby end it "should respect Model.default_eager_limit_strategy for *_many associations" do Sequel::Model.default_eager_limit_strategy = :window_function Sequel::Model.default_eager_limit_strategy.should == :window_function c = Class.new(Sequel::Model) c.dataset = :a c.default_eager_limit_strategy.should == :window_function c.one_to_many :cs, :class=>c, :limit=>1 c.association_reflection(:cs).eager_limit_strategy.should == :window_function c.many_to_many :cs, :class=>c, :limit=>1 c.association_reflection(:cs).eager_limit_strategy.should == :window_function Sequel::Model.default_eager_limit_strategy = true c = Class.new(Sequel::Model) c.dataset = :a c.one_to_many :cs, :class=>c, :limit=>1 c.association_reflection(:cs).eager_limit_strategy.should == :ruby def (c.dataset).supports_window_functions?() true end c.many_to_many :cs, :class=>c, :limit=>1 c.association_reflection(:cs).eager_limit_strategy.should == :window_function end it "should ignore Model.default_eager_limit_strategy for one_to_one associations" do @c.default_eager_limit_strategy = :window_function @c.one_to_one :c, :class=>@c @c.association_reflection(:c).eager_limit_strategy.should be_nil end end describe Sequel::Model, " association reflection methods" do before do @c1 = Class.new(Sequel::Model(:nodes)) do def self.name; 'Node'; end def self.to_s; 'Node'; end end DB.reset end it "#all_association_reflections should include all association reflection hashes" do @c1.all_association_reflections.should == [] @c1.associate :many_to_one, :parent, :class => @c1 @c1.all_association_reflections.collect{|v| v[:name]}.should == [:parent] @c1.all_association_reflections.collect{|v| v[:type]}.should == [:many_to_one] @c1.all_association_reflections.collect{|v| v[:class]}.should == [@c1] @c1.associate :one_to_many, :children, :class => @c1 @c1.all_association_reflections.sort_by{|x|x[:name].to_s} @c1.all_association_reflections.sort_by{|x|x[:name].to_s}.collect{|v| v[:name]}.should == [:children, :parent] @c1.all_association_reflections.sort_by{|x|x[:name].to_s}.collect{|v| v[:type]}.should == [:one_to_many, :many_to_one] @c1.all_association_reflections.sort_by{|x|x[:name].to_s}.collect{|v| v[:class]}.should == [@c1, @c1] end it "#association_reflection should return nil for nonexistent association" do @c1.association_reflection(:blah).should == nil end it "#association_reflection should return association reflection hash if association exists" do @c1.associate :many_to_one, :parent, :class => @c1 @c1.association_reflection(:parent).should be_a_kind_of(Sequel::Model::Associations::AssociationReflection) @c1.association_reflection(:parent)[:name].should == :parent @c1.association_reflection(:parent)[:type].should == :many_to_one @c1.association_reflection(:parent)[:class].should == @c1 @c1.associate :one_to_many, :children, :class => @c1 @c1.association_reflection(:children).should be_a_kind_of(Sequel::Model::Associations::AssociationReflection) @c1.association_reflection(:children)[:name].should == :children @c1.association_reflection(:children)[:type].should == :one_to_many @c1.association_reflection(:children)[:class].should == @c1 end it "#associations should include all association names" do @c1.associations.should == [] @c1.associate :many_to_one, :parent, :class => @c1 @c1.associations.should == [:parent] @c1.associate :one_to_many, :children, :class => @c1 @c1.associations.sort_by{|x|x.to_s}.should == [:children, :parent] end it "association reflections should be copied upon subclassing" do @c1.associate :many_to_one, :parent, :class => @c1 c = Class.new(@c1) @c1.associations.should == [:parent] c.associations.should == [:parent] c.associate :many_to_one, :parent2, :class => @c1 @c1.associations.should == [:parent]
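# The subclass works from its own copy of the reflections: adding :parent2
# to the subclass must leave the parent class's association list untouched.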
c.associations.sort_by{|x| x.to_s}.should == [:parent, :parent2] c.instance_methods.map{|x| x.to_s}.should include('parent') end end
ruby-sequel-4.1.1/spec/model/associations_spec.rb
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe Sequel::Model, "associate" do it "should use explicit class if given a class, symbol, or string" do begin klass = Class.new(Sequel::Model(:nodes)) class ::ParParent < Sequel::Model; end klass.associate :many_to_one, :par_parent0, :class=>ParParent klass.associate :one_to_many, :par_parent1s, :class=>'ParParent' klass.associate :many_to_many, :par_parent2s, :class=>:ParParent klass.association_reflection(:"par_parent0").associated_class.should == ParParent klass.association_reflection(:"par_parent1s").associated_class.should == ParParent klass.association_reflection(:"par_parent2s").associated_class.should == ParParent ensure Object.send(:remove_const, :ParParent) end end it "should default to associating to other models in the same scope" do begin class ::AssociationModuleTest class Album < Sequel::Model many_to_one :artist many_to_many :tags end class Artist< Sequel::Model one_to_many :albums end class Tag < Sequel::Model many_to_many :albums end end ::AssociationModuleTest::Album.association_reflection(:artist).associated_class.should == ::AssociationModuleTest::Artist ::AssociationModuleTest::Album.association_reflection(:tags).associated_class.should == ::AssociationModuleTest::Tag ::AssociationModuleTest::Artist.association_reflection(:albums).associated_class.should == ::AssociationModuleTest::Album ::AssociationModuleTest::Tag.association_reflection(:albums).associated_class.should == ::AssociationModuleTest::Album ensure Object.send(:remove_const, :AssociationModuleTest) end end it "should add a model_object and association_reflection accessors to the dataset, and return it with the current model object" do klass = Class.new(Sequel::Model(:nodes)) do columns :id, :a_id end mod = Module.new do def blah filter{|o| o.__send__(association_reflection[:key]) > model_object.id*2} end end klass.associate :many_to_one, :a, :class=>klass klass.associate :one_to_many, :bs, :key=>:b_id, :class=>klass, :extend=>mod klass.associate :many_to_many, :cs, :class=>klass node = klass.load(:id=>1) node.a_dataset.model_object.should == node node.bs_dataset.model_object.should == node node.cs_dataset.model_object.should == node node.a_dataset.association_reflection.should == klass.association_reflection(:a) node.bs_dataset.association_reflection.should == klass.association_reflection(:bs) node.cs_dataset.association_reflection.should == klass.association_reflection(:cs) node.bs_dataset.blah.sql.should == 'SELECT * FROM nodes WHERE ((nodes.b_id = 1) AND (b_id > 2))' end it "should allow extending the dataset with :extend option" do klass = Class.new(Sequel::Model(:nodes)) do columns :id, :a_id end mod = Module.new do def blah 1
end end mod2 = Module.new do def blar 2 end end klass.associate :many_to_one, :a, :class=>klass, :extend=>mod klass.associate :one_to_many, :bs, :class=>klass, :extend=>[mod] klass.associate :many_to_many, :cs, :class=>klass, :extend=>[mod, mod2] node = klass.load(:id=>1) node.a_dataset.blah.should == 1 node.bs_dataset.blah.should == 1 node.cs_dataset.blah.should == 1 node.cs_dataset.blar.should == 2 end it "should clone an existing association with the :clone option" do begin class ::ParParent < Sequel::Model; end klass = Class.new(Sequel::Model(:nodes)) klass.many_to_one(:par_parent, :order=>:a){1} klass.one_to_many(:par_parent1s, :class=>'ParParent', :limit=>12){4} klass.many_to_many(:par_parent2s, :class=>:ParParent, :uniq=>true){2} klass.many_to_one :par, :clone=>:par_parent, :select=>:b klass.one_to_many :par1s, :clone=>:par_parent1s, :order=>:b, :limit=>10, :block=>nil klass.many_to_many(:par2s, :clone=>:par_parent2s, :order=>:c){3} klass.association_reflection(:par).associated_class.should == ParParent klass.association_reflection(:par1s).associated_class.should == ParParent klass.association_reflection(:par2s).associated_class.should == ParParent klass.association_reflection(:par)[:order].should == :a klass.association_reflection(:par).select.should == :b klass.association_reflection(:par)[:block].call.should == 1 klass.association_reflection(:par1s)[:limit].should == 10 klass.association_reflection(:par1s)[:order].should == :b klass.association_reflection(:par1s)[:block].should == nil klass.association_reflection(:par2s)[:after_load].length.should == 1 klass.association_reflection(:par2s)[:order].should == :c klass.association_reflection(:par2s)[:block].call.should == 3 ensure Object.send(:remove_const, :ParParent) end end it "should raise an error if attempting to clone an association of differing type" do c = Class.new(Sequel::Model(:c)) c.many_to_one :c proc{c.one_to_many :cs, :clone=>:c}.should raise_error(Sequel::Error) end it "should allow cloning of one_to_many to one_to_one associations and vice-versa" do c = Class.new(Sequel::Model(:c)) c.one_to_one :c proc{c.one_to_many :cs, :clone=>:c}.should_not raise_error proc{c.one_to_one :c2, :clone=>:cs}.should_not raise_error end it "should clear associations cache when refreshing object manually" do c = Class.new(Sequel::Model(:c)) c.many_to_one :c o = c.new o.associations[:c] = 1 o.refresh o.associations.should == {} end it "should not clear associations cache when refreshing object after save" do c = Class.new(Sequel::Model(:c)) c.many_to_one :c o = c.new o.associations[:c] = 1 o.save o.associations.should == {:c=>1} end it "should not clear associations cache when saving with insert_select" do ds = Sequel::Model.db[:c] def ds.supports_insert_select?() true end def ds.insert_select(*) {:id=>1} end c = Class.new(Sequel::Model(ds)) c.many_to_one :c o = c.new o.associations[:c] = 1 o.save o.associations.should == {:c=>1} end end describe Sequel::Model, "many_to_one" do before do @c2 = Class.new(Sequel::Model(:nodes)) do unrestrict_primary_key columns :id, :parent_id, :par_parent_id, :blah end @dataset = @c2.dataset DB.reset end it "should use implicit key if omitted" do @c2.many_to_one :parent, :class => @c2 d = @c2.new(:id => 1, :parent_id => 234) p = d.parent p.class.should == @c2 p.values.should == {:x => 1, :id => 1} DB.sqls.should == ["SELECT * FROM nodes WHERE id = 234"] end it "should allow association with the same name as the key if :key_column is given" do @c2.def_column_alias(:parent_id_id, :parent_id) @c2.many_to_one 
:parent_id, :key_column=>:parent_id, :class => @c2 d = @c2.load(:id => 1, :parent_id => 234) d.parent_id_dataset.sql.should == "SELECT * FROM nodes WHERE (nodes.id = 234) LIMIT 1" d.parent_id.should == @c2.load(:x => 1, :id => 1) d.parent_id_id.should == 234 d[:parent_id].should == 234 DB.sqls.should == ["SELECT * FROM nodes WHERE id = 234"] d.parent_id_id = 3 d.parent_id_id.should == 3 d[:parent_id].should == 3 end it "should use implicit class if omitted" do begin class ::ParParent < Sequel::Model; end @c2.many_to_one :par_parent @c2.new(:id => 1, :par_parent_id => 234).par_parent.class.should == ParParent DB.sqls.should == ["SELECT * FROM par_parents WHERE id = 234"] ensure Object.send(:remove_const, :ParParent) end end it "should use class inside module if given as a string" do begin module ::Par class Parent < Sequel::Model; end end @c2.many_to_one :par_parent, :class=>"Par::Parent" @c2.new(:id => 1, :par_parent_id => 234).par_parent.class.should == Par::Parent DB.sqls.should == ["SELECT * FROM parents WHERE id = 234"] ensure Object.send(:remove_const, :Par) end end it "should use explicit key if given" do @c2.many_to_one :parent, :class => @c2, :key => :blah d = @c2.new(:id => 1, :blah => 567) p = d.parent p.class.should == @c2 p.values.should == {:x => 1, :id => 1} DB.sqls.should == ["SELECT * FROM nodes WHERE id = 567"] end it "should respect :qualify => false option" do @c2.many_to_one :parent, :class => @c2, :key => :blah, :qualify=>false @c2.new(:id => 1, :blah => 567).parent DB.sqls.should == ["SELECT * FROM nodes WHERE id = 567"] end it "should use :primary_key option if given" do @c2.many_to_one :parent, :class => @c2, :key => :blah, :primary_key => :pk @c2.new(:id => 1, :blah => 567).parent DB.sqls.should == ["SELECT * FROM nodes WHERE (nodes.pk = 567) LIMIT 1"] end it "should support composite keys" do @c2.many_to_one :parent, :class => @c2, :key=>[:id, :parent_id], :primary_key=>[:parent_id, :id] @c2.new(:id => 1, :parent_id => 234).parent DB.sqls.should == ["SELECT * FROM nodes WHERE ((nodes.parent_id = 1) AND (nodes.id = 234)) LIMIT 1"] end it "should not issue query if not all keys have values" do @c2.many_to_one :parent, :class => @c2, :key=>[:id, :parent_id], :primary_key=>[:parent_id, :id] @c2.new(:id => 1, :parent_id => nil).parent.should == nil DB.sqls.should == [] end it "should raise an Error unless same number of composite keys used" do proc{@c2.many_to_one :parent, :class => @c2, :primary_key=>[:parent_id, :id]}.should raise_error(Sequel::Error) proc{@c2.many_to_one :parent, :class => @c2, :key=>[:id, :parent_id], :primary_key=>:id}.should raise_error(Sequel::Error) proc{@c2.many_to_one :parent, :class => @c2, :key=>:id, :primary_key=>[:parent_id, :id]}.should raise_error(Sequel::Error) proc{@c2.many_to_one :parent, :class => @c2, :key=>[:id, :parent_id, :blah], :primary_key=>[:parent_id, :id]}.should raise_error(Sequel::Error) end it "should use :select option if given" do @c2.many_to_one :parent, :class => @c2, :key => :blah, :select=>[:id, :name] @c2.new(:id => 1, :blah => 567).parent DB.sqls.should == ["SELECT id, name FROM nodes WHERE (nodes.id = 567) LIMIT 1"] end it "should use :conditions option if given" do @c2.many_to_one :parent, :class => @c2, :key => :blah, :conditions=>{:a=>32} @c2.new(:id => 1, :blah => 567).parent DB.sqls.should == ["SELECT * FROM nodes WHERE ((a = 32) AND (nodes.id = 567)) LIMIT 1"] @c2.many_to_one :parent, :class => @c2, :key => :blah, :conditions=>:a @c2.new(:id => 1, :blah => 567).parent DB.sqls.should == ["SELECT * FROM 
nodes WHERE (a AND (nodes.id = 567)) LIMIT 1"] end it "should support :order, :limit (only for offset), and :dataset options, as well as a block" do @c2.many_to_one :child_20, :class => @c2, :key=>:id, :dataset=>proc{model.filter(:parent_id=>pk)}, :limit=>[10,20], :order=>:name do |ds| ds.filter{x > 1} end @c2.load(:id => 100).child_20 DB.sqls.should == ["SELECT * FROM nodes WHERE ((parent_id = 100) AND (x > 1)) ORDER BY name LIMIT 1 OFFSET 20"] end it "should return nil if key value is nil" do @c2.many_to_one :parent, :class => @c2 @c2.new(:id => 1).parent.should == nil DB.sqls.should == [] end it "should cache negative lookup" do @c2.many_to_one :parent, :class => @c2 @c2.dataset._fetch = [] d = @c2.new(:id => 1, :parent_id=>555) DB.sqls.should == [] d.parent.should == nil DB.sqls.should == ['SELECT * FROM nodes WHERE id = 555'] d.parent.should == nil DB.sqls.should == [] end it "should define a setter method" do @c2.many_to_one :parent, :class => @c2 d = @c2.new(:id => 1) d.parent = @c2.new(:id => 4321) d.values.should == {:id => 1, :parent_id => 4321} d.parent = nil d.values.should == {:id => 1, :parent_id => nil} e = @c2.new(:id => 6677) d.parent = e d.values.should == {:id => 1, :parent_id => 6677} end it "should have the setter method respect the :primary_key option" do @c2.many_to_one :parent, :class => @c2, :primary_key=>:blah d = @c2.new(:id => 1) d.parent = @c2.new(:id => 4321, :blah=>444) d.values.should == {:id => 1, :parent_id => 444} d.parent = nil d.values.should == {:id => 1, :parent_id => nil} e = @c2.new(:id => 6677, :blah=>8) d.parent = e d.values.should == {:id => 1, :parent_id => 8} end it "should have the setter method respect composite keys" do @c2.many_to_one :parent, :class => @c2, :key=>[:id, :parent_id], :primary_key=>[:parent_id, :id] d = @c2.new(:id => 1, :parent_id=> 234) d.parent = @c2.new(:id => 4, :parent_id=>52) d.values.should == {:id => 52, :parent_id => 4} d.parent = nil d.values.should == {:id => nil, :parent_id => nil} e = @c2.new(:id => 6677, :parent_id=>8) d.parent = e d.values.should == {:id => 8, :parent_id => 6677} end it "should not persist changes until saved" do @c2.many_to_one :parent, :class => @c2 d = @c2.load(:id => 1) DB.reset d.parent = @c2.new(:id => 345) DB.sqls.should == [] d.save_changes DB.sqls.should == ['UPDATE nodes SET parent_id = 345 WHERE (id = 1)'] end it "should populate cache when accessed" do @c2.many_to_one :parent, :class => @c2 d = @c2.load(:id => 1) d.parent_id = 234 d.associations[:parent].should == nil @c2.dataset._fetch = {:id=>234} e = d.parent DB.sqls.should == ["SELECT * FROM nodes WHERE id = 234"] d.associations[:parent].should == e end it "should populate cache when assigned" do @c2.many_to_one :parent, :class => @c2 d = @c2.create(:id => 1) DB.reset d.associations[:parent].should == nil d.parent = @c2.new(:id => 234) e = d.parent d.associations[:parent].should == e DB.sqls.should == [] end it "should use cache if available" do @c2.many_to_one :parent, :class => @c2 d = @c2.create(:id => 1, :parent_id => 234) DB.reset d.associations[:parent] = 42 d.parent.should == 42 DB.sqls.should == [] end it "should not use cache if asked to reload" do @c2.many_to_one :parent, :class => @c2 d = @c2.create(:id => 1) DB.reset d.parent_id = 234 d.associations[:parent] = 42 d.parent(true).should_not == 42 DB.sqls.should == ["SELECT * FROM nodes WHERE id = 234"] end it "should use a callback if given one as the argument" do @c2.many_to_one :parent, :class => @c2 d = @c2.create(:id => 1) DB.reset d.parent_id = 234 
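# The association reader accepts a proc (or a block, tested just below) that
# is called with the association dataset and should return a modified copy;
# this skips any cached value, as the next assertions show. A minimal usage
# sketch against the :parent association defined above:
#   node.parent(proc{|ds| ds.filter{name > 'M'}})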
d.associations[:parent] = 42 d.parent(proc{|ds| ds.filter{name > 'M'}}).should_not == 42 DB.sqls.should == ["SELECT * FROM nodes WHERE ((nodes.id = 234) AND (name > 'M')) LIMIT 1"] end it "should use a block given to the association method as a callback" do @c2.many_to_one :parent, :class => @c2 d = @c2.create(:id => 1) DB.reset d.parent_id = 234 d.associations[:parent] = 42 d.parent{|ds| ds.filter{name > 'M'}}.should_not == 42 DB.sqls.should == ["SELECT * FROM nodes WHERE ((nodes.id = 234) AND (name > 'M')) LIMIT 1"] end it "should have the setter add to the reciprocal one_to_many cached association array if it exists" do @c2.many_to_one :parent, :class => @c2 @c2.one_to_many :children, :class => @c2, :key=>:parent_id @c2.dataset._fetch = [] d = @c2.new(:id => 1) e = @c2.new(:id => 2) DB.sqls.should == [] d.parent = e e.children.should_not(include(d)) DB.sqls.should == ['SELECT * FROM nodes WHERE (nodes.parent_id = 2)'] d = @c2.new(:id => 1) e = @c2.new(:id => 2) e.children.should_not(include(d)) DB.sqls.should == ['SELECT * FROM nodes WHERE (nodes.parent_id = 2)'] d.parent = e e.children.should(include(d)) DB.sqls.should == [] end it "should have setter deal with a one_to_one reciprocal" do @c2.many_to_one :parent, :class => @c2, :key=>:parent_id @c2.one_to_one :child, :class => @c2, :key=>:parent_id d = @c2.new(:id => 1) e = @c2.new(:id => 2) e.associations[:child] = nil d.parent = e e.child.should == d d.parent = nil e.child.should == nil d.parent = e e.child.should == d f = @c2.new(:id => 3) d.parent = nil e.child.should == nil e.associations[:child] = f d.parent = e e.child.should == d end it "should have the setter remove the object from the previous associated object's reciprocal one_to_many cached association array if it exists" do @c2.many_to_one :parent, :class => @c2 @c2.one_to_many :children, :class => @c2, :key=>:parent_id @c2.dataset._fetch = [] d = @c2.new(:id => 1) e = @c2.new(:id => 2) f = @c2.new(:id => 3) e.children.should_not(include(d)) f.children.should_not(include(d)) DB.reset d.parent = e e.children.should(include(d)) d.parent = f f.children.should(include(d)) e.children.should_not(include(d)) d.parent = nil f.children.should_not(include(d)) DB.sqls.should == [] end it "should have the setter not modify the reciprocal if set to same value as current" do @c2.many_to_one :parent, :class => @c2 @c2.one_to_many :children, :class => @c2, :key=>:parent_id c1 = @c2.load(:id => 1, :parent_id=>nil) c2 = @c2.load(:id => 2, :parent_id=>1) c3 = @c2.load(:id => 3, :parent_id=>1) c1.associations[:children] = [c2, c3] c2.associations[:parent] = c1 c2.parent = c1 c1.children.should == [c2, c3] DB.sqls.should == [] end it "should get all matching records and only return the first if :key option is set to nil" do @c2.one_to_many :children, :class => @c2, :key=>:parent_id @c2.many_to_one :first_grand_parent, :class => @c2, :key=>nil, :eager_graph=>:children, :dataset=>proc{model.filter(:children_id=>parent_id)} @c2.dataset.columns(:id, :parent_id, :par_parent_id, :blah)._fetch = [{:id=>1, :parent_id=>0, :par_parent_id=>3, :blah=>4, :children_id=>2, :children_parent_id=>1, :children_par_parent_id=>5, :children_blah=>6}, {}] p = @c2.new(:parent_id=>2) fgp = p.first_grand_parent DB.sqls.should == ["SELECT nodes.id, nodes.parent_id, nodes.par_parent_id, nodes.blah, children.id AS children_id, children.parent_id AS children_parent_id, children.par_parent_id AS children_par_parent_id, children.blah AS children_blah FROM nodes LEFT OUTER JOIN nodes AS children ON (children.parent_id = 
nodes.id) WHERE (children_id = 2)"] fgp.values.should == {:id=>1, :parent_id=>0, :par_parent_id=>3, :blah=>4} fgp.children.first.values.should == {:id=>2, :parent_id=>1, :par_parent_id=>5, :blah=>6} end it "should not create the setter method if :read_only option is used" do @c2.many_to_one :parent, :class => @c2, :read_only=>true @c2.instance_methods.collect{|x| x.to_s}.should(include('parent')) @c2.instance_methods.collect{|x| x.to_s}.should_not(include('parent=')) end it "should not add associations methods directly to class" do @c2.many_to_one :parent, :class => @c2 @c2.instance_methods.collect{|x| x.to_s}.should(include('parent')) @c2.instance_methods.collect{|x| x.to_s}.should(include('parent=')) @c2.instance_methods(false).collect{|x| x.to_s}.should_not(include('parent')) @c2.instance_methods(false).collect{|x| x.to_s}.should_not(include('parent=')) end it "should add associations methods to the :methods_module option" do m = Module.new @c2.many_to_one :parent, :class => @c2, :methods_module=>m m.instance_methods.collect{|x| x.to_s}.should(include('parent')) m.instance_methods.collect{|x| x.to_s}.should(include('parent=')) @c2.instance_methods.collect{|x| x.to_s}.should_not(include('parent')) @c2.instance_methods.collect{|x| x.to_s}.should_not(include('parent=')) end it "should add associations methods directly to class if :methods_module is the class itself" do @c2.many_to_one :parent, :class => @c2, :methods_module=>@c2 @c2.instance_methods(false).collect{|x| x.to_s}.should(include('parent')) @c2.instance_methods(false).collect{|x| x.to_s}.should(include('parent=')) end it "should raise an error if trying to set a model object that doesn't have a valid primary key" do @c2.many_to_one :parent, :class => @c2 p = @c2.new c = @c2.load(:id=>123) proc{c.parent = p}.should raise_error(Sequel::Error) end it "should make the change to the foreign_key value inside a _association= method" do @c2.many_to_one :parent, :class => @c2 @c2.private_instance_methods.collect{|x| x.to_s}.sort.should(include("_parent=")) p = @c2.new c = @c2.load(:id=>123) def p._parent=(x) @x = x end p.should_not_receive(:parent_id=) p.parent = c p.instance_variable_get(:@x).should == c end it "should have the :setter option define the _association= method" do @c2.many_to_one :parent, :class => @c2, :setter=>proc{|x| @x = x} p = @c2.new c = @c2.load(:id=>123) p.should_not_receive(:parent_id=) p.parent = c p.instance_variable_get(:@x).should == c end it "should support (before|after)_set callbacks" do h = [] @c2.many_to_one :parent, :class => @c2, :before_set=>[proc{|x,y| h << x.pk; h << (y ? -y.pk : :y)}, :blah], :after_set=>proc{h << 3} @c2.class_eval do self::Foo = h def []=(a, v) a == :parent_id ? (model::Foo << (v ? 4 : 5)) : super end def blah(x) model::Foo << (x ? 
x.pk : :x) end def blahr(x) model::Foo << 6 end end p = @c2.load(:id=>10) c = @c2.load(:id=>123) h.should == [] p.parent = c h.should == [10, -123, 123, 4, 3] p.parent = nil h.should == [10, -123, 123, 4, 3, 10, :y, :x, 5, 3] end it "should support after_load association callback" do h = [] @c2.many_to_one :parent, :class => @c2, :after_load=>[proc{|x,y| h << [x.pk, y.pk]}, :al] @c2.class_eval do self::Foo = h def al(v) model::Foo << v.pk end dataset._fetch = {:id=>20} end p = @c2.load(:id=>10, :parent_id=>20) parent = p.parent h.should == [[10, 20], 20] parent.pk.should == 20 end it "should support after_load association callback that changes the cached object" do @c2.many_to_one :parent, :class => @c2, :after_load=>:al @c2.class_eval do def al(v) associations[:parent] = :foo end end p = @c2.load(:id=>10, :parent_id=>20) p.parent.should == :foo p.associations[:parent].should == :foo end it "should raise error and not call internal add or remove method if before callback returns false, even if raise_on_save_failure is false" do # The reason for this is that assignment in ruby always returns the argument instead of the result # of the method, so we can't return nil to signal that the association callback prevented the modification p = @c2.new c = @c2.load(:id=>123) p.raise_on_save_failure = false @c2.many_to_one :parent, :class => @c2, :before_set=>:bs def p.bs(x) false end p.should_not_receive(:_parent=) proc{p.parent = c}.should raise_error(Sequel::Error) p.parent.should == nil p.associations[:parent] = c p.parent.should == c proc{p.parent = nil}.should raise_error(Sequel::Error) end it "should raise an error if a callback is not a proc or symbol" do @c2.many_to_one :parent, :class => @c2, :before_set=>Object.new proc{@c2.new.parent = @c2.load(:id=>1)}.should raise_error(Sequel::Error) end end describe Sequel::Model, "one_to_one" do before do @c1 = Class.new(Sequel::Model(:attributes)) do unrestrict_primary_key columns :id, :node_id, :y end @c2 = Class.new(Sequel::Model(:nodes)) do unrestrict_primary_key attr_accessor :xxx def self.name; 'Node'; end def self.to_s; 'Node'; end columns :id, :x, :parent_id, :par_parent_id, :blah, :node_id end @dataset = @c2.dataset @dataset._fetch = {} @c1.dataset._fetch = {} DB.reset end it "should have the getter method return a single object" do @c2.one_to_one :attribute, :class => @c1 att = @c2.new(:id => 1234).attribute DB.sqls.should == ['SELECT * FROM attributes WHERE (attributes.node_id = 1234) LIMIT 1'] att.should be_a_kind_of(@c1) att.values.should == {} end it "should not add a setter method if the :read_only option is true" do @c2.one_to_one :attribute, :class => @c1, :read_only=>true im = @c2.instance_methods.collect{|x| x.to_s} im.should(include('attribute')) im.should_not(include('attribute=')) end it "should add a setter method" do @c2.one_to_one :attribute, :class => @c1 attrib = @c1.new(:id=>3) @c1.dataset._fetch = @c1.instance_dataset._fetch = {:id=>3} @c2.new(:id => 1234).attribute = attrib sqls = DB.sqls ['INSERT INTO attributes (node_id, id) VALUES (1234, 3)', 'INSERT INTO attributes (id, node_id) VALUES (3, 1234)'].should(include(sqls.slice!
1)) sqls.should == ['UPDATE attributes SET node_id = NULL WHERE (node_id = 1234)', "SELECT * FROM attributes WHERE (id = 3) LIMIT 1"] @c2.new(:id => 1234).attribute.should == attrib attrib = @c1.load(:id=>3) @c2.new(:id => 1234).attribute = attrib DB.sqls.should == ["SELECT * FROM attributes WHERE (attributes.node_id = 1234) LIMIT 1", 'UPDATE attributes SET node_id = NULL WHERE ((node_id = 1234) AND (id != 3))', "UPDATE attributes SET node_id = 1234 WHERE (id = 3)"] end it "should use a transaction in the setter method" do @c2.one_to_one :attribute, :class => @c1 @c2.use_transactions = true attrib = @c1.load(:id=>3) @c2.new(:id => 1234).attribute = attrib DB.sqls.should == ['BEGIN', 'UPDATE attributes SET node_id = NULL WHERE ((node_id = 1234) AND (id != 3))', "UPDATE attributes SET node_id = 1234 WHERE (id = 3)", 'COMMIT'] end it "should have setter method respect association filters" do @c2.one_to_one :attribute, :class => @c1, :conditions=>{:a=>1} do |ds| ds.filter(:b=>2) end attrib = @c1.load(:id=>3) @c2.new(:id => 1234).attribute = attrib DB.sqls.should == ['UPDATE attributes SET node_id = NULL WHERE ((a = 1) AND (node_id = 1234) AND (b = 2) AND (id != 3))', "UPDATE attributes SET node_id = 1234 WHERE (id = 3)"] end it "should have the setter method respect the :primary_key option" do @c2.one_to_one :attribute, :class => @c1, :primary_key=>:xxx attrib = @c1.new(:id=>3) @c1.dataset._fetch = @c1.instance_dataset._fetch = {:id=>3} @c2.new(:id => 1234, :xxx=>5).attribute = attrib sqls = DB.sqls ['INSERT INTO attributes (node_id, id) VALUES (5, 3)', 'INSERT INTO attributes (id, node_id) VALUES (3, 5)'].should(include(sqls.slice! 1)) sqls.should == ['UPDATE attributes SET node_id = NULL WHERE (node_id = 5)', "SELECT * FROM attributes WHERE (id = 3) LIMIT 1"] @c2.new(:id => 321, :xxx=>5).attribute.should == attrib attrib = @c1.load(:id=>3) @c2.new(:id => 621, :xxx=>5).attribute = attrib DB.sqls.should == ["SELECT * FROM attributes WHERE (attributes.node_id = 5) LIMIT 1", 'UPDATE attributes SET node_id = NULL WHERE ((node_id = 5) AND (id != 3))', 'UPDATE attributes SET node_id = 5 WHERE (id = 3)'] end it "should have the setter method respect composite keys" do @c2.one_to_one :attribute, :class => @c1, :key=>[:node_id, :y], :primary_key=>[:id, :x] attrib = @c1.load(:id=>3, :y=>6) @c1.dataset._fetch = {:id=>3, :y=>6} @c2.load(:id => 1234, :x=>5).attribute = attrib sqls = DB.sqls sqls.last.should =~ /UPDATE attributes SET (node_id = 1234|y = 5), (node_id = 1234|y = 5) WHERE \(id = 3\)/ sqls.first.should =~ /UPDATE attributes SET (node_id|y) = NULL, (node_id|y) = NULL WHERE \(\(node_id = 1234\) AND \(y = 5\) AND \(id != 3\)\)/ sqls.length.should == 2 end it "should use implicit key if omitted" do @c2.one_to_one :parent, :class => @c2 d = @c2.new(:id => 234) p = d.parent p.class.should == @c2 p.values.should == {} DB.sqls.should == ["SELECT * FROM nodes WHERE (nodes.node_id = 234) LIMIT 1"] end it "should use implicit class if omitted" do begin class ::ParParent < Sequel::Model; end @c2.one_to_one :par_parent @c2.new(:id => 234).par_parent.class.should == ParParent DB.sqls.should == ["SELECT * FROM par_parents WHERE (par_parents.node_id = 234) LIMIT 1"] ensure Object.send(:remove_const, :ParParent) end end it "should use class inside module if given as a string" do begin module ::Par class Parent < Sequel::Model; end end @c2.one_to_one :par_parent, :class=>"Par::Parent" @c2.new(:id => 234).par_parent.class.should == Par::Parent DB.sqls.should == ["SELECT * FROM parents WHERE (parents.node_id = 
234) LIMIT 1"] ensure Object.send(:remove_const, :Par) end end it "should use explicit key if given" do @c2.one_to_one :parent, :class => @c2, :key => :blah d = @c2.new(:id => 234) p = d.parent p.class.should == @c2 p.values.should == {} DB.sqls.should == ["SELECT * FROM nodes WHERE (nodes.blah = 234) LIMIT 1"] end it "should use :primary_key option if given" do @c2.one_to_one :parent, :class => @c2, :key => :pk, :primary_key => :blah @c2.new(:id => 1, :blah => 567).parent DB.sqls.should == ["SELECT * FROM nodes WHERE (nodes.pk = 567) LIMIT 1"] end it "should support composite keys" do @c2.one_to_one :parent, :class => @c2, :primary_key=>[:id, :parent_id], :key=>[:parent_id, :id] @c2.new(:id => 1, :parent_id => 234).parent DB.sqls.should == ["SELECT * FROM nodes WHERE ((nodes.parent_id = 1) AND (nodes.id = 234)) LIMIT 1"] end it "should not issue query if not all keys have values" do @c2.one_to_one :parent, :class => @c2, :key=>[:id, :parent_id], :primary_key=>[:parent_id, :id] @c2.new(:id => 1, :parent_id => nil).parent.should == nil DB.sqls.should == [] end it "should raise an Error unless same number of composite keys used" do proc{@c2.one_to_one :parent, :class => @c2, :primary_key=>[:parent_id, :id]}.should raise_error(Sequel::Error) proc{@c2.one_to_one :parent, :class => @c2, :key=>[:id, :parent_id], :primary_key=>:id}.should raise_error(Sequel::Error) proc{@c2.one_to_one :parent, :class => @c2, :key=>:id, :primary_key=>[:parent_id, :id]}.should raise_error(Sequel::Error) proc{@c2.one_to_one :parent, :class => @c2, :key=>[:id, :parent_id, :blah], :primary_key=>[:parent_id, :id]}.should raise_error(Sequel::Error) end it "should use :select option if given" do @c2.one_to_one :parent, :class => @c2, :select=>[:id, :name] @c2.new(:id => 567).parent DB.sqls.should == ["SELECT id, name FROM nodes WHERE (nodes.node_id = 567) LIMIT 1"] end it "should use :conditions option if given" do @c2.one_to_one :parent, :class => @c2, :conditions=>{:a=>32} @c2.new(:id => 567).parent DB.sqls.should == ["SELECT * FROM nodes WHERE ((a = 32) AND (nodes.node_id = 567)) LIMIT 1"] @c2.one_to_one :parent, :class => @c2, :conditions=>:a @c2.new(:id => 567).parent DB.sqls.should == ["SELECT * FROM nodes WHERE (a AND (nodes.node_id = 567)) LIMIT 1"] end it "should support :order, :limit (only for offset), and :dataset options, as well as a block" do @c2.one_to_one :child_20, :class => @c2, :key=>:id, :dataset=>proc{model.filter(:parent_id=>pk)}, :limit=>[10,20], :order=>:name do |ds| ds.filter{x > 1} end @c2.load(:id => 100).child_20 DB.sqls.should == ["SELECT * FROM nodes WHERE ((parent_id = 100) AND (x > 1)) ORDER BY name LIMIT 1 OFFSET 20"] end it "should return nil if primary_key value is nil" do @c2.one_to_one :parent, :class => @c2, :primary_key=>:node_id @c2.new(:id => 1).parent.should be_nil DB.sqls.should == [] end it "should cache negative lookup" do @c2.one_to_one :parent, :class => @c2 @c2.dataset._fetch = [] d = @c2.new(:id => 555) DB.sqls.should == [] d.parent.should == nil DB.sqls.should == ['SELECT * FROM nodes WHERE (nodes.node_id = 555) LIMIT 1'] d.parent.should == nil DB.sqls.should == [] end it "should have the setter method respect the :key option" do @c2.one_to_one :parent, :class => @c2, :key=>:blah d = @c2.new(:id => 3) e = @c2.new(:id => 4321, :blah=>444) @c2.dataset._fetch = @c2.instance_dataset._fetch = {:id => 4321, :blah => 3} d.parent = e e.values.should == {:id => 4321, :blah => 3} sqls = DB.sqls ["INSERT INTO nodes (blah, id) VALUES (3, 4321)", "INSERT INTO nodes (id, blah) VALUES 
(4321, 3)"].should include(sqls.slice! 1) sqls.should == ["UPDATE nodes SET blah = NULL WHERE (blah = 3)", "SELECT * FROM nodes WHERE (id = 4321) LIMIT 1"] end it "should persist changes to associated object when the setter is called" do @c2.one_to_one :parent, :class => @c2 d = @c2.load(:id => 1) d.parent = @c2.load(:id => 3, :node_id=>345) DB.sqls.should == ["UPDATE nodes SET node_id = NULL WHERE ((node_id = 1) AND (id != 3))", "UPDATE nodes SET node_id = 1 WHERE (id = 3)"] end it "should populate cache when accessed" do @c2.one_to_one :parent, :class => @c2 d = @c2.load(:id => 1) d.associations[:parent].should == nil @c2.dataset._fetch = {:id=>234} e = d.parent DB.sqls.should == ["SELECT * FROM nodes WHERE (nodes.node_id = 1) LIMIT 1"] d.parent DB.sqls.should == [] d.associations[:parent].should == e end it "should populate cache when assigned" do @c2.one_to_one :parent, :class => @c2 d = @c2.load(:id => 1) d.associations[:parent].should == nil e = @c2.load(:id => 234) d.parent = e f = d.parent d.associations[:parent].should == e e.should == f end it "should use cache if available" do @c2.one_to_one :parent, :class => @c2 d = @c2.load(:id => 1, :parent_id => 234) d.associations[:parent] = 42 d.parent.should == 42 DB.sqls.should == [] end it "should not use cache if asked to reload" do @c2.one_to_one :parent, :class => @c2 d = @c2.load(:id => 1) d.associations[:parent] = [42] d.parent(true).should_not == 42 DB.sqls.should == ["SELECT * FROM nodes WHERE (nodes.node_id = 1) LIMIT 1"] end it "should have the setter set the reciprocal many_to_one cached association" do @c2.one_to_one :parent, :class => @c2, :key=>:parent_id @c2.many_to_one :child, :class => @c2, :key=>:parent_id d = @c2.load(:id => 1) e = @c2.load(:id => 2) d.parent = e e.child.should == d DB.sqls.should == ["UPDATE nodes SET parent_id = NULL WHERE ((parent_id = 1) AND (id != 2))", "UPDATE nodes SET parent_id = 1 WHERE (id = 2)"] d.parent = nil e.child.should == nil DB.sqls.should == ["UPDATE nodes SET parent_id = NULL WHERE (parent_id = 1)"] end it "should have the setter remove the object from the previous associated object's reciprocal many_to_one cached association array if it exists" do @c2.one_to_one :parent, :class => @c2, :key=>:parent_id @c2.many_to_one :child, :class => @c2, :key=>:parent_id @c2.dataset._fetch = [] d = @c2.load(:id => 1) e = @c2.load(:id => 2) f = @c2.load(:id => 3) e.child.should == nil f.child.should == nil d.parent = e e.child.should == d d.parent = f f.child.should == d e.child.should == nil d.parent = nil f.child.should == nil end it "should have the setter not modify the reciprocal if set to same value as current" do @c2.one_to_one :parent, :class => @c2, :key=>:parent_id @c2.many_to_one :child, :class => @c2, :key=>:parent_id c1 = @c2.load(:id => 1, :parent_id=>nil) c2 = @c2.load(:id => 2, :parent_id=>1) c1.associations[:child] = c2 c2.associations[:parent] = c1 c2.parent = c1 c1.child.should == c2 DB.sqls.should == [] end it "should not add associations methods directly to class" do @c2.one_to_one :parent, :class => @c2 @c2.instance_methods.collect{|x| x.to_s}.should(include('parent')) @c2.instance_methods.collect{|x| x.to_s}.should(include('parent=')) @c2.instance_methods(false).collect{|x| x.to_s}.should_not(include('parent')) @c2.instance_methods(false).collect{|x| x.to_s}.should_not(include('parent=')) end it "should raise an error if the current model object that doesn't have a valid primary key" do @c2.one_to_one :parent, :class => @c2 p = @c2.new c = @c2.load(:id=>123) proc{p.parent 
= c}.should raise_error(Sequel::Error) end it "should make the change to the foreign_key value inside a _association= method" do @c2.one_to_one :parent, :class => @c2 @c2.private_instance_methods.collect{|x| x.to_s}.sort.should(include("_parent=")) c = @c2.new p = @c2.load(:id=>123) def p._parent=(x) @x = x end p.should_not_receive(:parent_id=) p.parent = c p.instance_variable_get(:@x).should == c end it "should have a :setter option define the _association= method" do @c2.one_to_one :parent, :class => @c2, :setter=>proc{|x| @x = x} c = @c2.new p = @c2.load(:id=>123) p.should_not_receive(:parent_id=) p.parent = c p.instance_variable_get(:@x).should == c end it "should support (before|after)_set callbacks" do h = [] @c2.one_to_one :parent, :class => @c2, :before_set=>[proc{|x,y| h << x.pk; h << (y ? -y.pk : :y)}, :blah], :after_set=>proc{h << 3} @c2.class_eval do self::Foo = h def blah(x) model::Foo << (x ? x.pk : :x) end def blahr(x) model::Foo << 6 end end p = @c2.load(:id=>10) c = @c2.load(:id=>123) h.should == [] p.parent = c h.should == [10, -123, 123, 3] p.parent = nil h.should == [10, -123, 123, 3, 10, :y, :x, 3] end it "should support after_load association callback" do h = [] @c2.one_to_one :parent, :class => @c2, :after_load=>[proc{|x,y| h << [x.pk, y.pk]}, :al] @c2.class_eval do self::Foo = h def al(v) model::Foo << v.pk end @dataset._fetch = {:id=>20} end p = @c2.load(:id=>10) parent = p.parent h.should == [[10, 20], 20] parent.pk.should == 20 end it "should raise error and not call internal add or remove method if before callback returns false, even if raise_on_save_failure is false" do # The reason for this is that assignment in ruby always returns the argument instead of the result # of the method, so we can't return nil to signal that the association callback prevented the modification p = @c2.new c = @c2.load(:id=>123) p.raise_on_save_failure = false @c2.one_to_one :parent, :class => @c2, :before_set=>:bs def p.bs(x) false end p.should_not_receive(:_parent=) proc{p.parent = c}.should raise_error(Sequel::Error) p.parent.should == nil p.associations[:parent] = c p.parent.should == c proc{p.parent = nil}.should raise_error(Sequel::Error) end it "should raise an error if a callback is not a proc or symbol" do @c2.one_to_one :parent, :class => @c2, :before_set=>Object.new proc{@c2.new.parent = @c2.load(:id=>1)}.should raise_error(Sequel::Error) end it "should work_correctly when used with associate" do @c2.associate :one_to_one, :parent, :class => @c2 @c2.load(:id => 567).parent.should == @c2.load({}) DB.sqls.should == ["SELECT * FROM nodes WHERE (nodes.node_id = 567) LIMIT 1"] end end describe Sequel::Model, "one_to_many" do before do @c1 = Class.new(Sequel::Model(:attributes)) do unrestrict_primary_key columns :id, :node_id, :y, :z end @c2 = Class.new(Sequel::Model(:nodes)) do def _refresh(ds); end unrestrict_primary_key attr_accessor :xxx def self.name; 'Node'; end def self.to_s; 'Node'; end columns :id, :x end @dataset = @c2.dataset @dataset._fetch = {} @c1.dataset._fetch = proc{|sql| sql =~ /SELECT 1/ ? 
{:a=>1} : {}} DB.reset end it "should use implicit key if omitted" do @c2.one_to_many :attributes, :class => @c1 @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT * FROM attributes WHERE (attributes.node_id = 1234)' end it "should use implicit class if omitted" do begin class ::HistoricalValue < Sequel::Model; end @c2.one_to_many :historical_values v = @c2.new(:id => 1234).historical_values_dataset v.should be_a_kind_of(Sequel::Dataset) v.sql.should == 'SELECT * FROM historical_values WHERE (historical_values.node_id = 1234)' v.model.should == HistoricalValue ensure Object.send(:remove_const, :HistoricalValue) end end it "should use class inside a module if given as a string" do begin module ::Historical class Value < Sequel::Model; end end @c2.one_to_many :historical_values, :class=>'Historical::Value' v = @c2.new(:id => 1234).historical_values_dataset v.should be_a_kind_of(Sequel::Dataset) v.sql.should == 'SELECT * FROM values WHERE (values.node_id = 1234)' v.model.should == Historical::Value ensure Object.send(:remove_const, :Historical) end end it "should use a callback if given one as the argument" do @c2.one_to_many :attributes, :class => @c1, :key => :nodeid d = @c2.load(:id => 1234) d.associations[:attributes] = [] d.attributes(proc{|ds| ds.filter{name > 'M'}}).should_not == [] DB.sqls.should == ["SELECT * FROM attributes WHERE ((attributes.nodeid = 1234) AND (name > 'M'))"] end it "should use explicit key if given" do @c2.one_to_many :attributes, :class => @c1, :key => :nodeid @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT * FROM attributes WHERE (attributes.nodeid = 1234)' end it "should support_composite keys" do @c2.one_to_many :attributes, :class => @c1, :key =>[:node_id, :id], :primary_key=>[:id, :x] @c2.load(:id => 1234, :x=>234).attributes_dataset.sql.should == 'SELECT * FROM attributes WHERE ((attributes.node_id = 1234) AND (attributes.id = 234))' end it "should not issue query if not all keys have values" do @c2.one_to_many :attributes, :class => @c1, :key =>[:node_id, :id], :primary_key=>[:id, :x] @c2.load(:id => 1234, :x=>nil).attributes.should == [] DB.sqls.should == [] end it "should raise an Error unless same number of composite keys used" do proc{@c2.one_to_many :attributes, :class => @c1, :key=>[:node_id, :id]}.should raise_error(Sequel::Error) proc{@c2.one_to_many :attributes, :class => @c1, :primary_key=>[:node_id, :id]}.should raise_error(Sequel::Error) proc{@c2.one_to_many :attributes, :class => @c1, :key=>[:node_id, :id], :primary_key=>:id}.should raise_error(Sequel::Error) proc{@c2.one_to_many :attributes, :class => @c1, :key=>:id, :primary_key=>[:node_id, :id]}.should raise_error(Sequel::Error) proc{@c2.one_to_many :attributes, :class => @c1, :key=>[:node_id, :id, :x], :primary_key=>[:parent_id, :id]}.should raise_error(Sequel::Error) end it "should define an add_ method that works on existing records" do @c2.one_to_many :attributes, :class => @c1 n = @c2.load(:id => 1234) a = @c1.load(:id => 2345) a.should == n.add_attribute(a) a.values.should == {:node_id => 1234, :id => 2345} DB.sqls.should == ['UPDATE attributes SET node_id = 1234 WHERE (id = 2345)'] end it "should define an add_ method that works on new records" do @c2.one_to_many :attributes, :class => @c1 n = @c2.load(:id => 1234) a = @c1.new(:id => 234) @c1.dataset._fetch = @c1.instance_dataset._fetch = {:node_id => 1234, :id => 234} a.should == n.add_attribute(a) sqls = DB.sqls sqls.shift.should =~ /INSERT INTO attributes \((node_)?id, (node_)?id\) VALUES \(1?234, 1?234\)/ 
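# Saving the new associated object issues the INSERT matched above and then
# reloads the row's values, so the single remaining query checked below is a
# SELECT rather than a separate UPDATE of the foreign key.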
sqls.should == ["SELECT * FROM attributes WHERE (id = 234) LIMIT 1"] a.values.should == {:node_id => 1234, :id => 234} end it "should define a remove_ method that works on existing records" do @c2.one_to_many :attributes, :class => @c1 n = @c2.load(:id => 1234) a = @c1.load(:id => 2345, :node_id => 1234) a.should == n.remove_attribute(a) a.values.should == {:node_id => nil, :id => 2345} DB.sqls.should == ["SELECT 1 AS one FROM attributes WHERE ((attributes.node_id = 1234) AND (id = 2345)) LIMIT 1", 'UPDATE attributes SET node_id = NULL WHERE (id = 2345)'] end it "should have the remove_ method raise an error if the passed object is not already associated" do @c2.one_to_many :attributes, :class => @c1 n = @c2.new(:id => 1234) a = @c1.load(:id => 2345, :node_id => 1234) @c1.dataset._fetch = [] proc{n.remove_attribute(a)}.should raise_error(Sequel::Error) DB.sqls.should == ["SELECT 1 AS one FROM attributes WHERE ((attributes.node_id = 1234) AND (id = 2345)) LIMIT 1"] end it "should accept a hash for the add_ method and create a new record" do @c2.one_to_many :attributes, :class => @c1 n = @c2.new(:id => 1234) DB.reset @c1.dataset._fetch = @c1.instance_dataset._fetch = {:node_id => 1234, :id => 234} n.add_attribute(:id => 234).should == @c1.load(:node_id => 1234, :id => 234) sqls = DB.sqls sqls.shift.should =~ /INSERT INTO attributes \((node_)?id, (node_)?id\) VALUES \(1?234, 1?234\)/ sqls.should == ["SELECT * FROM attributes WHERE (id = 234) LIMIT 1"] end it "should accept a primary key for the add_ method" do @c2.one_to_many :attributes, :class => @c1 n = @c2.new(:id => 1234) @c1.dataset._fetch = {:id=>234, :node_id=>nil} n.add_attribute(234).should == @c1.load(:node_id => 1234, :id => 234) DB.sqls.should == ["SELECT * FROM attributes WHERE id = 234", "UPDATE attributes SET node_id = 1234 WHERE (id = 234)"] end it "should raise an error if the primary key passed to the add_ method does not match an existing record" do @c2.one_to_many :attributes, :class => @c1 n = @c2.new(:id => 1234) @c1.dataset._fetch = [] proc{n.add_attribute(234)}.should raise_error(Sequel::NoMatchingRow) DB.sqls.should == ["SELECT * FROM attributes WHERE id = 234"] end it "should raise an error in the add_ method if the passed associated object is not of the correct type" do @c2.one_to_many :attributes, :class => @c1 proc{@c2.new(:id => 1234).add_attribute(@c2.new)}.should raise_error(Sequel::Error) end it "should accept a primary key for the remove_ method and remove an existing record" do @c2.one_to_many :attributes, :class => @c1 n = @c2.new(:id => 1234) @c1.dataset._fetch = {:id=>234, :node_id=>1234} n.remove_attribute(234).should == @c1.load(:node_id => nil, :id => 234) DB.sqls.should == ['SELECT * FROM attributes WHERE ((attributes.node_id = 1234) AND (attributes.id = 234)) LIMIT 1', 'UPDATE attributes SET node_id = NULL WHERE (id = 234)'] end it "should raise an error in the remove_ method if the passed associated object is not of the correct type" do @c2.one_to_many :attributes, :class => @c1 proc{@c2.new(:id => 1234).remove_attribute(@c2.new)}.should raise_error(Sequel::Error) end it "should have add_ method respect the :primary_key option" do @c2.one_to_many :attributes, :class => @c1, :primary_key=>:xxx n = @c2.new(:id => 1234, :xxx=>5) a = @c1.load(:id => 2345) n.add_attribute(a).should == a DB.sqls.should == ['UPDATE attributes SET node_id = 5 WHERE (id = 2345)'] end it "should have add_ method not add the same object to the cached association array if the object is already in the array" do @c2.one_to_many 
:attributes, :class => @c1 n = @c2.new(:id => 1234) a = @c1.load(:id => 2345) n.associations[:attributes] = [] a.should == n.add_attribute(a) a.should == n.add_attribute(a) a.values.should == {:node_id => 1234, :id => 2345} n.attributes.should == [a] DB.sqls.should == ['UPDATE attributes SET node_id = 1234 WHERE (id = 2345)'] * 2 end it "should have add_ method respect composite keys" do @c2.one_to_many :attributes, :class => @c1, :key =>[:node_id, :y], :primary_key=>[:id, :x] n = @c2.load(:id => 1234, :x=>5) a = @c1.load(:id => 2345) n.add_attribute(a).should == a sqls = DB.sqls sqls.shift.should =~ /UPDATE attributes SET (node_id = 1234|y = 5), (node_id = 1234|y = 5) WHERE \(id = 2345\)/ sqls.should == [] end it "should have add_ method accept a composite key" do @c1.set_primary_key [:id, :z] @c2.one_to_many :attributes, :class => @c1, :key =>[:node_id, :y], :primary_key=>[:id, :x] @c1.dataset._fetch = {:id => 2345, :z => 8, :node_id => 1234, :y=>5} n = @c2.load(:id => 1234, :x=>5) a = @c1.load(:id => 2345, :z => 8, :node_id => 1234, :y=>5) n.add_attribute([2345, 8]).should == a sqls = DB.sqls sqls.shift.should =~ /SELECT \* FROM attributes WHERE \(\((id|z) = (2345|8)\) AND \((id|z) = (2345|8)\)\) LIMIT 1/ sqls.shift.should =~ /UPDATE attributes SET (node_id|y) = (1234|5), (node_id|y) = (1234|5) WHERE \(\((id|z) = (2345|8)\) AND \((id|z) = (2345|8)\)\)/ sqls.should == [] end it "should have remove_ method respect composite keys" do @c2.one_to_many :attributes, :class => @c1, :key =>[:node_id, :y], :primary_key=>[:id, :x] n = @c2.load(:id => 1234, :x=>5) a = @c1.load(:id => 2345, :node_id=>1234, :y=>5) n.remove_attribute(a).should == a sqls = DB.sqls sqls.pop.should =~ /UPDATE attributes SET (node_id|y) = NULL, (node_id|y) = NULL WHERE \(id = 2345\)/ sqls.should == ["SELECT 1 AS one FROM attributes WHERE ((attributes.node_id = 1234) AND (attributes.y = 5) AND (id = 2345)) LIMIT 1"] end it "should accept an array of composite primary key values for the remove_ method and remove an existing record" do @c1.set_primary_key [:id, :y] @c2.one_to_many :attributes, :class => @c1, :key=>:node_id, :primary_key=>:id n = @c2.new(:id => 123) @c1.dataset._fetch = {:id=>234, :node_id=>123, :y=>5} n.remove_attribute([234, 5]).should == @c1.load(:node_id => nil, :y => 5, :id => 234) sqls = DB.sqls sqls.length.should == 2 sqls.first.should =~ /SELECT \* FROM attributes WHERE \(\(attributes.node_id = 123\) AND \(attributes\.(id|y) = (234|5)\) AND \(attributes\.(id|y) = (234|5)\)\) LIMIT 1/ sqls.last.should =~ /UPDATE attributes SET node_id = NULL WHERE \(\((id|y) = (234|5)\) AND \((id|y) = (234|5)\)\)/ end it "should raise an error in add_ and remove_ if the passed object returns false to save (is not valid)" do @c2.one_to_many :attributes, :class => @c1 n = @c2.new(:id => 1234) a = @c1.new(:id => 2345) def a.validate() errors.add(:id, 'foo') end proc{n.add_attribute(a)}.should raise_error(Sequel::Error) proc{n.remove_attribute(a)}.should raise_error(Sequel::Error) end it "should not validate the associated object in add_ and remove_ if the :validate=>false option is used" do @c2.one_to_many :attributes, :class => @c1, :validate=>false n = @c2.new(:id => 1234) a = @c1.new(:id => 2345) def a.validate() errors.add(:id, 'foo') end n.add_attribute(a).should == a n.remove_attribute(a).should == a end it "should raise an error if the model object doesn't have a valid primary key" do @c2.one_to_many :attributes, :class => @c1 a = @c2.new n = @c1.load(:id=>123) proc{a.attributes_dataset}.should
raise_error(Sequel::Error) proc{a.add_attribute(n)}.should raise_error(Sequel::Error) proc{a.remove_attribute(n)}.should raise_error(Sequel::Error) proc{a.remove_all_attributes}.should raise_error(Sequel::Error) end it "should use :primary_key option if given" do @c1.one_to_many :nodes, :class => @c2, :primary_key => :node_id, :key=>:id @c1.load(:id => 1234, :node_id=>4321).nodes_dataset.sql.should == "SELECT * FROM nodes WHERE (nodes.id = 4321)" end it "should support a select option" do @c2.one_to_many :attributes, :class => @c1, :select => [:id, :name] @c2.new(:id => 1234).attributes_dataset.sql.should == "SELECT id, name FROM attributes WHERE (attributes.node_id = 1234)" end it "should support a conditions option" do @c2.one_to_many :attributes, :class => @c1, :conditions => {:a=>32} @c2.new(:id => 1234).attributes_dataset.sql.should == "SELECT * FROM attributes WHERE ((a = 32) AND (attributes.node_id = 1234))" @c2.one_to_many :attributes, :class => @c1, :conditions => Sequel.~(:a) @c2.new(:id => 1234).attributes_dataset.sql.should == "SELECT * FROM attributes WHERE (NOT a AND (attributes.node_id = 1234))" end it "should support an order option" do @c2.one_to_many :attributes, :class => @c1, :order => :kind @c2.new(:id => 1234).attributes_dataset.sql.should == "SELECT * FROM attributes WHERE (attributes.node_id = 1234) ORDER BY kind" end it "should support an array for the order option" do @c2.one_to_many :attributes, :class => @c1, :order => [:kind1, :kind2] @c2.new(:id => 1234).attributes_dataset.sql.should == "SELECT * FROM attributes WHERE (attributes.node_id = 1234) ORDER BY kind1, kind2" end it "should have a dataset method for the associated object dataset" do @c2.one_to_many :attributes, :class => @c1 @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT * FROM attributes WHERE (attributes.node_id = 1234)' end it "should accept a block" do @c2.one_to_many :attributes, :class => @c1 do |ds| ds.filter(:xxx => nil) end @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT * FROM attributes WHERE ((attributes.node_id = 1234) AND (xxx IS NULL))' end it "should support :order option with block" do @c2.one_to_many :attributes, :class => @c1, :order => :kind do |ds| ds.filter(:xxx => nil) end @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT * FROM attributes WHERE ((attributes.node_id = 1234) AND (xxx IS NULL)) ORDER BY kind' end it "should have the block argument affect the _dataset method" do @c2.one_to_many :attributes, :class => @c1 do |ds| ds.filter(:xxx => 456) end @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT * FROM attributes WHERE ((attributes.node_id = 1234) AND (xxx = 456))' end it "should support a :dataset option that is used instead of the default" do c1 = @c1 @c2.one_to_many :all_other_attributes, :class => @c1, :dataset=>proc{c1.exclude(:nodeid=>pk)}, :order=>:a, :limit=>10 do |ds| ds.filter(:xxx => 5) end @c2.new(:id => 1234).all_other_attributes_dataset.sql.should == 'SELECT * FROM attributes WHERE ((nodeid != 1234) AND (xxx = 5)) ORDER BY a LIMIT 10' @c2.new(:id => 1234).all_other_attributes.should == [@c1.load({})] DB.sqls.should == ['SELECT * FROM attributes WHERE ((nodeid != 1234) AND (xxx = 5)) ORDER BY a LIMIT 10'] end it "should support a :limit option" do @c2.one_to_many :attributes, :class => @c1 , :limit=>10 @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT * FROM attributes WHERE (attributes.node_id = 1234) LIMIT 10' @c2.one_to_many :attributes, :class => @c1 , :limit=>[10,10] @c2.new(:id => 
1234).attributes_dataset.sql.should == 'SELECT * FROM attributes WHERE (attributes.node_id = 1234) LIMIT 10 OFFSET 10' end it "should have the :eager option affect the _dataset method" do @c2.one_to_many :attributes, :class => @c2 , :eager=>:attributes @c2.new(:id => 1234).attributes_dataset.opts[:eager].should == {:attributes=>nil} end it "should populate cache when accessed" do @c2.one_to_many :attributes, :class => @c1 n = @c2.new(:id => 1234) n.associations.include?(:attributes).should == false atts = n.attributes atts.should == n.associations[:attributes] DB.sqls.should == ['SELECT * FROM attributes WHERE (attributes.node_id = 1234)'] end it "should use cache if available" do @c2.one_to_many :attributes, :class => @c1 n = @c2.new(:id => 1234) n.associations[:attributes] = 42 n.attributes.should == 42 DB.sqls.should == [] end it "should not use cache if asked to reload" do @c2.one_to_many :attributes, :class => @c1 n = @c2.new(:id => 1234) n.associations[:attributes] = 42 n.attributes(true).should_not == 42 DB.sqls.should == ['SELECT * FROM attributes WHERE (attributes.node_id = 1234)'] end it "should add item to cache if it exists when calling add_" do @c2.one_to_many :attributes, :class => @c1 n = @c2.new(:id => 1234) att = @c1.load(:id => 345) a = [] n.associations[:attributes] = a n.add_attribute(att) a.should == [att] end it "should set object to item's reciprocal cache when calling add_" do @c2.one_to_many :attributes, :class => @c1 @c1.many_to_one :node, :class => @c2 n = @c2.new(:id => 1234) att = @c1.new(:id => 345) n.add_attribute(att) att.node.should == n end it "should remove item from cache if it exists when calling remove_" do @c2.one_to_many :attributes, :class => @c1 n = @c2.load(:id => 1234) att = @c1.load(:id => 345) a = [att] n.associations[:attributes] = a n.remove_attribute(att) a.should == [] end it "should remove item's reciprocal cache when calling remove_" do @c2.one_to_many :attributes, :class => @c1 @c1.many_to_one :node, :class => @c2 n = @c2.new(:id => 1234) att = @c1.new(:id => 345) att.associations[:node] = n att.node.should == n n.remove_attribute(att) att.node.should == nil end it "should not create the add_, remove_, or remove_all_ methods if :read_only option is used" do @c2.one_to_many :attributes, :class => @c1, :read_only=>true im = @c2.instance_methods.collect{|x| x.to_s} im.should(include('attributes')) im.should(include('attributes_dataset')) im.should_not(include('add_attribute')) im.should_not(include('remove_attribute')) im.should_not(include('remove_all_attributes')) end it "should not add associations methods directly to class" do @c2.one_to_many :attributes, :class => @c1 im = @c2.instance_methods.collect{|x| x.to_s} im.should(include('attributes')) im.should(include('attributes_dataset')) im.should(include('add_attribute')) im.should(include('remove_attribute')) im.should(include('remove_all_attributes')) im2 = @c2.instance_methods(false).collect{|x| x.to_s} im2.should_not(include('attributes')) im2.should_not(include('attributes_dataset')) im2.should_not(include('add_attribute')) im2.should_not(include('remove_attribute')) im2.should_not(include('remove_all_attributes')) end it "should populate the reciprocal many_to_one cache when loading the one_to_many association" do @c2.one_to_many :attributes, :class => @c1, :key => :node_id @c1.many_to_one :node, :class => @c2, :key => :node_id n = @c2.new(:id => 1234) atts = n.attributes DB.sqls.should == ['SELECT * FROM attributes WHERE (attributes.node_id = 1234)'] atts.should == [@c1.load({})]
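# Loading the one_to_many side primes each child's reciprocal many_to_one
# cache with the parent, so reading a.node below issues no further queries.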
atts.map{|a| a.node}.should == [n] DB.sqls.should == [] end it "should use an explicit :reciprocal option if given" do @c2.one_to_many :attributes, :class => @c1, :key => :node_id, :reciprocal=>:wxyz n = @c2.new(:id => 1234) atts = n.attributes DB.sqls.should == ['SELECT * FROM attributes WHERE (attributes.node_id = 1234)'] atts.should == [@c1.load({})] atts.map{|a| a.associations[:wxyz]}.should == [n] DB.sqls.should == [] end it "should have a remove_all_ method that removes all associated objects" do @c2.one_to_many :attributes, :class => @c1 @c2.new(:id => 1234).remove_all_attributes DB.sqls.should == ['UPDATE attributes SET node_id = NULL WHERE (node_id = 1234)'] end it "should have the remove_all_ method respect association filters" do @c2.one_to_many :attributes, :class => @c1, :conditions=>{:a=>1} do |ds| ds.filter(:b=>2) end @c2.new(:id => 1234).remove_all_attributes DB.sqls.should == ['UPDATE attributes SET node_id = NULL WHERE ((a = 1) AND (node_id = 1234) AND (b = 2))'] end it "should have the remove_all_ method respect the :primary_key option" do @c2.one_to_many :attributes, :class => @c1, :primary_key=>:xxx @c2.new(:id => 1234, :xxx=>5).remove_all_attributes DB.sqls.should == ['UPDATE attributes SET node_id = NULL WHERE (node_id = 5)'] end it "should have the remove_all_ method respect composite keys" do @c2.one_to_many :attributes, :class => @c1, :key=>[:node_id, :y], :primary_key=>[:id, :x] @c2.new(:id => 1234, :x=>5).remove_all_attributes sqls = DB.sqls sqls.pop.should =~ /UPDATE attributes SET (node_id|y) = NULL, (node_id|y) = NULL WHERE \(\(node_id = 1234\) AND \(y = 5\)\)/ sqls.should == [] end it "remove_all should set the cache to []" do @c2.one_to_many :attributes, :class => @c1 node = @c2.new(:id => 1234) node.remove_all_attributes node.associations[:attributes].should == [] end it "remove_all should return the array of previously associated items if the cache is populated" do @c2.one_to_many :attributes, :class => @c1 attrib = @c1.new(:id=>3) node = @c2.new(:id => 1234) @c1.dataset._fetch = [[], [{:id=>3, :node_id=>1234}]] node.attributes.should == [] node.add_attribute(attrib) node.associations[:attributes].should == [attrib] node.remove_all_attributes.should == [attrib] end it "remove_all should return nil if the cache is not populated" do @c2.one_to_many :attributes, :class => @c1 @c2.new(:id => 1234).remove_all_attributes.should == nil end it "remove_all should remove the current item from all reciprocal association caches if they are populated" do @c2.one_to_many :attributes, :class => @c1 @c1.many_to_one :node, :class => @c2 @c2.dataset._fetch = [] @c1.dataset._fetch = [[], [{:id=>3, :node_id=>1234}]] attrib = @c1.new(:id=>3) node = @c2.load(:id => 1234) node.attributes.should == [] attrib.node.should == nil node.add_attribute(attrib) attrib.associations[:node].should == node node.remove_all_attributes attrib.associations.fetch(:node, 2).should == nil end it "should call an _add_ method internally to add attributes" do @c2.one_to_many :attributes, :class => @c1 @c2.private_instance_methods.collect{|x| x.to_s}.sort.should(include("_add_attribute")) p = @c2.load(:id=>10) c = @c1.load(:id=>123) def p._add_attribute(x) @x = x end c.should_not_receive(:node_id=) p.add_attribute(c) p.instance_variable_get(:@x).should == c end it "should support an :adder option for defining the _add_ method" do @c2.one_to_many :attributes, :class => @c1, :adder=>proc{|x| @x = x} p = @c2.load(:id=>10) c = @c1.load(:id=>123) c.should_not_receive(:node_id=) p.add_attribute(c)
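# The :adder proc supplies the body of the private _add_ method and runs in
# the context of the receiving instance (hence @x above ends up set on p).
# A hypothetical sketch of a custom adder mirroring the default foreign-key
# behavior:
#   one_to_many :attributes, :adder=>proc{|attr| attr.update(:node_id=>pk)}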
p.instance_variable_get(:@x).should == c end it "should allow additional arguments given to the add_ method and pass them onwards to the _add_ method" do @c2.one_to_many :attributes, :class => @c1 p = @c2.load(:id=>10) c = @c1.load(:id=>123) def p._add_attribute(x,*y) @x = x @y = y end c.should_not_receive(:node_id=) p.add_attribute(c,:foo,:bar=>:baz) p.instance_variable_get(:@x).should == c p.instance_variable_get(:@y).should == [:foo,{:bar=>:baz}] end it "should call a _remove_ method internally to remove attributes" do @c2.one_to_many :attributes, :class => @c1 @c2.private_instance_methods.collect{|x| x.to_s}.sort.should(include("_remove_attribute")) p = @c2.load(:id=>10) c = @c1.load(:id=>123) def p._remove_attribute(x) @x = x end c.should_not_receive(:node_id=) p.remove_attribute(c) p.instance_variable_get(:@x).should == c end it "should support a :remover option for defining the _remove_ method" do @c2.one_to_many :attributes, :class => @c1, :remover=>proc{|x| @x = x} p = @c2.load(:id=>10) c = @c1.load(:id=>123) c.should_not_receive(:node_id=) p.remove_attribute(c) p.instance_variable_get(:@x).should == c end it "should allow additional arguments given to the remove_ method and pass them onwards to the _remove_ method" do @c2.one_to_many :attributes, :class => @c1, :reciprocal=>nil p = @c2.load(:id=>10) c = @c1.load(:id=>123) def p._remove_attribute(x,*y) @x = x @y = y end c.should_not_receive(:node_id=) p.remove_attribute(c,:foo,:bar=>:baz) p.instance_variable_get(:@x).should == c p.instance_variable_get(:@y).should == [:foo,{:bar=>:baz}] end it "should allow additional arguments given to the remove_all_ method and pass them onwards to the _remove_all_ method" do @c2.one_to_many :attributes, :class => @c1 p = @c2.load(:id=>10) c = @c1.load(:id=>123) def p._remove_all_attributes(*y) @y = y end c.should_not_receive(:node_id=) p.remove_all_attributes(:foo,:bar=>:baz) p.instance_variable_get(:@y).should == [:foo,{:bar=>:baz}] end it "should call a _remove_all_ method internally to remove attributes" do @c2.one_to_many :attributes, :class => @c1 @c2.private_instance_methods.collect{|x| x.to_s}.sort.should(include("_remove_all_attributes")) p = @c2.load(:id=>10) def p._remove_all_attributes @x = :foo end p.remove_all_attributes p.instance_variable_get(:@x).should == :foo end it "should support a :clearer option for defining the _remove_all_ method" do @c2.one_to_many :attributes, :class => @c1, :clearer=>proc{@x = :foo} p = @c2.load(:id=>10) p.remove_all_attributes p.instance_variable_get(:@x).should == :foo end it "should support (before|after)_(add|remove) callbacks" do h = [] @c2.one_to_many :attributes, :class => @c1, :before_add=>[proc{|x,y| h << x.pk; h << -y.pk}, :blah], :after_add=>proc{h << 3}, :before_remove=>:blah, :after_remove=>[:blahr] @c2.class_eval do self::Foo = h def _add_attribute(v) model::Foo << 4 end def _remove_attribute(v) model::Foo << 5 end def blah(x) model::Foo << x.pk end def blahr(x) model::Foo << 6 end end p = @c2.load(:id=>10) c = @c1.load(:id=>123) h.should == [] p.add_attribute(c) h.should == [10, -123, 123, 4, 3] p.remove_attribute(c) h.should == [10, -123, 123, 4, 3, 123, 5, 6] end it "should support after_load association callback" do h = [] @c2.one_to_many :attributes, :class => @c1, :after_load=>[proc{|x,y| h << [x.pk, y.collect{|z|z.pk}]}, :al] @c2.class_eval do self::Foo = h def al(v) v.each{|x| model::Foo << x.pk} end end @c1.dataset._fetch = [{:id=>20}, {:id=>30}] p = @c2.load(:id=>10, :parent_id=>20) attributes = p.attributes h.should == [[10, 
[20, 30]], 20, 30] attributes.collect{|a| a.pk}.should == [20, 30] end it "should raise error and not call internal add or remove method if before callback returns false when raise_on_save_failure is true" do p = @c2.load(:id=>10) c = @c1.load(:id=>123) @c2.one_to_many :attributes, :class => @c1, :before_add=>:ba, :before_remove=>:br p.should_receive(:ba).once.with(c).and_return(false) p.should_not_receive(:_add_attribute) p.should_not_receive(:_remove_attribute) p.associations[:attributes] = [] proc{p.add_attribute(c)}.should raise_error(Sequel::Error) p.attributes.should == [] p.associations[:attributes] = [c] p.should_receive(:br).once.with(c).and_return(false) proc{p.remove_attribute(c)}.should raise_error(Sequel::Error) p.attributes.should == [c] end it "should return nil and not call internal add or remove method if before callback returns false when raise_on_save_failure is false" do p = @c2.load(:id=>10) c = @c1.load(:id=>123) p.raise_on_save_failure = false @c2.one_to_many :attributes, :class => @c1, :before_add=>:ba, :before_remove=>:br p.should_receive(:ba).once.with(c).and_return(false) p.should_not_receive(:_add_attribute) p.should_not_receive(:_remove_attribute) p.associations[:attributes] = [] p.add_attribute(c).should == nil p.attributes.should == [] p.associations[:attributes] = [c] p.should_receive(:br).once.with(c).and_return(false) p.remove_attribute(c).should == nil p.attributes.should == [c] end end describe Sequel::Model, "many_to_many" do before do @c1 = Class.new(Sequel::Model(:attributes)) do unrestrict_primary_key attr_accessor :yyy def self.name; 'Attribute'; end def self.to_s; 'Attribute'; end columns :id, :y, :z end @c2 = Class.new(Sequel::Model(:nodes)) do unrestrict_primary_key attr_accessor :xxx def self.name; 'Node'; end def self.to_s; 'Node'; end columns :id, :x end @dataset = @c2.dataset @c1.dataset.autoid = 1 [@c1, @c2].each{|c| c.dataset._fetch = {}} DB.reset end it "should use implicit key values and join table if omitted" do @c2.many_to_many :attributes, :class => @c1 @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 1234))' end it "should use implicit class if omitted" do begin class ::Tag < Sequel::Model; end @c2.many_to_many :tags @c2.new(:id => 1234).tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN nodes_tags ON ((nodes_tags.tag_id = tags.id) AND (nodes_tags.node_id = 1234))' ensure Object.send(:remove_const, :Tag) end end it "should use class inside module if given as a string" do begin module ::Historical class Tag < Sequel::Model; end end @c2.many_to_many :tags, :class=>'::Historical::Tag' @c2.new(:id => 1234).tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN nodes_tags ON ((nodes_tags.tag_id = tags.id) AND (nodes_tags.node_id = 1234))' ensure Object.send(:remove_const, :Historical) end end it "should respect :eager_loading_predicate_key when lazily loading" do @c2.many_to_many :attributes, :class => @c1, :eager_loading_predicate_key=>Sequel.subscript(:attributes_nodes__node_id, 0) @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id[0] = 1234))' end it "should use explicit key values and join table if given" do @c2.many_to_many :attributes, :class => @c1, :left_key => :nodeid, :right_key => :attributeid, :join_table =>
:attribute2node @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attribute2node ON ((attribute2node.attributeid = attributes.id) AND (attribute2node.nodeid = 1234))' end it "should support a conditions option" do @c2.many_to_many :attributes, :class => @c1, :conditions => {:a=>32} @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 1234)) WHERE (a = 32)' @c2.many_to_many :attributes, :class => @c1, :conditions => ['a = ?', 32] @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 1234)) WHERE (a = 32)' @c2.new(:id => 1234).attributes.should == [@c1.load({})] end it "should support an order option" do @c2.many_to_many :attributes, :class => @c1, :order => :blah @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 1234)) ORDER BY blah' end it "should support an array for the order option" do @c2.many_to_many :attributes, :class => @c1, :order => [:blah1, :blah2] @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 1234)) ORDER BY blah1, blah2' end it "should support :left_primary_key and :right_primary_key options" do @c2.many_to_many :attributes, :class => @c1, :left_primary_key=>:xxx, :right_primary_key=>:yyy @c2.new(:id => 1234, :xxx=>5).attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.yyy) AND (attributes_nodes.node_id = 5))' end it "should support composite keys" do @c2.many_to_many :attributes, :class => @c1, :left_key=>[:l1, :l2], :right_key=>[:r1, :r2], :left_primary_key=>[:id, :x], :right_primary_key=>[:id, :y] @c2.load(:id => 1234, :x=>5).attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.r1 = attributes.id) AND (attributes_nodes.r2 = attributes.y) AND (attributes_nodes.l1 = 1234) AND (attributes_nodes.l2 = 5))' end it "should not issue query if not all keys have values" do @c2.many_to_many :attributes, :class => @c1, :left_key=>[:l1, :l2], :right_key=>[:r1, :r2], :left_primary_key=>[:id, :x], :right_primary_key=>[:id, :y] @c2.load(:id => 1234, :x=>nil).attributes.should == [] DB.sqls.should == [] end it "should raise an Error unless same number of composite keys used" do proc{@c2.many_to_many :attributes, :class => @c1, :left_key=>[:node_id, :id]}.should raise_error(Sequel::Error) proc{@c2.many_to_many :attributes, :class => @c1, :left_primary_key=>[:node_id, :id]}.should raise_error(Sequel::Error) proc{@c2.many_to_many :attributes, :class => @c1, :left_key=>[:node_id, :id], :left_primary_key=>:id}.should raise_error(Sequel::Error) proc{@c2.many_to_many :attributes, :class => @c1, :left_key=>:id, :left_primary_key=>[:node_id, :id]}.should raise_error(Sequel::Error) proc{@c2.many_to_many :attributes, :class => @c1, :left_key=>[:node_id, :id, :x], :left_primary_key=>[:parent_id, :id]}.should raise_error(Sequel::Error) proc{@c2.many_to_many :attributes, :class => @c1, 
:right_primary_key=>[:node_id, :id]}.should raise_error(Sequel::Error) proc{@c2.many_to_many :attributes, :class => @c1, :right_key=>[:node_id, :id], :right_primary_key=>:id}.should raise_error(Sequel::Error) proc{@c2.many_to_many :attributes, :class => @c1, :right_key=>:id, :left_primary_key=>[:node_id, :id]}.should raise_error(Sequel::Error) proc{@c2.many_to_many :attributes, :class => @c1, :right_key=>[:node_id, :id, :x], :right_primary_key=>[:parent_id, :id]}.should raise_error(Sequel::Error) end it "should support a select option" do @c2.many_to_many :attributes, :class => @c1, :select => :blah @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT blah FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 1234))' end it "should support an array for the select option" do @c2.many_to_many :attributes, :class => @c1, :select => [Sequel::SQL::ColumnAll.new(:attributes), :attribute_nodes__blah2] @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT attributes.*, attribute_nodes.blah2 FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 1234))' end it "should accept a block" do @c2.many_to_many :attributes, :class => @c1 do |ds| ds.filter(:xxx => @xxx) end n = @c2.new(:id => 1234) n.xxx = 555 n.attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 1234)) WHERE (xxx = 555)' end it "should allow the :order option while accepting a block" do @c2.many_to_many :attributes, :class => @c1, :order=>[:blah1, :blah2] do |ds| ds.filter(:xxx => @xxx) end n = @c2.new(:id => 1234) n.xxx = 555 n.attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 1234)) WHERE (xxx = 555) ORDER BY blah1, blah2' end it "should support a :dataset option that is used instead of the default" do c1 = @c1 @c2.many_to_many :attributes, :class => @c1, :dataset=>proc{c1.join_table(:natural, :an).filter(:an__nodeid=>pk)}, :order=> :a, :limit=>10, :select=>nil do |ds| ds.filter(:xxx => @xxx) end n = @c2.new(:id => 1234) n.xxx = 555 n.attributes_dataset.sql.should == 'SELECT * FROM attributes NATURAL JOIN an WHERE ((an.nodeid = 1234) AND (xxx = 555)) ORDER BY a LIMIT 10' n.attributes.should == [@c1.load({})] DB.sqls.should == ['SELECT * FROM attributes NATURAL JOIN an WHERE ((an.nodeid = 1234) AND (xxx = 555)) ORDER BY a LIMIT 10'] end it "should support a :dataset option that accepts the reflection as an argument" do @c2.many_to_many :attributes, :class => @c1, :dataset=>lambda{|opts| opts.associated_dataset.join_table(:natural, :an).filter(:an__nodeid=>pk)}, :order=> :a, :limit=>10, :select=>nil do |ds| ds.filter(:xxx => @xxx) end n = @c2.new(:id => 1234) n.xxx = 555 n.attributes_dataset.sql.should == 'SELECT * FROM attributes NATURAL JOIN an WHERE ((an.nodeid = 1234) AND (xxx = 555)) ORDER BY a LIMIT 10' n.attributes.should == [@c1.load({})] DB.sqls.should == ['SELECT * FROM attributes NATURAL JOIN an WHERE ((an.nodeid = 1234) AND (xxx = 555)) ORDER BY a LIMIT 10'] end it "should support a :limit option" do @c2.many_to_many :attributes, :class => @c1 , :limit=>10 @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = 
attributes.id) AND (attributes_nodes.node_id = 1234)) LIMIT 10' @c2.many_to_many :attributes, :class => @c1 , :limit=>[10, 10] @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 1234)) LIMIT 10 OFFSET 10' end it "should have the :eager option affect the _dataset method" do @c2.many_to_many :attributes, :class => @c2 , :eager=>:attributes @c2.new(:id => 1234).attributes_dataset.opts[:eager].should == {:attributes=>nil} end it "should handle an aliased join table" do @c2.many_to_many :attributes, :class => @c1, :join_table => :attribute2node___attributes_nodes n = @c2.load(:id => 1234) a = @c1.load(:id => 2345) n.attributes_dataset.sql.should == "SELECT attributes.* FROM attributes INNER JOIN attribute2node AS attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 1234))" a.should == n.add_attribute(a) a.should == n.remove_attribute(a) n.remove_all_attributes sqls = DB.sqls ['INSERT INTO attribute2node (node_id, attribute_id) VALUES (1234, 2345)', 'INSERT INTO attribute2node (attribute_id, node_id) VALUES (2345, 1234)'].should(include(sqls.shift)) ["DELETE FROM attribute2node WHERE ((node_id = 1234) AND (attribute_id = 2345))", "DELETE FROM attribute2node WHERE ((attribute_id = 2345) AND (node_id = 1234))"].should(include(sqls.shift)) sqls.should == ["DELETE FROM attribute2node WHERE (node_id = 1234)"] end it "should define an add_ method that works on existing records" do @c2.many_to_many :attributes, :class => @c1 n = @c2.load(:id => 1234) a = @c1.load(:id => 2345) n.add_attribute(a).should == a sqls = DB.sqls ['INSERT INTO attributes_nodes (node_id, attribute_id) VALUES (1234, 2345)', 'INSERT INTO attributes_nodes (attribute_id, node_id) VALUES (2345, 1234)'].should(include(sqls.shift)) sqls.should == [] end it "should define an add_ method that works with a primary key" do @c2.many_to_many :attributes, :class => @c1 n = @c2.load(:id => 1234) a = @c1.load(:id => 2345) @c1.dataset._fetch = {:id=>2345} n.add_attribute(2345).should == a sqls = DB.sqls ['INSERT INTO attributes_nodes (node_id, attribute_id) VALUES (1234, 2345)', 'INSERT INTO attributes_nodes (attribute_id, node_id) VALUES (2345, 1234)'].should(include(sqls.pop)) sqls.should == ["SELECT * FROM attributes WHERE id = 2345"] end it "should raise an error if the primary key passed to the add_ method does not match an existing record" do @c2.many_to_many :attributes, :class => @c1 n = @c2.load(:id => 1234) a = @c1.load(:id => 2345) @c1.dataset._fetch = [] proc{n.add_attribute(2345)}.should raise_error(Sequel::NoMatchingRow) DB.sqls.should == ["SELECT * FROM attributes WHERE id = 2345"] end it "should allow passing a hash to the add_ method which creates a new record" do @c2.many_to_many :attributes, :class => @c1 n = @c2.load(:id => 1234) @c1.dataset._fetch = @c1.instance_dataset._fetch = {:id=>1} n.add_attribute(:id => 1).should == @c1.load(:id => 1) sqls = DB.sqls ['INSERT INTO attributes_nodes (node_id, attribute_id) VALUES (1234, 1)', 'INSERT INTO attributes_nodes (attribute_id, node_id) VALUES (1, 1234)' ].should(include(sqls.pop)) sqls.should == ['INSERT INTO attributes (id) VALUES (1)', "SELECT * FROM attributes WHERE (id = 1) LIMIT 1"] end it "should define a remove_ method that works on existing records" do @c2.many_to_many :attributes, :class => @c1 n = @c2.new(:id => 1234) a = @c1.new(:id => 2345) n.remove_attribute(a).should 
== a DB.sqls.should == ['DELETE FROM attributes_nodes WHERE ((node_id = 1234) AND (attribute_id = 2345))'] end it "should raise an error in the add_ method if the passed associated object is not of the correct type" do @c2.many_to_many :attributes, :class => @c1 proc{@c2.new(:id => 1234).add_attribute(@c2.new)}.should raise_error(Sequel::Error) end it "should accept a primary key for the remove_ method and remove an existing record" do @c2.many_to_many :attributes, :class => @c1 n = @c2.new(:id => 1234) @c1.dataset._fetch = {:id=>234} n.remove_attribute(234).should == @c1.load(:id => 234) DB.sqls.should == ["SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 1234)) WHERE (attributes.id = 234) LIMIT 1", "DELETE FROM attributes_nodes WHERE ((node_id = 1234) AND (attribute_id = 234))"] end it "should raise an error in the remove_ method if the passed associated object is not of the correct type" do @c2.many_to_many :attributes, :class => @c1 proc{@c2.new(:id => 1234).remove_attribute(@c2.new)}.should raise_error(Sequel::Error) end it "should have the add_ method respect the :left_primary_key and :right_primary_key options" do @c2.many_to_many :attributes, :class => @c1, :left_primary_key=>:xxx, :right_primary_key=>:yyy n = @c2.load(:id => 1234).set(:xxx=>5) a = @c1.load(:id => 2345).set(:yyy=>8) n.add_attribute(a).should == a sqls = DB.sqls ['INSERT INTO attributes_nodes (node_id, attribute_id) VALUES (5, 8)', 'INSERT INTO attributes_nodes (attribute_id, node_id) VALUES (8, 5)' ].should(include(sqls.pop)) sqls.should == [] end it "should have add_ method not add the same object to the cached association array if the object is already in the array" do @c2.many_to_many :attributes, :class => @c1 n = @c2.load(:id => 1234).set(:xxx=>5) a = @c1.load(:id => 2345).set(:yyy=>8) n.associations[:attributes] = [] a.should == n.add_attribute(a) a.should == n.add_attribute(a) n.attributes.should == [a] end it "should have the add_ method respect composite keys" do @c2.many_to_many :attributes, :class => @c1, :left_key=>[:l1, :l2], :right_key=>[:r1, :r2], :left_primary_key=>[:id, :x], :right_primary_key=>[:id, :z] n = @c2.load(:id => 1234, :x=>5) a = @c1.load(:id => 2345, :z=>8) a.should == n.add_attribute(a) sqls = DB.sqls m = /INSERT INTO attributes_nodes \((\w+), (\w+), (\w+), (\w+)\) VALUES \((\d+), (\d+), (\d+), (\d+)\)/.match(sqls.pop) sqls.should == [] m.should_not == nil map = {'l1'=>1234, 'l2'=>5, 'r1'=>2345, 'r2'=>8} %w[l1 l2 r1 r2].each do |x| v = false 4.times do |i| i += 1 if m[i] == x m[i+4].should == map[x].to_s v = true end end v.should == true end end it "should have the add_ method accept a composite primary key value" do @c2.many_to_many :attributes, :class => @c1, :left_key=>[:l1, :l2], :right_key=>[:r1, :r2], :left_primary_key=>[:id, :x], :right_primary_key=>[:id, :z] @c1.set_primary_key [:id, :z] n = @c2.load(:id => 1234, :x=>5) a = @c1.load(:id => 2345, :z=>8) @c1.dataset._fetch = {:id => 2345, :z=>8} n.add_attribute([2345, 8]).should == a sqls = DB.sqls sqls.shift.should =~ /SELECT \* FROM attributes WHERE \(\((id|z) = (8|2345)\) AND \((id|z) = (8|2345)\)\) LIMIT 1/ sqls.pop.should =~ /INSERT INTO attributes_nodes \([lr][12], [lr][12], [lr][12], [lr][12]\) VALUES \((1234|5|2345|8), (1234|5|2345|8), (1234|5|2345|8), (1234|5|2345|8)\)/ sqls.should == [] end it "should have the remove_ method respect the :left_primary_key and :right_primary_key options" do @c2.many_to_many :attributes, :class => @c1,
:left_primary_key=>:xxx, :right_primary_key=>:yyy n = @c2.new(:id => 1234, :xxx=>5) a = @c1.new(:id => 2345, :yyy=>8) n.remove_attribute(a).should == a DB.sqls.should == ['DELETE FROM attributes_nodes WHERE ((node_id = 5) AND (attribute_id = 8))'] end it "should have the remove_ method respect composite keys" do @c2.many_to_many :attributes, :class => @c1, :left_key=>[:l1, :l2], :right_key=>[:r1, :r2], :left_primary_key=>[:id, :x], :right_primary_key=>[:id, :z] n = @c2.load(:id => 1234, :x=>5) a = @c1.load(:id => 2345, :z=>8) a.should == n.remove_attribute(a) DB.sqls.should == ["DELETE FROM attributes_nodes WHERE ((l1 = 1234) AND (l2 = 5) AND (r1 = 2345) AND (r2 = 8))"] end it "should accept an array of composite primary key values for the remove_ method and remove an existing record" do @c1.set_primary_key [:id, :y] @c2.many_to_many :attributes, :class => @c1 n = @c2.new(:id => 1234) @c1.dataset._fetch = {:id=>234, :y=>8} @c1.load(:id => 234, :y=>8).should == n.remove_attribute([234, 8]) sqls = DB.sqls ["SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 1234)) WHERE ((attributes.id = 234) AND (attributes.y = 8)) LIMIT 1", "SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 1234)) WHERE ((attributes.y = 8) AND (attributes.id = 234)) LIMIT 1"].should include(sqls.shift) sqls.should == ["DELETE FROM attributes_nodes WHERE ((node_id = 1234) AND (attribute_id = 234))"] end it "should raise an error if the model object doesn't have a valid primary key" do @c2.many_to_many :attributes, :class => @c1 a = @c2.new n = @c1.load(:id=>123) proc{a.attributes_dataset}.should raise_error(Sequel::Error) proc{a.add_attribute(n)}.should raise_error(Sequel::Error) proc{a.remove_attribute(n)}.should raise_error(Sequel::Error) proc{a.remove_all_attributes}.should raise_error(Sequel::Error) end it "should save the associated object first in add_ if passed a new model object" do @c2.many_to_many :attributes, :class => @c1 n = @c1.new a = @c2.load(:id=>123) n.new?.should == true @c1.dataset._fetch = {:id=>1} a.add_attribute(n) n.new?.should == false end it "should raise a ValidationFailed in add_ if the associated object is new and invalid" do @c2.many_to_many :attributes, :class => @c1 n = @c1.new a = @c2.load(:id=>123) def n.validate() errors.add(:id, 'foo') end proc{a.add_attribute(n)}.should raise_error(Sequel::ValidationFailed) end it "should raise an Error in add_ if the associated object is new and invalid and raise_on_save_failure is false" do @c2.many_to_many :attributes, :class => @c1 n = @c1.new n.raise_on_save_failure = false a = @c2.load(:id=>123) def n.validate() errors.add(:id, 'foo') end proc{a.add_attribute(n)}.should raise_error(Sequel::Error) end it "should not attempt to validate the associated object in add_ if the :validate=>false option is used" do @c2.many_to_many :attributes, :class => @c1, :validate=>false n = @c1.new a = @c2.load(:id=>123) def n.validate() errors.add(:id, 'foo') end @c1.dataset._fetch = {:id=>1} a.add_attribute(n) n.new?.should == false end it "should raise an error if trying to remove a model object that doesn't have a valid primary key" do @c2.many_to_many :attributes, :class => @c1 n = @c1.new a = @c2.load(:id=>123) proc{a.remove_attribute(n)}.should raise_error(Sequel::Error) end it "should provide an array with all members of the association" do @c2.many_to_many :attributes,
:class => @c1 @c2.new(:id => 1234).attributes.should == [@c1.load({})] DB.sqls.should == ['SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 1234))'] end it "should populate cache when accessed" do @c2.many_to_many :attributes, :class => @c1 n = @c2.new(:id => 1234) n.associations.include?(:attributes).should == false atts = n.attributes atts.should == n.associations[:attributes] end it "should use cache if available" do @c2.many_to_many :attributes, :class => @c1 n = @c2.new(:id => 1234) n.associations[:attributes] = 42 n.attributes.should == 42 DB.sqls.should == [] end it "should not use cache if asked to reload" do @c2.many_to_many :attributes, :class => @c1 n = @c2.new(:id => 1234) n.associations[:attributes] = 42 n.attributes(true).should_not == 42 DB.sqls.should == ["SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 1234))"] end it "should add item to cache if it exists when calling add_" do @c2.many_to_many :attributes, :class => @c1 n = @c2.new(:id => 1234) att = @c1.load(:id => 345) a = [] n.associations[:attributes] = a n.add_attribute(att) a.should == [att] end it "should add item to reciprocal's cache if it exists when calling add_" do @c2.many_to_many :attributes, :class => @c1 @c1.many_to_many :nodes, :class => @c2 n = @c2.new(:id => 1234) att = @c1.load(:id => 345) att.associations[:nodes] = [] n.add_attribute(att) att.nodes.should == [n] end it "should remove item from cache if it exists when calling remove_" do @c2.many_to_many :attributes, :class => @c1 n = @c2.new(:id => 1234) att = @c1.load(:id => 345) a = [att] n.associations[:attributes] = a n.remove_attribute(att) a.should == [] end it "should remove item from reciprocal's cache if it exists when calling remove_" do @c2.many_to_many :attributes, :class => @c1 @c1.many_to_many :nodes, :class => @c2 n = @c2.new(:id => 1234) att = @c1.new(:id => 345) att.associations[:nodes] = [n] n.remove_attribute(att) att.nodes.should == [] end it "should not create the add_, remove_, or remove_all_ methods if :read_only option is used" do @c2.many_to_many :attributes, :class => @c1, :read_only=>true im = @c2.instance_methods.collect{|x| x.to_s} im.should(include('attributes')) im.should(include('attributes_dataset')) im.should_not(include('add_attribute')) im.should_not(include('remove_attribute')) im.should_not(include('remove_all_attributes')) end it "should not add associations methods directly to class" do @c2.many_to_many :attributes, :class => @c1 im = @c2.instance_methods.collect{|x| x.to_s} im.should(include('attributes')) im.should(include('attributes_dataset')) im.should(include('add_attribute')) im.should(include('remove_attribute')) im.should(include('remove_all_attributes')) im2 = @c2.instance_methods(false).collect{|x| x.to_s} im2.should_not(include('attributes')) im2.should_not(include('attributes_dataset')) im2.should_not(include('add_attribute')) im2.should_not(include('remove_attribute')) im2.should_not(include('remove_all_attributes')) end it "should have a remove_all_ method that removes all associations" do @c2.many_to_many :attributes, :class => @c1 @c2.new(:id => 1234).remove_all_attributes DB.sqls.should == ['DELETE FROM attributes_nodes WHERE (node_id = 1234)'] end it "should have the remove_all_ method respect the :left_primary_key option" do @c2.many_to_many :attributes, :class => @c1, :left_primary_key=>:xxx
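# A hedged illustration (not part of the original suite): with
# :left_primary_key=>:xxx, the join table rows are matched on the model's xxx
# value rather than its id, so a node loaded with :xxx=>5 is expected to issue
# the DELETE below. Model and attribute names here are hypothetical.
#
#   node = Node.load(:id=>1234, :xxx=>5)
#   node.remove_all_attributes
#   # DELETE FROM attributes_nodes WHERE (node_id = 5)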
@c2.new(:id => 1234, :xxx=>5).remove_all_attributes DB.sqls.should == ['DELETE FROM attributes_nodes WHERE (node_id = 5)'] end it "should have the remove_all_ method respect composite keys" do @c2.many_to_many :attributes, :class => @c1, :left_primary_key=>[:id, :x], :left_key=>[:l1, :l2] @c2.load(:id => 1234, :x=>5).remove_all_attributes DB.sqls.should == ['DELETE FROM attributes_nodes WHERE ((l1 = 1234) AND (l2 = 5))'] end it "remove_all should set the cached instance variable to []" do @c2.many_to_many :attributes, :class => @c1 node = @c2.new(:id => 1234) node.remove_all_attributes node.associations[:attributes].should == [] end it "remove_all should return the array of previously associated items if the cached instance variable exists" do @c2.many_to_many :attributes, :class => @c1 attrib = @c1.load(:id=>3) node = @c2.load(:id => 1234) @c1.dataset._fetch = [] node.attributes.should == [] node.add_attribute(attrib) node.associations[:attributes].should == [attrib] node.remove_all_attributes.should == [attrib] end it "remove_all should return nil if the cached instance variable does not exist" do @c2.many_to_many :attributes, :class => @c1 @c2.new(:id => 1234).remove_all_attributes.should == nil end it "remove_all should remove the current item from all reciprocal instance variables if its cached instance variable exists" do @c2.many_to_many :attributes, :class => @c1 @c1.many_to_many :nodes, :class => @c2 @c1.dataset._fetch = [] @c2.dataset._fetch = [] attrib = @c1.load(:id=>3) node = @c2.new(:id => 1234) node.attributes.should == [] attrib.nodes.should == [] node.add_attribute(attrib) attrib.associations[:nodes].should == [node] node.remove_all_attributes attrib.associations[:nodes].should == [] end it "add, remove, and remove_all methods should respect :join_table_block option" do @c2.many_to_many :attributes, :class => @c1, :join_table_block=>proc{|ds| ds.filter(:x=>123)} o = @c2.load(:id => 1234) o.add_attribute(@c1.load(:id=>44)) o.remove_attribute(@c1.load(:id=>45)) o.remove_all_attributes sqls = DB.sqls sqls.shift =~ /INSERT INTO attributes_nodes \((node_id|attribute_id), (node_id|attribute_id)\) VALUES \((1234|44), (1234|44)\)/ sqls.should == ["DELETE FROM attributes_nodes WHERE ((x = 123) AND (node_id = 1234) AND (attribute_id = 45))", "DELETE FROM attributes_nodes WHERE ((x = 123) AND (node_id = 1234))"] end it "should call an _add_ method internally to add attributes" do @c2.many_to_many :attributes, :class => @c1 @c2.private_instance_methods.collect{|x| x.to_s}.sort.should(include("_add_attribute")) p = @c2.load(:id=>10) c = @c1.load(:id=>123) def p._add_attribute(x) @x = x end p.add_attribute(c) p.instance_variable_get(:@x).should == c DB.sqls.should == [] end it "should support an :adder option for defining the _add_ method" do @c2.many_to_many :attributes, :class => @c1, :adder=>proc{|x| @x = x} p = @c2.load(:id=>10) c = @c1.load(:id=>123) p.add_attribute(c) p.instance_variable_get(:@x).should == c DB.sqls.should == [] end it "should allow additional arguments given to the add_ method and pass them onwards to the _add_ method" do @c2.many_to_many :attributes, :class => @c1 p = @c2.load(:id=>10) c = @c1.load(:id=>123) def p._add_attribute(x,*y) @x = x @y = y end p.add_attribute(c,:foo,:bar=>:baz) p.instance_variable_get(:@x).should == c p.instance_variable_get(:@y).should == [:foo,{:bar=>:baz}] end it "should call a _remove_ method internally to remove attributes" do @c2.many_to_many :attributes, :class => @c1 @c2.private_instance_methods.collect{|x|
x.to_s}.sort.should(include("_remove_attribute")) p = @c2.load(:id=>10) c = @c1.load(:id=>123) def p._remove_attribute(x) @x = x end p.remove_attribute(c) p.instance_variable_get(:@x).should == c DB.sqls.should == [] end it "should support a :remover option for defining the _remove_ method" do @c2.many_to_many :attributes, :class => @c1, :remover=>proc{|x| @x = x} p = @c2.load(:id=>10) c = @c1.load(:id=>123) p.remove_attribute(c) p.instance_variable_get(:@x).should == c DB.sqls.should == [] end it "should allow additional arguments given to the remove_ method and pass them onwards to the _remove_ method" do @c2.many_to_many :attributes, :class => @c1 p = @c2.load(:id=>10) c = @c1.load(:id=>123) def p._remove_attribute(x,*y) @x = x @y = y end p.remove_attribute(c,:foo,:bar=>:baz) p.instance_variable_get(:@x).should == c p.instance_variable_get(:@y).should == [:foo,{:bar=>:baz}] end it "should allow additional arguments given to the remove_all_ method and pass them onwards to the _remove_all_ method" do @c2.many_to_many :attributes, :class => @c1 p = @c2.load(:id=>10) def p._remove_all_attributes(*y) @y = y end p.remove_all_attributes(:foo,:bar=>:baz) p.instance_variable_get(:@y).should == [:foo,{:bar=>:baz}] end it "should call a _remove_all_ method internally to remove attributes" do @c2.many_to_many :attributes, :class => @c1 @c2.private_instance_methods.collect{|x| x.to_s}.sort.should(include("_remove_all_attributes")) p = @c2.load(:id=>10) def p._remove_all_attributes @x = :foo end p.remove_all_attributes p.instance_variable_get(:@x).should == :foo DB.sqls.should == [] end it "should support a :clearer option for defining the _remove_all_ method" do @c2.many_to_many :attributes, :class => @c1, :clearer=>proc{@x = :foo} p = @c2.load(:id=>10) p.remove_all_attributes p.instance_variable_get(:@x).should == :foo DB.sqls.should == [] end it "should support (before|after)_(add|remove) callbacks" do h = [] @c2.many_to_many :attributes, :class => @c1, :before_add=>[proc{|x,y| h << x.pk; h << -y.pk}, :blah], :after_add=>proc{h << 3}, :before_remove=>:blah, :after_remove=>[:blahr] @c2.class_eval do self::Foo = h def _add_attribute(v) model::Foo << 4 end def _remove_attribute(v) model::Foo << 5 end def blah(x) model::Foo << x.pk end def blahr(x) model::Foo << 6 end end p = @c2.load(:id=>10) c = @c1.load(:id=>123) h.should == [] p.add_attribute(c) h.should == [10, -123, 123, 4, 3] p.remove_attribute(c) h.should == [10, -123, 123, 4, 3, 123, 5, 6] end it "should support after_load association callback" do h = [] @c2.many_to_many :attributes, :class => @c1, :after_load=>[proc{|x,y| h << [x.pk, y.collect{|z|z.pk}]}, :al] @c2.class_eval do self::Foo = h def al(v) v.each{|x| model::Foo << x.pk} end end @c1.dataset._fetch = [{:id=>20}, {:id=>30}] p = @c2.load(:id=>10, :parent_id=>20) attributes = p.attributes h.should == [[10, [20, 30]], 20, 30] attributes.collect{|a| a.pk}.should == [20, 30] end it "should raise error and not call internal add or remove method if before callback returns false if raise_on_save_failure is true" do p = @c2.load(:id=>10) c = @c1.load(:id=>123) @c2.many_to_many :attributes, :class => @c1, :before_add=>:ba, :before_remove=>:br p.should_receive(:ba).once.with(c).and_return(false) p.should_not_receive(:_add_attribute) p.should_not_receive(:_remove_attribute) p.associations[:attributes] = [] p.raise_on_save_failure = true proc{p.add_attribute(c)}.should raise_error(Sequel::Error) p.attributes.should == [] p.associations[:attributes] = [c] 
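# The remove_ half mirrors the add_ half above: a before_remove hook that
# returns false should abort the removal before _remove_attribute runs. A
# minimal sketch of the option (class and variable names are illustrative,
# not from this suite):
#
#   Node.many_to_many :attributes, :class=>Attribute,
#     :before_remove=>proc{|node, attr| false}
#   node.remove_attribute(attr)
#   # => raises Sequel::Error while raise_on_save_failure is true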
p.should_receive(:br).once.with(c).and_return(false) proc{p.remove_attribute(c)}.should raise_error(Sequel::Error) p.attributes.should == [c] end it "should return nil and not call internal add or remove method if before callback returns false if raise_on_save_failure is false" do p = @c2.load(:id=>10) c = @c1.load(:id=>123) p.raise_on_save_failure = false @c2.many_to_many :attributes, :class => @c1, :before_add=>:ba, :before_remove=>:br p.should_receive(:ba).once.with(c).and_return(false) p.should_not_receive(:_add_attribute) p.should_not_receive(:_remove_attribute) p.associations[:attributes] = [] p.add_attribute(c).should == nil p.attributes.should == [] p.associations[:attributes] = [c] p.should_receive(:br).once.with(c).and_return(false) p.remove_attribute(c).should == nil p.attributes.should == [c] end it "should support a :uniq option that removes duplicates from the association" do @c2.many_to_many :attributes, :class => @c1, :uniq=>true @c1.dataset._fetch = [{:id=>20}, {:id=>30}, {:id=>20}, {:id=>30}] @c2.load(:id=>10, :parent_id=>20).attributes.should == [@c1.load(:id=>20), @c1.load(:id=>30)] end it "should support a :distinct option that uses the DISTINCT clause" do @c2.many_to_many :attributes, :class => @c1, :distinct=>true @c2.load(:id=>10).attributes_dataset.sql.should == "SELECT DISTINCT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 10))" end it "should not apply association options when removing all associated records" do @c2.many_to_many :attributes, :class => @c1 do |ds| ds.filter(:name=>'John') end @c2.load(:id=>1).remove_all_attributes DB.sqls.should == ["DELETE FROM attributes_nodes WHERE (node_id = 1)"] end it "should use association's dataset when grabbing a record to remove from the association by primary key" do @c2.many_to_many :attributes, :class => @c1 do |ds| ds.filter(:join_table_att=>3) end @c1.dataset._fetch = {:id=>2} @c2.load(:id=>1).remove_attribute(2) DB.sqls.should == ["SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 1)) WHERE ((join_table_att = 3) AND (attributes.id = 2)) LIMIT 1", "DELETE FROM attributes_nodes WHERE ((node_id = 1) AND (attribute_id = 2))"] end end describe "Filtering by associations" do before do @Album = Class.new(Sequel::Model(:albums)) artist = @Artist = Class.new(Sequel::Model(:artists)) tag = @Tag = Class.new(Sequel::Model(:tags)) track = @Track = Class.new(Sequel::Model(:tracks)) album_info = @AlbumInfo = Class.new(Sequel::Model(:album_infos)) @Artist.columns :id, :id1, :id2 @Tag.columns :id, :tid1, :tid2 @Track.columns :id, :album_id, :album_id1, :album_id2 @AlbumInfo.columns :id, :album_id, :album_id1, :album_id2 @Album.class_eval do columns :id, :id1, :id2, :artist_id, :artist_id1, :artist_id2 many_to_one :artist, :class=>artist one_to_many :tracks, :class=>track, :key=>:album_id one_to_one :album_info, :class=>album_info, :key=>:album_id many_to_many :tags, :class=>tag, :left_key=>:album_id, :join_table=>:albums_tags many_to_one :cartist, :class=>artist, :key=>[:artist_id1, :artist_id2], :primary_key=>[:id1, :id2] one_to_many :ctracks, :class=>track, :key=>[:album_id1, :album_id2], :primary_key=>[:id1, :id2] one_to_one :calbum_info, :class=>album_info, :key=>[:album_id1, :album_id2], :primary_key=>[:id1, :id2] many_to_many :ctags, :class=>tag, :left_key=>[:album_id1, :album_id2], :left_primary_key=>[:id1, :id2], :right_key=>[:tag_id1,
:tag_id2], :right_primary_key=>[:tid1, :tid2], :join_table=>:albums_tags end end it "should be able to filter on many_to_one associations" do @Album.filter(:artist=>@Artist.load(:id=>3)).sql.should == 'SELECT * FROM albums WHERE (albums.artist_id = 3)' end it "should be able to filter on one_to_many associations" do @Album.filter(:tracks=>@Track.load(:album_id=>3)).sql.should == 'SELECT * FROM albums WHERE (albums.id = 3)' end it "should be able to filter on one_to_one associations" do @Album.filter(:album_info=>@AlbumInfo.load(:album_id=>3)).sql.should == 'SELECT * FROM albums WHERE (albums.id = 3)' end it "should be able to filter on many_to_many associations" do @Album.filter(:tags=>@Tag.load(:id=>3)).sql.should == 'SELECT * FROM albums WHERE (albums.id IN (SELECT albums_tags.album_id FROM albums_tags WHERE ((albums_tags.tag_id = 3) AND (albums_tags.album_id IS NOT NULL))))' end it "should be able to filter on many_to_one associations with composite keys" do @Album.filter(:cartist=>@Artist.load(:id1=>3, :id2=>4)).sql.should == 'SELECT * FROM albums WHERE ((albums.artist_id1 = 3) AND (albums.artist_id2 = 4))' end it "should be able to filter on one_to_many associations with composite keys" do @Album.filter(:ctracks=>@Track.load(:album_id1=>3, :album_id2=>4)).sql.should == 'SELECT * FROM albums WHERE ((albums.id1 = 3) AND (albums.id2 = 4))' end it "should be able to filter on one_to_one associations with composite keys" do @Album.filter(:calbum_info=>@AlbumInfo.load(:album_id1=>3, :album_id2=>4)).sql.should == 'SELECT * FROM albums WHERE ((albums.id1 = 3) AND (albums.id2 = 4))' end it "should be able to filter on many_to_many associations with composite keys" do @Album.filter(:ctags=>@Tag.load(:tid1=>3, :tid2=>4)).sql.should == 'SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT albums_tags.album_id1, albums_tags.album_id2 FROM albums_tags WHERE ((albums_tags.tag_id1 = 3) AND (albums_tags.tag_id2 = 4) AND (albums_tags.album_id1 IS NOT NULL) AND (albums_tags.album_id2 IS NOT NULL))))' end it "should work inside a complex filter" do artist = @Artist.load(:id=>3) @Album.filter{foo & {:artist=>artist}}.sql.should == 'SELECT * FROM albums WHERE (foo AND (albums.artist_id = 3))' track = @Track.load(:album_id=>4) @Album.filter{foo & [[:artist, artist], [:tracks, track]]}.sql.should == 'SELECT * FROM albums WHERE (foo AND (albums.artist_id = 3) AND (albums.id = 4))' end it "should raise for an invalid association name" do proc{@Album.filter(:foo=>@Artist.load(:id=>3)).sql}.should raise_error(Sequel::Error) end it "should raise for an invalid association type" do @Album.many_to_many :iatags, :clone=>:tags @Album.association_reflection(:iatags)[:type] = :foo proc{@Album.filter(:iatags=>@Tag.load(:id=>3)).sql}.should raise_error(Sequel::Error) end it "should raise for an invalid associated object class" do proc{@Album.filter(:tags=>@Artist.load(:id=>3)).sql}.should raise_error(Sequel::Error) end it "should raise for an invalid associated object class when multiple objects are used" do proc{@Album.filter(:tags=>[@Tag.load(:id=>3), @Artist.load(:id=>3)]).sql}.should raise_error(Sequel::Error) end it "should not affect non-association IN/NOT IN filtering with an empty array" do @Album.filter(:tag_id=>[]).sql.should == 'SELECT * FROM albums WHERE (tag_id != tag_id)'
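# For an empty array, Sequel renders IN () as the always-false predicate
# (tag_id != tag_id) to preserve SQL NULL semantics; the exclude case below
# is expected to yield the complementary always-true (tag_id = tag_id). A
# quick sketch of the behavior under test:
#
#   Album.filter(:tag_id=>[]).sql
#   # => "SELECT * FROM albums WHERE (tag_id != tag_id)"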
@Album.exclude(:tag_id=>[]).sql.should == 'SELECT * FROM albums WHERE (tag_id = tag_id)' end it "should work correctly in subclasses" do c = Class.new(@Album) c.many_to_one :sartist, :class=>@Artist c.filter(:sartist=>@Artist.load(:id=>3)).sql.should == 'SELECT * FROM albums WHERE (albums.sartist_id = 3)' end it "should be able to exclude on many_to_one associations" do @Album.exclude(:artist=>@Artist.load(:id=>3)).sql.should == 'SELECT * FROM albums WHERE ((albums.artist_id != 3) OR (albums.artist_id IS NULL))' end it "should be able to exclude on one_to_many associations" do @Album.exclude(:tracks=>@Track.load(:album_id=>3)).sql.should == 'SELECT * FROM albums WHERE ((albums.id != 3) OR (albums.id IS NULL))' end it "should be able to exclude on one_to_one associations" do @Album.exclude(:album_info=>@AlbumInfo.load(:album_id=>3)).sql.should == 'SELECT * FROM albums WHERE ((albums.id != 3) OR (albums.id IS NULL))' end it "should be able to exclude on many_to_many associations" do @Album.exclude(:tags=>@Tag.load(:id=>3)).sql.should == 'SELECT * FROM albums WHERE ((albums.id NOT IN (SELECT albums_tags.album_id FROM albums_tags WHERE ((albums_tags.tag_id = 3) AND (albums_tags.album_id IS NOT NULL)))) OR (albums.id IS NULL))' end it "should be able to exclude on many_to_one associations with composite keys" do @Album.exclude(:cartist=>@Artist.load(:id1=>3, :id2=>4)).sql.should == 'SELECT * FROM albums WHERE ((albums.artist_id1 != 3) OR (albums.artist_id2 != 4) OR (albums.artist_id1 IS NULL) OR (albums.artist_id2 IS NULL))' end it "should be able to exclude on one_to_many associations with composite keys" do @Album.exclude(:ctracks=>@Track.load(:album_id1=>3, :album_id2=>4)).sql.should == 'SELECT * FROM albums WHERE ((albums.id1 != 3) OR (albums.id2 != 4) OR (albums.id1 IS NULL) OR (albums.id2 IS NULL))' end it "should be able to exclude on one_to_one associations with composite keys" do @Album.exclude(:calbum_info=>@AlbumInfo.load(:album_id1=>3, :album_id2=>4)).sql.should == 'SELECT * FROM albums WHERE ((albums.id1 != 3) OR (albums.id2 != 4) OR (albums.id1 IS NULL) OR (albums.id2 IS NULL))' end it "should be able to exclude on many_to_many associations with composite keys" do @Album.exclude(:ctags=>@Tag.load(:tid1=>3, :tid2=>4)).sql.should == 'SELECT * FROM albums WHERE (((albums.id1, albums.id2) NOT IN (SELECT albums_tags.album_id1, albums_tags.album_id2 FROM albums_tags WHERE ((albums_tags.tag_id1 = 3) AND (albums_tags.tag_id2 = 4) AND (albums_tags.album_id1 IS NOT NULL) AND (albums_tags.album_id2 IS NOT NULL)))) OR (albums.id1 IS NULL) OR (albums.id2 IS NULL))' end it "should be able to filter on multiple many_to_one associations" do @Album.filter(:artist=>[@Artist.load(:id=>3), @Artist.load(:id=>4)]).sql.should == 'SELECT * FROM albums WHERE (albums.artist_id IN (3, 4))' end it "should be able to filter on multiple one_to_many associations" do @Album.filter(:tracks=>[@Track.load(:album_id=>3), @Track.load(:album_id=>4)]).sql.should == 'SELECT * FROM albums WHERE (albums.id IN (3, 4))' end it "should be able to filter on multiple one_to_one associations" do @Album.filter(:album_info=>[@AlbumInfo.load(:album_id=>3), @AlbumInfo.load(:album_id=>4)]).sql.should == 'SELECT * FROM albums WHERE (albums.id IN (3, 4))' end it "should be able to filter on multiple many_to_many associations" do @Album.filter(:tags=>[@Tag.load(:id=>3), @Tag.load(:id=>4)]).sql.should == 'SELECT * FROM albums WHERE (albums.id IN (SELECT albums_tags.album_id FROM albums_tags WHERE ((albums_tags.tag_id IN (3, 4)) AND 
(albums_tags.album_id IS NOT NULL))))' end it "should be able to filter on multiple many_to_one associations with composite keys" do @Album.filter(:cartist=>[@Artist.load(:id1=>3, :id2=>4), @Artist.load(:id1=>5, :id2=>6)]).sql.should == 'SELECT * FROM albums WHERE ((albums.artist_id1, albums.artist_id2) IN ((3, 4), (5, 6)))' end it "should be able to filter on multiple one_to_many associations with composite keys" do @Album.filter(:ctracks=>[@Track.load(:album_id1=>3, :album_id2=>4), @Track.load(:album_id1=>5, :album_id2=>6)]).sql.should == 'SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN ((3, 4), (5, 6)))' end it "should be able to filter on multiple one_to_one associations with composite keys" do @Album.filter(:calbum_info=>[@AlbumInfo.load(:album_id1=>3, :album_id2=>4), @AlbumInfo.load(:album_id1=>5, :album_id2=>6)]).sql.should == 'SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN ((3, 4), (5, 6)))' end it "should be able to filter on multiple many_to_many associations with composite keys" do @Album.filter(:ctags=>[@Tag.load(:tid1=>3, :tid2=>4), @Tag.load(:tid1=>5, :tid2=>6)]).sql.should == 'SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT albums_tags.album_id1, albums_tags.album_id2 FROM albums_tags WHERE (((albums_tags.tag_id1, albums_tags.tag_id2) IN ((3, 4), (5, 6))) AND (albums_tags.album_id1 IS NOT NULL) AND (albums_tags.album_id2 IS NOT NULL))))' end it "should be able to exclude on multiple many_to_one associations" do @Album.exclude(:artist=>[@Artist.load(:id=>3), @Artist.load(:id=>4)]).sql.should == 'SELECT * FROM albums WHERE ((albums.artist_id NOT IN (3, 4)) OR (albums.artist_id IS NULL))' end it "should be able to exclude on multiple one_to_many associations" do @Album.exclude(:tracks=>[@Track.load(:album_id=>3), @Track.load(:album_id=>4)]).sql.should == 'SELECT * FROM albums WHERE ((albums.id NOT IN (3, 4)) OR (albums.id IS NULL))' end it "should be able to exclude on multiple one_to_one associations" do @Album.exclude(:album_info=>[@AlbumInfo.load(:album_id=>3), @AlbumInfo.load(:album_id=>4)]).sql.should == 'SELECT * FROM albums WHERE ((albums.id NOT IN (3, 4)) OR (albums.id IS NULL))' end it "should be able to exclude on multiple many_to_many associations" do @Album.exclude(:tags=>[@Tag.load(:id=>3), @Tag.load(:id=>4)]).sql.should == 'SELECT * FROM albums WHERE ((albums.id NOT IN (SELECT albums_tags.album_id FROM albums_tags WHERE ((albums_tags.tag_id IN (3, 4)) AND (albums_tags.album_id IS NOT NULL)))) OR (albums.id IS NULL))' end it "should be able to exclude on multiple many_to_one associations with composite keys" do @Album.exclude(:cartist=>[@Artist.load(:id1=>3, :id2=>4), @Artist.load(:id1=>5, :id2=>6)]).sql.should == 'SELECT * FROM albums WHERE (((albums.artist_id1, albums.artist_id2) NOT IN ((3, 4), (5, 6))) OR (albums.artist_id1 IS NULL) OR (albums.artist_id2 IS NULL))' end it "should be able to exclude on multiple one_to_many associations with composite keys" do @Album.exclude(:ctracks=>[@Track.load(:album_id1=>3, :album_id2=>4), @Track.load(:album_id1=>5, :album_id2=>6)]).sql.should == 'SELECT * FROM albums WHERE (((albums.id1, albums.id2) NOT IN ((3, 4), (5, 6))) OR (albums.id1 IS NULL) OR (albums.id2 IS NULL))' end it "should be able to exclude on multiple one_to_one associations with composite keys" do @Album.exclude(:calbum_info=>[@AlbumInfo.load(:album_id1=>3, :album_id2=>4), @AlbumInfo.load(:album_id1=>5, :album_id2=>6)]).sql.should == 'SELECT * FROM albums WHERE (((albums.id1, albums.id2) NOT IN ((3, 4), (5, 6))) OR 
(albums.id1 IS NULL) OR (albums.id2 IS NULL))' end it "should be able to exclude on multiple many_to_many associations with composite keys" do @Album.exclude(:ctags=>[@Tag.load(:tid1=>3, :tid2=>4), @Tag.load(:tid1=>5, :tid2=>6)]).sql.should == 'SELECT * FROM albums WHERE (((albums.id1, albums.id2) NOT IN (SELECT albums_tags.album_id1, albums_tags.album_id2 FROM albums_tags WHERE (((albums_tags.tag_id1, albums_tags.tag_id2) IN ((3, 4), (5, 6))) AND (albums_tags.album_id1 IS NOT NULL) AND (albums_tags.album_id2 IS NOT NULL)))) OR (albums.id1 IS NULL) OR (albums.id2 IS NULL))' end it "should be able to handle NULL values when filtering many_to_one associations" do @Album.filter(:artist=>@Artist.new).sql.should == 'SELECT * FROM albums WHERE \'f\'' end it "should be able to handle NULL values when filtering one_to_many associations" do @Album.filter(:tracks=>@Track.new).sql.should == 'SELECT * FROM albums WHERE \'f\'' end it "should be able to handle NULL values when filtering one_to_one associations" do @Album.filter(:album_info=>@AlbumInfo.new).sql.should == 'SELECT * FROM albums WHERE \'f\'' end it "should be able to handle NULL values when filtering many_to_many associations" do @Album.filter(:tags=>@Tag.new).sql.should == 'SELECT * FROM albums WHERE \'f\'' end it "should be able to handle filtering with NULL values for many_to_one associations with composite keys" do @Album.filter(:cartist=>@Artist.load(:id2=>4)).sql.should == 'SELECT * FROM albums WHERE \'f\'' @Album.filter(:cartist=>@Artist.load(:id1=>3)).sql.should == 'SELECT * FROM albums WHERE \'f\'' @Album.filter(:cartist=>@Artist.new).sql.should == 'SELECT * FROM albums WHERE \'f\'' end it "should be able to filter with NULL values for one_to_many associations with composite keys" do @Album.filter(:ctracks=>@Track.load(:album_id2=>4)).sql.should == 'SELECT * FROM albums WHERE \'f\'' @Album.filter(:ctracks=>@Track.load(:album_id1=>3)).sql.should == 'SELECT * FROM albums WHERE \'f\'' @Album.filter(:ctracks=>@Track.new).sql.should == 'SELECT * FROM albums WHERE \'f\'' end it "should be able to filter with NULL values for one_to_one associations with composite keys" do @Album.filter(:calbum_info=>@AlbumInfo.load(:album_id2=>4)).sql.should == 'SELECT * FROM albums WHERE \'f\'' @Album.filter(:calbum_info=>@AlbumInfo.load(:album_id1=>3)).sql.should == 'SELECT * FROM albums WHERE \'f\'' @Album.filter(:calbum_info=>@AlbumInfo.new).sql.should == 'SELECT * FROM albums WHERE \'f\'' end it "should be able to filter with NULL values for many_to_many associations with composite keys" do @Album.filter(:ctags=>@Tag.load(:tid1=>3)).sql.should == 'SELECT * FROM albums WHERE \'f\'' @Album.filter(:ctags=>@Tag.load(:tid2=>4)).sql.should == 'SELECT * FROM albums WHERE \'f\'' @Album.filter(:ctags=>@Tag.new).sql.should == 'SELECT * FROM albums WHERE \'f\'' end it "should be able to handle NULL values when excluding many_to_one associations" do @Album.exclude(:artist=>@Artist.new).sql.should == 'SELECT * FROM albums WHERE \'t\'' end it "should be able to handle NULL values when excluding one_to_many associations" do @Album.exclude(:tracks=>@Track.new).sql.should == 'SELECT * FROM albums WHERE \'t\'' end it "should be able to handle NULL values when excluding one_to_one associations" do @Album.exclude(:album_info=>@AlbumInfo.new).sql.should == 'SELECT * FROM albums WHERE \'t\'' end it "should be able to handle NULL values when excluding many_to_many associations" do @Album.exclude(:tags=>@Tag.new).sql.should == 'SELECT * FROM albums WHERE \'t\'' end it 
"should be able to handle excluding with NULL values for many_to_one associations with composite keys" do @Album.exclude(:cartist=>@Artist.load(:id2=>4)).sql.should == 'SELECT * FROM albums WHERE \'t\'' @Album.exclude(:cartist=>@Artist.load(:id1=>3)).sql.should == 'SELECT * FROM albums WHERE \'t\'' @Album.exclude(:cartist=>@Artist.new).sql.should == 'SELECT * FROM albums WHERE \'t\'' end it "should be able to excluding with NULL values for one_to_many associations with composite keys" do @Album.exclude(:ctracks=>@Track.load(:album_id2=>4)).sql.should == 'SELECT * FROM albums WHERE \'t\'' @Album.exclude(:ctracks=>@Track.load(:album_id1=>3)).sql.should == 'SELECT * FROM albums WHERE \'t\'' @Album.exclude(:ctracks=>@Track.new).sql.should == 'SELECT * FROM albums WHERE \'t\'' end it "should be able to excluding with NULL values for one_to_one associations with composite keys" do @Album.exclude(:calbum_info=>@AlbumInfo.load(:album_id2=>4)).sql.should == 'SELECT * FROM albums WHERE \'t\'' @Album.exclude(:calbum_info=>@AlbumInfo.load(:album_id1=>3)).sql.should == 'SELECT * FROM albums WHERE \'t\'' @Album.exclude(:calbum_info=>@AlbumInfo.new).sql.should == 'SELECT * FROM albums WHERE \'t\'' end it "should be able to excluding with NULL values for many_to_many associations with composite keys" do @Album.exclude(:ctags=>@Tag.load(:tid1=>3)).sql.should == 'SELECT * FROM albums WHERE \'t\'' @Album.exclude(:ctags=>@Tag.load(:tid2=>4)).sql.should == 'SELECT * FROM albums WHERE \'t\'' @Album.exclude(:ctags=>@Tag.new).sql.should == 'SELECT * FROM albums WHERE \'t\'' end it "should be able to handle NULL values when filtering multiple many_to_one associations" do @Album.filter(:artist=>[@Artist.load(:id=>3), @Artist.new]).sql.should == 'SELECT * FROM albums WHERE (albums.artist_id IN (3))' @Album.filter(:artist=>[@Artist.new, @Artist.new]).sql.should == 'SELECT * FROM albums WHERE \'f\'' end it "should be able to handle NULL values when filtering multiple one_to_many associations" do @Album.filter(:tracks=>[@Track.load(:album_id=>3), @Track.new]).sql.should == 'SELECT * FROM albums WHERE (albums.id IN (3))' @Album.filter(:tracks=>[@Track.new, @Track.new]).sql.should == 'SELECT * FROM albums WHERE \'f\'' end it "should be able to handle NULL values when filtering multiple one_to_one associations" do @Album.filter(:album_info=>[@AlbumInfo.load(:album_id=>3), @AlbumInfo.new]).sql.should == 'SELECT * FROM albums WHERE (albums.id IN (3))' @Album.filter(:album_info=>[@AlbumInfo.new, @AlbumInfo.new]).sql.should == 'SELECT * FROM albums WHERE \'f\'' end it "should be able to handle NULL values when filtering multiple many_to_many associations" do @Album.filter(:tags=>[@Tag.load(:id=>3), @Tag.new]).sql.should == 'SELECT * FROM albums WHERE (albums.id IN (SELECT albums_tags.album_id FROM albums_tags WHERE ((albums_tags.tag_id IN (3)) AND (albums_tags.album_id IS NOT NULL))))' @Album.filter(:tags=>[@Tag.new, @Tag.new]).sql.should == 'SELECT * FROM albums WHERE \'f\'' end it "should be able to handle NULL values when filtering multiple many_to_one associations with composite keys" do @Album.filter(:cartist=>[@Artist.load(:id1=>3, :id2=>4), @Artist.load(:id1=>3)]).sql.should == 'SELECT * FROM albums WHERE ((albums.artist_id1, albums.artist_id2) IN ((3, 4)))' @Album.filter(:cartist=>[@Artist.load(:id1=>3, :id2=>4), @Artist.new]).sql.should == 'SELECT * FROM albums WHERE ((albums.artist_id1, albums.artist_id2) IN ((3, 4)))' end it "should be able handle NULL values when filtering multiple one_to_many associations with 
composite keys" do @Album.filter(:ctracks=>[@Track.load(:album_id1=>3, :album_id2=>4), @Track.load(:album_id1=>3)]).sql.should == 'SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN ((3, 4)))' @Album.filter(:ctracks=>[@Track.load(:album_id1=>3, :album_id2=>4), @Track.new]).sql.should == 'SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN ((3, 4)))' end it "should be able to handle NULL values when filtering multiple one_to_one associations with composite keys" do @Album.filter(:calbum_info=>[@AlbumInfo.load(:album_id1=>3, :album_id2=>4), @AlbumInfo.load(:album_id1=>5)]).sql.should == 'SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN ((3, 4)))' @Album.filter(:calbum_info=>[@AlbumInfo.load(:album_id1=>3, :album_id2=>4), @AlbumInfo.new]).sql.should == 'SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN ((3, 4)))' end it "should be able to handle NULL values when filtering multiple many_to_many associations with composite keys" do @Album.filter(:ctags=>[@Tag.load(:tid1=>3, :tid2=>4), @Tag.load(:tid1=>5)]).sql.should == 'SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT albums_tags.album_id1, albums_tags.album_id2 FROM albums_tags WHERE (((albums_tags.tag_id1, albums_tags.tag_id2) IN ((3, 4))) AND (albums_tags.album_id1 IS NOT NULL) AND (albums_tags.album_id2 IS NOT NULL))))' @Album.filter(:ctags=>[@Tag.load(:tid1=>3, :tid2=>4), @Tag.new]).sql.should == 'SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT albums_tags.album_id1, albums_tags.album_id2 FROM albums_tags WHERE (((albums_tags.tag_id1, albums_tags.tag_id2) IN ((3, 4))) AND (albums_tags.album_id1 IS NOT NULL) AND (albums_tags.album_id2 IS NOT NULL))))' end it "should be able to handle NULL values when excluding multiple many_to_one associations" do @Album.exclude(:artist=>[@Artist.load(:id=>3), @Artist.new]).sql.should == 'SELECT * FROM albums WHERE ((albums.artist_id NOT IN (3)) OR (albums.artist_id IS NULL))' @Album.exclude(:artist=>[@Artist.new, @Artist.new]).sql.should == 'SELECT * FROM albums WHERE \'t\'' end it "should be able to handle NULL values when excluding multiple one_to_many associations" do @Album.exclude(:tracks=>[@Track.load(:album_id=>3), @Track.new]).sql.should == 'SELECT * FROM albums WHERE ((albums.id NOT IN (3)) OR (albums.id IS NULL))' @Album.exclude(:tracks=>[@Track.new, @Track.new]).sql.should == 'SELECT * FROM albums WHERE \'t\'' end it "should be able to handle NULL values when excluding multiple one_to_one associations" do @Album.exclude(:album_info=>[@AlbumInfo.load(:album_id=>3), @AlbumInfo.new]).sql.should == 'SELECT * FROM albums WHERE ((albums.id NOT IN (3)) OR (albums.id IS NULL))' @Album.exclude(:album_info=>[@AlbumInfo.new, @AlbumInfo.new]).sql.should == 'SELECT * FROM albums WHERE \'t\'' end it "should be able to handle NULL values when excluding multiple many_to_many associations" do @Album.exclude(:tags=>[@Tag.load(:id=>3), @Tag.new]).sql.should == 'SELECT * FROM albums WHERE ((albums.id NOT IN (SELECT albums_tags.album_id FROM albums_tags WHERE ((albums_tags.tag_id IN (3)) AND (albums_tags.album_id IS NOT NULL)))) OR (albums.id IS NULL))' @Album.exclude(:tags=>[@Tag.new, @Tag.new]).sql.should == 'SELECT * FROM albums WHERE \'t\'' end it "should be able to handle NULL values when excluding multiple many_to_one associations with composite keys" do @Album.exclude(:cartist=>[@Artist.load(:id1=>3, :id2=>4), @Artist.load(:id1=>3)]).sql.should == 'SELECT * FROM albums WHERE (((albums.artist_id1, albums.artist_id2) NOT IN ((3, 4))) OR (albums.artist_id1 IS 
NULL) OR (albums.artist_id2 IS NULL))' @Album.exclude(:cartist=>[@Artist.load(:id1=>3, :id2=>4), @Artist.new]).sql.should == 'SELECT * FROM albums WHERE (((albums.artist_id1, albums.artist_id2) NOT IN ((3, 4))) OR (albums.artist_id1 IS NULL) OR (albums.artist_id2 IS NULL))' end it "should be able to handle NULL values when excluding multiple one_to_many associations with composite keys" do @Album.exclude(:ctracks=>[@Track.load(:album_id1=>3, :album_id2=>4), @Track.load(:album_id1=>3)]).sql.should == 'SELECT * FROM albums WHERE (((albums.id1, albums.id2) NOT IN ((3, 4))) OR (albums.id1 IS NULL) OR (albums.id2 IS NULL))' @Album.exclude(:ctracks=>[@Track.load(:album_id1=>3, :album_id2=>4), @Track.new]).sql.should == 'SELECT * FROM albums WHERE (((albums.id1, albums.id2) NOT IN ((3, 4))) OR (albums.id1 IS NULL) OR (albums.id2 IS NULL))' end it "should be able to handle NULL values when excluding multiple one_to_one associations with composite keys" do @Album.exclude(:calbum_info=>[@AlbumInfo.load(:album_id1=>3, :album_id2=>4), @AlbumInfo.load(:album_id1=>5)]).sql.should == 'SELECT * FROM albums WHERE (((albums.id1, albums.id2) NOT IN ((3, 4))) OR (albums.id1 IS NULL) OR (albums.id2 IS NULL))' @Album.exclude(:calbum_info=>[@AlbumInfo.load(:album_id1=>3, :album_id2=>4), @AlbumInfo.new]).sql.should == 'SELECT * FROM albums WHERE (((albums.id1, albums.id2) NOT IN ((3, 4))) OR (albums.id1 IS NULL) OR (albums.id2 IS NULL))' end it "should be able to handle NULL values when excluding multiple many_to_many associations with composite keys" do @Album.exclude(:ctags=>[@Tag.load(:tid1=>3, :tid2=>4), @Tag.load(:tid1=>5)]).sql.should == 'SELECT * FROM albums WHERE (((albums.id1, albums.id2) NOT IN (SELECT albums_tags.album_id1, albums_tags.album_id2 FROM albums_tags WHERE (((albums_tags.tag_id1, albums_tags.tag_id2) IN ((3, 4))) AND (albums_tags.album_id1 IS NOT NULL) AND (albums_tags.album_id2 IS NOT NULL)))) OR (albums.id1 IS NULL) OR (albums.id2 IS NULL))' @Album.exclude(:ctags=>[@Tag.load(:tid1=>3, :tid2=>4), @Tag.new]).sql.should == 'SELECT * FROM albums WHERE (((albums.id1, albums.id2) NOT IN (SELECT albums_tags.album_id1, albums_tags.album_id2 FROM albums_tags WHERE (((albums_tags.tag_id1, albums_tags.tag_id2) IN ((3, 4))) AND (albums_tags.album_id1 IS NOT NULL) AND (albums_tags.album_id2 IS NOT NULL)))) OR (albums.id1 IS NULL) OR (albums.id2 IS NULL))' end it "should be able to filter on many_to_one association datasets" do @Album.filter(:artist=>@Artist.filter(:x=>1)).sql.should == 'SELECT * FROM albums WHERE (albums.artist_id IN (SELECT artists.id FROM artists WHERE ((x = 1) AND (artists.id IS NOT NULL))))' end it "should be able to filter on one_to_many association datasets" do @Album.filter(:tracks=>@Track.filter(:x=>1)).sql.should == 'SELECT * FROM albums WHERE (albums.id IN (SELECT tracks.album_id FROM tracks WHERE ((x = 1) AND (tracks.album_id IS NOT NULL))))' end it "should be able to filter on one_to_one association datasets" do @Album.filter(:album_info=>@AlbumInfo.filter(:x=>1)).sql.should == 'SELECT * FROM albums WHERE (albums.id IN (SELECT album_infos.album_id FROM album_infos WHERE ((x = 1) AND (album_infos.album_id IS NOT NULL))))' end it "should be able to filter on many_to_many association datasets" do @Album.filter(:tags=>@Tag.filter(:x=>1)).sql.should == 'SELECT * FROM albums WHERE (albums.id IN (SELECT albums_tags.album_id FROM albums_tags WHERE ((albums_tags.tag_id IN (SELECT tags.id FROM tags WHERE ((x = 1) AND (tags.id IS NOT NULL)))) AND (albums_tags.album_id IS NOT NULL))))'
end it "should be able to filter on many_to_one association datasets with composite keys" do @Album.filter(:cartist=>@Artist.filter(:x=>1)).sql.should == 'SELECT * FROM albums WHERE ((albums.artist_id1, albums.artist_id2) IN (SELECT artists.id1, artists.id2 FROM artists WHERE ((x = 1) AND (artists.id1 IS NOT NULL) AND (artists.id2 IS NOT NULL))))' end it "should be able to filter on one_to_many association datasets with composite keys" do @Album.filter(:ctracks=>@Track.filter(:x=>1)).sql.should == 'SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT tracks.album_id1, tracks.album_id2 FROM tracks WHERE ((x = 1) AND (tracks.album_id1 IS NOT NULL) AND (tracks.album_id2 IS NOT NULL))))' end it "should be able to filter on one_to_one association datasets with composite keys" do @Album.filter(:calbum_info=>@AlbumInfo.filter(:x=>1)).sql.should == 'SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT album_infos.album_id1, album_infos.album_id2 FROM album_infos WHERE ((x = 1) AND (album_infos.album_id1 IS NOT NULL) AND (album_infos.album_id2 IS NOT NULL))))' end it "should be able to filter on many_to_many association datasets with composite keys" do @Album.filter(:ctags=>@Tag.filter(:x=>1)).sql.should == 'SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT albums_tags.album_id1, albums_tags.album_id2 FROM albums_tags WHERE (((albums_tags.tag_id1, albums_tags.tag_id2) IN (SELECT tags.tid1, tags.tid2 FROM tags WHERE ((x = 1) AND (tags.tid1 IS NOT NULL) AND (tags.tid2 IS NOT NULL)))) AND (albums_tags.album_id1 IS NOT NULL) AND (albums_tags.album_id2 IS NOT NULL))))' end it "should be able to exclude on many_to_one association datasets" do @Album.exclude(:artist=>@Artist.filter(:x=>1)).sql.should == 'SELECT * FROM albums WHERE ((albums.artist_id NOT IN (SELECT artists.id FROM artists WHERE ((x = 1) AND (artists.id IS NOT NULL)))) OR (albums.artist_id IS NULL))' end it "should be able to exclude on one_to_many association datasets" do @Album.exclude(:tracks=>@Track.filter(:x=>1)).sql.should == 'SELECT * FROM albums WHERE ((albums.id NOT IN (SELECT tracks.album_id FROM tracks WHERE ((x = 1) AND (tracks.album_id IS NOT NULL)))) OR (albums.id IS NULL))' end it "should be able to exclude on one_to_one association datasets" do @Album.exclude(:album_info=>@AlbumInfo.filter(:x=>1)).sql.should == 'SELECT * FROM albums WHERE ((albums.id NOT IN (SELECT album_infos.album_id FROM album_infos WHERE ((x = 1) AND (album_infos.album_id IS NOT NULL)))) OR (albums.id IS NULL))' end it "should be able to exclude on many_to_many association datasets" do @Album.exclude(:tags=>@Tag.filter(:x=>1)).sql.should == 'SELECT * FROM albums WHERE ((albums.id NOT IN (SELECT albums_tags.album_id FROM albums_tags WHERE ((albums_tags.tag_id IN (SELECT tags.id FROM tags WHERE ((x = 1) AND (tags.id IS NOT NULL)))) AND (albums_tags.album_id IS NOT NULL)))) OR (albums.id IS NULL))' end it "should be able to exclude on many_to_one association datasets with composite keys" do @Album.exclude(:cartist=>@Artist.filter(:x=>1)).sql.should == 'SELECT * FROM albums WHERE (((albums.artist_id1, albums.artist_id2) NOT IN (SELECT artists.id1, artists.id2 FROM artists WHERE ((x = 1) AND (artists.id1 IS NOT NULL) AND (artists.id2 IS NOT NULL)))) OR (albums.artist_id1 IS NULL) OR (albums.artist_id2 IS NULL))' end it "should be able to exclude on one_to_many association datasets with composite keys" do @Album.exclude(:ctracks=>@Track.filter(:x=>1)).sql.should == 'SELECT * FROM albums WHERE (((albums.id1, albums.id2) NOT 
IN (SELECT tracks.album_id1, tracks.album_id2 FROM tracks WHERE ((x = 1) AND (tracks.album_id1 IS NOT NULL) AND (tracks.album_id2 IS NOT NULL)))) OR (albums.id1 IS NULL) OR (albums.id2 IS NULL))' end it "should be able to exclude on one_to_one association datasets with composite keys" do @Album.exclude(:calbum_info=>@AlbumInfo.filter(:x=>1)).sql.should == 'SELECT * FROM albums WHERE (((albums.id1, albums.id2) NOT IN (SELECT album_infos.album_id1, album_infos.album_id2 FROM album_infos WHERE ((x = 1) AND (album_infos.album_id1 IS NOT NULL) AND (album_infos.album_id2 IS NOT NULL)))) OR (albums.id1 IS NULL) OR (albums.id2 IS NULL))' end it "should be able to exclude on many_to_many association datasets with composite keys" do @Album.exclude(:ctags=>@Tag.filter(:x=>1)).sql.should == 'SELECT * FROM albums WHERE (((albums.id1, albums.id2) NOT IN (SELECT albums_tags.album_id1, albums_tags.album_id2 FROM albums_tags WHERE (((albums_tags.tag_id1, albums_tags.tag_id2) IN (SELECT tags.tid1, tags.tid2 FROM tags WHERE ((x = 1) AND (tags.tid1 IS NOT NULL) AND (tags.tid2 IS NOT NULL)))) AND (albums_tags.album_id1 IS NOT NULL) AND (albums_tags.album_id2 IS NOT NULL)))) OR (albums.id1 IS NULL) OR (albums.id2 IS NULL))' end it "should do a regular IN query if the dataset for a different model is used" do @Album.filter(:artist=>@Album.select(:x)).sql.should == 'SELECT * FROM albums WHERE (artist IN (SELECT x FROM albums))' end it "should do a regular IN query if a non-model dataset is used" do @Album.filter(:artist=>@Album.db.from(:albums).select(:x)).sql.should == 'SELECT * FROM albums WHERE (artist IN (SELECT x FROM albums))' end end describe "Sequel::Model Associations with clashing column names" do before do @db = Sequel.mock(:fetch=>{:id=>1, :object_id=>2}) @Foo = Class.new(Sequel::Model(@db[:foos])) @Bar = Class.new(Sequel::Model(@db[:bars])) @Foo.columns :id, :object_id @Bar.columns :id, :object_id @Foo.def_column_alias(:obj_id, :object_id) @Bar.def_column_alias(:obj_id, :object_id) @Foo.one_to_many :bars, :primary_key=>:obj_id, :primary_key_column=>:object_id, :key=>:object_id, :key_method=>:obj_id, :class=>@Bar @Foo.one_to_one :bar, :primary_key=>:obj_id, :primary_key_column=>:object_id, :key=>:object_id, :key_method=>:obj_id, :class=>@Bar @Bar.many_to_one :foo, :key=>:obj_id, :key_column=>:object_id, :primary_key=>:object_id, :primary_key_method=>:obj_id, :class=>@Foo @Foo.many_to_many :mtmbars, :join_table=>:bars_foos, :left_primary_key=>:obj_id, :left_primary_key_column=>:object_id, :right_primary_key=>:object_id, :right_primary_key_method=>:obj_id, :left_key=>:foo_id, :right_key=>:object_id, :class=>@Bar @Bar.many_to_many :mtmfoos, :join_table=>:bars_foos, :left_primary_key=>:obj_id, :left_primary_key_column=>:object_id, :right_primary_key=>:object_id, :right_primary_key_method=>:obj_id, :left_key=>:object_id, :right_key=>:foo_id, :class=>@Foo @foo = @Foo.load(:id=>1, :object_id=>2) @bar = @Bar.load(:id=>1, :object_id=>2) @db.sqls end it "should have working regular association methods" do @Bar.first.foo.should == @foo @db.sqls.should == ["SELECT * FROM bars LIMIT 1", "SELECT * FROM foos WHERE (foos.object_id = 2) LIMIT 1"] @Foo.first.bars.should == [@bar] @db.sqls.should == ["SELECT * FROM foos LIMIT 1", "SELECT * FROM bars WHERE (bars.object_id = 2)"] @Foo.first.bar.should == @bar @db.sqls.should == ["SELECT * FROM foos LIMIT 1", "SELECT * FROM bars WHERE (bars.object_id = 2) LIMIT 1"] @Foo.first.mtmbars.should == [@bar] @db.sqls.should == ["SELECT * FROM foos LIMIT 1", "SELECT bars.* FROM 
bars INNER JOIN bars_foos ON ((bars_foos.object_id = bars.object_id) AND (bars_foos.foo_id = 2))"] @Bar.first.mtmfoos.should == [@foo] @db.sqls.should == ["SELECT * FROM bars LIMIT 1", "SELECT foos.* FROM foos INNER JOIN bars_foos ON ((bars_foos.foo_id = foos.object_id) AND (bars_foos.object_id = 2))"] end it "should have working eager loading methods" do @Bar.eager(:foo).all.map{|o| [o, o.foo]}.should == [[@bar, @foo]] @db.sqls.should == ["SELECT * FROM bars", "SELECT * FROM foos WHERE (foos.object_id IN (2))"] @Foo.eager(:bars).all.map{|o| [o, o.bars]}.should == [[@foo, [@bar]]] @db.sqls.should == ["SELECT * FROM foos", "SELECT * FROM bars WHERE (bars.object_id IN (2))"] @Foo.eager(:bar).all.map{|o| [o, o.bar]}.should == [[@foo, @bar]] @db.sqls.should == ["SELECT * FROM foos", "SELECT * FROM bars WHERE (bars.object_id IN (2))"] @db.fetch = [[{:id=>1, :object_id=>2}], [{:id=>1, :object_id=>2, :x_foreign_key_x=>2}]] @Foo.eager(:mtmbars).all.map{|o| [o, o.mtmbars]}.should == [[@foo, [@bar]]] @db.sqls.should == ["SELECT * FROM foos", "SELECT bars.*, bars_foos.foo_id AS x_foreign_key_x FROM bars INNER JOIN bars_foos ON ((bars_foos.object_id = bars.object_id) AND (bars_foos.foo_id IN (2)))"] @db.fetch = [[{:id=>1, :object_id=>2}], [{:id=>1, :object_id=>2, :x_foreign_key_x=>2}]] @Bar.eager(:mtmfoos).all.map{|o| [o, o.mtmfoos]}.should == [[@bar, [@foo]]] @db.sqls.should == ["SELECT * FROM bars", "SELECT foos.*, bars_foos.object_id AS x_foreign_key_x FROM foos INNER JOIN bars_foos ON ((bars_foos.foo_id = foos.object_id) AND (bars_foos.object_id IN (2)))"] end it "should have working eager graphing methods" do @db.fetch = {:id=>1, :object_id=>2, :foo_id=>1, :foo_object_id=>2} @Bar.eager_graph(:foo).all.map{|o| [o, o.foo]}.should == [[@bar, @foo]] @db.sqls.should == ["SELECT bars.id, bars.object_id, foo.id AS foo_id, foo.object_id AS foo_object_id FROM bars LEFT OUTER JOIN foos AS foo ON (foo.object_id = bars.object_id)"] @db.fetch = {:id=>1, :object_id=>2, :bars_id=>1, :bars_object_id=>2} @Foo.eager_graph(:bars).all.map{|o| [o, o.bars]}.should == [[@foo, [@bar]]] @db.sqls.should == ["SELECT foos.id, foos.object_id, bars.id AS bars_id, bars.object_id AS bars_object_id FROM foos LEFT OUTER JOIN bars ON (bars.object_id = foos.object_id)"] @db.fetch = {:id=>1, :object_id=>2, :bar_id=>1, :bar_object_id=>2} @Foo.eager_graph(:bar).all.map{|o| [o, o.bar]}.should == [[@foo, @bar]] @db.sqls.should == ["SELECT foos.id, foos.object_id, bar.id AS bar_id, bar.object_id AS bar_object_id FROM foos LEFT OUTER JOIN bars AS bar ON (bar.object_id = foos.object_id)"] @db.fetch = {:id=>1, :object_id=>2, :mtmfoos_id=>1, :mtmfoos_object_id=>2} @Bar.eager_graph(:mtmfoos).all.map{|o| [o, o.mtmfoos]}.should == [[@bar, [@foo]]] @db.sqls.should == ["SELECT bars.id, bars.object_id, mtmfoos.id AS mtmfoos_id, mtmfoos.object_id AS mtmfoos_object_id FROM bars LEFT OUTER JOIN bars_foos ON (bars_foos.object_id = bars.object_id) LEFT OUTER JOIN foos AS mtmfoos ON (mtmfoos.object_id = bars_foos.foo_id)"] @db.fetch = {:id=>1, :object_id=>2, :mtmbars_id=>1, :mtmbars_object_id=>2} @Foo.eager_graph(:mtmbars).all.map{|o| [o, o.mtmbars]}.should == [[@foo, [@bar]]] @db.sqls.should == ["SELECT foos.id, foos.object_id, mtmbars.id AS mtmbars_id, mtmbars.object_id AS mtmbars_object_id FROM foos LEFT OUTER JOIN bars_foos ON (bars_foos.foo_id = foos.object_id) LEFT OUTER JOIN bars AS mtmbars ON (mtmbars.object_id = bars_foos.object_id)"] end it "should have working filter by associations with model instances" do @Bar.first(:foo=>@foo).should == 
@bar @db.sqls.should == ["SELECT * FROM bars WHERE (bars.object_id = 2) LIMIT 1"] @Foo.first(:bars=>@bar).should == @foo @db.sqls.should == ["SELECT * FROM foos WHERE (foos.object_id = 2) LIMIT 1"] @Foo.first(:bar=>@bar).should == @foo @db.sqls.should == ["SELECT * FROM foos WHERE (foos.object_id = 2) LIMIT 1"] @Foo.first(:mtmbars=>@bar).should == @foo @db.sqls.should == ["SELECT * FROM foos WHERE (foos.object_id IN (SELECT bars_foos.foo_id FROM bars_foos WHERE ((bars_foos.object_id = 2) AND (bars_foos.foo_id IS NOT NULL)))) LIMIT 1"] @Bar.first(:mtmfoos=>@foo).should == @bar @db.sqls.should == ["SELECT * FROM bars WHERE (bars.object_id IN (SELECT bars_foos.object_id FROM bars_foos WHERE ((bars_foos.foo_id = 2) AND (bars_foos.object_id IS NOT NULL)))) LIMIT 1"] end it "should have working modification methods" do b = @Bar.load(:id=>2, :object_id=>3) f = @Foo.load(:id=>2, :object_id=>3) @db.numrows = 1 @bar.foo = f @bar.obj_id.should == 3 @foo.bar = @bar @bar.obj_id.should == 2 @foo.add_bar(b) @db.fetch = [[{:id=>1, :object_id=>2}, {:id=>2, :object_id=>2}], [{:id=>1, :object_id=>2}]] @foo.bars.should == [@bar, b] @foo.remove_bar(b) @foo.bars.should == [@bar] @foo.remove_all_bars @foo.bars.should == [] @db.fetch = [[{:id=>1, :object_id=>2}], [], [{:id=>2, :object_id=>2}]] @bar = @Bar.load(:id=>1, :object_id=>2) @foo.mtmbars.should == [@bar] @foo.remove_all_mtmbars @foo.mtmbars.should == [] @foo.add_mtmbar(b) @foo.mtmbars.should == [b] @foo.remove_mtmbar(b) @foo.mtmbars.should == [] @db.fetch = [[{:id=>2, :object_id=>3}], [], [{:id=>2, :object_id=>3}]] @bar.add_mtmfoo(f) @bar.mtmfoos.should == [f] @bar.remove_all_mtmfoos @bar.mtmfoos.should == [] @bar.add_mtmfoo(f) @bar.mtmfoos.should == [f] @bar.remove_mtmfoo(f) @bar.mtmfoos.should == [] end end describe "Sequel::Model Associations with non-column expression keys" do before do @db = Sequel.mock(:fetch=>{:id=>1, :object_ids=>[2]}) @Foo = Class.new(Sequel::Model(@db[:foos])) @Bar = Class.new(Sequel::Model(@db[:bars])) @Foo.columns :id, :object_ids @Bar.columns :id, :object_ids m = Module.new{def obj_id; object_ids[0]; end} @Foo.include m @Bar.include m @Foo.one_to_many :bars, :primary_key=>:obj_id, :primary_key_column=>Sequel.subscript(:object_ids, 0), :key=>Sequel.subscript(:object_ids, 0), :key_method=>:obj_id, :class=>@Bar @Foo.one_to_one :bar, :primary_key=>:obj_id, :primary_key_column=>Sequel.subscript(:object_ids, 0), :key=>Sequel.subscript(:object_ids, 0), :key_method=>:obj_id, :class=>@Bar @Bar.many_to_one :foo, :key=>:obj_id, :key_column=>Sequel.subscript(:object_ids, 0), :primary_key=>Sequel.subscript(:object_ids, 0), :primary_key_method=>:obj_id, :class=>@Foo @Foo.many_to_many :mtmbars, :join_table=>:bars_foos, :left_primary_key=>:obj_id, :left_primary_key_column=>Sequel.subscript(:object_ids, 0), :right_primary_key=>Sequel.subscript(:object_ids, 0), :right_primary_key_method=>:obj_id, :left_key=>Sequel.subscript(:foo_ids, 0), :right_key=>Sequel.subscript(:bar_ids, 0), :class=>@Bar @Bar.many_to_many :mtmfoos, :join_table=>:bars_foos, :left_primary_key=>:obj_id, :left_primary_key_column=>Sequel.subscript(:object_ids, 0), :right_primary_key=>Sequel.subscript(:object_ids, 0), :right_primary_key_method=>:obj_id, :left_key=>Sequel.subscript(:bar_ids, 0), :right_key=>Sequel.subscript(:foo_ids, 0), :class=>@Foo, :reciprocal=>nil @foo = @Foo.load(:id=>1, :object_ids=>[2]) @bar = @Bar.load(:id=>1, :object_ids=>[2]) @db.sqls end it "should have working regular association methods" do @Bar.first.foo.should == @foo @db.sqls.should == ["SELECT * 
FROM bars LIMIT 1", "SELECT * FROM foos WHERE (foos.object_ids[0] = 2) LIMIT 1"] @Foo.first.bars.should == [@bar] @db.sqls.should == ["SELECT * FROM foos LIMIT 1", "SELECT * FROM bars WHERE (bars.object_ids[0] = 2)"] @Foo.first.bar.should == @bar @db.sqls.should == ["SELECT * FROM foos LIMIT 1", "SELECT * FROM bars WHERE (bars.object_ids[0] = 2) LIMIT 1"] @Foo.first.mtmbars.should == [@bar] @db.sqls.should == ["SELECT * FROM foos LIMIT 1", "SELECT bars.* FROM bars INNER JOIN bars_foos ON ((bars_foos.bar_ids[0] = bars.object_ids[0]) AND (bars_foos.foo_ids[0] = 2))"] @Bar.first.mtmfoos.should == [@foo] @db.sqls.should == ["SELECT * FROM bars LIMIT 1", "SELECT foos.* FROM foos INNER JOIN bars_foos ON ((bars_foos.foo_ids[0] = foos.object_ids[0]) AND (bars_foos.bar_ids[0] = 2))"] end it "should have working eager loading methods" do @Bar.eager(:foo).all.map{|o| [o, o.foo]}.should == [[@bar, @foo]] @db.sqls.should == ["SELECT * FROM bars", "SELECT * FROM foos WHERE (foos.object_ids[0] IN (2))"] @Foo.eager(:bars).all.map{|o| [o, o.bars]}.should == [[@foo, [@bar]]] @db.sqls.should == ["SELECT * FROM foos", "SELECT * FROM bars WHERE (bars.object_ids[0] IN (2))"] @Foo.eager(:bar).all.map{|o| [o, o.bar]}.should == [[@foo, @bar]] @db.sqls.should == ["SELECT * FROM foos", "SELECT * FROM bars WHERE (bars.object_ids[0] IN (2))"] @db.fetch = [[{:id=>1, :object_ids=>[2]}], [{:id=>1, :object_ids=>[2], :x_foreign_key_x=>2}]] @Foo.eager(:mtmbars).all.map{|o| [o, o.mtmbars]}.should == [[@foo, [@bar]]] @db.sqls.should == ["SELECT * FROM foos", "SELECT bars.*, bars_foos.foo_ids[0] AS x_foreign_key_x FROM bars INNER JOIN bars_foos ON ((bars_foos.bar_ids[0] = bars.object_ids[0]) AND (bars_foos.foo_ids[0] IN (2)))"] @db.fetch = [[{:id=>1, :object_ids=>[2]}], [{:id=>1, :object_ids=>[2], :x_foreign_key_x=>2}]] @Bar.eager(:mtmfoos).all.map{|o| [o, o.mtmfoos]}.should == [[@bar, [@foo]]] @db.sqls.should == ["SELECT * FROM bars", "SELECT foos.*, bars_foos.bar_ids[0] AS x_foreign_key_x FROM foos INNER JOIN bars_foos ON ((bars_foos.foo_ids[0] = foos.object_ids[0]) AND (bars_foos.bar_ids[0] IN (2)))"] end it "should have working eager graphing methods" do @db.fetch = {:id=>1, :object_ids=>[2], :foo_id=>1, :foo_object_ids=>[2]} @Bar.eager_graph(:foo).all.map{|o| [o, o.foo]}.should == [[@bar, @foo]] @db.sqls.should == ["SELECT bars.id, bars.object_ids, foo.id AS foo_id, foo.object_ids AS foo_object_ids FROM bars LEFT OUTER JOIN foos AS foo ON (foo.object_ids[0] = bars.object_ids[0])"] @db.fetch = {:id=>1, :object_ids=>[2], :bars_id=>1, :bars_object_ids=>[2]} @Foo.eager_graph(:bars).all.map{|o| [o, o.bars]}.should == [[@foo, [@bar]]] @db.sqls.should == ["SELECT foos.id, foos.object_ids, bars.id AS bars_id, bars.object_ids AS bars_object_ids FROM foos LEFT OUTER JOIN bars ON (bars.object_ids[0] = foos.object_ids[0])"] @db.fetch = {:id=>1, :object_ids=>[2], :bar_id=>1, :bar_object_ids=>[2]} @Foo.eager_graph(:bar).all.map{|o| [o, o.bar]}.should == [[@foo, @bar]] @db.sqls.should == ["SELECT foos.id, foos.object_ids, bar.id AS bar_id, bar.object_ids AS bar_object_ids FROM foos LEFT OUTER JOIN bars AS bar ON (bar.object_ids[0] = foos.object_ids[0])"] @db.fetch = {:id=>1, :object_ids=>[2], :mtmfoos_id=>1, :mtmfoos_object_ids=>[2]} @Bar.eager_graph(:mtmfoos).all.map{|o| [o, o.mtmfoos]}.should == [[@bar, [@foo]]] @db.sqls.should == ["SELECT bars.id, bars.object_ids, mtmfoos.id AS mtmfoos_id, mtmfoos.object_ids AS mtmfoos_object_ids FROM bars LEFT OUTER JOIN bars_foos ON (bars_foos.bar_ids[0] = bars.object_ids[0]) LEFT OUTER JOIN foos 
AS mtmfoos ON (mtmfoos.object_ids[0] = bars_foos.foo_ids[0])"] @db.fetch = {:id=>1, :object_ids=>[2], :mtmbars_id=>1, :mtmbars_object_ids=>[2]} @Foo.eager_graph(:mtmbars).all.map{|o| [o, o.mtmbars]}.should == [[@foo, [@bar]]] @db.sqls.should == ["SELECT foos.id, foos.object_ids, mtmbars.id AS mtmbars_id, mtmbars.object_ids AS mtmbars_object_ids FROM foos LEFT OUTER JOIN bars_foos ON (bars_foos.foo_ids[0] = foos.object_ids[0]) LEFT OUTER JOIN bars AS mtmbars ON (mtmbars.object_ids[0] = bars_foos.bar_ids[0])"] end it "should have working filter by associations with model instances" do @Bar.first(:foo=>@foo).should == @bar @db.sqls.should == ["SELECT * FROM bars WHERE (bars.object_ids[0] = 2) LIMIT 1"] @Foo.first(:bars=>@bar).should == @foo @db.sqls.should == ["SELECT * FROM foos WHERE (foos.object_ids[0] = 2) LIMIT 1"] @Foo.first(:bar=>@bar).should == @foo @db.sqls.should == ["SELECT * FROM foos WHERE (foos.object_ids[0] = 2) LIMIT 1"] @Foo.first(:mtmbars=>@bar).should == @foo @db.sqls.should == ["SELECT * FROM foos WHERE (foos.object_ids[0] IN (SELECT bars_foos.foo_ids[0] FROM bars_foos WHERE ((bars_foos.bar_ids[0] = 2) AND (bars_foos.foo_ids[0] IS NOT NULL)))) LIMIT 1"] @Bar.first(:mtmfoos=>@foo).should == @bar @db.sqls.should == ["SELECT * FROM bars WHERE (bars.object_ids[0] IN (SELECT bars_foos.bar_ids[0] FROM bars_foos WHERE ((bars_foos.foo_ids[0] = 2) AND (bars_foos.bar_ids[0] IS NOT NULL)))) LIMIT 1"] end it "should have working filter by associations with model datasets" do @Bar.first(:foo=>@Foo.where(:id=>@foo.id)).should == @bar @db.sqls.should == ["SELECT * FROM bars WHERE (bars.object_ids[0] IN (SELECT foos.object_ids[0] FROM foos WHERE ((id = 1) AND (foos.object_ids[0] IS NOT NULL)))) LIMIT 1"] @Foo.first(:bars=>@Bar.where(:id=>@bar.id)).should == @foo @db.sqls.should == ["SELECT * FROM foos WHERE (foos.object_ids[0] IN (SELECT bars.object_ids[0] FROM bars WHERE ((id = 1) AND (bars.object_ids[0] IS NOT NULL)))) LIMIT 1"] @Foo.first(:bar=>@Bar.where(:id=>@bar.id)).should == @foo @db.sqls.should == ["SELECT * FROM foos WHERE (foos.object_ids[0] IN (SELECT bars.object_ids[0] FROM bars WHERE ((id = 1) AND (bars.object_ids[0] IS NOT NULL)))) LIMIT 1"] @Foo.first(:mtmbars=>@Bar.where(:id=>@bar.id)).should == @foo @db.sqls.should == ["SELECT * FROM foos WHERE (foos.object_ids[0] IN (SELECT bars_foos.foo_ids[0] FROM bars_foos WHERE ((bars_foos.bar_ids[0] IN (SELECT bars.object_ids[0] FROM bars WHERE ((id = 1) AND (bars.object_ids[0] IS NOT NULL)))) AND (bars_foos.foo_ids[0] IS NOT NULL)))) LIMIT 1"] @Bar.first(:mtmfoos=>@Foo.where(:id=>@foo.id)).should == @bar @db.sqls.should == ["SELECT * FROM bars WHERE (bars.object_ids[0] IN (SELECT bars_foos.bar_ids[0] FROM bars_foos WHERE ((bars_foos.foo_ids[0] IN (SELECT foos.object_ids[0] FROM foos WHERE ((id = 1) AND (foos.object_ids[0] IS NOT NULL)))) AND (bars_foos.bar_ids[0] IS NOT NULL)))) LIMIT 1"] end end describe Sequel::Model, "#refresh" do before do @c = Class.new(Sequel::Model(:items)) do unrestrict_primary_key columns :id, :x end DB.reset end specify "should remove cached associations" do @c.many_to_one :node, :class=>@c @m = @c.new(:id => 555) @m.associations[:node] = 15 @m.reload @m.associations.should == {} end end describe "Model#freeze" do before do class ::Album < Sequel::Model columns :id class B < Sequel::Model columns :id, :album_id many_to_one :album, :class=>Album end one_to_one :b, :key=>:album_id, :class=>B end @o = Album.load(:id=>1).freeze DB.sqls end after do Object.send(:remove_const, :Album) end it "should freeze the 
object's associations" do @o.associations.frozen?.should be_true end it "should not break associations getters" do Album::B.dataset._fetch = {:album_id=>1, :id=>2} @o.b.should == Album::B.load(:id=>2, :album_id=>1) @o.associations[:b].should be_nil end it "should not break reciprocal associations" do b = Album::B.load(:id=>2, :album_id=>nil) b.album = @o @o.associations[:b].should be_nil end end describe "association autoreloading" do before do @c = Class.new(Sequel::Model) @Artist = Class.new(@c).set_dataset(:artists) @Artist.dataset._fetch = {:id=>2, :name=>'Ar'} @Album = Class.new(@c).set_dataset(:albums) @Artist.columns :id, :name @Album.columns :id, :name, :artist_id @Album.db_schema[:artist_id][:type] = :integer @Album.many_to_one :artist, :class=>@Artist DB.reset end specify "should reload many_to_one association when foreign key is modified" do album = @Album.load(:id => 1, :name=>'Al', :artist_id=>2) album.artist DB.sqls.should == ['SELECT * FROM artists WHERE id = 2'] album.artist_id = 1 album.artist DB.sqls.should == ['SELECT * FROM artists WHERE id = 1'] end specify "should handle multiple many_to_one association with the same foreign key" do @Album.many_to_one :artist2, :key=>:artist_id, :class=>@Artist album = @Album.load(:id => 1, :name=>'Al', :artist_id=>2) album.artist album.artist2 DB.sqls.should == ['SELECT * FROM artists WHERE id = 2'] * 2 album.artist album.artist2 DB.sqls.should == [] album.artist_id = 1 album.artist album.artist2 DB.sqls.should == ['SELECT * FROM artists WHERE id = 1'] * 2 end specify "should not reload when value has not changed" do album = @Album.load(:id => 1, :name=>'Al', :artist_id=>2) album.artist DB.sqls.should == ['SELECT * FROM artists WHERE id = 2'] album.artist_id = 2 album.artist DB.sqls.should == [] album.artist_id = "2" album.artist DB.sqls.should == [] end specify "should reload all associations which use the foreign key" do @Album.many_to_one :other_artist, :key => :artist_id, :foreign_key => :id, :class => @Artist album = @Album.load(:id => 1, :name=>'Al', :artist_id=>2) album.artist album.other_artist DB.reset album.artist_id = 1 album.artist DB.sqls.should == ['SELECT * FROM artists WHERE id = 1'] album.other_artist DB.sqls.should == ['SELECT * FROM artists WHERE id = 1'] end specify "should work with composite keys" do @Album.many_to_one :composite_artist, :key => [:artist_id, :name], :primary_key => [:id, :name], :class => @Artist album = @Album.load(:id => 1, :name=>'Al', :artist_id=>2) album.composite_artist DB.reset album.artist_id = 1 album.composite_artist DB.sqls.should == ["SELECT * FROM artists WHERE ((artists.id = 1) AND (artists.name = 'Al')) LIMIT 1"] album.name = 'Al2' album.composite_artist DB.sqls.should == ["SELECT * FROM artists WHERE ((artists.id = 1) AND (artists.name = 'Al2')) LIMIT 1"] end specify "should work with subclasses" do salbum = Class.new(@Album) oartist = Class.new(@c).set_dataset(:oartist) oartist.columns :id, :name salbum.many_to_one :artist2, :class=>oartist, :key=>:artist_id album = salbum.load(:id => 1, :name=>'Al', :artist_id=>2) album.artist DB.sqls.should == ['SELECT * FROM artists WHERE id = 2'] album.artist_id = 1 album.artist DB.sqls.should == ['SELECT * FROM artists WHERE id = 1'] end end 
ruby-sequel-4.1.1/spec/model/base_spec.rb0000664000000000000000000005366312201565355002031 60ustar00rootroot00000000000000
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "Model attribute setters" do before do @c = Class.new(Sequel::Model(:items)) do columns :id, :x, :y, :"x y" end @o = @c.new DB.reset end specify "refresh should return self" do @o = @c[1] @o.stub(:_refresh).and_return([]) @o.refresh.should == @o end it "should mark the column value as changed" do @o.changed_columns.should == [] @o.x = 2 @o.changed_columns.should == [:x] @o.y = 3 @o.changed_columns.should == [:x, :y] @o.changed_columns.clear @o[:x] = 2 @o.changed_columns.should == [:x] @o[:y] = 3 @o.changed_columns.should == [:x, :y] end it "should handle columns that can't be called like normal ruby methods" do @o.send(:"x y=", 3) @o.changed_columns.should == [:"x y"] @o.values.should == {:"x y"=>3} @o.send(:"x y").should == 3 end end describe "Model.def_column_alias" do before do @o = Class.new(Sequel::Model(:items)) do columns :id def_column_alias(:id2, :id) end.load(:id=>1) DB.reset end it "should create a getter alias for the column" do @o.id2.should == 1 end it "should create a setter alias for the column" do @o.id2 = 2 @o.id2.should == 2 @o.values.should == {:id => 2} end end describe Sequel::Model, "dataset" do before do @a = Class.new(Sequel::Model(:items)) @b = Class.new(Sequel::Model) class ::Elephant < Sequel::Model(:ele1); end class ::Maggot < Sequel::Model; end class ::ShoeSize < Sequel::Model; end class ::BootSize < ShoeSize; end end after do [:Elephant, :Maggot, :ShoeSize, :BootSize].each{|x| Object.send(:remove_const, x)} end specify "should default to the plural of the class name" do Maggot.dataset.sql.should == 'SELECT * FROM maggots' ShoeSize.dataset.sql.should == 'SELECT * FROM shoe_sizes' end specify "should return the dataset for the superclass if available" do BootSize.dataset.sql.should == 'SELECT * FROM shoe_sizes' end specify "should return the correct dataset if set explicitly" do Elephant.dataset.sql.should == 'SELECT * FROM ele1' @a.dataset.sql.should == 'SELECT * FROM items' end specify "should raise if no dataset is explicitly set and the class is anonymous" do proc {@b.dataset}.should raise_error(Sequel::Error) end specify "should disregard namespaces for the table name" do begin module ::BlahBlah class MwaHaHa < Sequel::Model end end BlahBlah::MwaHaHa.dataset.sql.should == 'SELECT * FROM mwa_ha_has' ensure Object.send(:remove_const, :BlahBlah) end end end describe Sequel::Model, ".def_dataset_method" do before do @c = Class.new(Sequel::Model(:items)) end it "should add a method to the dataset and model if called with a block argument" do @c.def_dataset_method(:return_3){3} @c.return_3.should == 3 @c.dataset.return_3.should == 3 end it "should handle weird method names" do
@c.def_dataset_method(:"return 3"){3} @c.send(:"return 3").should == 3 @c.dataset.send(:"return 3").should == 3 end it "should not add a model method if the model already responds to the method" do @c.instance_eval do def foo 1 end private def bar 2 end def_dataset_method(:foo){3} def_dataset_method(:bar){4} end @c.foo.should == 1 @c.dataset.foo.should == 3 @c.send(:bar).should == 2 @c.dataset.bar.should == 4 end it "should add all passed methods to the model if called without a block argument" do @c.def_dataset_method(:return_3, :return_4) proc{@c.return_3}.should raise_error(NoMethodError) proc{@c.return_4}.should raise_error(NoMethodError) @c.dataset.instance_eval do def return_3; 3; end def return_4; 4; end end @c.return_3.should == 3 @c.return_4.should == 4 end it "should cache calls and readd methods if set_dataset is used" do @c.def_dataset_method(:return_3){3} @c.set_dataset :items @c.return_3.should == 3 @c.dataset.return_3.should == 3 end it "should readd methods to subclasses, if set_dataset is used in a subclass" do @c.def_dataset_method(:return_3){3} c = Class.new(@c) c.set_dataset :items c.return_3.should == 3 c.dataset.return_3.should == 3 end end describe Sequel::Model, ".dataset_module" do before do @c = Class.new(Sequel::Model(:items)) end it "should extend the dataset with the module if the model has a dataset" do @c.dataset_module{def return_3() 3 end} @c.dataset.return_3.should == 3 end it "should also extend the instance_dataset with the module if the model has a dataset" do @c.dataset_module{def return_3() 3 end} @c.instance_dataset.return_3.should == 3 end it "should add methods defined in the module to the class" do @c.dataset_module{def return_3() 3 end} @c.return_3.should == 3 end it "should add methods defined in the module outside the block to the class" do @c.dataset_module.module_eval{def return_3() 3 end} @c.return_3.should == 3 end it "should cache calls and readd methods if set_dataset is used" do @c.dataset_module{def return_3() 3 end} @c.set_dataset :items @c.return_3.should == 3 @c.dataset.return_3.should == 3 end it "should readd methods to subclasses, if set_dataset is used in a subclass" do @c.dataset_module{def return_3() 3 end} c = Class.new(@c) c.set_dataset :items c.return_3.should == 3 c.dataset.return_3.should == 3 end it "should only have a single dataset_module per class" do @c.dataset_module{def return_3() 3 end} @c.dataset_module{def return_3() 3 + (begin; super; rescue NoMethodError; 1; end) end} @c.return_3.should == 4 end it "should not have subclasses share the dataset_module" do @c.dataset_module{def return_3() 3 end} c = Class.new(@c) c.dataset_module{def return_3() 3 + (begin; super; rescue NoMethodError; 1; end) end} c.return_3.should == 6 end it "should accept a module object and extend the dataset with it" do @c.dataset_module Module.new{def return_3() 3 end} @c.dataset.return_3.should == 3 end it "should be able to call dataset_module with a module multiple times" do @c.dataset_module Module.new{def return_3() 3 end} @c.dataset_module Module.new{def return_4() 4 end} @c.dataset.return_3.should == 3 @c.dataset.return_4.should == 4 end it "should be able mix dataset_module calls with and without arguments" do @c.dataset_module{def return_3() 3 end} @c.dataset_module Module.new{def return_4() 4 end} @c.dataset.return_3.should == 3 @c.dataset.return_4.should == 4 end it "should have modules provided to dataset_module extend subclass datasets" do @c.dataset_module{def return_3() 3 end} @c.dataset_module Module.new{def return_4() 4 
end} c = Class.new(@c) c.set_dataset :a c.dataset.return_3.should == 3 c.dataset.return_4.should == 4 end it "should return the dataset module if given a block" do Object.new.extend(@c.dataset_module{def return_3() 3 end}).return_3.should == 3 end it "should return the argument if given one" do Object.new.extend(@c.dataset_module Module.new{def return_3() 3 end}).return_3.should == 3 end it "should have dataset_module support a subset method" do @c.dataset_module{subset :released, :released} @c.released.sql.should == 'SELECT * FROM items WHERE released' @c.where(:foo).released.sql.should == 'SELECT * FROM items WHERE (foo AND released)' end it "should raise error if called with both an argument and a block" do proc{@c.dataset_module(Module.new{def return_3() 3 end}){}}.should raise_error(Sequel::Error) end end describe "A model class with implicit table name" do before do class ::Donkey < Sequel::Model end end after do Object.send(:remove_const, :Donkey) end specify "should have a dataset associated with the model class" do Donkey.dataset.model.should == Donkey end end describe "A model inheriting from a model" do before do class ::Feline < Sequel::Model; end class ::Leopard < Feline; end end after do Object.send(:remove_const, :Leopard) Object.send(:remove_const, :Feline) end specify "should have a dataset associated with itself" do Feline.dataset.model.should == Feline Leopard.dataset.model.should == Leopard end end describe "Model.primary_key" do before do @c = Class.new(Sequel::Model) end specify "should default to id" do @c.primary_key.should == :id end specify "should be overridden by set_primary_key" do @c.set_primary_key :cid @c.primary_key.should == :cid @c.set_primary_key([:id1, :id2]) @c.primary_key.should == [:id1, :id2] end specify "should use nil for no primary key" do @c.no_primary_key @c.primary_key.should == nil end end describe "Model.primary_key_hash" do before do @c = Class.new(Sequel::Model) end specify "should handle a single primary key" do @c.primary_key_hash(1).should == {:id=>1} end specify "should handle a composite primary key" do @c.set_primary_key([:id1, :id2]) @c.primary_key_hash([1, 2]).should == {:id1=>1, :id2=>2} end specify "should raise an error for no primary key" do @c.no_primary_key proc{@c.primary_key_hash(1)}.should raise_error(Sequel::Error) end end describe "Model.qualified_primary_key_hash" do before do @c = Class.new(Sequel::Model(:items)) end specify "should handle a single primary key" do @c.qualified_primary_key_hash(1).should == {Sequel.qualify(:items, :id)=>1} end specify "should handle a composite primary key" do @c.set_primary_key([:id1, :id2]) @c.qualified_primary_key_hash([1, 2]).should == {Sequel.qualify(:items, :id1)=>1, Sequel.qualify(:items, :id2)=>2} end specify "should raise an error for no primary key" do @c.no_primary_key proc{@c.qualified_primary_key_hash(1)}.should raise_error(Sequel::Error) end specify "should allow specifying a different qualifier" do @c.qualified_primary_key_hash(1, :apple).should == {Sequel.qualify(:apple, :id)=>1} @c.set_primary_key([:id1, :id2]) @c.qualified_primary_key_hash([1, 2], :bear).should == {Sequel.qualify(:bear, :id1)=>1, Sequel.qualify(:bear, :id2)=>2} end end describe "Model.db" do before do @db = Sequel.mock @databases = Sequel::DATABASES.dup @model_db = Sequel::Model.db Sequel::Model.db = nil Sequel::DATABASES.clear end after do Sequel::Model.instance_variable_get(:@db).should == nil Sequel::DATABASES.replace(@databases) Sequel::Model.db = @model_db end specify "should be required when
creating named model classes" do begin proc{class ModelTest < Sequel::Model; end}.should raise_error(Sequel::Error) ensure Object.send(:remove_const, :ModelTest) end end specify "should be required when creating anonymous model classes without a database" do proc{Sequel::Model(:foo)}.should raise_error(Sequel::Error) end specify "should not be required when creating anonymous model classes with a database" do Sequel::Model(@db).db.should == @db Sequel::Model(@db[:foo]).db.should == @db end specify "should work correctly when subclassing anonymous model classes with a database" do begin Class.new(Sequel::Model(@db)).db.should == @db Class.new(Sequel::Model(@db[:foo])).db.should == @db class ModelTest < Sequel::Model(@db) db.should == @db end class ModelTest2 < Sequel::Model(@db[:foo]) db.should == @db end ModelTest.instance_variable_set(:@db, nil) ModelTest.db.should == @db ensure Object.send(:remove_const, :ModelTest) Object.send(:remove_const, :ModelTest2) end end end describe "Model.db=" do before do @db1 = Sequel.mock @db2 = Sequel.mock @m = Class.new(Sequel::Model(@db1[:blue].filter(:x=>1))) end specify "should affect the underlying dataset" do @m.db = @db2 @m.dataset.db.should === @db2 @m.dataset.db.should_not === @db1 end specify "should keep the same dataset options" do @m.db = @db2 @m.dataset.sql.should == 'SELECT * FROM blue WHERE (x = 1)' end specify "should use the database for subclasses" do @m.db = @db2 Class.new(@m).db.should === @db2 end end describe Sequel::Model, ".(allowed|restricted)_columns " do before do @c = Class.new(Sequel::Model(:blahblah)) do columns :x, :y, :z end @c.strict_param_setting = false @c.instance_variable_set(:@columns, [:x, :y, :z]) end it "should set the allowed columns correctly" do @c.allowed_columns.should == nil @c.set_allowed_columns :x @c.allowed_columns.should == [:x] @c.set_allowed_columns :x, :y @c.allowed_columns.should == [:x, :y] end it "should only set allowed columns by default" do @c.set_allowed_columns :x, :y i = @c.new(:x => 1, :y => 2, :z => 3) i.values.should == {:x => 1, :y => 2} i.set(:x => 4, :y => 5, :z => 6) i.values.should == {:x => 4, :y => 5} @c.instance_dataset._fetch = @c.dataset._fetch = {:x => 7} i = @c.new i.update(:x => 7, :z => 9) i.values.should == {:x => 7} DB.sqls.should == ["INSERT INTO blahblah (x) VALUES (7)", "SELECT * FROM blahblah WHERE (id = 10) LIMIT 1"] end end describe Sequel::Model, ".(un)?restrict_primary_key\\??" do before do @c = Class.new(Sequel::Model(:blahblah)) do set_primary_key :id columns :x, :y, :z, :id end @c.strict_param_setting = false end it "should restrict updates to primary key by default" do i = @c.new(:x => 1, :y => 2, :id => 3) i.values.should == {:x => 1, :y => 2} i.set(:x => 4, :y => 5, :id => 6) i.values.should == {:x => 4, :y => 5} end it "should allow updates to primary key if unrestrict_primary_key is used" do @c.unrestrict_primary_key i = @c.new(:x => 1, :y => 2, :id => 3) i.values.should == {:x => 1, :y => 2, :id=>3} i.set(:x => 4, :y => 5, :id => 6) i.values.should == {:x => 4, :y => 5, :id=>6} end it "should have restrict_primary_key? 
return true or false depending" do @c.restrict_primary_key?.should == true @c.unrestrict_primary_key @c.restrict_primary_key?.should == false c1 = Class.new(@c) c1.restrict_primary_key?.should == false @c.restrict_primary_key @c.restrict_primary_key?.should == true c1.restrict_primary_key?.should == false c2 = Class.new(@c) c2.restrict_primary_key?.should == true end end describe Sequel::Model, ".strict_param_setting" do before do @c = Class.new(Sequel::Model(:blahblah)) do columns :x, :y, :z, :id set_allowed_columns :x, :y end end it "should be enabled by default" do @c.strict_param_setting.should == true end it "should raise an error if a missing/restricted column/method is accessed" do proc{@c.new(:z=>1)}.should raise_error(Sequel::Error) proc{@c.create(:z=>1)}.should raise_error(Sequel::Error) c = @c.new proc{c.set(:z=>1)}.should raise_error(Sequel::Error) proc{c.set_all(:use_after_commit_rollback => false)}.should raise_error(Sequel::Error) proc{c.set_only({:x=>1}, :y)}.should raise_error(Sequel::Error) proc{c.update(:z=>1)}.should raise_error(Sequel::Error) proc{c.update_all(:use_after_commit_rollback=>false)}.should raise_error(Sequel::Error) proc{c.update_only({:x=>1}, :y)}.should raise_error(Sequel::Error) end it "should be disabled by strict_param_setting = false" do @c.strict_param_setting = false @c.strict_param_setting.should == false proc{@c.new(:z=>1)}.should_not raise_error end end describe Sequel::Model, ".require_modification" do before do @ds1 = DB[:items] def @ds1.provides_accurate_rows_matched?() false end @ds2 = DB[:items] def @ds2.provides_accurate_rows_matched?() true end end after do Sequel::Model.require_modification = nil end it "should depend on whether the dataset provides an accurate number of rows matched by default" do Class.new(Sequel::Model).set_dataset(@ds1).require_modification.should == false Class.new(Sequel::Model).set_dataset(@ds2).require_modification.should == true end it "should obey global setting regardless of dataset support if set" do Sequel::Model.require_modification = true Class.new(Sequel::Model).set_dataset(@ds1).require_modification.should == true Class.new(Sequel::Model).set_dataset(@ds2).require_modification.should == true Sequel::Model.require_modification = false Class.new(Sequel::Model).set_dataset(@ds1).require_modification.should == false Class.new(Sequel::Model).set_dataset(@ds1).require_modification.should == false end end describe Sequel::Model, ".[] optimization" do before do @db = DB.clone @db.quote_identifiers = true @c = Class.new(Sequel::Model(@db)) end it "should set simple_pk to the literalized primary key column name if a single primary key" do @c.set_primary_key :id @c.simple_pk.should == '"id"' @c.set_primary_key :b @c.simple_pk.should == '"b"' @c.set_primary_key Sequel.identifier(:b__a) @c.simple_pk.should == '"b__a"' end it "should have simple_pk be blank if compound or no primary key" do @c.no_primary_key @c.simple_pk.should == nil @c.set_primary_key [:b, :a] @c.simple_pk.should == nil end it "should have simple table set if passed a Symbol to set_dataset" do @c.set_dataset :a @c.simple_table.should == '"a"' @c.set_dataset :b @c.simple_table.should == '"b"' @c.set_dataset :b__a @c.simple_table.should == '"b"."a"' end it "should have simple_table set if passed a simple select all dataset to set_dataset" do @c.set_dataset @db[:a] @c.simple_table.should == '"a"' @c.set_dataset @db[:b] @c.simple_table.should == '"b"' @c.set_dataset @db[:b__a] @c.simple_table.should == '"b"."a"' end it "should have simple_pk and 
simple_table respect dataset's identifier input methods" do ds = @db[:ab] ds.identifier_input_method = :reverse @c.set_dataset ds @c.simple_table.should == '"ba"' @c.set_primary_key :cd @c.simple_pk.should == '"dc"' @c.set_dataset ds.from(:ef__gh) @c.simple_table.should == '"fe"."hg"' end it "should have simple_table = nil if passed a non-simple select all dataset to set_dataset" do @c.set_dataset @c.db[:a].filter(:active) @c.simple_table.should == nil end it "should have simple_table inherit superclass's setting" do Class.new(@c).simple_table.should == nil @c.set_dataset :a Class.new(@c).simple_table.should == '"a"' end it "should use Dataset#with_sql if simple_table and simple_pk are true" do @c.set_dataset :a @c.instance_dataset._fetch = @c.dataset._fetch = {:id => 1} @c[1].should == @c.load(:id=>1) @db.sqls.should == ['SELECT * FROM "a" WHERE "id" = 1'] end it "should not use Dataset#with_sql if either simple_table or simple_pk is nil" do @c.set_dataset @db[:a].filter(:active) @c.dataset._fetch = {:id => 1} @c[1].should == @c.load(:id=>1) @db.sqls.should == ['SELECT * FROM "a" WHERE ("active" AND ("id" = 1)) LIMIT 1'] end end describe "Model datasets #with_pk with #with_pk!" do before do @c = Class.new(Sequel::Model(:a)) @ds = @c.dataset @ds._fetch = {:id=>1} DB.reset end it "should be callable on the model class with optimized SQL" do @c.with_pk(1).should == @c.load(:id=>1) DB.sqls.should == ["SELECT * FROM a WHERE id = 1"] @c.with_pk!(1).should == @c.load(:id=>1) DB.sqls.should == ["SELECT * FROM a WHERE id = 1"] end it "should return the first record where the primary key matches" do @ds.with_pk(1).should == @c.load(:id=>1) DB.sqls.should == ["SELECT * FROM a WHERE (a.id = 1) LIMIT 1"] @ds.with_pk!(1).should == @c.load(:id=>1) DB.sqls.should == ["SELECT * FROM a WHERE (a.id = 1) LIMIT 1"] end it "should handle existing filters" do @ds.filter(:a=>2).with_pk(1) DB.sqls.should == ["SELECT * FROM a WHERE ((a = 2) AND (a.id = 1)) LIMIT 1"] @ds.filter(:a=>2).with_pk!(1) DB.sqls.should == ["SELECT * FROM a WHERE ((a = 2) AND (a.id = 1)) LIMIT 1"] end it "should work with string values" do @ds.with_pk("foo") DB.sqls.should == ["SELECT * FROM a WHERE (a.id = 'foo') LIMIT 1"] @ds.with_pk!("foo") DB.sqls.should == ["SELECT * FROM a WHERE (a.id = 'foo') LIMIT 1"] end it "should handle an array for composite primary keys" do @c.set_primary_key [:id1, :id2] @ds.with_pk([1, 2]) sqls = DB.sqls ["SELECT * FROM a WHERE ((a.id1 = 1) AND (a.id2 = 2)) LIMIT 1", "SELECT * FROM a WHERE ((a.id2 = 2) AND (a.id1 = 1)) LIMIT 1"].should include(sqls.pop) sqls.should == [] @ds.with_pk!([1, 2]) sqls = DB.sqls ["SELECT * FROM a WHERE ((a.id1 = 1) AND (a.id2 = 2)) LIMIT 1", "SELECT * FROM a WHERE ((a.id2 = 2) AND (a.id1 = 1)) LIMIT 1"].should include(sqls.pop) sqls.should == [] end it "should have with_pk return nil and with_pk! raise if no rows match" do @ds._fetch = [] @ds.with_pk(1).should == nil DB.sqls.should == ["SELECT * FROM a WHERE (a.id = 1) LIMIT 1"] proc{@ds.with_pk!(1)}.should raise_error(Sequel::NoMatchingRow) DB.sqls.should == ["SELECT * FROM a WHERE (a.id = 1) LIMIT 1"] end it "should have with_pk return nil and with_pk! 
raise if no rows match when calling the class method" do @ds._fetch = [] @c.with_pk(1).should == nil DB.sqls.should == ["SELECT * FROM a WHERE id = 1"] proc{@c.with_pk!(1)}.should raise_error(Sequel::NoMatchingRow) DB.sqls.should == ["SELECT * FROM a WHERE id = 1"] end it "should have #[] consider an integer as a primary key lookup" do @ds[1].should == @c.load(:id=>1) DB.sqls.should == ["SELECT * FROM a WHERE (a.id = 1) LIMIT 1"] end it "should not have #[] consider a string as a primary key lookup" do @ds['foo'].should == @c.load(:id=>1) DB.sqls.should == ["SELECT * FROM a WHERE (foo) LIMIT 1"] end end
ruby-sequel-4.1.1/spec/model/class_dataset_methods_spec.rb0000664000000000000000000001762012201565355002373 20ustar00rootroot00000000000000
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe Sequel::Model, "class dataset methods" do before do @db = Sequel.mock @c = Class.new(Sequel::Model(@db[:items])) @d = @c.dataset @d._fetch = {:id=>1} @d.autoid = 1 @d.numrows = 0 @db.sqls end it "should call the dataset method of the same name with the same args" do @c.<<({}).should == @d @db.sqls.should == ["INSERT INTO items DEFAULT VALUES"] @c.all.should == [@c.load(:id=>1)] @db.sqls.should == ["SELECT * FROM items"] @c.avg(:id).should == 1 @db.sqls.should == ["SELECT avg(id) AS avg FROM items LIMIT 1"] @c.count.should == 1 @db.sqls.should == ["SELECT count(*) AS count FROM items LIMIT 1"] @c.cross_join(@c).sql.should == "SELECT * FROM items CROSS JOIN items" @c.distinct.sql.should == "SELECT DISTINCT * FROM items" @c.each{|r| r.should == @c.load(:id=>1)}.should == @d @db.sqls.should == ["SELECT * FROM items"] @c.each_server{|r| r.opts[:server].should == :default} @c.empty?.should be_false @db.sqls.should == ["SELECT 1 AS one FROM items LIMIT 1"] @c.except(@d, :from_self=>false).sql.should == "SELECT * FROM items EXCEPT SELECT * FROM items" @c.exclude(:a).sql.should == "SELECT * FROM items WHERE NOT a" @c.exclude_having(:a).sql.should == "SELECT * FROM items HAVING NOT a" @c.exclude_where(:a).sql.should == "SELECT * FROM items WHERE NOT a" @c.fetch_rows("S"){|r| r.should == {:id=>1}} @db.sqls.should == ["S"] @c.filter(:a).sql.should == "SELECT * FROM items WHERE a" @c.first.should == @c.load(:id=>1) @db.sqls.should == ["SELECT * FROM items LIMIT 1"] @c.first!.should == @c.load(:id=>1) @db.sqls.should == ["SELECT * FROM items LIMIT 1"] @c.for_update.sql.should == "SELECT * FROM items FOR UPDATE" @c.from.sql.should == "SELECT *" @c.from_self.sql.should == "SELECT * FROM (SELECT * FROM items) AS t1" @c.full_join(@c).sql.should == "SELECT * FROM items FULL JOIN items" @c.full_outer_join(@c).sql.should == "SELECT * FROM items FULL OUTER JOIN items" @c.get(:a).should == 1 @db.sqls.should == ["SELECT a FROM items LIMIT 1"] @c.graph(@c, nil, :table_alias=>:a).sql.should == "SELECT * FROM items LEFT OUTER JOIN items AS a" @db.sqls @c.grep(:id, 'a%').sql.should == "SELECT * FROM items WHERE ((id LIKE 'a%' ESCAPE '\\'))" @c.group(:a).sql.should == "SELECT * FROM items GROUP BY a" @c.group_and_count(:a).sql.should == "SELECT a, count(*) AS
count FROM items GROUP BY a" @c.group_by(:a).sql.should == "SELECT * FROM items GROUP BY a" @c.having(:a).sql.should == "SELECT * FROM items HAVING a" @c.import([:id], [[1]]) @db.sqls.should == ["BEGIN", "INSERT INTO items (id) VALUES (1)", "COMMIT"] @c.inner_join(@c).sql.should == "SELECT * FROM items INNER JOIN items" @c.insert.should == 2 @db.sqls.should == ["INSERT INTO items DEFAULT VALUES"] @c.intersect(@d, :from_self=>false).sql.should == "SELECT * FROM items INTERSECT SELECT * FROM items" @c.interval(:id).should == 1 @db.sqls.should == ["SELECT (max(id) - min(id)) AS interval FROM items LIMIT 1"] @c.join(@c).sql.should == "SELECT * FROM items INNER JOIN items" @c.join_table(:inner, @c).sql.should == "SELECT * FROM items INNER JOIN items" @c.last.should == @c.load(:id=>1) @db.sqls.should == ["SELECT * FROM items ORDER BY id DESC LIMIT 1"] @c.left_join(@c).sql.should == "SELECT * FROM items LEFT JOIN items" @c.left_outer_join(@c).sql.should == "SELECT * FROM items LEFT OUTER JOIN items" @c.limit(2).sql.should == "SELECT * FROM items LIMIT 2" @c.lock_style(:update).sql.should == "SELECT * FROM items FOR UPDATE" @c.map(:id).should == [1] @db.sqls.should == ["SELECT * FROM items"] @c.max(:id).should == 1 @db.sqls.should == ["SELECT max(id) AS max FROM items LIMIT 1"] @c.min(:id).should == 1 @db.sqls.should == ["SELECT min(id) AS min FROM items LIMIT 1"] @c.multi_insert([{:id=>1}]) @db.sqls.should == ["BEGIN", "INSERT INTO items (id) VALUES (1)", "COMMIT"] @c.naked.row_proc.should == nil @c.natural_full_join(@c).sql.should == "SELECT * FROM items NATURAL FULL JOIN items" @c.natural_join(@c).sql.should == "SELECT * FROM items NATURAL JOIN items" @c.natural_left_join(@c).sql.should == "SELECT * FROM items NATURAL LEFT JOIN items" @c.natural_right_join(@c).sql.should == "SELECT * FROM items NATURAL RIGHT JOIN items" @c.order(:a).sql.should == "SELECT * FROM items ORDER BY a" @c.order_append(:a).sql.should == "SELECT * FROM items ORDER BY a" @c.order_by(:a).sql.should == "SELECT * FROM items ORDER BY a" @c.order_more(:a).sql.should == "SELECT * FROM items ORDER BY a" @c.order_prepend(:a).sql.should == "SELECT * FROM items ORDER BY a" @c.paged_each{|r| r.should == @c.load(:id=>1)} @db.sqls.should == ["BEGIN", "SELECT * FROM items ORDER BY id LIMIT 1000 OFFSET 0", "COMMIT"] @c.qualify.sql.should == 'SELECT items.* FROM items' @c.right_join(@c).sql.should == "SELECT * FROM items RIGHT JOIN items" @c.right_outer_join(@c).sql.should == "SELECT * FROM items RIGHT OUTER JOIN items" @c.select(:a).sql.should == "SELECT a FROM items" @c.select_all(:items).sql.should == "SELECT items.* FROM items" @c.select_append(:a).sql.should == "SELECT *, a FROM items" @c.select_group(:a).sql.should == "SELECT a FROM items GROUP BY a" @c.select_hash(:id, :id).should == {1=>1} @db.sqls.should == ["SELECT id, id FROM items"] @c.select_hash_groups(:id, :id).should == {1=>[1]} @db.sqls.should == ["SELECT id, id FROM items"] @c.select_map(:id).should == [1] @db.sqls.should == ["SELECT id FROM items"] @c.select_order_map(:id).should == [1] @db.sqls.should == ["SELECT id FROM items ORDER BY id"] @c.server(:a).opts[:server].should == :a @c.set_graph_aliases(:a=>:b).opts[:graph_aliases].should == {:a=>[:b, :a]} @c.single_record.should == @c.load(:id=>1) @db.sqls.should == ["SELECT * FROM items LIMIT 1"] @c.single_value.should == 1 @db.sqls.should == ["SELECT * FROM items LIMIT 1"] @c.sum(:id).should == 1 @db.sqls.should == ["SELECT sum(id) AS sum FROM items LIMIT 1"] @c.to_hash(:id, :id).should == {1=>1} @db.sqls.should == 
["SELECT * FROM items"] @c.to_hash_groups(:id, :id).should == {1=>[1]} @db.sqls.should == ["SELECT * FROM items"] @c.truncate @db.sqls.should == ["TRUNCATE TABLE items"] @c.union(@d, :from_self=>false).sql.should == "SELECT * FROM items UNION SELECT * FROM items" @c.where(:a).sql.should == "SELECT * FROM items WHERE a" @c.with(:a, @d).sql.should == "WITH a AS (SELECT * FROM items) SELECT * FROM items" @c.with_recursive(:a, @d, @d).sql.should == "WITH a AS (SELECT * FROM items UNION ALL SELECT * FROM items) SELECT * FROM items" @c.with_sql('S').sql.should == "S" sc = Class.new(@c) sc.set_dataset(@d.where(:a).order(:a).select(:a).group(:a).limit(2)) @db.sqls sc.invert.sql.should == 'SELECT a FROM items WHERE NOT a GROUP BY a ORDER BY a LIMIT 2' sc.dataset._fetch = {:v1=>1, :v2=>2} sc.range(:a).should == (1..2) @db.sqls.should == ["SELECT min(a) AS v1, max(a) AS v2 FROM (SELECT a FROM items WHERE a GROUP BY a ORDER BY a LIMIT 2) AS t1 LIMIT 1"] sc.reverse.sql.should == 'SELECT a FROM items WHERE a GROUP BY a ORDER BY a DESC LIMIT 2' sc.reverse_order.sql.should == 'SELECT a FROM items WHERE a GROUP BY a ORDER BY a DESC LIMIT 2' sc.select_more(:a).sql.should == 'SELECT a, a FROM items WHERE a GROUP BY a ORDER BY a LIMIT 2' sc.unfiltered.sql.should == 'SELECT a FROM items GROUP BY a ORDER BY a LIMIT 2' sc.ungrouped.sql.should == 'SELECT a FROM items WHERE a ORDER BY a LIMIT 2' sc.unordered.sql.should == 'SELECT a FROM items WHERE a GROUP BY a LIMIT 2' sc.unlimited.sql.should == 'SELECT a FROM items WHERE a GROUP BY a ORDER BY a' sc.dataset.graph!(:a) sc.dataset.ungraphed.opts[:graph].should == nil end end ����������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/spec/model/dataset_methods_spec.rb������������������������������������������������0000664�0000000�0000000�00000012624�12201565355�0022544�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe Sequel::Model::DatasetMethods, "#destroy" do before do @c = Class.new(Sequel::Model(:items)) do self::Destroyed = [] def destroy model::Destroyed << self end end @d = @c.dataset @d._fetch = [{:id=>1}, {:id=>2}] DB.reset end it "should instantiate objects in the dataset and call destroy on each" do @d.destroy @c::Destroyed.collect{|x| x.values}.should == [{:id=>1}, {:id=>2}] end it "should return the number of records destroyed" do @d.destroy.should == 2 @d._fetch = [[{:i=>1}], []] @d.destroy.should == 1 @d.destroy.should == 0 end it "should use a transaction if use_transactions is true for the model" do @c.use_transactions = true @d.destroy DB.sqls.should == ["BEGIN", "SELECT * FROM items", "COMMIT"] end it "should not use a transaction if use_transactions is false for the model" do @c.use_transactions = false @d.destroy DB.sqls.should == ["SELECT * FROM items"] end end describe Sequel::Model::DatasetMethods, "#to_hash" do before do @c = Class.new(Sequel::Model(:items)) do set_primary_key :name end @d = @c.dataset end it "should result in a hash with primary key value keys and model object values" do @d._fetch = [{:name=>1}, {:name=>2}] h = @d.to_hash h.should be_a_kind_of(Hash) a = h.to_a 
a.collect{|x| x[1].class}.should == [@c, @c] a.sort_by{|x| x[0]}.collect{|x| [x[0], x[1].values]}.should == [[1, {:name=>1}], [2, {:name=>2}]] end it "should result in a hash with given value keys and model object values" do @d._fetch = [{:name=>1, :number=>3}, {:name=>2, :number=>4}] h = @d.to_hash(:number) h.should be_a_kind_of(Hash) a = h.to_a a.collect{|x| x[1].class}.should == [@c, @c] a.sort_by{|x| x[0]}.collect{|x| [x[0], x[1].values]}.should == [[3, {:name=>1, :number=>3}], [4, {:name=>2, :number=>4}]] end it "should raise an error if the class doesn't have a primary key" do @c.no_primary_key proc{@d.to_hash}.should raise_error(Sequel::Error) end end describe Sequel::Model::DatasetMethods do before do @c = Class.new(Sequel::Model(:items)) @c.columns :id @c.db.reset end specify "#join_table should allow you to use a model class when joining" do @c.join(Class.new(Sequel::Model(:categories)), :item_id => :id).sql.should == 'SELECT * FROM items INNER JOIN categories ON (categories.item_id = items.id)' end specify "#join_table should handle model classes that aren't simple selects using a subselect" do @c.join(Class.new(Sequel::Model(DB[:categories].where(:foo=>1))), :item_id => :id).sql.should == 'SELECT * FROM items INNER JOIN (SELECT * FROM categories WHERE (foo = 1)) AS t1 ON (t1.item_id = items.id)' end specify "#graph should allow you to use a model class when joining" do c = Class.new(Sequel::Model(:categories)) c.columns :id @c.graph(c, :item_id => :id).sql.should == 'SELECT items.id, categories.id AS categories_id FROM items LEFT OUTER JOIN categories ON (categories.item_id = items.id)' end specify "#insert_sql should handle a single model instance as an argument" do @c.dataset.insert_sql(@c.load(:id=>1)).should == 'INSERT INTO items (id) VALUES (1)' end specify "#first should handle no primary key" do @c.no_primary_key @c.first.should be_a_kind_of(@c) @c.db.sqls.should == ['SELECT * FROM items LIMIT 1'] end specify "#last should reverse order by primary key if not already ordered" do @c.last.should be_a_kind_of(@c) @c.db.sqls.should == ['SELECT * FROM items ORDER BY id DESC LIMIT 1'] @c.where(:id=>2).last(:foo=>2){{bar=>3}}.should be_a_kind_of(@c) @c.db.sqls.should == ['SELECT * FROM items WHERE ((id = 2) AND (bar = 3) AND (foo = 2)) ORDER BY id DESC LIMIT 1'] end specify "#last should use existing order if there is one" do @c.order(:foo).last.should be_a_kind_of(@c) @c.db.sqls.should == ['SELECT * FROM items ORDER BY foo DESC LIMIT 1'] end specify "#last should handle a composite primary key" do @c.set_primary_key [:id1, :id2] @c.last.should be_a_kind_of(@c) @c.db.sqls.should == ['SELECT * FROM items ORDER BY id1 DESC, id2 DESC LIMIT 1'] end specify "#last should raise an error if no primary key" do @c.no_primary_key proc{@c.last}.should raise_error(Sequel::Error) end specify "#paged_each should order by primary key if not already ordered" do @c.paged_each{|r| r.should be_a_kind_of(@c)} @c.db.sqls.should == ['BEGIN', 'SELECT * FROM items ORDER BY id LIMIT 1000 OFFSET 0', 'COMMIT'] @c.paged_each(:rows_per_fetch=>5){|r|} @c.db.sqls.should == ['BEGIN', 'SELECT * FROM items ORDER BY id LIMIT 5 OFFSET 0', 'COMMIT'] end specify "#paged_each should use existing order if there is one" do @c.order(:foo).paged_each{|r| r.should be_a_kind_of(@c)} @c.db.sqls.should == ['BEGIN', 'SELECT * FROM items ORDER BY foo LIMIT 1000 OFFSET 0', 'COMMIT'] end specify "#paged_each should handle a composite primary key" do @c.set_primary_key [:id1, :id2] @c.paged_each{|r| r.should be_a_kind_of(@c)}
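    # Added illustration (not part of the original spec): #paged_each wraps the
    # iteration in a transaction and pages through the dataset ordered by primary
    # key. Hypothetical usage (process is a stand-in for application code):
    #
    #   Item.paged_each(:rows_per_fetch=>100){|row| process(row)}
    #   # BEGIN
    #   # SELECT * FROM items ORDER BY id LIMIT 100 OFFSET 0
    #   # ...additional pages until fewer rows than requested are returned...
    #   # COMMIT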
@c.db.sqls.should == ['BEGIN', 'SELECT * FROM items ORDER BY id1, id2 LIMIT 1000 OFFSET 0', 'COMMIT'] end specify "#paged_each should raise an error if no primary key" do @c.no_primary_key proc{@c.paged_each{|r| }}.should raise_error(Sequel::Error) end end
ruby-sequel-4.1.1/spec/model/eager_loading_spec.rb0000664000000000000000000032520512201565355002215 60ustar00rootroot00000000000000
require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe Sequel::Model, "#eager" do before do class ::EagerAlbum < Sequel::Model(:albums) columns :id, :band_id many_to_one :band, :class=>'EagerBand', :key=>:band_id one_to_many :tracks, :class=>'EagerTrack', :key=>:album_id many_to_many :genres, :class=>'EagerGenre', :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag one_to_many :good_tracks, :class=>'EagerTrack', :reciprocal=>nil, :key=>:album_id do |ds| ds.filter(:name=>'Good') end many_to_one :band_name, :class=>'EagerBand', :key=>:band_id, :select=>[:id, :name] one_to_many :track_names, :class=>'EagerTrack', :key=>:album_id, :select=>[:id, :name] many_to_many :genre_names, :class=>'EagerGenre', :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :select=>[:id] def band_id3 band_id*3 end end class ::EagerBand < Sequel::Model(:bands) columns :id, :p_k one_to_many :albums, :class=>'EagerAlbum', :key=>:band_id, :eager=>:tracks, :reciprocal=>:band one_to_many :graph_albums, :class=>'EagerAlbum', :key=>:band_id, :eager_graph=>:tracks, :reciprocal=>nil many_to_many :members, :class=>'EagerBandMember', :left_key=>:band_id, :right_key=>:member_id, :join_table=>:bm many_to_many :graph_members, :clone=>:members, :eager_graph=>:bands one_to_many :good_albums, :class=>'EagerAlbum', :key=>:band_id, :reciprocal=>nil, :eager_block=>proc{|ds| ds.filter(:name=>'good')} do |ds| ds.filter(:name=>'Good') end one_to_many :self_titled_albums, :class=>'EagerAlbum', :key=>:band_id, :allow_eager=>false do |ds| ds.filter(:name=>name) end one_to_many :albums_by_name, :class=>'EagerAlbum', :key=>:band_id, :order=>:name, :allow_eager=>false one_to_many :top_10_albums, :class=>'EagerAlbum', :key=>:band_id, :limit=>10 def id3 id/3 end end class ::EagerTrack < Sequel::Model(:tracks) columns :id, :album_id many_to_one :album, :class=>'EagerAlbum', :key=>:album_id end class ::EagerGenre < Sequel::Model(:genres) columns :id, :xxx many_to_many :albums, :class=>'EagerAlbum', :left_key=>:genre_id, :right_key=>:album_id, :join_table=>:ag end class ::EagerBandMember < Sequel::Model(:members) columns :id many_to_many :bands, :class=>'EagerBand', :left_key=>:member_id, :right_key=>:band_id, :join_table=>:bm, :order =>:id end EagerAlbum.dataset.columns(:id, :band_id) EagerAlbum.dataset._fetch = proc do |sql| h = if sql =~ /101/ {:id => 101, :band_id=> 101} else {:id => 1, :band_id=> 2} end h[:x_foreign_key_x] = 4 if sql =~ /ag\.genre_id/ h end EagerBand.dataset._fetch = proc do |sql| case sql when /id IN (101)/ # nothing
when /id > 100/ [{:id => 101}, {:id => 102}] else h = {:id => 2} h[:x_foreign_key_x] = 5 if sql =~
/bm\.member_id/ h end end EagerTrack.dataset._fetch = {:id => 3, :album_id => 1} EagerGenre.dataset._fetch = proc do |sql| h = {:id => 4} h[:x_foreign_key_x] = 1 if sql =~ /ag\.album_id/ h end EagerBandMember.dataset._fetch = proc do |sql| h = {:id => 5} h[:x_foreign_key_x] = 2 if sql =~ /bm\.band_id/ h end DB.reset end after do [:EagerAlbum, :EagerBand, :EagerTrack, :EagerGenre, :EagerBandMember].each{|x| Object.send(:remove_const, x)} end it "should populate :key_hash and :id_map option correctly for custom eager loaders" do khs = {} pr = proc{|a, m| proc{|h| khs[a] = h[:key_hash][m]; h[:id_map].should == h[:key_hash][m]}} EagerAlbum.many_to_one :sband, :clone=>:band, :eager_loader=>pr.call(:sband, :band_id) EagerAlbum.one_to_many :stracks, :clone=>:tracks, :eager_loader=>pr.call(:stracks, :id) EagerAlbum.many_to_many :sgenres, :clone=>:genres, :eager_loader=>pr.call(:sgenres, :id) EagerAlbum.eager(:sband, :stracks, :sgenres).all khs.should == {:sband=>{2=>[EagerAlbum.load(:band_id=>2, :id=>1)]}, :stracks=>{1=>[EagerAlbum.load(:band_id=>2, :id=>1)]}, :sgenres=>{1=>[EagerAlbum.load(:band_id=>2, :id=>1)]}} end it "should populate :key_hash using the method symbol" do khs = {} pr = proc{|a, m| proc{|h| khs[a] = h[:key_hash][m]}} EagerAlbum.many_to_one :sband, :clone=>:band, :eager_loader=>pr.call(:sband, :band_id), :key=>:band_id, :key_column=>:b_id EagerAlbum.one_to_many :stracks, :clone=>:tracks, :eager_loader=>pr.call(:stracks, :id), :primary_key=>:id, :primary_key_column=>:i EagerAlbum.many_to_many :sgenres, :clone=>:genres, :eager_loader=>pr.call(:sgenres, :id), :left_primary_key=>:id, :left_primary_key_column=>:i EagerAlbum.eager(:sband, :stracks, :sgenres).all khs.should == {:sband=>{2=>[EagerAlbum.load(:band_id=>2, :id=>1)]}, :stracks=>{1=>[EagerAlbum.load(:band_id=>2, :id=>1)]}, :sgenres=>{1=>[EagerAlbum.load(:band_id=>2, :id=>1)]}} end it "should raise an error if called without a symbol or hash" do proc{EagerAlbum.eager(Object.new)}.should raise_error(Sequel::Error) end it "should eagerly load a single many_to_one association" do a = EagerAlbum.eager(:band).all DB.sqls.should == ['SELECT * FROM albums', 'SELECT * FROM bands WHERE (bands.id IN (2))'] a.should == [EagerAlbum.load(:id => 1, :band_id => 2)] a.first.band.should == EagerBand.load(:id=>2) DB.sqls.should == [] end it "should eagerly load a single many_to_one association with the same name as the column" do EagerAlbum.def_column_alias(:band_id_id, :band_id) EagerAlbum.many_to_one :band_id, :key_column=>:band_id, :class=>EagerBand a = EagerAlbum.eager(:band_id).all DB.sqls.should == ['SELECT * FROM albums', 'SELECT * FROM bands WHERE (bands.id IN (2))'] a.should == [EagerAlbum.load(:id => 1, :band_id => 2)] a.first.band_id.should == EagerBand.load(:id=>2) DB.sqls.should == [] end it "should eagerly load a single one_to_one association" do EagerAlbum.one_to_one :track, :class=>'EagerTrack', :key=>:album_id a = EagerAlbum.eager(:track).all a.should == [EagerAlbum.load(:id => 1, :band_id => 2)] DB.sqls.should == ['SELECT * FROM albums', 'SELECT * FROM tracks WHERE (tracks.album_id IN (1))'] a.first.track.should == EagerTrack.load(:id => 3, :album_id=>1) DB.sqls.should == [] end it "should use first matching entry when eager loading one_to_one association" do EagerAlbum.one_to_one :track, :class=>'EagerTrack', :key=>:album_id EagerTrack.dataset._fetch = [{:id => 3, :album_id=>1}, {:id => 4, :album_id=>1}] a = EagerAlbum.eager(:track).all a.should == [EagerAlbum.load(:id => 1, :band_id => 2)] DB.sqls.should == ['SELECT * FROM 
albums', 'SELECT * FROM tracks WHERE (tracks.album_id IN (1))'] a.first.track.should == EagerTrack.load(:id => 3, :album_id=>1) DB.sqls.should == [] end it "should eagerly load a single one_to_one association using the :distinct_on strategy" do def (EagerTrack.dataset).supports_distinct_on?() true end EagerAlbum.one_to_one :track, :class=>'EagerTrack', :key=>:album_id, :eager_limit_strategy=>true a = EagerAlbum.eager(:track).all a.should == [EagerAlbum.load(:id => 1, :band_id => 2)] DB.sqls.should == ['SELECT * FROM albums', 'SELECT DISTINCT ON (tracks.album_id) * FROM tracks WHERE (tracks.album_id IN (1)) ORDER BY tracks.album_id'] a.first.track.should == EagerTrack.load(:id => 3, :album_id=>1) DB.sqls.should == [] end it "should eagerly load a single one_to_one association using the :window_function strategy" do def (EagerTrack.dataset).supports_window_functions?() true end EagerAlbum.one_to_one :track, :class=>'EagerTrack', :key=>:album_id, :order=>:name, :eager_limit_strategy=>true a = EagerAlbum.eager(:track).all a.should == [EagerAlbum.load(:id => 1, :band_id => 2)] DB.sqls.should == ['SELECT * FROM albums', 'SELECT * FROM (SELECT *, row_number() OVER (PARTITION BY tracks.album_id ORDER BY name) AS x_sequel_row_number_x FROM tracks WHERE (tracks.album_id IN (1))) AS t1 WHERE (x_sequel_row_number_x = 1)'] a.first.track.should == EagerTrack.load(:id => 3, :album_id=>1) DB.sqls.should == [] end it "should not use distinct on eager limit strategy if the association has an offset" do def (EagerTrack.dataset).supports_distinct_on?() true end def (EagerTrack.dataset).supports_window_functions?() true end EagerAlbum.one_to_one :track, :class=>'EagerTrack', :key=>:album_id, :limit=>[1,1], :order=>:name a = EagerAlbum.eager(:track).all a.should == [EagerAlbum.load(:id => 1, :band_id => 2)] DB.sqls.should == ['SELECT * FROM albums', 'SELECT * FROM (SELECT *, row_number() OVER (PARTITION BY tracks.album_id ORDER BY name) AS x_sequel_row_number_x FROM tracks WHERE (tracks.album_id IN (1))) AS t1 WHERE (x_sequel_row_number_x = 2)'] a.first.track.should == EagerTrack.load(:id => 3, :album_id=>1) DB.sqls.should == [] end it "should automatically use an eager limit strategy if the association has an offset" do EagerAlbum.one_to_one :track, :class=>'EagerTrack', :key=>:album_id, :limit=>[1,1] EagerTrack.dataset._fetch = [{:id => 3, :album_id=>1}, {:id => 4, :album_id=>1}] a = EagerAlbum.eager(:track).all a.should == [EagerAlbum.load(:id => 1, :band_id => 2)] DB.sqls.should == ['SELECT * FROM albums', 'SELECT * FROM tracks WHERE (tracks.album_id IN (1))'] a.first.track.should == EagerTrack.load(:id => 4, :album_id=>1) DB.sqls.should == [] end it "should eagerly load a single one_to_many association" do a = EagerAlbum.eager(:tracks).all a.should == [EagerAlbum.load(:id => 1, :band_id => 2)] DB.sqls.should == ['SELECT * FROM albums', 'SELECT * FROM tracks WHERE (tracks.album_id IN (1))'] a.first.tracks.should == [EagerTrack.load(:id => 3, :album_id=>1)] DB.sqls.should == [] end it "should eagerly load a single many_to_many association" do a = EagerAlbum.eager(:genres).all a.should == [EagerAlbum.load(:id => 1, :band_id => 2)] DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.id) AND (ag.album_id IN (1)))"] a.first.genres.should == [EagerGenre.load(:id=>4)] DB.sqls.should == [] end it "should support using a custom :key option when eager loading many_to_one associations" do EagerAlbum.many_to_one :sband,
:clone=>:band, :key=>:band_id3 EagerBand.dataset._fetch = {:id=>6} a = EagerAlbum.eager(:sband).all DB.sqls.should == ['SELECT * FROM albums', 'SELECT * FROM bands WHERE (bands.id IN (6))'] a.should == [EagerAlbum.load(:id => 1, :band_id => 2)] a.first.sband.should == EagerBand.load(:id=>6) DB.sqls.should == [] end it "should support using a custom :primary_key option when eager loading one_to_many associations" do EagerBand.one_to_many :salbums, :clone=>:albums, :primary_key=>:id3, :eager=>nil, :reciprocal=>nil EagerBand.dataset._fetch = {:id=>6} a = EagerBand.eager(:salbums).all DB.sqls.should == ['SELECT * FROM bands', 'SELECT * FROM albums WHERE (albums.band_id IN (2))'] a.should == [EagerBand.load(:id => 6)] a.first.salbums.should == [EagerAlbum.load(:id => 1, :band_id => 2)] DB.sqls.should == [] end it "should support using a custom :left_primary_key option when eager loading many_to_many associations" do EagerAlbum.many_to_many :sgenres, :clone=>:genres, :left_primary_key=>:band_id3 EagerGenre.dataset._fetch = {:id=>4, :x_foreign_key_x=>6} a = EagerAlbum.eager(:sgenres).all a.should == [EagerAlbum.load(:id => 1, :band_id => 2)] DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.id) AND (ag.album_id IN (6)))"] a.first.sgenres.should == [EagerGenre.load(:id=>4)] DB.sqls.should == [] end it "should handle a :eager_loading_predicate_key option to change the SQL used in the lookup, for many_to_one associations" do EagerAlbum.many_to_one :sband, :clone=>:band, :eager_loading_predicate_key=>Sequel./(:bands__id, 3), :primary_key_method=>:id3 EagerBand.dataset._fetch = {:id=>6} a = EagerAlbum.eager(:sband).all DB.sqls.should == ['SELECT * FROM albums', 'SELECT * FROM bands WHERE ((bands.id / 3) IN (2))'] a.should == [EagerAlbum.load(:id => 1, :band_id => 2)] a.first.sband.should == EagerBand.load(:id=>6) DB.sqls.should == [] end it "should handle a :eager_loading_predicate_key option to change the SQL used in the lookup, for one_to_many associations" do EagerBand.one_to_many :salbums, :clone=>:albums, :eager_loading_predicate_key=>Sequel.*(:albums__band_id, 3), :key_method=>:band_id3, :eager=>nil, :reciprocal=>nil EagerBand.dataset._fetch = {:id=>6} a = EagerBand.eager(:salbums).all DB.sqls.should == ['SELECT * FROM bands', 'SELECT * FROM albums WHERE ((albums.band_id * 3) IN (6))'] a.should == [EagerBand.load(:id => 6)] a.first.salbums.should == [EagerAlbum.load(:id => 1, :band_id => 2)] DB.sqls.should == [] end it "should handle a :eager_loading_predicate_key option to change the SQL used in the lookup, for many_to_many associations" do EagerAlbum.many_to_many :sgenres, :clone=>:genres, :eager_loading_predicate_key=>Sequel.*(:ag__album_id, 1) a = EagerAlbum.eager(:sgenres).all a.should == [EagerAlbum.load(:id => 1, :band_id => 2)] DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, (ag.album_id * 1) AS x_foreign_key_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.id) AND ((ag.album_id * 1) IN (1)))"] a.first.sgenres.should == [EagerGenre.load(:id=>4)] DB.sqls.should == [] end it "should raise an error for an unhandled :eager_loader_key option" do EagerAlbum.many_to_many :sgenres, :clone=>:genres, :eager_loader_key=>1 ds = EagerAlbum.eager(:sgenres) proc{ds.all}.should raise_error(Sequel::Error) end it "should not add entry to key_hash for :eager_loader_key=>nil option" do eo = nil EagerAlbum.many_to_many :sgenres, :clone=>:genres, :eager_loader_key=>nil, :eager_loader=>proc{|o| eo = 
o} ds = EagerAlbum.eager(:sgenres) ds.all eo[:key_hash].should == {} eo[:id_map].should == nil end it "should correctly handle a :select=>[] option to many_to_many" do EagerAlbum.many_to_many :sgenres, :clone=>:genres, :select=>[] EagerAlbum.eager(:sgenres).all DB.sqls.should == ['SELECT * FROM albums', "SELECT *, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.id) AND (ag.album_id IN (1)))"] end it "should correctly handle an aliased join table in many_to_many" do EagerAlbum.many_to_many :sgenres, :clone=>:genres, :join_table=>:ag___ga EagerAlbum.eager(:sgenres).all DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, ga.album_id AS x_foreign_key_x FROM genres INNER JOIN ag AS ga ON ((ga.genre_id = genres.id) AND (ga.album_id IN (1)))"] end it "should eagerly load multiple associations in a single call" do a = EagerAlbum.eager(:genres, :tracks, :band).all a.should == [EagerAlbum.load(:id => 1, :band_id => 2)] sqls = DB.sqls sqls.shift.should == 'SELECT * FROM albums' sqls.sort.should == ['SELECT * FROM bands WHERE (bands.id IN (2))', 'SELECT * FROM tracks WHERE (tracks.album_id IN (1))', 'SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.id) AND (ag.album_id IN (1)))'] a = a.first a.band.should == EagerBand.load(:id=>2) a.tracks.should == [EagerTrack.load(:id => 3, :album_id=>1)] a.genres.should == [EagerGenre.load(:id => 4)] DB.sqls.should == [] end it "should eagerly load multiple associations in separate calls" do a = EagerAlbum.eager(:genres).eager(:tracks).eager(:band).all a.should == [EagerAlbum.load(:id => 1, :band_id => 2)] sqls = DB.sqls sqls.shift.should == 'SELECT * FROM albums' sqls.sort.should == ['SELECT * FROM bands WHERE (bands.id IN (2))', 'SELECT * FROM tracks WHERE (tracks.album_id IN (1))', 'SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.id) AND (ag.album_id IN (1)))'] a = a.first a.band.should == EagerBand.load(:id=>2) a.tracks.should == [EagerTrack.load(:id => 3, :album_id=>1)] a.genres.should == [EagerGenre.load(:id => 4)] DB.sqls.should == [] end it "should allow cascading of eager loading for associations of associated models" do a = EagerTrack.eager(:album=>{:band=>:members}).all a.should == [EagerTrack.load(:id => 3, :album_id => 1)] DB.sqls.should == ['SELECT * FROM tracks', 'SELECT * FROM albums WHERE (albums.id IN (1))', 'SELECT * FROM bands WHERE (bands.id IN (2))', "SELECT members.*, bm.band_id AS x_foreign_key_x FROM members INNER JOIN bm ON ((bm.member_id = members.id) AND (bm.band_id IN (2)))"] a = a.first a.album.should == EagerAlbum.load(:id => 1, :band_id => 2) a.album.band.should == EagerBand.load(:id => 2) a.album.band.members.should == [EagerBandMember.load(:id => 5)] DB.sqls.should == [] end it "should cascade eagerly loading when the :eager association option is used" do a = EagerBand.eager(:albums).all a.should == [EagerBand.load(:id=>2)] DB.sqls.should == ['SELECT * FROM bands', 'SELECT * FROM albums WHERE (albums.band_id IN (2))', 'SELECT * FROM tracks WHERE (tracks.album_id IN (1))'] a.first.albums.should == [EagerAlbum.load(:id => 1, :band_id => 2)] a.first.albums.first.tracks.should == [EagerTrack.load(:id => 3, :album_id => 1)] DB.sqls.should == [] end it "should respect :eager when lazily loading an association" do a = EagerBand.all a.should == [EagerBand.load(:id=>2)] DB.sqls.should == ['SELECT * FROM bands'] a = a.first.albums DB.sqls.should == ['SELECT * FROM albums WHERE (albums.band_id = 
2)', 'SELECT * FROM tracks WHERE (tracks.album_id IN (1))'] a.should == [EagerAlbum.load(:id => 1, :band_id => 2)] a.first.tracks.should == [EagerTrack.load(:id => 3, :album_id => 1)] DB.sqls.should == [] end it "should cascade eagerly loading when the :eager_graph association option is used" do EagerAlbum.dataset._fetch = {:id=>1, :band_id=>2, :tracks_id=>3, :album_id=>1} a = EagerBand.eager(:graph_albums).all a.should == [EagerBand.load(:id=>2)] DB.sqls.should == ['SELECT * FROM bands', 'SELECT albums.id, albums.band_id, tracks.id AS tracks_id, tracks.album_id FROM albums LEFT OUTER JOIN tracks ON (tracks.album_id = albums.id) WHERE (albums.band_id IN (2))'] a.first.graph_albums.should == [EagerAlbum.load(:id => 1, :band_id => 2)] a.first.graph_albums.first.tracks.should == [EagerTrack.load(:id => 3, :album_id => 1)] DB.sqls.should == [] end it "should raise an Error when eager loading a many_to_many association with the :eager_graph option" do proc{EagerBand.eager(:graph_members).all}.should raise_error(Sequel::Error) end it "should respect :eager_graph when lazily loading an association" do a = EagerBand.all a.should == [EagerBand.load(:id=>2)] DB.sqls.should == ['SELECT * FROM bands'] a = a.first EagerAlbum.dataset._fetch = {:id=>1, :band_id=>2, :tracks_id=>3, :album_id=>1} a.graph_albums DB.sqls.should == ['SELECT albums.id, albums.band_id, tracks.id AS tracks_id, tracks.album_id FROM albums LEFT OUTER JOIN tracks ON (tracks.album_id = albums.id) WHERE (albums.band_id = 2)'] a.graph_albums.should == [EagerAlbum.load(:id => 1, :band_id => 2)] a.graph_albums.first.tracks.should == [EagerTrack.load(:id => 3, :album_id => 1)] DB.sqls.should == [] end it "should respect :eager_graph when lazily loading a many_to_many association" do ds = EagerBandMember.dataset def ds.columns() [:id] end ds._fetch = [{:id=>5, :bands_id=>2, :p_k=>6}, {:id=>5, :bands_id=>3, :p_k=>6}] a = EagerBand.load(:id=>2) a.graph_members.should == [EagerBandMember.load(:id=>5)] DB.sqls.should == ['SELECT members.id, bands.id AS bands_id, bands.p_k FROM (SELECT members.* FROM members INNER JOIN bm ON ((bm.member_id = members.id) AND (bm.band_id = 2))) AS members LEFT OUTER JOIN bm AS bm_0 ON (bm_0.member_id = members.id) LEFT OUTER JOIN bands ON (bands.id = bm_0.band_id) ORDER BY bands.id'] a.graph_members.first.bands.should == [EagerBand.load(:id=>2, :p_k=>6), EagerBand.load(:id=>3, :p_k=>6)] DB.sqls.should == [] end it "should respect :conditions when eagerly loading" do EagerBandMember.many_to_many :good_bands, :clone=>:bands, :conditions=>{:a=>32} a = EagerBandMember.eager(:good_bands).all a.should == [EagerBandMember.load(:id => 5)] DB.sqls.should == ['SELECT * FROM members', 'SELECT bands.*, bm.member_id AS x_foreign_key_x FROM bands INNER JOIN bm ON ((bm.band_id = bands.id) AND (bm.member_id IN (5))) WHERE (a = 32) ORDER BY id'] a.first.good_bands.should == [EagerBand.load(:id => 2)] DB.sqls.should == [] EagerBandMember.many_to_many :good_bands, :clone=>:bands, :conditions=>"x = 1" a = EagerBandMember.eager(:good_bands).all DB.sqls.should == ['SELECT * FROM members', 'SELECT bands.*, bm.member_id AS x_foreign_key_x FROM bands INNER JOIN bm ON ((bm.band_id = bands.id) AND (bm.member_id IN (5))) WHERE (x = 1) ORDER BY id'] end it "should respect :order when eagerly loading" do a = EagerBandMember.eager(:bands).all a.should == [EagerBandMember.load(:id => 5)] DB.sqls.should == ['SELECT * FROM members', 'SELECT bands.*, bm.member_id AS x_foreign_key_x FROM bands INNER JOIN bm ON ((bm.band_id = bands.id) AND 
(bm.member_id IN (5))) ORDER BY id'] a.first.bands.should == [EagerBand.load(:id => 2)] DB.sqls.should == [] end it "should populate the reciprocal many_to_one association when eagerly loading the one_to_many association" do a = EagerAlbum.eager(:tracks).all a.should == [EagerAlbum.load(:id => 1, :band_id => 2)] DB.sqls.should == ['SELECT * FROM albums', 'SELECT * FROM tracks WHERE (tracks.album_id IN (1))'] a.first.tracks.should == [EagerTrack.load(:id => 3, :album_id=>1)] a.first.tracks.first.album.should == a.first DB.sqls.should == [] end it "should cache the negative lookup when eagerly loading a many_to_one association" do a = EagerAlbum.eager(:band).filter(:id=>101).all a.should == [EagerAlbum.load(:id => 101, :band_id => 101)] DB.sqls.should == ['SELECT * FROM albums WHERE (id = 101)', 'SELECT * FROM bands WHERE (bands.id IN (101))'] a.first.associations.fetch(:band, 2).should be_nil a.first.band.should be_nil DB.sqls.should == [] end it "should cache the negative lookup when eagerly loading a *_to_many associations" do a = EagerBand.eager(:albums).filter('id > 100').all a.should == [EagerBand.load(:id => 101), EagerBand.load(:id =>102)] sqls = DB.sqls ['SELECT * FROM albums WHERE (albums.band_id IN (101, 102))', 'SELECT * FROM albums WHERE (albums.band_id IN (102, 101))'].should include(sqls.delete_at(1)) sqls.should == ['SELECT * FROM bands WHERE (id > 100)', "SELECT * FROM tracks WHERE (tracks.album_id IN (101))"] a.map{|b| b.associations[:albums]}.should == [[EagerAlbum.load({:band_id=>101, :id=>101})], []] DB.sqls.should == [] end it "should use the association's block when eager loading by default" do EagerAlbum.eager(:good_tracks).all DB.sqls.should == ['SELECT * FROM albums', "SELECT * FROM tracks WHERE ((tracks.album_id IN (1)) AND (name = 'Good'))"] end it "should use the eager_block option when eager loading if given" do EagerBand.eager(:good_albums).all DB.sqls.should == ['SELECT * FROM bands', "SELECT * FROM albums WHERE ((albums.band_id IN (2)) AND (name = 'good'))"] EagerBand.eager(:good_albums=>:good_tracks).all DB.sqls.should == ['SELECT * FROM bands', "SELECT * FROM albums WHERE ((albums.band_id IN (2)) AND (name = 'good'))", "SELECT * FROM tracks WHERE ((tracks.album_id IN (1)) AND (name = 'Good'))"] end it "should raise an error when attempting to eagerly load an association with the :allow_eager option set to false" do proc{EagerBand.eager(:self_titled_albums).all}.should raise_error(Sequel::Error) proc{EagerBand.eager(:albums_by_name).all}.should raise_error(Sequel::Error) end it "should respect the association's :select option" do EagerAlbum.eager(:band_name).all DB.sqls.should == ['SELECT * FROM albums', "SELECT id, name FROM bands WHERE (bands.id IN (2))"] EagerAlbum.eager(:track_names).all DB.sqls.should == ['SELECT * FROM albums', "SELECT id, name FROM tracks WHERE (tracks.album_id IN (1))"] EagerAlbum.eager(:genre_names).all DB.sqls.should == ['SELECT * FROM albums', "SELECT id, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.id) AND (ag.album_id IN (1)))"] end it "should respect many_to_one association's :qualify option" do EagerAlbum.many_to_one :special_band, :class=>:EagerBand, :qualify=>false, :key=>:band_id EagerBand.dataset._fetch = {:id=>2} as = EagerAlbum.eager(:special_band).all DB.sqls.should == ['SELECT * FROM albums', "SELECT * FROM bands WHERE (id IN (2))"] as.map{|a| a.special_band}.should == [EagerBand.load(:id=>2)] DB.sqls.should == [] end it "should respect the association's :primary_key option" do 
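# Illustrative aside, not part of the original spec: :primary_key points
# an association at a non-default column on the associated table. A
# sketch with hypothetical Album/Band models:
#
#   class Album < Sequel::Model
#     # albums.band_id is matched against bands.p_k instead of bands.id
#     many_to_one :band, :class=>:Band, :key=>:band_id, :primary_key=>:p_k
#   end
#   Album.eager(:band).all # SELECT * FROM bands WHERE (bands.p_k IN (...))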
EagerAlbum.many_to_one :special_band, :class=>:EagerBand, :primary_key=>:p_k, :key=>:band_id EagerBand.dataset._fetch = {:p_k=>2, :id=>1} as = EagerAlbum.eager(:special_band).all DB.sqls.should == ['SELECT * FROM albums', "SELECT * FROM bands WHERE (bands.p_k IN (2))"] as.length.should == 1 as.first.special_band.should == EagerBand.load(:p_k=>2, :id=>1) EagerAlbum.one_to_many :special_tracks, :class=>:EagerTrack, :primary_key=>:band_id, :key=>:album_id, :reciprocal=>nil EagerTrack.dataset._fetch = {:album_id=>2, :id=>1} as = EagerAlbum.eager(:special_tracks).all DB.sqls.should == ['SELECT * FROM albums', "SELECT * FROM tracks WHERE (tracks.album_id IN (2))"] as.length.should == 1 as.first.special_tracks.should == [EagerTrack.load(:album_id=>2, :id=>1)] end it "should respect the many_to_one association's composite keys" do EagerAlbum.many_to_one :special_band, :class=>:EagerBand, :primary_key=>[:id, :p_k], :key=>[:band_id, :id] EagerBand.dataset._fetch = {:p_k=>1, :id=>2} as = EagerAlbum.eager(:special_band).all DB.sqls.should == ['SELECT * FROM albums', "SELECT * FROM bands WHERE ((bands.id, bands.p_k) IN ((2, 1)))"] as.length.should == 1 as.first.special_band.should == EagerBand.load(:p_k=>1, :id=>2) end it "should respect the one_to_many association's composite keys" do EagerAlbum.one_to_many :special_tracks, :class=>:EagerTrack, :primary_key=>[:band_id, :id], :key=>[:id, :album_id] EagerTrack.dataset._fetch = {:album_id=>1, :id=>2} as = EagerAlbum.eager(:special_tracks).all DB.sqls.should == ['SELECT * FROM albums', "SELECT * FROM tracks WHERE ((tracks.id, tracks.album_id) IN ((2, 1)))"] as.length.should == 1 as.first.special_tracks.should == [EagerTrack.load(:album_id=>1, :id=>2)] end it "should respect many_to_many association's composite keys" do EagerAlbum.many_to_many :special_genres, :class=>:EagerGenre, :left_primary_key=>[:band_id, :id], :left_key=>[:l1, :l2], :right_primary_key=>[:xxx, :id], :right_key=>[:r1, :r2], :join_table=>:ag EagerGenre.dataset._fetch = [{:x_foreign_key_0_x=>2, :x_foreign_key_1_x=>1, :id=>5}, {:x_foreign_key_0_x=>2, :x_foreign_key_1_x=>1, :id=>6}] as = EagerAlbum.eager(:special_genres).all DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, ag.l1 AS x_foreign_key_0_x, ag.l2 AS x_foreign_key_1_x FROM genres INNER JOIN ag ON ((ag.r1 = genres.xxx) AND (ag.r2 = genres.id) AND ((ag.l1, ag.l2) IN ((2, 1))))"] as.length.should == 1 as.first.special_genres.should == [EagerGenre.load(:id=>5), EagerGenre.load(:id=>6)] end it "should respect many_to_many association's :left_primary_key and :right_primary_key options" do EagerAlbum.many_to_many :special_genres, :class=>:EagerGenre, :left_primary_key=>:band_id, :left_key=>:album_id, :right_primary_key=>:xxx, :right_key=>:genre_id, :join_table=>:ag EagerGenre.dataset._fetch = [{:x_foreign_key_x=>2, :id=>5}, {:x_foreign_key_x=>2, :id=>6}] as = EagerAlbum.eager(:special_genres).all DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.xxx) AND (ag.album_id IN (2)))"] as.length.should == 1 as.first.special_genres.should == [EagerGenre.load(:id=>5), EagerGenre.load(:id=>6)] end it "should respect the :limit option on a one_to_many association" do EagerAlbum.one_to_many :first_two_tracks, :class=>:EagerTrack, :key=>:album_id, :limit=>2 EagerTrack.dataset._fetch = [{:album_id=>1, :id=>2}, {:album_id=>1, :id=>3}, {:album_id=>1, :id=>4}] as = EagerAlbum.eager(:first_two_tracks).all DB.sqls.should == ['SELECT * FROM albums', 
"SELECT * FROM tracks WHERE (tracks.album_id IN (1))"] as.length.should == 1 as.first.first_two_tracks.should == [EagerTrack.load(:album_id=>1, :id=>2), EagerTrack.load(:album_id=>1, :id=>3)] DB.reset EagerAlbum.one_to_many :first_two_tracks, :class=>:EagerTrack, :key=>:album_id, :limit=>[1,1] as = EagerAlbum.eager(:first_two_tracks).all DB.sqls.should == ['SELECT * FROM albums', "SELECT * FROM tracks WHERE (tracks.album_id IN (1))"] as.length.should == 1 as.first.first_two_tracks.should == [EagerTrack.load(:album_id=>1, :id=>3)] DB.reset EagerAlbum.one_to_many :first_two_tracks, :class=>:EagerTrack, :key=>:album_id, :limit=>[nil,1] as = EagerAlbum.eager(:first_two_tracks).all DB.sqls.should == ['SELECT * FROM albums', "SELECT * FROM tracks WHERE (tracks.album_id IN (1))"] as.length.should == 1 as.first.first_two_tracks.should == [EagerTrack.load(:album_id=>1, :id=>3), EagerTrack.load(:album_id=>1, :id=>4)] end it "should respect the :limit option on a one_to_many association using the :window_function strategy" do def (EagerTrack.dataset).supports_window_functions?() true end EagerAlbum.one_to_many :tracks, :class=>'EagerTrack', :key=>:album_id, :order=>:name, :limit=>2 a = EagerAlbum.eager(:tracks).all a.should == [EagerAlbum.load(:id => 1, :band_id => 2)] DB.sqls.should == ['SELECT * FROM albums', 'SELECT * FROM (SELECT *, row_number() OVER (PARTITION BY tracks.album_id ORDER BY name) AS x_sequel_row_number_x FROM tracks WHERE (tracks.album_id IN (1))) AS t1 WHERE (x_sequel_row_number_x <= 2)'] a.first.tracks.should == [EagerTrack.load(:id => 3, :album_id=>1)] DB.sqls.should == [] end it "should respect the :limit option with an offset on a one_to_many association using the :window_function strategy" do def (EagerTrack.dataset).supports_window_functions?() true end EagerAlbum.one_to_many :tracks, :class=>'EagerTrack', :key=>:album_id, :order=>:name, :limit=>[2, 1] a = EagerAlbum.eager(:tracks).all a.should == [EagerAlbum.load(:id => 1, :band_id => 2)] DB.sqls.should == ['SELECT * FROM albums', 'SELECT * FROM (SELECT *, row_number() OVER (PARTITION BY tracks.album_id ORDER BY name) AS x_sequel_row_number_x FROM tracks WHERE (tracks.album_id IN (1))) AS t1 WHERE ((x_sequel_row_number_x >= 2) AND (x_sequel_row_number_x < 4))'] a.first.tracks.should == [EagerTrack.load(:id => 3, :album_id=>1)] DB.sqls.should == [] end it "should respect the :limit option with just an offset on a one_to_many association using the :window_function strategy" do def (EagerTrack.dataset).supports_window_functions?() true end EagerAlbum.one_to_many :tracks, :class=>'EagerTrack', :key=>:album_id, :order=>:name, :limit=>[nil, 1] a = EagerAlbum.eager(:tracks).all a.should == [EagerAlbum.load(:id => 1, :band_id => 2)] DB.sqls.should == ['SELECT * FROM albums', 'SELECT * FROM (SELECT *, row_number() OVER (PARTITION BY tracks.album_id ORDER BY name) AS x_sequel_row_number_x FROM tracks WHERE (tracks.album_id IN (1))) AS t1 WHERE (x_sequel_row_number_x >= 2)'] a.first.tracks.should == [EagerTrack.load(:id => 3, :album_id=>1)] DB.sqls.should == [] end it "should respect the limit option on a many_to_many association" do EagerAlbum.many_to_many :first_two_genres, :class=>:EagerGenre, :left_primary_key=>:band_id, :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :limit=>2 EagerGenre.dataset._fetch = [{:x_foreign_key_x=>2, :id=>5}, {:x_foreign_key_x=>2, :id=>6}, {:x_foreign_key_x=>2, :id=>7}] as = EagerAlbum.eager(:first_two_genres).all DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, ag.album_id AS 
x_foreign_key_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.id) AND (ag.album_id IN (2)))"] as.length.should == 1 as.first.first_two_genres.should == [EagerGenre.load(:id=>5), EagerGenre.load(:id=>6)] EagerGenre.dataset._fetch = [{:x_foreign_key_x=>2, :id=>5}, {:x_foreign_key_x=>2, :id=>6}, {:x_foreign_key_x=>2, :id=>7}] EagerAlbum.many_to_many :first_two_genres, :class=>:EagerGenre, :left_primary_key=>:band_id, :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :limit=>[1, 1] as = EagerAlbum.eager(:first_two_genres).all DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.id) AND (ag.album_id IN (2)))"] as.length.should == 1 as.first.first_two_genres.should == [EagerGenre.load(:id=>6)] EagerGenre.dataset._fetch = [{:x_foreign_key_x=>2, :id=>5}, {:x_foreign_key_x=>2, :id=>6}, {:x_foreign_key_x=>2, :id=>7}] EagerAlbum.many_to_many :first_two_genres, :class=>:EagerGenre, :left_primary_key=>:band_id, :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :limit=>[nil, 1] as = EagerAlbum.eager(:first_two_genres).all DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.id) AND (ag.album_id IN (2)))"] as.length.should == 1 as.first.first_two_genres.should == [EagerGenre.load(:id=>6), EagerGenre.load(:id=>7)] end it "should respect the limit option on a many_to_many association using the :window_function strategy" do def (EagerGenre.dataset).supports_window_functions?() true end EagerAlbum.many_to_many :first_two_genres, :class=>:EagerGenre, :left_primary_key=>:band_id, :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :limit=>2, :order=>:name EagerGenre.dataset._fetch = [{:x_foreign_key_x=>2, :id=>5}, {:x_foreign_key_x=>2, :id=>6}] as = EagerAlbum.eager(:first_two_genres).all DB.sqls.should == ['SELECT * FROM albums', "SELECT * FROM (SELECT genres.*, ag.album_id AS x_foreign_key_x, row_number() OVER (PARTITION BY ag.album_id ORDER BY name) AS x_sequel_row_number_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.id) AND (ag.album_id IN (2)))) AS t1 WHERE (x_sequel_row_number_x <= 2)"] as.length.should == 1 as.first.first_two_genres.should == [EagerGenre.load(:id=>5), EagerGenre.load(:id=>6)] EagerGenre.dataset._fetch = [{:x_foreign_key_x=>2, :id=>5}] EagerAlbum.many_to_many :first_two_genres, :class=>:EagerGenre, :left_primary_key=>:band_id, :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :limit=>[1, 1], :order=>:name as = EagerAlbum.eager(:first_two_genres).all DB.sqls.should == ['SELECT * FROM albums', "SELECT * FROM (SELECT genres.*, ag.album_id AS x_foreign_key_x, row_number() OVER (PARTITION BY ag.album_id ORDER BY name) AS x_sequel_row_number_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.id) AND (ag.album_id IN (2)))) AS t1 WHERE ((x_sequel_row_number_x >= 2) AND (x_sequel_row_number_x < 3))"] as.length.should == 1 as.first.first_two_genres.should == [EagerGenre.load(:id=>5)] EagerGenre.dataset._fetch = [{:x_foreign_key_x=>2, :id=>5}, {:x_foreign_key_x=>2, :id=>6}] EagerAlbum.many_to_many :first_two_genres, :class=>:EagerGenre, :left_primary_key=>:band_id, :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :limit=>[nil, 1], :order=>:name as = EagerAlbum.eager(:first_two_genres).all DB.sqls.should == ['SELECT * FROM albums', "SELECT * FROM (SELECT genres.*, ag.album_id AS x_foreign_key_x, row_number() OVER (PARTITION BY ag.album_id 
ORDER BY name) AS x_sequel_row_number_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.id) AND (ag.album_id IN (2)))) AS t1 WHERE (x_sequel_row_number_x >= 2)"] as.length.should == 1 as.first.first_two_genres.should == [EagerGenre.load(:id=>5), EagerGenre.load(:id=>6)] end it "should use the :eager_loader association option when eager loading" do EagerAlbum.many_to_one :special_band, :key=>:band_id, :eager_loader=>(proc do |eo| item = EagerBand.filter(:album_id=>eo[:rows].collect{|r| [r.pk, r.pk*2]}.flatten).order(:name).first eo[:rows].each{|r| r.associations[:special_band] = item} end) EagerAlbum.one_to_many :special_tracks, :eager_loader=>(proc do |eo| items = EagerTrack.filter(:album_id=>eo[:rows].collect{|r| [r.pk, r.pk*2]}.flatten).all eo[:rows].each{|r| r.associations[:special_tracks] = items} end) EagerAlbum.many_to_many :special_genres, :class=>:EagerGenre, :eager_loader=>(proc do |eo| items = EagerGenre.inner_join(:ag, [:genre_id]).filter(:album_id=>eo[:rows].collect{|r| r.pk}).all eo[:rows].each{|r| r.associations[:special_genres] = items} end) a = EagerAlbum.eager(:special_genres, :special_tracks, :special_band).all a.should == [EagerAlbum.load(:id => 1, :band_id => 2)] sqls = DB.sqls sqls.shift.should == 'SELECT * FROM albums' sqls.sort.should == ['SELECT * FROM bands WHERE (album_id IN (1, 2)) ORDER BY name LIMIT 1', 'SELECT * FROM genres INNER JOIN ag USING (genre_id) WHERE (album_id IN (1))', 'SELECT * FROM tracks WHERE (album_id IN (1, 2))'] a = a.first a.special_band.should == EagerBand.load(:id => 2) a.special_tracks.should == [EagerTrack.load(:id => 3, :album_id=>1)] a.special_genres.should == [EagerGenre.load(:id => 4)] DB.sqls.should == [] end it "should respect :after_load callbacks on associations when eager loading" do EagerAlbum.many_to_one :al_band, :class=>'EagerBand', :key=>:band_id, :after_load=>proc{|o, a| a.id *=2} EagerAlbum.one_to_many :al_tracks, :class=>'EagerTrack', :key=>:album_id, :after_load=>proc{|o, os| os.each{|a| a.id *=2}} EagerAlbum.many_to_many :al_genres, :class=>'EagerGenre', :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :after_load=>proc{|o, os| os.each{|a| a.id *=2}} a = EagerAlbum.eager(:al_band, :al_tracks, :al_genres).all.first a.should == EagerAlbum.load(:id => 1, :band_id => 2) a.al_band.should == EagerBand.load(:id=>4) a.al_tracks.should == [EagerTrack.load(:id=>6, :album_id=>1)] a.al_genres.should == [EagerGenre.load(:id=>8)] end it "should respect :uniq option when eagerly loading many_to_many associations" do EagerAlbum.many_to_many :al_genres, :class=>'EagerGenre', :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :uniq=>true EagerGenre.dataset._fetch = [{:x_foreign_key_x=>1, :id=>8}, {:x_foreign_key_x=>1, :id=>8}] a = EagerAlbum.eager(:al_genres).all.first DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.id) AND (ag.album_id IN (1)))"] a.should == EagerAlbum.load(:id => 1, :band_id => 2) a.al_genres.should == [EagerGenre.load(:id=>8)] end it "should respect :distinct option when eagerly loading many_to_many associations" do EagerAlbum.many_to_many :al_genres, :class=>'EagerGenre', :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :distinct=>true a = EagerAlbum.eager(:al_genres).all.first DB.sqls.should == ['SELECT * FROM albums', "SELECT DISTINCT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.id) AND (ag.album_id IN (1)))"] a.should == 
EagerAlbum.load(:id => 1, :band_id => 2) a.al_genres.should == [EagerGenre.load(:id=>4)] end it "should eagerly load a many_to_one association with custom eager block" do a = EagerAlbum.eager(:band => proc {|ds| ds.select(:id, :name)}).all a.should == [EagerAlbum.load(:id => 1, :band_id => 2)] DB.sqls.should == ['SELECT * FROM albums', 'SELECT id, name FROM bands WHERE (bands.id IN (2))'] a.first.band.should == EagerBand.load(:id => 2) DB.sqls.should == [] end it "should eagerly load a one_to_one association with custom eager block" do EagerAlbum.one_to_one :track, :class=>'EagerTrack', :key=>:album_id a = EagerAlbum.eager(:track => proc {|ds| ds.select(:id)}).all a.should == [EagerAlbum.load(:id => 1, :band_id => 2)] DB.sqls.should == ['SELECT * FROM albums', 'SELECT id FROM tracks WHERE (tracks.album_id IN (1))'] a.first.track.should == EagerTrack.load(:id => 3, :album_id=>1) DB.sqls.should == [] end it "should eagerly load a one_to_many association with custom eager block" do a = EagerAlbum.eager(:tracks => proc {|ds| ds.select(:id)}).all a.should == [EagerAlbum.load(:id => 1, :band_id => 2)] DB.sqls.should == ['SELECT * FROM albums', 'SELECT id FROM tracks WHERE (tracks.album_id IN (1))'] a.first.tracks.should == [EagerTrack.load(:id => 3, :album_id=>1)] DB.sqls.should == [] end it "should eagerly load a many_to_many association with custom eager block" do a = EagerAlbum.eager(:genres => proc {|ds| ds.select(:name)}).all a.should == [EagerAlbum.load(:id => 1, :band_id => 2)] DB.sqls.should == ['SELECT * FROM albums', "SELECT name, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.id) AND (ag.album_id IN (1)))"] a.first.genres.should == [EagerGenre.load(:id => 4)] DB.sqls.should == [] end it "should allow cascading of eager loading within a custom eager block" do a = EagerTrack.eager(:album => proc {|ds| ds.eager(:band => :members)}).all a.should == [EagerTrack.load(:id => 3, :album_id => 1)] DB.sqls.should == ['SELECT * FROM tracks', 'SELECT * FROM albums WHERE (albums.id IN (1))', 'SELECT * FROM bands WHERE (bands.id IN (2))', "SELECT members.*, bm.band_id AS x_foreign_key_x FROM members INNER JOIN bm ON ((bm.member_id = members.id) AND (bm.band_id IN (2)))"] a = a.first a.album.should == EagerAlbum.load(:id => 1, :band_id => 2) a.album.band.should == EagerBand.load(:id => 2) a.album.band.members.should == [EagerBandMember.load(:id => 5)] DB.sqls.should == [] end it "should allow cascading of eager loading with custom callback with hash value" do a = EagerTrack.eager(:album=>{proc{|ds| ds.select(:id, :band_id)}=>{:band => :members}}).all a.should == [EagerTrack.load(:id => 3, :album_id => 1)] DB.sqls.should == ['SELECT * FROM tracks', 'SELECT id, band_id FROM albums WHERE (albums.id IN (1))', 'SELECT * FROM bands WHERE (bands.id IN (2))', "SELECT members.*, bm.band_id AS x_foreign_key_x FROM members INNER JOIN bm ON ((bm.member_id = members.id) AND (bm.band_id IN (2)))"] a = a.first a.album.should == EagerAlbum.load(:id => 1, :band_id => 2) a.album.band.should == EagerBand.load(:id => 2) a.album.band.members.should == [EagerBandMember.load(:id => 5)] DB.sqls.should == [] end it "should allow cascading of eager loading with custom callback with symbol value" do a = EagerTrack.eager(:album=>{proc{|ds| ds.select(:id, :band_id)}=>:band}).all a.should == [EagerTrack.load(:id => 3, :album_id => 1)] DB.sqls.should == ['SELECT * FROM tracks', 'SELECT id, band_id FROM albums WHERE (albums.id IN (1))', 'SELECT * FROM bands WHERE (bands.id IN (2))'] a = a.first 
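# A proc used as a hash key acts as a per-call dataset callback: it
# receives and can narrow the intermediate dataset (albums is trimmed to
# two columns here) before the association named in the hash value is
# eagerly loaded from its result.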
a.album.should == EagerAlbum.load(:id => 1, :band_id => 2) a.album.band.should == EagerBand.load(:id => 2) DB.sqls.should == [] end it "should allow cascading of eager loading with custom callback with array value" do a = EagerTrack.eager(:album=>{proc{|ds| ds.select(:id, :band_id)}=>[:band, :band_name]}).all a.should == [EagerTrack.load(:id => 3, :album_id => 1)] sqls = DB.sqls sqls.slice!(0..1).should == ['SELECT * FROM tracks', 'SELECT id, band_id FROM albums WHERE (albums.id IN (1))'] sqls.sort.should == ['SELECT * FROM bands WHERE (bands.id IN (2))', 'SELECT id, name FROM bands WHERE (bands.id IN (2))'] a = a.first a.album.should == EagerAlbum.load(:id => 1, :band_id => 2) a.album.band.should == EagerBand.load(:id => 2) a.album.band_name.should == EagerBand.load(:id => 2) DB.sqls.should == [] end it "should call both association and custom eager blocks" do EagerBand.eager(:good_albums => proc {|ds| ds.select(:name)}).all DB.sqls.should == ['SELECT * FROM bands', "SELECT name FROM albums WHERE ((albums.band_id IN (2)) AND (name = 'good'))"] end end describe Sequel::Model, "#eager_graph" do before(:all) do class ::GraphAlbum < Sequel::Model(:albums) dataset.opts[:from] = [:albums] columns :id, :band_id many_to_one :band, :class=>'GraphBand', :key=>:band_id one_to_many :tracks, :class=>'GraphTrack', :key=>:album_id many_to_many :genres, :class=>'GraphGenre', :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag many_to_one :previous_album, :class=>'GraphAlbum' end class ::GraphBand < Sequel::Model(:bands) dataset.opts[:from] = [:bands] columns :id, :vocalist_id many_to_one :vocalist, :class=>'GraphBandMember', :key=>:vocalist_id one_to_many :albums, :class=>'GraphAlbum', :key=>:band_id many_to_many :members, :class=>'GraphBandMember', :left_key=>:band_id, :right_key=>:member_id, :join_table=>:bm many_to_many :genres, :class=>'GraphGenre', :left_key=>:band_id, :right_key=>:genre_id, :join_table=>:bg end class ::GraphTrack < Sequel::Model(:tracks) dataset.opts[:from] = [:tracks] columns :id, :album_id many_to_one :album, :class=>'GraphAlbum', :key=>:album_id end class ::GraphGenre < Sequel::Model(:genres) dataset.opts[:from] = [:genres] columns :id many_to_many :albums, :class=>'GraphAlbum', :left_key=>:genre_id, :right_key=>:album_id, :join_table=>:ag end class ::GraphBandMember < Sequel::Model(:members) dataset.opts[:from] = [:members] columns :id many_to_many :bands, :class=>'GraphBand', :left_key=>:member_id, :right_key=>:band_id, :join_table=>:bm end end after(:all) do [:GraphAlbum, :GraphBand, :GraphTrack, :GraphGenre, :GraphBandMember].each{|x| Object.send(:remove_const, x)} end it "should raise an error if called without a symbol or hash" do proc{GraphAlbum.eager_graph(Object.new)}.should raise_error(Sequel::Error) end it "should work correctly with select_map" do ds = GraphAlbum.eager_graph(:band) ds._fetch = [{:id=>1}, {:id=>2}] ds.select_map(:albums__id).should == [1, 2] DB.sqls.should == ['SELECT albums.id FROM albums LEFT OUTER JOIN bands AS band ON (band.id = albums.band_id)'] ds._fetch = [{:id=>1}, {:id=>2}] ds.select_map([:albums__id, :albums__id]).should == [[1, 1], [2, 2]] DB.sqls.should == ['SELECT albums.id, albums.id FROM albums LEFT OUTER JOIN bands AS band ON (band.id = albums.band_id)'] end it "should work correctly with single_value" do ds = GraphAlbum.eager_graph(:band).select(:albums__id) ds._fetch = [{:id=>1}] ds.single_value.should == 1 DB.sqls.should == ['SELECT albums.id FROM albums LEFT OUTER JOIN bands AS band ON (band.id = albums.band_id) LIMIT 1'] 
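# Like select_map above, single_value bypasses eager_graph's row
# splitting, returning the bare column value from the joined query
# rather than graphed model instances.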
end it "should not split results and assign associations if ungraphed is called" do ds = GraphAlbum.eager_graph(:band).ungraphed ds.sql.should == 'SELECT albums.id, albums.band_id, band.id AS band_id_0, band.vocalist_id FROM albums LEFT OUTER JOIN bands AS band ON (band.id = albums.band_id)' ds._fetch = {:id=>1, :band_id=>2, :band_id_0=>2, :vocalist_id=>3} ds.all.should == [GraphAlbum.load(:id=>1, :band_id=>2, :band_id_0=>2, :vocalist_id=>3)] end it "should not modify existing dataset" do ds1 = GraphAlbum.dataset ds2 = ds1.eager_graph(:band) proc{ds1.eager_graph(:band)}.should_not raise_error proc{ds2.eager_graph(:tracks)}.should_not raise_error proc{ds2.eager_graph(:tracks)}.should_not raise_error end it "should allow manually selecting the alias base per call via an AliasedExpression" do ds = GraphAlbum.eager_graph(Sequel.as(:band, :b)) ds.sql.should == 'SELECT albums.id, albums.band_id, b.id AS b_id, b.vocalist_id FROM albums LEFT OUTER JOIN bands AS b ON (b.id = albums.band_id)' ds._fetch = {:id=>1, :band_id=>2, :b_id=>2, :vocalist_id=>3} a = ds.all a.should == [GraphAlbum.load(:id => 1, :band_id => 2)] a.first.band.should == GraphBand.load(:id => 2, :vocalist_id=>3) end it "should handle multiple associations using the same alias base" do ds = GraphAlbum.eager_graph(Sequel.as(:genres, :b), Sequel.as(:tracks, :b), Sequel.as(:band, :b)) ds.sql.should == 'SELECT albums.id, albums.band_id, b.id AS b_id, b_0.id AS b_0_id, b_0.album_id, b_1.id AS b_1_id, b_1.vocalist_id FROM albums LEFT OUTER JOIN ag ON (ag.album_id = albums.id) LEFT OUTER JOIN genres AS b ON (b.id = ag.genre_id) LEFT OUTER JOIN tracks AS b_0 ON (b_0.album_id = albums.id) LEFT OUTER JOIN bands AS b_1 ON (b_1.id = albums.band_id)' ds._fetch = {:id=>1, :band_id=>2, :b_id=>4, :b_0_id=>3, :album_id=>1, :b_1_id=>2, :vocalist_id=>6} a = ds.all a.should == [GraphAlbum.load(:id => 1, :band_id => 2)] a = a.first a.band.should == GraphBand.load(:id => 2, :vocalist_id=>6) a.tracks.should == [GraphTrack.load({:id => 3, :album_id=>1})] a.genres.should == [GraphGenre.load(:id => 4)] ds = GraphTrack.eager_graph(Sequel.as(:album, :b)=>{Sequel.as(:band, :b)=>Sequel.as(:members, :b)}) ds.sql.should == 'SELECT tracks.id, tracks.album_id, b.id AS b_id, b.band_id, b_0.id AS b_0_id, b_0.vocalist_id, b_1.id AS b_1_id FROM tracks LEFT OUTER JOIN albums AS b ON (b.id = tracks.album_id) LEFT OUTER JOIN bands AS b_0 ON (b_0.id = b.band_id) LEFT OUTER JOIN bm ON (bm.band_id = b_0.id) LEFT OUTER JOIN members AS b_1 ON (b_1.id = bm.member_id)' ds._fetch = {:id=>3, :album_id=>1, :b_id=>1, :band_id=>2, :b_1_id=>5, :b_0_id=>2, :vocalist_id=>6} a = ds.all a.should == [GraphTrack.load(:id => 3, :album_id => 1)] a = a.first a.album.should == GraphAlbum.load(:id => 1, :band_id => 2) a.album.band.should == GraphBand.load(:id => 2, :vocalist_id=>6) a.album.band.members.should == [GraphBandMember.load(:id => 5)] end it "should eagerly load a single many_to_one association" do ds = GraphAlbum.eager_graph(:band) ds.sql.should == 'SELECT albums.id, albums.band_id, band.id AS band_id_0, band.vocalist_id FROM albums LEFT OUTER JOIN bands AS band ON (band.id = albums.band_id)' ds._fetch = {:id=>1, :band_id=>2, :band_id_0=>2, :vocalist_id=>3} a = ds.all a.should == [GraphAlbum.load(:id => 1, :band_id => 2)] a.first.band.should == GraphBand.load(:id => 2, :vocalist_id=>3) end it "should eagerly load a single many_to_one association with the same name as a column" do GraphAlbum.def_column_alias(:band_id_id, :band_id) GraphAlbum.many_to_one :band_id, :key_column=>:band_id, 
:class=>GraphBand ds = GraphAlbum.eager_graph(:band_id) ds.sql.should == 'SELECT albums.id, albums.band_id, band_id.id AS band_id_id, band_id.vocalist_id FROM albums LEFT OUTER JOIN bands AS band_id ON (band_id.id = albums.band_id)' ds._fetch = {:id=>1, :band_id=>2, :band_id_id=>2, :vocalist_id=>3} a = ds.all a.should == [GraphAlbum.load(:id => 1, :band_id => 2)] a.first.band_id.should == GraphBand.load(:id => 2, :vocalist_id=>3) end it "should eagerly load a single one_to_one association" do GraphAlbum.one_to_one :track, :class=>'GraphTrack', :key=>:album_id ds = GraphAlbum.eager_graph(:track) ds.sql.should == 'SELECT albums.id, albums.band_id, track.id AS track_id, track.album_id FROM albums LEFT OUTER JOIN tracks AS track ON (track.album_id = albums.id)' ds._fetch = {:id=>1, :band_id=>2, :track_id=>3, :album_id=>1} a = ds.all a.should == [GraphAlbum.load(:id => 1, :band_id => 2)] a.first.track.should == GraphTrack.load(:id => 3, :album_id=>1) end it "should eagerly load a single one_to_many association" do ds = GraphAlbum.eager_graph(:tracks) ds.sql.should == 'SELECT albums.id, albums.band_id, tracks.id AS tracks_id, tracks.album_id FROM albums LEFT OUTER JOIN tracks ON (tracks.album_id = albums.id)' ds._fetch = {:id=>1, :band_id=>2, :tracks_id=>3, :album_id=>1} a = ds.all a.should == [GraphAlbum.load(:id => 1, :band_id => 2)] a.first.tracks.should == [GraphTrack.load(:id => 3, :album_id=>1)] end it "should eagerly load a single many_to_many association" do ds = GraphAlbum.eager_graph(:genres) ds.sql.should == 'SELECT albums.id, albums.band_id, genres.id AS genres_id FROM albums LEFT OUTER JOIN ag ON (ag.album_id = albums.id) LEFT OUTER JOIN genres ON (genres.id = ag.genre_id)' ds._fetch = {:id=>1, :band_id=>2, :genres_id=>4} a = ds.all a.should == [GraphAlbum.load(:id => 1, :band_id => 2)] a.first.genres.should == [GraphGenre.load(:id => 4)] end it "should correctly handle an aliased join table in many_to_many" do c = Class.new(GraphAlbum) c.many_to_many :genres, :clone=>:genres, :join_table=>:ag___ga c.eager_graph(:genres).sql.should == 'SELECT albums.id, albums.band_id, genres.id AS genres_id FROM albums LEFT OUTER JOIN ag AS ga ON (ga.album_id = albums.id) LEFT OUTER JOIN genres ON (genres.id = ga.genre_id)' c.many_to_many :genres, :clone=>:genres, :join_table=>:ag___albums c.eager_graph(:genres).sql.should == 'SELECT albums.id, albums.band_id, genres.id AS genres_id FROM albums LEFT OUTER JOIN ag AS albums_0 ON (albums_0.album_id = albums.id) LEFT OUTER JOIN genres ON (genres.id = albums_0.genre_id)' c.many_to_many :genres, :clone=>:genres, :join_table=>:ag___genres c.eager_graph(:genres).sql.should == 'SELECT albums.id, albums.band_id, genres.id AS genres_id FROM albums LEFT OUTER JOIN ag AS genres_0 ON (genres_0.album_id = albums.id) LEFT OUTER JOIN genres ON (genres.id = genres_0.genre_id)' end it "should eagerly load multiple associations in a single call" do ds = GraphAlbum.eager_graph(:genres, :tracks, :band) ds.sql.should == 'SELECT albums.id, albums.band_id, genres.id AS genres_id, tracks.id AS tracks_id, tracks.album_id, band.id AS band_id_0, band.vocalist_id FROM albums LEFT OUTER JOIN ag ON (ag.album_id = albums.id) LEFT OUTER JOIN genres ON (genres.id = ag.genre_id) LEFT OUTER JOIN tracks ON (tracks.album_id = albums.id) LEFT OUTER JOIN bands AS band ON (band.id = albums.band_id)' ds._fetch = {:id=>1, :band_id=>2, :genres_id=>4, :tracks_id=>3, :album_id=>1, :band_id_0=>2, :vocalist_id=>6} a = ds.all a.should == [GraphAlbum.load(:id => 1, :band_id => 2)] a = a.first 
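# All three associations come back from the single JOIN query above; the
# aliased columns (genres_id, tracks_id, band_id_0) let the row splitter
# route each column set to the correct model.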
a.band.should == GraphBand.load(:id => 2, :vocalist_id=>6) a.tracks.should == [GraphTrack.load({:id => 3, :album_id=>1})] a.genres.should == [GraphGenre.load(:id => 4)] end it "should eagerly load multiple associations in separate calls" do ds = GraphAlbum.eager_graph(:genres).eager_graph(:tracks).eager_graph(:band) ds.sql.should == 'SELECT albums.id, albums.band_id, genres.id AS genres_id, tracks.id AS tracks_id, tracks.album_id, band.id AS band_id_0, band.vocalist_id FROM albums LEFT OUTER JOIN ag ON (ag.album_id = albums.id) LEFT OUTER JOIN genres ON (genres.id = ag.genre_id) LEFT OUTER JOIN tracks ON (tracks.album_id = albums.id) LEFT OUTER JOIN bands AS band ON (band.id = albums.band_id)' ds._fetch = {:id=>1, :band_id=>2, :genres_id=>4, :tracks_id=>3, :album_id=>1, :band_id_0=>2, :vocalist_id=>6} a = ds.all a.should == [GraphAlbum.load(:id => 1, :band_id => 2)] a = a.first a.band.should == GraphBand.load(:id => 2, :vocalist_id=>6) a.tracks.should == [GraphTrack.load({:id => 3, :album_id=>1})] a.genres.should == [GraphGenre.load(:id => 4)] end it "should allow cascading of eager loading for associations of associated models" do ds = GraphTrack.eager_graph(:album=>{:band=>:members}) ds.sql.should == 'SELECT tracks.id, tracks.album_id, album.id AS album_id_0, album.band_id, band.id AS band_id_0, band.vocalist_id, members.id AS members_id FROM tracks LEFT OUTER JOIN albums AS album ON (album.id = tracks.album_id) LEFT OUTER JOIN bands AS band ON (band.id = album.band_id) LEFT OUTER JOIN bm ON (bm.band_id = band.id) LEFT OUTER JOIN members ON (members.id = bm.member_id)' ds._fetch = {:id=>3, :album_id=>1, :album_id_0=>1, :band_id=>2, :members_id=>5, :band_id_0=>2, :vocalist_id=>6} a = ds.all a.should == [GraphTrack.load(:id => 3, :album_id => 1)] a = a.first a.album.should == GraphAlbum.load(:id => 1, :band_id => 2) a.album.band.should == GraphBand.load(:id => 2, :vocalist_id=>6) a.album.band.members.should == [GraphBandMember.load(:id => 5)] end it "should allow cascading of eager loading for multiple *_to_many associations, eliminating duplicates caused by cartesian products" do ds = GraphBand.eager_graph({:albums=>:tracks}, :members) ds.sql.should == 'SELECT bands.id, bands.vocalist_id, albums.id AS albums_id, albums.band_id, tracks.id AS tracks_id, tracks.album_id, members.id AS members_id FROM bands LEFT OUTER JOIN albums ON (albums.band_id = bands.id) LEFT OUTER JOIN tracks ON (tracks.album_id = albums.id) LEFT OUTER JOIN bm ON (bm.band_id = bands.id) LEFT OUTER JOIN members ON (members.id = bm.member_id)' ds._fetch = [{:id=>1, :vocalist_id=>2, :albums_id=>3, :band_id=>1, :tracks_id=>4, :album_id=>3, :members_id=>5}, {:id=>1, :vocalist_id=>2, :albums_id=>3, :band_id=>1, :tracks_id=>4, :album_id=>3, :members_id=>6}, {:id=>1, :vocalist_id=>2, :albums_id=>3, :band_id=>1, :tracks_id=>5, :album_id=>3, :members_id=>5}, {:id=>1, :vocalist_id=>2, :albums_id=>3, :band_id=>1, :tracks_id=>5, :album_id=>3, :members_id=>6}, {:id=>1, :vocalist_id=>2, :albums_id=>4, :band_id=>1, :tracks_id=>6, :album_id=>4, :members_id=>5}, {:id=>1, :vocalist_id=>2, :albums_id=>4, :band_id=>1, :tracks_id=>6, :album_id=>4, :members_id=>6}, {:id=>1, :vocalist_id=>2, :albums_id=>4, :band_id=>1, :tracks_id=>7, :album_id=>4, :members_id=>5}, {:id=>1, :vocalist_id=>2, :albums_id=>4, :band_id=>1, :tracks_id=>7, :album_id=>4, :members_id=>6}, {:id=>2, :vocalist_id=>2, :albums_id=>5, :band_id=>2, :tracks_id=>8, :album_id=>5, :members_id=>5}, {:id=>2, :vocalist_id=>2, :albums_id=>5, :band_id=>2, :tracks_id=>8, :album_id=>5, 
:members_id=>6}, {:id=>2, :vocalist_id=>2, :albums_id=>5, :band_id=>2, :tracks_id=>9, :album_id=>5, :members_id=>5}, {:id=>2, :vocalist_id=>2, :albums_id=>5, :band_id=>2, :tracks_id=>9, :album_id=>5, :members_id=>6}, {:id=>2, :vocalist_id=>2, :albums_id=>6, :band_id=>2, :tracks_id=>1, :album_id=>6, :members_id=>5}, {:id=>2, :vocalist_id=>2, :albums_id=>6, :band_id=>2, :tracks_id=>1, :album_id=>6, :members_id=>6}, {:id=>2, :vocalist_id=>2, :albums_id=>6, :band_id=>2, :tracks_id=>2, :album_id=>6, :members_id=>5}, {:id=>2, :vocalist_id=>2, :albums_id=>6, :band_id=>2, :tracks_id=>2, :album_id=>6, :members_id=>6}] a = ds.all a.should == [GraphBand.load(:id=>1, :vocalist_id=>2), GraphBand.load(:id=>2, :vocalist_id=>2)] members = a.map{|x| x.members} members.should == [[GraphBandMember.load(:id=>5), GraphBandMember.load(:id=>6)], [GraphBandMember.load(:id=>5), GraphBandMember.load(:id=>6)]] albums = a.map{|x| x.albums} albums.should == [[GraphAlbum.load(:id=>3, :band_id=>1), GraphAlbum.load(:id=>4, :band_id=>1)], [GraphAlbum.load(:id=>5, :band_id=>2), GraphAlbum.load(:id=>6, :band_id=>2)]] tracks = albums.map{|x| x.map{|y| y.tracks}} tracks.should == [[[GraphTrack.load(:id=>4, :album_id=>3), GraphTrack.load(:id=>5, :album_id=>3)], [GraphTrack.load(:id=>6, :album_id=>4), GraphTrack.load(:id=>7, :album_id=>4)]], [[GraphTrack.load(:id=>8, :album_id=>5), GraphTrack.load(:id=>9, :album_id=>5)], [GraphTrack.load(:id=>1, :album_id=>6), GraphTrack.load(:id=>2, :album_id=>6)]]] end it "should populate the reciprocal many_to_one association when eagerly loading the one_to_many association" do DB.reset ds = GraphAlbum.eager_graph(:tracks) ds.sql.should == 'SELECT albums.id, albums.band_id, tracks.id AS tracks_id, tracks.album_id FROM albums LEFT OUTER JOIN tracks ON (tracks.album_id = albums.id)' ds._fetch = {:id=>1, :band_id=>2, :tracks_id=>3, :album_id=>1} a = ds.all a.should == [GraphAlbum.load(:id => 1, :band_id => 2)] a = a.first a.tracks.should == [GraphTrack.load(:id => 3, :album_id=>1)] a.tracks.first.album.should == a DB.sqls.should == ['SELECT albums.id, albums.band_id, tracks.id AS tracks_id, tracks.album_id FROM albums LEFT OUTER JOIN tracks ON (tracks.album_id = albums.id)'] end it "should eager load multiple associations from the same table" do ds = GraphBand.eager_graph(:vocalist, :members) ds.sql.should == 'SELECT bands.id, bands.vocalist_id, vocalist.id AS vocalist_id_0, members.id AS members_id FROM bands LEFT OUTER JOIN members AS vocalist ON (vocalist.id = bands.vocalist_id) LEFT OUTER JOIN bm ON (bm.band_id = bands.id) LEFT OUTER JOIN members ON (members.id = bm.member_id)' ds._fetch = {:id=>2, :vocalist_id=>6, :vocalist_id_0=>6, :members_id=>5} a = ds.all a.should == [GraphBand.load(:id => 2, :vocalist_id => 6)] a = a.first a.vocalist.should == GraphBandMember.load(:id => 6) a.members.should == [GraphBandMember.load(:id => 5)] end it "should give you a plain hash when called without .all" do ds = GraphAlbum.eager_graph(:band) ds.sql.should == 'SELECT albums.id, albums.band_id, band.id AS band_id_0, band.vocalist_id FROM albums LEFT OUTER JOIN bands AS band ON (band.id = albums.band_id)' ds._fetch = {:id=>1, :band_id=>2, :band_id_0=>2, :vocalist_id=>3} ds.first.should == {:id=>1, :band_id=>2, :band_id_0=>2, :vocalist_id=>3} end it "should not drop any associated objects if the graph could not be a cartesian product" do ds = GraphBand.eager_graph(:members, :vocalist) ds.sql.should == 'SELECT bands.id, bands.vocalist_id, members.id AS members_id, vocalist.id AS vocalist_id_0 FROM bands 
LEFT OUTER JOIN bm ON (bm.band_id = bands.id) LEFT OUTER JOIN members ON (members.id = bm.member_id) LEFT OUTER JOIN members AS vocalist ON (vocalist.id = bands.vocalist_id)' ds._fetch = [{:id=>2, :vocalist_id=>6, :members_id=>5, :vocalist_id_0=>6}, {:id=>2, :vocalist_id=>6, :members_id=>5, :vocalist_id_0=>6}] a = ds.all a.should == [GraphBand.load(:id => 2, :vocalist_id => 6)] a = a.first a.vocalist.should == GraphBandMember.load(:id => 6) a.members.should == [GraphBandMember.load(:id => 5), GraphBandMember.load(:id => 5)] end it "should respect the :cartesian_product_number option" do GraphBand.many_to_one :other_vocalist, :class=>'GraphBandMember', :key=>:vocalist_id, :cartesian_product_number=>1 ds = GraphBand.eager_graph(:members, :other_vocalist) ds.sql.should == 'SELECT bands.id, bands.vocalist_id, members.id AS members_id, other_vocalist.id AS other_vocalist_id FROM bands LEFT OUTER JOIN bm ON (bm.band_id = bands.id) LEFT OUTER JOIN members ON (members.id = bm.member_id) LEFT OUTER JOIN members AS other_vocalist ON (other_vocalist.id = bands.vocalist_id)' ds._fetch = [{:id=>2, :vocalist_id=>6, :members_id=>5, :other_vocalist_id=>6}, {:id=>2, :vocalist_id=>6, :members_id=>5, :other_vocalist_id=>6}] a = ds.all a.should == [GraphBand.load(:id=>2, :vocalist_id => 6)] a.first.other_vocalist.should == GraphBandMember.load(:id=>6) a.first.members.should == [GraphBandMember.load(:id=>5)] end it "should drop duplicate items that occur in sequence if the graph could be a cartesian product" do ds = GraphBand.eager_graph(:members, :genres) ds.sql.should == 'SELECT bands.id, bands.vocalist_id, members.id AS members_id, genres.id AS genres_id FROM bands LEFT OUTER JOIN bm ON (bm.band_id = bands.id) LEFT OUTER JOIN members ON (members.id = bm.member_id) LEFT OUTER JOIN bg ON (bg.band_id = bands.id) LEFT OUTER JOIN genres ON (genres.id = bg.genre_id)' ds._fetch = [{:id=>2, :vocalist_id=>6, :members_id=>5, :genres_id=>7}, {:id=>2, :vocalist_id=>6, :members_id=>5, :genres_id=>8}, {:id=>2, :vocalist_id=>6, :members_id=>6, :genres_id=>7}, {:id=>2, :vocalist_id=>6, :members_id=>6, :genres_id=>8}] a = ds.all a.should == [GraphBand.load(:id => 2, :vocalist_id => 6)] a = a.first a.members.should == [GraphBandMember.load(:id => 5), GraphBandMember.load(:id => 6)] a.genres.should == [GraphGenre.load(:id => 7), GraphGenre.load(:id => 8)] end it "should be able to be used in combination with #eager" do DB.reset ds = GraphAlbum.eager_graph(:tracks).eager(:genres) ds._fetch = {:id=>1, :band_id=>2, :tracks_id=>3, :album_id=>1} ds2 = GraphGenre.dataset ds2._fetch = {:id=>6, :x_foreign_key_x=>1} a = ds.all a.should == [GraphAlbum.load(:id => 1, :band_id => 2)] a = a.first a.tracks.should == [GraphTrack.load(:id=>3, :album_id=>1)] a.genres.should == [GraphGenre.load(:id => 6)] DB.sqls.should == ['SELECT albums.id, albums.band_id, tracks.id AS tracks_id, tracks.album_id FROM albums LEFT OUTER JOIN tracks ON (tracks.album_id = albums.id)', "SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.id) AND (ag.album_id IN (1)))"] end it "should handle no associated records for a single many_to_one association" do ds = GraphAlbum.eager_graph(:band) ds.sql.should == 'SELECT albums.id, albums.band_id, band.id AS band_id_0, band.vocalist_id FROM albums LEFT OUTER JOIN bands AS band ON (band.id = albums.band_id)' ds._fetch = {:id=>1, :band_id=>2, :band_id_0=>nil, :vocalist_id=>nil} a = ds.all a.should == [GraphAlbum.load(:id => 1, :band_id => 2)] a.first.band.should == nil end it 
"should handle no associated records for a single one_to_many association" do ds = GraphAlbum.eager_graph(:tracks) ds.sql.should == 'SELECT albums.id, albums.band_id, tracks.id AS tracks_id, tracks.album_id FROM albums LEFT OUTER JOIN tracks ON (tracks.album_id = albums.id)' ds._fetch = {:id=>1, :band_id=>2, :tracks_id=>nil, :album_id=>nil} a = ds.all a.should == [GraphAlbum.load(:id => 1, :band_id => 2)] a.first.tracks.should == [] end it "should handle no associated records for a single many_to_many association" do ds = GraphAlbum.eager_graph(:genres) ds.sql.should == 'SELECT albums.id, albums.band_id, genres.id AS genres_id FROM albums LEFT OUTER JOIN ag ON (ag.album_id = albums.id) LEFT OUTER JOIN genres ON (genres.id = ag.genre_id)' ds._fetch = {:id=>1, :band_id=>2, :genres_id=>nil} a = ds.all a.should == [GraphAlbum.load(:id => 1, :band_id => 2)] a.first.genres.should == [] end it "should handle missing associated records when loading multiple associations" do ds = GraphAlbum.eager_graph(:genres, :tracks, :band) ds.sql.should == 'SELECT albums.id, albums.band_id, genres.id AS genres_id, tracks.id AS tracks_id, tracks.album_id, band.id AS band_id_0, band.vocalist_id FROM albums LEFT OUTER JOIN ag ON (ag.album_id = albums.id) LEFT OUTER JOIN genres ON (genres.id = ag.genre_id) LEFT OUTER JOIN tracks ON (tracks.album_id = albums.id) LEFT OUTER JOIN bands AS band ON (band.id = albums.band_id)' ds._fetch = [{:id=>1, :band_id=>2, :genres_id=>nil, :tracks_id=>3, :album_id=>1, :band_id_0=>nil, :vocalist_id=>nil}, {:id=>1, :band_id=>2, :genres_id=>nil, :tracks_id=>4, :album_id=>1, :band_id_0=>nil, :vocalist_id=>nil}, {:id=>1, :band_id=>2, :genres_id=>nil, :tracks_id=>5, :album_id=>1, :band_id_0=>nil, :vocalist_id=>nil}, {:id=>1, :band_id=>2, :genres_id=>nil, :tracks_id=>6, :album_id=>1, :band_id_0=>nil, :vocalist_id=>nil}] a = ds.all a.should == [GraphAlbum.load(:id => 1, :band_id => 2)] a = a.first a.tracks.should == [GraphTrack.load(:id => 3, :album_id => 1), GraphTrack.load(:id => 4, :album_id => 1), GraphTrack.load(:id => 5, :album_id => 1), GraphTrack.load(:id => 6, :album_id => 1)] a.band.should == nil a.genres.should == [] end it "should handle missing associated records when cascading eager loading for associations of associated models" do ds = GraphTrack.eager_graph(:album=>{:band=>:members}) ds.sql.should == 'SELECT tracks.id, tracks.album_id, album.id AS album_id_0, album.band_id, band.id AS band_id_0, band.vocalist_id, members.id AS members_id FROM tracks LEFT OUTER JOIN albums AS album ON (album.id = tracks.album_id) LEFT OUTER JOIN bands AS band ON (band.id = album.band_id) LEFT OUTER JOIN bm ON (bm.band_id = band.id) LEFT OUTER JOIN members ON (members.id = bm.member_id)' ds._fetch = [{:id=>2, :album_id=>2, :album_id_0=>nil, :band_id=>nil, :members_id=>nil, :band_id_0=>nil, :vocalist_id=>nil}, {:id=>3, :album_id=>3, :album_id_0=>3, :band_id=>3, :members_id=>nil, :band_id_0=>nil, :vocalist_id=>nil}, {:id=>4, :album_id=>4, :album_id_0=>4, :band_id=>2, :members_id=>nil, :band_id_0=>2, :vocalist_id=>6}, {:id=>5, :album_id=>1, :album_id_0=>1, :band_id=>4, :members_id=>5, :band_id_0=>4, :vocalist_id=>8}, {:id=>5, :album_id=>1, :album_id_0=>1, :band_id=>4, :members_id=>6, :band_id_0=>4, :vocalist_id=>8}] a = ds.all a.should == [GraphTrack.load(:id => 2, :album_id => 2), GraphTrack.load(:id => 3, :album_id => 3), GraphTrack.load(:id => 4, :album_id => 4), GraphTrack.load(:id => 5, :album_id => 1)] a.map{|x| x.album}.should == [nil, GraphAlbum.load(:id => 3, :band_id => 3), 
GraphAlbum.load(:id => 4, :band_id => 2), GraphAlbum.load(:id => 1, :band_id => 4)] a.map{|x| x.album.band if x.album}.should == [nil, nil, GraphBand.load(:id => 2, :vocalist_id=>6), GraphBand.load(:id => 4, :vocalist_id=>8)] a.map{|x| x.album.band.members if x.album && x.album.band}.should == [nil, nil, [], [GraphBandMember.load(:id => 5), GraphBandMember.load(:id => 6)]] end it "should respect the association's :primary_key option" do GraphAlbum.many_to_one :inner_band, :class=>'GraphBand', :key=>:band_id, :primary_key=>:vocalist_id ds = GraphAlbum.eager_graph(:inner_band) ds.sql.should == 'SELECT albums.id, albums.band_id, inner_band.id AS inner_band_id, inner_band.vocalist_id FROM albums LEFT OUTER JOIN bands AS inner_band ON (inner_band.vocalist_id = albums.band_id)' ds._fetch = {:id=>3, :band_id=>2, :inner_band_id=>5, :vocalist_id=>2} as = ds.all as.should == [GraphAlbum.load(:id=>3, :band_id=>2)] as.first.inner_band.should == GraphBand.load(:id=>5, :vocalist_id=>2) GraphAlbum.one_to_many :right_tracks, :class=>'GraphTrack', :key=>:album_id, :primary_key=>:band_id, :reciprocal=>nil ds = GraphAlbum.eager_graph(:right_tracks) ds.sql.should == 'SELECT albums.id, albums.band_id, right_tracks.id AS right_tracks_id, right_tracks.album_id FROM albums LEFT OUTER JOIN tracks AS right_tracks ON (right_tracks.album_id = albums.band_id)' ds._fetch = [{:id=>3, :band_id=>2, :right_tracks_id=>5, :album_id=>2}, {:id=>3, :band_id=>2, :right_tracks_id=>6, :album_id=>2}] as = ds.all as.should == [GraphAlbum.load(:id=>3, :band_id=>2)] as.first.right_tracks.should == [GraphTrack.load(:id=>5, :album_id=>2), GraphTrack.load(:id=>6, :album_id=>2)] end it "should respect many_to_one association's composite keys" do GraphAlbum.many_to_one :inner_band, :class=>'GraphBand', :key=>[:band_id, :id], :primary_key=>[:vocalist_id, :id] ds = GraphAlbum.eager_graph(:inner_band) ds.sql.should == 'SELECT albums.id, albums.band_id, inner_band.id AS inner_band_id, inner_band.vocalist_id FROM albums LEFT OUTER JOIN bands AS inner_band ON ((inner_band.vocalist_id = albums.band_id) AND (inner_band.id = albums.id))' ds._fetch = {:id=>3, :band_id=>2, :inner_band_id=>3, :vocalist_id=>2} as = ds.all as.should == [GraphAlbum.load(:id=>3, :band_id=>2)] as.first.inner_band.should == GraphBand.load(:id=>3, :vocalist_id=>2) end it "should respect one_to_many association's composite keys" do GraphAlbum.one_to_many :right_tracks, :class=>'GraphTrack', :key=>[:album_id, :id], :primary_key=>[:band_id, :id] ds = GraphAlbum.eager_graph(:right_tracks) ds.sql.should == 'SELECT albums.id, albums.band_id, right_tracks.id AS right_tracks_id, right_tracks.album_id FROM albums LEFT OUTER JOIN tracks AS right_tracks ON ((right_tracks.album_id = albums.band_id) AND (right_tracks.id = albums.id))' ds._fetch = {:id=>3, :band_id=>2, :right_tracks_id=>3, :album_id=>2} as = ds.all as.should == [GraphAlbum.load(:id=>3, :band_id=>2)] as.first.right_tracks.should == [GraphTrack.load(:id=>3, :album_id=>2)] end it "should respect many_to_many association's composite keys" do GraphAlbum.many_to_many :sbands, :class=>'GraphBand', :left_key=>[:l1, :l2], :left_primary_key=>[:band_id, :id], :right_key=>[:r1, :r2], :right_primary_key=>[:vocalist_id, :id], :join_table=>:b ds = GraphAlbum.eager_graph(:sbands) ds.sql.should == 'SELECT albums.id, albums.band_id, sbands.id AS sbands_id, sbands.vocalist_id FROM albums LEFT OUTER JOIN b ON ((b.l1 = albums.band_id) AND (b.l2 = albums.id)) LEFT OUTER JOIN bands AS sbands ON ((sbands.vocalist_id = b.r1) AND (sbands.id = 
b.r2))' ds._fetch = [{:id=>3, :band_id=>2, :sbands_id=>5, :vocalist_id=>6}, {:id=>3, :band_id=>2, :sbands_id=>6, :vocalist_id=>22}] as = ds.all as.should == [GraphAlbum.load(:id=>3, :band_id=>2)] as.first.sbands.should == [GraphBand.load(:id=>5, :vocalist_id=>6), GraphBand.load(:id=>6, :vocalist_id=>22)] end it "should respect many_to_many association's :left_primary_key and :right_primary_key options" do GraphAlbum.many_to_many :inner_genres, :class=>'GraphGenre', :left_key=>:album_id, :left_primary_key=>:band_id, :right_key=>:genre_id, :right_primary_key=>:xxx, :join_table=>:ag ds = GraphAlbum.eager_graph(:inner_genres) ds.sql.should == 'SELECT albums.id, albums.band_id, inner_genres.id AS inner_genres_id FROM albums LEFT OUTER JOIN ag ON (ag.album_id = albums.band_id) LEFT OUTER JOIN genres AS inner_genres ON (inner_genres.xxx = ag.genre_id)' ds._fetch = [{:id=>3, :band_id=>2, :inner_genres_id=>5, :xxx=>12}, {:id=>3, :band_id=>2, :inner_genres_id=>6, :xxx=>22}] as = ds.all as.should == [GraphAlbum.load(:id=>3, :band_id=>2)] as.first.inner_genres.should == [GraphGenre.load(:id=>5), GraphGenre.load(:id=>6)] end it "should respect composite primary keys for classes when eager loading" do c1 = Class.new(GraphAlbum) c2 = Class.new(GraphBand) c1.set_primary_key [:band_id, :id] c2.set_primary_key [:vocalist_id, :id] c1.many_to_many :sbands, :class=>c2, :left_key=>[:l1, :l2], :right_key=>[:r1, :r2], :join_table=>:b c2.one_to_many :salbums, :class=>c1, :key=>[:band_id, :id] ds = c1.eager_graph(:sbands=>:salbums) ds.sql.should == 'SELECT albums.id, albums.band_id, sbands.id AS sbands_id, sbands.vocalist_id, salbums.id AS salbums_id, salbums.band_id AS salbums_band_id FROM albums LEFT OUTER JOIN b ON ((b.l1 = albums.band_id) AND (b.l2 = albums.id)) LEFT OUTER JOIN bands AS sbands ON ((sbands.vocalist_id = b.r1) AND (sbands.id = b.r2)) LEFT OUTER JOIN albums AS salbums ON ((salbums.band_id = sbands.vocalist_id) AND (salbums.id = sbands.id))' ds._fetch = [{:id=>3, :band_id=>2, :sbands_id=>5, :vocalist_id=>6, :salbums_id=>7, :salbums_band_id=>8}, {:id=>3, :band_id=>2, :sbands_id=>5, :vocalist_id=>6, :salbums_id=>9, :salbums_band_id=>10}, {:id=>3, :band_id=>2, :sbands_id=>6, :vocalist_id=>22, :salbums_id=>nil, :salbums_band_id=>nil}, {:id=>7, :band_id=>8, :sbands_id=>nil, :vocalist_id=>nil, :salbums_id=>nil, :salbums_band_id=>nil}] as = ds.all as.should == [c1.load(:id=>3, :band_id=>2), c1.load(:id=>7, :band_id=>8)] as.map{|x| x.sbands}.should == [[c2.load(:id=>5, :vocalist_id=>6), c2.load(:id=>6, :vocalist_id=>22)], []] as.map{|x| x.sbands.map{|y| y.salbums}}.should == [[[c1.load(:id=>7, :band_id=>8), c1.load(:id=>9, :band_id=>10)], []], []] end it "should respect the association's :graph_select option" do GraphAlbum.many_to_one :inner_band, :class=>'GraphBand', :key=>:band_id, :graph_select=>:vocalist_id GraphAlbum.eager_graph(:inner_band).sql.should == 'SELECT albums.id, albums.band_id, inner_band.vocalist_id FROM albums LEFT OUTER JOIN bands AS inner_band ON (inner_band.id = albums.band_id)' GraphAlbum.one_to_many :right_tracks, :class=>'GraphTrack', :key=>:album_id, :graph_select=>[:album_id] GraphAlbum.eager_graph(:right_tracks).sql.should == 'SELECT albums.id, albums.band_id, right_tracks.album_id FROM albums LEFT OUTER JOIN tracks AS right_tracks ON (right_tracks.album_id = albums.id)' GraphAlbum.many_to_many :inner_genres, :class=>'GraphGenre', :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :graph_select=>[] GraphAlbum.eager_graph(:inner_genres).sql.should == 'SELECT 
albums.id, albums.band_id FROM albums LEFT OUTER JOIN ag ON (ag.album_id = albums.id) LEFT OUTER JOIN genres AS inner_genres ON (inner_genres.id = ag.genre_id)' end it "should respect the association's :graph_alias_base option" do GraphAlbum.many_to_one :inner_band, :class=>'GraphBand', :key=>:band_id, :graph_alias_base=>:foo ds = GraphAlbum.eager_graph(:inner_band) ds.sql.should == 'SELECT albums.id, albums.band_id, foo.id AS foo_id, foo.vocalist_id FROM albums LEFT OUTER JOIN bands AS foo ON (foo.id = albums.band_id)' GraphAlbum.one_to_many :right_tracks, :class=>'GraphTrack', :key=>:album_id, :graph_alias_base=>:foo ds.eager_graph(:right_tracks).sql.should == 'SELECT albums.id, albums.band_id, foo.id AS foo_id, foo.vocalist_id, foo_0.id AS foo_0_id, foo_0.album_id FROM albums LEFT OUTER JOIN bands AS foo ON (foo.id = albums.band_id) LEFT OUTER JOIN tracks AS foo_0 ON (foo_0.album_id = albums.id)' end it "should respect the association's :graph_join_type option" do GraphAlbum.many_to_one :inner_band, :class=>'GraphBand', :key=>:band_id, :graph_join_type=>:inner GraphAlbum.eager_graph(:inner_band).sql.should == 'SELECT albums.id, albums.band_id, inner_band.id AS inner_band_id, inner_band.vocalist_id FROM albums INNER JOIN bands AS inner_band ON (inner_band.id = albums.band_id)' GraphAlbum.one_to_many :right_tracks, :class=>'GraphTrack', :key=>:album_id, :graph_join_type=>:right_outer GraphAlbum.eager_graph(:right_tracks).sql.should == 'SELECT albums.id, albums.band_id, right_tracks.id AS right_tracks_id, right_tracks.album_id FROM albums RIGHT OUTER JOIN tracks AS right_tracks ON (right_tracks.album_id = albums.id)' GraphAlbum.many_to_many :inner_genres, :class=>'GraphGenre', :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :graph_join_type=>:inner GraphAlbum.eager_graph(:inner_genres).sql.should == 'SELECT albums.id, albums.band_id, inner_genres.id AS inner_genres_id FROM albums INNER JOIN ag ON (ag.album_id = albums.id) INNER JOIN genres AS inner_genres ON (inner_genres.id = ag.genre_id)' end it "should respect the association's :graph_join_table_join_type option" do GraphAlbum.many_to_many :inner_genres, :class=>'GraphGenre', :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :graph_join_table_join_type=>:inner GraphAlbum.eager_graph(:inner_genres).sql.should == 'SELECT albums.id, albums.band_id, inner_genres.id AS inner_genres_id FROM albums INNER JOIN ag ON (ag.album_id = albums.id) LEFT OUTER JOIN genres AS inner_genres ON (inner_genres.id = ag.genre_id)' GraphAlbum.many_to_many :inner_genres, :class=>'GraphGenre', :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :graph_join_table_join_type=>:inner, :graph_join_type=>:right_outer GraphAlbum.eager_graph(:inner_genres).sql.should == 'SELECT albums.id, albums.band_id, inner_genres.id AS inner_genres_id FROM albums INNER JOIN ag ON (ag.album_id = albums.id) RIGHT OUTER JOIN genres AS inner_genres ON (inner_genres.id = ag.genre_id)' end it "should respect the association's :conditions option" do GraphAlbum.many_to_one :active_band, :class=>'GraphBand', :key=>:band_id, :conditions=>{:active=>true} GraphAlbum.eager_graph(:active_band).sql.should == "SELECT albums.id, albums.band_id, active_band.id AS active_band_id, active_band.vocalist_id FROM albums LEFT OUTER JOIN bands AS active_band ON ((active_band.id = albums.band_id) AND (active_band.active IS TRUE))" GraphAlbum.one_to_many :right_tracks, :class=>'GraphTrack', :key=>:album_id, :conditions=>{:id=>(0..100)}, :reciprocal=>nil 
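    # Added note (not in the original spec): a Range in :conditions is expanded
    # into inclusive bounds inside the JOIN's ON clause, i.e. (id >= 0) AND
    # (id <= 100) in the assertion below.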
GraphAlbum.eager_graph(:right_tracks).sql.should == 'SELECT albums.id, albums.band_id, right_tracks.id AS right_tracks_id, right_tracks.album_id FROM albums LEFT OUTER JOIN tracks AS right_tracks ON ((right_tracks.album_id = albums.id) AND (right_tracks.id >= 0) AND (right_tracks.id <= 100))' GraphAlbum.many_to_many :active_genres, :class=>'GraphGenre', :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :conditions=>{true=>:active} GraphAlbum.eager_graph(:active_genres).sql.should == "SELECT albums.id, albums.band_id, active_genres.id AS active_genres_id FROM albums LEFT OUTER JOIN ag ON (ag.album_id = albums.id) LEFT OUTER JOIN genres AS active_genres ON ((active_genres.id = ag.genre_id) AND ('t' = ag.active))" end it "should respect the association's :graph_conditions option" do GraphAlbum.many_to_one :active_band, :class=>'GraphBand', :key=>:band_id, :graph_conditions=>{:active=>true} GraphAlbum.eager_graph(:active_band).sql.should == "SELECT albums.id, albums.band_id, active_band.id AS active_band_id, active_band.vocalist_id FROM albums LEFT OUTER JOIN bands AS active_band ON ((active_band.id = albums.band_id) AND (active_band.active IS TRUE))" GraphAlbum.one_to_many :right_tracks, :class=>'GraphTrack', :key=>:album_id, :graph_conditions=>{:id=>(0..100)} GraphAlbum.eager_graph(:right_tracks).sql.should == 'SELECT albums.id, albums.band_id, right_tracks.id AS right_tracks_id, right_tracks.album_id FROM albums LEFT OUTER JOIN tracks AS right_tracks ON ((right_tracks.album_id = albums.id) AND (right_tracks.id >= 0) AND (right_tracks.id <= 100))' GraphAlbum.many_to_many :active_genres, :class=>'GraphGenre', :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :graph_conditions=>{true=>:active} GraphAlbum.eager_graph(:active_genres).sql.should == "SELECT albums.id, albums.band_id, active_genres.id AS active_genres_id FROM albums LEFT OUTER JOIN ag ON (ag.album_id = albums.id) LEFT OUTER JOIN genres AS active_genres ON ((active_genres.id = ag.genre_id) AND ('t' = ag.active))" end it "should respect the association's :graph_join_table_conditions option" do GraphAlbum.many_to_many :active_genres, :class=>'GraphGenre', :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :graph_join_table_conditions=>{:active=>true} GraphAlbum.eager_graph(:active_genres).sql.should == "SELECT albums.id, albums.band_id, active_genres.id AS active_genres_id FROM albums LEFT OUTER JOIN ag ON ((ag.album_id = albums.id) AND (ag.active IS TRUE)) LEFT OUTER JOIN genres AS active_genres ON (active_genres.id = ag.genre_id)" GraphAlbum.many_to_many :active_genres, :class=>'GraphGenre', :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :graph_conditions=>{true=>:active}, :graph_join_table_conditions=>{true=>:active} GraphAlbum.eager_graph(:active_genres).sql.should == "SELECT albums.id, albums.band_id, active_genres.id AS active_genres_id FROM albums LEFT OUTER JOIN ag ON ((ag.album_id = albums.id) AND ('t' = albums.active)) LEFT OUTER JOIN genres AS active_genres ON ((active_genres.id = ag.genre_id) AND ('t' = ag.active))" end it "should respect the association's :graph_block option" do GraphAlbum.many_to_one :active_band, :class=>'GraphBand', :key=>:band_id, :graph_block=>proc{|ja,lja,js| {Sequel.qualify(ja, :active)=>true}} GraphAlbum.eager_graph(:active_band).sql.should == "SELECT albums.id, albums.band_id, active_band.id AS active_band_id, active_band.vocalist_id FROM albums LEFT OUTER JOIN bands AS active_band ON ((active_band.id = albums.band_id) AND (active_band.active 
IS TRUE))" GraphAlbum.one_to_many :right_tracks, :class=>'GraphTrack', :key=>:album_id, :graph_block=>proc{|ja,lja,js| {Sequel.qualify(ja, :id)=>(0..100)}} GraphAlbum.eager_graph(:right_tracks).sql.should == 'SELECT albums.id, albums.band_id, right_tracks.id AS right_tracks_id, right_tracks.album_id FROM albums LEFT OUTER JOIN tracks AS right_tracks ON ((right_tracks.album_id = albums.id) AND (right_tracks.id >= 0) AND (right_tracks.id <= 100))' GraphAlbum.many_to_many :active_genres, :class=>'GraphGenre', :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :graph_block=>proc{|ja,lja,js| {true=>Sequel.qualify(lja, :active)}} GraphAlbum.eager_graph(:active_genres).sql.should == "SELECT albums.id, albums.band_id, active_genres.id AS active_genres_id FROM albums LEFT OUTER JOIN ag ON (ag.album_id = albums.id) LEFT OUTER JOIN genres AS active_genres ON ((active_genres.id = ag.genre_id) AND ('t' = ag.active))" end it "should respect the association's :graph_join_table_block option" do GraphAlbum.many_to_many :active_genres, :class=>'GraphGenre', :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :graph_join_table_block=>proc{|ja,lja,js| {Sequel.qualify(ja, :active)=>true}} GraphAlbum.eager_graph(:active_genres).sql.should == "SELECT albums.id, albums.band_id, active_genres.id AS active_genres_id FROM albums LEFT OUTER JOIN ag ON ((ag.album_id = albums.id) AND (ag.active IS TRUE)) LEFT OUTER JOIN genres AS active_genres ON (active_genres.id = ag.genre_id)" GraphAlbum.many_to_many :active_genres, :class=>'GraphGenre', :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :graph_block=>proc{|ja,lja,js| {true=>Sequel.qualify(lja, :active)}}, :graph_join_table_block=>proc{|ja,lja,js| {true=>Sequel.qualify(lja, :active)}} GraphAlbum.eager_graph(:active_genres).sql.should == "SELECT albums.id, albums.band_id, active_genres.id AS active_genres_id FROM albums LEFT OUTER JOIN ag ON ((ag.album_id = albums.id) AND ('t' = albums.active)) LEFT OUTER JOIN genres AS active_genres ON ((active_genres.id = ag.genre_id) AND ('t' = ag.active))" end it "should respect the association's :eager_grapher option" do GraphAlbum.many_to_one :active_band, :class=>'GraphBand', :key=>:band_id, :eager_grapher=>proc{|eo| eo[:self].graph(GraphBand, {:active=>true}, :table_alias=>eo[:table_alias], :join_type=>:inner)} GraphAlbum.eager_graph(:active_band).sql.should == "SELECT albums.id, albums.band_id, active_band.id AS active_band_id, active_band.vocalist_id FROM albums INNER JOIN bands AS active_band ON (active_band.active IS TRUE)" GraphAlbum.one_to_many :right_tracks, :class=>'GraphTrack', :key=>:album_id, :eager_grapher=>proc{|eo| eo[:self].graph(GraphTrack, nil, :join_type=>:natural, :table_alias=>eo[:table_alias])} GraphAlbum.eager_graph(:right_tracks).sql.should == 'SELECT albums.id, albums.band_id, right_tracks.id AS right_tracks_id, right_tracks.album_id FROM albums NATURAL JOIN tracks AS right_tracks' GraphAlbum.many_to_many :active_genres, :class=>'GraphGenre', :eager_grapher=>proc{|eo| eo[:self].graph(:ag, {:album_id=>:id}, :table_alias=>:a123, :implicit_qualifier=>eo[:implicit_qualifier]).graph(GraphGenre, [:album_id], :table_alias=>eo[:table_alias])} GraphAlbum.eager_graph(:active_genres).sql.should == "SELECT albums.id, albums.band_id, active_genres.id AS active_genres_id FROM albums LEFT OUTER JOIN ag AS a123 ON (a123.album_id = albums.id) LEFT OUTER JOIN genres AS active_genres USING (album_id)" end it "should respect the association's :graph_only_conditions option" do 
GraphAlbum.many_to_one :active_band, :class=>'GraphBand', :key=>:band_id, :graph_only_conditions=>{:active=>true} GraphAlbum.eager_graph(:active_band).sql.should == "SELECT albums.id, albums.band_id, active_band.id AS active_band_id, active_band.vocalist_id FROM albums LEFT OUTER JOIN bands AS active_band ON (active_band.active IS TRUE)" GraphAlbum.one_to_many :right_tracks, :class=>'GraphTrack', :key=>:album_id, :graph_only_conditions=>nil, :graph_join_type=>:natural GraphAlbum.eager_graph(:right_tracks).sql.should == 'SELECT albums.id, albums.band_id, right_tracks.id AS right_tracks_id, right_tracks.album_id FROM albums NATURAL JOIN tracks AS right_tracks' GraphAlbum.many_to_many :active_genres, :class=>'GraphGenre', :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :graph_only_conditions=>[:album_id] GraphAlbum.eager_graph(:active_genres).sql.should == "SELECT albums.id, albums.band_id, active_genres.id AS active_genres_id FROM albums LEFT OUTER JOIN ag ON (ag.album_id = albums.id) LEFT OUTER JOIN genres AS active_genres USING (album_id)" end it "should respect the association's :graph_join_table_only_conditions option" do GraphAlbum.many_to_many :active_genres, :class=>'GraphGenre', :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :graph_join_table_only_conditions=>{:active=>true} GraphAlbum.eager_graph(:active_genres).sql.should == "SELECT albums.id, albums.band_id, active_genres.id AS active_genres_id FROM albums LEFT OUTER JOIN ag ON (ag.active IS TRUE) LEFT OUTER JOIN genres AS active_genres ON (active_genres.id = ag.genre_id)" GraphAlbum.many_to_many :active_genres, :class=>'GraphGenre', :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :graph_only_conditions=>(Sequel.expr(:price) + 2 > 100), :graph_join_table_only_conditions=>"active" GraphAlbum.eager_graph(:active_genres).sql.should == "SELECT albums.id, albums.band_id, active_genres.id AS active_genres_id FROM albums LEFT OUTER JOIN ag ON (active) LEFT OUTER JOIN genres AS active_genres ON ((price + 2) > 100)" end it "should create unique table aliases for all associations" do GraphAlbum.eager_graph(:previous_album=>{:previous_album=>:previous_album}).sql.should == "SELECT albums.id, albums.band_id, previous_album.id AS previous_album_id, previous_album.band_id AS previous_album_band_id, previous_album_0.id AS previous_album_0_id, previous_album_0.band_id AS previous_album_0_band_id, previous_album_1.id AS previous_album_1_id, previous_album_1.band_id AS previous_album_1_band_id FROM albums LEFT OUTER JOIN albums AS previous_album ON (previous_album.id = albums.previous_album_id) LEFT OUTER JOIN albums AS previous_album_0 ON (previous_album_0.id = previous_album.previous_album_id) LEFT OUTER JOIN albums AS previous_album_1 ON (previous_album_1.id = previous_album_0.previous_album_id)" end it "should respect the association's :order" do GraphAlbum.one_to_many :right_tracks, :class=>'GraphTrack', :key=>:album_id, :order=>[:id, :album_id] GraphAlbum.eager_graph(:right_tracks).sql.should == 'SELECT albums.id, albums.band_id, right_tracks.id AS right_tracks_id, right_tracks.album_id FROM albums LEFT OUTER JOIN tracks AS right_tracks ON (right_tracks.album_id = albums.id) ORDER BY right_tracks.id, right_tracks.album_id' end it "should only qualify unqualified symbols, identifiers, or ordered versions in association's :order" do GraphAlbum.one_to_many :right_tracks, :class=>'GraphTrack', :key=>:album_id, :order=>[Sequel.identifier(:blah__id), Sequel.identifier(:blah__id).desc, 
Sequel.desc(:blah__id), :blah__id, :album_id, Sequel.desc(:album_id), 1, Sequel.lit('RANDOM()'), Sequel.qualify(:b, :a)] GraphAlbum.eager_graph(:right_tracks).sql.should == 'SELECT albums.id, albums.band_id, right_tracks.id AS right_tracks_id, right_tracks.album_id FROM albums LEFT OUTER JOIN tracks AS right_tracks ON (right_tracks.album_id = albums.id) ORDER BY right_tracks.blah__id, right_tracks.blah__id DESC, blah.id DESC, blah.id, right_tracks.album_id, right_tracks.album_id DESC, 1, RANDOM(), b.a' end it "should not respect the association's :order if :order_eager_graph is false" do GraphAlbum.one_to_many :right_tracks, :class=>'GraphTrack', :key=>:album_id, :order=>[:id, :album_id], :order_eager_graph=>false GraphAlbum.eager_graph(:right_tracks).sql.should == 'SELECT albums.id, albums.band_id, right_tracks.id AS right_tracks_id, right_tracks.album_id FROM albums LEFT OUTER JOIN tracks AS right_tracks ON (right_tracks.album_id = albums.id)' end it "should add the association's :order to the existing order" do GraphAlbum.one_to_many :right_tracks, :class=>'GraphTrack', :key=>:album_id, :order=>[:id, :album_id] GraphAlbum.order(:band_id).eager_graph(:right_tracks).sql.should == 'SELECT albums.id, albums.band_id, right_tracks.id AS right_tracks_id, right_tracks.album_id FROM albums LEFT OUTER JOIN tracks AS right_tracks ON (right_tracks.album_id = albums.id) ORDER BY band_id, right_tracks.id, right_tracks.album_id' end it "should add the association's :order for cascading associations" do GraphBand.one_to_many :a_albums, :class=>'GraphAlbum', :key=>:band_id, :order=>:name, :reciprocal=>nil GraphAlbum.one_to_many :b_tracks, :class=>'GraphTrack', :key=>:album_id, :order=>[:id, :album_id] GraphBand.eager_graph(:a_albums=>:b_tracks).sql.should == 'SELECT bands.id, bands.vocalist_id, a_albums.id AS a_albums_id, a_albums.band_id, b_tracks.id AS b_tracks_id, b_tracks.album_id FROM bands LEFT OUTER JOIN albums AS a_albums ON (a_albums.band_id = bands.id) LEFT OUTER JOIN tracks AS b_tracks ON (b_tracks.album_id = a_albums.id) ORDER BY a_albums.name, b_tracks.id, b_tracks.album_id' GraphAlbum.one_to_many :albums, :class=>'GraphAlbum', :key=>:band_id, :order=>[:band_id, :id] GraphAlbum.eager_graph(:albums=>{:albums=>:albums}).sql.should == 'SELECT albums.id, albums.band_id, albums_0.id AS albums_0_id, albums_0.band_id AS albums_0_band_id, albums_1.id AS albums_1_id, albums_1.band_id AS albums_1_band_id, albums_2.id AS albums_2_id, albums_2.band_id AS albums_2_band_id FROM albums LEFT OUTER JOIN albums AS albums_0 ON (albums_0.band_id = albums.id) LEFT OUTER JOIN albums AS albums_1 ON (albums_1.band_id = albums_0.id) LEFT OUTER JOIN albums AS albums_2 ON (albums_2.band_id = albums_1.id) ORDER BY albums_0.band_id, albums_0.id, albums_1.band_id, albums_1.id, albums_2.band_id, albums_2.id' end it "should add the associations :order for multiple associations" do GraphAlbum.many_to_many :a_genres, :class=>'GraphGenre', :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :order=>:id GraphAlbum.one_to_many :b_tracks, :class=>'GraphTrack', :key=>:album_id, :order=>[:id, :album_id] GraphAlbum.eager_graph(:a_genres, :b_tracks).sql.should == 'SELECT albums.id, albums.band_id, a_genres.id AS a_genres_id, b_tracks.id AS b_tracks_id, b_tracks.album_id FROM albums LEFT OUTER JOIN ag ON (ag.album_id = albums.id) LEFT OUTER JOIN genres AS a_genres ON (a_genres.id = ag.genre_id) LEFT OUTER JOIN tracks AS b_tracks ON (b_tracks.album_id = albums.id) ORDER BY a_genres.id, b_tracks.id, b_tracks.album_id' end 
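  # Illustrative sketch (added; Album and Track are hypothetical stand-ins for
  # the Graph* models used in these specs):
  #
  #   Album.one_to_many :tracks, :class=>'Track', :key=>:album_id, :order=>:number
  #   Album.order(:name).eager_graph(:tracks).sql
  #   # => "SELECT ... FROM albums LEFT OUTER JOIN tracks ON (tracks.album_id = albums.id)
  #   #     ORDER BY name, tracks.number"
  #
  # The association's :order is qualified with the join alias and appended to any
  # existing ORDER BY, as the specs above verify.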
it "should use the correct qualifier when graphing multiple tables with extra conditions" do GraphAlbum.many_to_many :a_genres, :class=>'GraphGenre', :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag GraphAlbum.one_to_many :b_tracks, :class=>'GraphTrack', :key=>:album_id, :graph_conditions=>{:a=>:b} GraphAlbum.eager_graph(:a_genres, :b_tracks).sql.should == 'SELECT albums.id, albums.band_id, a_genres.id AS a_genres_id, b_tracks.id AS b_tracks_id, b_tracks.album_id FROM albums LEFT OUTER JOIN ag ON (ag.album_id = albums.id) LEFT OUTER JOIN genres AS a_genres ON (a_genres.id = ag.genre_id) LEFT OUTER JOIN tracks AS b_tracks ON ((b_tracks.album_id = albums.id) AND (b_tracks.a = albums.b))' end it "should eagerly load associated records for classes that do not have a primary key" do GraphAlbum.no_primary_key GraphGenre.no_primary_key GraphAlbum.many_to_many :inner_genres, :class=>'GraphGenre', :left_key=>:album_id, :left_primary_key=>:band_id, :right_key=>:genre_id, :right_primary_key=>:xxx, :join_table=>:ag ds = GraphAlbum.eager_graph(:inner_genres) ds.sql.should == 'SELECT albums.id, albums.band_id, inner_genres.id AS inner_genres_id FROM albums LEFT OUTER JOIN ag ON (ag.album_id = albums.band_id) LEFT OUTER JOIN genres AS inner_genres ON (inner_genres.xxx = ag.genre_id)' ds._fetch = [{:id=>3, :band_id=>2, :inner_genres_id=>5, :xxx=>12}, {:id=>3, :band_id=>2, :inner_genres_id=>6, :xxx=>22}] as = ds.all as.should == [GraphAlbum.load(:id=>3, :band_id=>2)] as.first.inner_genres.should == [GraphGenre.load(:id=>5), GraphGenre.load(:id=>6)] GraphAlbum.set_primary_key :id GraphGenre.set_primary_key :id end it "should handle eager loading with schemas and aliases of different types" do GraphAlbum.eager_graph(:band).join(:s__genres, [:b_id]).eager_graph(:genres).sql.should == 'SELECT albums.id, albums.band_id, band.id AS band_id_0, band.vocalist_id, genres_0.id AS genres_0_id FROM albums LEFT OUTER JOIN bands AS band ON (band.id = albums.band_id) INNER JOIN s.genres USING (b_id) LEFT OUTER JOIN ag ON (ag.album_id = albums.id) LEFT OUTER JOIN genres AS genres_0 ON (genres_0.id = ag.genre_id)' GraphAlbum.eager_graph(:band).join(Sequel.qualify(:s, :genres), [:b_id]).eager_graph(:genres).sql.should == 'SELECT albums.id, albums.band_id, band.id AS band_id_0, band.vocalist_id, genres_0.id AS genres_0_id FROM albums LEFT OUTER JOIN bands AS band ON (band.id = albums.band_id) INNER JOIN s.genres USING (b_id) LEFT OUTER JOIN ag ON (ag.album_id = albums.id) LEFT OUTER JOIN genres AS genres_0 ON (genres_0.id = ag.genre_id)' GraphAlbum.eager_graph(:band).join(Sequel.expr(:s__b).as('genres'), [:b_id]).eager_graph(:genres).sql.should == 'SELECT albums.id, albums.band_id, band.id AS band_id_0, band.vocalist_id, genres_0.id AS genres_0_id FROM albums LEFT OUTER JOIN bands AS band ON (band.id = albums.band_id) INNER JOIN s.b AS genres USING (b_id) LEFT OUTER JOIN ag ON (ag.album_id = albums.id) LEFT OUTER JOIN genres AS genres_0 ON (genres_0.id = ag.genre_id)' GraphAlbum.eager_graph(:band).join(:s__b, [:b_id], :table_alias=>Sequel.identifier(:genres)).eager_graph(:genres).sql.should == 'SELECT albums.id, albums.band_id, band.id AS band_id_0, band.vocalist_id, genres_0.id AS genres_0_id FROM albums LEFT OUTER JOIN bands AS band ON (band.id = albums.band_id) INNER JOIN s.b AS genres USING (b_id) LEFT OUTER JOIN ag ON (ag.album_id = albums.id) LEFT OUTER JOIN genres AS genres_0 ON (genres_0.id = ag.genre_id)' GraphAlbum.eager_graph(:band).join(Sequel.identifier(:genres), 
[:b_id]).eager_graph(:genres).sql.should == 'SELECT albums.id, albums.band_id, band.id AS band_id_0, band.vocalist_id, genres_0.id AS genres_0_id FROM albums LEFT OUTER JOIN bands AS band ON (band.id = albums.band_id) INNER JOIN genres USING (b_id) LEFT OUTER JOIN ag ON (ag.album_id = albums.id) LEFT OUTER JOIN genres AS genres_0 ON (genres_0.id = ag.genre_id)' GraphAlbum.eager_graph(:band).join('genres', [:b_id]).eager_graph(:genres).sql.should == 'SELECT albums.id, albums.band_id, band.id AS band_id_0, band.vocalist_id, genres_0.id AS genres_0_id FROM albums LEFT OUTER JOIN bands AS band ON (band.id = albums.band_id) INNER JOIN genres USING (b_id) LEFT OUTER JOIN ag ON (ag.album_id = albums.id) LEFT OUTER JOIN genres AS genres_0 ON (genres_0.id = ag.genre_id)' end it "should raise errors if invalid aliases or table styles are used" do proc{GraphAlbum.from_self(:alias=>Sequel.qualify(:s, :bands)).eager_graph(:band)}.should raise_error(Sequel::Error) proc{GraphAlbum.from(Sequel.lit('?', :bands)).eager_graph(:band)}.should raise_error(Sequel::Error) end it "should eagerly load schema qualified tables correctly with joins" do c1 = Class.new(GraphAlbum) c2 = Class.new(GraphGenre) ds = c1.dataset = c1.dataset.from(:s__a) def ds.columns() [:id] end c2.dataset = c2.dataset.from(:s__g) c1.many_to_many :a_genres, :class=>c2, :left_primary_key=>:id, :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:s__ag ds = c1.join(:s__t, [:b_id]).eager_graph(:a_genres) ds.sql.should == 'SELECT a.id, a_genres.id AS a_genres_id FROM (SELECT * FROM s.a INNER JOIN s.t USING (b_id)) AS a LEFT OUTER JOIN s.ag AS ag ON (ag.album_id = a.id) LEFT OUTER JOIN s.g AS a_genres ON (a_genres.id = ag.genre_id)' end it "should respect :after_load callbacks on associations when eager graphing" do GraphAlbum.many_to_one :al_band, :class=>GraphBand, :key=>:band_id, :after_load=>proc{|o, a| a.id *=2} GraphAlbum.one_to_many :al_tracks, :class=>GraphTrack, :key=>:album_id, :after_load=>proc{|o, os| os.each{|a| a.id *=2}} GraphAlbum.many_to_many :al_genres, :class=>GraphGenre, :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :after_load=>proc{|o, os| os.each{|a| a.id *=2}} ds = GraphAlbum.eager_graph(:al_band, :al_tracks, :al_genres) ds.sql.should == "SELECT albums.id, albums.band_id, al_band.id AS al_band_id, al_band.vocalist_id, al_tracks.id AS al_tracks_id, al_tracks.album_id, al_genres.id AS al_genres_id FROM albums LEFT OUTER JOIN bands AS al_band ON (al_band.id = albums.band_id) LEFT OUTER JOIN tracks AS al_tracks ON (al_tracks.album_id = albums.id) LEFT OUTER JOIN ag ON (ag.album_id = albums.id) LEFT OUTER JOIN genres AS al_genres ON (al_genres.id = ag.genre_id)" ds._fetch = {:id=>1, :band_id=>2, :al_band_id=>3, :vocalist_id=>4, :al_tracks_id=>5, :album_id=>6, :al_genres_id=>7} a = ds.all.first a.should == GraphAlbum.load(:id => 1, :band_id => 2) a.al_band.should == GraphBand.load(:id=>6, :vocalist_id=>4) a.al_tracks.should == [GraphTrack.load(:id=>10, :album_id=>6)] a.al_genres.should == [GraphGenre.load(:id=>14)] end it "should respect limits on associations when eager graphing" do GraphAlbum.many_to_one :al_band, :class=>GraphBand, :key=>:band_id GraphAlbum.one_to_many :al_tracks, :class=>GraphTrack, :key=>:album_id, :limit=>2 GraphAlbum.many_to_many :al_genres, :class=>GraphGenre, :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :limit=>2 ds = GraphAlbum.eager_graph(:al_band, :al_tracks, :al_genres) ds.sql.should == "SELECT albums.id, albums.band_id, al_band.id AS al_band_id, 
al_band.vocalist_id, al_tracks.id AS al_tracks_id, al_tracks.album_id, al_genres.id AS al_genres_id FROM albums LEFT OUTER JOIN bands AS al_band ON (al_band.id = albums.band_id) LEFT OUTER JOIN tracks AS al_tracks ON (al_tracks.album_id = albums.id) LEFT OUTER JOIN ag ON (ag.album_id = albums.id) LEFT OUTER JOIN genres AS al_genres ON (al_genres.id = ag.genre_id)" ds._fetch = [{:id=>1, :band_id=>2, :al_band_id=>3, :vocalist_id=>4, :al_tracks_id=>5, :album_id=>6, :al_genres_id=>7}, {:id=>1, :band_id=>2, :al_band_id=>8, :vocalist_id=>9, :al_tracks_id=>10, :album_id=>11, :al_genres_id=>12}, {:id=>1, :band_id=>2, :al_band_id=>13, :vocalist_id=>14, :al_tracks_id=>15, :album_id=>16, :al_genres_id=>17}] a = ds.all.first a.should == GraphAlbum.load(:id => 1, :band_id => 2) a.al_band.should == GraphBand.load(:id=>3, :vocalist_id=>4) a.al_tracks.should == [GraphTrack.load(:id=>5, :album_id=>6), GraphTrack.load(:id=>10, :album_id=>11)] a.al_genres.should == [GraphGenre.load(:id=>7), GraphGenre.load(:id=>12)] end it "should eagerly load a many_to_one association with a custom callback" do ds = GraphAlbum.eager_graph(:band => proc {|ds1| ds1.select(:id).columns(:id)}) ds.sql.should == 'SELECT albums.id, albums.band_id, band.id AS band_id_0 FROM albums LEFT OUTER JOIN (SELECT id FROM bands) AS band ON (band.id = albums.band_id)' ds._fetch = {:id=>1, :band_id=>2, :band_id_0=>2} a = ds.all a.should == [GraphAlbum.load(:id => 1, :band_id => 2)] a.first.band.should == GraphBand.load(:id => 2) end it "should eagerly load a one_to_one association with a custom callback" do GraphAlbum.one_to_one :track, :class=>'GraphTrack', :key=>:album_id ds = GraphAlbum.eager_graph(:track => proc {|ds1| ds1.select(:album_id).columns(:album_id)}) ds.sql.should == 'SELECT albums.id, albums.band_id, track.album_id FROM albums LEFT OUTER JOIN (SELECT album_id FROM tracks) AS track ON (track.album_id = albums.id)' ds._fetch = {:id=>1, :band_id=>2, :album_id=>1} a = ds.all a.should == [GraphAlbum.load(:id => 1, :band_id => 2)] a.first.track.should == GraphTrack.load(:album_id=>1) end it "should eagerly load a one_to_many association with a custom callback" do ds = GraphAlbum.eager_graph(:tracks => proc {|ds1| ds1.select(:album_id).columns(:album_id)}) ds.sql.should == 'SELECT albums.id, albums.band_id, tracks.album_id FROM albums LEFT OUTER JOIN (SELECT album_id FROM tracks) AS tracks ON (tracks.album_id = albums.id)' ds._fetch = {:id=>1, :band_id=>2, :album_id=>1} a = ds.all a.should == [GraphAlbum.load(:id => 1, :band_id => 2)] a.first.tracks.should == [GraphTrack.load(:album_id=>1)] end it "should eagerly load a many_to_many association with a custom callback" do ds = GraphAlbum.eager_graph(:genres => proc {|ds1| ds1.select(:id).columns(:id)}) ds.sql.should == 'SELECT albums.id, albums.band_id, genres.id AS genres_id FROM albums LEFT OUTER JOIN ag ON (ag.album_id = albums.id) LEFT OUTER JOIN (SELECT id FROM genres) AS genres ON (genres.id = ag.genre_id)' ds._fetch = {:id=>1, :band_id=>2, :genres_id=>4} a = ds.all a.should == [GraphAlbum.load(:id => 1, :band_id => 2)] a.first.genres.should == [GraphGenre.load(:id => 4)] end it "should allow cascading of eager loading with a custom callback with hash value" do ds = GraphTrack.eager_graph(:album=>{proc{|ds1| ds1.select(:id, :band_id).columns(:id, :band_id)}=>{:band=>:members}}) ds.sql.should == 'SELECT tracks.id, tracks.album_id, album.id AS album_id_0, album.band_id, band.id AS band_id_0, band.vocalist_id, members.id AS members_id FROM tracks LEFT OUTER JOIN (SELECT id, band_id 
FROM albums) AS album ON (album.id = tracks.album_id) LEFT OUTER JOIN bands AS band ON (band.id = album.band_id) LEFT OUTER JOIN bm ON (bm.band_id = band.id) LEFT OUTER JOIN members ON (members.id = bm.member_id)'
    ds._fetch = {:id=>3, :album_id=>1, :album_id_0=>1, :band_id=>2, :members_id=>5, :band_id_0=>2, :vocalist_id=>6}
    a = ds.all
    a.should == [GraphTrack.load(:id => 3, :album_id => 1)]
    a = a.first
    a.album.should == GraphAlbum.load(:id => 1, :band_id => 2)
    a.album.band.should == GraphBand.load(:id => 2, :vocalist_id=>6)
    a.album.band.members.should == [GraphBandMember.load(:id => 5)]
  end

  it "should allow cascading of eager loading with a custom callback with array value" do
    ds = GraphTrack.eager_graph(:album=>{proc{|ds1| ds1.select(:id, :band_id).columns(:id, :band_id)}=>[:band, :tracks]})
    ds.sql.should == 'SELECT tracks.id, tracks.album_id, album.id AS album_id_0, album.band_id, band.id AS band_id_0, band.vocalist_id, tracks_0.id AS tracks_0_id, tracks_0.album_id AS tracks_0_album_id FROM tracks LEFT OUTER JOIN (SELECT id, band_id FROM albums) AS album ON (album.id = tracks.album_id) LEFT OUTER JOIN bands AS band ON (band.id = album.band_id) LEFT OUTER JOIN tracks AS tracks_0 ON (tracks_0.album_id = album.id)'
    ds._fetch = {:id=>3, :album_id=>1, :album_id_0=>1, :band_id=>2, :band_id_0=>2, :vocalist_id=>6, :tracks_0_id=>3, :tracks_0_album_id=>1}
    a = ds.all
    a.should == [GraphTrack.load(:id => 3, :album_id => 1)]
    a = a.first
    a.album.should == GraphAlbum.load(:id => 1, :band_id => 2)
    a.album.band.should == GraphBand.load(:id => 2, :vocalist_id=>6)
    a.album.tracks.should == [GraphTrack.load(:id => 3, :album_id => 1)]
  end
end

ruby-sequel-4.1.1/spec/model/hooks_spec.rb

require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")

describe "Model#before_create && Model#after_create" do
  before do
    @c = Class.new(Sequel::Model(:items)) do
      columns :x
      set_primary_key :x
      unrestrict_primary_key
      def after_create
        DB << "BLAH after"
      end
    end
    DB.reset
  end

  specify "should be called around new record creation" do
    @c.send(:define_method, :before_create){DB << "BLAH before"}
    @c.create(:x => 2)
    DB.sqls.should == ['BLAH before', 'INSERT INTO items (x) VALUES (2)', 'BLAH after', 'SELECT * FROM items WHERE (x = 2) LIMIT 1']
  end

  specify ".create should cancel the save and raise an error if before_create returns false and raise_on_save_failure is true" do
    @c.send(:define_method, :before_create){false}
    proc{@c.create(:x => 2)}.should raise_error(Sequel::BeforeHookFailed)
    DB.sqls.should == []
    proc{@c.load(:id => 2233).save}.should_not raise_error
  end

  specify ".create should cancel the save and return nil if before_create returns false and raise_on_save_failure is false" do
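    # Added note (not in the original spec): with raise_on_save_failure set to
    # false, a before_* hook returning false makes Sequel abort the action and
    # return nil instead of raising Sequel::BeforeHookFailed.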
@c.send(:define_method, :before_create){false} @c.raise_on_save_failure = false @c.create(:x => 2).should == nil DB.sqls.should == [] end end describe "Model#before_update && Model#after_update" do before do @c = Class.new(Sequel::Model(:items)) do columns :id, :x def after_update DB << "BLAH after" end end DB.reset end specify "should be called around record update" do @c.send(:define_method, :before_update){DB << "BLAH before"} m = @c.load(:id => 2233, :x=>123) m.save DB.sqls.should == ['BLAH before', 'UPDATE items SET x = 123 WHERE (id = 2233)', 'BLAH after'] end specify "#save should cancel the save and raise an error if before_update returns false and raise_on_save_failure is true" do @c.send(:define_method, :before_update){false} proc{@c.load(:id => 2233).save}.should raise_error(Sequel::BeforeHookFailed) DB.sqls.should == [] end specify "#save should cancel the save and raise an error if before_update returns false and raise_on_failure option is true" do @c.send(:define_method, :before_update){false} @c.raise_on_save_failure = false proc{@c.load(:id => 2233).save(:raise_on_failure => true)}.should raise_error(Sequel::BeforeHookFailed) DB.sqls.should == [] end specify "#save should cancel the save and return nil if before_update returns false and raise_on_save_failure is false" do @c.send(:define_method, :before_update){false} @c.raise_on_save_failure = false @c.load(:id => 2233).save.should == nil DB.sqls.should == [] end end describe "Model#before_save && Model#after_save" do before do @c = Class.new(Sequel::Model(:items)) do columns :x def after_save DB << "BLAH after" end end DB.reset end specify "should be called around record update" do @c.send(:define_method, :before_save){DB << "BLAH before"} m = @c.load(:id => 2233, :x=>123) m.save DB.sqls.should == ['BLAH before', 'UPDATE items SET x = 123 WHERE (id = 2233)', 'BLAH after'] end specify "should be called around record creation" do @c.send(:define_method, :before_save){DB << "BLAH before"} @c.set_primary_key :x @c.unrestrict_primary_key @c.create(:x => 2) DB.sqls.should == ['BLAH before', 'INSERT INTO items (x) VALUES (2)', 'BLAH after', 'SELECT * FROM items WHERE (x = 2) LIMIT 1'] end specify "#save should cancel the save and raise an error if before_save returns false and raise_on_save_failure is true" do @c.send(:define_method, :before_save){false} proc{@c.load(:id => 2233).save}.should raise_error(Sequel::BeforeHookFailed) DB.sqls.should == [] end specify "#save should cancel the save and raise an error if before_save returns false and raise_on_failure option is true" do @c.send(:define_method, :before_save){false} @c.raise_on_save_failure = false proc{@c.load(:id => 2233).save(:raise_on_failure => true)}.should raise_error(Sequel::BeforeHookFailed) DB.sqls.should == [] end specify "#save should cancel the save and return nil if before_save returns false and raise_on_save_failure is false" do @c.send(:define_method, :before_save){false} @c.raise_on_save_failure = false @c.load(:id => 2233).save.should == nil DB.sqls.should == [] end specify "#save should have a raised exception reference the model instance" do @c.send(:define_method, :before_save){false} proc{@c.create(:x => 2233)}.should raise_error(Sequel::HookFailed){|e| e.model.should == @c.load(:x=>2233)} DB.sqls.should == [] end end describe "Model#before_destroy && Model#after_destroy" do before do @c = Class.new(Sequel::Model(:items)) do def after_destroy DB << "BLAH after" end end DB.reset end specify "should be called around record destruction" do 
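    # Added note: destroy mirrors the save hook ordering checked above:
    # before_destroy runs first, then the DELETE statement, then after_destroy.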
@c.send(:define_method, :before_destroy){DB << "BLAH before"} m = @c.load(:id => 2233) m.destroy DB.sqls.should == ['BLAH before', 'DELETE FROM items WHERE id = 2233', 'BLAH after'] end specify "#destroy should cancel the destroy and raise an error if before_destroy returns false and raise_on_save_failure is true" do @c.send(:define_method, :before_destroy){false} proc{@c.load(:id => 2233).destroy}.should raise_error(Sequel::BeforeHookFailed) DB.sqls.should == [] end specify "#destroy should cancel the destroy and raise an error if before_destroy returns false and raise_on_failure option is true" do @c.send(:define_method, :before_destroy){false} @c.raise_on_save_failure = false proc{@c.load(:id => 2233).destroy(:raise_on_failure => true)}.should raise_error(Sequel::BeforeHookFailed) DB.sqls.should == [] end specify "#destroy should cancel the destroy and return nil if before_destroy returns false and raise_on_save_failure is false" do @c.send(:define_method, :before_destroy){false} @c.raise_on_save_failure = false @c.load(:id => 2233).destroy.should == nil DB.sqls.should == [] end end describe "Model#before_validation && Model#after_validation" do before do @c = Class.new(Sequel::Model(:items)) do columns :id def after_validation DB << "BLAH after" end def validate errors.add(:id, 'not valid') unless id == 2233 end end end specify "should be called around validation" do @c.send(:define_method, :before_validation){DB << "BLAH before"} m = @c.load(:id => 2233) m.should be_valid DB.sqls.should == ['BLAH before', 'BLAH after'] m = @c.load(:id => 22) m.should_not be_valid DB.sqls.should == ['BLAH before', 'BLAH after'] end specify "should be called when calling save" do @c.send(:define_method, :before_validation){DB << "BLAH before"} m = @c.load(:id => 2233, :x=>123) m.save.should == m DB.sqls.should == ['BLAH before', 'BLAH after', 'UPDATE items SET x = 123 WHERE (id = 2233)'] m = @c.load(:id => 22) m.raise_on_save_failure = false m.save.should == nil DB.sqls.should == ['BLAH before', 'BLAH after'] end specify "#save should cancel the save and raise an error if before_validation returns false and raise_on_save_failure is true" do @c.send(:define_method, :before_validation){false} proc{@c.load(:id => 2233).save}.should raise_error(Sequel::BeforeHookFailed) DB.sqls.should == [] end specify "#save should cancel the save and raise an error if before_validation returns false and raise_on_failure option is true" do @c.send(:define_method, :before_validation){false} @c.raise_on_save_failure = false proc{@c.load(:id => 2233).save(:raise_on_failure => true)}.should raise_error(Sequel::BeforeHookFailed) DB.sqls.should == [] end specify "#save should cancel the save and return nil if before_validation returns false and raise_on_save_failure is false" do @c.send(:define_method, :before_validation){false} @c.raise_on_save_failure = false @c.load(:id => 2233).save.should == nil DB.sqls.should == [] end specify "#valid? 
should return false if before_validation returns false" do @c.send(:define_method, :before_validation){false} @c.load(:id => 2233).valid?.should == false end end describe "Model around filters" do before do @c = Class.new(Sequel::Model(:items)) do columns :id, :x end DB.reset end specify "around_create should be called around new record creation" do @c.class_eval do def around_create DB << 'ac_before' super DB << 'ac_after' end end @c.create(:x => 2) DB.sqls.should == ['ac_before', 'INSERT INTO items (x) VALUES (2)', 'ac_after', "SELECT * FROM items WHERE (id = 10) LIMIT 1"] end specify "around_delete should be called around record destruction" do @c.class_eval do def around_destroy DB << 'ad_before' super DB << 'ad_after' end end @c.load(:id=>1, :x => 2).destroy DB.sqls.should == ['ad_before', 'DELETE FROM items WHERE id = 1', 'ad_after'] end specify "around_update should be called around updating existing records" do @c.class_eval do def around_update DB << 'au_before' super DB << 'au_after' end end @c.load(:id=>1, :x => 2).save DB.sqls.should == ['au_before', 'UPDATE items SET x = 2 WHERE (id = 1)', 'au_after'] end specify "around_save should be called around saving both new and existing records, around either after_create and after_update" do @c.class_eval do def around_update DB << 'au_before' super DB << 'au_after' end def around_create DB << 'ac_before' super DB << 'ac_after' end def around_save DB << 'as_before' super DB << 'as_after' end end @c.create(:x => 2) DB.sqls.should == ['as_before', 'ac_before', 'INSERT INTO items (x) VALUES (2)', 'ac_after', 'as_after', "SELECT * FROM items WHERE (id = 10) LIMIT 1"] @c.load(:id=>1, :x => 2).save DB.sqls.should == ['as_before', 'au_before', 'UPDATE items SET x = 2 WHERE (id = 1)', 'au_after', 'as_after'] end specify "around_validation should be called around validating records" do @c.class_eval do def around_validation DB << 'av_before' super DB << 'av_after' end def validate DB << 'validate' end end @c.new(:x => 2).valid?.should == true DB.sqls.should == [ 'av_before', 'validate', 'av_after' ] end specify "around_validation should be able to catch validation errors and modify them" do @c.class_eval do def validate errors.add(:x, 'foo') end end @c.new(:x => 2).valid?.should == false @c.class_eval do def around_validation super errors.clear end end @c.new(:x => 2).valid?.should == true end specify "around_create that doesn't call super should raise a HookFailed" do @c.send(:define_method, :around_create){} proc{@c.create(:x => 2)}.should raise_error(Sequel::HookFailed) end specify "around_update that doesn't call super should raise a HookFailed" do @c.send(:define_method, :around_update){} proc{@c.load(:x => 2).save}.should raise_error(Sequel::HookFailed) end specify "around_save that doesn't call super should raise a HookFailed" do @c.send(:define_method, :around_save){} proc{@c.create(:x => 2)}.should raise_error(Sequel::HookFailed) proc{@c.load(:x => 2).save}.should raise_error(Sequel::HookFailed) end specify "around_destroy that doesn't call super should raise a HookFailed" do @c.send(:define_method, :around_destroy){} proc{@c.load(:x => 2).destroy}.should raise_error(Sequel::HookFailed) end specify "around_validation that doesn't call super should raise a HookFailed" do @c.send(:define_method, :around_validation){} proc{@c.new.save}.should raise_error(Sequel::HookFailed) end specify "around_validation that doesn't call super should have valid? 
return false" do @c.send(:define_method, :around_validation){} @c.new.valid?.should == false end specify "around_* that doesn't call super should return nil if raise_on_save_failure is false" do @c.raise_on_save_failure = false o = @c.load(:id => 1) def o.around_save() end o.save.should == nil o = @c.load(:id => 1) def o.around_update() end o.save.should == nil o = @c.new def o.around_create() end o.save.should == nil o = @c.new def o.around_validation() end o.save.should == nil end end describe "Model#after_commit and #after_rollback" do before do @db = Sequel.mock(:servers=>{:test=>{}}) @m = Class.new(Sequel::Model(@db[:items])) do attr_accessor :rb def _delete end def after_save db.execute('as') raise Sequel::Rollback if rb end def after_commit db.execute('ac') end def after_rollback db.execute('ar') end def after_destroy db.execute('ad') raise Sequel::Rollback if rb end def after_destroy_commit db.execute('adc') end def after_destroy_rollback db.execute('adr') end end @m.use_transactions = true @o = @m.load({}) @db.sqls end specify "should call after_commit for save after the transaction commits if it commits" do @o.save @db.sqls.should == ['BEGIN', 'as', 'COMMIT', 'ac'] end specify "should call after_rollback for save after the transaction rolls back if it rolls back" do @o.rb = true @o.save @db.sqls.should == ['BEGIN', 'as', 'ROLLBACK', 'ar'] end specify "should have after_commit respect any surrounding transactions" do @db.transaction do @o.save end @db.sqls.should == ['BEGIN', 'as', 'COMMIT', 'ac'] end specify "should have after_rollback respect any surrounding transactions" do @db.transaction do @o.rb = true @o.save end @db.sqls.should == ['BEGIN', 'as', 'ROLLBACK', 'ar'] end specify "should have after_commit work with surrounding transactions and sharding" do @db.transaction(:server=>:test) do @o.save end @db.sqls.should == ['BEGIN -- test', 'BEGIN', 'as', 'COMMIT', 'ac', 'COMMIT -- test'] end specify "should have after_rollback work with surrounding transactions and sharding" do @db.transaction(:server=>:test) do @o.rb = true @o.save end @db.sqls.should == ['BEGIN -- test', 'BEGIN', 'as', 'ROLLBACK', 'ar', 'COMMIT -- test'] end specify "should call after_destroy_commit for destroy after the transaction commits if it commits" do @o.destroy @db.sqls.should == ['BEGIN', 'ad', 'COMMIT', 'adc'] end specify "should call after_destroy_rollback for destroy after the transaction rolls back if it rolls back" do @o.rb = true @o.destroy @db.sqls.should == ['BEGIN', 'ad', 'ROLLBACK', 'adr'] end specify "should have after_destroy_commit respect any surrounding transactions" do @db.transaction do @o.destroy end @db.sqls.should == ['BEGIN', 'ad', 'COMMIT', 'adc'] end specify "should have after_destroy_rollback respect any surrounding transactions" do @db.transaction do @o.rb = true @o.destroy end @db.sqls.should == ['BEGIN', 'ad', 'ROLLBACK', 'adr'] end specify "should have after_destroy commit work with surrounding transactions and sharding" do @db.transaction(:server=>:test) do @o.destroy end @db.sqls.should == ['BEGIN -- test', 'BEGIN', 'ad', 'COMMIT', 'adc', 'COMMIT -- test'] end specify "should have after_destroy_rollback work with surrounding transactions and sharding" do @db.transaction(:server=>:test) do @o.rb = true @o.destroy end @db.sqls.should == ['BEGIN -- test', 'BEGIN', 'ad', 'ROLLBACK', 'adr', 'COMMIT -- test'] end specify "should not call after_commit if use_after_commit_rollback is false" do @o.use_after_commit_rollback = false @o.save @db.sqls.should == ['BEGIN', 'as', 
'COMMIT']
  end

  specify "should not call after_rollback if use_after_commit_rollback is false" do
    @o.use_after_commit_rollback = false
    @o.rb = true
    @o.save
    @db.sqls.should == ['BEGIN', 'as', 'ROLLBACK']
  end

  specify "should not call after_destroy_commit if use_after_commit_rollback is false" do
    @o.use_after_commit_rollback = false
    @o.destroy
    @db.sqls.should == ['BEGIN', 'ad', 'COMMIT']
  end

  specify "should not call after_destroy_rollback for save if use_after_commit_rollback is false" do
    @o.use_after_commit_rollback = false
    @o.rb = true
    @o.destroy
    @db.sqls.should == ['BEGIN', 'ad', 'ROLLBACK']
  end

  specify "should handle use_after_commit_rollback at the class level" do
    @m.use_after_commit_rollback = false
    @o.save
    @db.sqls.should == ['BEGIN', 'as', 'COMMIT']
  end

  specify "should handle use_after_commit_rollback when subclassing" do
    @m.use_after_commit_rollback = false
    o = Class.new(@m).load({})
    @db.sqls
    o.save
    @db.sqls.should == ['BEGIN', 'as', 'COMMIT']
  end
end

ruby-sequel-4.1.1/spec/model/inflector_spec.rb

require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper')

describe Sequel::Inflections do
  before do
    @plurals, @singulars, @uncountables = Sequel.inflections.plurals.dup, Sequel.inflections.singulars.dup, Sequel.inflections.uncountables.dup
  end

  after do
    Sequel.inflections.plurals.replace(@plurals)
    Sequel.inflections.singulars.replace(@singulars)
    Sequel.inflections.uncountables.replace(@uncountables)
  end

  it "should be possible to clear the list of singulars, plurals, and uncountables" do
    Sequel.inflections.clear(:plurals)
    Sequel.inflections.plurals.should == []
    Sequel.inflections.plural('blah', 'blahs')
    Sequel.inflections.clear
    Sequel.inflections.plurals.should == []
    Sequel.inflections.singulars.should == []
    Sequel.inflections.uncountables.should == []
  end

  it "should be yielded and returned by Sequel.inflections" do
    Sequel.inflections{|i| i.should == Sequel::Inflections}.should == Sequel::Inflections
  end
end

ruby-sequel-4.1.1/spec/model/model_spec.rb

require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")

describe "Sequel::Model()" do
  before do
    @db =
Sequel::Model.db end it "should return a model subclass with the given dataset if given a dataset" do ds = @db[:blah] c = Sequel::Model(ds) c.superclass.should == Sequel::Model c.dataset.should == ds end it "should return a model subclass with a dataset with the default database and given table name if given a Symbol" do c = Sequel::Model(:blah) c.superclass.should == Sequel::Model c.db.should == @db c.table_name.should == :blah end it "should return a model subclass with a dataset with the default database and given table name if given a LiteralString" do c = Sequel::Model(Sequel.lit('blah')) c.superclass.should == Sequel::Model c.db.should == @db c.table_name.should == Sequel.lit('blah') end it "should return a model subclass with a dataset with the default database and given table name if given an SQL::Identifier" do c = Sequel::Model(Sequel.identifier(:blah)) c.superclass.should == Sequel::Model c.db.should == @db c.table_name.should == Sequel.identifier(:blah) end it "should return a model subclass with a dataset with the default database and given table name if given an SQL::QualifiedIdentifier" do c = Sequel::Model(Sequel.qualify(:boo, :blah)) c.superclass.should == Sequel::Model c.db.should == @db c.table_name.should == Sequel.qualify(:boo, :blah) end it "should return a model subclass with a dataset with the default database and given table name if given an SQL::AliasedExpression" do c = Sequel::Model(Sequel.as(:blah, :boo)) c.superclass.should == Sequel::Model c.db.should == @db c.table_name.should == :boo end it "should return a model subclass with the given dataset if given a dataset using an SQL::Identifier" do ds = @db[Sequel.identifier(:blah)] c = Sequel::Model(ds) c.superclass.should == Sequel::Model c.dataset.should == ds end it "should return a model subclass associated to the given database if given a database" do db = Sequel.mock c = Sequel::Model(db) c.superclass.should == Sequel::Model c.db.should == db proc{c.dataset}.should raise_error(Sequel::Error) class SmBlahTest < c end SmBlahTest.db.should == db SmBlahTest.table_name.should == :sm_blah_tests end describe "reloading" do before do Sequel.cache_anonymous_models = true end after do Sequel.cache_anonymous_models = false Object.send(:remove_const, :Album) if defined?(::Album) end it "should work without raising an exception with a symbol" do proc do class ::Album < Sequel::Model(:table); end class ::Album < Sequel::Model(:table); end end.should_not raise_error end it "should work without raising an exception with an SQL::Identifier " do proc do class ::Album < Sequel::Model(Sequel.identifier(:table)); end class ::Album < Sequel::Model(Sequel.identifier(:table)); end end.should_not raise_error end it "should work without raising an exception with an SQL::QualifiedIdentifier " do proc do class ::Album < Sequel::Model(Sequel.qualify(:schema, :table)); end class ::Album < Sequel::Model(Sequel.qualify(:schema, :table)); end end.should_not raise_error end it "should work without raising an exception with an SQL::AliasedExpression" do proc do class ::Album < Sequel::Model(Sequel.as(:table, :alias)); end class ::Album < Sequel::Model(Sequel.as(:table, :alias)); end end.should_not raise_error end it "should work without raising an exception with an LiteralString" do proc do class ::Album < Sequel::Model(Sequel.lit('table')); end class ::Album < Sequel::Model(Sequel.lit('table')); end end.should_not raise_error end it "should work without raising an exception with a database" do proc do class ::Album < Sequel::Model(@db); end 
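# The reloading pattern above relies on anonymous model caching: with
# Sequel.cache_anonymous_models = true, Sequel::Model(arg) returns the same
# anonymous class for an equal argument, so re-opening the class body (as
# happens when application code is reloaded) does not raise a superclass
# mismatch TypeError.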
class ::Album < Sequel::Model(@db); end end.should_not raise_error end it "should work without raising an exception with a dataset" do proc do class ::Album < Sequel::Model(@db[:table]); end class ::Album < Sequel::Model(@db[:table]); end end.should_not raise_error end it "should work without raising an exception with a dataset with an SQL::Identifier" do proc do class ::Album < Sequel::Model(@db[Sequel.identifier(:table)]); end class ::Album < Sequel::Model(@db[Sequel.identifier(:table)]); end end.should_not raise_error end it "should raise an exception if anonymous model caching is disabled" do Sequel.cache_anonymous_models = false proc do class ::Album < Sequel::Model(@db[Sequel.identifier(:table)]); end class ::Album < Sequel::Model(@db[Sequel.identifier(:table)]); end end.should raise_error end end end describe Sequel::Model do it "should have class method aliased as model" do Sequel::Model.instance_methods.collect{|x| x.to_s}.should include("model") model_a = Class.new(Sequel::Model(:items)) model_a.new.model.should be(model_a) end it "should be associated with a dataset" do model_a = Class.new(Sequel::Model) { set_dataset DB[:as] } model_a.dataset.should be_a_kind_of(Sequel::Mock::Dataset) model_a.dataset.opts[:from].should == [:as] model_b = Class.new(Sequel::Model) { set_dataset DB[:bs] } model_b.dataset.should be_a_kind_of(Sequel::Mock::Dataset) model_b.dataset.opts[:from].should == [:bs] model_a.dataset.opts[:from].should == [:as] end end describe Sequel::Model do before do @model = Class.new(Sequel::Model(:items)) end it "has table_name return name of table" do @model.table_name.should == :items end it "defaults to primary key of id" do @model.primary_key.should == :id end it "allows primary key change" do @model.set_primary_key :ssn @model.primary_key.should == :ssn end it "allows dataset change" do @model.set_dataset(DB[:foo]) @model.table_name.should == :foo end it "allows set_dataset to accept a Symbol" do @model.db = DB @model.set_dataset(:foo) @model.table_name.should == :foo end it "allows set_dataset to accept a LiteralString" do @model.db = DB @model.set_dataset(Sequel.lit('foo')) @model.table_name.should == Sequel.lit('foo') end it "allows set_dataset to accept an SQL::Identifier" do @model.db = DB @model.set_dataset(Sequel.identifier(:foo)) @model.table_name.should == Sequel.identifier(:foo) end it "allows set_dataset to accept an SQL::QualifiedIdentifier" do @model.db = DB @model.set_dataset(Sequel.qualify(:bar, :foo)) @model.table_name.should == Sequel.qualify(:bar, :foo) end it "allows set_dataset to accept an SQL::AliasedExpression" do @model.db = DB @model.set_dataset(Sequel.as(:foo, :bar)) @model.table_name.should == :bar end it "table_name should respect table aliases" do @model.set_dataset(:foo___x) @model.table_name.should == :x end it "set_dataset should raise an error unless given a Symbol or Dataset" do proc{@model.set_dataset(Object.new)}.should raise_error(Sequel::Error) end it "set_dataset should add the destroy method to the dataset that destroys each object" do ds = DB[:foo] ds.should_not respond_to(:destroy) @model.set_dataset(ds) ds.should respond_to(:destroy) DB.sqls ds._fetch = [{:id=>1}, {:id=>2}] ds.destroy.should == 2 DB.sqls.should == ["SELECT * FROM foo", "DELETE FROM foo WHERE id = 1", "DELETE FROM foo WHERE id = 2"] end it "set_dataset should add the destroy method that respects sharding with transactions" do db = Sequel.mock(:servers=>{:s1=>{}}) ds = db[:foo].server(:s1) @model.use_transactions = true @model.set_dataset(ds) db.sqls
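# Dataset#destroy, added by set_dataset, loads each matching row as a model
# instance and destroys it individually so destroy hooks run per row, e.g.
# Album.where(:flagged=>true).destroy (Album and :flagged are hypothetical
# names). With use_transactions set, the deletes run inside a transaction
# on the dataset's shard, as asserted below.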
ds.destroy.should == 0 db.sqls.should == ["BEGIN -- s1", "SELECT * FROM foo -- s1", "COMMIT -- s1"] end it "should raise an error on set_dataset if there is an error connecting to the database" do def @model.columns() raise Sequel::DatabaseConnectionError end proc{@model.set_dataset(Sequel::Database.new[:foo].join(:blah))}.should raise_error end it "should not raise an error if there is a problem getting the columns for a dataset" do def @model.columns() raise Sequel::Error end proc{@model.set_dataset(DB[:foo].join(:blah))}.should_not raise_error end it "doesn't raise an error on set_dataset if there is an error raised getting the schema" do def @model.get_db_schema(*) raise Sequel::Error end proc{@model.set_dataset(DB[:foo])}.should_not raise_error end it "doesn't raise an error on inherited if there is an error setting the dataset" do def @model.set_dataset(*) raise Sequel::Error end proc{Class.new(@model)}.should_not raise_error end it "should raise if bad inherited instance variable value is used" do def @model.inherited_instance_variables() super.merge(:@a=>:foo) end @model.instance_eval{@a=1} proc{Class.new(@model)}.should raise_error(Sequel::Error) end it "copy inherited instance variables into subclass if set" do def @model.inherited_instance_variables() super.merge(:@a=>nil, :@b=>:dup, :@c=>:hash_dup, :@d=>proc{|v| v * 2}) end @model.instance_eval{@a=1; @b=[2]; @c={3=>[4]}; @d=10} m = Class.new(@model) @model.instance_eval{@a=5; @b << 6; @c[3] << 7; @c[8] = [9]; @d=40} m.instance_eval do @a.should == 1 @b.should == [2] @c.should == {3=>[4]} @d.should == 20 end end end describe Sequel::Model, "constructors" do before do @m = Class.new(Sequel::Model) @m.columns :a, :b end it "should accept a hash" do m = @m.new(:a => 1, :b => 2) m.values.should == {:a => 1, :b => 2} m.should be_new end it "should accept a block and yield itself to the block" do block_called = false m = @m.new {|i| block_called = true; i.should be_a_kind_of(@m); i.values[:a] = 1} block_called.should be_true m.values[:a].should == 1 end it "should have dataset row_proc create an existing object" do @m.dataset = Sequel.mock.dataset o = @m.dataset.row_proc.call(:a=>1) o.should be_a_kind_of(@m) o.values.should == {:a=>1} o.new?.should be_false end it "should have .call create an existing object" do o = @m.call(:a=>1) o.should be_a_kind_of(@m) o.values.should == {:a=>1} o.new?.should be_false end it "should have .load create an existing object" do o = @m.load(:a=>1) o.should be_a_kind_of(@m) o.values.should == {:a=>1} o.new?.should be_false end end describe Sequel::Model, "new" do before do @m = Class.new(Sequel::Model) do set_dataset DB[:items] columns :x, :id end end it "should be marked as new?" do o = @m.new o.should be_new end it "should not be marked as new? 
once it is saved" do o = @m.new(:x => 1) o.should be_new o.save o.should_not be_new end it "should use the last inserted id as primary key if not in values" do @m.instance_dataset._fetch = @m.dataset._fetch = {:x => 1, :id => 1234} @m.instance_dataset.autoid = @m.dataset.autoid = 1234 o = @m.new(:x => 1) o.save o.id.should == 1234 o = @m.load(:x => 1, :id => 333) o.save o.id.should == 333 end end describe Sequel::Model, ".subset" do before do @c = Class.new(Sequel::Model(:items)) DB.reset end specify "should create a filter on the underlying dataset" do proc {@c.new_only}.should raise_error(NoMethodError) @c.subset(:new_only){age < 'new'} @c.new_only.sql.should == "SELECT * FROM items WHERE (age < 'new')" @c.dataset.new_only.sql.should == "SELECT * FROM items WHERE (age < 'new')" @c.subset(:pricey){price > 100} @c.pricey.sql.should == "SELECT * FROM items WHERE (price > 100)" @c.dataset.pricey.sql.should == "SELECT * FROM items WHERE (price > 100)" @c.pricey.new_only.sql.should == "SELECT * FROM items WHERE ((price > 100) AND (age < 'new'))" @c.new_only.pricey.sql.should == "SELECT * FROM items WHERE ((age < 'new') AND (price > 100))" end specify "should not override existing model methods" do def @c.active() true end @c.subset(:active, :active) @c.active.should == true end end describe Sequel::Model, ".find" do before do @c = Class.new(Sequel::Model(:items)) @c.dataset._fetch = {:name => 'sharon', :id => 1} DB.reset end it "should return the first record matching the given filter" do @c.find(:name => 'sharon').should be_a_kind_of(@c) DB.sqls.should == ["SELECT * FROM items WHERE (name = 'sharon') LIMIT 1"] @c.find(Sequel.expr(:name).like('abc%')).should be_a_kind_of(@c) DB.sqls.should == ["SELECT * FROM items WHERE (name LIKE 'abc%' ESCAPE '\\') LIMIT 1"] end specify "should accept filter blocks" do @c.find{id > 1}.should be_a_kind_of(@c) DB.sqls.should == ["SELECT * FROM items WHERE (id > 1) LIMIT 1"] @c.find{(x > 1) & (y < 2)}.should be_a_kind_of(@c) DB.sqls.should == ["SELECT * FROM items WHERE ((x > 1) AND (y < 2)) LIMIT 1"] end end describe Sequel::Model, ".fetch" do before do DB.reset @c = Class.new(Sequel::Model(:items)) end it "should return instances of Model" do @c.fetch("SELECT * FROM items").first.should be_a_kind_of(@c) end it "should return true for .empty? 
and not raise an error on empty selection" do rows = @c.fetch("SELECT * FROM items WHERE FALSE") @c.send(:define_method, :fetch_rows){|sql| yield({:count => 0})} proc {rows.empty?}.should_not raise_error end end describe Sequel::Model, ".find_or_create" do before do @c = Class.new(Sequel::Model(:items)) do set_primary_key :id columns :x end DB.reset end it "should find the record" do @c.find_or_create(:x => 1).should == @c.load(:x=>1, :id=>1) DB.sqls.should == ["SELECT * FROM items WHERE (x = 1) LIMIT 1"] end it "should create the record if not found" do @c.instance_dataset._fetch = @c.dataset._fetch = [[], {:x=>1, :id=>1}] @c.instance_dataset.autoid = @c.dataset.autoid = 1 @c.find_or_create(:x => 1).should == @c.load(:x=>1, :id=>1) DB.sqls.should == ["SELECT * FROM items WHERE (x = 1) LIMIT 1", "INSERT INTO items (x) VALUES (1)", "SELECT * FROM items WHERE (id = 1) LIMIT 1"] end it "should pass the new record to be created to the block if no record is found" do @c.instance_dataset._fetch = @c.dataset._fetch = [[], {:x=>1, :id=>1}] @c.instance_dataset.autoid = @c.dataset.autoid = 1 @c.find_or_create(:x => 1){|x| x[:y] = 2}.should == @c.load(:x=>1, :id=>1) sqls = DB.sqls sqls.first.should == "SELECT * FROM items WHERE (x = 1) LIMIT 1" ["INSERT INTO items (x, y) VALUES (1, 2)", "INSERT INTO items (y, x) VALUES (2, 1)"].should include(sqls[1]) sqls.last.should == "SELECT * FROM items WHERE (id = 1) LIMIT 1" end end describe Sequel::Model, ".all" do it "should return all records in the dataset" do c = Class.new(Sequel::Model(:items)) c.all.should == [c.load(:x=>1, :id=>1)] end end describe Sequel::Model, "A model class without a primary key" do before do @c = Class.new(Sequel::Model(:items)) do columns :x no_primary_key end DB.reset end it "should be able to insert records without selecting them back" do i = nil proc {i = @c.create(:x => 1)}.should_not raise_error i.class.should be(@c) i.values.to_hash.should == {:x => 1} DB.sqls.should == ['INSERT INTO items (x) VALUES (1)'] end it "should raise when deleting" do proc{@c.load(:x=>1).delete}.should raise_error end it "should raise when updating" do proc{@c.load(:x=>1).update(:x=>2)}.should raise_error end it "should insert a record when saving" do o = @c.new(:x => 2) o.should be_new o.save DB.sqls.should == ['INSERT INTO items (x) VALUES (2)'] end end describe Sequel::Model, "attribute accessors" do before do db = Sequel.mock def db.supports_schema_parsing?() true end def db.schema(*) [[:x, {:type=>:integer}], [:z, {:type=>:integer}]] end @dataset = db[:items].columns(:x, :z) @c = Class.new(Sequel::Model) DB.reset end it "should be created on set_dataset" do %w'x z x= z='.each do |x| @c.instance_methods.collect{|z| z.to_s}.should_not include(x) end @c.set_dataset(@dataset) %w'x z x= z='.each do |x| @c.instance_methods.collect{|z| z.to_s}.should include(x) end o = @c.new %w'x z x= z='.each do |x| o.methods.collect{|z| z.to_s}.should include(x) end o.x.should be_nil o.x = 34 o.x.should == 34 end it "should be only accept one argument for the write accessor" do @c.set_dataset(@dataset) o = @c.new o.x = 34 o.x.should == 34 proc{o.send(:x=)}.should raise_error proc{o.send(:x=, 3, 4)}.should raise_error end it "should have a working typecasting setter even if the column is not selected" do @c.set_dataset(@dataset.select(:z).columns(:z)) o = @c.new o.x = '34' o.x.should == 34 end it "should typecast if the new value is the same as the existing but has a different class" do @c.set_dataset(@dataset.select(:z).columns(:z)) o = @c.new o.x = 34 o.x = 34.0 
o.x.should == 34.0 o.x = 34 o.x.should == 34 end end describe Sequel::Model, ".[]" do before do @c = Class.new(Sequel::Model(:items)) @c.dataset._fetch = {:name => 'sharon', :id => 1} DB.reset end it "should return the first record for the given pk" do @c[1].should == @c.load(:name => 'sharon', :id => 1) DB.sqls.should == ["SELECT * FROM items WHERE id = 1"] @c[9999].should == @c.load(:name => 'sharon', :id => 1) DB.sqls.should == ["SELECT * FROM items WHERE id = 9999"] end it "should have #[] return nil if no rows match" do @c.dataset._fetch = [] @c[1].should == nil DB.sqls.should == ["SELECT * FROM items WHERE id = 1"] end it "should work correctly for custom primary key" do @c.set_primary_key :name @c['sharon'].should == @c.load(:name => 'sharon', :id => 1) DB.sqls.should == ["SELECT * FROM items WHERE name = 'sharon'"] end it "should return the first record for the given pk for a filtered dataset" do @c.dataset = @c.dataset.filter(:active=>true) @c[1].should == @c.load(:name => 'sharon', :id => 1) DB.sqls.should == ["SELECT * FROM items WHERE ((active IS TRUE) AND (id = 1)) LIMIT 1"] end it "should work correctly for composite primary key specified as array" do @c.set_primary_key [:node_id, :kind] @c[3921, 201].should be_a_kind_of(@c) sqls = DB.sqls sqls.length.should == 1 sqls.first.should =~ /^SELECT \* FROM items WHERE \((\(node_id = 3921\) AND \(kind = 201\))|(\(kind = 201\) AND \(node_id = 3921\))\) LIMIT 1$/ end end describe "Model#inspect" do specify "should include the class name and the values" do Sequel::Model.load(:x => 333).inspect.should == '#<Sequel::Model @values={:x=>333}>' end end describe "Model.db_schema" do before do @c = Class.new(Sequel::Model(:items)) do def self.columns; orig_columns; end end @db = Sequel.mock def @db.supports_schema_parsing?() true end @dataset = @db[:items] end specify "should not call database's schema if it isn't supported" do def @db.supports_schema_parsing?() false end def @db.schema(table, opts = {}) raise Sequel::Error end @dataset.instance_variable_set(:@columns, [:x, :y]) @c.dataset = @dataset @c.db_schema.should == {:x=>{}, :y=>{}} @c.columns.should == [:x, :y] @c.dataset.instance_variable_get(:@columns).should == [:x, :y] end specify "should use the database's schema and set the columns and dataset columns" do def @db.schema(table, opts = {}) [[:x, {:type=>:integer}], [:y, {:type=>:string}]] end @c.dataset = @dataset @c.db_schema.should == {:x=>{:type=>:integer}, :y=>{:type=>:string}} @c.columns.should == [:x, :y] @c.dataset.instance_variable_get(:@columns).should == [:x, :y] end specify "should not restrict the schema for datasets with a :select option" do def @c.columns; [:x, :z]; end def @db.schema(table, opts = {}) [[:x, {:type=>:integer}], [:y, {:type=>:string}]] end @c.dataset = @dataset.select(:x, :y___z) @c.db_schema.should == {:x=>{:type=>:integer}, :z=>{}, :y=>{:type=>:string}} end specify "should fallback to fetching records if schema raises an error" do def @db.schema(table, opts={}) raise Sequel::Error end @c.dataset = @dataset.join(:x, :id).columns(:id, :x) @c.db_schema.should == {:x=>{}, :id=>{}} end specify "should automatically set a singular primary key based on the schema" do ds = @dataset d = ds.db def d.schema(table, *opts) [[:x, {:primary_key=>true}]] end @c.primary_key.should == :id @c.dataset = ds @c.db_schema.should == {:x=>{:primary_key=>true}} @c.primary_key.should == :x end specify "should automatically set the composite primary key based on the schema" do ds = @dataset d = ds.db def d.schema(table, *opts) 
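# A stubbed Database#schema reply: an array of [column_name, info_hash]
# pairs; a truthy :primary_key entry is what lets the model infer its
# primary key in the assertions below.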
[[:x, {:primary_key=>true}], [:y, {:primary_key=>true}]] end @c.primary_key.should == :id @c.dataset = ds @c.db_schema.should == {:x=>{:primary_key=>true}, :y=>{:primary_key=>true}} @c.primary_key.should == [:x, :y] end specify "should automatically set no primary key based on the schema" do ds = @dataset d = ds.db def d.schema(table, *opts) [[:x, {:primary_key=>false}], [:y, {:primary_key=>false}]] end @c.primary_key.should == :id @c.dataset = ds @c.db_schema.should == {:x=>{:primary_key=>false}, :y=>{:primary_key=>false}} @c.primary_key.should == nil end specify "should not modify the primary key unless all column schema hashes have a :primary_key entry" do ds = @dataset d = ds.db def d.schema(table, *opts) [[:x, {:primary_key=>false}], [:y, {}]] end @c.primary_key.should == :id @c.dataset = ds @c.db_schema.should == {:x=>{:primary_key=>false}, :y=>{}} @c.primary_key.should == :id end end describe "Model#use_transactions" do before do @c = Class.new(Sequel::Model(:items)) end specify "should return class value by default" do @c.use_transactions = true @c.new.use_transactions.should == true @c.use_transactions = false @c.new.use_transactions.should == false end specify "should return set value if manually set" do instance = @c.new instance.use_transactions = false instance.use_transactions.should == false @c.use_transactions = true instance.use_transactions.should == false instance.use_transactions = true instance.use_transactions.should == true @c.use_transactions = false instance.use_transactions.should == true end end

ruby-sequel-4.1.1/spec/model/plugins_spec.rb

require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe Sequel::Model, ".plugin" do before do module Sequel::Plugins module Timestamped module InstanceMethods def get_stamp(*args); @values[:stamp] end def abc; 123; end end module ClassMethods def def; 234; end end module DatasetMethods def ghi; 345; end end end end @c = Class.new(Sequel::Model(:items)) @t = Sequel::Plugins::Timestamped end after do Sequel::Plugins.send(:remove_const, :Timestamped) end it "should raise LoadError if the plugin is not found" do proc{@c.plugin :something_or_other}.should raise_error(LoadError) end it "should store the plugin in .plugins" do @c.plugins.should_not include(@t) @c.plugin @t @c.plugins.should include(@t) end it "should be inherited in subclasses" do @c.plugins.should_not include(@t) c1 = Class.new(@c) @c.plugin @t c2 = Class.new(@c) @c.plugins.should include(@t) c1.plugins.should_not include(@t) c2.plugins.should include(@t) end it "should accept a symbol and load the module from the Sequel::Plugins namespace" do @c.plugin :timestamped @c.plugins.should include(@t) end it "should accept a module" do m = Module.new @c.plugin m @c.plugins.should include(m) end it "should not attempt to load a plugin twice" do
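# Plugin registration is idempotent: loading the same plugin again re-runs
# its configure step but does not add a duplicate entry to .plugins, so
# e.g. calling @c.plugin @t twice leaves exactly one Timestamped entry.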
@c.plugins.should_not include(@t) @c.plugin @t @c.plugins.reject{|m| m != @t}.length.should == 1 @c.plugin @t @c.plugins.reject{|m| m != @t}.length.should == 1 end it "should call apply and configure if the plugin responds to it, with the args and block used" do m = Module.new do def self.args; @args; end def self.block; @block; end def self.block_call; @block.call; end def self.args2; @args2; end def self.block2; @block2; end def self.block2_call; @block2.call; end def self.apply(model, *args, &block) @args = args @block = block model.send(:define_method, :blah){43} end def self.configure(model, *args, &block) @args2 = args @block2 = block model.send(:define_method, :blag){44} end end b = lambda{42} @c.plugin(m, 123, 1=>2, &b) m.args.should == [123, {1=>2}] m.block.should == b m.block_call.should == 42 @c.new.blah.should == 43 m.args2.should == [123, {1=>2}] m.block2.should == b m.block2_call.should == 42 @c.new.blag.should == 44 end it "should call configure even if the plugin has already been loaded" do m = Module.new do @args = [] def self.args; @args; end def self.configure(model, *args, &block) @args << [block, *args] end end b = lambda{42} @c.plugin(m, 123, 1=>2, &b) m.args.should == [[b, 123, {1=>2}]] b2 = lambda{44} @c.plugin(m, 234, 2=>3, &b2) m.args.should == [[b, 123, {1=>2}], [b2, 234, {2=>3}]] end it "should call things in the following order: apply, ClassMethods, InstanceMethods, DatasetMethods, configure" do m = Module.new do @args = [] def self.args; @args; end def self.apply(model, *args, &block) @args << :apply end def self.configure(model, *args, &block) @args << :configure end self::InstanceMethods = Module.new do def self.included(model) model.plugins.last.args << :im end end self::ClassMethods = Module.new do def self.extended(model) model.plugins.last.args << :cm end end self::DatasetMethods = Module.new do def self.extended(dataset) dataset.model.plugins.last.args << :dm end end end b = lambda{44} @c.plugin(m, 123, 1=>2, &b) m.args.should == [:apply, :cm, :im, :dm, :configure] @c.plugin(m, 234, 2=>3, &b) m.args.should == [:apply, :cm, :im, :dm, :configure, :configure] end it "should include an InstanceMethods module in the class if the plugin includes it" do @c.plugin @t m = @c.new m.should respond_to(:get_stamp) m.should respond_to(:abc) m.abc.should == 123 t = Time.now m[:stamp] = t m.get_stamp.should == t end it "should extend the class with a ClassMethods module if the plugin includes it" do @c.plugin @t @c.def.should == 234 end it "should extend the class's dataset with a DatasetMethods module if the plugin includes it" do @c.plugin @t @c.dataset.ghi.should == 345 end it "should save the DatasetMethods module and apply it later if the class doesn't have a dataset" do c = Class.new(Sequel::Model) c.plugin @t c.dataset = DB[:i] c.dataset.ghi.should == 345 end it "should save the DatasetMethods module and apply it later if the class has a dataset" do @c.plugin @t @c.dataset = DB[:i] @c.dataset.ghi.should == 345 end it "should not define class methods for private instance methods in DatasetMethod" do m = Module.new do self::DatasetMethods = Module.new do def b; 2; end private def a; 1; end end end @c.plugin m @c.dataset.b.should == 2 lambda{@c.dataset.a}.should raise_error(NoMethodError) @c.dataset.send(:a).should == 1 lambda{@c.a}.should raise_error(NoMethodError) lambda{@c.send(:a)}.should raise_error(NoMethodError) end it "should not raise an error if the DatasetMethod module has no public instance methods" do m = Module.new do self::DatasetMethods = Module.new 
do private def a; 1; end end end lambda{@c.plugin m}.should_not raise_error end it "should not raise an error if plugin submodule names exist higher up in the namespace hierarchy" do class ::ClassMethods; end @c.plugin(m = Module.new) Object.send(:remove_const, :ClassMethods) @c.plugins.should include(m) class ::InstanceMethods; end @c.plugin(m = Module.new) Object.send(:remove_const, :InstanceMethods) @c.plugins.should include(m) class ::DatasetMethods; end @c.plugin(m = Module.new) Object.send(:remove_const, :DatasetMethods) @c.plugins.should include(m) end end describe Sequel::Plugins do before do @c = Class.new(Sequel::Model(:items)) end it "should have def_dataset_methods define methods that call methods on the dataset" do m = Module.new do module self::ClassMethods Sequel::Plugins.def_dataset_methods(self, :one) end module self::DatasetMethods def one 1 end end end @c.plugin m @c.one.should == 1 end it "should have def_dataset_methods accept an array with multiple methods" do m = Module.new do module self::ClassMethods Sequel::Plugins.def_dataset_methods(self, [:one, :two]) end module self::DatasetMethods def one 1 end def two 2 end end end @c.plugin m @c.one.should == 1 @c.two.should == 2 end it "should have inherited_instance_variables add instance variables to copy into the subclass" do m = Module.new do def self.apply(model) model.instance_variable_set(:@one, 1) end module self::ClassMethods attr_reader :one Sequel::Plugins.inherited_instance_variables(self, :@one=>nil) end end @c.plugin m Class.new(@c).one.should == 1 end it "should have after_set_dataset add a method to call after set_dataset" do m = Module.new do module self::ClassMethods Sequel::Plugins.after_set_dataset(self, :one) private def one dataset.opts[:foo] = 1 end end end @c.plugin m @c.dataset.opts[:foo].should == nil @c.set_dataset :blah @c.dataset.opts[:foo].should == 1 end end

ruby-sequel-4.1.1/spec/model/record_spec.rb

require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper") describe "Model#values" do before do @c = Class.new(Sequel::Model(:items)) end it "should return the hash of model values" do hash = {:x=>1} @c.load(hash).values.should equal(hash) end it "should be aliased as to_hash" do hash = {:x=>1} @c.load(hash).to_hash.should equal(hash) end end describe "Model#save server use" do before do @db = Sequel.mock(:autoid=>proc{|sql| 10}, :fetch=>{:x=>1, :id=>10}, :servers=>{:blah=>{}, :read_only=>{}}) @c = Class.new(Sequel::Model(@db[:items])) @c.columns :id, :x, :y @c.dataset.columns(:id, :x, :y) @db.sqls end it "should use the :default server if the model doesn't have one already specified" do @c.new(:x=>1).save.should == @c.load(:x=>1, :id=>10) @db.sqls.should == ["INSERT INTO items (x) VALUES (1)", 'SELECT * FROM items WHERE (id = 10) LIMIT 1'] end it "should use the model's server if the model has one already specified" do @c.dataset = @c.dataset.server(:blah)
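# Dataset#server pins the model's statements to the named shard, so the
# INSERT and the refresh SELECT asserted below both carry the "-- blah"
# annotation that Sequel.mock appends for non-default servers.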
@c.new(:x=>1).save.should == @c.load(:x=>1, :id=>10) @db.sqls.should == ["INSERT INTO items (x) VALUES (1) -- blah", 'SELECT * FROM items WHERE (id = 10) LIMIT 1 -- blah'] end end describe "Model#save" do before do @c = Class.new(Sequel::Model(:items)) do columns :id, :x, :y end @c.instance_dataset.autoid = @c.dataset.autoid = 13 DB.reset end it "should insert a record for a new model instance" do o = @c.new(:x => 1) o.save DB.sqls.should == ["INSERT INTO items (x) VALUES (1)", "SELECT * FROM items WHERE (id = 13) LIMIT 1"] end it "should use dataset's insert_select method if present" do ds = @c.instance_dataset ds._fetch = {:y=>2} def ds.supports_insert_select?() true end def ds.insert_select(hash) execute("INSERT INTO items (y) VALUES (2) RETURNING *"){|r| return r} end o = @c.new(:x => 1) o.save o.values.should == {:y=>2} DB.sqls.should == ["INSERT INTO items (y) VALUES (2) RETURNING *"] end it "should not use dataset's insert_select method if specific columns are selected" do ds = @c.dataset = @c.dataset.select(:y) ds.should_not_receive(:insert_select) @c.new(:x => 1).save end it "should use value returned by insert as the primary key and refresh the object" do o = @c.new(:x => 11) o.save DB.sqls.should == ["INSERT INTO items (x) VALUES (11)", "SELECT * FROM items WHERE (id = 13) LIMIT 1"] end it "should allow you to skip refreshing by overridding _save_refresh" do @c.send(:define_method, :_save_refresh){} @c.create(:x => 11) DB.sqls.should == ["INSERT INTO items (x) VALUES (11)"] end it "should work correctly for inserting a record without a primary key" do @c.no_primary_key o = @c.new(:x => 11) o.save DB.sqls.should == ["INSERT INTO items (x) VALUES (11)"] end it "should set the autoincrementing_primary_key value to the value returned by insert" do @c.unrestrict_primary_key @c.set_primary_key [:x, :y] o = @c.new(:x => 11) def o.autoincrementing_primary_key() :y end o.save sqls = DB.sqls sqls.length.should == 2 sqls.first.should == "INSERT INTO items (x) VALUES (11)" sqls.last.should =~ %r{SELECT \* FROM items WHERE \(\([xy] = 1[13]\) AND \([xy] = 1[13]\)\) LIMIT 1} end it "should update a record for an existing model instance" do o = @c.load(:id => 3, :x => 1) o.save DB.sqls.should == ["UPDATE items SET x = 1 WHERE (id = 3)"] end it "should raise a NoExistingObject exception if the dataset update call doesn't return 1, unless require_modification is false" do o = @c.load(:id => 3, :x => 1) t = o.this t.numrows = 0 proc{o.save}.should raise_error(Sequel::NoExistingObject) t.numrows = 2 proc{o.save}.should raise_error(Sequel::NoExistingObject) t.numrows = 1 proc{o.save}.should_not raise_error o.require_modification = false t.numrows = 0 proc{o.save}.should_not raise_error t.numrows = 2 proc{o.save}.should_not raise_error end it "should respect the :columns option to specify the columns to save" do o = @c.load(:id => 3, :x => 1, :y => nil) o.save(:columns=>:y) DB.sqls.first.should == "UPDATE items SET y = NULL WHERE (id = 3)" end it "should mark saved columns as not changed" do o = @c.load(:id => 3, :x => 1, :y => nil) o[:y] = 4 o.changed_columns.should == [:y] o.save(:columns=>:x) o.changed_columns.should == [:y] o.save(:columns=>:y) o.changed_columns.should == [] end it "should mark all columns as not changed if this is a new record" do o = @c.new(:x => 1, :y => nil) o.x = 4 o.changed_columns.should == [:x] o.save o.changed_columns.should == [] end it "should mark all columns as not changed if this is a new record and insert_select was used" do def (@c.dataset).insert_select(h) 
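# A stubbed Dataset#insert_select: on adapters that support INSERT ...
# RETURNING, Model#save can insert and fetch the resulting row in a single
# round trip instead of issuing a separate refresh SELECT.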
h.merge(:id=>1) end o = @c.new(:x => 1, :y => nil) o.x = 4 o.changed_columns.should == [:x] o.save o.changed_columns.should == [] end it "should store previous value of @new in @was_new and as well as the hash used for updating in @columns_updated until after hooks finish running" do res = nil @c.send(:define_method, :after_save){ res = [@columns_updated, @was_new]} o = @c.new(:x => 1, :y => nil) o[:x] = 2 o.save res.should == [nil, true] o.after_save res.should == [nil, nil] res = nil o = @c.load(:id => 23,:x => 1, :y => nil) o[:x] = 2 o.save res.should == [{:x => 2, :y => nil}, nil] o.after_save res.should == [nil, nil] res = nil o = @c.load(:id => 23,:x => 2, :y => nil) o[:x] = 2 o[:y] = 22 o.save(:columns=>:x) res.should == [{:x=>2},nil] o.after_save res.should == [nil, nil] end it "should use Model's use_transactions setting by default" do @c.use_transactions = true @c.load(:id => 3, :x => 1, :y => nil).save(:columns=>:y) DB.sqls.should == ["BEGIN", "UPDATE items SET y = NULL WHERE (id = 3)", "COMMIT"] @c.use_transactions = false @c.load(:id => 3, :x => 1, :y => nil).save(:columns=>:y) DB.sqls.should == ["UPDATE items SET y = NULL WHERE (id = 3)"] end it "should inherit Model's use_transactions setting" do @c.use_transactions = true Class.new(@c).load(:id => 3, :x => 1, :y => nil).save(:columns=>:y) DB.sqls.should == ["BEGIN", "UPDATE items SET y = NULL WHERE (id = 3)", "COMMIT"] @c.use_transactions = false Class.new(@c).load(:id => 3, :x => 1, :y => nil).save(:columns=>:y) DB.sqls.should == ["UPDATE items SET y = NULL WHERE (id = 3)"] end it "should use object's use_transactions setting" do o = @c.load(:id => 3, :x => 1, :y => nil) o.use_transactions = false @c.use_transactions = true o.save(:columns=>:y) DB.sqls.should == ["UPDATE items SET y = NULL WHERE (id = 3)"] o = @c.load(:id => 3, :x => 1, :y => nil) o.use_transactions = true @c.use_transactions = false o.save(:columns=>:y) DB.sqls.should == ["BEGIN", "UPDATE items SET y = NULL WHERE (id = 3)", "COMMIT"] end it "should use :transaction option if given" do o = @c.load(:id => 3, :x => 1, :y => nil) o.use_transactions = true o.save(:columns=>:y, :transaction=>false) DB.sqls.should == ["UPDATE items SET y = NULL WHERE (id = 3)"] o = @c.load(:id => 3, :x => 1, :y => nil) o.use_transactions = false o.save(:columns=>:y, :transaction=>true) DB.sqls.should == ["BEGIN", "UPDATE items SET y = NULL WHERE (id = 3)", "COMMIT"] end it "should rollback if before_save returns false and raise_on_save_failure = true" do o = @c.load(:id => 3, :x => 1, :y => nil) o.use_transactions = true o.raise_on_save_failure = true def o.before_save false end proc { o.save(:columns=>:y) }.should raise_error(Sequel::BeforeHookFailed) DB.sqls.should == ["BEGIN", "ROLLBACK"] end it "should rollback if before_save returns false and :raise_on_failure option is true" do o = @c.load(:id => 3, :x => 1, :y => nil) o.use_transactions = true o.raise_on_save_failure = false def o.before_save false end proc { o.save(:columns=>:y, :raise_on_failure => true) }.should raise_error(Sequel::BeforeHookFailed) DB.sqls.should == ["BEGIN", "ROLLBACK"] end it "should not rollback outer transactions if before_save returns false and raise_on_save_failure = false" do o = @c.load(:id => 3, :x => 1, :y => nil) o.use_transactions = true o.raise_on_save_failure = false def o.before_save false end DB.transaction do o.save(:columns=>:y).should == nil DB.run "BLAH" end DB.sqls.should == ["BEGIN", "BLAH", "COMMIT"] end it "should rollback if before_save returns false and raise_on_save_failure 
= false" do o = @c.load(:id => 3, :x => 1, :y => nil) o.use_transactions = true o.raise_on_save_failure = false def o.before_save false end o.save(:columns=>:y).should == nil DB.sqls.should == ["BEGIN", "ROLLBACK"] end it "should not rollback if before_save throws Rollback and use_transactions = false" do o = @c.load(:id => 3, :x => 1, :y => nil) o.use_transactions = false def o.before_save raise Sequel::Rollback end proc { o.save(:columns=>:y) }.should raise_error(Sequel::Rollback) DB.sqls.should == [] end it "should support a :server option to set the server/shard to use" do db = Sequel.mock(:fetch=>{:id=>13, :x=>1}, :autoid=>proc{13}, :numrows=>1, :servers=>{:s1=>{}}) c = Class.new(Sequel::Model(db[:items])) c.columns :id, :x db.sqls o = c.new(:x => 1) o.save(:server=>:s1) db.sqls.should == ["INSERT INTO items (x) VALUES (1) -- s1", "SELECT * FROM items WHERE (id = 13) LIMIT 1 -- s1"] o.save(:server=>:s1, :transaction=>true) db.sqls.should == ["BEGIN -- s1", "UPDATE items SET x = 1 WHERE (id = 13) -- s1", 'COMMIT -- s1'] end end describe "Model#set_server" do before do @db = Sequel.mock(:fetch=>{:id=>13, :x=>1}, :autoid=>proc{13}, :numrows=>1, :servers=>{:s1=>{}}) @c = Class.new(Sequel::Model(@db[:items])) do columns :id, :x end @db.sqls end it "should set the server to use when inserting" do @c.new(:x => 1).set_server(:s1).save @db.sqls.should == ["INSERT INTO items (x) VALUES (1) -- s1", "SELECT * FROM items WHERE (id = 13) LIMIT 1 -- s1"] end it "should set the server to use when updating" do @c.load(:id=>13, :x => 1).set_server(:s1).save @db.sqls.should == ["UPDATE items SET x = 1 WHERE (id = 13) -- s1"] end it "should set the server to use for transactions when saving" do @c.load(:id=>13, :x => 1).set_server(:s1).save(:transaction=>true) @db.sqls.should == ["BEGIN -- s1", "UPDATE items SET x = 1 WHERE (id = 13) -- s1", 'COMMIT -- s1'] end it "should set the server to use when deleting" do @c.load(:id=>13).set_server(:s1).delete @db.sqls.should == ["DELETE FROM items WHERE (id = 13) -- s1"] end it "should set the server to use for transactions when destroying" do o = @c.load(:id=>13).set_server(:s1) o.use_transactions = true o.destroy @db.sqls.should == ["BEGIN -- s1", "DELETE FROM items WHERE (id = 13) -- s1", 'COMMIT -- s1'] end it "should set the server on this if this is already loaded" do o = @c.load(:id=>13, :x => 1) o.this o.set_server(:s1) o.this.opts[:server].should == :s1 end it "should set the server on this if this is not already loaded" do @c.load(:id=>13, :x => 1).set_server(:s1).this.opts[:server].should == :s1 end end describe "Model#freeze" do before do class ::Album < Sequel::Model columns :id class B < Sequel::Model columns :id, :album_id end end @o = Album.load(:id=>1).freeze DB.sqls end after do Object.send(:remove_const, :Album) end it "should freeze the object" do @o.frozen?.should be_true end it "should freeze the object if the model doesn't have a primary key" do Album.no_primary_key @o = Album.load(:id=>1).freeze @o.frozen?.should be_true end it "should freeze the object's values, associations, changed_columns, errors, and this" do @o.values.frozen?.should be_true @o.changed_columns.frozen?.should be_true @o.errors.frozen?.should be_true @o.this.frozen?.should be_true end it "should still have working class attr overriddable methods" do Sequel::Model::BOOLEAN_SETTINGS.each{|m| @o.send(m) == Album.send(m)} end it "should have working new? method" do @o.new?.should be_false Album.new.freeze.new?.should be_true end it "should have working valid? 
method" do @o.valid?.should be_true o = Album.new def o.validate() errors.add(:foo, '') end o.freeze o.valid?.should be_false end it "should raise an Error if trying to save/destroy/delete/refresh" do proc{@o.save}.should raise_error(Sequel::Error) proc{@o.destroy}.should raise_error(Sequel::Error) proc{@o.delete}.should raise_error(Sequel::Error) proc{@o.refresh}.should raise_error(Sequel::Error) @o.db.sqls.should == [] end end describe "Model#marshallable" do before do class ::Album < Sequel::Model columns :id, :x end end after do Object.send(:remove_const, :Album) end it "should make an object marshallable" do i = Album.new(:x=>2) s = nil i2 = nil i.marshallable! proc{s = Marshal.dump(i)}.should_not raise_error proc{i2 = Marshal.load(s)}.should_not raise_error i2.should == i i.save i.marshallable! proc{s = Marshal.dump(i)}.should_not raise_error proc{i2 = Marshal.load(s)}.should_not raise_error i2.should == i i.save i.marshallable! proc{s = Marshal.dump(i)}.should_not raise_error proc{i2 = Marshal.load(s)}.should_not raise_error i2.should == i end end describe "Model#modified?" do before do @c = Class.new(Sequel::Model(:items)) @c.class_eval do columns :id, :x @db_schema = {:x => {:type => :integer}} end DB.reset end it "should be true if the object is new" do @c.new.modified?.should == true end it "should be false if the object has not been modified" do @c.load(:id=>1).modified?.should == false end it "should be true if the object has been modified" do o = @c.load(:id=>1, :x=>2) o.x = 3 o.modified?.should == true end it "should be true if the object is marked modified!" do o = @c.load(:id=>1, :x=>2) o.modified! o.modified?.should == true end it "should be false if the object is marked modified! after saving until modified! again" do o = @c.load(:id=>1, :x=>2) o.modified! o.save o.modified?.should == false o.modified! o.modified?.should == true end it "should be false if a column value is set that is the same as the current value after typecasting" do o = @c.load(:id=>1, :x=>2) o.x = '2' o.modified?.should == false end it "should be true if a column value is set that is the different as the current value after typecasting" do o = @c.load(:id=>1, :x=>'2') o.x = '2' o.modified?.should == true end it "should be true if given a column argument and the column has been changed" do o = @c.new o.modified?(:id).should be_false o.id = 1 o.modified?(:id).should be_true end end describe "Model#modified!" do before do @c = Class.new(Sequel::Model(:items)) @c.class_eval do columns :id, :x end DB.reset end it "should mark the object as modified so that save_changes still runs the callbacks" do o = @c.load(:id=>1, :x=>2) def o.after_save values[:x] = 3 end o.update({}) o.x.should == 2 o.modified! 
o.update({}) o.x.should == 3 o.db.sqls.should == [] end it "should mark given column argument as modified" do o = @c.load(:id=>1, :x=>2) o.modified!(:x) o.changed_columns.should == [:x] o.save o.db.sqls.should == ["UPDATE items SET x = 2 WHERE (id = 1)"] end end describe "Model#save_changes" do before do @c = Class.new(Sequel::Model(:items)) do unrestrict_primary_key columns :id, :x, :y end DB.reset end it "should always save if the object is new" do o = @c.new(:x => 1) o.save_changes DB.sqls.first.should == "INSERT INTO items (x) VALUES (1)" end it "should take options passed to save" do o = @c.new(:x => 1) def o.before_validation; false; end proc{o.save_changes}.should raise_error(Sequel::Error) DB.sqls.should == [] o.save_changes(:validate=>false) DB.sqls.first.should == "INSERT INTO items (x) VALUES (1)" end it "should do nothing if no changed columns" do o = @c.load(:id => 3, :x => 1, :y => nil) o.save_changes DB.sqls.should == [] end it "should do nothing if modified? is false" do o = @c.load(:id => 3, :x => 1, :y => nil) def o.modified?; false; end o.save_changes DB.sqls.should == [] end it "should update only changed columns" do o = @c.load(:id => 3, :x => 1, :y => nil) o.x = 2 o.save_changes DB.sqls.should == ["UPDATE items SET x = 2 WHERE (id = 3)"] o.save_changes o.save_changes DB.sqls.should == [] o.y = 4 o.save_changes DB.sqls.should == ["UPDATE items SET y = 4 WHERE (id = 3)"] o.save_changes o.save_changes DB.sqls.should == [] end it "should not consider columns changed if the values did not change" do o = @c.load(:id => 3, :x => 1, :y => nil) o.x = 1 o.save_changes DB.sqls.should == [] o.x = 3 o.save_changes DB.sqls.should == ["UPDATE items SET x = 3 WHERE (id = 3)"] o[:y] = nil o.save_changes DB.sqls.should == [] o[:y] = 4 o.save_changes DB.sqls.should == ["UPDATE items SET y = 4 WHERE (id = 3)"] end it "should clear changed_columns" do o = @c.load(:id => 3, :x => 1, :y => nil) o.x = 4 o.changed_columns.should == [:x] o.save_changes o.changed_columns.should == [] end it "should update columns changed in a before_update hook" do o = @c.load(:id => 3, :x => 1, :y => nil) @c.send(:define_method, :before_update){self.x += 1} o.save_changes DB.sqls.should == [] o.x = 2 o.save_changes DB.sqls.should == ["UPDATE items SET x = 3 WHERE (id = 3)"] o.save_changes DB.sqls.should == [] o.x = 4 o.save_changes DB.sqls.should == ["UPDATE items SET x = 5 WHERE (id = 3)"] end it "should update columns changed in a before_save hook" do o = @c.load(:id => 3, :x => 1, :y => nil) @c.send(:define_method, :before_update){self.x += 1} o.save_changes DB.sqls.should == [] o.x = 2 o.save_changes DB.sqls.should == ["UPDATE items SET x = 3 WHERE (id = 3)"] o.save_changes DB.sqls.should == [] o.x = 4 o.save_changes DB.sqls.should == ["UPDATE items SET x = 5 WHERE (id = 3)"] end end describe "Model#new?" 
do before do @c = Class.new(Sequel::Model(:items)) do unrestrict_primary_key columns :x end DB.reset end it "should be true for a new instance" do n = @c.new(:x => 1) n.should be_new end it "should be false after saving" do n = @c.new(:x => 1) n.save n.should_not be_new end end describe Sequel::Model, "with a primary key" do it "should default to :id" do model_a = Class.new Sequel::Model model_a.primary_key.should == :id end it "should be changed through 'set_primary_key'" do model_a = Class.new(Sequel::Model){ set_primary_key :a } model_a.primary_key.should == :a end it "should accept single argument composite keys" do model_a = Class.new(Sequel::Model){ set_primary_key [:a, :b] } model_a.primary_key.should == [:a, :b] end end describe Sequel::Model, "without a primary key" do it "should return nil for primary key" do Class.new(Sequel::Model){no_primary_key}.primary_key.should be_nil end it "should raise a Sequel::Error on 'this'" do instance = Class.new(Sequel::Model){no_primary_key}.new proc{instance.this}.should raise_error(Sequel::Error) end end describe Sequel::Model, "#this" do before do @example = Class.new(Sequel::Model(:examples)) @example.columns :id, :a, :x, :y end it "should return a dataset identifying the record" do instance = @example.load(:id => 3) instance.this.sql.should == "SELECT * FROM examples WHERE (id = 3) LIMIT 1" end it "should support arbitrary primary keys" do @example.set_primary_key :a instance = @example.load(:a => 3) instance.this.sql.should == "SELECT * FROM examples WHERE (a = 3) LIMIT 1" end it "should support composite primary keys" do @example.set_primary_key [:x, :y] instance = @example.load(:x => 4, :y => 5) instance.this.sql.should =~ /SELECT \* FROM examples WHERE \(\([xy] = [45]\) AND \([xy] = [45]\)\) LIMIT 1/ end end describe "Model#pk" do before do @m = Class.new(Sequel::Model) @m.columns :id, :x, :y end it "should by default return the value of the :id column" do m = @m.load(:id => 111, :x => 2, :y => 3) m.pk.should == 111 end it "should return the primary key value for custom primary key" do @m.set_primary_key :x m = @m.load(:id => 111, :x => 2, :y => 3) m.pk.should == 2 end it "should return the primary key value for composite primary key" do @m.set_primary_key [:y, :x] m = @m.load(:id => 111, :x => 2, :y => 3) m.pk.should == [3, 2] end it "should raise if no primary key" do @m.set_primary_key nil m = @m.new(:id => 111, :x => 2, :y => 3) proc {m.pk}.should raise_error(Sequel::Error) @m.no_primary_key m = @m.new(:id => 111, :x => 2, :y => 3) proc {m.pk}.should raise_error(Sequel::Error) end end describe "Model#pk_hash" do before do @m = Class.new(Sequel::Model) @m.columns :id, :x, :y end it "should by default return a hash with the value of the :id column" do m = @m.load(:id => 111, :x => 2, :y => 3) m.pk_hash.should == {:id => 111} end it "should return a hash with the primary key value for custom primary key" do @m.set_primary_key :x m = @m.load(:id => 111, :x => 2, :y => 3) m.pk_hash.should == {:x => 2} end it "should return a hash with the primary key values for composite primary key" do @m.set_primary_key [:y, :x] m = @m.load(:id => 111, :x => 2, :y => 3) m.pk_hash.should == {:y => 3, :x => 2} end it "should raise if no primary key" do @m.set_primary_key nil m = @m.new(:id => 111, :x => 2, :y => 3) proc {m.pk_hash}.should raise_error(Sequel::Error) @m.no_primary_key m = @m.new(:id => 111, :x => 2, :y => 3) proc {m.pk_hash}.should raise_error(Sequel::Error) end end describe Sequel::Model, "#set" do before do @c =
Class.new(Sequel::Model(:items)) do set_primary_key :id columns :x, :y, :id end @c.strict_param_setting = false @o1 = @c.new @o2 = @c.load(:id => 5) DB.reset end it "should filter the given params using the model columns" do @o1.set(:x => 1, :z => 2) @o1.values.should == {:x => 1} DB.sqls.should == [] @o2.set(:y => 1, :abc => 2) @o2.values.should == {:y => 1, :id=> 5} DB.sqls.should == [] end it "should work with both strings and symbols" do @o1.set('x'=> 1, 'z'=> 2) @o1.values.should == {:x => 1} DB.sqls.should == [] @o2.set('y'=> 1, 'abc'=> 2) @o2.values.should == {:y => 1, :id=> 5} DB.sqls.should == [] end it "should support virtual attributes" do @c.send(:define_method, :blah=){|v| self.x = v} @o1.set(:blah => 333) @o1.values.should == {:x => 333} DB.sqls.should == [] @o1.set('blah'=> 334) @o1.values.should == {:x => 334} DB.sqls.should == [] end it "should not modify the primary key" do @o1.set(:x => 1, :id => 2) @o1.values.should == {:x => 1} DB.sqls.should == [] @o2.set('y'=> 1, 'id'=> 2) @o2.values.should == {:y => 1, :id=> 5} DB.sqls.should == [] end it "should return self" do returned_value = @o1.set(:x => 1, :z => 2) returned_value.should == @o1 DB.sqls.should == [] end it "should raise error if strict_param_setting is true and method does not exist" do @o1.strict_param_setting = true proc{@o1.set('foo' => 1)}.should raise_error(Sequel::Error) end it "should raise error if strict_param_setting is true and column is a primary key" do @o1.strict_param_setting = true proc{@o1.set('id' => 1)}.should raise_error(Sequel::Error) end it "should raise error if strict_param_setting is true and column is restricted" do @o1.strict_param_setting = true @c.set_allowed_columns proc{@o1.set('x' => 1)}.should raise_error(Sequel::Error) end it "should not create a symbol if strict_param_setting is true and string is given" do @o1.strict_param_setting = true l = Symbol.all_symbols.length proc{@o1.set('sadojafdso' => 1)}.should raise_error(Sequel::Error) Symbol.all_symbols.length.should == l end it "#set should correctly handle cases where an instance method is added to the class" do @o1.set(:x => 1) @o1.values.should == {:x => 1} @c.class_eval do def z=(v) self[:z] = v end end @o1.set(:x => 2, :z => 3) @o1.values.should == {:x => 2, :z=>3} end it "#set should correctly handle cases where a singleton method is added to the object" do @o1.set(:x => 1) @o1.values.should == {:x => 1} def @o1.z=(v) self[:z] = v end @o1.set(:x => 2, :z => 3) @o1.values.should == {:x => 2, :z=>3} end it "#set should correctly handle cases where a module with a setter method is included in the class" do @o1.set(:x => 1) @o1.values.should == {:x => 1} @c.send(:include, Module.new do def z=(v) self[:z] = v end end) @o1.set(:x => 2, :z => 3) @o1.values.should == {:x => 2, :z=>3} end it "#set should correctly handle cases where the object extends a module with a setter method " do @o1.set(:x => 1) @o1.values.should == {:x => 1} @o1.extend(Module.new do def z=(v) self[:z] = v end end) @o1.set(:x => 2, :z => 3) @o1.values.should == {:x => 2, :z=>3} end end describe Sequel::Model, "#update" do before do @c = Class.new(Sequel::Model(:items)) do set_primary_key :id columns :x, :y, :id end @c.strict_param_setting = false @o1 = @c.new @o2 = @c.load(:id => 5) DB.reset end it "should filter the given params using the model columns" do @o1.update(:x => 1, :z => 2) DB.sqls.should == ["INSERT INTO items (x) VALUES (1)", "SELECT * FROM items WHERE (id = 10) LIMIT 1"] DB.reset @o2.update(:y => 1, :abc => 2) DB.sqls.should == ["UPDATE items 
SET y = 1 WHERE (id = 5)"] end it "should support virtual attributes" do @c.send(:define_method, :blah=){|v| self.x = v} @o1.update(:blah => 333) DB.sqls.should == ["INSERT INTO items (x) VALUES (333)", "SELECT * FROM items WHERE (id = 10) LIMIT 1"] end it "should not modify the primary key" do @o1.update(:x => 1, :id => 2) DB.sqls.should == ["INSERT INTO items (x) VALUES (1)", "SELECT * FROM items WHERE (id = 10) LIMIT 1"] DB.reset @o2.update('y'=> 1, 'id'=> 2) @o2.values.should == {:y => 1, :id=> 5} DB.sqls.should == ["UPDATE items SET y = 1 WHERE (id = 5)"] end end describe Sequel::Model, "#set_fields" do before do @c = Class.new(Sequel::Model(:items)) do set_primary_key :id columns :x, :y, :z, :id end @o1 = @c.new DB.reset end it "should set only the given fields" do @o1.set_fields({:x => 1, :y => 2, :z=>3, :id=>4}, [:x, :y]) @o1.values.should == {:x => 1, :y => 2} @o1.set_fields({:x => 9, :y => 8, :z=>6, :id=>7}, [:x, :y, :id]) @o1.values.should == {:x => 9, :y => 8, :id=>7} DB.sqls.should == [] end it "should lookup into the hash without checking if the entry exists" do @o1.set_fields({:x => 1}, [:x, :y]) @o1.values.should == {:x => 1, :y => nil} @o1.set_fields(Hash.new(2), [:x, :y]) @o1.values.should == {:x => 2, :y => 2} end it "should skip missing fields if :missing=>:skip option is used" do @o1.set_fields({:x => 3}, [:x, :y], :missing=>:skip) @o1.values.should == {:x => 3} @o1.set_fields({"x" => 4}, [:x, :y], :missing=>:skip) @o1.values.should == {:x => 4} @o1.set_fields(Hash.new(2).merge(:x=>2), [:x, :y], :missing=>:skip) @o1.values.should == {:x => 2} @o1.set_fields({:x => 1, :y => 2, :z=>3, :id=>4}, [:x, :y], :missing=>:skip) @o1.values.should == {:x => 1, :y => 2} end it "should raise for missing fields if :missing=>:raise option is used" do proc{@o1.set_fields({:x => 1}, [:x, :y], :missing=>:raise)}.should raise_error(Sequel::Error) proc{@o1.set_fields(Hash.new(2).merge(:x=>2), [:x, :y], :missing=>:raise)}.should raise_error(Sequel::Error) proc{@o1.set_fields({"x" => 1}, [:x, :y], :missing=>:raise)}.should raise_error(Sequel::Error) @o1.set_fields({:x => 5, "y"=>2}, [:x, :y], :missing=>:raise) @o1.values.should == {:x => 5, :y => 2} @o1.set_fields({:x => 1, :y => 3, :z=>3, :id=>4}, [:x, :y], :missing=>:raise) @o1.values.should == {:x => 1, :y => 3} end it "should use default behavior for an unrecognized :missing option" do @o1.set_fields({:x => 1, :y => 2, :z=>3, :id=>4}, [:x, :y], :missing=>:foo) @o1.values.should == {:x => 1, :y => 2} @o1.set_fields({:x => 9, :y => 8, :z=>6, :id=>7}, [:x, :y, :id], :missing=>:foo) @o1.values.should == {:x => 9, :y => 8, :id=>7} DB.sqls.should == [] end it "should respect model's default_set_fields_options" do @c.default_set_fields_options = {:missing=>:skip} @o1.set_fields({:x => 3}, [:x, :y]) @o1.values.should == {:x => 3} @o1.set_fields({:x => 4}, [:x, :y], {}) @o1.values.should == {:x => 4} proc{@o1.set_fields({:x => 3}, [:x, :y], :missing=>:raise)}.should raise_error(Sequel::Error) @c.default_set_fields_options = {:missing=>:raise} proc{@o1.set_fields({:x => 3}, [:x, :y])}.should raise_error(Sequel::Error) proc{@o1.set_fields({:x => 3}, [:x, :y], {})}.should raise_error(Sequel::Error) @o1.set_fields({:x => 5}, [:x, :y], :missing=>:skip) @o1.values.should == {:x => 5} @o1.set_fields({:x => 5}, [:x, :y], :missing=>nil) @o1.values.should == {:x => 5, :y=>nil} DB.sqls.should == [] end it "should respect model's default_set_fields_options in a subclass" do @c.default_set_fields_options = {:missing=>:skip} o = Class.new(@c).new 
o.set_fields({:x => 3}, [:x, :y]) o.values.should == {:x => 3} end end describe Sequel::Model, "#update_fields" do before do @c = Class.new(Sequel::Model(:items)) do set_primary_key :id columns :x, :y, :z, :id end @c.strict_param_setting = true @o1 = @c.load(:id=>1) DB.reset end it "should set only the given fields, and then save the changes to the record" do @o1.update_fields({:x => 1, :y => 2, :z=>3, :id=>4}, [:x, :y]) @o1.values.should == {:x => 1, :y => 2, :id=>1} sqls = DB.sqls sqls.pop.should =~ /UPDATE items SET [xy] = [12], [xy] = [12] WHERE \(id = 1\)/ sqls.should == [] @o1.update_fields({:x => 1, :y => 5, :z=>6, :id=>7}, [:x, :y]) @o1.values.should == {:x => 1, :y => 5, :id=>1} DB.sqls.should == ["UPDATE items SET y = 5 WHERE (id = 1)"] end it "should support :missing=>:skip option" do @o1.update_fields({:x => 1, :z=>3, :id=>4}, [:x, :y], :missing=>:skip) @o1.values.should == {:x => 1, :id=>1} DB.sqls.should == ["UPDATE items SET x = 1 WHERE (id = 1)"] end it "should support :missing=>:raise option" do proc{@o1.update_fields({:x => 1}, [:x, :y], :missing=>:raise)}.should raise_error(Sequel::Error) end it "should respect model's default_set_fields_options" do @c.default_set_fields_options = {:missing=>:skip} @o1.update_fields({:x => 3}, [:x, :y]) @o1.values.should == {:x => 3, :id=>1} DB.sqls.should == ["UPDATE items SET x = 3 WHERE (id = 1)"] @c.default_set_fields_options = {:missing=>:raise} proc{@o1.update_fields({:x => 3}, [:x, :y])}.should raise_error(Sequel::Error) DB.sqls.should == [] end end describe Sequel::Model, "#(set|update)_(all|only)" do before do @c = Class.new(Sequel::Model(:items)) do set_primary_key :id columns :x, :y, :z, :id set_allowed_columns :x end @c.strict_param_setting = false @o1 = @c.new DB.reset end it "should raise errors if not all hash fields can be set and strict_param_setting is true" do @c.strict_param_setting = true proc{@c.new.set_all(:x => 1, :y => 2, :z=>3, :use_after_commit_rollback => false)}.should raise_error(Sequel::Error) (o = @c.new).set_all(:x => 1, :y => 2, :z=>3) o.values.should == {:x => 1, :y => 2, :z=>3} proc{@c.new.set_only({:x => 1, :y => 2, :z=>3, :id=>4}, :x, :y)}.should raise_error(Sequel::Error) proc{@c.new.set_only({:x => 1, :y => 2, :z=>3}, :x, :y)}.should raise_error(Sequel::Error) (o = @c.new).set_only({:x => 1, :y => 2}, :x, :y) o.values.should == {:x => 1, :y => 2} end it "#set_all should set all attributes including the primary key" do @o1.set_all(:x => 1, :y => 2, :z=>3, :id=>4) @o1.values.should == {:id =>4, :x => 1, :y => 2, :z=>3} end it "#set_all should not set restricted fields" do @o1.set_all(:x => 1, :use_after_commit_rollback => false) @o1.use_after_commit_rollback.should be_true @o1.values.should == {:x => 1} end it "#set_only should only set given attributes" do @o1.set_only({:x => 1, :y => 2, :z=>3, :id=>4}, [:x, :y]) @o1.values.should == {:x => 1, :y => 2} @o1.set_only({:x => 4, :y => 5, :z=>6, :id=>7}, :x, :y) @o1.values.should == {:x => 4, :y => 5} @o1.set_only({:x => 9, :y => 8, :z=>6, :id=>7}, :x, :y, :id) @o1.values.should == {:x => 9, :y => 8, :id=>7} end it "#update_all should update all attributes" do @c.new.update_all(:x => 1) DB.sqls.should == ["INSERT INTO items (x) VALUES (1)", "SELECT * FROM items WHERE (id = 10) LIMIT 1"] @c.new.update_all(:y => 1) DB.sqls.should == ["INSERT INTO items (y) VALUES (1)", "SELECT * FROM items WHERE (id = 10) LIMIT 1"] @c.new.update_all(:z => 1) DB.sqls.should == ["INSERT INTO items (z) VALUES (1)", "SELECT * FROM items WHERE (id = 10) LIMIT 1"] end it
"#update_only should only update given attributes" do @o1.update_only({:x => 1, :y => 2, :z=>3, :id=>4}, [:x]) DB.sqls.should == ["INSERT INTO items (x) VALUES (1)", "SELECT * FROM items WHERE (id = 10) LIMIT 1"] @c.new.update_only({:x => 1, :y => 2, :z=>3, :id=>4}, :x) DB.sqls.should == ["INSERT INTO items (x) VALUES (1)", "SELECT * FROM items WHERE (id = 10) LIMIT 1"] end end describe Sequel::Model, "#destroy with filtered dataset" do before do @model = Class.new(Sequel::Model(DB[:items].where(:a=>1))) @model.columns :id, :a @instance = @model.load(:id => 1234) DB.reset end it "should raise a NoExistingObject exception if the dataset delete call doesn't return 1" do def (@instance.this).execute_dui(*a) 0 end proc{@instance.delete}.should raise_error(Sequel::NoExistingObject) def (@instance.this).execute_dui(*a) 2 end proc{@instance.delete}.should raise_error(Sequel::NoExistingObject) def (@instance.this).execute_dui(*a) 1 end proc{@instance.delete}.should_not raise_error @instance.require_modification = false def (@instance.this).execute_dui(*a) 0 end proc{@instance.delete}.should_not raise_error def (@instance.this).execute_dui(*a) 2 end proc{@instance.delete}.should_not raise_error end it "should include WHERE clause when deleting" do @instance.destroy DB.sqls.should == ["DELETE FROM items WHERE ((a = 1) AND (id = 1234))"] end end describe Sequel::Model, "#destroy" do before do @model = Class.new(Sequel::Model(:items)) @model.columns :id @instance = @model.load(:id => 1234) DB.reset end it "should return self" do @model.send(:define_method, :after_destroy){3} @instance.destroy.should == @instance end it "should raise a NoExistingObject exception if the dataset delete call doesn't return 1" do def (@model.dataset).execute_dui(*a) 0 end proc{@instance.delete}.should raise_error(Sequel::NoExistingObject) def (@model.dataset).execute_dui(*a) 2 end proc{@instance.delete}.should raise_error(Sequel::NoExistingObject) def (@model.dataset).execute_dui(*a) 1 end proc{@instance.delete}.should_not raise_error @instance.require_modification = false def (@model.dataset).execute_dui(*a) 0 end proc{@instance.delete}.should_not raise_error def (@model.dataset).execute_dui(*a) 2 end proc{@instance.delete}.should_not raise_error end it "should run within a transaction if use_transactions is true" do @instance.use_transactions = true @instance.destroy DB.sqls.should == ["BEGIN", "DELETE FROM items WHERE id = 1234", "COMMIT"] end it "should not run within a transaction if use_transactions is false" do @instance.use_transactions = false @instance.destroy DB.sqls.should == ["DELETE FROM items WHERE id = 1234"] end it "should run within a transaction if :transaction option is true" do @instance.use_transactions = false @instance.destroy(:transaction => true) DB.sqls.should == ["BEGIN", "DELETE FROM items WHERE id = 1234", "COMMIT"] end it "should not run within a transaction if :transaction option is false" do @instance.use_transactions = true @instance.destroy(:transaction => false) DB.sqls.should == ["DELETE FROM items WHERE id = 1234"] end it "should run before_destroy and after_destroy hooks" do @model.send(:define_method, :before_destroy){DB.execute('before blah')} @model.send(:define_method, :after_destroy){DB.execute('after blah')} @instance.destroy DB.sqls.should == ["before blah", "DELETE FROM items WHERE id = 1234", "after blah"] end end describe Sequel::Model, "#exists?" 
  before do
    @model = Class.new(Sequel::Model(:items))
    @model.instance_dataset._fetch = @model.dataset._fetch = proc{|sql| {:x=>1} if sql =~ /id = 1/}
    DB.reset
  end

  it "should do a query to check if the record exists" do
    @model.load(:id=>1).exists?.should be_true
    DB.sqls.should == ['SELECT 1 AS one FROM items WHERE (id = 1) LIMIT 1']
  end

  it "should return false when #this.count == 0" do
    @model.load(:id=>2).exists?.should be_false
    DB.sqls.should == ['SELECT 1 AS one FROM items WHERE (id = 2) LIMIT 1']
  end

  it "should return false without issuing a query if the model object is new" do
    @model.new.exists?.should be_false
    DB.sqls.should == []
  end
end

describe Sequel::Model, "#each" do
  before do
    @model = Class.new(Sequel::Model(:items))
    @model.columns :a, :b, :id
    @m = @model.load(:a => 1, :b => 2, :id => 4444)
  end

  specify "should iterate over the values" do
    h = {}
    @m.each{|k, v| h[k] = v}
    h.should == {:a => 1, :b => 2, :id => 4444}
  end
end

describe Sequel::Model, "#keys" do
  before do
    @model = Class.new(Sequel::Model(:items))
    @model.columns :a, :b, :id
    @m = @model.load(:a => 1, :b => 2, :id => 4444)
  end

  specify "should return the value keys" do
    @m.keys.sort_by{|k| k.to_s}.should == [:a, :b, :id]
    @model.new.keys.should == []
  end
end

describe Sequel::Model, "#==" do
  specify "should compare instances by values" do
    z = Class.new(Sequel::Model)
    z.columns :id, :x
    a = z.load(:id => 1, :x => 3)
    b = z.load(:id => 1, :x => 4)
    c = z.load(:id => 1, :x => 3)
    a.should_not == b
    a.should == c
    b.should_not == c
  end

  specify "should be aliased to #eql?" do
    z = Class.new(Sequel::Model)
    z.columns :id, :x
    a = z.load(:id => 1, :x => 3)
    b = z.load(:id => 1, :x => 4)
    c = z.load(:id => 1, :x => 3)
    a.eql?(b).should == false
    a.eql?(c).should == true
    b.eql?(c).should == false
  end
end

describe Sequel::Model, "#===" do
  specify "should compare instances by class and pk if pk is not nil" do
    z = Class.new(Sequel::Model)
    z.columns :id, :x
    y = Class.new(Sequel::Model)
    y.columns :id, :x
    a = z.load(:id => 1, :x => 3)
    b = z.load(:id => 1, :x => 4)
    c = z.load(:id => 2, :x => 3)
    d = y.load(:id => 1, :x => 3)
    a.should === b
    a.should_not === c
    a.should_not === d
  end

  specify "should always be false if the primary key is nil" do
    z = Class.new(Sequel::Model)
    z.columns :id, :x
    y = Class.new(Sequel::Model)
    y.columns :id, :x
    a = z.new(:x => 3)
    b = z.new(:x => 4)
    c = z.new(:x => 3)
    d = y.new(:x => 3)
    a.should_not === b
    a.should_not === c
    a.should_not === d
  end
end

describe Sequel::Model, "#hash" do
  specify "should be the same only for objects with the same class and pk if the pk is not nil" do
    z = Class.new(Sequel::Model)
    z.columns :id, :x
    y = Class.new(Sequel::Model)
    y.columns :id, :x
    a = z.load(:id => 1, :x => 3)
    a.hash.should == z.load(:id => 1, :x => 4).hash
    a.hash.should_not == z.load(:id => 2, :x => 3).hash
    a.hash.should_not == y.load(:id => 1, :x => 3).hash
  end

  specify "should be the same only for objects with the same class and values if the pk is nil" do
    z = Class.new(Sequel::Model)
    z.columns :id, :x
    y = Class.new(Sequel::Model)
    y.columns :id, :x
    a = z.new(:x => 3)
    a.hash.should_not == z.new(:x => 4).hash
    a.hash.should == z.new(:x => 3).hash
    a.hash.should_not == y.new(:x => 3).hash
  end

  specify "should be the same only for objects with the same class and pk if pk is composite and all values are non-NULL" do
    z = Class.new(Sequel::Model)
    z.columns :id, :id2, :x
    z.set_primary_key([:id, :id2])
    y = Class.new(Sequel::Model)
    y.columns :id, :id2, :x
    y.set_primary_key([:id, :id2])
    a = z.load(:id => 1, :id2=>2, :x => 3)
    a.hash.should == z.load(:id => 1, :id2=>2, :x => 4).hash
    a.hash.should_not == z.load(:id => 2, :id2=>1, :x => 3).hash
    a.hash.should_not == y.load(:id => 1, :id2=>1, :x => 3).hash
  end

  specify "should be the same only for objects with the same class and values if pk is composite and one value is NULL" do
    z = Class.new(Sequel::Model)
    z.columns :id, :id2, :x
    z.set_primary_key([:id, :id2])
    y = Class.new(Sequel::Model)
    y.columns :id, :id2, :x
    y.set_primary_key([:id, :id2])
    a = z.load(:id => 1, :id2 => nil, :x => 3)
    a.hash.should == z.load(:id => 1, :id2=>nil, :x => 3).hash
    a.hash.should_not == z.load(:id => 1, :id2=>nil, :x => 4).hash
    a.hash.should_not == y.load(:id => 1, :id2=>nil, :x => 3).hash
    a = z.load(:id =>nil, :id2 => nil, :x => 3)
    a.hash.should == z.load(:id => nil, :id2=>nil, :x => 3).hash
    a.hash.should_not == z.load(:id => nil, :id2=>nil, :x => 4).hash
    a.hash.should_not == y.load(:id => nil, :id2=>nil, :x => 3).hash
    a = z.load(:id => 1, :x => 3)
    a.hash.should == z.load(:id => 1, :x => 3).hash
    a.hash.should_not == z.load(:id => 1, :id2=>nil, :x => 3).hash
    a.hash.should_not == z.load(:id => 1, :x => 4).hash
    a.hash.should_not == y.load(:id => 1, :x => 3).hash
    a = z.load(:x => 3)
    a.hash.should == z.load(:x => 3).hash
    a.hash.should_not == z.load(:id => nil, :id2=>nil, :x => 3).hash
    a.hash.should_not == z.load(:x => 4).hash
    a.hash.should_not == y.load(:x => 3).hash
  end

  specify "should be the same only for objects with the same class and values if there is no primary key" do
    z = Class.new(Sequel::Model)
    z.columns :id, :x
    z.no_primary_key
    y = Class.new(Sequel::Model)
    y.columns :id, :x
    y.no_primary_key
    a = z.new(:x => 3)
    a.hash.should_not == z.new(:x => 4).hash
    a.hash.should == z.new(:x => 3).hash
    a.hash.should_not == y.new(:x => 3).hash
  end
end

describe Sequel::Model, "#initialize" do
  before do
    @c = Class.new(Sequel::Model) do
      columns :id, :x
    end
    @c.strict_param_setting = false
  end

  specify "should accept values" do
    m = @c.new(:x => 2)
    m.values.should == {:x => 2}
  end

  specify "should not modify the primary key" do
    m = @c.new(:id => 1, :x => 2)
    m.values.should == {:x => 2}
  end

  specify "should accept no values" do
    m = @c.new
    m.values.should == {}
  end

  specify "should accept a block to execute" do
    m = @c.new {|o| o[:id] = 1234}
    m.id.should == 1234
  end

  specify "should accept virtual attributes" do
    @c.send(:define_method, :blah=){|x| @blah = x}
    @c.send(:define_method, :blah){@blah}
    m = @c.new(:x => 2, :blah => 3)
    m.values.should == {:x => 2}
    m.blah.should == 3
  end

  specify "should convert string keys into symbol keys" do
    m = @c.new('x' => 2)
    m.values.should == {:x => 2}
  end
end

describe Sequel::Model, "#initialize_set" do
  before do
    @c = Class.new(Sequel::Model){columns :id, :x, :y}
  end

  specify "should be called by initialize to set the column values" do
    @c.send(:define_method, :initialize_set){|h| set(:y => 3)}
    @c.new(:x => 2).values.should == {:y => 3}
  end

  specify "should be called with the hash given to initialize" do
    x = nil
    @c.send(:define_method, :initialize_set){|y| x = y}
    @c.new(:x => 2)
    x.should == {:x => 2}
  end

  specify "should not cause columns modified by the method to be considered as changed" do
    @c.send(:define_method, :initialize_set){|h| set(:y => 3)}
    @c.new(:x => 2).changed_columns.should == []
  end
end

describe Sequel::Model, ".create" do
  before do
    DB.reset
    @c = Class.new(Sequel::Model(:items)) do
      unrestrict_primary_key
      columns :x
    end
  end

  it "should be able to create rows in the associated table" do
    o = @c.create(:x => 1)
    o.class.should == @c
    DB.sqls.should == ['INSERT INTO items (x) VALUES (1)', "SELECT * FROM items WHERE (id = 10) LIMIT 1"]
WHERE (id = 10) LIMIT 1"] end it "should be able to create rows without any values specified" do o = @c.create o.class.should == @c DB.sqls.should == ["INSERT INTO items DEFAULT VALUES", "SELECT * FROM items WHERE (id = 10) LIMIT 1"] end it "should accept a block and call it" do o1, o2, o3 = nil, nil, nil o = @c.create {|o4| o1 = o4; o3 = o4; o2 = :blah; o3.x = 333} o.class.should == @c o1.should === o o3.should === o o2.should == :blah DB.sqls.should == ["INSERT INTO items (x) VALUES (333)", "SELECT * FROM items WHERE (id = 10) LIMIT 1"] end it "should create a row for a model with custom primary key" do @c.set_primary_key :x o = @c.create(:x => 30) o.class.should == @c DB.sqls.should == ["INSERT INTO items (x) VALUES (30)", "SELECT * FROM items WHERE (x = 30) LIMIT 1"] end end describe Sequel::Model, "#refresh" do before do @c = Class.new(Sequel::Model(:items)) do unrestrict_primary_key columns :id, :x end DB.reset end specify "should reload the instance values from the database" do @m = @c.new(:id => 555) @m[:x] = 'blah' @c.instance_dataset._fetch = @c.dataset._fetch = {:x => 'kaboom', :id => 555} @m.refresh @m[:x].should == 'kaboom' DB.sqls.should == ["SELECT * FROM items WHERE (id = 555) LIMIT 1"] end specify "should raise if the instance is not found" do @m = @c.new(:id => 555) @c.instance_dataset._fetch =@c.dataset._fetch = [] proc {@m.refresh}.should raise_error(Sequel::Error) DB.sqls.should == ["SELECT * FROM items WHERE (id = 555) LIMIT 1"] end specify "should be aliased by #reload" do @m = @c.new(:id => 555) @c.instance_dataset._fetch =@c.dataset._fetch = {:x => 'kaboom', :id => 555} @m.reload @m[:x].should == 'kaboom' DB.sqls.should == ["SELECT * FROM items WHERE (id = 555) LIMIT 1"] end end describe Sequel::Model, "typecasting" do before do @c = Class.new(Sequel::Model(:items)) do columns :x end @c.db_schema = {:x=>{:type=>:integer}} @c.raise_on_typecast_failure = true DB.reset end after do Sequel.datetime_class = Time end specify "should not convert if typecasting is turned off" do @c.typecast_on_assignment = false m = @c.new m.x = '1' m.x.should == '1' end specify "should convert to integer for an integer field" do @c.db_schema = {:x=>{:type=>:integer}} m = @c.new m.x = '1' m.x.should == 1 m.x = 1 m.x.should == 1 m.x = 1.3 m.x.should == 1 end specify "should typecast '' to nil unless type is string or blob" do [:integer, :float, :decimal, :boolean, :date, :time, :datetime].each do |x| @c.db_schema = {:x=>{:type=>x}} m = @c.new m.x = '' m.x.should == nil end [:string, :blob].each do |x| @c.db_schema = {:x=>{:type=>x}} m = @c.new m.x = '' m.x.should == '' end end specify "should not typecast '' to nil if typecast_empty_string_to_nil is false" do m = @c.new m.typecast_empty_string_to_nil = false proc{m.x = ''}.should raise_error @c.typecast_empty_string_to_nil = false proc{@c.new.x = ''}.should raise_error end specify "should handle typecasting where == raises an error on the object" do m = @c.new o = Object.new def o.==(v) raise ArgumentError end def o.to_i() 4 end m.x = o m.x.should == 4 end specify "should not typecast nil if NULLs are allowed" do @c.db_schema[:x][:allow_null] = true m = @c.new m.x = nil m.x.should == nil end specify "should raise an error if attempting to typecast nil and NULLs are not allowed" do @c.db_schema[:x][:allow_null] = false proc{@c.new.x = nil}.should raise_error(Sequel::Error) proc{@c.new.x = ''}.should raise_error(Sequel::Error) end specify "should not raise an error if NULLs are not allowed and typecasting is turned off" do 
    @c.typecast_on_assignment = false
    @c.db_schema[:x][:allow_null] = false
    m = @c.new
    m.x = nil
    m.x.should == nil
  end

  specify "should not raise when typecasting nil to NOT NULL column but raise_on_typecast_failure is off" do
    @c.raise_on_typecast_failure = false
    @c.typecast_on_assignment = true
    m = @c.new
    m.x = ''
    m.x.should == nil
    m.x = nil
    m.x.should == nil
  end

  specify "should raise an error if invalid data is used in an integer field" do
    proc{@c.new.x = 'a'}.should raise_error(Sequel::InvalidValue)
  end

  specify "should assign value if raise_on_typecast_failure is off and assigning invalid integer" do
    @c.raise_on_typecast_failure = false
    model = @c.new
    model.x = '1d'
    model.x.should == '1d'
  end

  specify "should convert to float for a float field" do
    @c.db_schema = {:x=>{:type=>:float}}
    m = @c.new
    m.x = '1.3'
    m.x.should == 1.3
    m.x = 1
    m.x.should == 1.0
    m.x = 1.3
    m.x.should == 1.3
  end

  specify "should raise an error if invalid data is used in a float field" do
    @c.db_schema = {:x=>{:type=>:float}}
    proc{@c.new.x = 'a'}.should raise_error(Sequel::InvalidValue)
  end

  specify "should assign value if raise_on_typecast_failure is off and assigning invalid float" do
    @c.raise_on_typecast_failure = false
    @c.db_schema = {:x=>{:type=>:float}}
    model = @c.new
    model.x = '1d'
    model.x.should == '1d'
  end

  specify "should convert to BigDecimal for a decimal field" do
    @c.db_schema = {:x=>{:type=>:decimal}}
    m = @c.new
    bd = BigDecimal.new('1.0')
    m.x = '1.0'
    m.x.should == bd
    m.x = 1.0
    m.x.should == bd
    m.x = 1
    m.x.should == bd
    m.x = bd
    m.x.should == bd
    m.x = '0'
    m.x.should == 0
  end

  specify "should raise an error if invalid data is used in a decimal field" do
    @c.db_schema = {:x=>{:type=>:decimal}}
    proc{@c.new.x = Date.today}.should raise_error(Sequel::InvalidValue)
    proc{@c.new.x = 'foo'}.should raise_error(Sequel::InvalidValue)
  end

  specify "should assign value if raise_on_typecast_failure is off and assigning invalid decimal" do
    @c.raise_on_typecast_failure = false
    @c.db_schema = {:x=>{:type=>:decimal}}
    model = @c.new
    time = Time.now
    model.x = time
    model.x.should == time
  end

  specify "should convert to string for a string field" do
    @c.db_schema = {:x=>{:type=>:string}}
    m = @c.new
    m.x = '1.3'
    m.x.should == '1.3'
    m.x = 1
    m.x.should == '1'
    m.x = 1.3
    m.x.should == '1.3'
  end

  specify "should convert to boolean for a boolean field" do
    @c.db_schema = {:x=>{:type=>:boolean}}
    m = @c.new
    m.x = '1.3'
    m.x.should == true
    m.x = 1
    m.x.should == true
    m.x = 1.3
    m.x.should == true
    m.x = 't'
    m.x.should == true
    m.x = 'T'
    m.x.should == true
    m.x = 'y'
    m.x.should == true
    m.x = 'Y'
    m.x.should == true
    m.x = true
    m.x.should == true
    m.x = nil
    m.x.should == nil
    m.x = ''
    m.x.should == nil
    m.x = []
    m.x.should == nil
    m.x = 'f'
    m.x.should == false
    m.x = 'F'
    m.x.should == false
    m.x = 'false'
    m.x.should == false
    m.x = 'FALSE'
    m.x.should == false
    m.x = 'n'
    m.x.should == false
    m.x = 'N'
    m.x.should == false
    m.x = 'no'
    m.x.should == false
    m.x = 'NO'
    m.x.should == false
    m.x = '0'
    m.x.should == false
    m.x = 0
    m.x.should == false
    m.x = false
    m.x.should == false
  end

  specify "should convert to date for a date field" do
    @c.db_schema = {:x=>{:type=>:date}}
    m = @c.new
    y = Date.new(2007,10,21)
    m.x = '2007-10-21'
    m.x.should == y
    m.x = Date.parse('2007-10-21')
    m.x.should == y
    m.x = Time.parse('2007-10-21')
    m.x.should == y
    m.x = DateTime.parse('2007-10-21')
    m.x.should == y
  end

  specify "should accept a hash with symbol or string keys for a date field" do
    @c.db_schema = {:x=>{:type=>:date}}
    m = @c.new
    y = Date.new(2007,10,21)
    m.x = {:year=>2007, :month=>10, :day=>21}
    m.x.should == y
    m.x = {'year'=>'2007', 'month'=>'10', 'day'=>'21'}
    m.x.should == y
  end

  specify "should raise an error if invalid data is used in a date field" do
    @c.db_schema = {:x=>{:type=>:date}}
    proc{@c.new.x = 'a'}.should raise_error(Sequel::InvalidValue)
    proc{@c.new.x = 100}.should raise_error(Sequel::InvalidValue)
  end

  specify "should assign value if raise_on_typecast_failure is off and assigning invalid date" do
    @c.raise_on_typecast_failure = false
    @c.db_schema = {:x=>{:type=>:date}}
    model = @c.new
    model.x = 4
    model.x.should == 4
  end

  specify "should convert to Sequel::SQLTime for a time field" do
    @c.db_schema = {:x=>{:type=>:time}}
    m = @c.new
    x = '10:20:30'
    y = Sequel::SQLTime.parse(x)
    m.x = x
    m.x.should == y
    m.x = y
    m.x.should == y
    m.x.should be_a_kind_of(Sequel::SQLTime)
  end

  specify "should accept a hash with symbol or string keys for a time field" do
    @c.db_schema = {:x=>{:type=>:time}}
    m = @c.new
    y = Time.parse('10:20:30')
    m.x = {:hour=>10, :minute=>20, :second=>30}
    m.x.should == y
    m.x = {'hour'=>'10', 'minute'=>'20', 'second'=>'30'}
    m.x.should == y
  end

  specify "should raise an error if invalid data is used in a time field" do
    @c.db_schema = {:x=>{:type=>:time}}
    proc{@c.new.x = '0000'}.should raise_error
    proc{@c.new.x = Date.parse('2008-10-21')}.should raise_error(Sequel::InvalidValue)
    proc{@c.new.x = DateTime.parse('2008-10-21')}.should raise_error(Sequel::InvalidValue)
  end

  specify "should assign value if raise_on_typecast_failure is off and assigning invalid time" do
    @c.raise_on_typecast_failure = false
    @c.db_schema = {:x=>{:type=>:time}}
    model = @c.new
    model.x = '0000'
    model.x.should == '0000'
  end

  specify "should convert to the Sequel.datetime_class for a datetime field" do
    @c.db_schema = {:x=>{:type=>:datetime}}
    m = @c.new
    x = '2007-10-21T10:20:30-07:00'
    y = Time.parse(x)
    m.x = x
    m.x.should == y
    m.x = DateTime.parse(x)
    m.x.should == y
    m.x = Time.parse(x)
    m.x.should == y
    m.x = Date.parse('2007-10-21')
    m.x.should == Time.parse('2007-10-21')
    Sequel.datetime_class = DateTime
    y = DateTime.parse(x)
    m.x = x
    m.x.should == y
    m.x = DateTime.parse(x)
    m.x.should == y
    m.x = Time.parse(x)
    m.x.should == y
    m.x = Date.parse('2007-10-21')
    m.x.should == DateTime.parse('2007-10-21')
  end

  specify "should accept a hash with symbol or string keys for a datetime field" do
    @c.db_schema = {:x=>{:type=>:datetime}}
    m = @c.new
    y = Time.parse('2007-10-21 10:20:30')
    m.x = {:year=>2007, :month=>10, :day=>21, :hour=>10, :minute=>20, :second=>30}
    m.x.should == y
    m.x = {'year'=>'2007', 'month'=>'10', 'day'=>'21', 'hour'=>'10', 'minute'=>'20', 'second'=>'30'}
    m.x.should == y
    Sequel.datetime_class = DateTime
    y = DateTime.parse('2007-10-21 10:20:30')
    m.x = {:year=>2007, :month=>10, :day=>21, :hour=>10, :minute=>20, :second=>30}
    m.x.should == y
    m.x = {'year'=>'2007', 'month'=>'10', 'day'=>'21', 'hour'=>'10', 'minute'=>'20', 'second'=>'30'}
    m.x.should == y
  end

  specify "should raise an error if invalid data is used in a datetime field" do
    @c.db_schema = {:x=>{:type=>:datetime}}
    proc{@c.new.x = '0000'}.should raise_error(Sequel::InvalidValue)
    Sequel.datetime_class = DateTime
    proc{@c.new.x = '0000'}.should raise_error(Sequel::InvalidValue)
    proc{@c.new.x = 'a'}.should raise_error(Sequel::InvalidValue)
  end

  specify "should assign value if raise_on_typecast_failure is off and assigning invalid datetime" do
    @c.raise_on_typecast_failure = false
    @c.db_schema = {:x=>{:type=>:datetime}}
    model = @c.new
    model.x = '0000'
    model.x.should == '0000'
    Sequel.datetime_class = DateTime
    model = @c.new
    model.x = '0000'
    model.x.should == '0000'
    model.x = 'a'
    model.x.should == 'a'
  end
end

describe "Model#lock!" do
  before do
    @c = Class.new(Sequel::Model(:items)) do
      columns :id
    end
    @c.dataset._fetch = {:id=>1}
    DB.reset
  end

  it "should do nothing if the record is a new record" do
    o = @c.new
    def o._refresh(x) raise Sequel::Error; super(x) end
    x = o.lock!
    x.should == o
    DB.sqls.should == []
  end

  it "should refresh the record using for_update if it is not a new record" do
    o = @c.load(:id => 1)
    def o._refresh(x) instance_variable_set(:@a, 1); super(x) end
    x = o.lock!
    x.should == o
    o.instance_variable_get(:@a).should == 1
    DB.sqls.should == ["SELECT * FROM items WHERE (id = 1) LIMIT 1 FOR UPDATE"]
  end
end

describe "Model#schema_type_class" do
  specify "should return the class or array of classes for the given type symbol" do
    @c = Class.new(Sequel::Model(:items))
    @c.class_eval{@db_schema = {:id=>{:type=>:integer}}}
    @c.new.send(:schema_type_class, :id).should == Integer
  end
end
ruby-sequel-4.1.1/spec/model/spec_helper.rb000066400000000000000000000033301220156535500206450ustar00rootroot00000000000000require 'rubygems'
unless Object.const_defined?('Sequel') && Sequel.const_defined?('Model')
  $:.unshift(File.join(File.dirname(File.expand_path(__FILE__)), "../../lib/"))
  require 'sequel'
end

Sequel::Deprecation.backtrace_filter = lambda{|line, lineno| lineno < 4 || line =~ /_spec\.rb/}

# qspecify runs a spec with deprecation warning output silenced, unless
# SEQUEL_DEPRECATION_WARNINGS is set in the environment.
(defined?(RSpec) ? RSpec::Core::ExampleGroup : Spec::Example::ExampleGroup).class_eval do
  if ENV['SEQUEL_DEPRECATION_WARNINGS']
    class << self
      alias qspecify specify
    end
  else
    def self.qspecify(*a, &block)
      specify(*a) do
        begin
          output = Sequel::Deprecation.output
          Sequel::Deprecation.output = false
          instance_exec(&block)
        ensure
          Sequel::Deprecation.output = output
        end
      end
    end
  end
end

Sequel.quote_identifiers = false
Sequel.identifier_input_method = nil
Sequel.identifier_output_method = nil

class << Sequel::Model
  attr_writer :db_schema
  alias orig_columns columns
  def columns(*cols)
    return super if cols.empty?
    define_method(:columns){cols}
    @dataset.instance_variable_set(:@columns, cols) if @dataset
    def_column_accessor(*cols)
    @columns = cols
    @db_schema = {}
    cols.each{|c| @db_schema[c] = {}}
  end
end

Sequel::Model.use_transactions = false
Sequel.cache_anonymous_models = false

# The model specs run against a mock database that records the SQL it is
# given instead of hitting a real database.
db = Sequel.mock(:fetch=>{:id => 1, :x => 1}, :numrows=>1, :autoid=>proc{|sql| 10})
def db.schema(*) [[:id, {:primary_key=>true}]] end
def db.reset() sqls end
def db.supports_schema_parsing?() true end
Sequel::Model.db = DB = db

if ENV['SEQUEL_COLUMNS_INTROSPECTION']
  Sequel.extension :columns_introspection
  Sequel::Database.extension :columns_introspection
  Sequel::Mock::Dataset.send(:include, Sequel::ColumnsIntrospection)
end
ruby-sequel-4.1.1/spec/model/validations_spec.rb000066400000000000000000000123771220156535500217120ustar00rootroot00000000000000require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")

describe Sequel::Model::Errors do
  before do
    @errors = Sequel::Model::Errors.new
  end

  specify "should be clearable using #clear" do
    @errors.add(:a, 'b')
    @errors.should == {:a=>['b']}
    @errors.clear
    @errors.should == {}
  end

  specify "should be empty if there are no errors" do
    @errors.should be_empty
  end

  specify "should not be empty if there are errors" do
    @errors.add(:blah, "blah")
    @errors.should_not be_empty
  end

  specify "should return an array of errors for a specific attribute using #on if there are errors" do
    @errors.add(:blah, 'blah')
    @errors.on(:blah).should == ['blah']
  end

  specify "should return nil using #on if there are no errors for that attribute" do
    @errors.on(:blah).should == nil
  end

  specify "should accept errors using #add" do
    @errors.add :blah, 'zzzz'
    @errors[:blah].should == ['zzzz']
  end

  specify "should return full messages using #full_messages" do
    @errors.full_messages.should == []
    @errors.add(:blow, 'blieuh')
    @errors.add(:blow, 'blich')
    @errors.add(:blay, 'bliu')
    msgs = @errors.full_messages
    msgs.size.should == 3
    msgs.should include('blow blieuh', 'blow blich', 'blay bliu')
  end

  specify "should not add column names for LiteralStrings" do
    @errors.full_messages.should == []
    @errors.add(:blow, 'blieuh')
    @errors.add(:blow, Sequel.lit('blich'))
    @errors.add(:blay, 'bliu')
    msgs = @errors.full_messages
    msgs.size.should == 3
    msgs.should include('blow blieuh', 'blich', 'blay bliu')
  end

  specify "should return the number of error messages using #count" do
    @errors.count.should == 0
    @errors.add(:a, 'b')
    @errors.count.should == 1
    @errors.add(:a, 'c')
    @errors.count.should == 2
    @errors.add(:b, 'c')
    @errors.count.should == 3
  end

  specify "should return the array of error messages for a given attribute using #on" do
    @errors.add(:a, 'b')
    @errors.on(:a).should == ['b']
    @errors.add(:a, 'c')
    @errors.on(:a).should == ['b', 'c']
    @errors.add(:b, 'c')
    @errors.on(:a).should == ['b', 'c']
  end

  specify "should return nil if there are no error messages for a given attribute using #on" do
given attribute using #on" do @errors.on(:a).should == nil @errors.add(:b, 'b') @errors.on(:a).should == nil end end describe Sequel::Model do before do @c = Class.new(Sequel::Model) do columns :score def validate errors.add(:score, 'too low') if score < 87 end end @o = @c.new end specify "should supply a #valid? method that returns true if validations pass" do @o.score = 50 @o.should_not be_valid @o.score = 100 @o.should be_valid end specify "should provide an errors object" do @o.score = 100 @o.should be_valid @o.errors.should be_empty @o.score = 86 @o.should_not be_valid @o.errors[:score].should == ['too low'] @o.errors.on(:blah).should be_nil end specify "should allow raising of ValidationFailed with a Model instance with errors" do @o.errors.add(:score, 'is too low') begin raise Sequel::ValidationFailed, @o rescue Sequel::ValidationFailed => e end e.model.should equal(@o) e.errors.should equal(@o.errors) e.message.should == 'score is too low' end specify "should allow raising of ValidationFailed with an Errors instance" do @o.errors.add(:score, 'is too low') begin raise Sequel::ValidationFailed, @o.errors rescue Sequel::ValidationFailed => e end e.model.should be_nil e.errors.should equal(@o.errors) e.message.should == 'score is too low' end specify "should allow raising of ValidationFailed with a string" do proc{raise Sequel::ValidationFailed, "no reason"}.should raise_error(Sequel::ValidationFailed, "no reason") end end describe "Model#save" do before do @c = Class.new(Sequel::Model(:people)) do columns :id, :x def validate errors.add(:id, 'blah') unless x == 7 end end @m = @c.load(:id => 4, :x=>6) DB.reset end specify "should save only if validations pass" do @m.raise_on_save_failure = false @m.should_not be_valid @m.save DB.sqls.should be_empty @m.x = 7 @m.should be_valid @m.save.should_not be_false DB.sqls.should == ['UPDATE people SET x = 7 WHERE (id = 4)'] end specify "should skip validations if the :validate=>false option is used" do @m.raise_on_save_failure = false @m.should_not be_valid @m.save(:validate=>false) DB.sqls.should == ['UPDATE people SET x = 6 WHERE (id = 4)'] end specify "should raise error if validations fail and raise_on_save_failure is true" do proc{@m.save}.should(raise_error(Sequel::ValidationFailed) do |e| e.model.should equal(@m) e.errors.should equal(@m.errors) end) end specify "should raise error if validations fail and :raise_on_failure option is true" do @m.raise_on_save_failure = false proc{@m.save(:raise_on_failure => true)}.should raise_error(Sequel::ValidationFailed) end specify "should return nil if validations fail and raise_on_save_faiure is false" do @m.raise_on_save_failure = false @m.save.should == nil end end �����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������ruby-sequel-4.1.1/spec/rcov.opts��������������������������������������������������������������������0000664�0000000�0000000�00000000074�12201565355�0016611�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������--exclude gems --exclude spec --exclude 00* --threshold 100 
ruby-sequel-4.1.1/spec/sequel_coverage.rb000066400000000000000000000007421220156535500204310ustar00rootroot00000000000000require 'coverage'
require 'simplecov'

def SimpleCov.sequel_coverage(opts = {})
  start do
    add_filter "/spec/"
    add_group('Missing-Relevant'){|src| src.filename =~ opts[:group] && src.covered_percent < 100} if opts[:group]
    add_group('Missing'){|src| src.covered_percent < 100}
    add_group('Covered'){|src| src.covered_percent == 100}
    add_filter{|src| src.filename !~ opts[:filter]} if opts[:filter]
    yield self if block_given?
  end
end

ENV.delete('COVERAGE')
ruby-sequel-4.1.1/spec/spec_config.rb.example000066400000000000000000000002311220156535500211620ustar00rootroot00000000000000# Custom setup for the adapter/integration specs
# ENV['SEQUEL_INTEGRATION_URL'] = 'sqlite:/'
# ENV['SEQUEL_POSTGRES_URL'] = 'postgres://localhost/test'
ruby-sequel-4.1.1/www/000077500000000000000000000000001220156535500146225ustar00rootroot00000000000000ruby-sequel-4.1.1/www/layout.html.erb000066400000000000000000000034701220156535500175760ustar00rootroot00000000000000<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Sequel: The Database Toolkit for Ruby<%= " - #{title.capitalize}" unless title == 'index' %>

<%= content %>
ruby-sequel-4.1.1/www/make_www.rb000077500000000000000000000006601220156535500167750ustar00rootroot00000000000000#!/usr/bin/env ruby
require 'erb'
$: << File.join(File.dirname(__FILE__), '..','lib', 'sequel')
require 'version'
Dir.chdir(File.dirname(__FILE__))
erb = ERB.new(File.read('layout.html.erb'))
Dir['pages/*'].each do |page|
  public_loc = "#{page.gsub(/\Apages\//, 'public/')}.html"
  content = ERB.new(File.read(page)).result(binding)
  title = File.basename(page)
  File.open(public_loc, 'wb'){|f| f.write(erb.result(binding))}
end
ruby-sequel-4.1.1/www/pages/000077500000000000000000000000001220156535500157215ustar00rootroot00000000000000ruby-sequel-4.1.1/www/pages/development000066400000000000000000000026071220156535500201730ustar00rootroot00000000000000

Development

Sequel is being actively developed. New versions of Sequel are generally released monthly during the first week of the month. You can join in on the discussions, ask questions, suggest features, and discuss Sequel in general by joining our Google Group - Sequel Talk or our IRC channel.

Reporting Bugs

To report a bug in Sequel, use GitHub Issues. If you aren't sure if something is a bug, post a question on the Google Group or ask on IRC.

Source Code

The master source code repository is jeremyevans/sequel on github. The latest release version also has a git clone at RubyForge.

Submitting Patches

The easiest way to contribute is to use git, post the changes to a public repository, and send a pull request, either via github, the google group, or IRC. Posting patches to the bug tracker or the Google Group works fine as well.

License

Sequel is distributed under the MIT License. Patches are assumed to be submitted under the same license as Sequel.

ruby-sequel-4.1.1/www/pages/documentation000066400000000000000000000140771220156535500205260ustar00rootroot00000000000000

Documentation for Sequel (v<%= Sequel.version %>)

General Info, Guides, Examples, and Tutorials

RDoc

Change Log

Release Notes

    <% %w'4 3'.each do |i| %>
  • Sequel <%= i %>:
      <% lines = []
         Dir["../doc/release_notes/#{i}.*.txt"].map{|f| File.basename(f)}.each do |f|
           (lines[f.split('.')[1].to_i/10] ||= []) << f
         end
         lines.reverse.each do |fs| %>
    • <% fs.sort_by{|f| f.split('.').map{|x| x.to_i}}.reverse.each do |f| %> <%= f.sub(/\.txt$/, '').sub(/(..)\.0$/, '\\1') %> | <% end %>
    • <% end %>
  • <% end %> <% %w'2 1'.each do |i| %>
  • Sequel <%= i %>: <% Dir["../doc/release_notes/#{i}.*.txt"].map{|f| File.basename(f)}.sort_by{|f| f.split('.').map{|x| x.to_i}}.reverse.each do |f| %> <%= f.sub(/\.txt$/, '').sub(/(..)\.0$/, '\\1') %> | <% end %>
  • <% end %>

License

Presentations

ruby-sequel-4.1.1/www/pages/index000066400000000000000000000103271220156535500167560ustar00rootroot00000000000000

Sequel: The Database Toolkit for Ruby (v<%= Sequel::VERSION %>)

"Sequel has restored my faith in Ruby. It's really amazing. The O/RM I've been hoping for for years." -- Sam Smoot, creator of DataMapper

Features:

  • Sequel provides thread safety, connection pooling and a concise DSL for constructing SQL queries and table schemas.
  • Sequel includes a comprehensive ORM layer for mapping records to Ruby objects and handling associated records.
  • Sequel supports advanced database features such as prepared statements, bound variables, stored procedures, savepoints, two-phase commit, transaction isolation, master/slave configurations, and database sharding.
  • Sequel currently has adapters for ADO, Amalgalite, CUBRID, DataObjects, DB2, DBI, Firebird, IBM_DB, Informix, JDBC, MySQL, Mysql2, ODBC, OpenBase, Oracle, PostgreSQL, SQLite3, Swift, and TinyTDS.

A short example:

require "rubygems"
require "sequel"

# connect to an in-memory database
DB = Sequel.sqlite

# create an items table
DB.create_table :items do
  primary_key :id
  String :name
  Float :price
end

# create a dataset from the items table
items = DB[:items]

# populate the table
items.insert(:name => 'abc', :price => rand * 100)
items.insert(:name => 'def', :price => rand * 100)
items.insert(:name => 'ghi', :price => rand * 100)

# print out the number of records
puts "Item count: #{items.count}"

# print out the average price
puts "The average price is: #{items.avg(:price)}"

Learn more about Sequel…

ruby-sequel-4.1.1/www/pages/plugins000066400000000000000000000621761220156535500173410ustar00rootroot00000000000000

Sequel::Model Plugins for v<%= Sequel.version %>

Sequel::Model has a standardized and very flexible plugin architecture; see the RDoc. Here is a list of plugins that members of the Sequel community have developed:
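
Plugins are loaded with the plugin class method, either on a single model class or on Sequel::Model itself to affect all models. A minimal sketch (the Album class here is hypothetical):

class Album < Sequel::Model
  # set created_at/updated_at automatically on this model only
  plugin :timestamps, :update_on_create=>true
end

# load a plugin for all model classes
Sequel::Model.plugin :dirty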

Plugins that ship with Sequel

  • Associations:
    • association_dependencies: Allows easy deleting, destroying, or nullifying associated objects when destroying a model object.
    • association_pks: Adds the association_pks and association_pks= methods to *_to_many associations.
    • association_proxies: Changes the *_to_many association method to return a proxy instead of an array of objects.
    • dataset_associations: Adds association methods to datasets that return datasets of associated objects.
    • eager_each: Makes each on an eagerly loaded dataset do eager loading.
    • many_through_many: Allows you to create an association to multiple objects through multiple join tables.
    • nested_attributes: Allows you to modify associated objects directly through a model object, similar to ActiveRecord's Nested Attributes.
    • pg_array_associations: Adds associations types to handle the case where foreign keys are stored in a PostgreSQL array in one of the tables.
    • rcte_tree: Supports retrieving all ancestors and descendants for tree structured data using recursive common table expressions.
    • tactical_eager_loading: Allows you to eagerly load an association for all objects retrieved from the same dataset when calling the association method on any of the objects in the dataset.
    • tree: Allows you to treat model objects as being part of a tree, finding ancestors, descendants, siblings, and tree roots.
  • Attributes:
    • blacklist_security: Adds blacklist-based model mass-assignment protection.
    • boolean_readers: Adds attribute? methods for all boolean columns.
    • dirty: Allows you to get the initial values of columns after changing the values.
    • defaults_setter: Get default values for new models before saving.
    • force_encoding: Forces all model column string values to a given encoding.
    • input_transformer: Automatically transform input to model column setters.
    • lazy_attributes: Allows you to set some attributes that should not be loaded by default, but only loaded when an object requests them.
    • string_stripper: Strips strings assigned to model attributes.
  • Caching:
    • caching: Supports caching primary key lookups of model objects to any object that supports the Ruby-Memcache API.
    • static_cache: Caches all model instances, improving performance for static models.
  • Hooks:
    • after_initialize: Adds an after_initialize hook to model objects, called for both new objects and those loaded from the database.
    • hook_class_methods: Adds backwards compatibility for the legacy class-level hook methods (e.g. before_save :do_something).
    • instance_hooks: Allows you to add hooks to specific model instances.
  • Inheritance:
    • class_table_inheritance: Supports inheritance in the database by using a single database table for each class in a class hierarchy.
    • single_table_inheritance: Supports inheritance in the database by using a single table for all classes in a class hierarchy.
  • Prepared Statements:
  • Saving:
    • instance_filters: Allows you to add per instance filters that are used when updating or destroying the instance.
    • optimistic_locking: Adds a database-independent locking mechanism to models to prevent concurrent updates overwriting changes.
    • sharding: Additional model support for Sequel's sharding support.
    • skip_create_refresh: Allows you to skip the refresh when saving new model objects.
    • timestamps: Creates hooks for automatically setting create and update timestamps.
    • touch: Allows easily updating timestamps via Model#touch, as well as touching associations when model instances are updated or destroyed.
    • unlimited_update: Works around MySQL warnings when using replication due to LIMIT clause use when updating model instances.
    • update_primary_key: Allows you to safely update the primary key of a model object.
  • Serialization:
    • composition: Supports defining getters/setters for objects with data backed by the model's columns.
    • json_serializer: Allows you to serialize/deserialize model objects to/from JSON.
    • serialization: Supports serializing column values and storing them as either marshal, yaml, or json in the database.
    • serialization_modification_detection: Allows you to detect changes to serialized columns.
    • xml_serializer: Allows you to serialize/deserialize model objects to/from XML.
  • Validations:
  • Other:
    • active_model: Makes Sequel::Model objects compliant to the ActiveModel::Lint specs, so they should work correctly in Rails 3+.
    • list: Allows you to treat model objects as being part of a list, so you can move them up/down and get next/previous entries.
    • pg_row: Allows Sequel::Model classes to implement PostgreSQL row-valued/composite types.
    • pg_typecast_on_load: Handles typecasting of PostgreSQL array/hstore/composite types when loading model objects via jdbc, do, or swift adapters.
    • schema: Adds backwards compatibility for Model.set_schema and Model.create_table.
    • scissors: Adds class methods for delete, destroy, and update.
    • subclasses: Allows easy access all model subclasses and descendent classes, without using ObjectSpace.
    • typecast_on_load: Fixes bad database typecasting when loading model objects.

External Plugins

Sequel Extensions

Extensions are modifications or additions to Sequel that affect either Sequel::Database objects (database extensions), Sequel::Dataset objects (dataset extensions), or the general ruby environment (global extensions).

Database extensions that ship with Sequel

Database extensions can be loaded into a single Sequel::Database object via Sequel::Database#extension, or to all databases by using Sequel::Database.extension.
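
For example (a minimal sketch; the connection URL is a placeholder):

DB = Sequel.connect('postgres://localhost/sequel_test')
DB.extension :pg_array                   # just this Database object
Sequel::Database.extension :pg_inet      # all Database objects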

  • arbitrary_servers: Adds ability to connection to arbitrary servers (instead of just preconfigured ones) in the sharding support.
  • connection_validator: Automatically validates connections on pool checkout and handles disconnections transparently.
  • constraint_validations: Creates database constraints when creating/altering tables, with metadata for automatic model validations via the constraint_validations plugin.
  • looser_typecasting: Uses .to_f and .to_i instead of Kernel.Float and Kernel.Integer when typecasting floats and integers.
  • pg_array: Adds support for PostgreSQL arrays.
  • pg_hstore: Adds support for the PostgreSQL hstore type.
  • pg_inet: Adds support for the PostgreSQL inet and cidr types.
  • pg_interval: Adds support for the PostgreSQL interval type.
  • pg_json: Adds support for the PostgreSQL json type.
  • pg_range: Adds support for PostgreSQL range types.
  • pg_row: Adds support for PostgreSQL row-valued/composite types.
  • schema_caching: Speeds up loading a large number of models by caching database schema and loading it from a file.
  • schema_dumper: Adds Database#dump_schema_migration and related methods for dumping the schema of the database as a migration that can be restored on other databases.
  • server_block: Adds Database#with_server method that makes access inside the passed block use the specified shard by default.

Dataset extensions that ship with Sequel

Dataset extensions can be loaded into a single Sequel::Dataset object via Sequel::Dataset#extension, to all datasets for a given database via Sequel::Database#extension, or to all datasets for all databases by using Sequel::Database.extension.
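
For example (a minimal sketch, assuming a database with an items table):

ds = DB[:items].extension(:pagination)
ds.paginate(1, 25)   # page 1, with 25 records per page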

  • columns_introspection: Attempts to skip database queries by introspecting the selected columns if possible.
  • date_arithmetic: Allows for database-independent date calculations (adding/subtracting an interval to/from a date/timestamp).
  • empty_array_ignore_nulls: Makes Sequel's handling of IN/NOT IN with an empty array ignore correct NULL handling.
  • filter_having: Makes Dataset#filter, #and, #or, and #having operate on HAVING clause if HAVING clause is already present.
  • graph_each: Makes Dataset#each split returned results by table when using Dataset#graph.
  • hash_aliases: Makes Dataset#select and #from treat hashes as alias specifications.
  • null_dataset: Adds Dataset#nullify to get a dataset that will never issue a query.
  • pagination: Adds Dataset#paginate for easier pagination of datasets.
  • pretty_table: Adds Dataset#print for printing a dataset as a simple plain-text table.
  • query: Adds Dataset#query for a different interface to creating queries that doesn't use method chaining.
  • query_literals: Automatically uses literal strings for regular strings in select, group, and order methods (similar to filter methods).
  • select_remove: Adds Dataset#select_remove to remove selected columns from a dataset.
  • sequel_3_dataset_methods: Adds Dataset#[]=, #insert_multiple, #qualify_to, #qualify_to_first_source, #set, #to_csv, #db=, and #opts= methods.
  • set_overrides: Adds Dataset#set_defaults and #set_overrides for setting default values in some INSERT/UPDATE statements.
  • split_array_nil: Splits nils out of IN/NOT IN arrays into separate OR IS NULL or AND IS NOT NULL clauses.
  • to_dot: Adds Dataset#to_dot method, which returns a string suitable for processing by graphviz's dot program to get a visualization of the dataset's abstract syntax tree.

Global extensions that ship with Sequel

Global extensions can affect other parts of Sequel or the general ruby environment, and are loaded with Sequel.extension.
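
A quick sketch of what that looks like for the blank extension:

Sequel.extension :blank

nil.blank?    # => true
"   ".blank?  # => true
0.blank?      # => false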

  • blank: Adds blank? instance methods to all objects.
  • core_extensions: Extends the Array, Hash, String, and Symbol classes with methods that return Sequel expression objects.
  • core_refinements: Adds refinement versions of Sequel's core extensions.
  • eval_inspect: Makes inspect on Sequel's expression objects attempt to return a string suitable for eval.
  • inflector: Adds instance-level inflection methods to String.
  • meta_def: Adds meta_def method for defining methods to Database, Dataset, and Model classes and instances.
  • migration: Adds Migration and Migrator classes for easily migrating the database schema forward or reverting to a previous version.
  • named_timezones: Allows you to use named timezones instead of just :local and :utc (requires TZInfo).
  • pg_array_ops: Adds DSL support for calling PostgreSQL array operators and functions.
  • pg_hstore_ops: Adds DSL support for calling PostgreSQL hstore operators and functions.
  • pg_json_ops: Adds DSL support for calling PostgreSQL json operators and functions.
  • pg_range_ops: Adds DSL support for calling PostgreSQL range operators and functions.
  • pg_row_ops: Adds DSL support for dealing with PostgreSQL row-valued/composite types.
  • ruby18_symbol_extensions: Contains implementations of Symbol#{<,<=,>,>=,[]} that are only usable on ruby 1.8.
  • sql_expr: Adds sql_expr method to all objects, allowing easy use of Sequel's DSL.
  • string_date_time: Adds instance methods to String for converting the string into a Date/Time/DateTime.
  • thread_local_timezones: Allows for per-thread overrides of the global timezone settings.

External Extensions

  • fixture_dependencies: YAML fixture loader that handles dependencies/associated objects, respecting foreign key constraints.
  • i18n_backend_sequel: Allows Sequel to be a backend for i18n translations.
  • merb_sequel: Merb plugin that provides support for Sequel models.
  • rails_sequel: Rails 2 plugin that allows you to use Sequel instead of ActiveRecord.
  • rspec_sequel_matchers: RSpec matchers for Sequel validations, associations, and columns.
  • sequel_extjs: Generates JSON from datasets that is consumable by the ExtJS JsonStore.
  • sequel-location: Easy setup and syntax for doing geolocation search on PostgreSQL.
  • sequel_pg: Faster SELECTs when using Sequel with pg.
  • sequel_plus: Collection of sequel extensions.
  • sequel_postgresql_triggers: Database enforced timestamps, immutable columns, and counter/sum caches.
  • sequel-rails: Rails 3 plugin that allows you to use Sequel instead of ActiveRecord.
  • sequel_rails3: Another Rails 3 plugin that allows you to use Sequel instead of ActiveRecord.
  • sequel_vectorized: Allows Sequel::Dataset to be exported as a Hash of Arrays and NArrays.

Submitting Your Plugin/Extension

If you have created a Sequel plugin/extension and would like to list it here, please submit a request to code AT jeremyevans DOT net, or send a pull request via github.

ruby-sequel-4.1.1/www/pages/press000066400000000000000000000026031220156535500170010ustar00rootroot00000000000000

Press

Web Articles on Sequel

Magazine Articles on Sequel

Companies and Individuals Using Sequel

Button

You can use this button if you want to show that your site is powered by Sequel:

ruby-sequel-4.1.1/www/public/000077500000000000000000000000001220156535500161005ustar00rootroot00000000000000ruby-sequel-4.1.1/www/public/css/000077500000000000000000000000001220156535500166705ustar00rootroot00000000000000ruby-sequel-4.1.1/www/public/css/application.css000066400000000000000000000100661220156535500217100ustar00rootroot00000000000000a { color: maroon; text-decoration: none; }
a:hover { text-decoration: underline; }
a:visited { color: maroon; }
body { background-color: #333333; background-repeat: repeat-y; color: #333333; font-size: 13px; line-height: 16px; font-family: Arial, sans-serif; margin: 0 10%; overflow: scroll; padding: 0; }
#header, #footer, #content { border-right: 4px solid dimgray; border-left: 4px solid dimgray; padding: 0 2em; background-color: White; background-repeat: repeat-y; }
#header { padding: 1em 2em; background-color: ivory; background-repeat: repeat-x; }
#footer { border-bottom: 4px solid dimgray; margin-bottom: 1em; padding-bottom: 1em; }
h1 { font-size: 20px; font-weight: normal; margin: 25px 0 0 0; text-transform: uppercase; white-space: pre; }
img { border: none; }
textarea { width: 100%; }
#logo { float: left; margin-right: 1em; }
#searchbox { border-radius: 10px; font-size: 10px; padding: 3px; margin-left: 15px; }
#navigation ul { list-style-type: none; margin-top: 3px; margin-bottom: 0; padding-left: 0; }
#navigation ul li { float: left; font-size: 12px; margin-right: 1em; }
#navigation ul li a:visited { color: maroon; }
#injectedBox { background-color: ivory; border: 2px solid lightgray; float: right; font-size: 75%; min-height: 480px; margin: 3em 0 3em 3em; padding: 1.5em; width: 220px; }
#injectedBox h2 { margin-top: 0; }
#injectedBox h3.name { font-weight: normal; margin: 1em 0 0 0; }
#injectedBox p { margin: 0; }
hr { background-color: dimgray; border-width: 0; color: dimgray; margin: 0; height: 2px; }
#content { height: 100%; }
#footer { clear: both; text-align: center; text-transform: uppercase; }
#footer p { font-size: 80%; margin: 0; }
form p.controls { text-align: center; }
p.timestamp { font-size: 90%; }
label { display: block; margin-top: 0.5em; }
div.fieldWithErrors { border-left: 1em solid mistyrose; margin-left: -1em; }
#comments { font-size: 90%; margin-top: 1em; }
div.comment span { font-weight: bold; }
div.comment pre { font-family: monospace; font-style: italic; margin-right: 0.5em; margin-left: 0.5em; overflow: auto; }
div.article { margin-top: 0.5em; }
div.article h3 { font-weight: normal; margin-bottom: 0; }
div.article h3 b { white-space: pre; }
div.article div p { margin: 0.5em; }
div.article div a { font-style: italic; }
ul.plugins h3 { font-weight: normal; margin-bottom: 6px; }
ul.plugins p { font-style: italic; margin-top: 6px; }
td { font-size: 11px; vertical-align: top; }
table thead tr th.cmAction, table tbody tr td.cmAction { text-align: center; width: 20px; }
table tbody tr td img { vertical-align: bottom; }
table tbody tr.cmAction { background-color: #FAFAFA; border-top: 1px dotted brown; border-bottom: 1px dotted brown; color: black; }
table thead tr th.cmTimestamp, table tbody tr td.cmTimestamp { text-align: right; width: 200px; }
#cmPages { border-collapse: collapse; width: 100%; }
#cmPages thead tr th { text-align: left; }
#cmPages thead tr th.cmPageTitle, #cmPages tbody tr td.cmPageTitle { width: 40%; }
#cmPages thead tr th.cmPagePath, #cmPages tbody tr td.cmPagePath {}
#cmPages tbody tr td { padding: 2px; }
#cmArticles { border-collapse: collapse; width: 100%; }
#cmArticles thead tr th { text-align: left; }
#cmArticles tbody tr td { padding: 2px; }
#cmPlugins { border-collapse: collapse; width: 100%; }
#cmPlugins thead tr th { text-align: left; }
#cmPlugins thead tr th.cmPluginName, #cmPlugins tbody tr td.cmPluginName { min-width: 160px; }
#cmPlugins thead tr th.cmPluginFeed, #cmPlugins tbody tr td.cmPluginFeed { width: 75%; }
#cmPlugins tbody tr td { padding: 2px; }
#cmPlugins tbody tr td.cmPluginFeed ul { list-style-type: none; }
#cmPlugins tbody tr td.cmPluginFeed ul li pre { white-space: normal; }
#cmUsers { border-collapse: collapse; width: 100%; }
#cmUsers thead tr th { text-align: left; }
#cmUsers thead tr th.cmUserName, #cmUsers tbody tr td.cmUserName { min-width: 160px; }
#cmUsers thead tr th.cmUserEMail, #cmUsers tbody tr td.cmUserEMail { width: 75%; }
#cmUsers tbody tr td { padding: 2px; }
ruby-sequel-4.1.1/www/public/css/ruby.css000066400000000000000000000016411220156535500203650ustar00rootroot00000000000000code { font-family: 'Verdana'; font-style: italic; }
pre code { background-color: ivory; border: 1px solid lightgray; color: #000000; display: block; font-family: 'Monaco', monospace; font-size: 11px; font-style: normal; margin-left: 1em; margin-right: 1em; overflow: auto; padding: 1em 1.5em 1em 1.5em; }
h1 .ruby .keyword { background-color: ivory; color: #1f60a0; }
.ruby .punct { background-color: ivory; color: #000000; }
.ruby .class { background-color: ivory; color: #a08000; font-weight: bold; }
.ruby .constant { background-color: ivory; color: #008080; }
.ruby .ident {}
.ruby .comment { background-color: ivory; color: #3f603f; font-style: italic; }
.ruby .symbol { background-color: ivory; color: #800000; }
.ruby .string { background-color: ivory; color: #bf2f2f; }
.ruby .regex { background-color: ivory; color: #bf8060; }
.ruby .number { background-color: ivory; color: #0080a0; }
ruby-sequel-4.1.1/www/public/images/000077500000000000000000000000001220156535500173455ustar00rootroot00000000000000ruby-sequel-4.1.1/www/public/images/ruby-sequel.png000066400000000000000000000242121220156535500223310ustar00rootroot00000000000000[binary PNG image data omitted]
ruby-sequel-4.1.1/www/public/images/ruby-sequel.png
[binary PNG image data omitted]
ruby-sequel-4.1.1/www/public/images/sequel-button.png
[binary PNG image data omitted]