==> activerecord-import-0.15.0/Rakefile <==

require "bundler"
Bundler.setup

require 'rake'
require 'rake/testtask'

namespace :display do
  task :notice do
    puts
    puts "To run tests you must supply the adapter, see rake -T for more information."
    puts
  end
end
task default: ["display:notice"]

ADAPTERS = %w(
  mysql2
  mysql2_makara
  mysql2spatial
  jdbcmysql
  jdbcpostgresql
  postgresql
  postgresql_makara
  postgis
  sqlite3
  spatialite
  seamless_database_pool
).freeze

ADAPTERS.each do |adapter|
  namespace :test do
    desc "Runs #{adapter} database tests."
    Rake::TestTask.new(adapter) do |t|
      # FactoryGirl is noisy when warnings are enabled, so leave them off
      # t.warning = true
      t.test_files = FileList["test/adapters/#{adapter}.rb", "test/*_test.rb", "test/active_record/*_test.rb", "test/#{adapter}/**/*_test.rb"]
    end
    task adapter
  end
end

begin
  require 'rcov/rcovtask'

  adapter = ENV['ARE_DB']
  Rcov::RcovTask.new do |test|
    test.libs << 'test'
    test.pattern = ["test/adapters/#{adapter}.rb", "test/*_test.rb", "test/#{adapter}/**/*_test.rb"]
    test.verbose = true
  end
rescue LoadError
  task :rcov do
    abort "RCov is not available. In order to run rcov, you must: sudo gem install rcov"
  end
end

require 'rdoc/task'
Rake::RDocTask.new do |rdoc|
  version = File.exist?('VERSION') ? File.read('VERSION') : ""

  rdoc.rdoc_dir = 'rdoc'
  rdoc.title = "activerecord-import #{version}"
  rdoc.rdoc_files.include('README*')
  rdoc.rdoc_files.include('lib/**/*.rb')
end

require 'rubocop/rake_task'
RuboCop::RakeTask.new

==> activerecord-import-0.15.0/activerecord-import.gemspec <==

# -*- encoding: utf-8 -*-
require File.expand_path('../lib/activerecord-import/version', __FILE__)

Gem::Specification.new do |gem|
  gem.authors       = ["Zach Dennis"]
  gem.email         = ["zach.dennis@gmail.com"]
  gem.summary       = "Bulk-loading extension for ActiveRecord"
  gem.description   = "Extraction of the ActiveRecord::Base#import functionality from ar-extensions for Rails 3 and beyond"
  gem.homepage      = "http://github.com/zdennis/activerecord-import"
  gem.license       = "Ruby"

  gem.files         = `git ls-files`.split($\)
  gem.executables   = gem.files.grep(%r{^bin/}).map { |f| File.basename(f) }
  gem.test_files    = gem.files.grep(%r{^(test|spec|features)/})
  gem.name          = "activerecord-import"
  gem.require_paths = ["lib"]
  gem.version       = ActiveRecord::Import::VERSION

  gem.required_ruby_version = ">= 1.9.2"

  gem.add_runtime_dependency "activerecord", ">= 3.2"
  gem.add_development_dependency "rake"
end

==> activerecord-import-0.15.0/Gemfile <==

source 'https://rubygems.org'

gemspec

group :development, :test do
  gem 'rubocop', '~> 0.38.0'
end

# Database Adapters
platforms :ruby do
  gem "mysql2",                 "~> 0.3.0"
  gem "pg",                     "~> 0.9"
  gem "sqlite3",                "~> 1.3.10"
  gem "seamless_database_pool", "~> 1.0.18"
end

platforms :jruby do
  gem "jdbc-mysql"
  gem "jdbc-postgres"
  gem "activerecord-jdbcmysql-adapter"
  gem "activerecord-jdbcpostgresql-adapter"
end

# Support libs
gem "factory_girl", "~> 4.2.0"
gem "timecop"
gem "chronic"

# Debugging
platforms :jruby do
  gem "ruby-debug-base", "= 0.10.4"
end

platforms :jruby, :mri_18 do
  gem "ruby-debug", "= 0.10.4"
end

platforms :mri_19 do
  gem "debugger"
end

platforms :ruby do
  gem "pry-byebug"
end

version = ENV['AR_VERSION'] || "4.2"

if version >= "4.0"
  gem "minitest"
else
  gem "test-unit"
end
eval_gemfile File.expand_path("../gemfiles/#{version}.gemfile", __FILE__)

==> activerecord-import-0.15.0/.rubocop_todo.yml <==

# This configuration was generated by
# `rubocop --auto-gen-config`
# on 2016-03-17 18:14:55 -0700 using RuboCop version 0.38.0.
# The point is for the user to remove these configuration records
# one by one as the offenses are removed from the code base.
# Note that changes in the inspected code, or installation of new
# versions of RuboCop, may require this file to be generated again.

# Offense count: 2
Lint/HandleExceptions:
  Exclude:
    - 'lib/activerecord-import/base.rb'
    - 'test/import_test.rb'

# Offense count: 2
Lint/RescueException:
  Exclude:
    - 'benchmarks/lib/cli_parser.rb'
    - 'test/import_test.rb'

# Offense count: 4
# Cop supports --auto-correct.
# Configuration parameters: AllowUnusedKeywordArguments, IgnoreEmptyMethods.
Lint/UnusedMethodArgument:
  Exclude:
    - 'lib/activerecord-import/adapters/postgresql_adapter.rb'
    - 'lib/activerecord-import/import.rb'

# Offense count: 2
# Cop supports --auto-correct.
# Configuration parameters: Keywords.
# Keywords: TODO, FIXME, OPTIMIZE, HACK, REVIEW
Style/CommentAnnotation:
  Exclude:
    - 'benchmarks/lib/cli_parser.rb'
    - 'lib/activerecord-import/import.rb'

==> activerecord-import-0.15.0/Brewfile <==

brew "mysql"
brew "postgresql"
brew "sqlite"

==> activerecord-import-0.15.0/.travis.yml <==

language: ruby

cache: bundler

rvm:
  - 2.2.4

env:
  global:
    # https://github.com/discourse/discourse/blob/master/.travis.yml
    - RUBY_GC_MALLOC_LIMIT=50000000
  matrix:
    - AR_VERSION=3.2
    - AR_VERSION=4.0
    - AR_VERSION=4.1
    - AR_VERSION=4.2
    - AR_VERSION=5.0

matrix:
  include:
    - rvm: jruby-9.0.5.0
      env: AR_VERSION=4.2
      script:
        - bundle exec rake test:jdbcmysql
        - bundle exec rake test:jdbcpostgresql
  fast_finish: true

before_script:
  - mysql -e 'create database activerecord_import_test;'
  - psql -c 'create database activerecord_import_test;' -U postgres
  - psql -U postgres -c "create extension postgis"
  - cp test/travis/database.yml test/database.yml

addons:
  apt:
    sources:
      - travis-ci/sqlite3
    packages:
      - sqlite3

script:
  - bundle exec rake test:mysql2
  - bundle exec rake test:mysql2_makara
  - bundle exec rake test:mysql2spatial
  - bundle exec rake test:postgis
  - bundle exec rake test:postgresql
  - bundle exec rake test:postgresql_makara
  - bundle exec rake test:seamless_database_pool
  - bundle exec rake test:spatialite
  - bundle exec rake test:sqlite3
  - bundle exec rubocop

sudo: false

==> activerecord-import-0.15.0/lib/activerecord-import/mysql2.rb <==

warn <<-MSG
[DEPRECATION] loading activerecord-import via 'require "activerecord-import/<adapter-name>"'
is deprecated. Update to autorequire using 'require "activerecord-import"'. See
http://github.com/zdennis/activerecord-import/wiki/Requiring for more information
MSG
require "activerecord-import"
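==> editor's illustration (not part of the gem): the deprecation shim above <==

# The shim above exists only to warn: both requires below load the same
# library, but the first prints the deprecation message. Adapter support
# is wired up automatically when the database connection is established.
require "activerecord-import/mysql2" # deprecated form, prints the warning
require "activerecord-import"        # preferred autorequire form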
==> activerecord-import-0.15.0/lib/activerecord-import/adapters/postgresql_adapter.rb <==

module ActiveRecord::Import::PostgreSQLAdapter
  include ActiveRecord::Import::ImportSupport
  include ActiveRecord::Import::OnDuplicateKeyUpdateSupport

  MIN_VERSION_FOR_UPSERT = 90_500

  def insert_many( sql, values, *args ) # :nodoc:
    number_of_inserts = 1
    ids = []

    base_sql, post_sql = if sql.is_a?( String )
      [sql, '']
    elsif sql.is_a?( Array )
      [sql.shift, sql.join( ' ' )]
    end

    sql2insert = base_sql + values.join( ',' ) + post_sql

    if post_sql =~ /RETURNING\s/
      ids = select_values( sql2insert, *args )
    else
      insert( sql2insert, *args )
    end

    ActiveRecord::Base.connection.query_cache.clear

    [number_of_inserts, ids]
  end

  def next_value_for_sequence(sequence_name)
    %{nextval('#{sequence_name}')}
  end

  def post_sql_statements( table_name, options ) # :nodoc:
    if options[:no_returning] || options[:primary_key].blank?
      super(table_name, options)
    else
      super(table_name, options) << "RETURNING #{options[:primary_key]}"
    end
  end

  # Add a column to be updated on duplicate key update
  def add_column_for_on_duplicate_key_update( column, options = {} ) # :nodoc:
    arg = options[:on_duplicate_key_update]
    if arg.is_a?( Hash )
      columns = arg.fetch( :columns ) { arg[:columns] = [] }
      case columns
      when Array then columns << column.to_sym unless columns.include?( column.to_sym )
      when Hash then columns[column.to_sym] = column.to_sym
      end
    elsif arg.is_a?( Array )
      arg << column.to_sym unless arg.include?( column.to_sym )
    end
  end

  # Returns a generated ON CONFLICT DO NOTHING statement given the passed
  # in +args+.
  def sql_for_on_duplicate_key_ignore( table_name, *args ) # :nodoc:
    arg = args.first
    conflict_target = sql_for_conflict_target( arg ) if arg.is_a?( Hash )
    " ON CONFLICT #{conflict_target}DO NOTHING"
  end

  # Returns a generated ON CONFLICT DO UPDATE statement given the passed
  # in +args+.
  def sql_for_on_duplicate_key_update( table_name, *args ) # :nodoc:
    arg = args.first
    arg = { columns: arg } if arg.is_a?( Array ) || arg.is_a?( String )
    return unless arg.is_a?( Hash )

    sql = " ON CONFLICT "
    conflict_target = sql_for_conflict_target( arg )

    columns = arg.fetch( :columns, [] )
    if columns.respond_to?( :empty? ) && columns.empty?
      return sql << "#{conflict_target}DO NOTHING"
    end

    conflict_target ||= sql_for_default_conflict_target( table_name )
    unless conflict_target
      raise ArgumentError, 'Expected :conflict_target or :constraint_name to be specified'
    end

    sql << "#{conflict_target}DO UPDATE SET "
    if columns.is_a?( Array )
      sql << sql_for_on_duplicate_key_update_as_array( table_name, columns )
    elsif columns.is_a?( Hash )
      sql << sql_for_on_duplicate_key_update_as_hash( table_name, columns )
    elsif columns.is_a?( String )
      sql << columns
    else
      raise ArgumentError, 'Expected :columns to be an Array or Hash'
    end

    sql
  end

  def sql_for_on_duplicate_key_update_as_array( table_name, arr ) # :nodoc:
    results = arr.map do |column|
      qc = quote_column_name( column )
      "#{qc}=EXCLUDED.#{qc}"
    end
    results.join( ',' )
  end

  def sql_for_on_duplicate_key_update_as_hash( table_name, hsh ) # :nodoc:
    results = hsh.map do |column1, column2|
      qc1 = quote_column_name( column1 )
      qc2 = quote_column_name( column2 )
      "#{qc1}=EXCLUDED.#{qc2}"
    end
    results.join( ',' )
  end

  def sql_for_conflict_target( args = {} )
    constraint_name = args[:constraint_name]
    conflict_target = args[:conflict_target]
    if constraint_name.present?
      "ON CONSTRAINT #{constraint_name} "
    elsif conflict_target.present?
      '(' << Array( conflict_target ).reject( &:empty? ).join( ', ' ) << ') '
    end
  end

  def sql_for_default_conflict_target( table_name )
    conflict_target = primary_key( table_name )
    "(#{conflict_target}) " if conflict_target
  end

  # Return true if the statement is a duplicate key record error
  def duplicate_key_update_error?(exception) # :nodoc:
    exception.is_a?(ActiveRecord::StatementInvalid) && exception.to_s.include?('duplicate key')
  end

  def supports_on_duplicate_key_update?(current_version = postgresql_version)
    current_version >= MIN_VERSION_FOR_UPSERT
  end

  def supports_on_duplicate_key_ignore?(current_version = postgresql_version)
    supports_on_duplicate_key_update?(current_version)
  end

  def support_setting_primary_key_of_imported_objects?
    true
  end
end
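==> editor's illustration (not part of the gem): PostgreSQL upsert clauses <==

# A sketch of the clauses generated by the adapter above. It assumes an
# established PostgreSQL connection; "topics", :author_id, :slug and
# :title are hypothetical table/column names used only for illustration.
conn = ActiveRecord::Base.connection

conn.sql_for_on_duplicate_key_update(
  "topics",
  conflict_target: [:author_id, :slug],
  columns: [:title]
)
# => " ON CONFLICT (author_id, slug) DO UPDATE SET \"title\"=EXCLUDED.\"title\""

conn.sql_for_on_duplicate_key_ignore("topics", conflict_target: :slug)
# => " ON CONFLICT (slug) DO NOTHING"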
==> activerecord-import-0.15.0/lib/activerecord-import/adapters/mysql2_adapter.rb <==

require "activerecord-import/adapters/mysql_adapter"

module ActiveRecord::Import::Mysql2Adapter
  include ActiveRecord::Import::MysqlAdapter
end

==> activerecord-import-0.15.0/lib/activerecord-import/adapters/sqlite3_adapter.rb <==

module ActiveRecord::Import::SQLite3Adapter
  include ActiveRecord::Import::ImportSupport

  MIN_VERSION_FOR_IMPORT = "3.7.11".freeze
  SQLITE_LIMIT_COMPOUND_SELECT = 500

  # Override our conformance to the ActiveRecord::Import::ImportSupport
  # interface to ensure that we only support import in supported versions
  # of SQLite. INSERT statements with multiple value sets were introduced
  # in 3.7.11.
  def supports_import?(current_version = sqlite_version)
    current_version >= MIN_VERSION_FOR_IMPORT
  end

  # +sql+ can be a single string or an array. If it is an array all
  # elements that are in position >= 1 will be appended to the final SQL.
  def insert_many(sql, values, *args) # :nodoc:
    number_of_inserts = 0
    ids = []

    base_sql, post_sql = if sql.is_a?( String )
      [sql, '']
    elsif sql.is_a?( Array )
      [sql.shift, sql.join( ' ' )]
    end

    value_sets = ::ActiveRecord::Import::ValueSetsRecordsParser.parse(values,
      max_records: SQLITE_LIMIT_COMPOUND_SELECT)
    value_sets.each do |value_set|
      number_of_inserts += 1
      sql2insert = base_sql + value_set.join( ',' ) + post_sql
      first_insert_id = insert( sql2insert, *args )
      last_insert_id = first_insert_id + value_set.size - 1
      ids.concat((first_insert_id..last_insert_id).to_a)
    end

    [number_of_inserts, ids]
  end

  def next_value_for_sequence(sequence_name)
    %{nextval('#{sequence_name}')}
  end

  def support_setting_primary_key_of_imported_objects?
    true
  end
end
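==> editor's illustration (not part of the gem): SQLite value-set chunking <==

# SQLite caps the number of value sets per INSERT, so the adapter above
# chunks them 500 at a time via ValueSetsRecordsParser. The values below
# are hypothetical pre-rendered SQL value strings, not real rows.
require "activerecord-import"

values = Array.new(1200) { |i| "(#{i},'title #{i}')" }
sets = ActiveRecord::Import::ValueSetsRecordsParser.parse(values, max_records: 500)
sets.map(&:size) # => [500, 500, 200], i.e. three INSERT statements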
==> activerecord-import-0.15.0/lib/activerecord-import/adapters/em_mysql2_adapter.rb <==

require "activerecord-import/adapters/mysql_adapter"

module ActiveRecord::Import::EMMysql2Adapter
  include ActiveRecord::Import::MysqlAdapter
end

==> activerecord-import-0.15.0/lib/activerecord-import/adapters/abstract_adapter.rb <==

module ActiveRecord::Import::AbstractAdapter
  module InstanceMethods
    def next_value_for_sequence(sequence_name)
      %(#{sequence_name}.nextval)
    end

    def insert_many( sql, values, *args ) # :nodoc:
      number_of_inserts = 1

      base_sql, post_sql = if sql.is_a?( String )
        [sql, '']
      elsif sql.is_a?( Array )
        [sql.shift, sql.join( ' ' )]
      end

      sql2insert = base_sql + values.join( ',' ) + post_sql
      insert( sql2insert, *args )

      [number_of_inserts, []]
    end

    def pre_sql_statements(options)
      sql = []
      sql << options[:pre_sql] if options[:pre_sql]
      sql << options[:command] if options[:command]
      sql << "IGNORE" if options[:ignore]

      # add keywords like IGNORE or DELAYED
      if options[:keywords].is_a?(Array)
        sql.concat(options[:keywords])
      elsif options[:keywords]
        sql << options[:keywords].to_s
      end

      sql
    end

    # Synchronizes the passed in ActiveRecord instances with the records in
    # the database by calling +reload+ on each instance.
    def after_import_synchronize( instances )
      instances.each(&:reload)
    end

    # Returns an array of post SQL statements given the passed in options.
    def post_sql_statements( table_name, options ) # :nodoc:
      post_sql_statements = []

      if supports_on_duplicate_key_update?
        if options[:on_duplicate_key_ignore] && respond_to?(:sql_for_on_duplicate_key_ignore)
          # Options :recursive and :on_duplicate_key_ignore are mutually exclusive
          unless options[:recursive]
            post_sql_statements << sql_for_on_duplicate_key_ignore( table_name, options[:on_duplicate_key_ignore] )
          end
        elsif options[:on_duplicate_key_update]
          post_sql_statements << sql_for_on_duplicate_key_update( table_name, options[:on_duplicate_key_update] )
        end
      end

      # custom user post_sql
      post_sql_statements << options[:post_sql] if options[:post_sql]

      # with rollup
      post_sql_statements << rollup_sql if options[:rollup]

      post_sql_statements
    end

    # Returns the maximum number of bytes that the server will allow
    # in a single packet
    def max_allowed_packet
      NO_MAX_PACKET
    end

    def supports_on_duplicate_key_update?
      false
    end
  end
end
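==> editor's illustration (not part of the gem): composing SQL keywords <==

# pre_sql_statements above simply collects leading keywords in order.
# Assuming an established connection (every adapter gains this method via
# the mixin), a call would look like:
conn = ActiveRecord::Base.connection
conn.pre_sql_statements(command: "INSERT", ignore: true, keywords: ["DELAYED"])
# => ["INSERT", "IGNORE", "DELAYED"]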
==> activerecord-import-0.15.0/lib/activerecord-import/adapters/mysql_adapter.rb <==

module ActiveRecord::Import::MysqlAdapter
  include ActiveRecord::Import::ImportSupport
  include ActiveRecord::Import::OnDuplicateKeyUpdateSupport

  NO_MAX_PACKET = 0
  QUERY_OVERHEAD = 8 # This was shown to be true for MySQL, but it's not clear where the overhead is from.

  # +sql+ can be a single string or an array. If it is an array all
  # elements that are in position >= 1 will be appended to the final SQL.
  def insert_many( sql, values, *args ) # :nodoc:
    # the number of inserts performed
    number_of_inserts = 0

    base_sql, post_sql = if sql.is_a?( String )
      [sql, '']
    elsif sql.is_a?( Array )
      [sql.shift, sql.join( ' ' )]
    end

    sql_size = QUERY_OVERHEAD + base_sql.size + post_sql.size

    # the number of bytes the requested insert statement values will take up
    values_in_bytes = values.sum(&:bytesize)

    # the number of bytes (commas) it will take to comma separate our values
    comma_separated_bytes = values.size - 1

    # the total number of bytes required if this statement is one statement
    total_bytes = sql_size + values_in_bytes + comma_separated_bytes

    max = max_allowed_packet

    # if we can insert it all as one statement
    if NO_MAX_PACKET == max || total_bytes < max
      number_of_inserts += 1
      sql2insert = base_sql + values.join( ',' ) + post_sql
      insert( sql2insert, *args )
    else
      value_sets = ::ActiveRecord::Import::ValueSetsBytesParser.parse(values,
        reserved_bytes: sql_size,
        max_bytes: max)
      value_sets.each do |value_set|
        number_of_inserts += 1
        sql2insert = base_sql + value_set.join( ',' ) + post_sql
        insert( sql2insert, *args )
      end
    end

    [number_of_inserts, []]
  end

  # Returns the maximum number of bytes that the server will allow
  # in a single packet
  def max_allowed_packet # :nodoc:
    @max_allowed_packet ||= begin
      result = execute( "SHOW VARIABLES like 'max_allowed_packet';" )
      # original Mysql gem responds to #fetch_row while Mysql2 responds to #first
      val = result.respond_to?(:fetch_row) ? result.fetch_row[1] : result.first[1]
      val.to_i
    end
  end

  # Add a column to be updated on duplicate key update
  def add_column_for_on_duplicate_key_update( column, options = {} ) # :nodoc:
    if options.include?(:on_duplicate_key_update)
      columns = options[:on_duplicate_key_update]
      case columns
      when Array then columns << column.to_sym unless columns.include?(column.to_sym)
      when Hash then columns[column.to_sym] = column.to_sym
      end
    else
      options[:on_duplicate_key_update] = [column.to_sym]
    end
  end

  # Returns a generated ON DUPLICATE KEY UPDATE statement given the passed
  # in +args+.
  def sql_for_on_duplicate_key_update( table_name, *args ) # :nodoc:
    sql = ' ON DUPLICATE KEY UPDATE '
    arg = args.first
    if arg.is_a?( Array )
      sql << sql_for_on_duplicate_key_update_as_array( table_name, arg )
    elsif arg.is_a?( Hash )
      sql << sql_for_on_duplicate_key_update_as_hash( table_name, arg )
    elsif arg.is_a?( String )
      sql << arg
    else
      raise ArgumentError, "Expected Array or Hash"
    end
    sql
  end

  def sql_for_on_duplicate_key_update_as_array( table_name, arr ) # :nodoc:
    results = arr.map do |column|
      qc = quote_column_name( column )
      "#{table_name}.#{qc}=VALUES(#{qc})"
    end
    results.join( ',' )
  end

  def sql_for_on_duplicate_key_update_as_hash( table_name, hsh ) # :nodoc:
    results = hsh.map do |column1, column2|
      qc1 = quote_column_name( column1 )
      qc2 = quote_column_name( column2 )
      "#{table_name}.#{qc1}=VALUES( #{qc2} )"
    end
    results.join( ',' )
  end

  # Return true if the statement is a duplicate key record error
  def duplicate_key_update_error?(exception) # :nodoc:
    exception.is_a?(ActiveRecord::StatementInvalid) && exception.to_s.include?('Duplicate entry')
  end
end
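==> editor's illustration (not part of the gem): byte-based batching <==

# When the combined statement would exceed max_allowed_packet, the MySQL
# adapter above re-groups values by byte size using ValueSetsBytesParser.
# The numbers here are deliberately tiny (hypothetical 18-byte value
# strings, a pretend 60-byte packet limit) just to show the mechanics.
require "activerecord-import"

values = ["(1,'aaaaaaaaaaaa')", "(2,'bbbbbbbbbbbb')", "(3,'cccccccccccc')"]
sets = ActiveRecord::Import::ValueSetsBytesParser.parse(
  values,
  reserved_bytes: 20, # stands in for QUERY_OVERHEAD + base_sql + post_sql
  max_bytes: 60       # stands in for max_allowed_packet
)
sets.map(&:size) # => [2, 1] -- two INSERT statements instead of one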
==> activerecord-import-0.15.0/lib/activerecord-import/postgresql.rb <==

warn <<-MSG
[DEPRECATION] loading activerecord-import via 'require "activerecord-import/<adapter-name>"'
is deprecated. Update to autorequire using 'require "activerecord-import"'. See
http://github.com/zdennis/activerecord-import/wiki/Requiring for more information
MSG
require "activerecord-import"

==> activerecord-import-0.15.0/lib/activerecord-import/sqlite3.rb <==

warn <<-MSG
[DEPRECATION] loading activerecord-import via 'require "activerecord-import/<adapter-name>"'
is deprecated. Update to autorequire using 'require "activerecord-import"'. See
http://github.com/zdennis/activerecord-import/wiki/Requiring for more information
MSG
require "activerecord-import"

==> activerecord-import-0.15.0/lib/activerecord-import/synchronize.rb <==

module ActiveRecord # :nodoc:
  class Base # :nodoc:
    # Synchronizes the passed in ActiveRecord instances with data
    # from the database. This is like calling reload on an individual
    # ActiveRecord instance but it is intended for use on multiple instances.
    #
    # This uses one query for all instance updates and then updates existing
    # instances rather than sending one query for each instance.
    #
    # == Examples
    # # Synchronizing existing models by matching on the primary key field
    # posts = Post.where(author: "Zach").to_a
    # <.. out of system changes occur to change author name from Zach to Zachary..>
    # Post.synchronize posts
    # posts.first.author # => "Zachary" instead of Zach
    #
    # # Synchronizing using custom key fields
    # posts = Post.where(author: "Zach").to_a
    # <.. out of system changes occur to change the address of author 'Zach' to 1245 Foo Ln ..>
    # Post.synchronize posts, [:name] # queries on the :name column and not the :id column
    # posts.first.address # => "1245 Foo Ln" instead of whatever it was
    #
    def self.synchronize(instances, keys = [primary_key])
      return if instances.empty?

      conditions = {}
      key_values = keys.map { |key| instances.map(&key.to_sym) }
      keys.zip(key_values).each { |key, values| conditions[key] = values }
      order = keys.map { |key| "#{key} ASC" }.join(",")

      klass = instances.first.class

      fresh_instances = klass.where(conditions).order(order)
      instances.each do |instance|
        matched_instance = fresh_instances.detect do |fresh_instance|
          keys.all? { |key| fresh_instance.send(key) == instance.send(key) }
        end

        next unless matched_instance

        instance.send :clear_aggregation_cache
        instance.send :clear_association_cache
        instance.instance_variable_set :@attributes, matched_instance.instance_variable_get(:@attributes)

        if instance.respond_to?(:clear_changes_information)
          instance.clear_changes_information # Rails 4.2 and higher
        else
          instance.instance_variable_set :@attributes_cache, {} # Rails 4.0, 4.1
          instance.changed_attributes.clear # Rails 3.2
          instance.previous_changes.clear
        end

        # Since the instance now accurately reflects the record in
        # the database, ensure that instance.persisted? is true.
        instance.instance_variable_set '@new_record', false
        instance.instance_variable_set '@destroyed', false
      end
    end

    # See ActiveRecord::ConnectionAdapters::AbstractAdapter.synchronize
    def synchronize(instances, key = [ActiveRecord::Base.primary_key])
      self.class.synchronize(instances, key)
    end
  end
end

==> activerecord-import-0.15.0/lib/activerecord-import/active_record/adapters/postgresql_adapter.rb <==

require "active_record/connection_adapters/postgresql_adapter"
require "activerecord-import/adapters/postgresql_adapter"

class ActiveRecord::ConnectionAdapters::PostgreSQLAdapter
  include ActiveRecord::Import::PostgreSQLAdapter
end

==> activerecord-import-0.15.0/lib/activerecord-import/active_record/adapters/jdbcpostgresql_adapter.rb <==

require "active_record/connection_adapters/postgresql_adapter"
require "activerecord-import/adapters/postgresql_adapter"

class ActiveRecord::ConnectionAdapters::PostgreSQLAdapter
  include ActiveRecord::Import::PostgreSQLAdapter
end

==> activerecord-import-0.15.0/lib/activerecord-import/active_record/adapters/jdbcmysql_adapter.rb <==

require "active_record/connection_adapters/mysql_adapter"
require "activerecord-import/adapters/mysql_adapter"

class ActiveRecord::ConnectionAdapters::MysqlAdapter
  include ActiveRecord::Import::MysqlAdapter
end

==> activerecord-import-0.15.0/lib/activerecord-import/active_record/adapters/mysql2_adapter.rb <==

require "active_record/connection_adapters/mysql2_adapter"
require "activerecord-import/adapters/mysql2_adapter"

class ActiveRecord::ConnectionAdapters::Mysql2Adapter
  include ActiveRecord::Import::Mysql2Adapter
end

==> activerecord-import-0.15.0/lib/activerecord-import/active_record/adapters/sqlite3_adapter.rb <==

require "active_record/connection_adapters/sqlite3_adapter"
require "activerecord-import/adapters/sqlite3_adapter"

class ActiveRecord::ConnectionAdapters::SQLite3Adapter
  include ActiveRecord::Import::SQLite3Adapter
end

==> activerecord-import-0.15.0/lib/activerecord-import/active_record/adapters/abstract_adapter.rb <==

require "activerecord-import/adapters/abstract_adapter"

module ActiveRecord # :nodoc:
  module ConnectionAdapters # :nodoc:
    class AbstractAdapter # :nodoc:
      include ActiveRecord::Import::AbstractAdapter::InstanceMethods
    end
  end
end
==> activerecord-import-0.15.0/lib/activerecord-import/active_record/adapters/seamless_database_pool_adapter.rb <==

require "seamless_database_pool"
require "active_record/connection_adapters/seamless_database_pool_adapter"
require "activerecord-import/adapters/mysql_adapter"

class ActiveRecord::ConnectionAdapters::SeamlessDatabasePoolAdapter
  include ActiveRecord::Import::MysqlAdapter
end

==> activerecord-import-0.15.0/lib/activerecord-import/import.rb <==

require "ostruct"

module ActiveRecord::Import::ConnectionAdapters; end

module ActiveRecord::Import #:nodoc:
  Result = Struct.new(:failed_instances, :num_inserts, :ids)

  module ImportSupport #:nodoc:
    def supports_import? #:nodoc:
      true
    end
  end

  module OnDuplicateKeyUpdateSupport #:nodoc:
    def supports_on_duplicate_key_update? #:nodoc:
      true
    end
  end

  class MissingColumnError < StandardError
    def initialize(name, index)
      super "Missing column for value <#{name}> at index #{index}"
    end
  end
end

class ActiveRecord::Associations::CollectionProxy
  def import(*args, &block)
    @association.import(*args, &block)
  end
end

class ActiveRecord::Associations::CollectionAssociation
  def import(*args, &block)
    unless owner.persisted?
      raise ActiveRecord::RecordNotSaved, "You cannot call import unless the parent is saved"
    end

    options = args.last.is_a?(Hash) ? args.pop : {}

    model_klass = reflection.klass
    symbolized_foreign_key = reflection.foreign_key.to_sym
    symbolized_column_names = model_klass.column_names.map(&:to_sym)

    owner_primary_key = owner.class.primary_key
    owner_primary_key_value = owner.send(owner_primary_key)

    # assume array of model objects
    if args.last.is_a?( Array ) && args.last.first.is_a?(ActiveRecord::Base)
      if args.length == 2
        models = args.last
        column_names = args.first
      else
        models = args.first
        column_names = symbolized_column_names
      end

      unless symbolized_column_names.include?(symbolized_foreign_key)
        column_names << symbolized_foreign_key
      end

      models.each do |m|
        m.public_send "#{symbolized_foreign_key}=", owner_primary_key_value
        m.public_send "#{reflection.type}=", owner.class.name if reflection.type
      end

      return model_klass.import column_names, models, options

    # supports empty array
    elsif args.last.is_a?( Array ) && args.last.empty?
      return ActiveRecord::Import::Result.new([], 0, [])

    # supports 2-element array and array
    elsif args.size == 2 && args.first.is_a?( Array ) && args.last.is_a?( Array )
      column_names, array_of_attributes = args
      symbolized_column_names = column_names.map(&:to_s)

      if symbolized_column_names.include?(symbolized_foreign_key)
        index = symbolized_column_names.index(symbolized_foreign_key)
        array_of_attributes.each { |attrs| attrs[index] = owner_primary_key_value }
      else
        column_names << symbolized_foreign_key
        array_of_attributes.each { |attrs| attrs << owner_primary_key_value }
      end

      if reflection.type
        column_names << reflection.type
        array_of_attributes.each { |attrs| attrs << owner.class.name }
      end

      return model_klass.import column_names, array_of_attributes, options
    else
      raise ArgumentError, "Invalid arguments!"
    end
  end
end
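# Illustration (editor's note): the association import defined above lets
# records be imported through a has_many association. Publisher and Book
# are hypothetical models where Publisher has_many :books.
#
#   publisher = Publisher.create!(name: "Pragmatic")
#   books = Array.new(3) { |i| Book.new(title: "Book #{i}") }
#
#   # The owner must be persisted; the foreign key (publisher_id) is
#   # assigned on each book automatically before delegating to Book.import.
#   publisher.books.import books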
class ActiveRecord::Base
  class << self
    # use tz as set in ActiveRecord::Base
    tproc = lambda do
      ActiveRecord::Base.default_timezone == :utc ? Time.now.utc : Time.now
    end

    AREXT_RAILS_COLUMNS = {
      create: { "created_on" => tproc,
                "created_at" => tproc },
      update: { "updated_on" => tproc,
                "updated_at" => tproc }
    }.freeze
    AREXT_RAILS_COLUMN_NAMES = AREXT_RAILS_COLUMNS[:create].keys + AREXT_RAILS_COLUMNS[:update].keys

    # Returns true if the current database connection adapter
    # supports import functionality, otherwise returns false.
    def supports_import?(*args)
      connection.respond_to?(:supports_import?) && connection.supports_import?(*args)
    end

    # Returns true if the current database connection adapter
    # supports on duplicate key update functionality, otherwise
    # returns false.
    def supports_on_duplicate_key_update?
      connection.supports_on_duplicate_key_update?
    end

    # returns true if the current database connection adapter
    # supports setting the primary key of bulk imported models, otherwise
    # returns false
    def support_setting_primary_key_of_imported_objects?
      connection.respond_to?(:support_setting_primary_key_of_imported_objects?) &&
        connection.support_setting_primary_key_of_imported_objects?
    end

    # Imports a collection of values to the database.
    #
    # This is more efficient than using ActiveRecord::Base#create or
    # ActiveRecord::Base#save multiple times. This method works well if
    # you want to create more than one record at a time and do not care
    # about having ActiveRecord objects returned for each record
    # inserted.
    #
    # This can be used with or without validations. It does not utilize
    # the ActiveRecord::Callbacks during creation/modification while
    # performing the import.
    #
    # == Usage
    #  Model.import array_of_models
    #  Model.import column_names, array_of_values
    #  Model.import column_names, array_of_values, options
    #
    # ==== Model.import array_of_models
    #
    # With this form you can call _import_ passing in an array of model
    # objects that you want updated.
    #
    # ==== Model.import column_names, array_of_values
    #
    # The first parameter +column_names+ is an array of symbols or
    # strings which specify the columns that you want to update.
    #
    # The second parameter, +array_of_values+, is an array of
    # arrays. Each subarray is a single set of values for a new
    # record. The order of values in each subarray should match up to
    # the order of the +column_names+.
    #
    # ==== Model.import column_names, array_of_values, options
    #
    # The first two parameters are the same as the above form. The third
    # parameter, +options+, is a hash. This is optional. Please see
    # below for what +options+ are available.
    #
    # == Options
    # * +validate+ - true|false, tells import whether or not to use
    #   ActiveRecord validations. Validations are enforced by default.
    # * +ignore+ - true|false, tells import to use MySQL's INSERT IGNORE
    #   to discard records that contain duplicate keys.
    # * +on_duplicate_key_ignore+ - true|false, tells import to use
    #   Postgres 9.5+ ON CONFLICT DO NOTHING. Cannot be enabled on a
    #   recursive import.
    # * +on_duplicate_key_update+ - an Array or Hash, tells import to
    #   use MySQL's ON DUPLICATE KEY UPDATE or Postgres 9.5+ ON CONFLICT
    #   DO UPDATE ability. See On Duplicate Key Update below.
    # * +synchronize+ - an array of ActiveRecord instances for the model
    #   that you are currently importing data into. This synchronizes
    #   existing model instances in memory with updates from the import.
    # * +timestamps+ - true|false, tells import to not add timestamps
    #   (if false) even if record timestamps is disabled in ActiveRecord::Base
    # * +recursive+ - true|false, tells import to import all has_many/has_one
    #   associations if the adapter supports setting the primary keys of the
    #   newly imported objects.
    # * +batch_size+ - an integer value to specify the max number of records to
    #   include per insert. Defaults to the total number of records to import.
    #
    # == Examples
    #  class BlogPost < ActiveRecord::Base ; end
    #
    #  # Example using array of model objects
    #  posts = [ BlogPost.new author_name: 'Zach Dennis', title: 'AREXT',
    #            BlogPost.new author_name: 'Zach Dennis', title: 'AREXT2',
    #            BlogPost.new author_name: 'Zach Dennis', title: 'AREXT3' ]
    #  BlogPost.import posts
    #
    #  # Example using column_names and array_of_values
    #  columns = [ :author_name, :title ]
    #  values = [ [ 'zdennis', 'test post' ], [ 'jdoe', 'another test post' ] ]
    #  BlogPost.import columns, values
    #
    #  # Example using column_names, array_of_values and options
    #  columns = [ :author_name, :title ]
    #  values = [ [ 'zdennis', 'test post' ], [ 'jdoe', 'another test post' ] ]
    #  BlogPost.import( columns, values, validate: false )
    #
    #  # Example synchronizing existing instances in memory
    #  post = BlogPost.where(author_name: 'zdennis').first
    #  puts post.author_name # => 'zdennis'
    #  columns = [ :author_name, :title ]
    #  values = [ [ 'yoda', 'test post' ] ]
    #  BlogPost.import posts, synchronize: [ post ]
    #  puts post.author_name # => 'yoda'
    #
    #  # Example synchronizing unsaved/new instances in memory by using a unique imported field
    #  posts = [BlogPost.new(title: "Foo"), BlogPost.new(title: "Bar")]
    #  BlogPost.import posts, synchronize: posts, synchronize_keys: [:title]
    #  puts posts.first.persisted? # => true
    #
    # == On Duplicate Key Update (MySQL)
    #
    # The :on_duplicate_key_update option can be either an Array or a Hash.
    #
    # ==== Using an Array
    #
    # The :on_duplicate_key_update option can be an array of column
    # names. The column names are the only fields that are updated if
    # a duplicate record is found. Below is an example:
    #
    #   BlogPost.import columns, values, on_duplicate_key_update: [ :date_modified, :content, :author ]
    #
    # ==== Using A Hash
    #
    # The :on_duplicate_key_update option can be a hash of column names
    # to model attribute name mappings. This gives you finer grained
    # control over what fields are updated with what attributes on your
    # model. Below is an example:
    #
    #   BlogPost.import columns, attributes, on_duplicate_key_update: { title: :title }
    #
    # == On Duplicate Key Update (Postgres 9.5+)
    #
    # The :on_duplicate_key_update option can be an Array or a Hash with up to
    # two attributes, :conflict_target or :constraint_name and :columns.
    #
    # ==== Using an Array
    #
    # The :on_duplicate_key_update option can be an array of column
    # names. This option only handles inserts that conflict with the
    # primary key. If a table does not have a primary key, this will
    # not work. The column names are the only fields that are updated
    # if a duplicate record is found. Below is an example:
    #
    #   BlogPost.import columns, values, on_duplicate_key_update: [ :date_modified, :content, :author ]
    #
    # ==== Using a Hash
    #
    # The :on_duplicate_key_update option can be a hash with up to two attributes,
    # :conflict_target or :constraint_name, and :columns. Unlike MySQL, Postgres
    # requires the conflicting constraint to be explicitly specified. Using this
    # option allows you to specify a constraint other than the primary key.
    #
    # ====== :conflict_target
    #
    # The :conflict_target attribute specifies the columns that make up the
    # conflicting unique constraint and can be a single column or an array of
    # column names. This attribute is ignored if :constraint_name is included,
    # but it is the preferred method of identifying a constraint. It will
    # default to the primary key. Below is an example:
    #
    #   BlogPost.import columns, values, on_duplicate_key_update: { conflict_target: [:author_id, :slug], columns: [ :date_modified ] }
    #
    # ====== :constraint_name
    #
    # The :constraint_name attribute explicitly identifies the conflicting
    # unique index by name. Postgres documentation discourages using this method
    # of identifying an index unless absolutely necessary. Below is an example:
    #
    #   BlogPost.import columns, values, on_duplicate_key_update: { constraint_name: :blog_posts_pkey, columns: [ :date_modified ] }
    #
    # ====== :columns
    #
    # The :columns attribute can be either an Array or a Hash.
    #
    # ======== Using an Array
    #
    # The :columns attribute can be an array of column names. The column names
    # are the only fields that are updated if a duplicate record is found.
    # Below is an example:
    #
    #   BlogPost.import columns, values, on_duplicate_key_update: { conflict_target: :slug, columns: [ :date_modified, :content, :author ] }
    #
    # ======== Using a Hash
    #
    # The :columns option can be a hash of column names to model attribute name
    # mappings. This gives you finer grained control over what fields are updated
    # with what attributes on your model. Below is an example:
    #
    #   BlogPost.import columns, attributes, on_duplicate_key_update: { conflict_target: :slug, columns: { title: :title } }
    #
    # = Returns
    # This returns an object which responds to +failed_instances+ and +num_inserts+.
    # * failed_instances - an array of objects that failed validation and were not committed to the database. An empty array if no validation is performed.
    # * num_inserts - the number of insert statements it took to import the data
    # * ids - the primary keys of the imported records, if the adapter supports it, otherwise an empty array.
    def import(*args)
      if args.first.is_a?( Array ) && args.first.first.is_a?(ActiveRecord::Base)
        options = {}
        options.merge!( args.pop ) if args.last.is_a?(Hash)

        models = args.first
        import_helper(models, options)
      else
        import_helper(*args)
      end
    end

    # Imports a collection of values if all values are valid. Import fails at the
    # first encountered validation error and raises ActiveRecord::RecordInvalid
    # with the failed instance.
    def import!(*args)
      options = args.last.is_a?( Hash ) ? args.pop : {}
      options[:validate] = true
      options[:raise_error] = true

      import(*args, options)
    end
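    # Illustration (editor's note): import! stops at the first invalid
    # record. Topic here is a hypothetical model that validates presence
    # of author_name.
    #
    #   begin
    #     Topic.import! [:title, :author_name], [["Valid", "Jane"], ["Oops", nil]]
    #   rescue ActiveRecord::RecordInvalid => e
    #     e.record # the Topic instance that failed validation
    #   end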
    def import_helper( *args )
      options = { validate: true, timestamps: true, primary_key: primary_key }
      options.merge!( args.pop ) if args.last.is_a? Hash

      # Don't modify incoming arguments
      if options[:on_duplicate_key_update]
        options[:on_duplicate_key_update] = options[:on_duplicate_key_update].dup
      end

      is_validating = options[:validate]
      is_validating = true unless options[:validate_with_context].nil?

      # assume array of model objects
      if args.last.is_a?( Array ) && args.last.first.is_a?(ActiveRecord::Base)
        if args.length == 2
          models = args.last
          column_names = args.first
        else
          models = args.first
          column_names = self.column_names.dup
        end

        array_of_attributes = models.map do |model|
          # this next line breaks sqlite.so with a segmentation fault
          # if model.new_record? || options[:on_duplicate_key_update]
          column_names.map do |name|
            model.read_attribute_before_type_cast(name.to_s)
          end
          # end
        end
      # supports empty array
      elsif args.last.is_a?( Array ) && args.last.empty?
        return ActiveRecord::Import::Result.new([], 0, [])
      # supports 2-element array and array
      elsif args.size == 2 && args.first.is_a?( Array ) && args.last.is_a?( Array )
        column_names, array_of_attributes = args
        array_of_attributes = array_of_attributes.map(&:dup)
      else
        raise ArgumentError, "Invalid arguments!"
      end

      # dup the passed in array so we don't modify it unintentionally
      column_names = column_names.dup

      # Force the primary key col into the insert if it's not
      # on the list and we are using a sequence and stuff a nil
      # value for it into each row so the sequencer will fire later
      if !column_names.include?(primary_key) && connection.prefetch_primary_key? && sequence_name
        column_names << primary_key
        array_of_attributes.each { |a| a << nil }
      end

      # record timestamps unless disabled in ActiveRecord::Base
      if record_timestamps && options.delete( :timestamps )
        add_special_rails_stamps column_names, array_of_attributes, options
      end

      return_obj = if is_validating
        if models
          import_with_validations( column_names, array_of_attributes, options ) do |failed|
            models.each_with_index do |model, i|
              model = model.dup if options[:recursive]
              next if model.valid?(options[:validate_with_context])

              if options[:raise_error] && model.respond_to?(:raise_validation_error, true)
                model.send(:raise_validation_error) # Rails 5.0 and higher
              elsif options[:raise_error]
                model.send(:raise_record_invalid) # Rails 3.2, 4.0, 4.1 and 4.2
              end

              array_of_attributes[i] = nil
              failed << model
            end
          end
        else
          import_with_validations( column_names, array_of_attributes, options )
        end
      else
        (num_inserts, ids) = import_without_validations_or_callbacks( column_names, array_of_attributes, options )
        ActiveRecord::Import::Result.new([], num_inserts, ids)
      end

      if options[:synchronize]
        sync_keys = options[:synchronize_keys] || [primary_key]
        synchronize( options[:synchronize], sync_keys )
      end
      return_obj.num_inserts = 0 if return_obj.num_inserts.nil?

      # if we have ids, then set the id on the models and mark the models as clean.
      if support_setting_primary_key_of_imported_objects?
        set_ids_and_mark_clean(models, return_obj)

        # if there are auto-save associations on the models we imported that are new, import them as well
        import_associations(models, options.dup) if options[:recursive]
      end

      return_obj
    end

    # TODO import_from_table needs to be implemented.
    def import_from_table( options ) # :nodoc:
    end
    # Imports the passed in +column_names+ and +array_of_attributes+
    # given the passed in +options+ Hash with validations. Returns an
    # object with the methods +failed_instances+ and +num_inserts+.
    # +failed_instances+ is an array of instances that failed validations.
    # +num_inserts+ is the number of inserts it took to import the data. See
    # ActiveRecord::Base.import for more information on
    # +column_names+, +array_of_attributes+ and +options+.
    def import_with_validations( column_names, array_of_attributes, options = {} )
      failed_instances = []

      if block_given?
        yield failed_instances
      else
        # create instances for each of our column/value sets
        arr = validations_array_for_column_names_and_attributes( column_names, array_of_attributes )

        # keep track of the instance and the position it is currently at. If this fails
        # validation we'll use the index to remove it from the array_of_attributes
        model = new
        arr.each_with_index do |hsh, i|
          hsh.each_pair { |k, v| model[k] = v }
          next if model.valid?(options[:validate_with_context])
          raise(ActiveRecord::RecordInvalid, model) if options[:raise_error]
          array_of_attributes[i] = nil
          failed_instances << model.dup
        end
      end
      array_of_attributes.compact!

      num_inserts, ids = if array_of_attributes.empty? || options[:all_or_none] && failed_instances.any?
        [0, []]
      else
        import_without_validations_or_callbacks( column_names, array_of_attributes, options )
      end
      ActiveRecord::Import::Result.new(failed_instances, num_inserts, ids)
    end

    # Imports the passed in +column_names+ and +array_of_attributes+
    # given the passed in +options+ Hash. This will return the number
    # of insert operations it took to create these records without
    # validations or callbacks. See ActiveRecord::Base.import for more
    # information on +column_names+, +array_of_attributes+ and
    # +options+.
    def import_without_validations_or_callbacks( column_names, array_of_attributes, options = {} )
      column_names = column_names.map(&:to_sym)
      scope_columns, scope_values = scope_attributes.to_a.transpose

      unless scope_columns.blank?
        scope_columns.zip(scope_values).each do |name, value|
          name_as_sym = name.to_sym
          next if column_names.include?(name_as_sym)

          is_sti = (name_as_sym == inheritance_column.to_sym && self < base_class)
          value = value.first if is_sti

          column_names << name_as_sym
          array_of_attributes.each { |attrs| attrs << value }
        end
      end

      columns = column_names.each_with_index.map do |name, i|
        column = columns_hash[name.to_s]

        raise ActiveRecord::Import::MissingColumnError.new(name.to_s, i) if column.nil?

        column
      end

      columns_sql = "(#{column_names.map { |name| connection.quote_column_name(name) }.join(',')})"
      insert_sql = "INSERT #{options[:ignore] ? 'IGNORE ' : ''}INTO #{quoted_table_name} #{columns_sql} VALUES "
      values_sql = values_sql_for_columns_and_attributes(columns, array_of_attributes)

      number_inserted = 0
      ids = []
      if supports_import?
        # generate the sql
        post_sql_statements = connection.post_sql_statements( quoted_table_name, options )

        batch_size = options[:batch_size] || values_sql.size
        values_sql.each_slice(batch_size) do |batch_values|
          # perform the inserts
          result = connection.insert_many( [insert_sql, post_sql_statements].flatten,
            batch_values,
            "#{self.class.name} Create Many Without Validations Or Callbacks" )
          number_inserted += result[0]
          ids += result[1]
        end
      else
        values_sql.each do |values|
          ids << connection.insert(insert_sql + values)
          number_inserted += 1
        end
      end
      [number_inserted, ids]
    end

    private

    def set_ids_and_mark_clean(models, import_result)
      return if models.nil?
      import_result.ids.each_with_index do |id, index|
        model = models[index]
        model.id = id.to_i
        if model.respond_to?(:clear_changes_information) # Rails 4.0 and higher
          model.clear_changes_information
        else # Rails 3.2
          model.instance_variable_get(:@changed_attributes).clear
        end
        model.instance_variable_set(:@new_record, false)
      end
    end

    def import_associations(models, options)
      # now, for all the dirty associations, collect them into a new set of models, then recurse.
      # notes:
      #    does not handle associations that reference themselves
      #    should probably take a hash of associations to follow.
      associated_objects_by_class = {}
      models.each { |model| find_associated_objects_for_import(associated_objects_by_class, model) }

      # :on_duplicate_key_update not supported for associations
      options.delete(:on_duplicate_key_update)

      associated_objects_by_class.each_value do |associations|
        associations.each_value do |associated_records|
          associated_records.first.class.import(associated_records, options) unless associated_records.empty?
        end
      end
    end

    # We are eventually going to call Class.import so we build up a hash
    # of class => objects to import.
    def find_associated_objects_for_import(associated_objects_by_class, model)
      associated_objects_by_class[model.class.name] ||= {}

      association_reflections =
        model.class.reflect_on_all_associations(:has_one) +
        model.class.reflect_on_all_associations(:has_many)
      association_reflections.each do |association_reflection|
        associated_objects_by_class[model.class.name][association_reflection.name] ||= []

        association = model.association(association_reflection.name)
        association.loaded!

        # Wrap target in an array if not already
        association = Array(association.target)

        changed_objects = association.select { |a| a.new_record? || a.changed? }
        changed_objects.each do |child|
          child.public_send("#{association_reflection.foreign_key}=", model.id)
          # For polymorphic associations
          association_reflection.type.try do |type|
            child.public_send("#{type}=", model.class.name)
          end
        end
        associated_objects_by_class[model.class.name][association_reflection.name].concat changed_objects
      end

      associated_objects_by_class
    end

    # Returns the SQL VALUES clauses for an INSERT statement given the passed in +columns+
    # and +array_of_attributes+.
    def values_sql_for_columns_and_attributes(columns, array_of_attributes) # :nodoc:
      # connection gets called a *lot* in this high intensity loop.
      # Reuse the same one w/in the loop, otherwise it would keep being re-retrieved (= lots of time for large imports)
      connection_memo = connection

      array_of_attributes.map do |arr|
        my_values = arr.each_with_index.map do |val, j|
          column = columns[j]

          # be sure to query sequence_name *last*, only if cheaper tests fail, because it's costly
          if val.nil? && column.name == primary_key && !sequence_name.blank?
            connection_memo.next_value_for_sequence(sequence_name)
          elsif column
            if respond_to?(:type_caster) && type_caster.respond_to?(:type_cast_for_database) # Rails 5.0 and higher
              connection_memo.quote(type_caster.type_cast_for_database(column.name, val))
            elsif column.respond_to?(:type_cast_from_user) # Rails 4.2 and higher
              connection_memo.quote(column.type_cast_from_user(val), column)
            else # Rails 3.2, 4.0 and 4.1
              if serialized_attributes.include?(column.name)
                val = serialized_attributes[column.name].dump(val)
              end
              connection_memo.quote(column.type_cast(val), column)
            end
          end
        end
        "(#{my_values.join(',')})"
      end
    end

    def add_special_rails_stamps( column_names, array_of_attributes, options )
      AREXT_RAILS_COLUMNS[:create].each_pair do |key, blk|
        next unless self.column_names.include?(key)
        value = blk.call

        index = column_names.index(key) || column_names.index(key.to_sym)
        if index
          # replace every nil instance in the array of attributes with our value
          array_of_attributes.each { |arr| arr[index] = value if arr[index].nil? }
        else
          column_names << key
          array_of_attributes.each { |arr| arr << value }
        end
      end

      AREXT_RAILS_COLUMNS[:update].each_pair do |key, blk|
        next unless self.column_names.include?(key)
        value = blk.call

        index = column_names.index(key) || column_names.index(key.to_sym)
        if index
          # replace every instance in the array of attributes with our value
          array_of_attributes.each { |arr| arr[index] = value }
        else
          column_names << key
          array_of_attributes.each { |arr| arr << value }
        end

        if supports_on_duplicate_key_update?
          connection.add_column_for_on_duplicate_key_update(key, options)
        end
      end
    end

    # Returns an Array of Hashes for the passed in +column_names+ and +array_of_attributes+.
    def validations_array_for_column_names_and_attributes( column_names, array_of_attributes ) # :nodoc:
      array_of_attributes.map { |values| Hash[column_names.zip(values)] }
    end
  end
end

==> activerecord-import-0.15.0/lib/activerecord-import/value_sets_parser.rb <==

module ActiveRecord::Import
  class ValueSetsBytesParser
    attr_reader :reserved_bytes, :max_bytes, :values

    def self.parse(values, options)
      new(values, options).parse
    end

    def initialize(values, options)
      @values = values
      @reserved_bytes = options[:reserved_bytes]
      @max_bytes = options[:max_bytes]
    end

    def parse
      value_sets = []
      arr = []
      current_size = 0
      values.each_with_index do |val, i|
        comma_bytes = arr.size
        bytes_thus_far = reserved_bytes + current_size + val.bytesize + comma_bytes
        if bytes_thus_far <= max_bytes
          current_size += val.bytesize
          arr << val
        else
          value_sets << arr
          arr = [val]
          current_size = val.bytesize
        end

        # if we're on the last iteration push whatever we have in arr to value_sets
        value_sets << arr if i == (values.size - 1)
      end

      [*value_sets]
    end
  end

  class ValueSetsRecordsParser
    attr_reader :max_records, :values

    def self.parse(values, options)
      new(values, options).parse
    end

    def initialize(values, options)
      @values = values
      @max_records = options[:max_records]
    end

    def parse
      @values.in_groups_of(max_records, false)
    end
  end
end

==> activerecord-import-0.15.0/lib/activerecord-import/version.rb <==

module ActiveRecord
  module Import
    VERSION = "0.15.0".freeze
  end
end

==> activerecord-import-0.15.0/lib/activerecord-import/base.rb <==

require "pathname"
require "active_record"
require "active_record/version"

module ActiveRecord::Import
  ADAPTER_PATH = "activerecord-import/active_record/adapters".freeze

  def self.base_adapter(adapter)
    case adapter
    when 'mysql2_makara' then 'mysql2'
    when 'mysql2spatial' then 'mysql2'
    when 'spatialite' then 'sqlite3'
    when 'postgresql_makara' then 'postgresql'
    when 'postgis' then 'postgresql'
    else adapter
    end
  end

  # Loads the import functionality for a specific database adapter
  def self.require_adapter(adapter)
    require File.join(ADAPTER_PATH, "/abstract_adapter")
    begin
      require File.join(ADAPTER_PATH, "/#{base_adapter(adapter)}_adapter")
    rescue LoadError
      # fallback
    end
  end

  # Loads the import functionality for the passed in ActiveRecord connection
  def self.load_from_connection_pool(connection_pool)
    require_adapter connection_pool.spec.config[:adapter]
  end
end

require 'activerecord-import/import'
require 'activerecord-import/active_record/adapters/abstract_adapter'
require 'activerecord-import/synchronize'
require 'activerecord-import/value_sets_parser'
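==> editor's illustration (not part of the gem): adapter alias resolution <==

# base_adapter above maps spatial and connection-pooling adapter names
# onto the core implementations whose import code they reuse:
require "activerecord-import/base"

ActiveRecord::Import.base_adapter("postgis")       # => "postgresql"
ActiveRecord::Import.base_adapter("spatialite")    # => "sqlite3"
ActiveRecord::Import.base_adapter("mysql2_makara") # => "mysql2"
ActiveRecord::Import.base_adapter("sqlite3")       # => "sqlite3" (pass-through)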
==> activerecord-import-0.15.0/lib/activerecord-import.rb <==

# rubocop:disable Style/FileName
ActiveSupport.on_load(:active_record) do
  class ActiveRecord::Base
    class << self
      def establish_connection_with_activerecord_import(*args)
        conn = establish_connection_without_activerecord_import(*args)
        if !ActiveRecord.const_defined?(:Import) || !ActiveRecord::Import.respond_to?(:load_from_connection_pool)
          require "activerecord-import/base"
        end
        ActiveRecord::Import.load_from_connection_pool connection_pool
        conn
      end

      alias establish_connection_without_activerecord_import establish_connection
      alias establish_connection establish_connection_with_activerecord_import
    end
  end
end

==> activerecord-import-0.15.0/README.markdown <==

# activerecord-import [![Build Status](https://travis-ci.org/zdennis/activerecord-import.svg?branch=master)](https://travis-ci.org/zdennis/activerecord-import)

activerecord-import is a library for bulk inserting data using ActiveRecord.

One of its major features is following activerecord associations and generating the minimal
number of SQL insert statements required, avoiding the N+1 insert problem. An example
probably explains it best. Say you had a schema like this:

- Publishers have Books
- Books have Reviews

and you wanted to bulk insert 100 new publishers with 10K books and 3 reviews per book.
This library will follow the associations down and generate only 3 SQL insert statements -
one for the publishers, one for the books, and one for the reviews.

In contrast, the standard ActiveRecord save would generate 100 insert statements for the
publishers, then it would visit each publisher and save all the books:
100 * 10,000 = 1,000,000 SQL insert statements, and then the reviews:
100 * 10,000 * 3 = 3M SQL insert statements.

That would be about 4M SQL insert statements vs 3, which results in vastly improved
performance. In our case, it converted an 18 hour batch process to <2 hrs.
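Here is a sketch of that scenario (the model names are assumed, and `recursive: true`
requires an adapter that can return primary keys, such as PostgreSQL):

```ruby
# Assumes Publisher has_many :books and Book has_many :reviews.
publishers = 2.times.map do |i|
  publisher = Publisher.new(name: "Publisher #{i}")
  publisher.books = 3.times.map do |j|
    book = Book.new(title: "Book #{i}-#{j}")
    book.reviews = 2.times.map { |k| Review.new(body: "Review #{k}") }
    book
  end
  publisher
end

# One INSERT for publishers, one for books, one for reviews:
Publisher.import publishers, recursive: true
```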
### Rails 5.0

Use activerecord-import 0.11.0 or higher.

### Rails 4.0

Use activerecord-import 0.4.0 or higher.

### Rails 3.1.x up to, but not including 4.0

Use the latest in the activerecord-import 0.3.x series.

### Rails 3.0.x up to, but not including 3.1

Use activerecord-import 0.2.11. As of activerecord-import 0.3.0 we are relying on
functionality that was introduced in Rails 3.1. Since Rails 3.0.x is no longer a
supported version of Rails we have decided to drop support as well.

### For More Information

For more information on activerecord-import please see its wiki: https://github.com/zdennis/activerecord-import/wiki

## Additional Adapters

Additional adapters can be provided by gems external to activerecord-import by providing
an adapter that matches the naming convention set up by activerecord-import (and
subsequently activerecord) for dynamically loading adapters. This involves also providing
a folder on the load path that follows the activerecord-import naming convention to allow
activerecord-import to dynamically load the file.

When `ActiveRecord::Import.require_adapter("fake_name")` is called the require will be:

```ruby
require 'activerecord-import/active_record/adapters/fake_name_adapter'
```

This allows an external gem to dynamically add an adapter without the need to add any
file/code to the core activerecord-import gem.

### Load Path Setup

To understand how rubygems loads code you can reference the following:

http://guides.rubygems.org/patterns/#loading_code

And an example of how active_record dynamically loads adapters:

https://github.com/rails/rails/blob/master/activerecord/lib/active_record/connection_adapters/connection_specification.rb

In summary, when a gem is loaded rubygems adds the `lib` folder of the gem to the global
load path `$LOAD_PATH` so that all `require` lookups will now propagate through all of
the folders on the load path. When a `require` is issued each folder on the `$LOAD_PATH`
is checked for the file and/or folder referenced. This allows a gem (like
activerecord-import) to push the activerecord-import folder (or namespace) onto the
`$LOAD_PATH` and any adapters provided by activerecord-import will be found by rubygems
when the require is issued.

If the `fake_name` adapter is needed by a gem (potentially called
`activerecord-import-fake_name`) then the folder structure should look as follows:

```bash
activerecord-import-fake_name/
|-- activerecord-import-fake_name.gemspec
|-- lib
|   |-- activerecord-import-fake_name
|   |   |-- version.rb
|   |-- activerecord-import
|   |   |-- active_record
|   |   |   |-- adapters
|   |   |       |-- fake_name_adapter.rb
|   |-- activerecord-import-fake_name.rb
```

When rubygems pushes the `lib` folder onto the load path a `require` will now find
`activerecord-import/active_record/adapters/fake_name_adapter` as it runs through the
lookup process for a ruby file under that path in `$LOAD_PATH`.

# License

This is licensed under the ruby license.

# Author

Zach Dennis (zach.dennis@gmail.com)

# Contributors

* Jordan Owens (@jkowens)
* Erik Michaels-Ober (@sferik)
* Blythe Dunham
* Gabe da Silveira
* Henry Work
* James Herdman
* Marcus Crafter
* Thibaud Guillaume-Gentil
* Mark Van Holstyn
* Victor Costan

==> activerecord-import-0.15.0/gemfiles/4.0.gemfile <==

platforms :ruby do
  gem 'activerecord', '~> 4.0.0'
end

==> activerecord-import-0.15.0/gemfiles/4.2.gemfile <==

platforms :ruby do
  gem 'activerecord', '~> 4.2.0'
end

platforms :jruby do
  gem 'activerecord', '~> 4.2.0'
end

==> activerecord-import-0.15.0/gemfiles/5.0.gemfile <==

platforms :ruby do
  gem 'activerecord', '~> 5.0.0'
end

==> activerecord-import-0.15.0/gemfiles/4.1.gemfile <==

platforms :ruby do
  gem 'activerecord', '~> 4.1.0'
end

==> activerecord-import-0.15.0/gemfiles/3.2.gemfile <==

platforms :ruby do
  gem 'activerecord', '~> 3.2.0'
end

==> activerecord-import-0.15.0/.rubocop.yml <==

inherit_from: .rubocop_todo.yml

Lint/EndAlignment:
  AlignWith: variable

Metrics/AbcSize:
  Enabled: false

Metrics/ClassLength:
  Enabled: false

Metrics/CyclomaticComplexity:
  Enabled: false

Metrics/LineLength:
  Enabled: false

Metrics/MethodLength:
  Enabled: false

Metrics/ModuleLength:
  Enabled: false

Metrics/PerceivedComplexity:
  Enabled: false

Style/AlignParameters:
  EnforcedStyle: with_fixed_indentation

Style/ClassAndModuleChildren:
  Enabled: false

Style/Documentation:
  Enabled: false

Style/ElseAlignment:
  Enabled: false

Style/SpaceInsideParens:
  Enabled: false
Style/SpecialGlobalVars: Enabled: false Style/StringLiterals: Enabled: false Style/TrailingCommaInLiteral: Enabled: false activerecord-import-0.15.0/test/0000755000004100000410000000000012737365070016600 5ustar www-datawww-dataactiverecord-import-0.15.0/test/import_test.rb0000644000004100000410000004416412737365070021507 0ustar www-datawww-datarequire File.expand_path('../test_helper', __FILE__) describe "#import" do it "should return the number of inserts performed" do # see ActiveRecord::ConnectionAdapters::AbstractAdapter test for more specifics assert_difference "Topic.count", +10 do result = Topic.import Build(3, :topics) assert result.num_inserts > 0 result = Topic.import Build(7, :topics) assert result.num_inserts > 0 end end it "should not produce an error when importing empty arrays" do assert_nothing_raised do Topic.import [] Topic.import %w(title author_name), [] end end describe "argument safety" do it "should not modify the passed in columns array" do assert_nothing_raised do columns = %w(title author_name).freeze Topic.import columns, [%w(foo bar)] end end it "should not modify the passed in values array" do assert_nothing_raised do record = %w(foo bar).freeze values = [record].freeze Topic.import %w(title author_name), values end end end describe "with non-default ActiveRecord models" do context "that have a non-standard primary key (that is not a sequence)" do it "should import models successfully" do assert_difference "Widget.count", +3 do Widget.import Build(3, :widgets) end end end end context "with :validation option" do let(:columns) { %w(title author_name) } let(:valid_values) { [["LDAP", "Jerry Carter"], ["Rails Recipes", "Chad Fowler"]] } let(:valid_values_with_context) { [[1111, "Jerry Carter"], [2222, "Chad Fowler"]] } let(:invalid_values) { [["The RSpec Book", ""], ["Agile+UX", ""]] } context "with validation checks turned off" do it "should import valid data" do assert_difference "Topic.count", +2 do Topic.import columns, valid_values, validate: false end end it "should import invalid data" do assert_difference "Topic.count", +2 do Topic.import columns, invalid_values, validate: false end end it 'should raise a specific error if a column does not exist' do assert_raises ActiveRecord::Import::MissingColumnError do Topic.import ['foo'], [['bar']], validate: false end end end context "with validation checks turned on" do it "should import valid data" do assert_difference "Topic.count", +2 do Topic.import columns, valid_values, validate: true end end it "should import valid data with on option" do assert_difference "Topic.count", +2 do Topic.import columns, valid_values_with_context, validate_with_context: :context_test end end it "should not import invalid data" do assert_no_difference "Topic.count" do Topic.import columns, invalid_values, validate: true end end it "should not import data that is invalid in the given context" do assert_no_difference "Topic.count" do Topic.import columns, valid_values, validate_with_context: :context_test end end it "should report the failed instances" do results = Topic.import columns, invalid_values, validate: true assert_equal invalid_values.size, results.failed_instances.size results.failed_instances.each { |e| assert_kind_of Topic, e } end it "should import valid data when mixed with invalid data" do assert_difference "Topic.count", +2 do Topic.import columns, valid_values + invalid_values, validate: true end assert_equal 0, Topic.where(title: invalid_values.map(&:first)).count end end end context "with :all_or_none option" do let(:columns) {
%w(title author_name) } let(:valid_values) { [["LDAP", "Jerry Carter"], ["Rails Recipes", "Chad Fowler"]] } let(:invalid_values) { [["The RSpec Book", ""], ["Agile+UX", ""]] } let(:mixed_values) { valid_values + invalid_values } context "with validation checks turned on" do it "should import valid data" do assert_difference "Topic.count", +2 do Topic.import columns, valid_values, all_or_none: true end end it "should not import invalid data" do assert_no_difference "Topic.count" do Topic.import columns, invalid_values, all_or_none: true end end it "should not import valid data when mixed with invalid data" do assert_no_difference "Topic.count" do Topic.import columns, mixed_values, all_or_none: true end end it "should report the failed instances" do results = Topic.import columns, mixed_values, all_or_none: true assert_equal invalid_values.size, results.failed_instances.size results.failed_instances.each { |e| assert_kind_of Topic, e } end it "should report the zero inserts" do results = Topic.import columns, mixed_values, all_or_none: true assert_equal 0, results.num_inserts end end end context "with :batch_size option" do it "should import with a single insert" do assert_difference "Topic.count", +10 do result = Topic.import Build(10, :topics), batch_size: 10 assert_equal 1, result.num_inserts if Topic.supports_import? end end it "should import with multiple inserts" do assert_difference "Topic.count", +10 do result = Topic.import Build(10, :topics), batch_size: 4 assert_equal 3, result.num_inserts if Topic.supports_import? end end end context "with :synchronize option" do context "synchronizing on new records" do let(:new_topics) { Build(3, :topics) } it "doesn't reload any data (doesn't work)" do Topic.import new_topics, synchronize: new_topics if Topic.support_setting_primary_key_of_imported_objects? assert new_topics.all?(&:persisted?), "Records should have been reloaded" else assert new_topics.all?(&:new_record?), "No record should have been reloaded" end end end context "synchronizing on new records with explicit conditions" do let(:new_topics) { Build(3, :topics) } it "reloads data for existing in-memory instances" do Topic.import(new_topics, synchronize: new_topics, synchronize_keys: [:title] ) assert new_topics.all?(&:persisted?), "Records should have been reloaded" end end context "synchronizing on destroyed records with explicit conditions" do let(:new_topics) { Generate(3, :topics) } it "reloads data for existing in-memory instances" do new_topics.each(&:destroy) Topic.import(new_topics, synchronize: new_topics, synchronize_keys: [:title] ) assert new_topics.all?(&:persisted?), "Records should have been reloaded" end end end context "with an array of unsaved model instances" do let(:topic) { Build(:topic, title: "The RSpec Book", author_name: "David Chelimsky") } let(:topics) { Build(9, :topics) } let(:invalid_topics) { Build(7, :invalid_topics) } it "should import records based on those models' attributes" do assert_difference "Topic.count", +9 do Topic.import topics end Topic.import [topic] assert Topic.where(title: "The RSpec Book", author_name: "David Chelimsky").first end it "should not overwrite existing records" do topic = Generate(:topic, title: "foobar") assert_no_difference "Topic.count" do begin Topic.transaction do topic.title = "baz" Topic.import [topic] end rescue Exception # PostgreSQL raises PgError due to key constraints # I don't know why ActiveRecord doesn't catch these.
*sigh* end end assert_equal "foobar", topic.reload.title end context "with validation checks turned on" do it "should import valid models" do assert_difference "Topic.count", +9 do Topic.import topics, validate: true end end it "should not import invalid models" do assert_no_difference "Topic.count" do Topic.import invalid_topics, validate: true end end end context "with validation checks turned off" do it "should import invalid models" do assert_difference "Topic.count", +7 do Topic.import invalid_topics, validate: false end end end end context "with an array of columns and an array of unsaved model instances" do let(:topics) { Build(2, :topics) } it "should import records populating the supplied columns with the corresponding model instance attributes" do assert_difference "Topic.count", +2 do Topic.import [:author_name, :title], topics end # imported topics should be findable by their imported attributes assert Topic.where(author_name: topics.first.author_name).first assert Topic.where(author_name: topics.last.author_name).first end it "should not populate fields for columns not imported" do topics.first.author_email_address = "zach.dennis@gmail.com" assert_difference "Topic.count", +2 do Topic.import [:author_name, :title], topics end assert !Topic.where(author_email_address: "zach.dennis@gmail.com").first end end context "with an array of columns and an array of values" do it "should import ids when specified" do Topic.import [:id, :author_name, :title], [[99, "Bob Jones", "Topic 99"]] assert_equal 99, Topic.last.id end end context "ActiveRecord timestamps" do let(:time) { Chronic.parse("5 minutes ago") } context "when the timestamps columns are present" do setup do @existing_book = Book.create(title: "Fell", author_name: "Curry", publisher: "Bayer", created_at: 2.years.ago.utc, created_on: 2.years.ago.utc) ActiveRecord::Base.default_timezone = :utc Timecop.freeze(time) do assert_difference "Book.count", +2 do Book.import %w(title author_name publisher created_at created_on), [["LDAP", "Big Bird", "Del Rey", nil, nil], [@existing_book.title, @existing_book.author_name, @existing_book.publisher, @existing_book.created_at, @existing_book.created_on]] end end @new_book, @existing_book = Book.last 2 end it "should set the created_at column for new records" do assert_in_delta time.to_i, @new_book.created_at.to_i, 1.second end it "should set the created_on column for new records" do assert_in_delta time.to_i, @new_book.created_on.to_i, 1.second end it "should not set the created_at column for existing records" do assert_equal 2.years.ago.utc.strftime("%Y:%d"), @existing_book.created_at.strftime("%Y:%d") end it "should not set the created_on column for existing records" do assert_equal 2.years.ago.utc.strftime("%Y:%d"), @existing_book.created_on.strftime("%Y:%d") end it "should set the updated_at column for new records" do assert_in_delta time.to_i, @new_book.updated_at.to_i, 1.second end it "should set the updated_on column for new records" do assert_in_delta time.to_i, @new_book.updated_on.to_i, 1.second end end context "when a custom time zone is set" do setup do Timecop.freeze(time) do assert_difference "Book.count", +1 do Book.import [:title, :author_name, :publisher], [["LDAP", "Big Bird", "Del Rey"]] end end @book = Book.last end it "should set the created_at and created_on timestamps for new records" do assert_in_delta time.to_i, @book.created_at.to_i, 1.second assert_in_delta time.to_i, @book.created_on.to_i, 1.second end it "should set the updated_at and updated_on timestamps for 
new records" do assert_in_delta time.to_i, @book.updated_at.to_i, 1.second assert_in_delta time.to_i, @book.updated_on.to_i, 1.second end end end context "importing with database reserved words" do let(:group) { Build(:group, order: "superx") } it "should import just fine" do assert_difference "Group.count", +1 do Group.import [group] end assert_equal "superx", Group.first.order end end context "importing a datetime field" do it "should import a date with YYYY/MM/DD format just fine" do Topic.import [:author_name, :title, :last_read], [["Bob Jones", "Topic 2", "2010/05/14"]] assert_equal "2010/05/14".to_date, Topic.last.last_read.to_date end end context "importing through an association scope" do { has_many: :chapters, polymorphic: :discounts }.each do |association_type, association| let(:book) { FactoryGirl.create :book } let(:scope) { book.public_send association } let(:klass) { { chapters: Chapter, discounts: Discount }[association] } let(:column) { { chapters: :title, discounts: :amount }[association] } let(:val1) { { chapters: 'A', discounts: 5 }[association] } let(:val2) { { chapters: 'B', discounts: 6 }[association] } context "for #{association_type}" do it "works importing models" do scope.import [ klass.new(column => val1), klass.new(column => val2) ] assert_equal [val1, val2], scope.map(&column).sort end it "works importing array of columns and values" do scope.import [column], [[val1], [val2]] assert_equal [val1, val2], scope.map(&column).sort end end end end context 'When importing models with Enum fields' do it 'should be able to import enum fields' do Book.delete_all if Book.count > 0 books = [ Book.new(author_name: "Foo", title: "Baz", status: 0), Book.new(author_name: "Foo2", title: "Baz2", status: 1), ] Book.import books assert_equal 2, Book.count if ENV['AR_VERSION'].to_i >= 5.0 assert_equal 'draft', Book.first.read_attribute('status') assert_equal 'published', Book.last.read_attribute('status') else assert_equal 0, Book.first.read_attribute('status') assert_equal 1, Book.last.read_attribute('status') end end it 'should be able to import enum fields with default value' do Book.delete_all if Book.count > 0 books = [ Book.new(author_name: "Foo", title: "Baz") ] Book.import books assert_equal 1, Book.count if ENV['AR_VERSION'].to_i >= 5.0 assert_equal 'draft', Book.first.read_attribute('status') else assert_equal 0, Book.first.read_attribute('status') end end if ENV['AR_VERSION'].to_f > 4.1 it 'should be able to import enum fields by name' do Book.delete_all if Book.count > 0 books = [ Book.new(author_name: "Foo", title: "Baz", status: :draft), Book.new(author_name: "Foo2", title: "Baz2", status: :published), ] Book.import books assert_equal 2, Book.count if ENV['AR_VERSION'].to_i >= 5.0 assert_equal 'draft', Book.first.read_attribute('status') assert_equal 'published', Book.last.read_attribute('status') else assert_equal 0, Book.first.read_attribute('status') assert_equal 1, Book.last.read_attribute('status') end end end end context 'When importing arrays of values with Enum fields' do let(:columns) { [:author_name, :title, :status] } let(:values) { [['Author #1', 'Book #1', 0], ['Author #2', 'Book #2', 1]] } it 'should be able to import enum fields' do Book.delete_all if Book.count > 0 Book.import columns, values assert_equal 2, Book.count if ENV['AR_VERSION'].to_i >= 5.0 assert_equal 'draft', Book.first.read_attribute('status') assert_equal 'published', Book.last.read_attribute('status') else assert_equal 0, Book.first.read_attribute('status') assert_equal 1, 
Book.last.read_attribute('status') end end end describe "importing when model has default_scope" do it "doesn't import the default scope values" do assert_difference "Widget.unscoped.count", +2 do Widget.import [:w_id], [[1], [2]] end default_scope_value = Widget.scope_attributes[:active] assert_not_equal default_scope_value, Widget.unscoped.find_by_w_id(1) assert_not_equal default_scope_value, Widget.unscoped.find_by_w_id(2) end it "imports columns that are a part of the default scope using the value specified" do assert_difference "Widget.unscoped.count", +2 do Widget.import [:w_id, :active], [[1, true], [2, false]] end assert_not_equal true, Widget.unscoped.find_by_w_id(1) assert_not_equal false, Widget.unscoped.find_by_w_id(2) end end describe "importing serialized fields" do it "imports values for serialized fields" do assert_difference "Widget.unscoped.count", +1 do Widget.import [:w_id, :data], [[1, { a: :b }]] end assert_equal({ a: :b }, Widget.find_by_w_id(1).data) end if ENV['AR_VERSION'].to_f >= 3.1 let(:data) { { a: :b } } it "imports values for serialized JSON fields" do assert_difference "Widget.unscoped.count", +1 do Widget.import [:w_id, :json_data], [[9, data]] end assert_equal(data.as_json, Widget.find_by_w_id(9).json_data) end end end describe "#import!" do let(:columns) { %w(title author_name) } let(:valid_values) { [["LDAP", "Jerry Carter"], ["Rails Recipes", "Chad Fowler"]] } let(:invalid_values) { [["Rails Recipes", "Chad Fowler"], ["The RSpec Book", ""], ["Agile+UX", ""]] } context "with invalid data" do it "should raise ActiveRecord::RecordInvalid" do assert_no_difference "Topic.count" do assert_raise ActiveRecord::RecordInvalid do Topic.import! columns, invalid_values end end end end context "with valid data" do it "should import data" do assert_difference "Topic.count", +2 do Topic.import! 
columns, valid_values end end end end end activerecord-import-0.15.0/test/adapters/0000755000004100000410000000000012737365070020403 5ustar www-datawww-dataactiverecord-import-0.15.0/test/adapters/mysql2.rb0000644000004100000410000000003112737365070022151 0ustar www-datawww-dataENV["ARE_DB"] = "mysql2" activerecord-import-0.15.0/test/adapters/postgis.rb0000644000004100000410000000003212737365070022413 0ustar www-datawww-dataENV["ARE_DB"] = "postgis" activerecord-import-0.15.0/test/adapters/postgresql.rb0000644000004100000410000000003512737365070023131 0ustar www-datawww-dataENV["ARE_DB"] = "postgresql" activerecord-import-0.15.0/test/adapters/sqlite3.rb0000644000004100000410000000003212737365070022307 0ustar www-datawww-dataENV["ARE_DB"] = "sqlite3" activerecord-import-0.15.0/test/adapters/mysql2spatial.rb0000644000004100000410000000004012737365070023527 0ustar www-datawww-dataENV["ARE_DB"] = "mysql2spatial" activerecord-import-0.15.0/test/adapters/spatialite.rb0000644000004100000410000000003512737365070023065 0ustar www-datawww-dataENV["ARE_DB"] = "spatialite" activerecord-import-0.15.0/test/adapters/jdbcpostgresql.rb0000644000004100000410000000004112737365070023751 0ustar www-datawww-dataENV["ARE_DB"] = "jdbcpostgresql" activerecord-import-0.15.0/test/adapters/postgresql_makara.rb0000644000004100000410000000003512737365070024445 0ustar www-datawww-dataENV["ARE_DB"] = "postgresql" activerecord-import-0.15.0/test/adapters/jdbcmysql.rb0000644000004100000410000000003412737365070022715 0ustar www-datawww-dataENV["ARE_DB"] = "jdbcmysql" activerecord-import-0.15.0/test/adapters/seamless_database_pool.rb0000644000004100000410000000005112737365070025415 0ustar www-datawww-dataENV["ARE_DB"] = "seamless_database_pool" activerecord-import-0.15.0/test/adapters/mysql2_makara.rb0000644000004100000410000000004012737365070023465 0ustar www-datawww-dataENV["ARE_DB"] = "mysql2_makara" activerecord-import-0.15.0/test/schema/0000755000004100000410000000000012737365070020040 5ustar www-datawww-dataactiverecord-import-0.15.0/test/schema/generic_schema.rb0000644000004100000410000000673412737365070023333 0ustar www-datawww-dataActiveRecord::Schema.define do create_table :schema_info, force: :cascade do |t| t.integer :version, unique: true end SchemaInfo.create version: SchemaInfo::VERSION create_table :group, force: :cascade do |t| t.string :order t.timestamps null: true end create_table :topics, force: :cascade do |t| t.string :title, null: false t.string :author_name t.string :author_email_address t.datetime :written_on t.time :bonus_time t.datetime :last_read t.text :content t.boolean :approved, default: '1' t.integer :replies_count t.integer :parent_id t.string :type t.datetime :created_at t.datetime :created_on t.datetime :updated_at t.datetime :updated_on end create_table :projects, force: :cascade do |t| t.string :name t.string :type end create_table :developers, force: :cascade do |t| t.string :name t.integer :salary, default: '70000' t.datetime :created_at t.integer :team_id t.datetime :updated_at end create_table :addresses, force: :cascade do |t| t.string :address t.string :city t.string :state t.string :zip t.integer :developer_id end create_table :teams, force: :cascade do |t| t.string :name end create_table :books, force: :cascade do |t| t.string :title, null: false t.string :publisher, null: false, default: 'Default Publisher' t.string :author_name, null: false t.datetime :created_at t.datetime :created_on t.datetime :updated_at t.datetime :updated_on t.date :publish_date t.integer :topic_id 
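# the remaining columns back model-level behavior: for_sale defaults to true, and the integer status column backs Book's enum (0 = :draft, 1 = :published) on AR >= 4.1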
t.boolean :for_sale, default: true t.integer :status, default: 0 end create_table :chapters, force: :cascade do |t| t.string :title t.integer :book_id, null: false t.datetime :created_at t.datetime :updated_at end create_table :end_notes, force: :cascade do |t| t.string :note t.integer :book_id, null: false t.datetime :created_at t.datetime :updated_at end create_table :languages, force: :cascade do |t| t.string :name t.integer :developer_id end create_table :shopping_carts, force: :cascade do |t| t.string :name, null: true t.datetime :created_at t.datetime :updated_at end create_table :cart_items, force: :cascade do |t| t.string :shopping_cart_id, null: false t.string :book_id, null: false t.integer :copies, default: 1 t.datetime :created_at t.datetime :updated_at end add_index :cart_items, [:shopping_cart_id, :book_id], unique: true, name: 'uk_shopping_cart_books' create_table :animals, force: :cascade do |t| t.string :name, null: false t.string :size, default: nil t.datetime :created_at t.datetime :updated_at end add_index :animals, [:name], unique: true, name: 'uk_animals' create_table :widgets, id: false, force: :cascade do |t| t.integer :w_id t.boolean :active, default: false t.text :data t.text :json_data end create_table :promotions, primary_key: :promotion_id, force: :cascade do |t| t.string :code t.string :description t.decimal :discount end add_index :promotions, [:code], unique: true, name: 'uk_code' create_table :discounts, force: :cascade do |t| t.decimal :amount t.integer :discountable_id t.string :discountable_type end create_table :rules, force: :cascade do |t| t.string :condition_text t.integer :question_id end create_table :questions, force: :cascade do |t| t.string :body end end activerecord-import-0.15.0/test/schema/version.rb0000644000004100000410000000042112737365070022047 0ustar www-datawww-dataclass SchemaInfo < ActiveRecord::Base if respond_to?(:table_name=) self.table_name = 'schema_info' else # this is becoming deprecated in ActiveRecord but not all adapters supported it # at this time set_table_name 'schema_info' end VERSION = 12 end activerecord-import-0.15.0/test/schema/mysql_schema.rb0000644000004100000410000000120712737365070023052 0ustar www-datawww-dataActiveRecord::Schema.define do create_table :books, options: 'ENGINE=MyISAM', force: true do |t| t.column :title, :string, null: false t.column :publisher, :string, null: false, default: 'Default Publisher' t.column :author_name, :string, null: false t.column :created_at, :datetime t.column :created_on, :datetime t.column :updated_at, :datetime t.column :updated_on, :datetime t.column :publish_date, :date t.column :topic_id, :integer t.column :for_sale, :boolean, default: true t.column :status, :integer end execute "ALTER TABLE books ADD FULLTEXT( `title`, `publisher`, `author_name` )" end activerecord-import-0.15.0/test/synchronize_test.rb0000644000004100000410000000303012737365070022533 0ustar www-datawww-datarequire File.expand_path('../test_helper', __FILE__) describe ".synchronize" do let(:topics) { Generate(3, :topics) } let(:titles) { %w(one two three) } setup do # update records without ActiveRecord knowing about it Topic.connection.execute( "UPDATE #{Topic.table_name} SET title='#{titles[0]}_haha' WHERE id=#{topics[0].id}", "Updating record 1 without ActiveRecord" ) Topic.connection.execute( "UPDATE #{Topic.table_name} SET title='#{titles[1]}_haha' WHERE id=#{topics[1].id}", "Updating record 2 without ActiveRecord" ) Topic.connection.execute( "UPDATE #{Topic.table_name} SET
title='#{titles[2]}_haha' WHERE id=#{topics[2].id}", "Updating record 3 without ActiveRecord" ) end it "reloads data for the specified records" do Topic.synchronize topics actual_titles = topics.map(&:title) assert_equal "#{titles[0]}_haha", actual_titles[0], "the first record was not correctly updated" assert_equal "#{titles[1]}_haha", actual_titles[1], "the second record was not correctly updated" assert_equal "#{titles[2]}_haha", actual_titles[2], "the third record was not correctly updated" end it "the synchronized records aren't dirty" do # Update the in memory records so they're dirty topics.each { |topic| topic.title = 'dirty title' } Topic.synchronize topics assert_equal false, topics[0].changed?, "the first record was dirty" assert_equal false, topics[1].changed?, "the second record was dirty" assert_equal false, topics[2].changed?, "the third record was dirty" end end activerecord-import-0.15.0/test/jdbcmysql/0000755000004100000410000000000012737365070020570 5ustar www-datawww-dataactiverecord-import-0.15.0/test/jdbcmysql/import_test.rb0000644000004100000410000000042412737365070023466 0ustar www-datawww-datarequire File.expand_path(File.dirname(__FILE__) + '/../test_helper') require File.expand_path(File.dirname(__FILE__) + '/../support/assertions') require File.expand_path(File.dirname(__FILE__) + '/../support/mysql/import_examples') should_support_mysql_import_functionality activerecord-import-0.15.0/test/mysqlspatial2/0000755000004100000410000000000012737365070021405 5ustar www-datawww-dataactiverecord-import-0.15.0/test/mysqlspatial2/import_test.rb0000644000004100000410000000042412737365070024303 0ustar www-datawww-datarequire File.expand_path(File.dirname(__FILE__) + '/../test_helper') require File.expand_path(File.dirname(__FILE__) + '/../support/assertions') require File.expand_path(File.dirname(__FILE__) + '/../support/mysql/import_examples') should_support_mysql_import_functionality activerecord-import-0.15.0/test/value_sets_bytes_parser_test.rb0000644000004100000410000000713712737365070025128 0ustar www-datawww-datarequire File.expand_path(File.dirname(__FILE__) + '/test_helper') require 'activerecord-import/value_sets_parser' describe ActiveRecord::Import::ValueSetsBytesParser do context "#parse - computing insert value sets" do let(:parser) { ActiveRecord::Import::ValueSetsBytesParser } let(:base_sql) { "INSERT INTO atable (a,b,c)" } let(:values) { ["(1,2,3)", "(2,3,4)", "(3,4,5)"] } context "when the max allowed bytes is 33 and the base SQL is 26 bytes" do it "should return 3 value sets when given 3 value sets of 7 bytes a piece" do value_sets = parser.parse values, reserved_bytes: base_sql.size, max_bytes: 33 assert_equal 3, value_sets.size end end context "when the max allowed bytes is 40 and the base SQL is 26 bytes" do it "should return 3 value sets when given 3 value sets of 7 bytes a piece" do value_sets = parser.parse values, reserved_bytes: base_sql.size, max_bytes: 40 assert_equal 3, value_sets.size end end context "when the max allowed bytes is 41 and the base SQL is 26 bytes" do it "should return 2 value sets when given 3 value sets of 7 bytes a piece" do value_sets = parser.parse values, reserved_bytes: base_sql.size, max_bytes: 41 assert_equal 2, value_sets.size end end context "when the max allowed bytes is 48 and the base SQL is 26 bytes" do it "should return 2 value sets when given 3 value sets of 7 bytes a piece" do value_sets = parser.parse values, reserved_bytes: base_sql.size, max_bytes: 48 assert_equal 2, value_sets.size end end context "when the
max allowed bytes is 49 and the base SQL is 26 bytes" do it "should return 1 value set when given 3 value sets of 7 bytes a piece" do value_sets = parser.parse values, reserved_bytes: base_sql.size, max_bytes: 49 assert_equal 1, value_sets.size end end context "when the max allowed bytes is 999999 and the base SQL is 26 bytes" do it "should return 1 value set when given 3 value sets of 7 bytes a piece" do value_sets = parser.parse values, reserved_bytes: base_sql.size, max_bytes: 999_999 assert_equal 1, value_sets.size end end it "should properly build insert value set based on max packet allowed" do values = [ "('1','2','3')", "('4','5','6')", "('7','8','9')"] base_sql_size_in_bytes = 15 max_bytes = 30 value_sets = parser.parse values, reserved_bytes: base_sql_size_in_bytes, max_bytes: max_bytes assert_equal 3, value_sets.size, 'Three value sets were expected!' # Each element in the value_sets array must be an array value_sets.each_with_index do |e, i| assert_kind_of Array, e, "Element #{i} was expected to be an Array!" end # Each element in the values array should have a 1:1 correlation to the elements # in the returned value_sets arrays assert_equal values[0], value_sets[0].first assert_equal values[1], value_sets[1].first assert_equal values[2], value_sets[2].first end context "data contains multi-byte chars" do it "should properly build insert value set based on max packet allowed" do # each accented e should be 2 bytes, so each entry is 6 bytes instead of 5 values = [ "('é')", "('é')"] base_sql_size_in_bytes = 15 max_bytes = 26 value_sets = parser.parse values, reserved_bytes: base_sql_size_in_bytes, max_bytes: max_bytes assert_equal 2, value_sets.size, 'Two value sets were expected!' end end end end activerecord-import-0.15.0/test/database.yml.sample0000644000004100000410000000135612737365070022354 0ustar www-datawww-datacommon: &common username: root password: encoding: utf8 host: localhost database: activerecord_import_test mysql2: &mysql2 <<: *common adapter: mysql2 mysql2spatial: <<: *mysql2 mysql2_makara: <<: *mysql2 postgresql: &postgresql <<: *common username: postgres adapter: postgresql min_messages: warning postgresql_makara: <<: *postgresql postgis: <<: *postgresql oracle: <<: *common adapter: oracle min_messages: debug seamless_database_pool: <<: *common adapter: seamless_database_pool prepared_statements: false pool_adapter: mysql2 master: host: localhost sqlite: adapter: sqlite dbfile: test.db sqlite3: &sqlite3 adapter: sqlite3 database: test.db spatialite: <<: *sqlite3 activerecord-import-0.15.0/test/test_helper.rb0000644000004100000410000000330312737365070021442 0ustar www-datawww-datarequire 'pathname' test_dir = Pathname.new File.dirname(__FILE__) $LOAD_PATH.unshift(File.join(File.dirname(__FILE__), '..', 'lib')) $LOAD_PATH.unshift(File.dirname(__FILE__)) require "fileutils" ENV["RAILS_ENV"] = "test" require "bundler" Bundler.setup require 'pry' unless RbConfig::CONFIG["RUBY_INSTALL_NAME"] =~ /jruby/ require "active_record" require "active_record/fixtures" require "active_support/test_case" if ActiveSupport::VERSION::STRING < "4.0" require 'test/unit' else require 'active_support/testing/autorun' end require 'timecop' require 'chronic' require "ruby-debug" if RUBY_VERSION.to_f < 1.9 adapter = ENV["ARE_DB"] || "sqlite3" FileUtils.mkdir_p 'log' ActiveRecord::Base.logger = Logger.new("log/test.log") ActiveRecord::Base.logger.level = Logger::DEBUG ActiveRecord::Base.configurations["test"] = YAML.load_file(test_dir.join("database.yml"))[adapter]
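# the suite's timestamp assertions compare against UTC values, so pin ActiveRecord's default timezone before the schema and models load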
ActiveRecord::Base.default_timezone = :utc require "activerecord-import" ActiveRecord::Base.establish_connection :test ActiveSupport::Notifications.subscribe(/active_record.sql/) do |_, _, _, _, hsh| ActiveRecord::Base.logger.info hsh[:sql] end require "factory_girl" Dir[File.dirname(__FILE__) + "/support/**/*.rb"].each { |file| require file } # Load base/generic schema require test_dir.join("schema/version") require test_dir.join("schema/generic_schema") adapter_schema = test_dir.join("schema/#{adapter}_schema.rb") require adapter_schema if File.exist?(adapter_schema) Dir[File.dirname(__FILE__) + "/models/*.rb"].each { |file| require file } # Prevent this deprecation warning from breaking the tests. Rake::FileList.send(:remove_method, :import) ActiveSupport::TestCase.test_order = :random if ENV['AR_VERSION'].to_f >= 4.2 activerecord-import-0.15.0/test/postgis/0000755000004100000410000000000012737365070020270 5ustar www-datawww-dataactiverecord-import-0.15.0/test/postgis/import_test.rb0000644000004100000410000000032112737365070023162 0ustar www-datawww-datarequire File.expand_path(File.dirname(__FILE__) + '/../test_helper') require File.expand_path(File.dirname(__FILE__) + '/../support/postgresql/import_examples') should_support_postgresql_import_functionality activerecord-import-0.15.0/test/mysql2_makara/0000755000004100000410000000000012737365070021343 5ustar www-datawww-dataactiverecord-import-0.15.0/test/mysql2_makara/import_test.rb0000644000004100000410000000042412737365070024241 0ustar www-datawww-datarequire File.expand_path(File.dirname(__FILE__) + '/../test_helper') require File.expand_path(File.dirname(__FILE__) + '/../support/assertions') require File.expand_path(File.dirname(__FILE__) + '/../support/mysql/import_examples') should_support_mysql_import_functionality activerecord-import-0.15.0/test/sqlite3/0000755000004100000410000000000012737365070020164 5ustar www-datawww-dataactiverecord-import-0.15.0/test/sqlite3/import_test.rb0000644000004100000410000000505612737365070023070 0ustar www-datawww-datarequire File.expand_path(File.dirname(__FILE__) + '/../test_helper') should_support_recursive_import describe "#supports_import?" do context "and SQLite is 3.7.11 or higher" do it "supports import" do version = ActiveRecord::ConnectionAdapters::SQLite3Adapter::Version.new("3.7.11") assert ActiveRecord::Base.supports_import?(version) version = ActiveRecord::ConnectionAdapters::SQLite3Adapter::Version.new("3.7.12") assert ActiveRecord::Base.supports_import?(version) end end context "and SQLite less than 3.7.11" do it "doesn't support import" do version = ActiveRecord::ConnectionAdapters::SQLite3Adapter::Version.new("3.7.10") assert !ActiveRecord::Base.supports_import?(version) end end end describe "#import" do it "imports with a single insert on SQLite 3.7.11 or higher" do assert_difference "Topic.count", +507 do result = Topic.import Build(7, :topics) assert_equal 1, result.num_inserts, "Failed to issue a single INSERT statement. Make sure you have a supported version of SQLite3 (3.7.11 or higher) installed" assert_equal 7, Topic.count, "Failed to insert all records. Make sure you have a supported version of SQLite3 (3.7.11 or higher) installed" result = Topic.import Build(500, :topics) assert_equal 1, result.num_inserts, "Failed to issue a single INSERT statement. Make sure you have a supported version of SQLite3 (3.7.11 or higher) installed" assert_equal 507, Topic.count, "Failed to insert all records.
Make sure you have a supported version of SQLite3 (3.7.11 or higher) installed" end end it "imports with two inserts on SQLite 3.7.11 or higher" do assert_difference "Topic.count", +501 do result = Topic.import Build(501, :topics) assert_equal 2, result.num_inserts, "Failed to issue two INSERT statements. Make sure you have a supported version of SQLite3 (3.7.11 or higher) installed" assert_equal 501, Topic.count, "Failed to insert all records. Make sure you have a supported version of SQLite3 (3.7.11 or higher) installed" end end it "imports with five inserts on SQLite 3.7.11 or higher" do assert_difference "Topic.count", +2500 do result = Topic.import Build(2500, :topics) assert_equal 5, result.num_inserts, "Failed to issue five INSERT statements. Make sure you have a supported version of SQLite3 (3.7.11 or higher) installed" assert_equal 2500, Topic.count, "Failed to insert all records. Make sure you have a supported version of SQLite3 (3.7.11 or higher) installed" end end end activerecord-import-0.15.0/test/mysql2/0000755000004100000410000000000012737365070020027 5ustar www-datawww-dataactiverecord-import-0.15.0/test/mysql2/import_test.rb0000644000004100000410000000042312737365070022724 0ustar www-datawww-datarequire File.expand_path(File.dirname(__FILE__) + '/../test_helper') require File.expand_path(File.dirname(__FILE__) + '/../support/assertions') require File.expand_path(File.dirname(__FILE__) + '/../support/mysql/import_examples') should_support_mysql_import_functionality activerecord-import-0.15.0/test/value_sets_records_parser_test.rb0000644000004100000410000000213312737365070025432 0ustar www-datawww-datarequire File.expand_path(File.dirname(__FILE__) + '/test_helper') require 'activerecord-import/value_sets_parser' describe "ActiveRecord::Import::ValueSetsRecordsParser" do context "#parse - computing insert value sets" do let(:parser) { ActiveRecord::Import::ValueSetsRecordsParser } let(:base_sql) { "INSERT INTO atable (a,b,c)" } let(:values) { ["(1,2,3)", "(2,3,4)", "(3,4,5)"] } context "when the max number of records is 1" do it "should return 3 value sets when given 3 value sets" do value_sets = parser.parse values, max_records: 1 assert_equal 3, value_sets.size end end context "when the max number of records is 2" do it "should return 2 value sets when given 3 value sets" do value_sets = parser.parse values, max_records: 2 assert_equal 2, value_sets.size end end context "when the max number of records is 3" do it "should return 1 value set when given 3 value sets" do value_sets = parser.parse values, max_records: 3 assert_equal 1, value_sets.size end end end end activerecord-import-0.15.0/test/jdbcpostgresql/0000755000004100000410000000000012737365070021626 5ustar www-datawww-dataactiverecord-import-0.15.0/test/jdbcpostgresql/import_test.rb0000644000004100000410000000032212737365070024521 0ustar www-datawww-datarequire File.expand_path(File.dirname(__FILE__) + '/../test_helper') require File.expand_path(File.dirname(__FILE__) + '/../support/postgresql/import_examples') should_support_postgresql_import_functionality activerecord-import-0.15.0/test/models/0000755000004100000410000000000012737365070020063 5ustar www-datawww-dataactiverecord-import-0.15.0/test/models/chapter.rb0000644000004100000410000000016412737365070022037 0ustar www-datawww-dataclass Chapter < ActiveRecord::Base belongs_to :book, inverse_of: :chapters validates :title, presence: true end activerecord-import-0.15.0/test/models/widget.rb0000644000004100000410000000024312737365070021672 0ustar
www-datawww-dataclass Widget < ActiveRecord::Base self.primary_key = :w_id default_scope -> { where(active: true) } serialize :data, Hash serialize :json_data, JSON end activerecord-import-0.15.0/test/models/rule.rb0000644000004100000410000000007312737365070021357 0ustar www-datawww-dataclass Rule < ActiveRecord::Base belongs_to :question end activerecord-import-0.15.0/test/models/topic.rb0000644000004100000410000000055512737365070021533 0ustar www-datawww-dataclass Topic < ActiveRecord::Base validates_presence_of :author_name validates :title, numericality: { only_integer: true }, on: :context_test has_many :books, inverse_of: :topic belongs_to :parent, class_name: "Topic" composed_of :description, mapping: [%w(title title), %w(author_name author_name)], allow_nil: true, class_name: "TopicDescription" end activerecord-import-0.15.0/test/models/book.rb0000644000004100000410000000041312737365070021340 0ustar www-datawww-dataclass Book < ActiveRecord::Base belongs_to :topic, inverse_of: :books has_many :chapters, inverse_of: :book has_many :discounts, as: :discountable has_many :end_notes, inverse_of: :book enum status: [:draft, :published] if ENV['AR_VERSION'].to_f >= 4.1 end activerecord-import-0.15.0/test/models/group.rb0000644000004100000410000000010112737365070021534 0ustar www-datawww-dataclass Group < ActiveRecord::Base self.table_name = 'group' end activerecord-import-0.15.0/test/models/discount.rb0000644000004100000410000000012612737365070022237 0ustar www-datawww-dataclass Discount < ActiveRecord::Base belongs_to :discountable, polymorphic: true end activerecord-import-0.15.0/test/models/end_note.rb0000644000004100000410000000016412737365070022204 0ustar www-datawww-dataclass EndNote < ActiveRecord::Base belongs_to :book, inverse_of: :end_notes validates :note, presence: true end activerecord-import-0.15.0/test/models/question.rb0000644000004100000410000000007012737365070022254 0ustar www-datawww-dataclass Question < ActiveRecord::Base has_one :rule end activerecord-import-0.15.0/test/models/promotion.rb0000644000004100000410000000011412737365070022432 0ustar www-datawww-dataclass Promotion < ActiveRecord::Base self.primary_key = :promotion_id end activerecord-import-0.15.0/test/postgresql/0000755000004100000410000000000012737365070021003 5ustar www-datawww-dataactiverecord-import-0.15.0/test/postgresql/import_test.rb0000644000004100000410000000051212737365070023677 0ustar www-datawww-datarequire File.expand_path(File.dirname(__FILE__) + '/../test_helper') require File.expand_path(File.dirname(__FILE__) + '/../support/postgresql/import_examples') should_support_postgresql_import_functionality if ActiveRecord::Base.connection.supports_on_duplicate_key_update? 
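# ON CONFLICT upserts require PostgreSQL 9.5 or later, so the upsert examples only run when the adapter reports support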
should_support_postgresql_upsert_functionality end activerecord-import-0.15.0/test/support/0000755000004100000410000000000012737365070020314 5ustar www-datawww-dataactiverecord-import-0.15.0/test/support/generate.rb0000644000004100000410000000157212737365070022440 0ustar www-datawww-dataclass ActiveSupport::TestCase def Build(*args) # rubocop:disable Style/MethodName n = args.shift if args.first.is_a?(Numeric) factory = args.shift factory_girl_args = args.shift || {} if n [].tap do |collection| n.times.each { collection << FactoryGirl.build(factory.to_s.singularize.to_sym, factory_girl_args) } end else FactoryGirl.build(factory.to_s.singularize.to_sym, factory_girl_args) end end def Generate(*args) # rubocop:disable Style/MethodName n = args.shift if args.first.is_a?(Numeric) factory = args.shift factory_girl_args = args.shift || {} if n [].tap do |collection| n.times.each { collection << FactoryGirl.create(factory.to_s.singularize.to_sym, factory_girl_args) } end else FactoryGirl.create(factory.to_s.singularize.to_sym, factory_girl_args) end end end activerecord-import-0.15.0/test/support/mysql/0000755000004100000410000000000012737365070021461 5ustar www-datawww-dataactiverecord-import-0.15.0/test/support/mysql/import_examples.rb0000644000004100000410000000675312737365070025225 0ustar www-datawww-data# encoding: UTF-8 def should_support_mysql_import_functionality # Forcefully enable strict mode for this session. ActiveRecord::Base.connection.execute "set sql_mode='STRICT_ALL_TABLES'" should_support_basic_on_duplicate_key_update describe "#import" do context "with :on_duplicate_key_update and validation checks turned off" do extend ActiveSupport::TestCase::ImportAssertions asssertion_group(:should_support_on_duplicate_key_update) do should_not_update_fields_not_mentioned should_update_foreign_keys should_not_update_created_at_on_timestamp_columns should_update_updated_at_on_timestamp_columns end macro(:perform_import) { raise "supply your own #perform_import in a context below" } macro(:updated_topic) { Topic.find(@topic.id) } let(:columns) { %w( id title author_name author_email_address parent_id ) } let(:values) { [[99, "Book", "John Doe", "john@doe.com", 17]] } let(:updated_values) { [[99, "Book - 2nd Edition", "Author Should Not Change", "johndoe@example.com", 57]] } macro(:perform_import) do |*opts| Topic.import columns, updated_values, opts.extract_options!.merge(on_duplicate_key_update: update_columns, validate: false) end setup do Topic.import columns, values, validate: false @topic = Topic.find 99 end context "using string hash map" do let(:update_columns) { { "title" => "title", "author_email_address" => "author_email_address", "parent_id" => "parent_id" } } should_support_on_duplicate_key_update should_update_fields_mentioned end context "using string hash map, but specifying column mismatches" do let(:update_columns) { { "title" => "author_email_address", "author_email_address" => "title", "parent_id" => "parent_id" } } should_support_on_duplicate_key_update should_update_fields_mentioned_with_hash_mappings end context "using symbol hash map" do let(:update_columns) { { title: :title, author_email_address: :author_email_address, parent_id: :parent_id } } should_support_on_duplicate_key_update should_update_fields_mentioned end context "using symbol hash map, but specifying column mismatches" do let(:update_columns) { { title: :author_email_address, author_email_address: :title, parent_id: :parent_id } } should_support_on_duplicate_key_update
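# in the hash form, each key names the column to update and its value names the incoming column whose data it receives, so title is updated with the imported author_email_address here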
should_update_fields_mentioned_with_hash_mappings end end context "with :synchronize option" do let(:topics) { [] } let(:values) { [[topics.first.id, "Jerry Carter", "title1"], [topics.last.id, "Chad Fowler", "title2"]] } let(:columns) { %w(id author_name title) } setup do topics << Topic.create!(title: "LDAP", author_name: "Big Bird") topics << Topic.create!(title: "Rails Recipes", author_name: "Elmo") end it "synchronizes passed in ActiveRecord model instances with the data just imported" do columns2update = ['author_name'] expected_count = Topic.count Topic.import( columns, values, validate: false, on_duplicate_key_update: columns2update, synchronize: topics ) assert_equal expected_count, Topic.count, "no new records should have been created!" assert_equal "Jerry Carter", topics.first.author_name, "wrong author!" assert_equal "Chad Fowler", topics.last.author_name, "wrong author!" end end end end activerecord-import-0.15.0/test/support/factories.rb0000644000004100000410000000245512737365070022626 0ustar www-datawww-dataFactoryGirl.define do sequence(:book_title) { |n| "Book #{n}" } sequence(:chapter_title) { |n| "Chapter #{n}" } sequence(:end_note) { |n| "Endnote #{n}" } factory :group do sequence(:order) { |n| "Order #{n}" } end factory :invalid_topic, class: "Topic" do sequence(:title) { |n| "Title #{n}" } author_name nil end factory :topic do sequence(:title) { |n| "Title #{n}" } sequence(:author_name) { |n| "Author #{n}" } end factory :widget do sequence(:w_id) { |n| n } end factory :question do sequence(:body) { |n| "Text #{n}" } trait :with_rule do after(:build) do |question| question.build_rule(FactoryGirl.attributes_for(:rule)) end end end factory :rule do sequence(:condition_text) { |n| "q_#{n}_#{n}" } end factory :topic_with_book, parent: :topic do after(:build) do |topic| 2.times do book = topic.books.build(title: FactoryGirl.generate(:book_title), author_name: 'Stephen King') 3.times do book.chapters.build(title: FactoryGirl.generate(:chapter_title)) end 4.times do book.end_notes.build(note: FactoryGirl.generate(:end_note)) end end end end factory :book do title 'Tortilla Flat' author_name 'John Steinbeck' end end activerecord-import-0.15.0/test/support/shared_examples/0000755000004100000410000000000012737365070023460 5ustar www-datawww-dataactiverecord-import-0.15.0/test/support/shared_examples/recursive_import.rb0000644000004100000410000001026212737365070027407 0ustar www-datawww-datadef should_support_recursive_import describe "importing objects with associations" do let(:new_topics) { Build(num_topics, :topic_with_book) } let(:new_topics_with_invalid_chapter) do chapter = new_topics.first.books.first.chapters.first chapter.title = nil new_topics end let(:num_topics) { 3 } let(:num_books) { 6 } let(:num_chapters) { 18 } let(:num_endnotes) { 24 } let(:new_question_with_rule) { FactoryGirl.build :question, :with_rule } it 'imports top level' do assert_difference "Topic.count", +num_topics do Topic.import new_topics, recursive: true new_topics.each do |topic| assert_not_nil topic.id end end end it 'imports first level associations' do assert_difference "Book.count", +num_books do Topic.import new_topics, recursive: true new_topics.each do |topic| topic.books.each do |book| assert_equal topic.id, book.topic_id end end end end it 'imports polymorphic associations' do discounts = Array.new(1) { |i| Discount.new(amount: i) } books = Array.new(1) { |i| Book.new(author_name: "Author ##{i}", title: "Book ##{i}") } books.each do |book| book.discounts << discounts end Book.import
books, recursive: true books.each do |book| book.discounts.each do |discount| assert_not_nil discount.discountable_id assert_equal 'Book', discount.discountable_type end end end [{ recursive: false }, {}].each do |import_options| it "skips recursion for #{import_options}" do assert_difference "Book.count", 0 do Topic.import new_topics, import_options end end end it 'imports deeper nested associations' do assert_difference "Chapter.count", +num_chapters do assert_difference "EndNote.count", +num_endnotes do Topic.import new_topics, recursive: true new_topics.each do |topic| topic.books.each do |book| book.chapters.each do |chapter| assert_equal book.id, chapter.book_id end book.end_notes.each do |endnote| assert_equal book.id, endnote.book_id end end end end end end it "skips validation of the associations if requested" do assert_difference "Chapter.count", +num_chapters do Topic.import new_topics_with_invalid_chapter, validate: false, recursive: true end end it 'imports has_one associations' do assert_difference 'Rule.count' do Question.import [new_question_with_rule], recursive: true end end # These models don't validate associated. So we expect that books and topics get inserted, but not chapters # Putting a transaction around everything wouldn't work, so if you want your chapters to prevent topics from # being created, you would need to have validates_associated in your models and insert with validation describe "all_or_none" do [Book, Topic, EndNote].each do |type| it "creates #{type}" do assert_difference "#{type}.count", send("num_#{type.to_s.downcase}s") do Topic.import new_topics_with_invalid_chapter, all_or_none: true, recursive: true end end end it "doesn't create chapters" do assert_difference "Chapter.count", 0 do Topic.import new_topics_with_invalid_chapter, all_or_none: true, recursive: true end end end # If adapter supports on_duplicate_key_update, it is only applied to top level models so that SQL with invalid # columns, keys, etc isn't generated for child associations when doing recursive import describe "on_duplicate_key_update" do let(:new_topics) { Build(1, :topic_with_book) } it "imports objects with associations" do assert_difference "Topic.count", +1 do Topic.import new_topics, recursive: true, on_duplicate_key_update: [:updated_at], validate: false new_topics.each do |topic| assert_not_nil topic.id end end end end end end activerecord-import-0.15.0/test/support/shared_examples/on_duplicate_key_update.rb0000644000004100000410000000747612737365070030675 0ustar www-datawww-datadef should_support_basic_on_duplicate_key_update describe "#import" do extend ActiveSupport::TestCase::ImportAssertions macro(:perform_import) { raise "supply your own #perform_import in a context below" } macro(:updated_topic) { Topic.find(@topic.id) } context "with :on_duplicate_key_update" do describe "argument safety" do it "should not modify the passed in :on_duplicate_key_update columns array" do assert_nothing_raised do columns = %w(title author_name).freeze Topic.import columns, [%w(foo, bar)], on_duplicate_key_update: columns end end end context "with validation checks turned off" do asssertion_group(:should_support_on_duplicate_key_update) do should_not_update_fields_not_mentioned should_update_foreign_keys should_not_update_created_at_on_timestamp_columns should_update_updated_at_on_timestamp_columns end let(:columns) { %w( id title author_name author_email_address parent_id ) } let(:values) { [[99, "Book", "John Doe", "john@doe.com", 17]] } let(:updated_values) { [[99, "Book - 2nd
Edition", "Author Should Not Change", "johndoe@example.com", 57]] } macro(:perform_import) do |*opts| Topic.import columns, updated_values, opts.extract_options!.merge(on_duplicate_key_update: update_columns, validate: false) end setup do Topic.import columns, values, validate: false @topic = Topic.find 99 end context "using an empty array" do let(:update_columns) { [] } should_not_update_fields_not_mentioned should_update_updated_at_on_timestamp_columns end context "using string column names" do let(:update_columns) { %w(title author_email_address parent_id) } should_support_on_duplicate_key_update should_update_fields_mentioned end context "using symbol column names" do let(:update_columns) { [:title, :author_email_address, :parent_id] } should_support_on_duplicate_key_update should_update_fields_mentioned end end context "with a table that has a non-standard primary key" do let(:columns) { [:promotion_id, :code] } let(:values) { [[1, 'DISCOUNT1']] } let(:updated_values) { [[1, 'DISCOUNT2']] } let(:update_columns) { [:code] } macro(:perform_import) do |*opts| Promotion.import columns, updated_values, opts.extract_options!.merge(on_duplicate_key_update: update_columns, validate: false) end macro(:updated_promotion) { Promotion.find(@promotion.promotion_id) } setup do Promotion.import columns, values, validate: false @promotion = Promotion.find 1 end it "should update specified columns" do perform_import assert_equal 'DISCOUNT2', updated_promotion.code end end end context "with :on_duplicate_key_update turned off" do let(:columns) { %w( id title author_name author_email_address parent_id ) } let(:values) { [[100, "Book", "John Doe", "john@doe.com", 17]] } let(:updated_values) { [[100, "Book - 2nd Edition", "This should raise an exception", "john@nogo.com", 57]] } macro(:perform_import) do |*opts| # `on_duplicate_key_update: false` is the tested feature Topic.import columns, updated_values, opts.extract_options!.merge(on_duplicate_key_update: false, validate: false) end setup do Topic.import columns, values, validate: false @topic = Topic.find 100 end it "should raise ActiveRecord::RecordNotUnique" do assert_raise ActiveRecord::RecordNotUnique do perform_import end end end end end activerecord-import-0.15.0/test/support/assertions.rb0000644000004100000410000000542012737365070023034 0ustar www-datawww-dataclass ActiveSupport::TestCase module ImportAssertions def self.extended(klass) klass.instance_eval do assertion(:should_not_update_created_at_on_timestamp_columns) do Timecop.freeze Chronic.parse("5 minutes from now") do perform_import assert_in_delta @topic.created_at.to_i, updated_topic.created_at.to_i, 1 assert_in_delta @topic.created_on.to_i, updated_topic.created_on.to_i, 1 end end assertion(:should_update_updated_at_on_timestamp_columns) do time = Chronic.parse("5 minutes from now") Timecop.freeze time do perform_import assert_in_delta time.to_i, updated_topic.updated_at.to_i, 1 assert_in_delta time.to_i, updated_topic.updated_on.to_i, 1 end end assertion(:should_not_update_updated_at_on_timestamp_columns) do time = Chronic.parse("5 minutes from now") Timecop.freeze time do perform_import assert_in_delta @topic.updated_at.to_i, updated_topic.updated_at.to_i, 1 assert_in_delta @topic.updated_on.to_i, updated_topic.updated_on.to_i, 1 end end assertion(:should_not_update_timestamps) do Timecop.freeze Chronic.parse("5 minutes from now") do perform_import timestamps: false assert_in_delta @topic.created_at.to_i, updated_topic.created_at.to_i, 1 assert_in_delta @topic.created_on.to_i, 
updated_topic.created_on.to_i, 1 assert_in_delta @topic.updated_at.to_i, updated_topic.updated_at.to_i, 1 assert_in_delta @topic.updated_on.to_i, updated_topic.updated_on.to_i, 1 end end assertion(:should_not_update_fields_not_mentioned) do assert_equal "John Doe", updated_topic.author_name end assertion(:should_update_fields_mentioned) do perform_import assert_equal "Book - 2nd Edition", updated_topic.title assert_equal "johndoe@example.com", updated_topic.author_email_address end assertion(:should_raise_update_fields_mentioned) do assert_raise ActiveRecord::RecordNotUnique do perform_import end assert_equal "Book", updated_topic.title assert_equal "john@doe.com", updated_topic.author_email_address end assertion(:should_update_fields_mentioned_with_hash_mappings) do perform_import assert_equal "johndoe@example.com", updated_topic.title assert_equal "Book - 2nd Edition", updated_topic.author_email_address end assertion(:should_update_foreign_keys) do perform_import assert_equal 57, updated_topic.parent_id end end end end end activerecord-import-0.15.0/test/support/postgresql/0000755000004100000410000000000012737365070022517 5ustar www-datawww-dataactiverecord-import-0.15.0/test/support/postgresql/import_examples.rb0000644000004100000410000002255712737365070026267 0ustar www-datawww-data# encoding: UTF-8 def should_support_postgresql_import_functionality should_support_recursive_import describe "#supports_imports?" do it "should support import" do assert ActiveRecord::Base.supports_import? end end describe "#import" do it "should import with a single insert" do # see ActiveRecord::ConnectionAdapters::AbstractAdapter test for more specifics assert_difference "Topic.count", +10 do result = Topic.import Build(3, :topics) assert_equal 1, result.num_inserts result = Topic.import Build(7, :topics) assert_equal 1, result.num_inserts end end describe "with query cache enabled" do setup do unless ActiveRecord::Base.connection.query_cache_enabled ActiveRecord::Base.connection.enable_query_cache! @disable_cache_on_teardown = true end end it "clears cache on insert" do before_import = Topic.all.to_a Topic.import(Build(2, :topics), validate: false) after_import = Topic.all.to_a assert_equal 2, after_import.size - before_import.size end teardown do if @disable_cache_on_teardown ActiveRecord::Base.connection.disable_query_cache! 
end end end describe "no_returning" do let(:books) { [Book.new(author_name: "foo", title: "bar")] } it "creates records" do assert_difference "Book.count", +1 do Book.import books, no_returning: true end end it "returns no ids" do assert_equal [], Book.import(books, no_returning: true).ids end end end end def should_support_postgresql_upsert_functionality should_support_basic_on_duplicate_key_update describe "#import" do extend ActiveSupport::TestCase::ImportAssertions macro(:perform_import) { raise "supply your own #perform_import in a context below" } macro(:updated_topic) { Topic.find(@topic.id) } context "with :on_duplicate_key_ignore and validation checks turned off" do let(:columns) { %w( id title author_name author_email_address parent_id ) } let(:values) { [[99, "Book", "John Doe", "john@doe.com", 17]] } let(:updated_values) { [[99, "Book - 2nd Edition", "Author Should Not Change", "johndoe@example.com", 57]] } setup do Topic.import columns, values, validate: false end it "should not update any records" do result = Topic.import columns, updated_values, on_duplicate_key_ignore: true, validate: false assert_equal [], result.ids end end context "with :on_duplicate_key_ignore and :recursive enabled" do let(:new_topic) { Build(1, :topic_with_book) } let(:mixed_topics) { Build(1, :topic_with_book) + new_topic + Build(1, :topic_with_book) } setup do Topic.import new_topic, recursive: true end # Recursive import depends on the primary keys of the parent model being returned # on insert. With on_duplicate_key_ignore enabled, not all ids will be returned # and it is possible that a model will be assigned the wrong id and then its children # would be associated with the wrong parent. it ":on_duplicate_key_ignore is ignored" do assert_raise ActiveRecord::RecordNotUnique do Topic.import mixed_topics, recursive: true, on_duplicate_key_ignore: true end end end context "with :on_duplicate_key_update and validation checks turned off" do asssertion_group(:should_support_on_duplicate_key_update) do should_not_update_fields_not_mentioned should_update_foreign_keys should_not_update_created_at_on_timestamp_columns should_update_updated_at_on_timestamp_columns end context "using a hash" do context "with :columns a hash" do let(:columns) { %w( id title author_name author_email_address parent_id ) } let(:values) { [[99, "Book", "John Doe", "john@doe.com", 17]] } let(:updated_values) { [[99, "Book - 2nd Edition", "Author Should Not Change", "johndoe@example.com", 57]] } macro(:perform_import) do |*opts| Topic.import columns, updated_values, opts.extract_options!.merge(on_duplicate_key_update: { conflict_target: :id, columns: update_columns }, validate: false) end setup do Topic.import columns, values, validate: false @topic = Topic.find 99 end context "using string hash map" do let(:update_columns) { { "title" => "title", "author_email_address" => "author_email_address", "parent_id" => "parent_id" } } should_support_on_duplicate_key_update should_update_fields_mentioned end context "using string hash map, but specifying column mismatches" do let(:update_columns) { { "title" => "author_email_address", "author_email_address" => "title", "parent_id" => "parent_id" } } should_support_on_duplicate_key_update should_update_fields_mentioned_with_hash_mappings end context "using symbol hash map" do let(:update_columns) { { title: :title, author_email_address: :author_email_address, parent_id: :parent_id } } should_support_on_duplicate_key_update should_update_fields_mentioned end context "using symbol hash map, 
but specifying column mismatches" do let(:update_columns) { { title: :author_email_address, author_email_address: :title, parent_id: :parent_id } } should_support_on_duplicate_key_update should_update_fields_mentioned_with_hash_mappings end end context "with :constraint_name" do let(:columns) { %w( id title author_name author_email_address parent_id ) } let(:values) { [[100, "Book", "John Doe", "john@doe.com", 17]] } let(:updated_values) { [[100, "Book - 2nd Edition", "Author Should Not Change", "johndoe@example.com", 57]] } macro(:perform_import) do |*opts| Topic.import columns, updated_values, opts.extract_options!.merge(on_duplicate_key_update: { constraint_name: :topics_pkey, columns: update_columns }, validate: false) end setup do Topic.import columns, values, validate: false @topic = Topic.find 100 end let(:update_columns) { [:title, :author_email_address, :parent_id] } should_support_on_duplicate_key_update should_update_fields_mentioned end context "default to the primary key" do let(:columns) { %w( id title author_name author_email_address parent_id ) } let(:values) { [[100, "Book", "John Doe", "john@doe.com", 17]] } let(:updated_values) { [[100, "Book - 2nd Edition", "Author Should Not Change", "johndoe@example.com", 57]] } let(:update_columns) { [:title, :author_email_address, :parent_id] } setup do Topic.import columns, values, validate: false @topic = Topic.find 100 end context "with no :conflict_target or :constraint_name" do macro(:perform_import) do |*opts| Topic.import columns, updated_values, opts.extract_options!.merge(on_duplicate_key_update: { columns: update_columns }, validate: false) end should_support_on_duplicate_key_update should_update_fields_mentioned end context "with empty value for :conflict_target" do macro(:perform_import) do |*opts| Topic.import columns, updated_values, opts.extract_options!.merge(on_duplicate_key_update: { conflict_target: [], columns: update_columns }, validate: false) end should_support_on_duplicate_key_update should_update_fields_mentioned end context "with empty value for :constraint_name" do macro(:perform_import) do |*opts| Topic.import columns, updated_values, opts.extract_options!.merge(on_duplicate_key_update: { constraint_name: '', columns: update_columns }, validate: false) end should_support_on_duplicate_key_update should_update_fields_mentioned end end context "with no :conflict_target or :constraint_name" do context "with no primary key" do it "raises ArgumentError" do error = assert_raises ArgumentError do Widget.import Build(1, :widgets), on_duplicate_key_update: [:data], validate: false end assert_match(/Expected :conflict_target or :constraint_name to be specified/, error.message) end end end context "with no :columns" do let(:columns) { %w( id title author_name author_email_address ) } let(:values) { [[100, "Book", "John Doe", "john@doe.com"]] } let(:updated_values) { [[100, "Title Should Not Change", "Author Should Not Change", "john@nogo.com"]] } macro(:perform_import) do |*opts| Topic.import columns, updated_values, opts.extract_options!.merge(on_duplicate_key_update: { conflict_target: :id }, validate: false) end setup do Topic.import columns, values, validate: false @topic = Topic.find 100 end should_update_updated_at_on_timestamp_columns end end end end end activerecord-import-0.15.0/test/support/active_support/0000755000004100000410000000000012737365070023363 5ustar www-datawww-dataactiverecord-import-0.15.0/test/support/active_support/test_case_extensions.rb0000644000004100000410000000336612737365070030151 
0ustar www-datawww-dataclass ActiveSupport::TestCase include ActiveRecord::TestFixtures self.use_transactional_fixtures = true class << self def requires_active_record_version(version_string, &blk) return unless Gem::Dependency.new('', version_string).match?('', ActiveRecord::VERSION::STRING) instance_eval(&blk) end def assertion(name, &block) mc = class << self; self; end mc.class_eval do define_method(name) do it(name, &block) end end end def asssertion_group(name, &block) mc = class << self; self; end mc.class_eval do define_method(name, &block) end end def macro(name, &block) class_eval do define_method(name, &block) end end def describe(description, toplevel = nil, &blk) text = toplevel ? description : "#{name} #{description}" klass = Class.new(self) klass.class_eval <<-RUBY_EVAL def self.name "#{text}" end RUBY_EVAL # do not inherit test methods from the superclass klass.class_eval do instance_methods.grep(/^test.+/) do |method| undef_method method end end klass.instance_eval(&blk) end alias context describe def let(name, &blk) define_method(name) do instance_variable_name = "@__let_#{name}" return instance_variable_get(instance_variable_name) if instance_variable_defined?(instance_variable_name) instance_variable_set(instance_variable_name, instance_eval(&blk)) end end def it(description, &blk) define_method("test_#{name}_#{description}", &blk) end end end def describe(description, &blk) ActiveSupport::TestCase.describe(description, true, &blk) end activerecord-import-0.15.0/test/travis/0000755000004100000410000000000012737365070020110 5ustar www-datawww-dataactiverecord-import-0.15.0/test/travis/database.yml0000644000004100000410000000163212737365070022401 0ustar www-datawww-datacommon: &common username: root password: encoding: utf8 host: localhost database: activerecord_import_test jdbcpostgresql: &postgresql <<: *common username: postgres adapter: jdbcpostgresql min_messages: warning jdbcmysql: &mysql2 <<: *common adapter: jdbcmysql mysql2: &mysql2 <<: *common adapter: mysql2 mysql2spatial: <<: *mysql2 mysql2_makara: <<: *mysql2 oracle: <<: *common adapter: oracle min_messages: debug postgresql: &postgresql <<: *common username: postgres adapter: postgresql min_messages: warning postresql_makara: <<: *postgresql postgis: <<: *postgresql seamless_database_pool: <<: *common adapter: seamless_database_pool pool_adapter: mysql2 prepared_statements: false master: host: localhost sqlite: adapter: sqlite dbfile: test.db sqlite3: &sqlite3 adapter: sqlite3 database: ":memory:" spatialite: <<: *sqlite3 activerecord-import-0.15.0/.gitignore0000644000004100000410000000035612737365070017615 0ustar www-datawww-data## MAC OS .DS_Store ## TEXTMATE *.tmproj tmtags ## EMACS *~ \#* .\#* ## VIM *.swp ## PROJECT::GENERAL coverage rdoc pkg *.gem *.lock ## PROJECT::SPECIFIC log/*.log test.db test/database.yml .ruby-* .bundle/ .redcar/ .rvmrc docsite/ activerecord-import-0.15.0/LICENSE0000644000004100000410000000471012737365070016630 0ustar www-datawww-dataRuby is copyrighted free software by Yukihiro Matsumoto . You can redistribute it and/or modify it under either the terms of the 2-clause BSDL (see the file BSDL), or the conditions below: 1. You may make and give away verbatim copies of the source form of the software without restriction, provided that you duplicate all of the original copyright notices and associated disclaimers. 2. 
You may modify your copy of the software in any way, provided that you do at least ONE of the following: a) place your modifications in the Public Domain or otherwise make them Freely Available, such as by posting said modifications to Usenet or an equivalent medium, or by allowing the author to include your modifications in the software. b) use the modified software only within your corporation or organization. c) give non-standard binaries non-standard names, with instructions on where to get the original software distribution. d) make other distribution arrangements with the author. 3. You may distribute the software in object code or binary form, provided that you do at least ONE of the following: a) distribute the binaries and library files of the software, together with instructions (in the manual page or equivalent) on where to get the original distribution. b) accompany the distribution with the machine-readable source of the software. c) give non-standard binaries non-standard names, with instructions on where to get the original software distribution. d) make other distribution arrangements with the author. 4. You may modify and include the part of the software into any other software (possibly commercial). But some files in the distribution are not written by the author, so that they are not under these terms. For the list of those files and their copying conditions, see the file LEGAL. 5. The scripts and library files supplied as input to or produced as output from the software do not automatically fall under the copyright of the software, but belong to whomever generated them, and may be sold commercially, and may be aggregated with this software. 6. THIS SOFTWARE IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. activerecord-import-0.15.0/benchmarks/0000755000004100000410000000000012737365070017736 5ustar www-datawww-dataactiverecord-import-0.15.0/benchmarks/schema/0000755000004100000410000000000012737365070021176 5ustar www-datawww-dataactiverecord-import-0.15.0/benchmarks/schema/mysql_schema.rb0000644000004100000410000000100212737365070024201 0ustar www-datawww-dataActiveRecord::Schema.define do create_table :test_myisam, options: 'ENGINE=MyISAM', force: true do |t| t.column :my_name, :string, null: false t.column :description, :string end create_table :test_innodb, options: 'ENGINE=InnoDb', force: true do |t| t.column :my_name, :string, null: false t.column :description, :string end create_table :test_memory, options: 'ENGINE=Memory', force: true do |t| t.column :my_name, :string, null: false t.column :description, :string end end activerecord-import-0.15.0/benchmarks/README0000644000004100000410000000217312737365070020621 0ustar www-datawww-dataTo run the benchmarks, from within the benchmarks run: ruby benchmark.rb [options] The following options are supported: --adapter [String] The database adapter to use. IE: mysql, postgresql, oracle --do-not-delete By default all records in the benchmark tables will be deleted at the end of the benchmark. This flag indicates not to delete the benchmark data. --num [Integer] The number of objects to benchmark. (Required!) --table-type [String] The table type to test. This can be used multiple times. By default it is all table types. 
--to-csv [String]
  Print results in a CSV file format

--to-html [String]
  Print results in HTML format (String filename must be supplied)

See "ruby benchmark.rb -h" for the complete listing of options.

EXAMPLES
--------
To output to html format:
  ruby benchmark.rb --adapter=mysql --to-html=results.html

To output to csv format:
  ruby benchmark.rb --adapter=mysql --to-csv=results.csv

LIMITATIONS
-----------
Currently MySQL is the only supported adapter to benchmark.

AUTHOR
------
Zach Dennis
zach.dennis@gmail.com
http://www.continuousthinking.com
activerecord-import-0.15.0/benchmarks/lib/0000755000004100000410000000000012737365070020504 5ustar www-datawww-dataactiverecord-import-0.15.0/benchmarks/lib/cli_parser.rb0000644000004100000410000000624412737365070023162 0ustar www-datawww-data
require 'optparse'
require 'ostruct'

#
# == PARAMETERS
# * -a - database adapter. ie: mysql, postgresql, oracle, etc.
# * -n - number of objects to test with. ie: 1, 100, 1000, etc.
# * -t - the table types to test. ie: myisam, innodb, memory, temporary, etc.
#
module BenchmarkOptionParser
  BANNER = "Usage: ruby #{$0} [options]\nSee ruby #{$0} -h for more options.".freeze

  def self.print_banner
    puts BANNER
  end

  def self.print_banner!
    print_banner
    exit
  end

  def self.print_options( options )
    puts "Benchmarking the following options:"
    puts "  Database adapter: #{options.adapter}"
    puts "  Number of objects: #{options.number_of_objects}"
    puts "  Table types:"
    print_valid_table_types( options, prefix: " " )
  end

  def self.print_valid_table_types( options, hsh = { prefix: '' } )
    if !options.table_types.keys.empty?
      options.table_types.keys.sort.each { |type| puts hsh[:prefix].to_s + type.to_s }
    else
      puts 'No table types defined.'
    end
  end

  def self.parse( args )
    options = OpenStruct.new(
      adapter: 'mysql2',
      table_types: {},
      delete_on_finish: true,
      number_of_objects: [],
      outputs: [] )

    opt_parser = OptionParser.new do |opts|
      opts.banner = BANNER

      # parse the database adapter
      opts.on( "-a", "--adapter [String]",
        "The database adapter to use. IE: mysql, postgresql, oracle" ) do |arg|
        options.adapter = arg
      end

      # parse do_not_delete flag
      opts.on( "-d", "--do-not-delete",
        "By default all records in the benchmark tables will be deleted at the end of the benchmark. " \
        "This flag indicates not to delete the benchmark data." ) do |_|
        options.delete_on_finish = false
      end

      # parse the number of row objects to test
      opts.on( "-n", "--num [Integer]",
        "The number of objects to benchmark." ) do |arg|
        options.number_of_objects << arg.to_i
      end

      # parse the table types to test
      opts.on( "-t", "--table-type [String]",
        "The table type to test. This can be used multiple times." ) do |arg|
        if arg =~ /^all$/
          options.table_types['all'] = options.benchmark_all_types = true
        else
          options.table_types[arg] = true
        end
      end

      # print results in CSV format
      opts.on( "--to-csv [String]",
        "Print results in a CSV file format" ) do |filename|
        options.outputs << OpenStruct.new( format: 'csv', filename: filename)
      end

      # print results in HTML format
      opts.on( "--to-html [String]",
        "Print results in HTML format" ) do |filename|
        options.outputs << OpenStruct.new( format: 'html', filename: filename )
      end
    end # end OptionParser.new block

    begin
      opt_parser.parse!( args )
      if options.table_types.empty?
        options.table_types['all'] = options.benchmark_all_types = true
      end
    rescue Exception
      print_banner!
    end

    options.number_of_objects = [1000] if options.number_of_objects.empty?
    options.outputs = [OpenStruct.new( format: 'html', filename: 'benchmark.html')] if options.outputs.empty?
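    # Print the parsed configuration back to the operator before returning the options struct.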
print_options( options ) options end end activerecord-import-0.15.0/benchmarks/lib/output_to_csv.rb0000644000004100000410000000071012737365070023744 0ustar www-datawww-datarequire 'csv' module OutputToCSV def self.output_results( filename, results ) CSV.open( filename, 'w' ) do |csv| # Iterate over each result set, which contains many results results.each do |result_set| columns = [] times = [] result_set.each do |result| columns << result.description times << result.tms.real end csv << columns csv << times end end end end activerecord-import-0.15.0/benchmarks/lib/float.rb0000644000004100000410000000043212737365070022135 0ustar www-datawww-data# Taken from http://www.programmingishard.com/posts/show/128 # Posted by rbates class Float def round_to(x) (self * 10**x).round.to_f / 10**x end def ceil_to(x) (self * 10**x).ceil.to_f / 10**x end def floor_to(x) (self * 10**x).floor.to_f / 10**x end end activerecord-import-0.15.0/benchmarks/lib/output_to_html.rb0000644000004100000410000000320512737365070024117 0ustar www-datawww-datarequire 'erb' module OutputToHTML TEMPLATE_HEADER = <<"EOT".freeze
    <div>
      All times are rounded to the nearest thousandth for display purposes. Speedups next to each
      time are computed before any rounding occurs. Also, all speedup calculations are computed by
      comparing a given time against the very first column (which is always the default
      ActiveRecord::Base.create method).
    </div>
EOT

  TEMPLATE = <<"EOT".freeze
    <table>
      <tr>
        <% columns.each do |col| %>
          <td><%= col %></td>
        <% end %>
      </tr>
      <tr>
        <% times.each do |time| %>
          <td><%= time %></td>
        <% end %>
      </tr>
      <tr>
        <td>&nbsp;</td>
      </tr>
    </table>
EOT

  def self.output_results( filename, results )
    html = ''
    results.each do |result_set|
      columns = []
      times = []
      result_set.each do |result|
        columns << result.description
        if result.failed
          times << "failed"
        else
          time = result.tms.real.round_to( 3 )
          speedup = ( result_set.first.tms.real / result.tms.real ).round
          times << (result == result_set.first ? time.to_s : "#{time} (#{speedup}x speedup)")
        end
      end
      template = ERB.new( TEMPLATE, 0, "%<>")
      html << template.result( binding )
    end
    File.open( filename, 'w' ) { |file| file.write( TEMPLATE_HEADER + html ) }
  end
end
activerecord-import-0.15.0/benchmarks/lib/mysql2_benchmark.rb0000644000004100000410000000113512737365070024272 0ustar www-datawww-data
class Mysql2Benchmark < BenchmarkBase
  def benchmark_all( array_of_cols_and_vals )
    methods = self.methods.find_all { |m| m =~ /benchmark_/ }
    methods.delete_if { |m| m =~ /benchmark_(all|model)/ }
    methods.each { |method| send( method, array_of_cols_and_vals ) }
  end

  def benchmark_myisam( array_of_cols_and_vals )
    bm_model( TestMyISAM, array_of_cols_and_vals )
  end

  def benchmark_innodb( array_of_cols_and_vals )
    bm_model( TestInnoDb, array_of_cols_and_vals )
  end

  def benchmark_memory( array_of_cols_and_vals )
    bm_model( TestMemory, array_of_cols_and_vals )
  end
end
activerecord-import-0.15.0/benchmarks/lib/base.rb0000644000004100000410000001106012737365070021741 0ustar www-datawww-data
class BenchmarkBase
  attr_reader :results

  # The main benchmark method dispatcher. This dispatches the benchmarks
  # to actual benchmark_xxxx methods.
  #
  # == PARAMETERS
  #  * table_types - an array of table types to benchmark
  #  * num - the number of record insertions to test
  def benchmark( table_types, num )
    array_of_cols_and_vals = build_array_of_cols_and_vals( num )
    table_types.each do |table_type|
      send( "benchmark_#{table_type}", array_of_cols_and_vals )
    end
  end

  # Returns an OpenStruct which contains two attributes, +description+ and +tms+,
  # after performing an actual benchmark.
  #
  # == PARAMETERS
  #  * description - the description of the block that is getting benchmarked
  #  * blk - the block of code to benchmark
  #
  # == RETURNS
  # An OpenStruct object with the following attributes:
  #  * description - the description of the benchmark that was run
  #  * tms - a Benchmark::Tms containing the results of the benchmark
  def bm( description )
    tms = nil
    puts "Benchmarking #{description}"

    Benchmark.bm { |x| tms = x.report { yield } }
    delete_all
    failed = false

    OpenStruct.new description: description, tms: tms, failed: failed
  end
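  # Example usage of #bm (illustrative values; TestMyISAM is one of the benchmark models):
  #
  #   result = bm( "TestMyISAM.create (100 records)" ) do
  #     100.times { |i| TestMyISAM.create my_name: "My Name #{i}" }
  #   end
  #   result.tms.real #=> wall-clock seconds spent in the block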
  # Given a model class (ie: Topic), and an array of columns and value sets
  # this will perform all of the benchmarks necessary for this library.
  #
  # == PARAMETERS
  #  * model_clazz - the model class to benchmark (ie: Topic)
  #  * array_of_cols_and_vals - an array of column identifiers and value sets
  #
  # == RETURNS
  # returns true
  def bm_model( model_clazz, array_of_cols_and_vals )
    puts
    puts "------ Benchmarking #{model_clazz.name} -------"

    cols, vals = array_of_cols_and_vals
    num_inserts = vals.size

    # add a new result group for this particular benchmark
    group = []
    @results << group

    description = "#{model_clazz.name}.create (#{num_inserts} records)"
    group << bm( description ) do
      vals.each do |values|
        model_clazz.create create_hash_for_cols_and_vals( cols, values )
      end
    end

    description = "#{model_clazz.name}.import(columns, values) for #{num_inserts} records with validations"
    group << bm( description ) { model_clazz.import cols, vals, validate: true }

    description = "#{model_clazz.name}.import(columns, values) for #{num_inserts} records without validations"
    group << bm( description ) { model_clazz.import cols, vals, validate: false }

    models = []
    array_of_attrs = []

    vals.each do |arr|
      array_of_attrs << (attrs = {})
      arr.each_with_index { |value, i| attrs[cols[i]] = value }
    end
    array_of_attrs.each { |attrs| models << model_clazz.new(attrs) }

    description = "#{model_clazz.name}.import(models) for #{num_inserts} records with validations"
    group << bm( description ) { model_clazz.import models, validate: true }

    description = "#{model_clazz.name}.import(models) for #{num_inserts} records without validations"
    group << bm( description ) { model_clazz.import models, validate: false }

    true
  end

  # Returns a two-element array composed of an array of columns and an array of
  # value sets given the passed +num+.
  #
  # === What is a value set?
  # A value set is an array of arrays. Each child array holds the values
  # for a single row of data.
  #
  # For example, say we wanted to represent an insertion of two records:
  #   column_names = [ 'id', 'name', 'description' ]
  #   record1 = [ 1, 'John Doe', 'A plumber' ]
  #   record2 = [ 2, 'John Smith', 'A painter' ]
  #   value_set = [ record1, record2 ]
  #
  # == PARAMETER
  #  * num - the number of records to create
  def build_array_of_cols_and_vals( num )
    cols = [:my_name, :description]
    value_sets = []
    num.times { |i| value_sets << ["My Name #{i}", "My Description #{i}"] }
    [cols, value_sets]
  end

  # Returns a hash of column identifier to value mappings given the passed-in
  # value array.
  #
  # Example:
  #   cols = [ 'id', 'name', 'description' ]
  #   values = [ 1, 'John Doe', 'A plumber' ]
  #   hsh = create_hash_for_cols_and_vals( cols, values )
  #   # hsh => { 'id'=>1, 'name'=>'John Doe', 'description'=>'A plumber' }
  def create_hash_for_cols_and_vals( cols, vals )
    h = {}
    cols.zip( vals ) { |col, val| h[col] = val }
    h
  end
  # Deletes all records from all ActiveRecord subclasses
  def delete_all
    ActiveRecord::Base.send( :subclasses ).each do |subclass|
      if subclass.table_exists? && subclass.respond_to?(:delete_all)
        subclass.delete_all
      end
    end
  end

  def initialize # :nodoc:
    @results = []
  end
end
activerecord-import-0.15.0/benchmarks/benchmark.rb0000644000004100000410000000400212737365070022211 0ustar www-datawww-data
require 'pathname'
require "fileutils"
require "active_record"

benchmark_dir = File.dirname(__FILE__)

$LOAD_PATH.unshift('.')

# Get the gem into the load path
$LOAD_PATH.unshift(File.join(benchmark_dir, '..', 'lib'))

# Load the benchmark files
Dir[File.join( benchmark_dir, 'lib', '*.rb' )].sort.each { |f| require f }

# Parse the options passed in via the command line
options = BenchmarkOptionParser.parse( ARGV )

FileUtils.mkdir_p 'log'
ActiveRecord::Base.configurations["test"] = YAML.load_file(File.join(benchmark_dir, "../test/database.yml"))[options.adapter]
ActiveRecord::Base.logger = Logger.new("log/test.log")
ActiveRecord::Base.logger.level = Logger::DEBUG
ActiveRecord::Base.default_timezone = :utc

require "activerecord-import"
ActiveRecord::Base.establish_connection(:test)

ActiveSupport::Notifications.subscribe(/active_record.sql/) do |_, _, _, _, hsh|
  ActiveRecord::Base.logger.info hsh[:sql]
end

# Load base/generic schema
require File.join(benchmark_dir, "../test/schema/version")
require File.join(benchmark_dir, "../test/schema/generic_schema")
adapter_schema = File.join(benchmark_dir, "schema/#{options.adapter}_schema.rb")
require adapter_schema if File.exist?(adapter_schema)

Dir[File.dirname(__FILE__) + "/models/*.rb"].each { |file| require file }

require File.join( benchmark_dir, 'lib', "#{options.adapter}_benchmark" )

table_types = if options.benchmark_all_types
  ["all"]
else
  options.table_types.keys
end

letter = options.adapter[0].chr
clazz_str = letter.upcase + options.adapter[1..-1].downcase
clazz = Object.const_get( clazz_str + "Benchmark" )

benchmarks = []
options.number_of_objects.each do |num|
  benchmarks << (benchmark = clazz.new)
  benchmark.send( "benchmark", table_types, num )
end

options.outputs.each do |output|
  format = output.format.downcase
  output_module = Object.const_get( "OutputTo#{format.upcase}" )
  benchmarks.each do |benchmark|
    output_module.output_results( output.filename, benchmark.results )
  end
end

puts
puts "Done with benchmark!"
activerecord-import-0.15.0/benchmarks/models/0000755000004100000410000000000012737365070021221 5ustar www-datawww-dataactiverecord-import-0.15.0/benchmarks/models/test_memory.rb0000644000004100000410000000011412737365070024111 0ustar www-datawww-data
class TestMemory < ActiveRecord::Base
  self.table_name = 'test_memory'
end
activerecord-import-0.15.0/benchmarks/models/test_innodb.rb0000644000004100000410000000011412737365070024052 0ustar www-datawww-data
class TestInnoDb < ActiveRecord::Base
  self.table_name = 'test_innodb'
end
activerecord-import-0.15.0/benchmarks/models/test_myisam.rb0000644000004100000410000000011412737365070024100 0ustar www-datawww-data
class TestMyISAM < ActiveRecord::Base
  self.table_name = 'test_myisam'
end
activerecord-import-0.15.0/CHANGELOG.md0000644000004100000410000000757212737365070017435 0ustar www-datawww-data
## Changes in 0.15.0

### New Features

* An ArgumentError is now raised when no `conflict_target` or `constraint_name` is
  provided or can be determined when using the `on_duplicate_key_update` option for
  PostgreSQL. Thanks to @jkowens via \#290
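  A minimal sketch of the two ways to satisfy this requirement (the model, columns,
  and constraint name below are illustrative, borrowed from this gem's test suite):

  ```ruby
  # identify conflicting rows by the columns of a unique index ...
  Topic.import columns, values, validate: false,
               on_duplicate_key_update: { conflict_target: [:id], columns: [:title] }

  # ... or by naming the unique constraint directly
  Topic.import columns, values, validate: false,
               on_duplicate_key_update: { constraint_name: :topics_pkey, columns: [:title] }
  ```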
* Support for the Rails 5.0 final release for all except the JDBC driver, which is not
  yet updated to support Rails 5.0

### Fixes

* activerecord-import no longer modifies a value array inside of the given values array
  when called with `import(columns, values)`. Thanks to @jkowens via \#291

### Misc

* `raise_error` is used to raise errors for ActiveRecord 5.0, replacing
  `raise_record_invalid`. Thanks to @couragecourag via \#294

## Changes in 0.14.1

### Fixes

* JRuby/JDBCDriver with PostgreSQL will no longer raise a JDBCDriver error when using
  the :no_returning boolean option. Thanks to @jkowens via \#287

## Changes in 0.14.0

### New Features

* Support for ActiveRecord 3.1 has been dropped. Thanks to @sferik via \#254
* SQLite3 has learned the :recursive option. Thanks to @jkowens via \#281
* :on_duplicate_key_ignore will be ignored when imports are being done with :recursive.
  Thanks to @jkowens via \#268
* activerecord-import learned how to tell PostgreSQL to return no data back from the
  import via the :no_returning boolean option. Thanks to @makaroni4 via \#276

### Fixes

* Polymorphic associations will not import the :type column. Thanks to @seanlinsley via
  \#282 and \#283
* ~2X speed increase for importing models with validations. Thanks to @jkowens via \#266

### Misc

* Benchmark HTML report has been fixed. Thanks to @jkowens via \#264
* seamless_database_pool has been updated to work with AR 5.0. Thanks to @jkowens via \#280
* Code cleanup, removal of redundant condition checks. Thanks to @pavlik4k via \#273
* Code cleanup, removal of deprecated `alias_method_chain`. Thanks to @codeodor via \#271

## Changes in 0.13.0

### New Features

* Addition of the :batch_size option to control the number of rows to insert per INSERT
  statement. The default is the total number of records being inserted, so there is a
  single INSERT statement. Thanks to @jkowens via \#245
* Addition of `import!`, which will raise an exception if a validation fails. It will
  fail fast. Thanks to @jkowens via \#246

### Fixes

* Fixed an issue with recursive import when utilizing the `:on_duplicate_key_update`
  option. The `:on_duplicate_key_update` option only applies to parent models at this
  time. Thanks to @yuri-karpovich for reporting and @jkowens for fixing via \#249

### Misc

* Refactoring of fetching and assigning attributes. Thanks to @jkowens via \#259
* Lots of code cleanup and addition of the RuboCop linter. Thanks to @sferik via \#256
  and \#250
* Resolving errors with the test suite when running against ActiveRecord 4.0 and 4.1.
  Thanks to @jkowens via \#262
* Cleaning up the TravisCI settings and packages. Thanks to @sferik via \#258 and \#251

## Changes in 0.12.0

### New Features

* PostgreSQL UPSERT support has been added. Thanks @jkowens via \#218

### Fixes

* has_one and has_many associations will now be recursively imported regardless of
  :autosave being set. Thanks @sferik, @jkowens via \#243, \#234
* Fixing an issue with enum column support for Rails > 4.1. Thanks @aquajach via \#235

### Removals

* Support for em-synchrony has been removed since it appears the project has been
  abandoned. Thanks @sferik, @zdennis via \#239
* Support for the mysql gem/adapter has been removed since it has officially been
  abandoned. Use the mysql2 gem/adapter instead. Thanks @sferik, @zdennis via \#239

### Misc

* Cleaned up TravisCI output and removed deprecation warnings. Thanks @jkowens,
  @zdennis via \#242

## Changes before 0.12.0

> Never look back. What's gone is now history.
> But in the process make memory of events to help you understand what will help you
> to make your dream a true story. Mistakes of the past are lessons, success of the
> past is inspiration. – Dr. Anil Kr Sinha